Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | How To Mfa Number Match | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md | description: Learn how to use number matching in MFA notifications Previously updated : 01/31/2023 Last updated : 02/03/2023 -+ # Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events. GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationM ### When will my tenant see number matching if I don't use the Azure portal or Graph API to roll out the change? -Number match will be enabled for all users of Microsoft Authenticator after February 27, 2023. Relevant services will begin deploying these changes after February 27, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all your users, we highly recommend you use the Azure portal or Graph API to roll out number match for all Microsoft Authenticator users. +Number match will be enabled for all users of Microsoft Authenticator push notifications after February 27, 2023. Relevant services will begin deploying these changes after February 27, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all your users, we highly recommend you use the Azure portal or Graph API to roll out number match for all Microsoft Authenticator users. ### Will the changes after February 27th, 2023, override number matching settings that are configured for a group in the Authentication methods policy? If the user has a different default authentication method, there won't be any ch Regardless of their default method, any user who is prompted to sign-in with Authenticator push notifications will see number match after February 27th, 2023. If the user is prompted for another method, they won't see any change. -### Will users who don't use number matching be able to perform MFA? --It depends on how the **Enable and Target** tab is configured. The scope for number match approvals will change under the **Configure** tab to include everyone, but it only applies for users and groups targeted on the **Enable and Target** tab for Push or Any. However, if Target on the **Enable and Target** tab is set to specific groups for Push or Any, and the user isn't a member of those groups, then they won't receive the number matching approvals once the change is implemented after February 27th, 2023 because they aren't a member of the groups defined on the **Enable and Target** tab for Push and/or Any. - ### Is number matching supported with MFA Server? No, number matching isn't enforced because it's not a supported feature for MFA Server, which is [deprecated](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454). |
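The Graph request quoted above is truncated in this change summary. As a hedged illustration only (not the article's authoritative procedure), the sketch below reads the tenant's Microsoft Authenticator configuration from the beta Authentication Methods Policy API; the full resource path and the `featureSettings` property name are assumptions based on the beta Graph surface, and a bearer token with a suitable Policy.Read.All permission is assumed.

```python
import requests

# Assumed full endpoint for the Microsoft Authenticator method configuration;
# the GET shown in the change details above is truncated at "authenticationM".
URL = ("https://graph.microsoft.com/beta/authenticationMethodsPolicy/"
       "authenticationMethodConfigurations/MicrosoftAuthenticator")

def get_authenticator_config(token: str) -> dict:
    """Read the tenant's Microsoft Authenticator policy configuration."""
    resp = requests.get(URL, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    config = get_authenticator_config("<access-token>")
    # featureSettings.numberMatchingRequiredState is assumed to hold the
    # number match rollout state ("enabled", "disabled", or "default").
    print(config.get("featureSettings", {}))
```

A PATCH to the same resource is how a tenant-wide rollout would typically be scripted; check the payload shape against the Graph beta reference before relying on it.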
active-directory | How To Add Remove User To Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-add-remove-user-to-group.md | This article describes how you can add or remove a new user for a group in Permi ## Add a user -1. Navigate to the [Microsoft Entra admin center](https://entr.microsoft.com/#home). +1. Navigate to the [Microsoft Entra admin center](https://entra.microsoft.com/#home). 1. From the Azure Active Directory tile, select **Go to Azure Active Directory**. 1. From the navigation pane, select the **Groups** drop-down menu, then **All groups**. 1. Select the group name for the group you want to add the user to. This article describes how you can add or remove a new user for a group in Permi ## Remove a user -1. Navigate to the Microsoft [Entra admin center](https://entr.microsoft.com/#home). +1. Navigate to the Microsoft [Entra admin center](https://entra.microsoft.com/#home). 1. From the Azure Active Directory tile, select **Go to Azure Active Directory**. 1. From the navigation pane, select the **Groups** drop-down menu, then **All groups**. 1. Select the group name for the group you want to remove the user from. |
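The steps above use the portal UI. As a hedged sketch of an automation alternative (not part of the original article), Microsoft Graph exposes group-membership endpoints; the group and user IDs below are placeholders, and an access token with GroupMember.ReadWrite.All is assumed.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def add_member(token: str, group_id: str, user_id: str) -> None:
    """Add a user to a group: POST /groups/{id}/members/$ref."""
    resp = requests.post(
        f"{GRAPH}/groups/{group_id}/members/$ref",
        headers={"Authorization": f"Bearer {token}"},
        json={"@odata.id": f"{GRAPH}/directoryObjects/{user_id}"},
    )
    resp.raise_for_status()

def remove_member(token: str, group_id: str, user_id: str) -> None:
    """Remove a user from a group: DELETE /groups/{id}/members/{id}/$ref."""
    resp = requests.delete(
        f"{GRAPH}/groups/{group_id}/members/{user_id}/$ref",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
```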
active-directory | Onboard Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md | -> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md). +> A *global administrator* or *root user* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md). ## Explanation |
active-directory | Howto Conditional Access Session Lifetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md | Sign-in frequency previously applied to only to the first factor authentication ### User sign-in frequency and device identities -On Azure AD joined, hybrid Azure AD joined, or Azure AD registered devices, unlocking the device or signing in interactively will satisfy the sign-in frequency policy. In the following two examples user sign-in frequency is set to 1 hour: +On Azure AD joined and hybrid Azure AD joined devices, unlocking the device or signing in interactively will only refresh the Primary Refresh Token (PRT) every 4 hours. The last refresh timestamp recorded for the PRT, compared with the current timestamp, must be within the time allotted in the SIF policy for the PRT to satisfy SIF and grant access to a PRT that has an existing MFA claim. On [Azure AD registered devices](/active-directory/devices/concept-azure-ad-register), unlock/sign-in would not satisfy the SIF policy because the user is not accessing an Azure AD registered device via an Azure AD account. However, the [Azure AD WAM](/azure/active-directory/develop/scenario-desktop-acquire-token-wam) plugin can refresh a PRT during native application authentication using WAM. -Example 1: +Note: The timestamp captured from user log-in is not necessarily the same as the last recorded timestamp of PRT refresh because of the 4-hour refresh cycle. The case when it is the same is when a PRT has expired and a user log-in refreshes it for 4 hours. In the following examples, assume the SIF policy is set to 1 hour and the PRT is refreshed at 00:00. ++Example 1: *when you continue to work on the same doc in SPO for an hour* - At 00:00, a user signs in to their Windows 10 Azure AD joined device and starts work on a document stored on SharePoint Online. - The user continues working on the same document on their device for an hour. - At 01:00, the user is prompted to sign in again based on the sign-in frequency requirement in the Conditional Access policy configured by their administrator. -Example 2: +Example 2: *when pausing work with a background task running in the browser, then interacting again after the SIF policy time has passed* -- At 00:00, a user signs in to their Windows 10 Azure AD joined device and starts work on a document stored on SharePoint Online.+- At 00:00, a user signs in to their Windows 10 Azure AD joined device and starts to upload a document to SharePoint Online. +- At 00:10, the user gets up and takes a break locking their device. The background upload continues to SharePoint Online. +- At 02:45, the user returns from their break and unlocks the device. The background upload shows completion. +- At 02:45, the user is prompted to sign in when they interact again based on the sign-in frequency requirement in the Conditional Access policy configured by their administrator since the last sign-in happened at 00:00. ++If the client app (under activity details) is a Browser, we defer sign-in frequency enforcement of events/policies on background services until the next user interaction. ++Example 3: *with 4-hour refresh cycle of primary refresh token from unlock* ++Scenario 1 - User returns within cycle ++- At 00:00, a user signs into their Windows 10 Azure AD joined device and starts work on a document stored on SharePoint Online. - At 00:30, the user gets up and takes a break locking their device.
- At 00:45, the user returns from their break and unlocks the device.-- At 01:45, the user is prompted to sign in again based on the sign-in frequency requirement in the Conditional Access policy configured by their administrator since the last sign-in happened at 00:45.+- At 01:00, the user is prompted to sign in again based on the sign-in frequency requirement in the Conditional Access policy configured by their administrator, 1 hour after the initial sign-in. -Example 3: If the client app (under activity details) is a Browser, we defer sign in frequency enforcement of events/policies on background services until the next user interaction. +Scenario 2 - User returns outside cycle -- At 00:00, a user signs in to their Windows 10 Azure AD joined device and starts to upload a document to SharePoint Online.-- At 00:10, the user gets up and takes a break locking their device. The background upload continues to SharePoint Online. -- At 02:45, the user returns from their break and unlocks the device. The background upload shows completion. -- At 02:45, the user is prompted to sign in when they interact again based on the sign-in frequency requirement in the Conditional Access policy configured by their administrator since the last sign-in happened at 00:00. +- At 00:00, a user signs into their Windows 10 Azure AD joined device and starts work on a document stored on SharePoint Online. +- At 00:30, the user gets up and takes a break locking their device. +- At 04:45, the user returns from their break and unlocks the device. +- At 05:45, the user is prompted to sign in again based on the sign-in frequency requirement in the Conditional Access policy configured by their administrator, 1 hour after the PRT was refreshed at 04:45 (over 4 hours after the initial sign-in at 00:00). ### Require reauthentication every time |
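The examples above reduce to a single comparison: the PRT satisfies the sign-in frequency (SIF) policy while its last recorded refresh falls inside the policy window, and unlock refreshes the PRT at most every 4 hours. The sketch below is an illustrative model of that logic, not Microsoft's implementation; all names are invented for the example.

```python
from datetime import datetime, timedelta

SIF_WINDOW = timedelta(hours=1)          # sign-in frequency from the CA policy
PRT_REFRESH_CYCLE = timedelta(hours=4)   # unlock refreshes the PRT at most this often

def satisfies_sif(last_prt_refresh: datetime, now: datetime) -> bool:
    """Access is granted silently only while the PRT's last refresh
    is inside the sign-in frequency window."""
    return now - last_prt_refresh <= SIF_WINDOW

# Scenario 2 above: sign-in (and PRT refresh) at 00:00, unlock at 04:45.
signed_in = datetime(2023, 2, 3, 0, 0)
unlock = datetime(2023, 2, 3, 4, 45)
# More than 4 hours have passed, so the unlock refreshes the PRT at 04:45;
# the next sign-in prompt then lands one hour later, at 05:45.
assert unlock - signed_in > PRT_REFRESH_CYCLE
print(satisfies_sif(unlock, datetime(2023, 2, 3, 5, 44)))  # True  (05:44)
print(satisfies_sif(unlock, datetime(2023, 2, 3, 5, 46)))  # False (05:46 -> prompt)
```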
active-directory | Licensing Service Plan Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic - **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]->This information last updated on December 5th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv). +>This information last updated on February 3rd, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv). ><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Microsoft 365 Audio Conferencing | MCOMEETADV | 0c266dff-15dd-4b49-8397-2bb16070ed52 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40) | | Azure Active Directory Basic | AAD_BASIC | 2b9c8e7c-319c-43a2-a2a0-48c5c6161de7 | AAD_BASIC (c4da7f8a-5ee2-4c99-a7e1-87d2df57f6fe) | MICROSOFT AZURE ACTIVE DIRECTORY BASIC (c4da7f8a-5ee2-4c99-a7e1-87d2df57f6fe) | | Azure Active Directory Premium P1 | AAD_PREMIUM | 078d2b04-f1bd-4111-bbd4-b4b1b354cef4 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0) |+| Azure Active Directory Premium P1 for faculty | AAD_PREMIUM_FACULTY | 30fc3c36-5a95-4956-ba57-c09c2a600bb9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9) | | Azure Active Directory Premium P2 | AAD_PREMIUM_P2 | 84a661c4-e949-4bd2-a560-ed7766fcaf2b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION 
(113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0) | | Azure Information Protection Plan 1 | RIGHTSMANAGEMENT | c52ea49f-fe5d-4e95-93ba-1de91d380f89 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) | | Business Apps (free) | SMB_APPS | 90d8b3f8-712e-4f7b-aa1e-62e7ae6cbe96 | DYN365BC_MS_INVOICING (39b5c996-467e-4e60-bd62-46066f572726)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) | Microsoft Invoicing (39b5c996-467e-4e60-bd62-46066f572726)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Enterprise Mobility + Security G5 GCC | EMSPREMIUM_GOV | 8a180c2b-f4cf-4d44-897c-3d32acc4a60b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>RMS_S_ENTERPRISE) (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Exchange Enterprise CAL Services (EOP, DLP) | EOP_ENTERPRISE_PREMIUM | e8ecdf70-47a8-4d39-9d15-093624b7f640 | EOP_ENTERPRISE_PREMIUM (75badc48-628e-4446-8460-41344d73abd6)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) | Exchange Enterprise CAL Services (EOP, DLP) (75badc48-628e-4446-8460-41344d73abd6)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) | | Exchange Online (Plan 1) | EXCHANGESTANDARD | 4b9405b0-7788-4568-add1-99614e613b69 | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c) | Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c) |+| Exchange Online (Plan 1) for Students | EXCHANGESTANDARD_STUDENT | ad2fe44a-915d-4e2b-ade1-6766d50a9d9c | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>RMS_S_BASIC 
(31cf2cfc-6b0d-4adc-a336-88b724ed8122) | Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | +| Exchange Online (Plan 1) for Alumni with Yammer | EXCHANGESTANDARD_ALUMNI | aa0f9eb7-eff2-4943-8424-226fb137fcad | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Exchange Online (PLAN 2) | EXCHANGEENTERPRISE | 19ec0d23-8335-4cbd-94ac-6050e30712fa | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0) | EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0) | | Exchange Online Archiving for Exchange Online | EXCHANGEARCHIVE_ADDON | ee02fd1b-340e-4a4b-b355-4a514e4c8943 | EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793) | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE (176a09a6-7ec5-4039-ac02-b2791c6ba793) | | Exchange Online Archiving for Exchange Server | EXCHANGEARCHIVE | 90b5e015-709a-4b8b-b08e-3200f994494c | EXCHANGE_S_ARCHIVE (da040e0a-b393-4bea-bb76-928b3fa1cf5a) | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE SERVER (da040e0a-b393-4bea-bb76-928b3fa1cf5a) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Exchange Online POP | EXCHANGETELCO | cb0a98a8-11bc-494c-83d9-c1b1ac65327e | EXCHANGE_B_STANDARD (90927877-dcff-4af6-b346-2332c0b15bb7) | EXCHANGE ONLINE POP (90927877-dcff-4af6-b346-2332c0b15bb7) | | Exchange Online Protection | EOP_ENTERPRISE | 45a2423b-e884-448d-a831-d9e139c52d2f | EOP_ENTERPRISE (326e2b78-9d27-42c9-8509-46c827743a17) | Exchange Online Protection (326e2b78-9d27-42c9-8509-46c827743a17) | | Intune | INTUNE_A | 061f9ace-7d42-4136-88ac-31dc755f143f | INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |+| Intune for Education | INTUNE_EDU | d9d89b70-a645-4c24-b041-8d3cb1884ec7 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>AAD_EDU (3a3976ce-de18-4a87-a78e-5e9245e252df)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Azure Active Directory for Education (3a3976ce-de18-4a87-a78e-5e9245e252df)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Windows Store Service (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | | Microsoft Dynamics AX7 User Trial | AX7_USER_TRIAL | fcecd1f9-a91e-488d-a918-a96cdb6ce2b0 | ERP_TRIAL_INSTANCE (e2f705fd-2468-4090-8c58-fad6e6b1e724)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Operations Trial Environment (e2f705fd-2468-4090-8c58-fad6e6b1e724)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Microsoft Azure Multi-Factor Authentication | MFA_STANDALONE | cb2020b1-d8f6-41c0-9acd-8ff3d6d7831b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | Exchange Foundation 
(113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0) | | Microsoft Defender for Office 365 (Plan 2) | THREAT_INTELLIGENCE | 3dd6cf57-d688-4eed-ba52-9e40b5468c3e | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Power BI Pro | POWER_BI_PRO | f8a1db68-be16-40ed-86d5-cb42ce701560 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro CE | POWER_BI_PRO_CE | 420af87e-8177-4146-a780-3786adaffbca | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro Dept | POWER_BI_PRO_DEPT | 3a6a908c-09c5-406a-8170-8ebb63c42882 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) |+| Power BI Pro for Faculty | POWER_BI_PRO_FACULTY | de5f128b-46d7-4cfc-b915-a89ba060ea56 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro for GCC | POWERBI_PRO_GOV | f0612879-44ea-47fb-baf0-3d76d9235576 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76) | | Power Virtual Agent | VIRTUAL_AGENT_BASE | e4e55366-9635-46f4-a907-fc8c3b5ec81f | CDS_VIRTUAL_AGENT_BASE (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>FLOW_VIRTUAL_AGENT_BASE (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>VIRTUAL_AGENT_BASE (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | Common Data Service for Virtual Agent Base (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>Power Automate for Virtual Agent (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>Virtual Agent Base (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | | Power Virtual Agents Viral Trial | CCIBOTS_PRIVPREV_VIRAL | 606b54a9-78d8-4298-ad8b-df6ef4481c80 | DYN365_CDS_CCI_BOTS (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>FLOW_CCI_BOTS (5d798708-6473-48ad-9776-3acc301c40af) | Common Data Service for CCI Bots (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Flow for CCI Bots (5d798708-6473-48ad-9776-3acc301c40af) | | Project for Office 365 | PROJECTCLIENT | a10d5e58-74da-4312-95c8-76be4e5b75a0 | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | PROJECT ONLINE DESKTOP CLIENT 
(fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | | Project Online Essentials | PROJECTESSENTIALS | 776df282-9fc0-4862-99e2-70e561b9909e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |+| Project Online Essentials for Faculty | PROJECTESSENTIALS_FACULTY | e433b246-63e7-4d0b-9efa-7940fa3264d6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | Project Online Essentials for GCC | PROJECTESSENTIALS_GOV | ca1a159a-f09e-42b8-bb82-cb6420f54c8e | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>PROJECT_ESSENTIALS_GOV (fdcb7064-f45c-46fa-b056-7e0e9fdf4bf3)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Project Online Essentials for Government (fdcb7064-f45c-46fa-b056-7e0e9fdf4bf3)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692) | | Project Online Premium | PROJECTPREMIUM | 09015f9f-377f-4538-bbb5-f75ceb09358a | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | Project Online Premium without Project Client | PROJECTONLINE_PLAN_1 | 2db84718-652c-47a7-860c-f10d8abbdae3 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) 
(5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Project Plan 1 (for Department) | PROJECT_PLAN1_DEPT | 84cd610f-a3f8-4beb-84ab-d9d2c902c6c9 | DYN365_CDS_FOR_PROJECT_P1 (a6f677b3-62a6-4644-93e7-2a85d240845e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power_Automate_For_Project_P1 (00283e6b-2bd8-440f-a2d5-87358e4c89a1)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>PROJECT_P1 (4a12c688-56c6-461a-87b1-30d6f32136f9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1) | Common Data Service for Project P1 (a6f677b3-62a6-4644-93e7-2a85d240845e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate for Project P1 (00283e6b-2bd8-440f-a2d5-87358e4c89a1)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>Project P1 (4a12c688-56c6-461a-87b1-30d6f32136f9)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1) | | Project Plan 3 | PROJECTPROFESSIONAL | 53818b1b-4a27-454b-8896-0dba576410e6 | DYN365_CDS_PROJECT (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_FOR_PROJECT (fa200448-008c-4acb-abd4-ea106ed2199d)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>PROJECT_PROFESSIONAL (818523f5-016b-4355-9be8-ed6944946ea7)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Common Data Service for Project (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Project (fa200448-008c-4acb-abd4-ea106ed2199d)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>Project P3 (818523f5-016b-4355-9be8-ed6944946ea7)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) | | Project Plan 3 (for Department) | PROJECT_PLAN3_DEPT | 46102f44-d912-47e7-b0ca-1bd7b70ada3b | DYN365_CDS_PROJECT (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_FOR_PROJECT (fa200448-008c-4acb-abd4-ea106ed2199d)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>PROJECT_PROFESSIONAL (818523f5-016b-4355-9be8-ed6944946ea7)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Common Data Service for Project (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Project (fa200448-008c-4acb-abd4-ea106ed2199d)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>Project P3 (818523f5-016b-4355-9be8-ed6944946ea7)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |+| Project Plan 3 for Faculty | PROJECTPROFESSIONAL_FACULTY | 46974aed-363e-423c-9e6a-951037cec495 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC_EDU 
(e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT_EDU (664a2fed-6c7a-468e-af35-d61740f0ec90)<br/>PROJECT_PROFESSIONAL_FACULTY (22572403-045f-432b-a660-af949c0a77b5)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>DYN365_CDS_PROJECT (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>FLOW_FOR_PROJECT (fa200448-008c-4acb-abd4-ea106ed2199d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service for Education (664a2fed-6c7a-468e-af35-d61740f0ec90)<br/>Project P3 for Faculty (22572403-045f-432b-a660-af949c0a77b5)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Common Data Service for Project (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>Power Automate for Project (fa200448-008c-4acb-abd4-ea106ed2199d) | | Project Plan 3 for GCC | PROJECTPROFESSIONAL_GOV | 074c6829-b3a0-430a-ba3d-aca365e57065 | SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>PROJECT_CLIENT_SUBSCRIPTION_GOV (45c6831b-ad74-4c7f-bd03-7c2b3fa39067)<br/>SHAREPOINT_PROJECT_GOV (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692) | Office for the web (Government) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Project Online Desktop Client for Government (45c6831b- ad74-4c7f-bd03-7c2b3fa39067)<br/>Project Online Service for Government (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692) | | Project Plan 5 for GCC | PROJECTPREMIUM_GOV | f2230877-72be-4fec-b1ba-7156d6f75bd6 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>PROJECT_CLIENT_SUBSCRIPTION_GOV (45c6831b-ad74-4c7f-bd03-7c2b3fa39067)<br/>SHAREPOINT_PROJECT_GOV (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Office for the web (Government) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Project Online Desktop Client for Government (45c6831b-ad74-4c7f-bd03-7c2b3fa39067)<br/>Project Online Service for Government (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692) | | Rights Management Adhoc | RIGHTSMANAGEMENT_ADHOC | 8c4ce438-32a7-4ac5-91a6-e22ae08d9c8b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ADHOC (7a39d7dd-e456-4e09-842a-0204ee08187b) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Rights Management Adhoc (7a39d7dd-e456-4e09-842a-0204ee08187b) | |
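The note above this table links a downloadable CSV that carries the same product and service plan identifiers. As an illustrative sketch (reading column names from the file itself rather than assuming them), the following looks up every row that mentions a given SKU string ID:

```python
import csv
import io
import requests

# CSV download link quoted in the note at the top of the licensing table.
CSV_URL = ("https://download.microsoft.com/download/e/3/e/"
           "e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/"
           "Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv")

def rows_for_string_id(string_id: str) -> list[dict]:
    """Return every CSV row that mentions the SKU string ID (for example
    'AAD_PREMIUM_P2'); header names are read from the file, not assumed."""
    text = requests.get(CSV_URL).text
    reader = csv.DictReader(io.StringIO(text))
    return [row for row in reader if string_id in row.values()]

for row in rows_for_string_id("AAD_PREMIUM_P2"):
    print(row)
```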
active-directory | 1 Secure Access Posture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/1-secure-access-posture.md | Title: Determine your security posture for external collaboration with Azure Active Directory -description: Before you can execute an external access security plan, you must determine what you are trying to achieve. + Title: Determine your security posture for external access with Azure Active Directory +description: Learn about governance of external access and assessing collaboration needs, by scenario -+ Previously updated : 08/19/2022 Last updated : 02/03/2023 -# Determine your security posture for external access +# Determine your security posture for external access with Azure Active Directory -As you consider governing external access, you'll need to assess the security and collaboration needs for your organization overall, and within each scenario. At the organizational level, consider the amount of control you need your IT team to have over day-to-day collaboration. Organizations in regulated industries may require more IT control. For example, a defense contractor may be required to positively identify and document each external user, their access, and the removal of access. This requirement may be on all access, or on specific scenarios or workloads. On the other end of the spectrum, a consulting firm may generally allow end users to determine the external users they need to collaborate with, within certain IT guard rails. +As you consider the governance of external access, assess your organization's security and collaboration needs, by scenario. You can start with the level of control the IT team has over the day-to-day collaboration of end users. Organizations in highly regulated industries might require more IT team control. For example, defense contractors can have a requirement to positively identify and document external users, their access, and access removal: all access, scenario-based, or workloads. Consulting agencies can use certain features to allow end users to determine the external users they collaborate with. -> [!NOTE] -> Overly tight control on collaboration can lead to higher IT budgets, reduced productivity, and delayed business outcomes. When official collaboration channels are perceived as too onerous, end users tend to go around IT provided systems to get their jobs done, by for example emailing unsecured documents. --## Think in terms of scenarios + > [!NOTE] + > A high degree of control over collaboration can lead to higher IT budgets, reduced productivity, and delayed business outcomes. When official collaboration channels are perceived as onerous, end users tend to evade official channels. An example is end users sending unsecured documents by email. -In many cases IT can delegate partner access, at least in some scenarios, while providing guard rails for security. The IT guard rails can help ensure that intellectual property stays secure, while empowering employees to collaborate with partners to get work done. +## Scenario-based planning -As you consider the scenarios within your organization, assess the need for employee versus business partner access to resources. A bank may have compliance needs that restrict access to certain resources, like user account information, to a small group of internal employees. Conversely, the same bank may enable delegated access for partners working on a marketing campaign.
+IT teams can delegate partner access to empower employees to collaborate with partners. This delegation can occur while maintaining sufficient security to protect intellectual property. +Compile and assess your organization's scenarios to help evaluate employee versus business partner access to resources. Financial institutions might have compliance standards that restrict employee access to resources such as account information. Conversely, the same institutions can enable delegated partner access for projects such as marketing campaigns. -In each scenario, consider -* the sensitivity of the information at risk +### Scenario considerations -* whether you need to restrict what partners can see about other users +Use the following list to help measure the level of access control. -* the cost of a breach vs the weight of centralized control and end-user friction +* Information sensitivity, and associated risk of its exposure +* Partner access to information about other end users +* The cost of a breach versus the overhead of centralized control and end-user friction - You may also start with centrally managed controls to meet compliance targets and delegate control to end users over time. All access management models may simultaneously coexist within an organization. +Organizations can start with highly managed controls to meet compliance targets, and then delegate some control to end users, over time. There can be simultaneous access-management models in an organization. -The use of [partner managed credentials](../external-identities/what-is-b2b.md) provides your organization with an essential signal that terminates access to your resources once the external user has lost access to the resources of their own company. +> [!NOTE] +> Partner-managed credentials are a method to signal the termination of access to resources, when an external user loses access to resources in their own company. Learn more: [B2B collaboration overview](../external-identities/what-is-b2b.md) -## Goals of securing external access +## External-access security goals -The goals of IT-governed and delegated access differ. +The goals of IT-governed and delegated access differ. The primary goals of IT-governed access are: -**The primary goals of IT-governed access are to:** +* Meet governance, regulatory, and compliance (GRC) targets +* High level of control over partner access to information about end users, groups, and other partners -* Meet governance, regulatory, and compliance (GRC) targets. +The primary goals of delegating access are: -* Tightly control partner access and what partners can see about member users, groups, and other partners. +* Enable business owners to determine collaboration partners, with security constraints +* Enable partners to request access, based on rules defined by business owners -**The primary goals of delegating access are to:** +### Common goals -* Enable business owners to govern who they collaborate with, within IT constraints. +#### Control access to applications, data, and content -* Enable business partners to request access based on rules defined by business owners. +Levels of control can be accomplished through various methods, depending on your version of Azure AD and Microsoft 365. -Whichever you enact for your organization and scenarios you'll need to: +* [Azure AD plans and pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) +* [Microsoft 365](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans).
-* **Control access to applications, data, and content**. This can be accomplished through a variety of methods, depending on your versions of [Azure AD](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) and [Microsoft 365](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans). +#### Reduce attack surface -* **Reduce the attack surface**. [Privileged identity management](../privileged-identity-management/pim-configure.md), [data loss prevention (DLP),](/exchange/security-and-compliance/data-loss-prevention/data-loss-prevention) and [encryption capabilities](/exchange/security-and-compliance/data-loss-prevention/data-loss-prevention) reduce the attack surface. +* [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md) - manage, control, and monitor access to resources in Azure AD, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune +* [Data loss prevention in Exchange Server](/exchange/policy-and-compliance/data-loss-prevention/data-loss-prevention?view=exchserver-2019&preserve-view=true) -* **Regularly review activity and audit log to confirm compliance**. IT can delegate access decisions to business owners through entitlement management while access reviews provide a way to periodically confirm continued access. Automated data classification with sensitivity labels helps to automate encryption of sensitive content making it easy for employee end users to comply. +#### Confirm compliance with activity and audit log reviews -## Next steps +IT teams can delegate access decisions to business owners through entitlement management, while access reviews help confirm continued access. You can use automated data classification with sensitivity labels to automate the encryption of sensitive content, easing compliance for end users. -See the following articles on securing external access to resources. We recommend you take the actions in the listed order. +## Next steps -1. [Determine your security posture for external access](1-secure-access-posture.md) (You are here.) +See the following articles to learn more about securing external access to resources. We recommend you follow the listed order. -2. [Discover your current state](2-secure-access-current-state.md) +1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) (You're here) -3. [Create a governance plan](3-secure-access-plan.md) +2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) -4. [Use groups for security](4-secure-access-groups.md) +3. [Create a security plan for external access](3-secure-access-plan.md) -5. [Transition to Azure AD B2B](5-secure-access-b2b.md) +4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) -6. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md) +5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) -7. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) +6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) -8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md) +7. [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md) -9. 
[Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md) +8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) +9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure AD](9-secure-access-teams-sharepoint.md) |
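The plan above relies on entitlement management and periodic access reviews to confirm continued access. As a hedged sketch (not from the article), the Microsoft Graph v1.0 access reviews API can list the review definitions an IT team would audit; a token with AccessReview.Read.All is assumed.

```python
import requests

def list_access_reviews(token: str) -> list[dict]:
    """List access review schedule definitions:
    GET /identityGovernance/accessReviews/definitions."""
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

for review in list_access_reviews("<access-token>"):
    print(review.get("displayName"), review.get("status"))
```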
active-directory | 9 Secure Access Teams Sharepoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md | Title: Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure Active Directory -description: Secure access to Microsoft 365 services as a part of your overall external access security. +description: Secure access to Microsoft 365 services as a part of your external access security plan -+ Previously updated : 08/20/2022 Last updated : 02/02/2023 -# Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business +# Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure Active Directory -Microsoft Teams, SharePoint, and OneDrive for Business are three of the most used ways to collaborate and share content with external users. If the "approved" methods are too restrictive, users will go outside of approved methods by emailing content or setting up insecure external processes and applications, such as a personal DropBox or OneDrive. Your goal is to balance your security needs with ease of collaboration. +Use this article to determine and configure your organization's external collaboration using Microsoft Teams, OneDrive for Business, and SharePoint. A common challenge is balancing security and ease of collaboration for end users and external users. If an approved collaboration method is perceived as restrictive and onerous, end users evade the approved method. End users might email unsecured content, or set up external processes and applications, such as a personal DropBox or OneDrive. -This article guides you to determine and configure external collaboration to meet your business goals using Microsoft Teams and SharePoint. +## External Identities settings and Azure Active Directory -## Governance begins in Azure Active Directory +Sharing in Microsoft 365 is partially governed by the **External Identities, External collaboration** settings in Azure Active Directory (Azure AD). If external sharing is disabled or restricted in Azure AD, it overrides sharing settings configured in Microsoft 365. An exception is if Azure AD B2B integration isn't enabled. You can configure SharePoint and OneDrive to support ad-hoc sharing via one-time passcode (OTP). The following screenshot shows the External Identities, External collaboration settings dialog. -Sharing in Microsoft 365 is in part governed by the [External Identities | External collaboration settings](https://aad.portal.azure.com/) in Azure Active Directory (Azure AD). If external sharing is disabled or restricted in Azure AD, it overrides any sharing settings configured in Microsoft 365. An exception to this is that if Azure AD B2B integration isn't enabled, SharePoint and OneDrive can be configured to support ad-hoc sharing via one-time passcodes (OTP). +Learn more: -### Guest user access --There are three choices for guest user access, which controls what guest users can see after being invited. --To prevent guest users from seeing details of other guest users, and being able to enumerate group membership, choose Guest users have limited access to properties and memberships of directory objects. --### Guest invite settings --These settings determine who can invite guests and how those guests can be invited. These settings are only enabled if the integration with B2B is enabled. --We recommend enabling administrators and users in the guest inviter role can invite.
This setting allows controlled collaboration processes to be set up, as in the following example. --* Team owner submits a ticket to be assigned the Guest inviter role, and +* [Azure Active Directory admin center](https://aad.portal.azure.com/) +* [External Identities in Azure AD](../external-identities/external-identities-overview.md) - * Becomes responsible for all guest invitations. +### Guest user access - * Agrees not to directly add users to the underlying SharePoint +Guest users are invited to have access to resources. - * Is accountable to perform regular access reviews, and revoke access as appropriate. +1. Go to the Azure Active Directory admin center. +2. Select **All Services**. +3. Under **Categories**, select **Identity**. +4. From the list, select **External Identities**. +5. Select **External collaboration settings**. +6. Find the **Guest user access** option. -* Central IT does the following +To prevent guest-user access to other guest-user details, and to prevent enumeration of group membership, select **Guest users have limited access to properties and memberships of directory objects**. - * Enables external sharing by granting the requested role upon training completion. +### Guest invite settings - * Assigns Azure AD P2 license to the Microsoft 365 group owner to enable access reviews. - * Creates a Microsoft 365 group access review. +Guest invite settings determine who invites guests and how guests are invited. The settings are enabled if the B2B integration is enabled. It's recommended that administrators and users, in the Guest Inviter role, can invite. This setting allows setup of controlled collaboration processes. For example: - * Confirms that access reviews are occurring. +* Team owner submits a ticket requesting assignment to the Guest Inviter role: + * Responsible for guest invitations + * Agrees to not add users to SharePoint + * Performs regular access reviews + * Revokes access as needed - * Removes users directly added to the underlying SharePoint. +* The IT team: + * After training is complete, the IT team grants the Guest Inviter role + * To enable access reviews, assigns Azure AD P2 license to the Microsoft 365 group owner + * Creates a Microsoft 365 group access review + * Confirms access reviews occur + * Removes users added to SharePoint - Set **Enable Email One-time Passcodes for guests (Preview) and Enable up guest self-service sign via user flows** to **yes**. This setting takes advantage of the integration with Azure AD External collaboration settings. +1. Select **Email one-time passcodes for guests**. +2. For **Enable guest self-service sign up via user flows**, select **Yes**. ### Collaboration restrictions -There are three choices under collaboration restrictions. Your business requirements dictate which you will choose. +For the Collaboration restrictions option, the organization's business requirements dictate the choice of invitation. -* **Allow invitations to be sent to any domain** means any user can be invited to collaborate. +* **Allow invitations to be sent to any domain** - any user can be invited +* **Deny invitations to the specified domains** - any user outside those domains can be invited +* **Allow invitations only to the specified domains** - any user outside those domains can't be invited -* **Deny invitations to the specified domains** means any user outside of those can be invited to collaborate. 
+## External users and guest users in Teams -* **Allow invitations only to the specified domains** means that any user outside of those specified domains cannot be invited. +Teams differentiates between external users (outside your organization) and guest users (guest accounts). You can manage collaboration settings in the [Teams Admin portal](https://admin.teams.microsoft.com/company-wide-settings/external-communications) under Org-wide settings. Authorized account credentials are required to sign in to the Teams Admin portal. -## Govern access in Teams +* **External Access** - Teams allows external access by default. The organization can communicate with all external domains + * Use External Access setting to restrict or allow domains +* **Guest Access** - manage guest access in Teams -[Teams differentiates between external users (anyone outside your organization) and guest users (those with guest accounts)](/microsoftteams/communicate-with-users-from-other-organizations?WT.mc_id=TeamsAdminCenterCSH%e2%80%8b)). You manage collaboration setting in the [Teams Admin portal](https://admin.teams.microsoft.com/company-wide-settings/external-communications) under Org-wide settings. +Learn more: [Use guest access and external access to collaborate with people outside your organization](/microsoftteams/communicate-with-users-from-other-organizations). > [!NOTE]-> External identities collaboration settings in Azure Active Directory control the effective permissions. You can increase restrictions in Teams, but not decrease them from what is set in Azure AD. --* **External Access settings**. By default, Teams allows external access, which means that organization can communicate with all external domains. If you want to restrict or allow specific domains just for Teams, you can do so here. --* **Guest Access**. Guest access controls what guest users can do in teams. --To learn more about managing external access in Teams, see the following resources. --* [Manage external access in Microsoft Teams](/microsoftteams/manage-external-access) --* [Microsoft 365 identity models and Azure Active Directory](/microsoft-365/enterprise/about-microsoft-365-identity) +> The External Identities collaboration feature in Azure AD controls permissions. You can increase restrictions in Teams, but restrictions can't be lower than Azure AD settings. -* [Identity models and authentication for Microsoft Teams](/MicrosoftTeams/identify-models-authentication) +Learn more: -* [Sensitivity labels for Microsoft Teams](/MicrosoftTeams/sensitivity-labels) +* [Manage external meetings and chat in Microsoft Teams](/microsoftteams/manage-external-access) +* [Microsoft 365 identity models and Azure AD](/microsoft-365/enterprise/about-microsoft-365-identity) +* [Identity models and authentication for Microsoft Teams](/microsoftteams/identify-models-authentication) +* [Sensitivity labels for Microsoft Teams](/microsoftteams/sensitivity-labels) ## Govern access in SharePoint and OneDrive -SharePoint administrators have many settings available for collaboration. Organization-wide settings are managed from the SharePoint admin center. Settings can be adjusted for each SharePoint site. We recommend that your organization-wide settings be at your minimum necessary security levels, and that you increase security on specific sites as needed. For example, for a high-risk project, you may want to restrict users to certain domains, and disable the ability of members to invite guests.
+SharePoint administrators can find organization-wide settings in the SharePoint admin center. It's recommended that your organization-wide settings are the minimum security levels. Increase security on some sites, as needed. For example, for a high-risk project, restrict users to certain domains, and disable members from inviting guests. -### Integrating SharePoint and One-drive with Azure AD B2B +Learn more: +* [SharePoint admin center](https://microsoft-admin.sharepoint.com) - access permissions are required +* [Get started with the SharePoint admin center](/sharepoint/get-started-new-admin-center) +* [External sharing overview](/sharepoint/external-sharing-overview) -As a part of your overall strategy for governing external collaboration, we recommend that you [enable the Preview of SharePoint and OneDrive integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration-preview) . +### Integrating SharePoint and OneDrive with Azure AD B2B -Azure AD B2B provides authentication and management of guest users. With SharePoint and OneDrive integration, [Azure AD B2B one-time passcodes](../external-identities/one-time-passcode.md) are used for external sharing of files, folders, list items, document libraries, and sites. This feature provides an upgraded experience from the existing [secure external sharing recipient experience](/sharepoint/what-s-new-in-sharing-in-targeted-release). +As a part of your strategy to govern external collaboration, it's recommended you enable SharePoint and OneDrive integration with Azure AD B2B. Azure AD B2B has guest-user authentication and management. With SharePoint and OneDrive integration, use one-time passcodes for external sharing of files, folders, list items, document libraries, and sites. -> [!NOTE] -> If you enable the preview for Azure AD B2B integration, then SharePoint and OneDrive sharing is subject to the Azure AD organizational relationships settings, such as **Members can invite** and **Guests can invite**. --### Sharing policies --*External Sharing* can be set for both SharePoint and OneDrive. OneDrive restrictions can't be more permissive than the SharePoint settings. --SharePoint integration with Azure AD B2B changes how controls interact with accounts. --* **Anyone**. Not recommended -- * Regardless of integration status, enabling Anyone links means no Azure policies will be applied when this type of link is used. -- * In a scenario of governed collaboration, don't enable this functionality. - > [!NOTE] - > You may find a scenario where you need to enable this setting for a specific site, in which case you would enable it here, and set the greater restriction on individual sites. +Learn more: +* [Email one-time passcode authentication](../external-identities/one-time-passcode.md) +* [SharePoint and OneDrive integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration) +* [B2B collaboration overview](../external-identities/what-is-b2b.md) -* **New and existing guests**. Recommended if you have integration enabled. -- * **With Azure AD B2B integration** enabled, new and existing guests will have an Azure AD B2B guest account that can be managed with Azure AD policies. -- * **Without Azure AD B2B integration** enabled, new guests will not have an Azure AD B2B account created, and they cannot be managed from Azure AD. Whether existing guests have an Azure AD B2B account depends on how the guest was created. --* **Existing guests**. Recommended if you do not have integration enabled.
+> [!NOTE] +> If you enable Azure AD B2B integration, then SharePoint and OneDrive sharing is subject to the Azure AD organizational relationships settings, such as **Members can invite** and **Guests can invite**. - * With this enabled, users can only share with other users already in your directory. ### Sharing policies in SharePoint and OneDrive -* **Only people in your organization**. Not recommended when you need to collaborate with external users. +In the SharePoint admin center, you can use the External Sharing settings for SharePoint and OneDrive to help configure sharing policies. OneDrive restrictions can't be more permissive than SharePoint settings. - * Regardless of integration status, users will only be able to share with users in your organization. +Learn more: [External sharing overview](/sharepoint/external-sharing-overview) -* **Limit external sharing by domain**. By default SharePoint allows external access, which means that sharing is allowed with all external domains. If you want to restrict or allow specific domains just for SharePoint, you can do so here. +  -* **Allow only users in specific security groups to share externally**. This setting restricts who can share content in SharePoint and OneDrive, while the setting in Azure AD applies to all applications. Restricting who can share can be useful if you want to require your users to take a training about sharing securely, then at completion add them to an approved sharing security group. If this setting is selected, and users do not have a way to gain access to being an "approved sharer," they may instead find unapproved ways to share. +#### External sharing settings recommendations -* **Allow guests to share items they don't own**. We recommend leaving this disabled. +Use the guidance in this section when configuring external sharing. -* **People who use a verification code must reauthenticate after this many days (default is 30)**. We recommend enabling this setting. +* **Anyone** - Not recommended. If enabled, regardless of integration status, no Azure policies are applied for this link type. + * Don't enable this functionality for governed collaboration + * Use it for restrictions on individual sites +* **New and existing guests** - Recommended, if integration is enabled + * Azure AD B2B integration enabled: new and current guests have an Azure AD B2B guest account you can manage with Azure AD policies + * Azure AD B2B integration not enabled: new guests don't have an Azure AD B2B account, and can't be managed from Azure AD + * Whether existing guests have an Azure AD B2B account depends on how the guest was created +* **Existing guests** - Recommended, if you don't have integration enabled + * With this option enabled, users can share only with other users already in your directory +* **Only people in your organization** - Not recommended with external user collaboration + * Regardless of integration status, users can share only with users in your organization +* **Limit external sharing by domain** - By default, SharePoint allows external access. Sharing is allowed with external domains. + * Use this option to restrict or allow domains for SharePoint +* **Allow only users in specific security groups to share externally** - Use this setting to restrict who shares content in SharePoint and OneDrive. The setting in Azure AD applies to all applications. Use the restriction to direct users to training about secure sharing. Completion is the signal to add them to a sharing security group. 
If this setting is selected, and users can't become an approved sharer, they might find unapproved ways to share. +* **Allow guests to share items they don't own** - Not recommended. The guidance is to disable this feature. +* **People who use a verification code must reauthenticate after this many days (default is 30)** - Recommended ### Access controls -Access controls setting will affect all users in your organization. Given that you may not be able to control whether external users have compliant devices, we will not address those controls here. +Access control settings affect all users in your organization. Because you might not be able to control whether external users have compliant devices, the controls won't be addressed in this article. -* **Idle session sign-out**. We recommend that you enable this control, which allows you to warn and sign-out users on unmanaged devices after a period of inactivity. You can configure the period of inactivity and the warning. --* **Network location**. Setting this control means you can allow access only form IP addresses that your organization owns. In external collaboration scenarios, set this only if all of your external partners will access resources only form within your network, or via your VPN. +* **Idle session sign-out** - Recommended + * Use this option to warn and sign out users on unmanaged devices, after a period of inactivity + * You can configure the period of inactivity and the warning +* **Network location** - Set this control to allow access from IP addresses your organization owns. + * For external collaboration, set this control if your external partners access resources when in your network, or with your virtual private network (VPN). ### File and folder links -In the SharePoint admin center, you can also set how file and folder links are shared. You can also configure these setting for each site. +In the SharePoint admin center, you can set how file and folder links are shared. You can configure the setting for each site. -  +  -If you have enabled the integration with Azure AD B2B, sharing of files and folders with those outside of the organization will result in a B2B user being created when files and folder are shared. +With Azure AD B2B integration enabled, sharing files and folders with users outside the organization results in the creation of a B2B user. -We recommend setting the default link type to **Only people in your organization**, and default permissions to **Edit**. Doing so ensures that items are shared thoughtfully. You can then customize this setting for per-site default that meet specific collaboration needs. +1. For **Choose the type of link that's selected by default when users share files and folders in SharePoint and OneDrive**, select **Only people in your organization**. +2. For **Choose the permission that's selected by default for sharing links**, select **Edit**. -### Anyone links +You can customize this setting for a per-site default. -We do not recommend enabling anyone links. If you do, we recommend setting an expiration, and consider restricting them to view permissions. If you choose View only permissions for files or folders, users will not be able to change Anyone links to include edit privileges. +### Anyone links -To learn more about governing external access to SharePoint see the following: +Enabling Anyone links isn't recommended. If you enable it, set an expiration, and restrict users to view permissions. 
If you select View only permissions for files or folders, users can't change Anyone links to include edit privileges. -* [SharePoint external sharing overview](/sharepoint/external-sharing-overview) +Learn more: -* [SharePoint and OneDrive integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration-preview) +* [External sharing overview](/sharepoint/external-sharing-overview) +* [SharePoint and OneDrive integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration) -#### Next steps +## Next steps -See the following articles on securing external access to resources. We recommend you take the actions in the listed order. +See the following articles to learn more about securing external access to resources. We recommend you follow the listed order. -1. [Determine your security posture for external access](1-secure-access-posture.md) +1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) -2. [Discover your current state](2-secure-access-current-state.md) +2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) -3. [Create a governance plan](3-secure-access-plan.md) +3. [Create a security plan for external access](3-secure-access-plan.md) -4. [Use groups for security](4-secure-access-groups.md) +4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) -5. [Transition to Azure AD B2B](5-secure-access-b2b.md) +5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) -6. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md) +6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) -7. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) +7. [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md) -8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md) +8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) -9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md) (You are here.) +9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure AD](9-secure-access-teams-sharepoint.md) (You're here) |
active-directory | Service Accounts Computer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-computer.md | Title: Secure computer accounts | Azure Active Directory -description: A guide to helping secure on-premises computer accounts. + Title: Secure on-premises computer accounts with Active Directory +description: A guide to help secure on-premises computer accounts, or LocalSystem accounts, with Active Directory -+ Previously updated : 08/20/2022 Last updated : 02/03/2023 -# Secure on-premises computer accounts +# Secure on-premises computer accounts with Active Directory -A computer account, or LocalSystem account, is a built-in, highly privileged account with access to virtually all resources on the local computer. The account is not associated with any signed-on user account. Services run as LocalSystem access network resources by presenting the computer's credentials to remote servers in the format <domain_name>\\<computer_name>$. The computer account's predefined name is NT AUTHORITY\SYSTEM. You can use it to start a service and provide security context for that service. +A computer account, or LocalSystem account, is highly privileged with access to almost all resources on the local computer. The account isn't associated with signed-on user accounts. Services running as LocalSystem access network resources by presenting the computer's credentials to remote servers in the format `<domain_name>\\<computer_name>$`. The computer account's predefined name is `NT AUTHORITY\SYSTEM`. You can use it to start a service and provide the security context for that service. - +  ## Benefits of using a computer account -A computer account provides the following benefits: +A computer account has the following benefits: -* **Unrestricted local access**: The computer account provides complete access to the machine's local resources. +* **Unrestricted local access** - the computer account provides complete access to the machine's local resources +* **Automatic password management** - removes the need for manually changed passwords. The account is a member of Active Directory, and its password is changed automatically. With a computer account, there's no need to register the service principal name. +* **Limited access rights off-machine** - the default access-control list in Active Directory Domain Services (AD DS) permits minimal access to computer accounts. During access by an unauthorized user, the service has limited access to network resources. -* **Automatic password management**: Removes the need for you to manually change passwords. The account is a member of Active Directory, and the account password is changed automatically. Using a computer account eliminates the need to register the service principal name for the service. +## Computer account security-posture assessment -* **Limited access rights off-machine**: The default access-control list in Active Directory Domain Services (AD DS) permits minimal access to computer accounts. In the event of access by an unauthorized user, the service would have only limited access to resources on your network. --## Assess the security posture of computer accounts --Some potential challenges and associated mitigations when you use a computer account are listed in the following table: +Use the following table to review potential computer-account issues and mitigations. 
-| Issue | Mitigation | +| Computer-account issue | Mitigation | | - | - |-| Computer accounts are subject to deletion and re-creation when the computer leaves and rejoins the domain. | Validate the need to add a computer to an Active Directory group, and verify which computer account has been added to a group by using the example scripts in the next section of this article.| -| If you add a computer account to a group, all services that run as LocalSystem on that computer are given the access rights of the group.| Be selective about the group memberships of your computer account. Avoid making a computer account a member of any domain administrator groups, because the associated service has complete access to AD DS. | -| Improper network defaults for LocalSystem. | Do not assume that the computer account has the default limited access to network resources. Instead, check group memberships for the account carefully. | -| Unknown services that run as LocalSystem. | Ensure that all services that run under the LocalSystem account are Microsoft services or trusted services from third parties. | -| | | +| Computer accounts are subject to deletion and re-creation when the computer leaves and rejoins the domain. | Confirm the requirement to add a computer to an Active Directory group. To verify computer accounts added to a group, use the scripts in the following section.| +| If you add a computer account to a group, services that run as LocalSystem on that computer get group access rights.| Be selective about computer-account group memberships. Don't make a computer account a member of a domain administrator group. The associated service has complete access to AD DS. | +| Inaccurate network defaults for LocalSystem. | Don't assume the computer account has the default limited access to network resources. Instead, confirm group memberships for the account. | +| Unknown services that run as LocalSystem. | Ensure services that run under the LocalSystem account are Microsoft services, or trusted services. | -## Find services that run under the computer account +## Find services and computer accounts -To find services that run under the LocalSystem context, use the following PowerShell cmdlet: +To find services that run under the computer account, use the following PowerShell cmdlet: ```powershell Get-WmiObject win32_service | select Name, StartName | Where-Object {($_.StartName -eq "LocalSystem")} To find computer accounts that are members of identity administrators groups (do Get-ADGroupMember -Identity Administrators -Recursive | Where objectClass -eq "computer" ``` -## Move from computer accounts +## Computer account recommendations > [!IMPORTANT]-> Computer accounts are highly privileged accounts and should be used only when your service needs unrestricted access to local resources on the machine and you can't use a managed service account (MSA). --* Check with your service owner to see whether their service can be run by using an MSA, and use a group managed service account (gMSA) or a standalone managed service account (sMSA) if your service supports it. +> Computer accounts are highly privileged, so use them only if your service requires unrestricted access to local resources on the machine and you can't use a managed service account (MSA). -* Use a domain user account with only the permissions that you need to run your service. 
+* Confirm with the service owner whether the service can run with an MSA +* Use a group managed service account (gMSA), or a standalone managed service account (sMSA), if your service supports it +* Use a domain user account with the permissions needed to run the service ## Next steps To learn more about securing service accounts, see the following articles: -* [Introduction to on-premises service accounts](service-accounts-on-premises.md) +* [Securing on-premises service accounts](service-accounts-on-premises.md) * [Secure group managed service accounts](service-accounts-group-managed.md) * [Secure standalone managed service accounts](service-accounts-standalone-managed.md)-* [Secure user accounts](service-accounts-user-on-premises.md) +* [Secure user-based service accounts in Active Directory](service-accounts-user-on-premises.md) * [Govern on-premises service accounts](service-accounts-govern-on-premises.md) |
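As a quick way to act on the "unknown services" mitigation above, the two cmdlets can be combined into a small audit. A minimal sketch (an editorial example, not from the article; run locally with administrator rights, and note the executable-path parsing is simplified):

```powershell
# List services running as LocalSystem and report each binary's signature
# publisher, so unknown or unsigned services stand out.
Get-CimInstance win32_service |
    Where-Object { $_.StartName -eq 'LocalSystem' -and $_.PathName } |
    ForEach-Object {
        # Extract the executable path from quoted or unquoted ImagePath values
        $exe = if ($_.PathName -match '^"([^"]+)"') { $Matches[1] }
               elseif ($_.PathName -match '^(\S+)') { $Matches[1] }
        $sig = Get-AuthenticodeSignature -FilePath $exe -ErrorAction SilentlyContinue
        [pscustomobject]@{
            Service   = $_.Name
            Path      = $exe
            Publisher = $sig.SignerCertificate.Subject
            Signature = $sig.Status
        }
    } | Sort-Object Publisher | Format-Table -AutoSize
```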
active-directory | Whats Deprecated Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-deprecated-azure-ad.md | Title: What's deprecated in Azure Active Directory? description: Learn about features being deprecated in Azure Active Directory-+ Use the following table to learn about changes including deprecations, retiremen |Functionality, feature, or service|Change|New tenant change date |Current tenant change date| |||||-|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Deprecation|Jun 30, 2022|Jun 30, 2022| -|Microsoft Authenticator app [Number matching](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/defend-your-users-from-mfa-fatigue-attacks/ba-p/2365677)|Feature change|Feb 27, 2023|Feb 27, 2023| +|Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|Feb 27, 2023|Feb 27, 2023| |Azure AD DS [virtual network deployments](../../active-directory-domain-services/migrate-from-classic-vnet.md)|Retirement|Mar 1, 2023|Mar 1, 2023| |[License management API, PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/migrate-your-apps-to-access-the-license-managements-apis-from/ba-p/2464366)|Retirement|Nov 1, 2022|Mar 31, 2023|-|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Jun 2023|Jun 2023| +|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Jun 30, 2023|Jun 30, 2023| +|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Deprecation|Jun 30, 2023|Jun 30, 2023| |[Azure AD PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Jun 30, 2023|Jun 30, 2023| |[Azure AD MFA Server](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Sep 30, 2024|Sep 30, 2024| |
active-directory | Deprecated Azure Ad Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/deprecated-azure-ad-connect.md | |
active-directory | How To Upgrade Previous Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-upgrade-previous-version.md | |
active-directory | Whatis Azure Ad Connect V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md | |
active-directory | Whatis Azure Ad Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect.md | |
active-directory | Tutorial Manage Certificates For Federated Single Sign On | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md | In this article, we cover common questions and information related to certificat This tutorial is relevant only to apps that are configured to use Azure AD SSO through [Security Assertion Markup Language](https://wikipedia.org/wiki/Security_Assertion_Markup_Language) (SAML) federation. -Using the information in this tutorial, an administrator of the application learns how to: +In this tutorial, an administrator of the application learns how to: > [!div class="checklist"] > * Generate certificates for gallery and non-gallery applications Using the information in this tutorial, an administrator of the application lear When you add a new application from the gallery and configure a SAML-based sign-on (by selecting **Single sign-on** > **SAML** from the application overview page), Azure AD generates a self-signed certificate for the application that is valid for three years. To download the active certificate as a security certificate (**.cer**) file, return to that page (**SAML-based sign-on**) and select a download link in the **SAML Signing Certificate** heading. You can choose between the raw (binary) certificate or the Base64 (base 64-encoded text) certificate. For gallery applications, this section might also show a link to download the certificate as federation metadata XML (an **.xml** file), depending on the requirement of the application. -You can also download an active or inactive certificate by selecting the **SAML Signing Certificate** heading's **Edit** icon (a pencil), which displays the **SAML Signing Certificate** page. Select the ellipsis (**...**) next to the certificate you want to download, and then choose which certificate format you want. You have the additional option to download the certificate in privacy-enhanced mail (PEM) format. This format is identical to Base64 but with a **.pem** file name extension, which isn't recognized in Windows as a certificate format. +You can also download an active or inactive certificate by selecting the **SAML Signing Certificate** heading's **Edit** icon (a pencil), which displays the **SAML Signing Certificate** page. Select the ellipsis (**...**) next to the certificate you want to download, and then choose which certificate format you want. You also have the option to download the certificate in privacy-enhanced mail (PEM) format. This format is identical to Base64 but with a **.pem** file name extension, which isn't recognized in Windows as a certificate format. :::image type="content" source="media/manage-certificates-for-federated-single-sign-on/all-certificate-download-options.png" alt-text="SAML signing certificate download options (active and inactive)."::: ## Customize the expiration date for your federation certificate and roll it over to a new certificate -By default, Azure configures a certificate to expire after three years when it's created automatically during SAML single sign-on configuration. Because you can't change the date of a certificate after you save it, you have to: +By default, Azure configures a certificate to expire after three years when it's created automatically during SAML single sign-on configuration. Because you can't change the date of a certificate after you save it, you have to: 1. Create a new certificate with the desired date. 1. Save the new certificate. 
Next, download the new certificate in the correct format, upload it to the appli If your application doesn't have any validation for the certificate's expiration, and the certificate matches in both Azure Active Directory and your application, your application is still accessible despite having an expired certificate. Ensure your application can validate the certificate's expiration date. +If you intend to keep certificate expiry validation disabled, then the new certificate shouldn't be created until your scheduled maintenance window for the certificate rollover. If both an expired and an inactive valid certificate exist on the application, Azure AD will automatically utilize the valid certificate. In this case, users may experience an application outage. + ## Add email notification addresses for certificate expiration -Azure AD will send an email notification 60, 30, and 7 days before the SAML certificate expires. You may add more than one email address to receive notifications. To specify the email address(es) you want the notifications to be sent to: +Azure AD will send an email notification 60, 30, and 7 days before the SAML certificate expires. You may add more than one email address to receive notifications. To specify the email address(es) you want the notifications to be sent to: 1. In the **SAML Signing Certificate** page, go to the **notification email addresses** heading. By default, this heading uses only the email address of the admin who added the application. 1. Below the final email address, type the email address that should receive the certificate's expiration notice, and then press Enter. 1. Repeat the previous step for each email address you want to add.-1. For each email address you want to delete, select the **Delete** icon (a garbage can) next to the email address. +1. For each email address you want to delete, select the **Delete** icon (garbage can) next to the email address. 1. Select **Save**. You can add up to five email addresses to the Notification list (including the email address of the admin who added the application). If you need more people to be notified, use the distribution list emails. |
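Related to the expiration notifications above, certificates nearing expiration can also be inventoried across apps. A hedged sketch using Microsoft Graph PowerShell (not from the tutorial; it assumes the Microsoft.Graph module, `Application.Read.All` consent, and an arbitrary 30-day window):

```powershell
# Find service principals whose key credentials expire within the next 30 days.
Connect-MgGraph -Scopes Application.Read.All
$cutoff = (Get-Date).AddDays(30)
Get-MgServicePrincipal -All |
    Where-Object { $_.KeyCredentials | Where-Object { $_.EndDateTime -and $_.EndDateTime -lt $cutoff } } |
    Select-Object DisplayName,
        @{ n = 'EarliestExpiry'; e = { ($_.KeyCredentials.EndDateTime | Sort-Object)[0] } }
```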
active-directory | Cross Tenant Synchronization Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md | In this step, you automatically redeem invitations in the source tenant. 1. Select **Save**. -## Step 5: Create a configuration application in the source tenant +## Step 5: Create a configuration in the source tenant <br/>**Source tenant** Restoring a previously soft-deleted user in the target tenant isn't supported. Manually restore the soft-deleted user in the target tenant. For more information, see [Restore or remove a recently deleted user using Azure Active Directory](../fundamentals/active-directory-users-restore.md). +#### Symptom - Unable to delete a configuration ++On the **Configurations** page, there isn't a way to delete a configuration. ++**Cause** ++Currently, there isn't a way to delete a configuration on the **Configurations** page. Instead, you must delete the configuration in **Enterprise applications**. ++**Solution** ++1. In the source tenant, select **Azure Active Directory** > **Enterprise applications**. ++1. In the list of all applications, find the name of your configuration. If necessary, you can search by the configuration name. ++1. Select the configuration and then select **Properties**. ++1. Select **Delete** and then **Yes** to delete the configuration. ++ :::image type="content" source="./media/cross-tenant-synchronization-configure/enterprise-applications-configuration-delete.png" alt-text="Screenshot of the Enterprise applications Properties page showing how to delete a configuration." lightbox="./media/cross-tenant-synchronization-configure/enterprise-applications-configuration-delete.png"::: + ## Next steps - [Tutorial: Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md) |
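The portal deletion steps above can also be scripted, because the configuration is backed by an enterprise application. A minimal sketch with Microsoft Graph PowerShell (an editorial example with a hypothetical configuration name; requires `Application.ReadWrite.All` consent):

```powershell
# Find the enterprise application behind the configuration and delete it.
Connect-MgGraph -Scopes Application.ReadWrite.All
$sp = Get-MgServicePrincipal -Filter "displayName eq 'MyCrossTenantSyncConfig'"
Remove-MgServicePrincipal -ServicePrincipalId $sp.Id
```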
active-directory | Concept Activity Logs Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md | Depending on where you want to route the audit log data, you also need one of th - For storage pricing information, see the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage). * An **[Azure Event Hubs namespace](../../event-hubs/event-hubs-create.md)** to integrate with third-party solutions. -Once you have your endpoint established, go to **Azure AD** and then **Diagnostic settings.** From here you can choose what logs to send to the endpoint of your choice. For more information, see the **Create diagnostic settings** section of the [Diagnostic settings in Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md#create-diagnostic-settings) article. +Once you have your endpoint established, go to **Azure AD** and then **Diagnostic settings.** From here, you can choose what logs to send to the endpoint of your choice. For more information, see the **Create diagnostic settings** section of the [Diagnostic settings in Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md#create-diagnostic-settings) article. ## Cost considerations -If you already have an Azure AD license, you need an Azure subscription to set up the storage account and Event Hubs. The Azure subscription comes at no cost, but you have to pay to utilize Azure resources, including the storage account that you use for archival and the Event Hubs that you use for streaming. The amount of data and, thus, the cost incurred, can vary significantly depending on the tenant size. +If you already have an Azure AD license, you need an Azure subscription to set up the storage account and Event Hubs. The Azure subscription comes at no cost, but you have to pay to utilize Azure resources. These resources could include the storage account that you use for archival and the Event Hubs that you use for streaming. The amount of data and, thus, the cost incurred, can vary significantly depending on the tenant size. Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md). ### Storage size for activity logs -Every audit log event uses about 2 KB of data storage. Sign in event logs are about 4 KB of data storage. For a tenant with 100,000 users, which would incur about 1.5 million events per day, you would need about 3 GB of data storage per day. Because writes occur in approximately five-minute batches, you can anticipate approximately 9,000 write operations per month. +Every audit log event uses about 2 KB of data storage. Sign-in event logs are about 4 KB of data storage. For a tenant with 100,000 users, which would incur about 1.5 million events per day, you would need about 3 GB of data storage per day. Because writes occur in approximately five-minute batches, you can anticipate around 9,000 write operations per month. The following table contains a cost estimate, depending on the size of the tenant, of a general-purpose v2 storage account in West US for at least one year of retention. 
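As a sanity check on those estimates, the arithmetic can be reproduced directly. A minimal sketch using the per-event sizes above (actual volumes vary by tenant):

```powershell
# ~1.5 million audit events/day at ~2 KB each, batched roughly every 5 minutes.
$eventsPerDay   = 1.5e6
$bytesPerEvent  = 2KB
$dailyGB        = ($eventsPerDay * $bytesPerEvent) / 1GB   # ~2.9 GB/day, i.e. "about 3 GB"
$writesPerMonth = (60 / 5) * 24 * 30                       # ~8,640, i.e. "about 9,000"
"{0:N1} GB per day, {1:N0} write operations per month" -f $dailyGB, $writesPerMonth
```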
To create a more accurate estimate for the data volume that you anticipate for your application, use the [Azure storage pricing calculator](https://azure.microsoft.com/pricing/details/storage/blobs/). If you want to know for how long the activity data is stored in a Premium tenant Events are batched into approximately five-minute intervals and sent as a single message that contains all the events within that timeframe. A message in the Event Hubs has a maximum size of 256 KB. If the total size of all the messages within the timeframe exceeds that volume, multiple messages are sent. -For example, about 18 events per second ordinarily occur for a large tenant of more than 100,000 users, a rate that equates to 5,400 events every five minutes. Because audit logs are about 2 KB per event, this equates to 10.8 MB of data. Therefore, 43 messages are sent to the event hub in that five-minute interval. +For example, about 18 events per second ordinarily occur for a large tenant of more than 100,000 users, a rate that equates to 5,400 events every five minutes. Audit logs are about 2 KB per event, which equates to 10.8 MB of data. Therefore, 43 messages are sent to the event hub in that five-minute interval. -The following table contains estimated costs per month for a basic event hub in West US, depending on the volume of event data which can vary from tenant to tenant as per many factors like user sign-in behavior etc. To calculate an accurate estimate of the data volume that you anticipate for your application, use the [Event Hubs pricing calculator](https://azure.microsoft.com/pricing/details/event-hubs/). +The following table contains estimated costs per month for a basic event hub in West US. The volume of event data can vary from tenant to tenant, based on factors like user sign-in behavior. To calculate an accurate estimate of the data volume that you anticipate for your application, use the [Event Hubs pricing calculator](https://azure.microsoft.com/pricing/details/event-hubs/). | Log category | Number of users | Events per second | Events per five-minute interval | Volume per interval | Messages per interval | Messages per month | Cost per month (est.) | |--|--|-|-||||-| This section answers frequently asked questions and discusses known issues with -**Q: How soon after an action will the corresponding logs show up in my event hub?** --**A**: The logs should show up in your event hub within two to five minutes after the action is performed. For more information about Event Hubs, see [What is Azure Event Hubs?](../../event-hubs/event-hubs-about.md). ----**Q: How soon after an action will the corresponding logs show up in my storage account?** --**A**: For Azure storage accounts, the latency is anywhere from 5 to 15 minutes after the action is performed. --- **Q: What happens if an Administrator changes the retention period of a diagnostic setting?** **A**: The new retention policy will be applied to logs collected after the change. Logs collected before the policy change will be unaffected. This section answers frequently asked questions and discusses known issues with -**Q: How do I integrate Azure AD activity logs with my SIEM system?** +**Q: How do I integrate Azure AD activity logs with my SIEM tools?** -**A**: You can do this in two ways: +**A**: You can integrate with your SIEM tools in two ways: -- Use Azure Monitor with Event Hubs to stream logs to your SIEM system. 
First, [stream the logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md) and then [set up your SIEM tool](tutorial-azure-monitor-stream-logs-to-event-hub.md#access-data-from-your-event-hub) with the configured event hub. +- Use Azure Monitor with Event Hubs to stream logs to your SIEM tool. First, [stream the logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md) and then [set up your SIEM tool](tutorial-azure-monitor-stream-logs-to-event-hub.md#access-data-from-your-event-hub) with the configured event hub. - Use the [Reporting Graph API](concept-reporting-api.md) to access the data, and push it into the SIEM system using your own scripts. |
active-directory | How To View Applied Conditional Access Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/how-to-view-applied-conditional-access-policies.md | As an Azure AD administrator, you can use the sign-in logs to: Some scenarios require you to get an understanding of how your Conditional Access policies were applied to a sign-in event. Common examples include: -- *Helpdesk administrators* who need to look at applied Conditional Access policies to understand if a policy is the root cause of a ticket that a user opened. +- Helpdesk administrators who need to look at applied Conditional Access policies to understand if a policy is the root cause of a ticket that a user opened. -- *Tenant administrators* who need to verify that Conditional Access policies have the intended effect on the users of a tenant.+- Tenant administrators who need to verify that Conditional Access policies have the intended effect on the users of a tenant. You can access the sign-in logs by using the Azure portal, Microsoft Graph, and PowerShell. ## Required administrator roles -To see applied Conditional Access policies in the sign-in logs, administrators must have permissions to view both the logs and the policies. +To see applied Conditional Access policies in the sign-in logs, administrators must have permissions to view *both* the logs and the policies. The least privileged built-in role that grants *both* permissions is *Security Reader*. As a best practice, your Global Administrator should add the Security Reader role to the related administrator accounts. -The least privileged built-in role that grants both permissions is *Security Reader*. As a best practice, your global administrator should add the Security Reader role to the related administrator accounts. --The following built-in roles grant permissions to read Conditional Access policies: +The following built-in roles grant permissions to *read Conditional Access policies*: - Global Administrator - - Global Reader - - Security Administrator - - Security Reader - - Conditional Access Administrator --The following built-in roles grant permission to view sign-in logs: +The following built-in roles grant permission to *view sign-in logs*: - Global Administrator - - Security Administrator - - Security Reader - - Global Reader - - Reports Reader ## Permissions for client apps If you use a client app to pull sign-in logs from Microsoft Graph, your app need Any of the following permissions is sufficient for a client app to access applied certificate authority (CA) policies in sign-in logs through Microsoft Graph: - `Policy.Read.ConditionalAccess` - - `Policy.ReadWrite.ConditionalAccess` - - `Policy.Read.All` ## Permissions for PowerShell Like any other client app, the Microsoft Graph PowerShell module needs client pe - `AuditLog.Read.All` - `Directory.Read.All` -These permissions are the least privileged permissions with the necessary access. 
--To consent to the necessary permissions, use: --`Connect-MgGraph -Scopes Policy.Read.ConditionalAccess, AuditLog.Read.All, Directory.Read.All` --To view the sign-in logs, use: +The following commands use the least privileged permissions with the necessary access: -`Get-MgAuditLogSignIn` +- To consent to the necessary permissions: `Connect-MgGraph -Scopes Policy.Read.ConditionalAccess, AuditLog.Read.All, Directory.Read.All` +- To view the sign-in logs: `Get-MgAuditLogSignIn` For more information about this cmdlet, see [Get-MgAuditLogSignIn](/powershell/module/microsoft.graph.reports/get-mgauditlogsignin). The Azure AD Graph PowerShell module doesn't support viewing applied Conditional Access policies. Only the Microsoft Graph PowerShell module returns applied Conditional Access policies. -## Confirming access --On the **Conditional Access** tab, you see a list of Conditional Access policies applied to that sign-in event. --To confirm that you have admin access to view applied Conditional Access policies in the sign-in logs: --1. Go to the Azure portal. --2. In the upper-right corner, select your directory, and then select **Azure Active Directory** on the left pane. +## View Conditional Access policies in Azure AD sign-in logs -3. In the **Monitoring** section, select **Sign-in logs**. +The activity details of sign-in logs contain several tabs. The **Conditional Access** tab lists the Conditional Access policies applied to that sign-in event. -4. Select an item in the sign-in table to open the **Activity Details: Sign-ins context** pane. +1. Sign in to the [Azure portal](https://portal.azure.com) using the Security Reader role. +1. In the **Monitoring** section, select **Sign-in logs**. +1. Select a sign-in item from the table to open the **Activity Details: Sign-ins context** pane. +1. Select the **Conditional Access** tab. -5. Select the **Conditional Access** tab on the context pane. If your screen is small, you might need to select the ellipsis (**...**) to see all tabs on the context pane. +If you don't see the Conditional Access policies, confirm you're using a role that provides access to both the sign-in logs and the Conditional Access policies. ## Next steps -* [Sign-in error code reference](./concept-sign-ins.md) -* [Sign-in report overview](concept-sign-ins.md) +* [Troubleshoot sign-in problems](../conditional-access/troubleshoot-conditional-access.md#azure-ad-sign-in-events) +* [Review the Conditional Access sign-in logs FAQs](reports-faq.yml#conditional-access) +* [Learn about the sign-in logs](concept-sign-ins.md) |
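Putting the cmdlets above together, a hedged sketch of pulling a user's recent sign-ins along with the applied policies (the UPN is hypothetical; property names follow the Microsoft Graph `signIn` resource):

```powershell
# Show which Conditional Access policies applied to a user's recent sign-ins.
Connect-MgGraph -Scopes Policy.Read.ConditionalAccess, AuditLog.Read.All, Directory.Read.All
Get-MgAuditLogSignIn -Filter "userPrincipalName eq 'user@contoso.com'" -Top 5 |
    ForEach-Object {
        [pscustomobject]@{
            When     = $_.CreatedDateTime
            App      = $_.AppDisplayName
            # Each entry carries the policy name and its result (for example, success or notApplied)
            Policies = ($_.AppliedConditionalAccessPolicies |
                        ForEach-Object { "$($_.DisplayName): $($_.Result)" }) -join '; '
        }
    }
```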
active-directory | Howto Integrate Activity Logs With Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md | Follow the steps below to send logs from Azure Active Directory to Azure Monitor * `ADFSSignInLogs` Active Directory Federation Services (ADFS) * `RiskyUsers` * `UserRiskEvents`- * `AADServicePrincipalRiskEvents` + The following logs are in preview but still visible in Azure AD. At this time, selecting these options will not add new logs to your workspace unless your organization was included in the preview. * `NetworkAccessTrafficLogs` * `RiskyServicePrincipals`+ * `AADServicePrincipalRiskEvents` + * `EnrichedOffice365AuditLogs` 1. Select the **Destination details** for where you'd like to send the logs. Choose any or all of the following destinations. Additional fields appear, depending on your selection. |
active-directory | Overview Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-reports.md | In addition to the user interface, Azure AD also provides you with [programmatic ## Next steps -- [Risky sign-ins report](../identity-protection/overview-identity-protection.md)+- [Risky sign-ins report](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins) - [Audit logs report](concept-audit-logs.md) - [Sign-ins logs report](concept-sign-ins.md) |
active-directory | Reference Reports Data Retention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-data-retention.md | Title: How long does Azure AD store reporting data? | Microsoft Docs -description: Learn how long Azure stores the various types of reporting data. + Title: Azure Active Directory data retention | Microsoft Docs +description: Learn how long Azure Active Directory stores the various types of reporting data. Previously updated : 10/31/2022 Last updated : 02/03/2023 -# How long does Azure AD store reporting data? -+# Azure Active Directory data retention In this article, you learn about the data retention policies for the different activity reports in Azure Active Directory (Azure AD). -### When does Azure AD start collecting data? +## When does Azure AD start collecting data? | Azure AD Edition | Collection Start | | :-- | :-- | | Azure AD Premium P1 <br /> Azure AD Premium P2 | When you sign up for a subscription | | Azure AD Free| The first time you open [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) or use the [reporting APIs](./overview-reports.md) | ---### When is the activity data available in the Azure portal? --- **Immediately** - If you have already been working with reports in the Azure portal.-- **Within 2 hours** - If you haven't turned on reporting in the Azure portal.----### How soon can I see activities data after getting a premium license? --If you already have activities data with your free license, then you can see it immediately on upgrade. If you don't have any data, then it will take up to three days for the data to show up in the reports after you upgrade to a premium license. ----### When does Azure AD start collecting security signal data? --For security signals, the collection process starts when you opt-in to use the **Identity Protection Center**. --+If you already have activities data with your free license, then you can see it immediately on upgrade. If you don't have any data, then it will take up to three days for the data to show up in the reports after you upgrade to a premium license. For security signals, the collection process starts when you opt-in to use the **Identity Protection Center**. -### How long does Azure AD store the data? +## How long does Azure AD store the data? **Activity reports** For security signals, the collection process starts when you opt-in to use the * | Sign-ins | Seven days | 30 days | 30 days | | Azure AD MFA usage | 30 days | 30 days | 30 days | -You can retain the audit and sign-in activity data for longer than the default retention period outlined above by routing it to an Azure storage account using Azure Monitor. For more information, see [Archive Azure AD logs to an Azure storage account](quickstart-azure-monitor-route-logs-to-storage-account.md). +You can retain the audit and sign-in activity data for longer than the default retention period outlined in the previous table by routing it to an Azure storage account using Azure Monitor. For more information, see [Archive Azure AD logs to an Azure storage account](quickstart-azure-monitor-route-logs-to-storage-account.md). **Security signals** You can retain the audit and sign-in activity data for longer than the default r > [!NOTE] > Risky users are not deleted until the risk has been remediated. ---### Can I see last month's data after getting an Azure AD premium license? 
+## Can I see last month's data after getting an Azure AD premium license? **No**, you can't. Azure stores up to seven days of activity data for a free version. When you switch from a free to a premium version, you can only see up to 7 days of data. -+## Next steps ++- [Stream logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md) +- [Learn how to download Azure AD logs](howto-download-logs.md) |
active-directory | Reference Reports Latencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-latencies.md | - Title: Azure Active Directory reporting latencies | Microsoft Docs -description: Learn about the amount of time it takes for reporting events to show up in your Azure portal ------- Previously updated : 10/31/2022-------# Azure Active Directory reporting latencies --Latency is the amount of time it takes for Azure Active Directory (Azure AD) reporting data to show up in the [Azure portal](https://portal.azure.com). This article lists the expected latency for the different types of reports. --## Activity reports --There are two types of activity reports: --- [Sign-ins](concept-sign-ins.md) ΓÇô Provides information about the usage of managed applications and user sign-in activities-- [Audit logs](concept-audit-logs.md) - Provides system activity information about users and groups, managed applications and directory activities--The following table lists the latency information for activity reports. --> [!NOTE] -> **Latency (95th percentile)** refers to the time by which 95% of the logs will be reported, and **Latency (99th percentile)** refers to the time by which 99% of the logs will be reported. -> --| Report | Latency (95th percentile) |Latency (99th percentile)| -| :-- | | | -| Audit logs | 2 mins | 5 mins | -| Sign-ins | 2 mins | 5 mins | --### How soon can I see activities data after getting a premium license? --When you upgrade to Azure AD P1 or P2 from a free version of Azure AD, the reports associated with P1 and P2 will begin to retain and display data from your tenant. You should expect a delay of roughly 24 hours from when you upgrade your tenant before all premium reporting features show data. Many premium reporting features will only begin retaining data after this 24 hour period following your upgrade to P1 or P2. --## Security reports --There are two types of security reports: --- [Risky sign-ins](../identity-protection/overview-identity-protection.md) - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who isn't the legitimate owner of a user account. -- [Users flagged for risk](../identity-protection/overview-identity-protection.md) - A risky user is an indicator for a user account that might have been compromised. --The following table lists the latency information for security reports. --| Report | Minimum | Average | Maximum | -| :-- | | | | -| Users at risk | 5 minutes | 15 minutes | 2 hours | -| Risky sign-ins | 5 minutes | 15 minutes | 2 hours | --## Risk detections --Azure AD uses adaptive machine learning algorithms and heuristics to detect suspicious actions that are related to your user accounts. Each detected suspicious action is stored in a record called a **risk detection**. --The following table lists the latency information for risk detections. 
--| Report | Minimum | Average | Maximum | -| :-- | | | | -| Sign-ins from anonymous IP addresses |5 minutes |15 Minutes |2 hours | -| Sign-ins from unfamiliar locations |5 minutes |15 Minutes |2 hours | -| Users with leaked credentials |2 hours |4 hours |8 hours | -| Impossible travel to atypical locations |5 minutes |1 hour |8 hours | -| Sign-ins from infected devices |2 hours |4 hours |8 hours | -| Sign-ins from IP addresses with suspicious activity |2 hours |4 hours |8 hours | ---## Next steps --* [Azure AD reports overview](overview-reports.md) -* [Programmatic access to Azure AD reports](concept-reporting-api.md) -* [Azure Active Directory risk detections](../identity-protection/overview-identity-protection.md) |
aks | Api Server Vnet Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md | API Server VNet Integration is supported for public or private clusters, and pub ## Region availability -API Server VNet Integration is available in all global Azure regions except the following: --- Southcentralus+API Server VNet Integration is available in all global Azure regions. ## Prerequisites For associated best practices, see [Best practices for network connectivity and [command-invoke]: command-invoke.md [container-registry-private-link]: ../container-registry/container-registry-private-link.md [virtual-networks-name-resolution]: ../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server-[operator-best-practices-network]: operator-best-practices-network.md +[operator-best-practices-network]: operator-best-practices-network.md |
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Net > - North Central US > - West Central US > - East US+> - UK South +> - Australia East ## Overview of overlay networking |
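For context, creating a cluster with Azure CNI Overlay looks roughly like the following sketch (not from this excerpt; resource names are hypothetical, and during the preview the `aks-preview` Azure CLI extension is assumed):

```azurecli
az aks create -n myOverlayCluster -g myResourceGroup \
    --location westcentralus \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16
```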
aks | Http Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md | In your template, provide values for *httpProxy*, *httpsProxy*, and *noProxy*. I ## Updating Proxy configurations -Values for *httpProxy*, and *httpsProxy* can't be changed after cluster creation. However, to support rolling CA certs and No Proxy settings, the values for *trustedCa* and *NoProxy* can be changed and applied to the cluster with the [az aks update][az-aks-update] command. +Values for *httpProxy* and *httpsProxy* can't be changed after cluster creation. However, the values for *trustedCa* and *NoProxy* can be changed and applied to the cluster with the [az aks update][az-aks-update] command. An `az aks update` for *NoProxy* will automatically inject new environment variables into pods with the new *NoProxy* values. Pods must be rotated for the apps to pick it up. For components under Kubernetes, like containerd and the node itself, this won't take effect until a node image upgrade is performed. For example, assuming a new file has been created with the base64 encoded string of the new CA cert called *aks-proxy-config-2.json*, the following action updates the cluster. Or, you need to add new endpoint URLs for your applications to No Proxy: |
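For reference, the rollover described above would look roughly like this sketch (hypothetical cluster and resource group names; *aks-proxy-config-2.json* carries the new base64-encoded CA and any updated *noProxy* entries):

```azurecli
az aks update -n $clusterName -g $resourceGroup --http-proxy-config aks-proxy-config-2.json
```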
aks | Image Cleaner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md | -It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images can present security issues as they may contain vulnerabilities. By cleaning these unreferenced images, you can remove an area of risk in your clusters. When done manually, this process can be time intensive, which Image Cleaner can mitigate via automatic image identification and removal. +It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images can present security issues as they may contain vulnerabilities. By cleaning these unreferenced images, you can remove an area of risk in your clusters. When done manually, this process can be time intensive, which Image Cleaner can mitigate via automatic image identification and removal. > [!NOTE]-> Image Cleaner is a feature based on [Eraser](https://github.com/Azure/eraser). +> Image Cleaner is a feature based on [Eraser](https://github.com/Azure/eraser). > On an AKS cluster, the feature name and property name is `Image Cleaner` while the relevant Image Cleaner pods' names contain `Eraser`. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] When enabled, an `eraser-controller-manager` pod is deployed on each agent node, Once an `ImageList` is generated, Image Cleaner will remove all the images in the list from node VMs. - ## Configuration options az aks update -g MyResourceGroup -n MyManagedCluster ## Logging -The deletion logs are stored in the `image-cleaner-kind-worker` pods. You can check these via `kubectl logs` or via the Container Insights pod log table if the [Azure Monitor add-on](./monitor-aks.md) is enabled. +Image deletion logs are stored in `eraser-aks-nodepool-xxx` pods for manually deleted images, and in `eraser-collector-xxx` pods for automatically deleted images. ++You can view these logs by running `kubectl logs <pod name> -n kube-system`. However, this command may return only the most recent logs, since older logs are routinely deleted. To view all logs, follow these steps to enable the [Azure Monitor add-on](./monitor-aks.md) and use the Container Insights pod log table. ++1. Ensure that Azure monitoring is enabled on the cluster. For detailed steps, see [Enable Container Insights for AKS cluster](../azure-monitor/containers/container-insights-enable-aks.md#existing-aks-cluster). ++1. Get the Log Analytics resource ID: ++ ```azurecli + az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster> + ``` ++ After a few minutes, the command returns JSON-formatted information about the solution, including the workspace resource ID: ++ ```json + "addonProfiles": { + "omsagent": { + "config": { + "logAnalyticsWorkspaceResourceID": "/subscriptions/<WorkspaceSubscription>/resourceGroups/<DefaultWorkspaceRG>/providers/Microsoft.OperationalInsights/workspaces/<defaultWorkspaceName>" + }, + "enabled": true + } + } + ``` ++1. In the Azure portal, search for the workspace resource ID, then select **Logs**. ++1. 
Copy this query into the query window, replacing `name` with either `eraser-aks-nodepool-xxx` (for manual mode) or `eraser-collector-xxx` (for automatic mode). ++ ```kusto + let startTimestamp = ago(1h); + KubePodInventory + | where TimeGenerated > startTimestamp + | project ContainerID, PodName=Name, Namespace + | where PodName contains "name" and Namespace startswith "kube-system" + | distinct ContainerID, PodName + | join + ( + ContainerLog + | where TimeGenerated > startTimestamp + ) + on ContainerID + // at this point before the next pipe, columns from both tables are available to be "projected". Due to both + // tables having a "Name" column, we assign an alias as PodName to one column which we actually want + | project TimeGenerated, PodName, LogEntry, LogEntrySource + | summarize by TimeGenerated, LogEntry + | order by TimeGenerated desc + ``` ++1. Select **Run**. Any deleted image logs will appear in the **Results** area. ++ :::image type="content" source="media/image-cleaner/eraser-log-analytics.png" alt-text="Screenshot showing deleted image logs in the Azure portal." lightbox="media/image-cleaner/eraser-log-analytics.png"::: <!-- LINKS --> |
app-service | App Service Configuration References | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configuration-references.md | To get started with using App Configuration references in App Service, you'll fi 1. Create an App Configuration store by following the [App Configuration quickstart](../azure-app-configuration/quickstart-dotnet-core-app.md#create-an-app-configuration-store). + > [!NOTE] + > App Configuration references do not yet support network-restricted configuration stores. + 1. Create a [managed identity](overview-managed-identity.md) for your application. App Configuration references will use the app's system assigned identity by default, but you can [specify a user-assigned identity](#access-app-configuration-store-with-a-user-assigned-identity). 1. Enable the newly created identity to have the right set of access permissions on the App Configuration store. Update the [role assignments for your store](../azure-app-configuration/howto-integrate-azure-managed-service-identity.md#grant-access-to-app-configuration). You'll be assigning `App Configuration Data Reader` role to this identity, scoped over the resource. -> [!NOTE] -> App Configuration references do not yet support network-restricted configuration stores. - ### Access App Configuration Store with a user-assigned identity Some apps might need to reference configuration at creation time, when a system-assigned identity wouldn't yet be available. In these cases, a user-assigned identity can be created and given access to the App Configuration store, in advance. Follow these steps to [create user-assigned identity for App Configuration store](../azure-app-configuration/overview-managed-identity.md#adding-a-user-assigned-identity). Once you have granted permissions to the user-assigned identity, follow these st This configuration will apply to all references from this App. +## Granting your app access to referenced key vaults ++In addition to storing raw configuration values, Azure App Configuration has its own format for storing [Key Vault references][app-config-key-vault-references]. If the value of an App Configuration reference is a Key Vault reference in App Configuration store, your app will also need to have permission to access the key vault being specified. ++> [!NOTE] +> [The Azure App Configuration Key Vault references concept][app-config-key-vault-references] should not be confused with [the App Service and Azure Functions Key Vault references concept][app-service-key-vault-references]. Your app may use any combination of these, but there are some important differences to note. If your vault needs to be network restricted or you need the app to periodically update to latest versions, consider using the App Service and Azure Functions direct approach instead of using an App Configuration reference. ++[app-config-key-vault-references]: ../azure-app-configuration/use-key-vault-references-dotnet-core.md +[app-service-key-vault-references]: app-service-key-vault-references.md ++1. Identify the identity that you used for the App Configuration reference. Access to the vault must be granted to that same identity. ++1. Create an [access policy in Key Vault](../key-vault/general/security-features.md#privileged-access) for that identity. Enable the "Get" secret permission on this policy. Do not configure the "authorized application" or `applicationId` settings, as this is not compatible with a managed identity. 
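The Key Vault access policy step above can also be scripted rather than done in the portal. A minimal sketch with `az keyvault set-policy`, assuming a hypothetical vault name and the object ID of the same identity used for the App Configuration reference:

```azurecli
# Grant the identity "get" permission on secrets; vault name and object ID are placeholders
az keyvault set-policy \
  --name <vault-name> \
  --object-id <identity-object-id> \
  --secret-permissions get
```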
+
## Reference syntax

An App Configuration reference is of the form `@Microsoft.AppConfiguration({referenceString})`, where `{referenceString}` is replaced by one of the options below:

To use an App Configuration reference for an [app setting](configure-common.md#c

> [!TIP]
> Most application settings using App Configuration references should be marked as slot settings, as you should have separate stores or labels for each environment.

-> [!NOTE]
-> Azure App Configuration also supports its own format for storing [Key Vault references](../azure-app-configuration/use-key-vault-references-dotnet-core.md). If the value of an App Configuration reference is a Key Vault reference in App Configuration store, the secret value will not be retrieved from Key Vault, as of yet. For using the secrets from KeyVault in App Service or Functions, please refer to the [Key Vault references in App Service](app-service-key-vault-references.md).
-
### Considerations for Azure Files mounting

Apps can use the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting to mount Azure Files as the file system. This setting has additional validation checks to ensure that the app can be properly started. The platform relies on having a content share within Azure Files, and it assumes a default name unless one is specified via the `WEBSITE_CONTENTSHARE` setting. For any requests that modify these settings, the platform will attempt to validate if this content share exists, and it will attempt to create it if not. If it can't locate or create the content share, the request is blocked. |
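To make the reference syntax above concrete, here's a sketch that sets an app setting to an App Configuration reference with `az webapp config appsettings set`. The store endpoint, key, and label are hypothetical, and the reference-string shape shown (`Endpoint=...; Key=...; Label=...`) is one common form; check the article's reference-string options for the variant your scenario needs:

```azurecli
# Point an app setting at a key in an App Configuration store (all names are placeholders)
az webapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group> \
  --settings 'MySetting=@Microsoft.AppConfiguration(Endpoint=https://<store-name>.azconfig.io; Key=myAppConfigKey; Label=myLabel)'
```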
app-service | Overview Authentication Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md | Title: Authentication and authorization description: Find out about the built-in authentication and authorization support in Azure App Service and Azure Functions, and how it can help secure your app against unauthorized access. ms.assetid: b7151b57-09e5-4c77-a10c-375a262f17e5 Previously updated : 07/21/2021 Last updated : 02/03/2023 |
applied-ai-services | Try Form Recognizer Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-form-recognizer-studio.md | +
+ Title: "Quickstart: Form Recognizer Studio | v3.0"
++description: Form and document processing, data extraction, and analysis using Form Recognizer Studio
+++++ Last updated : 02/02/2023
++monikerRange: 'form-recog-3.0.0'
+++
+# Get started: Form Recognizer Studio
+++
+[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. You can get started by exploring the pre-trained models with sample documents or your own documents. You can also create projects to build custom template models and reference the models in your applications using the [Python SDK](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and other quickstarts.
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE56n49]
+
+## Prerequisites for new users
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+* A [**Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**Cognitive Services multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
+
+> [!TIP]
+> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
+
+## Prebuilt models
+
+Prebuilt models help you add Form Recognizer features to your apps without having to build, train, and publish your own models. You can choose from several prebuilt models, each of which has its own set of supported data fields. The choice of model to use for the analyze operation depends on the type of document to be analyzed. The following prebuilt models are currently supported by Form Recognizer:
+
+* [**General document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=document): extract text, tables, structure, key-value pairs and named entities.
+* [**W-2**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2): extract text and key information from W-2 tax forms.
+* [**Read**](https://formrecognizer.appliedai.azure.com/studio/read): extract text lines, words, their locations, detected languages, and handwritten style if detected from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+* [**Layout**](https://formrecognizer.appliedai.azure.com/studio/layout): extract text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+* [**Invoice**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice): extract text, selection marks, tables, key-value pairs, and key information from invoices.
+* [**Receipt**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt): extract text and key information from receipts. 
+* [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports.
+* [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extract text and key information from business cards.
+
+After you've completed the prerequisites, navigate to [Form Recognizer Studio General Documents](https://formrecognizer.appliedai.azure.com/studio/document).
+
+In the following example, we use the General Documents feature. The steps to use other pre-trained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
+
+ :::image border="true" type="content" source="../media/quickstarts/form-recognizer-general-document-demo-preview3.gif" alt-text="Selecting the General Document API to analyze a document in the Form Recognizer Studio.":::
+
+1. Select a Form Recognizer service feature from the Studio home page.
+
+1. This step is a one-time process unless you've already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
+
+1. Select the Analyze button to run analysis on the sample document or try your document by using the Add command.
+
+1. Use the controls at the bottom of the screen to zoom in and out and rotate the document view.
+
+1. Observe the highlighted extracted content in the document view. Hover your mouse over the keys and values to see details.
+
+1. In the output section's Result tab, browse the JSON output to understand the service response format.
+
+1. In the Code tab, browse the sample code for integration. Copy and download to get started.
+
+## Added prerequisites for custom projects
+
+In addition to the Azure account and a Form Recognizer or Cognitive Services resource, you'll need:
+
+### Azure Blob Storage container
+
+A **standard performance** [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your training documents within your storage account. If you don't know how to create an Azure storage account with a container, follow these quickstarts:
+
+* [**Create a storage account**](../../../storage/common/storage-account-create.md). When creating your storage account, make sure to select **Standard** performance in the **Instance details → Performance** field.
+* [**Create a container**](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When creating your container, set the **Public access level** field to **Container** (anonymous read access for containers and blobs) in the **New Container** window. 
+
+### Configure CORS
+
+[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS tab of your storage account.
+
+1. Select the CORS tab for the storage account.
+
+ :::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
+
+1. Start by creating a new CORS entry in the Blob service.
+
+1. Set the **Allowed origins** to `https://formrecognizer.appliedai.azure.com`.
+
+ :::image type="content" source="../media/quickstarts/cors-updated-image.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
+
+ > [!TIP]
+ > You can use the wildcard character '*' rather than a specified domain to allow all origin domains to make requests via CORS.
+
+1. Select all eight of the available options for **Allowed methods**.
+
+1. Approve all **Allowed headers** and **Exposed headers** by entering an * in each field.
+
+1. Set the **Max Age** to 120 seconds or any acceptable value.
+
+1. Select the save button at the top of the page to save the changes.
+
+CORS should now be configured to use the storage account from Form Recognizer Studio.
+
+### Sample documents set
+
+1. Go to the [Azure portal](https://portal.azure.com/#home) and navigate as follows: **Your storage account** → **Data storage** → **Containers**
+
+ :::image border="true" type="content" source="../media/sas-tokens/data-storage-menu.png" alt-text="Screenshot: Data storage menu in the Azure portal.":::
+
+1. Select a **container** from the list.
+
+1. Select **Upload** from the menu at the top of the page.
+
+ :::image border="true" type="content" source="../media/sas-tokens/container-upload-button.png" alt-text="Screenshot: container upload button in the Azure portal.":::
+
+1. The **Upload blob** window will appear.
+
+1. Select your file(s) to upload.
+
+ :::image border="true" type="content" source="../media/sas-tokens/upload-blob-window.png" alt-text="Screenshot: upload blob window in the Azure portal.":::
+
+> [!NOTE]
+> By default, the Studio will use form documents that are located at the root of your container. However, you can use data organized in folders by specifying the folder path in the Custom form project creation steps. *See* [**Organize your data in subfolders**](../build-training-data-set.md#organize-your-data-in-subfolders-optional)
+
+## Custom models
+
+To create custom models, you start by configuring your project:
+
+1. From the Studio home, select the Custom model card to open the Custom models page.
+
+1. Use the "Create a project" command to start the new project configuration wizard.
+
+1. Enter project details, select the Azure subscription and resource, and the Azure Blob storage container that contains your data.
+
+1. Review and submit your settings to create the project.
+
+1. From the labeling view, define the labels and their types that you're interested in extracting.
+
+1. Select the text in the document and select the label from the drop-down list or the labels pane.
+
+1. Label four more documents to get at least five documents labeled.
+
+1. Select the Train command, enter a model name, and select whether you want the custom template (form) or custom neural (document) model to start training your custom model.
+
+1. 
Once the model is ready, use the Test command to validate it with your test documents and observe the results.
+
+
+### Labeling as tables
+
+> [!NOTE]
+> * With the release of API versions 2022-06-30-preview and later, custom template models will add support for [cross page tabular fields (tables)](../concept-custom-template.md#tabular-fields).
+> * With the release of API versions 2022-06-30-preview and later, custom neural models will support [tabular fields (tables)](../concept-custom-template.md#tabular-fields) and models trained with API version 2022-08-31, or later will accept tabular field labels.
+
+1. Use the Delete command to delete models that aren't required.
+
+1. Download model details for offline viewing.
+
+1. Select multiple models and compose them into a new model to be used in your applications.
+
+For custom form models, while creating your custom models, you may need to extract data collections from your documents. Data collections may appear in a couple of formats. Using tables as the visual pattern:
+
+* Dynamic or variable count of values (rows) for a given set of fields (columns)
+
+* Specific collection of values for a given set of fields (columns and/or rows)
+
+**Label as dynamic table**
+
+Use dynamic tables to extract a variable count of values (rows) for a given set of fields (columns):
+
+1. Add a new "Table" type label, select "Dynamic table" type, and name your label.
+
+1. Add the number of columns (fields) and rows (for data) that you need.
+
+1. Select the text in your page and then choose the cell to assign the text to. Repeat for all rows and columns in all pages in all documents.
+
+
+**Label as fixed table**
+
+Use fixed tables to extract a specific collection of values for a given set of fields (columns and/or rows):
+
+1. Create a new "Table" type label, select "Fixed table" type, and name it.
+
+1. Add the number of columns and rows that you need corresponding to the two sets of fields.
+
+1. Select the text in your page and then choose the cell to assign the text to. Repeat for other documents.
+
+
+### Signature detection
+
+>[!NOTE]
+> Signature fields are currently only supported for custom template models. When training a custom neural model, labeled signature fields are ignored.
+
+To label for signature detection (custom form only):
+
+1. In the labeling view, create a new "Signature" type label and name it.
+
+1. Use the Region command to create a rectangular region at the expected location of the signature.
+
+1. Select the drawn region and choose the Signature type label to assign it to your drawn region. Repeat for other documents.
+
+
+## Next steps
+
+* Follow our [**Form Recognizer v3.0 migration guide**](../v3-migration-guide.md) to learn the differences from the previous version of the REST API.
+* Explore our [**v3.0 SDK quickstarts**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features in your applications using the new SDKs.
+* Refer to our [**v3.0 REST API quickstarts**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features using the new REST API.
+
+[Get started with the Form Recognizer Studio](https://formrecognizer.appliedai.azure.com). |
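The CORS steps in the Form Recognizer Studio quickstart above can also be applied from the command line instead of the portal. A minimal sketch with `az storage cors add`, assuming a placeholder storage account name and the same origin, headers, and max age the portal steps use:

```azurecli
# Allow the Form Recognizer Studio origin to call the Blob service (account name is a placeholder)
az storage cors add \
  --services b \
  --methods DELETE GET HEAD MERGE OPTIONS PATCH POST PUT \
  --origins "https://formrecognizer.appliedai.azure.com" \
  --allowed-headers "*" \
  --exposed-headers "*" \
  --max-age 120 \
  --account-name <storage-account-name>
```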
automation | Source Control Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md | Use this procedure to configure source control using the Azure portal.

|Repository | Name of the repository or project. The first 200 repositories are retrieved. To search for a repository, type the name in the field and click **Search on GitHub**.|
|Branch | Branch from which to pull the source files. Branch targeting isn't available for the TFVC source control type. |
|Folder path | Folder that contains the runbooks to synchronize, for example, **/Runbooks**. Only runbooks in the specified folder are synchronized. Recursion isn't supported. |
- |Auto Sync<sup>1</sup> | Setting that turns on or off automatic synchronization when a commit is made in the source control repository. |
+ |Auto Sync<sup>1</sup> | Setting that turns on or off automatic synchronization when a commit is made in the source control repository or GitHub repo. |
|Publish Runbook | Setting of On if runbooks are automatically published after synchronization from source control, and Off otherwise. |
|Description | Text specifying additional details about the source control. |

- <sup>1</sup> To enable Auto Sync when configuring source control integration with Azure DevOps, you must be a Project Administrator.</br>
+ <sup>1 To enable Auto Sync when configuring the source control integration with Azure DevOps, you must be the Project Administrator or the GitHub repo owner. Collaborators can only configure Source Control without Auto Sync.</sup></br>

Auto Sync does not work with Automation Private Link. If you enable Private Link, source control webhook invocations will fail because they originate from outside the private network.

:::image type="content" source="./media/source-control-integration/source-control-summary-inline.png" alt-text="Screenshot that describes the Source control summary." lightbox="./media/source-control-integration/source-control-summary-expanded.png"::: |
azure-arc | Quickstart Connect Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md | Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 11/04/2022 Last updated : 02/03/2023 ms.devlang: azurecli For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable >[!NOTE] > The cluster needs to have at least one node of operating system and architecture type `linux/amd64`. Clusters with only `linux/arm64` nodes aren't yet supported. +* At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU. For a multi-node Kubernetes cluster environment, pods can get scheduled on different nodes. + * A [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster. * Install [Helm 3](https://helm.sh/docs/intro/install). Ensure that the Helm 3 version is < 3.7.0. For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable ```azurepowershell-interactive Install-Module -Name Az.ConnectedKubernetes ```+ * An identity (user or service principal) which can be used to [log in to Azure PowerShell](/powershell/azure/authenticate-azureps) and connect your cluster to Azure Arc. > [!IMPORTANT] For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable >[!NOTE] > The cluster needs to have at least one node of operating system and architecture type `linux/amd64`. Clusters with only `linux/arm64` nodes aren't yet supported. +* At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU. For a multi-node Kubernetes cluster environment, pods can get scheduled on different nodes. + * A [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster. * Install [Helm 3](https://helm.sh/docs/intro/install). Ensure that the Helm 3 version is < 3.7.0. |
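Once the prerequisites above are in place, onboarding the cluster is a single CLI call against the current kubeconfig context. A minimal sketch using the `connectedk8s` extension, with placeholder names:

```azurecli
# Install the Arc-enabled Kubernetes extension, then connect the cluster
az extension add --name connectedk8s
az connectedk8s connect --name <cluster-name> --resource-group <resource-group>
```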
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | The following versions of the Windows and Linux operating system are officially * SUSE Linux Enterprise Server (SLES) 12 and 15 * Red Hat Enterprise Linux (RHEL) 7, 8 and 9 * Amazon Linux 2-* Oracle Linux 7 +* Oracle Linux 7 and 8 > [!NOTE] > On Linux, Azure Arc-enabled servers install several daemon processes. We only support using systemd to manage these processes. In some environments, systemd may not be installed or available, in which case Arc-enabled servers are not supported, even if the distribution is otherwise supported. These environments include **Windows Subsystem for Linux** (WSL) and most container-based systems, such as Kubernetes or Docker. The Azure Connected Machine agent can be installed on the node that runs the containers but not inside the containers themselves. |
azure-fluid-relay | Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/customer-managed-keys.md | Request payload format: Example userAssignedIdentities and userAssignedIdentityResourceId: /subscriptions/ xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testUserAssignedIdentity -Example keyEncryptionKeyUrl: https://test-key-vault.vault.azure.net/keys/testKey/testKeyVersionGuid +Example keyEncryptionKeyUrl: `https://test-key-vault.vault.azure.net/keys/testKey/testKeyVersionGuid` Notes: - Identity.type must be UserAssigned. It is the identity type of the managed identity that is assigned to the Fluid Relay resource. |
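The payload fields called out above (`identity.type`, `userAssignedIdentities`, `userAssignedIdentityResourceId`, and `keyEncryptionKeyUrl`) can be sent with a generic `az rest` call. This is only a sketch: the `api-version` and the exact `properties.encryption` nesting are assumptions to verify against the current Fluid Relay REST reference, and every ID and name is a placeholder:

```azurecli
# Sketch only: api-version and property nesting are assumptions; all IDs and names are placeholders
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.FluidRelay/fluidRelayServers/<server-name>?api-version=2022-06-01" \
  --body '{
    "identity": {
      "type": "UserAssigned",
      "userAssignedIdentities": { "<user-assigned-identity-resource-id>": {} }
    },
    "properties": {
      "encryption": {
        "customerManagedKeyEncryption": {
          "keyEncryptionKeyIdentity": {
            "identityType": "UserAssigned",
            "userAssignedIdentityResourceId": "<user-assigned-identity-resource-id>"
          },
          "keyEncryptionKeyUrl": "https://<vault-name>.vault.azure.net/keys/<key-name>/<key-version>"
        }
      }
    }
  }'
```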
azure-fluid-relay | Connect Fluid Azure Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/connect-fluid-azure-service.md | The sections below will explain how to use `AzureClient` in your own application To connect to an Azure Fluid Relay instance, you first need to create an `AzureClient`. You must provide some configuration parameters including the tenant ID, service URL, and a token provider to generate the JSON Web Token (JWT) that will be used to authorize the current user against the service. The [@fluidframework/test-client-utils](https://fluidframework.com/docs/apis/test-client-utils/) package provides an [InsecureTokenProvider](https://fluidframework.com/docs/apis/test-client-utils/insecuretokenprovider-class) that can be used for development purposes. > [!CAUTION]-> The `InsecureTokenProvider` should only be used for development purposes because **using it exposes the tenant key secret in your client-side code bundle.** This must be replaced with an implementation of [ITokenProvider](https://fluidframework.com/docs/apis/azure-client/itokenprovider-interface/) that fetches the token from your own backend service that is responsible for signing it with the tenant key. An example implementation is [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider-class). For more information, see [How to: Write a TokenProvider with an Azure Function](../how-tos/azure-function-token-provider.md). +> The `InsecureTokenProvider` should only be used for development purposes because **using it exposes the tenant key secret in your client-side code bundle.** This must be replaced with an implementation of [ITokenProvider](https://fluidframework.com/docs/apis/azure-client/itokenprovider-interface/) that fetches the token from your own backend service that is responsible for signing it with the tenant key. An example implementation is [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider-class). For more information, see [How to: Write a TokenProvider with an Azure Function](../how-tos/azure-function-token-provider.md). Note that the `id` and `name` fields are arbitrary. ```javascript+const user = { id: "userId", name: "userName" }; + const config = { tenantId: "myTenantId",- tokenProvider: new InsecureTokenProvider("myTenantKey", { id: "userId" }), + tokenProvider: new InsecureTokenProvider("myTenantKey", user), endpoint: "https://myServiceEndpointUrl", type: "remote", }; |
azure-fluid-relay | Quickstart Dice Roll | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/quickstarts/quickstart-dice-roll.md | import { InsecureTokenProvider } from "@fluidframework/test-client-utils"; import { AzureClient } from "@fluidframework/azure-client"; ``` To configure the Azure client, replace the local connection `serviceConfig` object in `app.js` with your Azure Fluid Relay-service configuration values. These values can be found in the "Access Key" section of the Fluid Relay resource in the Azure portal. Your `serviceConfig` object should look like this with the values replaced. (For information about how to find these values, see [How to: Provision an Azure Fluid Relay service](../how-tos/provision-fluid-azure-portal.md).) +service configuration values. These values can be found in the "Access Key" section of the Fluid Relay resource in the Azure portal. Your `serviceConfig` object should look like this with the values replaced. (For information about how to find these values, see [How to: Provision an Azure Fluid Relay service](../how-tos/provision-fluid-azure-portal.md).) Note that the `id` and `name` fields are arbitrary. ```javascript+const user = { id: "userId", name: "userName" }; + const serviceConfig = { connection: { tenantId: "MY_TENANT_ID", // REPLACE WITH YOUR TENANT ID- tokenProvider: new InsecureTokenProvider("" /* REPLACE WITH YOUR PRIMARY KEY */, { id: "userId" }), + tokenProvider: new InsecureTokenProvider("" /* REPLACE WITH YOUR PRIMARY KEY */, user), endpoint: "https://myServiceEndpointUrl", // REPLACE WITH YOUR SERVICE ENDPOINT type: "remote", } |
azure-functions | Quickstart Netherite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-netherite.md | The snippet above is just a *minimal* configuration. Later, you may want to cons Your app is now ready for local development: You can start the Function app to test it. One way to do this is to run `func host start` in your application's root directory and execute a simple orchestrator function.

-While the function app is running, Netherite will publish load information about its active partitions to an Azure Storage table named "DurableTaskPartitions". You can use [Azure Storage Explorer](/articles/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows) to check that it's working as expected. If Netherite is running correctly, the table won't be empty; see the example below.
+While the function app is running, Netherite will publish load information about its active partitions to an Azure Storage table named "DurableTaskPartitions". You can use [Azure Storage Explorer](/azure/vs-azure-tools-storage-manage-with-storage-explorer) to check that it's working as expected. If Netherite is running correctly, the table won't be empty; see the example below.

 While the function app is running

## Run your app on Azure

-You need to create an Azure Functions app on Azure. To do this, follow the instructions in the **Create a function app** section of [these instructions](/articles/azure-functions/functions-create-function-app-portal.md#create-a-function-app-a-function).
+You need to create an Azure Functions app on Azure. To do this, follow the instructions in the **Create a function app** section of [these instructions](/azure/azure-functions/functions-create-function-app-portal#create-a-function-app-a-function).

### Set up Event Hubs

You will need to set up an Event Hubs namespace to run Netherite on Azure. You c

#### Create an Event Hubs namespace

-Follow [these steps](/articles/event-hubs/event-hubs-create#create-an-event-hubs-namespace) to create an Event Hubs namespace on the Azure portal. When creating the namespace, you may be prompted to:
+Follow [these steps](/azure/event-hubs/event-hubs-create#create-an-event-hubs-namespace) to create an Event Hubs namespace on the Azure portal. When creating the namespace, you may be prompted to:

1. Choose a *resource group*: Use the same resource group as the Function app.
2. Choose a *plan* and provision *throughput units*. Select the defaults; these settings can be changed later.

You can now deploy your code to the cloud and run your tests or workload on it.

> [!NOTE]
> For guidance on deploying your project to Azure, review the deployment instructions in the article for your programming language of choice in the [prerequisites section](#prerequisites).

-For more information about the Netherite architecture, configuration, and workload behavior, including performance benchmarks, we recommend you take a look at the [Netherite documentation](https://microsoft.github.io/durabletask-netherite/#/).
+For more information about the Netherite architecture, configuration, and workload behavior, including performance benchmarks, we recommend you take a look at the [Netherite documentation](https://microsoft.github.io/durabletask-netherite/#/). |
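As an alternative to the portal steps above for creating the Event Hubs namespace, the same can be done with the CLI. A minimal sketch with placeholder names, using the Standard SKU defaults the quickstart suggests:

```azurecli
# Create an Event Hubs namespace in the same resource group as the function app (placeholder names)
az eventhubs namespace create \
  --name <namespace-name> \
  --resource-group <function-app-resource-group> \
  --location <region> \
  --sku Standard
```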
azure-government | Documentation Government Csp List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md | Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[AccountabilIT](https://accountabilit.com)| |[ACP Technologies](https://acp.us.com)| |[ActioNet](https://www.actionet.com/)|-|[AG Grace Inc](https://aggrace.com/)| |[ADNET Technologies](https://thinkadnet.com/)| |[Adoxio Business Solutions Limited](https://www.adoxio.com)| |[Advisicon, Inc](https://advisicon.com/)|+|[Advizex Technologies](https://advizex.com/)| |[Aeon Nexus Corp.](https://www.aeonnexus.com/)| |[Affigent](http://www.affigent.com/)|+|[AG Grace Inc](https://aggrace.com/)| |[Agile Defense Inc](https://agile-defense.com/)| |[Agile IT](https://www.agileit.com/)| |[Airnet Group](https://www.airnetgroup.com/)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Apps4Rent](https://www.apps4rent.com)| |[Apptus](https://apttus.com)| |[ArcherPoint, Inc.](https://www.archerpoint.com)|+|[Arctic IT](https://arcticit.com/)| |[Ardalyst Federal LLC](https://ardalyst.com)| |[ArdentMC](https://www.ardentmc.com)| |[Army of Quants](https://www.armyofquants.com/)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Bridge Partners LLC](https://www.bridgepartnersllc.com)| |[C2 Technology Solutions](https://c2techsol.com/)| |[CACI Inc - Federal](https://www.caci.com/)|+|[Caloudi Corporation](https://www.caloudi.com/)| |[Cambria Solutions, Inc.](https://www.cambriasolutions.com/)| |[Capgemini Government Solutions LLC](https://www.capgemini.com/us-en/service/capgemini-government-solutions/)| |[CAPSYS Technologies, LLC](https://www.capsystech.com/)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[CB5 Solutions](https://www.cbfive.com/)| |[cBEYONData](https://cbeyondata.com/)| |[CBTS](https://www.cbts.com/)|+|[CDI LLC](https://www.cdillc.com/)| |[CDO Technologies Inc.](https://www.cdotech.com/contact/)| |[CDW-G, LLC](https://www.cdwg.com)| |[Centurylink](https://www.centurylink.com/)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[CloudFit Software, LLC](https://www.cloudfitsoftware.com/)| |[Cloud Navigator, Inc - formerly ISC](https://www.cloudnav.com)| |[CNSS - Cherokee Nation System Solutions LLC](https://cherokee-federal.com/about/cherokee-nation-system-solutions)|+|[Cobalt](https://www.cobalt.net/)| |[CodeLynx, LLC](http://www.codelynx.com/)| |[Columbus US, Inc.](https://www.columbusglobal.com)| |[Competitive Innovations, LLC](https://www.cillc.com)|+|[CompuNet Inc.](https://compunet.biz/)| |[Computer Solutions Inc.](http://cs-inc.co/)| |[Computex Technology Solutions](http://www.computex-inc.com/)| |[Communication Square LLC](https://www.communicationsquare.com)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Corporate Technologies LLC](https://www.gocorptech.com/)| |[Covenant Global](https://covenant.global/)| |[Covenant Technology Solutions Inc.](https://covenant-tech.net/)|+|[Core BTS](https://corebts.com/)| |[Crayon Software Experts LLC](https://www.crayon.com/)| |[Cre8tive Technology Design](https://www.ctnd.com/)| |[Crowe Horwath LLP](https://www.crowe.com/)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Dell Federal Services](https://www.dellemc.com/en-us/industry/federal/federal-government-it.htm#)| |[Dell Marketing 
LP](https://www.dell.com/)|
|[Delphi Technology Solutions](https://delphi-ts.com/)|
+|[Derek Coleman & Associates Corporation](https://www.dcassociatesgroup.com/index.html)|
|[Developing Today LLC](https://www.developingtoday.net/)|
|[DevHawk, LLC](https://www.devhawk.io)|
|Diamond Capture Associates LLC|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[GovPlace](https://www.govplace.com/)|
|[Gov4Miles](https://www.milestechnologies.com)|
|Gravity Pro Consulting|
-|[Green House Data](https://www.greenhousedata.com/)|
|[GreenPages Technology Solutions](https://www.greenpages.com)|
|[GRS Technology Solutions](https://www.grstechnologysolutions.com)|
|[Hanu Software Solutions Inc.](https://www.hanusoftware.com/hanu/#contact)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[Logicalis, Inc.](https://www.us.logicalis.com/)|
|[Lucidius Group LLC](http://www.lucidiusgrp.com)|
|[Lumen](https://www.lumen.com/)|
+|[Lunavi](https://www.lunavi.com/)|
|[M2 Technology, Inc.](http://www.m2ti.com/)|
|[Magenium Solutions, LLC](https://www.magenium.com)|
|[Mainstay Technologies](https://www.mstech.com)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[Optuminsight Inc.](https://www.optum.com)|
|[Orion Communications, Inc.](https://www.orioncom.com)|
|[Outlook Insight, LLC](http://outlookinsight.com/)|
+|[Overview Technology Solutions Inc.](https://overviewts.com/)|
|[PA-Group](https://pa-group.us/)|
|[Palecek Consulting Group](https://www.pcgit.net)|
|[Pangea Group Inc.](http://www.pangea-group.com)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[Perspecta](https://perspecta.com/)|
|[Phacil (By Light)](https://www.bylight.com/phacil/)|
|[Pharicode LLC](https://pharicode.com)|
+|Philistin & Heller Group, Inc.|
|[Picis Envision](https://www.picis.com/en/)|
|[Pinao Consulting LLC](https://www.pcg-msp.com)|
|[Pitech Solutions Inc](https://www.pitechsol.com/)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|Remote Support Solutions Corp DBA RemoteWorks|
|[Resource Metrix](https://www.rmtrx.com)|
|[Revenue Solutions, Inc](https://www.revenuesolutionsinc.com)|
+|[Ridge IT](https://www.ridgeit.com/)|
|[RMON Networks Inc.](https://rmonnetworks.com/)|
|[rmsource, Inc.](https://www.rmsource.com)|
|[RoboTech Science, Inc. (Cyberscend)](https://cyberscend.com)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[SWC Technology Partners (BDO USA)](https://www.bdo.com/)|
|[Sybatech, Inc. 
(Codepal Toolkit)](https://www.codepaltoolkit.com)|
|[SyCom Technologies](https://www.sycomtech.com)|
+|[Syndo LLC](https://www.syndo.llc/)|
|[Synergy Technical, LLC](https://www.synergy-technical.com/)|
|[Synoptek LLC](https://synoptek.com/)|
|[Systems Engineering Inc](https://www.systemsengineering.com)|
|[Systems Solutions Inc](https://www.ssi-net.com/)|
|[Syvantis Technologies, Inc.](https://www.syvantis.com)|
|[Taborda Solutions](https://tabordasolutions.com)|
+|[TAU SIX LLC](https://www.tau-six.com/)|
|[Techaxia LLC](https://www.techaxia.com)|
|[TechFlow](https://www.techflow.com)|
|[TechHouse GCC](https://www.tech-house.com)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[Trusted Tech Team](https://www.trustedtechteam.com)|
|[TSAChoice Inc.](https://www.tsachoice.com)|
|[Turnkey Technologies, Inc.](https://www.turnkeytec.com)|
+|[Tyto Athene LLC](https://gotyto.com/)|
|[U2Cloud LLC](https://www.u2cloud.com)|
|[UDRI - SSG](https://udayton.edu/udri/_resources/docs/ssg_v8.pdf)|
|[Unisys Corp / Blue Bell](https://www.unisys.com)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[Xgility](https://www.xgility.com)|
|[Xtivia Inc.](https://www.xtivia.com)|
|[ZL Technologies Inc.](https://www.zlti.com/)|
+|[Zolon Tech](https://www.zolontech.com/)|
|[Zones Inc](https://www.zones.com/site/home/index.html)|
|[ZR Systems Group LLC](https://zrsystems.com)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[American Technology Services LLC](https://networkats.com)|
|[Applied Information Sciences](https://www.appliedis.com)|
|[Applied Insight LLC](https://www.applied-insight.com)|
+|[Applied Research Solutions](https://www.appliedres.com)|
|[Arctic Information Technology, Inc.](https://arcticit.com)|
|[Booz Allen Hamilton](https://www.boozallen.com/)|
|[C3 Integrated Solutions, Inc.](https://www.c3isit.com)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[CGI Federal Inc.](https://www.cgi.com/us/en-us/federal)|
|[Cloud Navigator, Inc - formerly ISC](https://cloudnav.com)|
|[Conquest Cyber](https://conquestcyber.com/)|
+|[Coretek](https://www.coretek.com/)|
|[CyberSheath](https://cybersheath.com)|
|[Daymark Solutions, Inc.](https://www.daymarksi.com/)|
|[DLT](https://www.dlt.com/)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[Johnson Technology Systems Inc](https://www.jtsusa.com/)|
|[KAMIND IT, Inc.](https://www.kamind.com/)|
|[KTL Solutions, Inc.](https://www.ktlsolutions.com)|
+|[Leidos](https://www.leidos.com/)|
|[LiftOff, LLC](https://www.liftoffonline.com)|
|[ManTech](https://www.mantech.com/)|
|[Nimbus Logic, LLC](https://www.nimbus-logic.com/)|

Below you can find a list of all the authorized Cloud Solution Providers (CSPs),

|[R3, LLC](https://www.r3-it.com/)|
|[Red River](https://www.redriver.com)|
|[SAIC](https://www.saic.com)|
+|[SentinelBlue LLC](https://www.sentinelblue.com/)|
|[Smartronix](https://www.smartronix.com)|
|[Strategic Communications](https://yourstrategic.com/)|
|[Summit 7 Systems, Inc.](https://www.summit7.us/)| |
azure-monitor | Legacy Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/legacy-pricing.md | - Title: Application Insights legacy enterprise (per node) pricing tier -description: Describes the legacy pricing tier for Application Insights. - Previously updated : 02/18/2022--- -# Application Insights legacy enterprise (per node) pricing tier -For early adopters of Azure Application Insights, there are still two possible pricing tiers: Basic and Enterprise. The Basic pricing tier is the same as described above and is the default tier. It includes all Enterprise tier features, at no extra cost. The Basic tier bills primarily on the volume of data that's ingested. --These legacy pricing tiers have been renamed. The Enterprise pricing tier is now called **Per Node** and the Basic pricing tier is now called **Per GB**. These new names are used below and in the Azure portal. --The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you're charged for data ingested above the included allowance. If you're using Operations Management Suite, you should choose the Per Node tier. In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](..//usage-estimated-costs.md). --For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/application-insights/). --## Understanding billed usage on the legacy Enterprise (Per Node) tier --As described below in more detail, the legacy Enterprise (Per Node) tier combines usage from across all Application Insights resources in a subscription to calculate the number of nodes and the data overage. Due to this combination process, **usage for all Application Insights resources in a subscription are reported against just one of the resources**. This makes reconciling your billed usage with the usage you observe for each Application Insights resource complicated. --> [!WARNING] -> Because of the complexity of tracking and understanding usage of Application Insights resources in the legacy Enterprise (Per Node) tier we strongly recommend using the current Pay-As-You-Go pricing tier. --## Per Node tier and Operations Management Suite subscription entitlements --Customers who purchase Operations Management Suite E1 and E2 can get Application Insights Per Node as an supplemental component at no extra cost as [previously announced](/archive/blogs/msoms/azure-application-insights-enterprise-as-part-of-operations-management-suite-subscription). Specifically, each unit of Operations Management Suite E1 and E2 includes an entitlement to one node of the Application Insights Per Node tier. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost. The tier is described in more detailed later in the article. --Because this tier is applicable only to customers with an Operations Management Suite subscription, customers who don't have an Operations Management Suite subscription don't see an option to select this tier. 
--> [!NOTE] -> To ensure that you get this entitlement, your Application Insights resources must be in the Per Node pricing tier. This entitlement applies only as nodes. Application Insights resources in the Per GB tier don't realize any benefit. -> This entitlement isn't visible in the estimated costs shown in the **Usage and estimated cost** pane. Also, if you move a subscription to the new Azure monitoring pricing model in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription. --## How the Per Node tier works --* You pay for each node that sends telemetry for any apps in the Per Node tier. - * A *node* is a physical or virtual server machine or a platform-as-a-service role instance that hosts your app. - * Development machines, client browsers, and mobile devices don't count as nodes. - * If your app has several components that send telemetry, such as a web service and a back-end worker, the components are counted separately. - * [Live Metrics Stream](../app/live-stream.md) data isn't counted for pricing purposes. In a subscription, your charges are per node, not per app. If you have five nodes that send telemetry for 12 apps, the charge is for five nodes. -* Although charges are quoted per month, you're charged only for any hour in which a node sends telemetry from an app. The hourly charge is the quoted monthly charge divided by 744 (the number of hours in a 31-day month). -* A data volume allocation of 200 MB per day is given for each node that's detected (with hourly granularity). Unused data allocation isn't carried over from one day to the next. - * If you choose the Per Node pricing tier, each subscription gets a daily allowance of data based on the number of nodes that send telemetry to the Application Insights resources in that subscription. So, if you have five nodes that send data all day, you'll have a pooled allowance of 1 GB applied to all Application Insights resources in that subscription. It doesn't matter if certain nodes send more data than other nodes because the included data is shared across all nodes. If on a given day, the Application Insights resources receive more data than is included in the daily data allocation for this subscription, the per-GB overage data charges apply. - * The daily data allowance is calculated as the number of hours in the day (using UTC) that each node sends telemetry divided by 24 multiplied by 200 MB. So, if you have four nodes that send telemetry during 15 of the 24 hours in the day, the included data for that day would be ((4 × 15) / 24) × 200 MB = 500 MB. At the price of 2.30 USD per GB for data overage, the charge would be 1.15 USD if the nodes send 1 GB of data that day. - * The Per Node tier daily allowance isn't shared with applications for which you have chosen the Per GB tier. Unused allowance isn't carried over from day-to-day. 
--## Examples of how to determine distinct node count --| Scenario | Total daily node count | -|:|:-:| -| 1 application using 3 Azure App Service instances and 1 virtual server | 4 | -| 3 applications running on 2 VMs; the Application Insights resources for these applications are in the same subscription and in the Per Node tier | 2 | -| 4 applications whose Applications Insights resources are in the same subscription; each application running 2 instances during 16 off-peak hours, and 4 instances during 8 peak hours | 13.33 | -| Cloud services with 1 Worker Role and 1 Web Role, each running 2 instances | 4 | -| A 5-node Azure Service Fabric cluster running 50 microservices; each microservice running 3 instances | 5| --* The precise node counting depends on which Application Insights SDK your application is using. - * In SDK versions 2.2 and later, both the Application Insights [Core SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) and the [Web SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) report each application host as a node. Examples are the computer name for physical server and VM hosts or the instance name for cloud services. The only exception is an application that uses only the [.NET Core](https://dotnet.github.io/) and the Application Insights Core SDK. In that case, only one node is reported for all hosts because the host name isn't available. - * For earlier versions of the SDK, the [Web SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) behaves like the newer SDK versions, but the [Core SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) reports only one node, regardless of the number of application hosts. - * If your application uses the SDK to set **roleInstance** to a custom value, by default, that same value is used to determine node count. - * If you're using a new SDK version with an app that runs from client machines or mobile devices, the node count might return a number that's large (because of the large number of client machines or mobile devices). ------## Next steps |
azure-resource-manager | Delete Resource Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md | az resource delete \ To delete a resource group, you need access to the delete action for the **Microsoft.Resources/subscriptions/resourceGroups** resource.

> [!IMPORTANT]-> The only permission required to delete a resource group is permission to the delete action for deleting resource groups. You do **not** need permission to delete individual resources within that resource group. Addtionally, delete actions that are specified in **notActions** for a roleAssignment are superseded by the resource group delete action. This is consistent with the scope heirarchy in the Azure role-based access control model.
+> The only permission required to delete a resource group is permission to the delete action for deleting resource groups. You do **not** need permission to delete individual resources within that resource group. Additionally, delete actions that are specified in **notActions** for a roleAssignment are superseded by the resource group delete action. This is consistent with the scope hierarchy in the Azure role-based access control model.

For a list of operations, see [Azure resource provider operations](../../role-based-access-control/resource-provider-operations.md). For a list of built-in roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). |
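Given the permission model above, the deletion itself is a single call. A minimal sketch with a placeholder name; `--yes` skips the confirmation prompt and `--no-wait` returns immediately while deletion continues in the background:

```azurecli
# Delete a resource group and everything in it (name is a placeholder)
az group delete --name <resource-group-name> --yes --no-wait
```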
azure-signalr | Signalr Howto Authorize Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-application.md | Title: Authorize request to SignalR resources with Azure AD from Azure applicati description: This article provides information about authorizing request to SignalR resources with Azure AD from Azure applications Previously updated : 07/18/2022 Last updated : 02/03/2023 ms.devlang: csharp services.AddSignalR().AddAzureSignalR(option => ### Azure Functions SignalR bindings -> [!WARNING] -> SignalR trigger binding does not support identity-based connection yet and connection strings are still necessary. - Azure Functions SignalR bindings use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) on portal or [`local.settings.json`](../azure-functions/functions-develop-local.md#local-settings-file) at local to configure Azure application identities to access your SignalR resources. Firstly, you need to specify the service URI of the SignalR Service, whose key is `serviceUri` starting with a **connection name prefix** (defaults to `AzureSignalRConnectionString`) and a separator (`__` on Azure portal and `:` in the local.settings.json file). The connection name can be customized with the binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). Continue reading to find the sample. On Azure portal, add settings as follows: See the following related articles: - [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md)-- [Authorize request to SignalR resources with Azure AD from managed identities](signalr-howto-authorize-managed-identity.md)+- [Authorize request to SignalR resources with Azure AD from managed identities](signalr-howto-authorize-managed-identity.md) |
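The `serviceUri` application setting described above (the connection name prefix plus the `__` separator on Azure) can also be added with the CLI. A minimal sketch with placeholder names:

```azurecli
# Configure identity-based SignalR bindings for a function app (placeholder names)
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings "AzureSignalRConnectionString__serviceUri=https://<signalr-name>.service.signalr.net"
```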
backup | Guidance Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/guidance-best-practices.md | While scheduling your backup policy, consider the following points:

### Retention considerations

-* Short-term retention can be "minutes" or "daily". Retention for "Weekly", "monthly" or "yearly" backup points is referred to as Long-term retention.
+* Short-term retention can be "daily". Retention for "weekly", "monthly", or "yearly" backup points is referred to as long-term retention.

* Long-term retention: |
bastion | Bastion Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md | Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtua |Benefit |Description| |--|--| |RDP and SSH through the Azure portal|You can get to the RDP and SSH session directly in the Azure portal using a single-click seamless experience.|-|Remote Session over TLS and firewall traversal for RDP/SSH|Azure Bastion uses an HTML5 based web client that is automatically streamed to your local device. Your RDP/SSH session is over TLS on port 443. This enables the traffic to traverse firewalls more securely.| +|Remote Session over TLS and firewall traversal for RDP/SSH|Azure Bastion uses an HTML5 based web client that is automatically streamed to your local device. Your RDP/SSH session is over TLS on port 443. This enables the traffic to traverse firewalls more securely. Bastion supports TLS 1.2 and above. Older TLS versions are not supported.| |No Public IP address required on the Azure VM| Azure Bastion opens the RDP/SSH connection to your Azure VM by using the private IP address on your VM. You don't need a public IP address on your virtual machine.| |No hassle of managing Network Security Groups (NSGs)| You don't need to apply any NSGs to the Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only. This removes the hassle of managing NSGs each time you need to securely connect to your virtual machines. For more information about NSGs, see [Network Security Groups](../virtual-network/network-security-groups-overview.md#security-rules).| |No need to manage a separate bastion host on a VM |Azure Bastion is a fully managed platform PaaS service from Azure that is hardened internally to provide you secure RDP/SSH connectivity.| |
bastion | Vm Upload Download Native | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-upload-download-native.md | The steps in this section apply when connecting to a target VM from a Windows lo ## <a name="tunnel-command"></a>Upload files - SSH and RDP The steps in this section apply to native clients other than Windows, as well as Windows native clients that want to connect over SSH to upload files.-This section helps you upload files from your local computer to your target VM over SSH or RDP using the **az network bastion tunnel** command. This command doesn't support file download from the target VM to your local computer. To learn more about the tunnel command and how to connect, see [Connect to a VM using a native client](connect-native-client-windows.md). +This section helps you upload files from your local computer to your target VM over SSH or RDP using the **az network bastion tunnel** command. To learn more about the tunnel command and how to connect, see [Connect to a VM using a native client](connect-native-client-windows.md). > [!NOTE] > This command can be used to upload files from your local computer to the target VM. File download is not supported. |
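To illustrate the `az network bastion tunnel` workflow above, a minimal sketch that opens a local tunnel to a VM's SSH port; all names, ports, and paths are placeholders:

```azurecli
# Open a tunnel from local port 2222 to the VM's SSH port (leave this running)
az network bastion tunnel \
  --name <bastion-name> \
  --resource-group <resource-group> \
  --target-resource-id <vm-resource-id> \
  --resource-port 22 \
  --port 2222
```

With the tunnel open, a second terminal can upload a file through it, for example with `scp -P 2222 <local-file> <vm-user>@127.0.0.1:~/`.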
center-sap-solutions | Deploy S4hana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/deploy-s4hana.md | Title: Deploy S/4HANA infrastructure (preview) description: Learn how to deploy S/4HANA infrastructure with Azure Center for SAP solutions through the Azure portal. You can deploy High Availability (HA), non-HA, and single-server configurations.-++ Previously updated : 10/19/2022 Last updated : 02/03/2023 #Customer intent: As a developer, I want to deploy S/4HANA infrastructure using Azure Center for SAP solutions so that I can manage SAP workloads in the Azure portal. There are three deployment options that you can select for your infrastructure, 1. For **SAP Transport Options**, you can choose to **Create a new SAP transport Directory**, **Use an existing SAP transport Directory**, or completely skip the creation of a transport directory by choosing the **Don't include SAP transport directory** option. Currently, only NFS on AFS storage account fileshares are supported.

 - 1. If you choose to **Create a new SAP transport Directory**, this will create and mount a new transport fileshare on the SID. By Default, this option will create an NFS on AFS storage account and a transport fileshare in the resource group where SAP system wil be deployed. However, you can choose to create this storage account in a different resource group by providing the resource group name in **Transport Resource Group**. You can also provide a custom name for the storage account to be created under **Storage account name** section. Leaving the **Storage account name** will create the storage account with service default name **""SIDname""nfs""random characters""** in the chosen transport resource group. Creating a new transport directory will create a ZRS based replication for zonal deployments and LRS based replication for non-zonal deployments. If your region doesnt support ZRS replication deploying a zonal VIS will lead to a failure. In such cases, you can deploy a transport fileshare outside ACSS with ZRS replication and then create a zonal VIS where you select **Use an existing SAP transport Directory** to mount the pre-created fileshare.
 + 1. If you choose to **Create a new SAP transport Directory**, this option creates and mounts a new transport fileshare on the SID. By default, this option creates an NFS on AFS storage account and a transport fileshare in the resource group where the SAP system will be deployed. However, you can choose to create this storage account in a different resource group by providing the resource group name in **Transport Resource Group**. You can also provide a custom name for the storage account to be created under the **Storage account name** section. Leaving **Storage account name** blank creates the storage account with the service default name **<SID name>nfs<random characters>** in the chosen transport resource group. Creating a new transport directory uses ZRS-based replication for zonal deployments and LRS-based replication for non-zonal deployments. If your region doesn't support ZRS replication, deploying a zonal VIS fails. In such cases, you can deploy a transport fileshare outside ACSS with ZRS replication, and then create a zonal VIS where you select **Use an existing SAP transport Directory** to mount the pre-created fileshare.

 - 1. If you choose to **Use an existing SAP transport Directory**, select the pre - existing NFS fileshare under **File share name** option. The existing transport fileshare will be only mounted on this SID. The selected fileshare shall be in the same region as that of SAP system being created . Currently, file shares existing in a different region can not be selected. Provide the associated privated endpoint of the storage account where the selected fileshare exists under **Private Endpoint** option.
 + 1. If you choose to **Use an existing SAP transport Directory**, select the pre-existing NFS fileshare under the **File share name** option. The existing transport fileshare will only be mounted on this SID. The selected fileshare must be in the same region as the SAP system being created. Currently, file shares that exist in a different region can't be selected. Provide the associated private endpoint of the storage account where the selected fileshare exists under the **Private Endpoint** option.

 - 1. You can skip the creation of transport file share by selecting **Dont include SAP transport directory** option . The transport fileshare will neither be created or mounted for this SID.
 + 1. You can skip the creation of the transport fileshare by selecting the **Don't include SAP transport directory** option. The transport fileshare will neither be created nor mounted for this SID.

-1. Under **Configuration Details**, enter the FQDN for you SAP System .
+1. Under **Configuration Details**, enter the FQDN for your SAP system.

 1. For **SAP FQDN**, provide only the domain name for your system, such as "sap.contoso.com" |
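The ZRS fallback mentioned above (creating the transport fileshare outside ACSS, then mounting it into a zonal VIS as an existing transport directory) could look roughly like the following Azure CLI sketch. This isn't the service's own provisioning logic: the names are placeholders, NFS shares require a Premium FileStorage account with secure transfer disabled, and the private endpoint that the VIS expects is omitted here.

```azurecli
# Create a Premium FileStorage account with ZRS replication (NFS requires secure transfer off).
az storage account create \
    --name "saptranszrs" \
    --resource-group "MyResourceGroup" \
    --location "westeurope" \
    --sku Premium_ZRS \
    --kind FileStorage \
    --https-only false

# Create the NFS fileshare to use as the existing SAP transport directory.
az storage share-rm create \
    --resource-group "MyResourceGroup" \
    --storage-account "saptranszrs" \
    --name "saptrans" \
    --enabled-protocols NFS \
    --quota 128
```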
center-sap-solutions | Get Quality Checks Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/get-quality-checks-insights.md | Title: Get quality checks and insights for a Virtual Instance for SAP solutions (preview) description: Learn how to get quality checks and insights for a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.-++ Last updated 10/19/2022 |
center-sap-solutions | Get Sap Installation Media | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/get-sap-installation-media.md | + + Title: Get SAP installation media (preview) +description: Learn how to download the necessary SAP media for installing the SAP software and upload it for use with Azure Center for SAP solutions. +++ Last updated : 02/03/2023+++#Customer intent: As a developer, I want to download the necessary SAP media for installing the SAP software and upload it for use with Azure Center for SAP solutions. +++# Get SAP installation media (preview) +++After you've [created infrastructure for your new SAP system using *Azure Center for SAP solutions*](deploy-s4hana.md), you need to install the SAP software on your SAP system. However, before you can do this installation, you need to get and upload the SAP installation media for use with Azure Center for SAP solutions. ++In this how-to guide, you'll learn how to get the SAP software installation media through different methods. You'll also learn how to upload the SAP media to an Azure Storage account to prepare for installation. The recommended method is to [run a pre-installation script to automate this upload process](#scripted-upload-method); however, you can also [manually upload the components](#manual-upload-method). ++## Prerequisites ++- An Azure subscription. +- An Azure account with **Contributor** role access to the subscriptions and resource groups in which the Virtual Instance for SAP solutions exists. +- A **User-assigned managed identity** with **Storage Blob Data Reader** and **Reader and Data Access** roles on the storage account that has the SAP software. +- A [network set up for your infrastructure deployment](prepare-network.md). +- A deployment of S/4HANA infrastructure. +- The SSH private key for the virtual machines in the SAP system. You generated this key during the infrastructure deployment. +- If you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources. + - For more information, see [Use Azure CLI to create an Azure AD app and configure it to access Media Services API](/azure/media-services/previous/media-services-cli-create-and-configure-aad-app). + - For an example, see the Red Hat documentation for [Creating an Azure Active Directory Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure). + - To avoid frequent password expiry, use the Azure Command-Line Interface (Azure CLI) to create the Service Principal identifier and password instead of the Azure portal. ++## Supported software ++Azure Center for SAP solutions supports the following SAP software versions: S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, and S/4HANA 2021 ISS 00.
++The following operating system (OS) software versions are compatible with these SAP software versions: ++| Publisher | Version | Generation SKU | Patch version name | Supported SAP Software Version | +| --- | --- | --- | --- | --- | +| Red Hat | RHEL-SAP-HA (8.2 HA Pack) | 82sapha-gen2 | 8.2.2021091202 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 | +| Red Hat | RHEL-SAP-HA (8.4 HA Pack) | 84sapha-gen2 | 8.4.2021091202 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 | +| SUSE | sles-sap-15-sp3 | gen2 | 2022.01.26 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 | +| SUSE | sles-sap-12-sp4 | gen2 | 2022.02.01 | S/4HANA 1909 SPS 03 | ++## Required components ++The following components are necessary for the SAP installation. ++- SAP software installation media (part of the `sapbits` container described later in this article) + - All essential SAP packages (*SWPM*, *SAPCAR*, etc.) + - SAP software (for example, *S/4HANA 2021 ISS 00*) +- Supporting software packages for the installation process. (These packages are downloaded automatically and used by Azure Center for SAP solutions during the installation.) + - `pip3` version `pip-21.3.1.tar.gz` + - `wheel` version 0.37.1 + - `jq` version 1.6 + - `ansible` version 2.9.27 + - `netaddr` version 0.8.0 +- The SAP Bill of Materials (BOM), as generated by Azure Center for SAP solutions. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and dependent BOMs (`HANA_2_00_059_v0004ms.yaml`, `HANA_2_00_064_v0001ms.yaml`, `SUM20SP15_latest.yaml`, `SWPM20SP13_latest.yaml`). They provide the following information: + - The full name of the SAP package (`name`) + - The package name with its file extension as downloaded (`archive`) + - The checksum of the package as specified by SAP (`checksum`) + - The shortened filename of the package (`filename`) + - The SAP URL to download the software (`url`) +- Template or INI files, which are stack XML files required to run the SAP packages. ++## Scripted upload method ++To prepare for SAP installation, you can upload the SAP components to your Azure Storage account using a script. This method is recommended. ++### Set up storage account ++Before downloading the SAP software, set up an Azure Storage account to store the components. ++1. [Create an Azure Storage account through the Azure portal](../storage/common/storage-account-create.md). Make sure to create the storage account in the same subscription as your SAP system infrastructure. ++1. Create a container within the Azure Storage account named `sapbits`. ++ 1. On the storage account's sidebar menu, select **Containers** under **Data storage**. ++ 1. Select **+ Container**. ++ 1. On the **New container** pane, for **Name**, enter `sapbits`. ++ 1. Select **Create**. + + 1. Grant the **User-assigned managed identity**, which was used during infrastructure deployment, **Storage Blob Data Reader** and **Reader and Data Access** role access on this storage account. +++### Create virtual machine ++Next, set up a virtual machine (VM) where you will download the SAP components later. ++1. Create an **Ubuntu 20.04** VM in Azure. For more information, see [how to create a Linux VM in the Azure portal](../virtual-machines/linux/quick-create-portal.md). ++1. Sign in to the VM. ++1. Install the Azure CLI on the VM. ++ ```bash + curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash + ``` ++1. [Update the Azure CLI](/cli/azure/update-azure-cli) to version 2.30.0 or higher. +++1. Sign in to Azure. ++ ```azurecli + az login + ``` ++1. Install Ansible 2.9.27 on the VM. ++ ```bash + sudo pip3 install ansible==2.9.27 + ``` + +1. Clone the SAP automation repository from GitHub. ++ ```git bash + git clone https://github.com/Azure/sap-automation.git + ``` ++1. Change the branch to `main`. ++ ```git bash + git checkout main + ``` + +1. Optionally, check that your current branch is `main`. +++ ```git bash + git status + ``` ++### Download SAP media with script ++Next, download the SAP installation media to the VM using a script. ++1. Run the Ansible playbook **playbook_bom_downloader** with your own information. ++ The Ansible command that you run should look like: ++ ```azurecli + ansible-playbook ./sap-automation/deploy/ansible/playbook_bom_downloader.yaml -e "bom_base_name=S41909SPS03_v0011ms" -e "deployer_kv_name=dummy_value" -e "s_user=<username>" -e "s_password=<password>" -e "sapbits_access_key=<storageAccountAccessKey>" -e "sapbits_location_base_path=<containerBasePath>" + ``` ++1. When asked if you have a storage account, enter `Y`. + + 1. For `<username>`, use your SAP username. + + 1. For `<password>`, use your SAP password. + +1. For `<bom_base_name>`, use the SAP version that you want to install: **_S41909SPS03_v0011ms_**, **_S42020SPS03_v0003ms_**, or **_S4HANA_2021_ISS_v0001ms_**. + +1. For `<storageAccountAccessKey>`, use your storage account's access key. To find the storage account's key: ++ 1. Find the storage account in the Azure portal that you created. + + 1. On the storage account's sidebar menu, select **Access keys** under **Security + networking**. + + 1. For **key1**, select **Show key and connection string**. + + 1. Copy the **Key** value. + +1. For `<containerBasePath>`, use the path to your `sapbits` container. To find the container path: ++ 1. Find the storage account that you created in the Azure portal. + + 1. Find the container named `sapbits`. + + 1. On the container's sidebar menu, select **Properties** under **Settings**. + + 1. Copy down the **URL** value. The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`. ++Now you can [install the SAP software](install-software.md) through Azure Center for SAP solutions. ++## Manual upload method ++To prepare for SAP installation, you can upload the SAP components to your Azure Storage account manually. This method isn't recommended; instead, we recommend that you [upload the SAP components using a script](#scripted-upload-method). ++### Set up storage account manually ++First, set up an Azure Storage account for the SAP components: ++> [!NOTE] +> Don't change the folder name structure for any steps in this process. Otherwise, the installation process fails. ++1. Create a new Azure Storage account for storing the software components. ++1. Grant the roles **Storage Blob Data Reader** and **Reader and Data Access** to the user-assigned managed identity, which you used during infrastructure deployment. ++1. Create a container within the storage account. You can choose any container name, such as **sapbits**. ++1. Create a folder within the container, named **sapfiles**. ++1. Go to the **sapfiles** folder. ++1. Create two subfolders named **archives** and **boms**. ++1. In the **boms** folder, create four subfolders with the following names, depending on the SAP version that you're using: ++ 1.
For S/4HANA 1909 SPS 03: ++ 1. **HANA_2_00_059_v0003ms** ++ 1. **S41909SPS03_v0011ms** ++ 1. **SWPM20SP12_latest** ++ 1. **SUM20SP14_latest** ++ 1. For S/4HANA 2020 SPS 03: ++ 1. **HANA_2_00_064_v0001ms** ++ 1. **S42020SPS03_v0003ms** ++ 1. **SWPM20SP12_latest** ++ 1. **SUM20SP14_latest** ++ 1. For S/4HANA 2021 ISS 00: ++ 1. **HANA_2_00_064_v0001ms** ++ 1. **S4HANA_2021_ISS_v0001ms** ++ 1. **SWPM20SP12_latest** ++ 1. **SUM20SP14_latest** ++### Upload SAP media ++Next, upload the SAP software files to the storage account: ++1. Upload the following YAML files to the folders with the same name. Make sure to use the files that correspond to the SAP version that you're using. ++ 1. For S/4HANA 1909 SPS 03: ++ 1. [S41909SPS03_v0011ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml) ++ 1. [HANA_2_00_059_v0004ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/HANA_2_00_059_v0004ms/HANA_2_00_059_v0004ms.yaml) ++ 1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) ++ 1. [SUM20SP15_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) ++ 1. For S/4HANA 2020 SPS 03: ++ 1. [S42020SPS03_v0003ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml) ++ 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) ++ 1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) ++ 1. [SUM20SP15_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) + + 1. For S/4HANA 2021 ISS 00: ++ 1. [S4HANA_2021_ISS_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml) ++ 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) ++ 1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) ++ 1. [SUM20SP15_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) + +1. Depending on your SAP version, go to the folder **S41909SPS03_v0011ms** or **S42020SPS03_v0003ms** or **S4HANA_2021_ISS_v0001ms**. ++1. Create a subfolder named **templates**. ++1. Download the following files, depending on your SAP version. ++ 1. For S/4HANA 1909 SPS 03: + + 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/HANA_2_00_055_v1_install.rsp.j2) + + 1. [S41909SPS03_v0011ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-app-inifile-param.j2) + + 1. 
[S41909SPS03_v0011ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-dbload-inifile-param.j2) + + 1. [S41909SPS03_v0011ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-ers-inifile-param.j2) + + 1. [S41909SPS03_v0011ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-generic-inifile-param.j2) + + 1. [S41909SPS03_v0011ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-pas-inifile-param.j2) + + 1. [S41909SPS03_v0011ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scs-inifile-param.j2) + + 1. [S41909SPS03_v0011ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scsha-inifile-param.j2) + + 1. [S41909SPS03_v0011ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-web-inifile-param.j2) + + 1. For S/4HANA 2020 SPS 03: + + 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_055_v1_install.rsp.j2) + + 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_install.rsp.j2) + + 1. [S42020SPS03_v0003ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-app-inifile-param.j2) + + 1. [S42020SPS03_v0003ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-dbload-inifile-param.j2) + + 1. [S42020SPS03_v0003ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-ers-inifile-param.j2) + + 1. [S42020SPS03_v0003ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-generic-inifile-param.j2) + + 1. [S42020SPS03_v0003ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-pas-inifile-param.j2) + + 1. [S42020SPS03_v0003ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scs-inifile-param.j2) + + 1. [S42020SPS03_v0003ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scsha-inifile-param.j2) + + 1. For S/4HANA 2021 ISS 00: + + 1. 
[HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_055_v1_install.rsp.j2) + + 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_install.rsp.j2) + + 1. [NW_ABAP_ASCS_S4HANA2021.CORE.HDB.AB](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_ASCS_S4HANA2021.CORE.HDB.ABAP_Distributed.params) + + 1. [NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params) + + 1. [NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params) + + 1. [NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params) + + 1. [NW_Users_Create-GENERIC.HDB.PD_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_Users_Create-GENERIC.HDB.PD_Distributed.params) + + 1. [S4HANA_2021_ISS_v0001ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-app-inifile-param.j2) + + 1. [S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2) + + 1. [S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2) + + 1. [S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2) + + 1. [S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2) + + 1. [S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2) + + 1. [S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2) + + 1. [S4HANA_2021_ISS_v0001ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-web-inifile-param.j2) ++1. Upload all the files that you downloaded to the **templates** folder. ++1. 
Go back to the **sapfiles** folder, then go to the **archives** subfolder. ++1. Download all packages that aren't labeled as `download: false` from the main BOM URL. Choose the packages based on your SAP version. You can use the URL mentioned in the BOM to download each package. Make sure to download the exact package versions listed in each BOM. ++ 1. For S/4HANA 1909 SPS 03: ++ 1. [S41909SPS03_v0011ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml) + + 1. [HANA_2_00_059_v0004ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/HANA_2_00_059_v0004ms/HANA_2_00_059_v0004ms.yaml) + + 1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) + + 1. [SUM20SP15_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) + + + 1. For S/4HANA 2020 SPS 03: ++ 1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml) + + 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) + + 1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) + + 1. [SUM20SP15_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) + + 1. For S/4HANA 2021 ISS 00: ++ 1. [S4HANA_2021_ISS_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml) + + 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) + + 1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) + + 1. [SUM20SP15_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) ++1. Repeat the previous step for the main and dependent BOM files. ++1. Upload all the packages that you downloaded to the **archives** folder. Don't rename the files. ++1. Optionally, install other packages that aren't required. ++ 1. Download the package files. ++ 1. Upload the files to the **archives** folder. ++ 1. Open the `S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms` YAML file for the BOM. ++ 1. Edit the information for each optional package to `download:true`. ++ 1. Save and reupload the YAML file. Make sure you only have one YAML file in the subfolder (`S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms`) of the **boms** folder. ++Now you can [install the SAP software](install-software.md) through Azure Center for SAP solutions. ++## Next steps ++- [Install the SAP software](install-software.md) through Azure Center for SAP solutions |
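If you'd rather script the manual uploads above than use the portal, a minimal sketch with the Azure CLI might look like the following, assuming the packages were downloaded to a local ./archives directory and reusing the placeholder storage account values from this article (remember not to rename the files):

```azurecli
# Upload the downloaded SAP packages to sapfiles/archives in the sapbits container.
az storage blob upload-batch \
    --account-name "<your-storage-account>" \
    --account-key "<storageAccountAccessKey>" \
    --destination "sapbits" \
    --destination-path "sapfiles/archives" \
    --source "./archives"
```

The same pattern works for the **boms** and **templates** folders by pointing `--destination-path` and `--source` at the corresponding directories.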
center-sap-solutions | Install Software | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md | Title: Install SAP software (preview) -description: Learn how to install software on your SAP system created using Azure Center for SAP solutions. -+description: Learn how to install SAP software on an SAP system that you created using Azure Center for SAP solutions. You can either install the SAP software with Azure Center for SAP solutions, or install the software outside the service and detect the installed system. ++ Previously updated : 10/19/2022 Last updated : 02/03/2023 #Customer intent: As a developer, I want to install SAP software so that I can use Azure Center for SAP solutions.-In this how-to guide, you'll learn how to upload and install all the required components in your Azure account. You can either [run a pre-installation script to automate the upload process](#option-1-upload-software-components-with-script) or [manually upload the components](#option-2-upload-software-components-manually). Then, you can [run the software installation wizard](#install-software). +In this how-to guide, you'll learn two ways to install the SAP software for your system. Choose whichever method is appropriate for your use case. You can either: ++- [Install the SAP software through Azure Center for SAP solutions directly using the installation wizard](#install-sap-with-azure-center-for-sap-solutions). +- [Install the SAP software outside of Azure Center for SAP solutions, then detect the installed system from the service](#install-sap-through-outside-method). ## Prerequisites +Review the prerequisites for your preferred installation method: [through the Azure Center for SAP solutions installation wizard](#prerequisites-for-wizard-installation) or [through an outside method](#prerequisites-for-outside-installation). ++### Prerequisites for wizard installation + - An Azure subscription.-- An Azure account with **Contributor** role access to the subscriptions and resource groups in which the VIS exists.-- A **User-assigned managed identity** with **Storage Blob Data Reader** and **Reader and Data Access** roles on the Storage Account which has the SAP software. +- An Azure account with **Contributor** role access to the subscriptions and resource groups in which the Virtual Instance for SAP solutions exists. +- A user-assigned managed identity with **Storage Blob Data Reader** and **Reader and Data Access** roles on the storage account that has the SAP software. - A [network set up for your infrastructure deployment](prepare-network.md). - A deployment of S/4HANA infrastructure. - The SSH private key for the virtual machines in the SAP system. You generated this key during the infrastructure deployment.-- If you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources. For more information, see [Use Azure CLI to create an Azure AD app and configure it to access Media Services API](/azure/media-services/previous/media-services-cli-create-and-configure-aad-app).
For an example, see the Red Hat documentation for [Creating an Azure Active Directory Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure).- - To avoid frequent password expiry, use the Azure Command-Line Interface (Azure CLI) to create the Service Principal identifier and password instead of the Azure portal. --## Supported software --Azure Center for SAP solutions supports the following SAP software version: **S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00**. -Following is the operating system (OS) software versions compatibility with SAP Software Version: --| Publisher | Version | Generation SKU | Patch version name | Supported SAP Software Version | -| | - | -- | | | -| Red Hat | RHEL-SAP-HA (8.2 HA Pack) | 82sapha-gen2 | 8.2.2021091202 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 | -| Red Hat | RHEL-SAP-HA (8.4 HA Pack) | 84sapha-gen2 | 8.4.2021091202 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 | -| SUSE | sles-sap-15-sp3 | gen2 | 2022.01.26 | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 | -| SUSE | sles-sap-12-sp4 | gen2 | 2022.02.01 | S/4HANA 1909 SPS 03 | --## Required components --The following components are necessary for the SAP installation. --- SAP software installation media (part of the `sapbits` container described later in this article)- - All essential SAP packages (*SWPM*, *SAPCAR*, etc.) - - SAP software (for example, *S/4HANA 2021 ISS 00*) -- Supporting software packages for the installation process which will be downloaded automatially and used by ACSS during the installation). - - `pip3` version `pip-21.3.1.tar.gz` - - `wheel` version 0.37.1 - - `jq` version 1.6 - - `ansible` version 2.9.27 - - `netaddr` version 0.8.0 -- The SAP Bill of Materials (BOM), as generated by Azure Center for SAP solutions. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and there are dependent BOMs (`HANA_2_00_059_v0004ms.yaml`, `HANA_2_00_064_v0001ms.yaml` `SUM20SP15_latest.yaml`, `SWPM20SP13_latest.yaml`). They provide the following information:- - The full name of the SAP package (`name`) - - The package name with its file extension as downloaded (`archive`) - - The checksum of the package as specified by SAP (`checksum`) - - The shortened filename of the package (`filename`) - - The SAP URL to download the software (`url`) -- Template or INI files, which are stack XML files required to run the SAP packages.--## Option 1: Upload software components with script --You can use the following method to upload the SAP components to your Azure account using scripts. Then, you can [run the software installation wizard](#install-software) to install the SAP software. We recommend using this method. --You also can [upload the components manually](#option-2-upload-software-components-manually) instead. --### Set up storage account --Before you can download the software, set up an Azure Storage account for storing the software. --1. [Create an Azure Storage account through the Azure portal](../storage/common/storage-account-create.md). Make sure to create the storage account in the same subscription as your SAP system infrastructure. --1. 
Create a container within the Azure Storage account named `sapbits`. -- 1. On the storage account's sidebar menu, select **Containers** under **Data storage**. -- 1. Select **+ Container**. -- 1. On the **New container** pane, for **Name**, enter `sapbits`. -- 1. Select **Create**. - - 1. Grant the **User-assigned managed identity**, which was used during infrastructure deployment, **Storage Blob Data Reader** and **Reader and Data Access** role access on this storage account. +- If you are installing an SAP System through Azure Center for SAP solutions, you should have the SAP installation media available in a storage account. For more information, see [how to download the SAP installation media](get-sap-installation-media.md). +- If you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources. For more information, see [Use Azure CLI to create an Azure AD app and configure it to access Media Services API](/azure/media-services/previous/media-services-cli-create-and-configure-aad-app). + - For an example, see the Red Hat documentation for [Creating an Azure Active Directory Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure). + - To avoid frequent password expiry, use the Azure Command-Line Interface (Azure CLI) to create the Service Principal identifier and password instead of the Azure portal. +### Prerequisites for outside installation -### Download SAP media --You can download the SAP installation media required to install the SAP software, using a script as described in this section. --1. Create an Ubuntu 20.04 VM in Azure --1. Sign in to the VM. --1. Install the Azure CLI on the VM. -- ```bash - curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash - ``` --1. [Update the Azure CLI](/cli/azure/update-azure-cli) to version 2.30.0 or higher. ---1. Sign in to Azure: -- ```azurecli - az login - ``` --1. Install Ansible 2.9.27 on the ubuntu VM -- ```bash - sudo pip3 install ansible==2.9.27 - ``` - -1. Clone the SAP automation repository from GitHub. -- ```git bash - git clone https://github.com/Azure/sap-automation.git - ``` --1. Change the branch to main -- ```git bash - git checkout main - ``` - -1. [Optional] : Verify if the current branch is "main" --- ```git bash - git status - ``` --1. Run the Ansible script **playbook_bom_download** with your own information. -- - When asked if you have a storage account, enter `Y`. - - For `<username>`, use your SAP username. - - For `<password>`, use your SAP password. - - For `<bom_base_name>`, use the SAP Version you want to install i.e. **_S41909SPS03_v0011ms_** or **_S42020SPS03_v0003ms_** or **_S4HANA_2021_ISS_v0001ms_** - - For `<storageAccountAccessKey>`, use your storage account's access key. To find the storage account's key: --- 1. Find the storage account in the Azure portal that you created. -- 1. On the storage account's sidebar menu, select **Access keys** under **Security + networking**. -- 1. For **key1**, select **Show key and connection string**. -- 1. Copy the **Key** value. - - - For `<containerBasePath>`, use the path to your `sapbits` container. To find the container path: -- 1. Find the storage account that you created in the Azure portal. -- 1. 
Find the container named `sapbits`. -- 1. On the container's sidebar menu, select **Properties** under **Settings**. + An Azure subscription. +- An Azure account with **Contributor** role access to the subscriptions and resource groups in which the Virtual Instance for SAP solutions exists. +- A user-assigned managed identity that you created during infrastructure deployment with **Contributor** role access on the subscription, or on all resource groups (compute, network, and storage) that the SAP system is a part of. +- Infrastructure for the SAP system that you previously created through Azure Center for SAP solutions. Don't make any changes to this infrastructure. +- An SAP system (and underlying infrastructure resources) that is up and running. +- Optionally, you can add fully installed application servers to the system before detecting the SAP software; then, the SAP system with additional application servers will also be detected. + - If you add additional application servers to this Virtual Instance for SAP solutions after infrastructure deployment, the previously created user-assigned managed identity also needs **Contributor** role access on the subscription or on the resource group under which this new application server exists. + - The number of application virtual machines installed should not be less than the number created during the infrastructure deployment phase in Azure Center for SAP solutions. You can still detect additional application servers. - 1. Copy down the **URL** value. The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`. - The format is `https://<your-storage-account>.blob.core.windows.net/sapbits` - - - - Ansible command to run - +Only the following scenarios are supported for this installation method: +- Infrastructure for S/4HANA was created through Azure Center for SAP solutions. The S/4HANA application was installed outside Azure Center for SAP solutions through a different tool. +- Only an S/4HANA installation done outside Azure Center for SAP solutions can be detected. If you have installed an SAP application other than S/4HANA, the detection will fail. +- If you want a fresh installation of S/4HANA software on the infrastructure deployed by Azure Center for SAP solutions, use the wizard installation option instead. - ```azurecli - ansible-playbook ./sap-automation/deploy/ansible/playbook_bom_downloader.yaml -e "bom_base_name=S41909SPS03_v0011ms" -e "deployer_kv_name=dummy_value" -e "s_user=<username>" -e "s_password=<password>" -e "sapbits_access_key=<storageAccountAccessKey>" -e "sapbits_location_base_path=<containerBasePath>" - ``` ++## Install SAP with Azure Center for SAP solutions --Now, you can [install the SAP software](#install-software) using the installation wizard. --## Option 2: Upload software components manually --You can use the following method to download and upload the SAP components to your Azure storage account manually. Then, you can [run the software installation wizard](#install-software) to install the SAP software. --You also can [run scripts to automate this process](#option-1-upload-software-components-with-script) instead. --1. Create a new Azure storage account for storing the software components. -1. Grant the roles **Storage Blob Data Reader** and **Reader and Data Access** to the user-assigned managed identity, which you used during infrastructure deployment. -1. Create a container within the storage account. You can choose any container name; for example, **sapbits**. -1.
Create a folder within the container, named **sapfiles**. - > [!WARNING] - > Don't change the folder name structure for any steps in this process. Otherwise, the installation process can fail. -1. Go to the **sapfiles** folder. -1. Create two subfolders named **archives** and **boms**. -1. In the **boms** folder, create four subfolders as follows. --- - For S/4HANA 1909 SPS 03, make following folders - 1. **HANA_2_00_059_v0003ms** - 1. **S41909SPS03_v0011ms** - 1. **SWPM20SP12_latest** - 1. **SUM20SP14_latest** - - - - For S/4HANA 2020 SPS 03, make following folders - 1. **HANA_2_00_064_v0001ms** - 1. **S42020SPS03_v0003ms** - 1. **SWPM20SP12_latest** - 1. **SUM20SP14_latest** - - - - For S/4HANA 2021 ISS 00, make following folders - 1. **HANA_2_00_064_v0001ms** - 1. **S4HANA_2021_ISS_v0001ms** - 1. **SWPM20SP12_latest** - 1. **SUM20SP14_latest** - -1. Upload the following YAML files to the folders with the same name. -- - For S/4HANA 1909 SPS 03, - 1. [S41909SPS03_v0011ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml) - 1. [HANA_2_00_059_v0004ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/HANA_2_00_059_v0004ms/HANA_2_00_059_v0004ms.yaml) - 1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) - 1. [SUM20SP15_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) -- - For S/4HANA 2020 SPS 03, - 1. [S42020SPS03_v0003ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml) - 1. [HANA_2_00_064_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) - 1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) - 1. [SUM20SP15_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) - - - For S/4HANA 2021 ISS 00, - 1. [S4HANA_2021_ISS_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml) - 1. [HANA_2_00_064_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) - 1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) - 1. [SUM20SP15_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) - -1. Depending upon the SAP product version you are installing go to **S41909SPS03_v0011ms** or **S42020SPS03_v0003ms** or **S4HANA_2021_ISS_v0001ms** folder and create a subfolder named **templates**. -1. Download the following files. Then, upload all the files to the **templates** folder. - - For S/4HANA 1909 SPS 03, - 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/HANA_2_00_055_v1_install.rsp.j2) - 1. 
[S41909SPS03_v0011ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-app-inifile-param.j2) - 1. [S41909SPS03_v0011ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-dbload-inifile-param.j2) - 1. [S41909SPS03_v0011ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-ers-inifile-param.j2) - 1. [S41909SPS03_v0011ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-generic-inifile-param.j2) - 1. [S41909SPS03_v0011ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-pas-inifile-param.j2) - 1. [S41909SPS03_v0011ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scs-inifile-param.j2) - 1. [S41909SPS03_v0011ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scsha-inifile-param.j2) - 1. [S41909SPS03_v0011ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-web-inifile-param.j2) - - - For S/4HANA 2020 SPS 03, - 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_055_v1_install.rsp.j2) - 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_install.rsp.j2) - 1. [S42020SPS03_v0003ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-app-inifile-param.j2) - 1. [S42020SPS03_v0003ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-dbload-inifile-param.j2) - 1. [S42020SPS03_v0003ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-ers-inifile-param.j2) - 1. [S42020SPS03_v0003ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-generic-inifile-param.j2) - 1. [S42020SPS03_v0003ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-pas-inifile-param.j2) - 1. [S42020SPS03_v0003ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scs-inifile-param.j2) - 1. 
[S42020SPS03_v0003ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scsha-inifile-param.j2) - - - For S/4HANA 2021 ISS 00, - 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_055_v1_install.rsp.j2) - 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_install.rsp.j2) - 1. [NW_ABAP_ASCS_S4HANA2021.CORE.HDB.AB](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_ASCS_S4HANA2021.CORE.HDB.ABAP_Distributed.params) - 1. [NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params) - 1. [NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params) - 1. [NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params) - 1. [NW_Users_Create-GENERIC.HDB.PD_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_Users_Create-GENERIC.HDB.PD_Distributed.params) - 1. [S4HANA_2021_ISS_v0001ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-app-inifile-param.j2) - 1. [S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2) - 1. [S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2) - 1. [S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2) - 1. [S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2) - 1. [S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2) - 1. [S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2) - 1. 
[S4HANA_2021_ISS_v0001ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-web-inifile-param.j2) - -1. Go back to the **sapfiles** folder, then go to the **archives** subfolder. -1. Download all packages that aren't labeled as `download: false` in the main BOM URL shown below. You can use the URL mentioned in the BOM to download each package. Make sure to download the exact package versions listed in each BOM. Repeat this step for the main and dependent BOM files. - - For S/4HANA 1909 SPS 03, - 1. [S41909SPS03_v0011ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml) - 1. [HANA_2_00_059_v0004ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/HANA_2_00_059_v0004ms/HANA_2_00_059_v0004ms.yaml) - 1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) - 1. [SUM20SP15_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) - - - For S/4HANA 2020 SPS 03, - 1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml) - 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) - 1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) - 1. [SUM20SP15_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) - - - For S/4HANA 2021 ISS 00, - 1. [S4HANA_2021_ISS_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml) - 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) - 1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml) - 1. [SUM20SP15_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml) - -1. Upload all the packages that you downloaded to the **archives** folder. Don't rename the files. -1. Optionally, you can install other packages that aren't required. - 1. Download the package files. - 1. Upload the files to the **archives** folder. - 1. Open the `S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms` YAML file for the BOM. - 1. Edit the information for each optional package to `download:true`. - 1. Save the YAML file and reupload the yaml file. There shall be only one yaml file in the subfolder (`S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms`) of the "boms" folder --Now, you can [install the SAP software](#install-software) using the installation wizard. --## Install software --To install the SAP software on Azure, use the Azure Center for SAP solutions installation wizard. +To install the SAP software directly, use the Azure Center for SAP solutions installation wizard. 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Virtual Instance for SAP solutions**. -1. 
Select your Virtual Instance for SAP solutions (VIS) instance.
+1. Select your Virtual Instance for SAP solutions instance.

-1. On the **Overview** page for the VIS resource, select **Install SAP software**.
+1. On the **Overview** page for the Virtual Instance for SAP solutions resource, select **Install SAP software**.

1. In the **Prerequisites** tab of the wizard, review the prerequisites. Then, select **Next**.

To install the SAP software on Azure, use the Azure Center for SAP solutions ins
1. Wait for the installation to complete. The process takes approximately three hours. You can see the progress, along with estimated times for each step, in the wizard.

-1. After the installation completes, sign in with your SAP system credentials. Refer to [this section](manage-virtual-instance.md) to find the SAP system and HANA DB credentials for the newly installed system.
+1. After the installation completes, sign in with your SAP system credentials. To find the SAP system and HANA DB credentials for the newly installed system, see [how to manage a Virtual Instance for SAP solutions](manage-virtual-instance.md).
+
+## Install SAP through an outside method
+
+If you install the SAP software elsewhere, you need to detect the software installation and update your Virtual Instance for SAP solutions metadata.
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Make sure to sign in with an Azure account that has **Contributor** role access to the subscription or resource groups where the SAP system exists.
+
+1. Search for and select **Azure Center for SAP solutions** in the Azure portal's search bar.
+
+1. Select **Virtual Instances for SAP solutions**. Then select the Virtual Instance for SAP solutions resource that you want to detect.
+
+1. On the resource's overview page, select **Confirm already installed software**. Read all the instructions, then select **Confirm**. Extensions are then installed on the ASCS, Application Server, and database virtual machines, and they start discovering SAP metadata.
+
+1. Wait for the Virtual Instance for SAP solutions resource to be detected and populated with the metadata. The process completes after all SAP system components have been detected.
+
+1. Review the Virtual Instance for SAP solutions resource in the Azure portal. The resource page now shows the SAP system resources and information about the system.
+
## Limitations

The following are known limitations and issues.

-### Application Servers
+### Application servers

You can install a maximum of 10 Application Servers, excluding the Primary Application Server.

### SAP package version changes

-1. When SAP changes the version of packages for a component in the BOM, you might encounter problems with the automated installation shell script. It's recommended to download your SAP installation media as soon as possible to avoid issues.
+When SAP changes the version of packages for a component in the BOM, you might encounter problems with the automated installation shell script. It's recommended to download your SAP installation media as soon as possible to avoid issues.

If you encounter this problem, follow these steps:
 - `permissions` to `0755`
 - `url` to the new SAP download URL

-1. Reupload the BOM file(s) in the subfolder (`S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms`) of the "boms" folder
+1. Reupload the BOM file(s) in the subfolder (`S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms`) of the `boms` folder
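As an illustration only, a package entry in the BOM file might look like the following sketch. The exact schema is defined by the BOM files in the sap-automation repository, and the package name, archive, and URL here are hypothetical placeholders; `permissions` and `url` are the fields called out in the steps above.

```yaml
# Hypothetical BOM package entry; replace the placeholder values with the
# details of the package version that SAP currently publishes.
- name: "SWPM20SP13"                # package identifier used by the BOM
  archive: "SWPM20SP13_latest.SAR"  # exact file name; don't rename the file
  download: true                    # include this package in the download
  permissions: "0755"               # file permissions, as noted in the steps above
  url: "https://softwaredownloads.sap.com/file/<new-file-id>"  # new SAP download URL
```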
### Special characters like $ in the S-user password aren't accepted while downloading the BOM

-1. Follow the step by step instructions upto cloning the 'SAP Automation repository from GitHub' in **Download SAP media** section.
+1. Clone the SAP automation repository. For more information, see [how to download the SAP installation media](get-sap-installation-media.md).
+
+ ```bash
+ git clone https://github.com/Azure/sap-automation.git
+ ```

-1. Before running the Ansible playbook set the SPASS environment variable below. Single quotes should be present in the below command
+1. Before running the Ansible playbook, set the `SPASS` environment variable as shown below. The single quotes must be included in the command.

 ```bash
 export SPASS='password_with_special_chars'
 ```
-1. Then run the ansible playbook
+1. Run the Ansible playbook:

-```azurecli
 ansible-playbook ./sap-automation/deploy/ansible/playbook_bom_downloader.yaml -e "bom_base_name=S41909SPS03_v0011ms" -e "deployer_kv_name=dummy_value" -e "s_user=<username>" -e "s_password=$SPASS" -e "sapbits_access_key=<storageAccountAccessKey>" -e "sapbits_location_base_path=<containerBasePath>"-
 ```
+ ```

-- For `<username>`, use your SAP username.
-- For `<bom_base_name>`, use the SAP Version you want to install i.e. **_S41909SPS03_v0011ms_** or **_S42020SPS03_v0003ms_** or **_S4HANA_2021_ISS_v0001ms_**
-- For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the Download SAP media section
-- For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the Download SAP media section.
-- The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`
+ - For `<username>`, use your SAP username.
+ - For `<bom_base_name>`, use the SAP version that you want to install, that is, **_S41909SPS03_v0011ms_**, **_S42020SPS03_v0003ms_**, or **_S4HANA_2021_ISS_v0001ms_**.
+ - For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the Download SAP media section.
+ - For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the Download SAP media section. The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`

-This should resolve the problem and you can proceed with next steps as described in the section.

## Next steps

- [Monitor SAP system from Azure portal](monitor-portal.md)
-- [Manage a VIS](manage-virtual-instance.md)
+- [Manage a Virtual Instance for SAP solutions](manage-virtual-instance.md) |
center-sap-solutions | Manage Virtual Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/manage-virtual-instance.md | Title: Manage a Virtual Instance for SAP solutions (preview) description: Learn how to configure a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.-++ Previously updated : 10/19/2022 Last updated : 02/03/2023 #Customer intent: As a developer, I want to configure my Virtual Instance for SAP solutions resource so that I can find system properties and connect to databases. |
center-sap-solutions | Manage With Azure Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/manage-with-azure-rbac.md | + + Title: Manage Azure Center for SAP solutions resources with Azure RBAC (preview) +description: Use Azure role-based access control (Azure RBAC) to manage access to your SAP workloads within Azure Center for SAP solutions. +++++ Last updated : 02/03/2023++++# Management of Azure Center for SAP solutions resources with Azure RBAC (preview) ++++[Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) enables granular access management for Azure. You can use Azure RBAC to manage Virtual Instance for SAP solutions resources within Azure Center for SAP solutions. For example, you can separate duties within your team and grant only the amount of access that users need to perform their jobs. ++*Users* or *user-assigned managed identities* require minimum roles or permissions to use the different capabilities in Azure Center for SAP solutions. ++There are [Azure built-in roles](../role-based-access-control/built-in-roles.md) for Azure Center for SAP solutions, or you can [create Azure custom roles](../role-based-access-control/custom-roles.md) for more control. Azure Center for SAP solutions provides the following built-in roles to deploy and manage SAP systems on Azure: ++- The **Azure Center for SAP solutions administrator** role has the required permissions for a user to deploy infrastructure, install SAP, and manage SAP systems from Azure Center for SAP solutions. The role allows users to: + - Deploy infrastructure for a new SAP system + - Install SAP software + - Register existing SAP systems as a [Virtual Instance for SAP solutions (VIS)](overview.md#what-is-a-virtual-instance-for-sap-solutions) resource. + - View the health and status of SAP systems. + - Perform operations such as **Start** and **Stop** on the VIS resource. + - Do all possible actions with Azure Center for SAP solutions, including the deletion of the VIS resource. +- The **Azure Center for SAP solutions service role** is intended for use by the user-assigned managed identity. The Azure Center for SAP solutions service uses this identity to deploy and manage SAP systems. This role has permissions to support the deployment and management capabilities in Azure Center for SAP solutions. +- The **Azure Center for SAP solutions reader** role has permissions to view all VIS resources. ++> [!NOTE] +> If you're creating a new user-assigned managed identity when you deploy a new SAP system or register an existing system, the user must also have the **Managed Identity Contributor** role. This role is required to make role assignments to a user-assigned managed identity. ++## Deploy infrastructure for new SAP system ++To deploy infrastructure for a new SAP system, a *user* and *user-assigned managed identity* requires the following role or permissions. 
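As a sketch of how the built-in role might be granted (the role name comes from the list earlier in this article; the principal and scope are placeholders to replace with your own values):

```azurecli
# Illustrative only: assign the Azure Center for SAP solutions administrator role
# to a user at resource group scope.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Azure Center for SAP solutions administrator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```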
++| Built-in roles for *users* |
+| - |
+| **Azure Center for SAP solutions administrator** |
++| Minimum permissions for *users* |
+| - |
+| `Microsoft.Workloads/sapVirtualInstances/write` |
+| `Microsoft.Workloads/Operations/read` |
+| `Microsoft.Workloads/Locations/OperationStatuses/read` |
+| `Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSizingRecommendations/action` |
+| `Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSapSupportedSku/action` |
+| `Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getDiskConfigurations/action` |
+| `Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getAvailabilityZoneDetails/action` |
+| `Microsoft.Resources/subscriptions/resourcegroups/deployments/read` |
+| `Microsoft.Resources/subscriptions/resourcegroups/deployments/write` |
+| `Microsoft.Network/virtualNetworks/read` |
+| `Microsoft.Network/virtualNetworks/subnets/read` |
+| `Microsoft.Network/virtualNetworks/subnets/write` |
+| `Microsoft.Compute/sshPublicKeys/write` |
+| `Microsoft.Compute/sshPublicKeys/read` |
+| `Microsoft.Compute/sshPublicKeys/*/generateKeyPair/action` |
+| `Microsoft.Storage/storageAccounts/read` |
+| `Microsoft.Storage/storageAccounts/blobServices/read` |
+| `Microsoft.Storage/storageAccounts/blobServices/containers/read` |
+| `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` |
+| `Microsoft.Storage/storageAccounts/fileServices/read` |
+| `Microsoft.Storage/storageAccounts/fileServices/shares/read` |
+++| Built-in roles for *user-assigned managed identities* |
+| - |
+| **Azure Center for SAP solutions service role** |
++| Minimum permissions for *user-assigned managed identities* |
+| - |
+| `Microsoft.Compute/disks/read` |
+| `Microsoft.Compute/disks/write` |
+| `Microsoft.Compute/virtualMachines/read` |
+| `Microsoft.Compute/virtualMachines/write` |
+| `Microsoft.Compute/virtualMachines/extensions/read` |
+| `Microsoft.Compute/virtualMachines/extensions/write` |
+| `Microsoft.Compute/virtualMachines/extensions/delete` |
+| `Microsoft.Compute/virtualMachines/instanceView/read` |
+| `Microsoft.Compute/availabilitySets/read` |
+| `Microsoft.Compute/availabilitySets/write` |
+| `Microsoft.Network/loadBalancers/read` |
+| `Microsoft.Network/loadBalancers/write` |
+| `Microsoft.Network/loadBalancers/backendAddressPools/read` |
+| `Microsoft.Network/loadBalancers/backendAddressPools/write` |
+| `Microsoft.Network/loadBalancers/backendAddressPools/join/action` |
+| `Microsoft.Network/loadBalancers/frontendIPConfigurations/read` |
+| `Microsoft.Network/loadBalancers/frontendIPConfigurations/join/action` |
+| `Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/read` |
+| `Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/write` |
+| `Microsoft.Network/networkInterfaces/read` |
+| `Microsoft.Network/networkInterfaces/write` |
+| `Microsoft.Network/networkInterfaces/join/action` |
+| `Microsoft.Network/networkInterfaces/ipconfigurations/read` |
+| `Microsoft.Network/networkInterfaces/ipconfigurations/join/action` |
+| `Microsoft.Network/privateEndpoints/read` |
+| `Microsoft.Network/privateEndpoints/write` |
+| `Microsoft.Network/virtualNetworks/read` |
+| `Microsoft.Network/virtualNetworks/subnets/read` |
+| `Microsoft.Network/virtualNetworks/subnets/joinLoadBalancer/action` |
+| `Microsoft.Network/virtualNetworks/subnets/join/action` |
+| `Microsoft.Storage/storageAccounts/read` |
+| `Microsoft.Storage/storageAccounts/write` |
+| 
`Microsoft.Storage/storageAccounts/listAccountSas/action` |
+| `Microsoft.Storage/storageAccounts/PrivateEndpointConnectionsApproval/action` |
+| `Microsoft.Storage/storageAccounts/blobServices/read` |
+| `Microsoft.Storage/storageAccounts/blobServices/containers/read` |
+| `Microsoft.Storage/storageAccounts/fileServices/read` |
+| `Microsoft.Storage/storageAccounts/fileServices/write` |
+| `Microsoft.Storage/storageAccounts/fileServices/shares/read` |
+| `Microsoft.Storage/storageAccounts/fileServices/shares/write` |
++## Install SAP software
++To install SAP software, a *user* and *user-assigned managed identity* requires the following role or permissions.
++| Built-in roles for *users* |
+| - |
+| **Azure Center for SAP solutions administrator** |
++| Minimum permissions for *users* |
+| - |
+| `Microsoft.Workloads/sapVirtualInstances/write` |
+| `Microsoft.Workloads/sapVirtualInstances/applicationInstances/read` |
+| `Microsoft.Workloads/sapVirtualInstances/centralInstances/read` |
+| `Microsoft.Workloads/sapVirtualInstances/databaseInstances/read` |
+| `Microsoft.Workloads/sapVirtualInstances/read` |
+| `Microsoft.Workloads/Operations/read` |
+| `Microsoft.Workloads/Locations/OperationStatuses/read` |
+| `Microsoft.Storage/storageAccounts/read` |
+| `Microsoft.Storage/storageAccounts/blobServices/read` |
+| `Microsoft.Storage/storageAccounts/blobServices/containers/read` |
+| `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` |
+| `Microsoft.Storage/storageAccounts/fileServices/read` |
+| `Microsoft.Storage/storageAccounts/fileServices/shares/read` |
++| Built-in roles for *user-assigned managed identities* |
+| - |
+| **Azure Center for SAP solutions service role** |
+| **Reader and Data Access** |
++| Minimum permissions for *user-assigned managed identities* |
+| - |
+| `Microsoft.Compute/disks/read` |
+| `Microsoft.Compute/virtualMachines/read` |
+| `Microsoft.Compute/disks/write` |
+| `Microsoft.Compute/virtualMachines/write` |
+| `Microsoft.Compute/virtualMachines/extensions/delete` |
+| `Microsoft.Compute/virtualMachines/extensions/read` |
+| `Microsoft.Compute/virtualMachines/extensions/write` |
+| `Microsoft.Compute/virtualMachines/instanceView/read` |
+| `Microsoft.Network/loadBalancers/read` |
+| `Microsoft.Network/loadBalancers/backendAddressPools/read` |
+| `Microsoft.Network/loadBalancers/frontendIPConfigurations/read` |
+| `Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/read` |
+| `Microsoft.Network/networkInterfaces/read` |
+| `Microsoft.Network/networkInterfaces/ipconfigurations/read` |
+| `Microsoft.Network/privateEndpoints/read` |
+| `Microsoft.Network/virtualNetworks/read` |
+| `Microsoft.Network/virtualNetworks/subnets/read` |
+| `Microsoft.Storage/storageAccounts/read` |
+| `Microsoft.Storage/storageAccounts/listAccountSas/action` |
+| `Microsoft.Storage/storageAccounts/blobServices/containers/read` |
+| `Microsoft.Storage/storageAccounts/fileServices/read` |
+| `Microsoft.Storage/storageAccounts/fileServices/shares/read` |
+| `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` |
+| `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/filter/action` |
+| `Microsoft.Storage/storageAccounts/write` |
+| `Microsoft.Storage/storageAccounts/fileServices/write` |
+| `Microsoft.Storage/storageAccounts/fileServices/shares/write` |
++## Register and manage existing SAP system
++To register an existing SAP system and manage that 
system with Azure Center for SAP solutions, a *user* or *user-assigned managed identity* requires the following role or permissions. ++| Built-in roles for *users* | +| - | +| **Azure Center for SAP solutions administrator** | ++| Minimum permissions for *users* | +| - | +| `Microsoft.Workloads/sapVirtualInstances/write` | +| `Microsoft.Compute/virtualMachines/read` | ++| Built-in roles for *user-assigned managed identities* | +| - | +| **Azure Center for SAP solutions service role** | ++| Minimum permissions for *user-assigned managed identities* | +| - | +| `Microsoft.Compute/virtualMachines/read` | +| `Microsoft.Compute/disks/read` | +| `Microsoft.Compute/disks/write` | +| `Microsoft.Compute/virtualMachines/write` | +| `Microsoft.Compute/virtualMachines/extensions/read` | +| `Microsoft.Compute/virtualMachines/extensions/write` | +| `Microsoft.Compute/virtualMachines/instanceView/read` | +| `Microsoft.Network/loadBalancers/read` | +| `Microsoft.Network/loadBalancers/backendAddressPools/read` | +| `Microsoft.Network/loadBalancers/frontendIPConfigurations/read` | +| `Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/read` | +| `Microsoft.Network/networkInterfaces/read` | +| `Microsoft.Network/networkInterfaces/ipconfigurations/read` | +| `Microsoft.Network/virtualNetworks/read` | +| `Microsoft.Network/virtualNetworks/subnets/read` | ++## View VIS resources ++To view VIS resources, a *user* or *user-assigned managed identity* requires the following role or permissions. ++| Built-in roles for *users* | +| - | +| **Azure Center for SAP solutions reader** | +| **Reader** | ++| Minimum permissions for *users* | +| - | +| `Microsoft.Workloads/sapVirtualInstances/applicationInstances/read` | +| `Microsoft.Workloads/sapVirtualInstances/centralInstances/read` | +| `Microsoft.Workloads/sapVirtualInstances/databaseInstances/read` | +| `Microsoft.Workloads/sapVirtualInstances/read` | +| `Microsoft.Workloads/Operations/read` | +| `Microsoft.Workloads/Locations/OperationStatuses/read` | +| `Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSizingRecommendations/action` | +| `Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSapSupportedSku/action` | +| `Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getDiskConfigurations/action` | +| `Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getAvailabilityZoneDetails/action` | +| `Microsoft.Insights/Metrics/Read` | +| `Microsoft.ResourceHealth/AvailabilityStatuses/read` | ++| Built-in roles for *user-assigned managed identities* | +| - | +| This scenario isn't applicable to *user-assigned managed identities*. | ++| Built-in permissions for *user-assigned managed identities* | +| - | +| This scenario isn't applicable to *user-assigned managed identities*. | ++## Start SAP system ++To start the SAP system from a VIS resource, a *user* and *user-assigned managed identity* requires the following role or permissions. 
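For illustration, the `start` action that these permissions gate can be invoked directly against the resource provider, for example with `az rest`. This is a sketch only; the API version shown is an assumption, so check the current `Microsoft.Workloads` API version before you use it.

```azurecli
# Hypothetical sketch: invoke the start action on a Virtual Instance for SAP solutions (VIS).
az rest --method post \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Workloads/sapVirtualInstances/<SID>/start?api-version=2022-11-01-preview"
```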
++| Built-in roles for *users* | +| - | +| **Azure Center for SAP solutions administrator** | ++| Minimum permissions for *users* | +| - | +| `Microsoft.Workloads/sapVirtualInstances/start/action` | ++| Built-in roles for *user-assigned managed identities* | +| - | +| **Azure Center for SAP solutions service role** | ++| Minimum permissions for *user-assigned managed identities* | +| - | +| `Microsoft.Compute/virtualMachines/read` | +| `Microsoft.Compute/virtualMachines/extensions/read` | +| `Microsoft.Compute/virtualMachines/extensions/write` | +| `Microsoft.Compute/virtualMachines/instanceView/read` | ++## Stop SAP system ++To stop the SAP system from a VIS resource, a *user* and *user-assigned managed identity* requires the following role or permissions. ++| Built-in roles for *users* | +| - | +| **Azure Center for SAP solutions administrator** | ++| Minimum permissions for *users* | +| - | +| `Microsoft.Workloads/sapVirtualInstances/stop/action` | ++| Built-in roles for *user-assigned managed identities* | +| - | +| **Azure Center for SAP solutions service role** | ++| Minimum permissions for *user-assigned managed identities* | +| - | +| `Microsoft.Compute/virtualMachines/read` | +| `Microsoft.Compute/virtualMachines/extensions/read` | +| `Microsoft.Compute/virtualMachines/extensions/write` | +| `Microsoft.Compute/virtualMachines/instanceView/read` | ++## View cost analysis ++To view the cost analysis, a *user* requires the following role or permissions. ++| Built-in roles for *users* | +| - | +| **Cost Management Reader** | ++| Minimum permissions for *users* | +| - | +| `Microsoft.Consumption/*/read**` | +| `Microsoft.CostManagement/*/read` | +| `Microsoft.Billing/billingPeriods/read` | +| `Microsoft.Resources/subscriptions/read` | +| `Microsoft.Resources/subscriptions/resourceGroups/read` | +| `Microsoft.Billing/billingProperty/read` | ++| Built-in roles for *user-assigned managed identities* | +| - | +| This scenario isn't applicable to *user-assigned managed identities*. | ++| Minimum permissions for *user-assigned managed identities* | +| - | +| This scenario isn't applicable to *user-assigned managed identities*. | ++## View Quality Insights ++To view Quality Insights, a *user* requires the following role or permissions. ++| Built-in roles for *users* | +| - | +| **Reader** | ++ Minimum permissions for *users* | +| - | +| None, except the minimum role assignment. | ++| Built-in roles for *user-assigned managed identities* | +| - | +| This scenario isn't applicable to *user-assigned managed identities*. | ++| Minimum permissions for *user-assigned managed identities* | +| - | +| This scenario isn't applicable to *user-assigned managed identities*. | ++## Set up Azure Monitor for SAP solutions ++To set up Azure Monitor for SAP solutions for your SAP resources, a *user* requires the following role or permissions. ++| Built-in roles for *users* | +| - | +| **Contributor** | ++| Minimum permissions for *users* | +| - | +| None, except the minimum role assignment. | ++| Built-in roles for *user-assigned managed identities* | +| - | +| This scenario isn't applicable to *user-assigned managed identities*. | ++| Minimum permissions for *user-assigned managed identities* | +| - | +| This scenario isn't applicable to *user-assigned managed identities*. | ++## Delete VIS resource ++To delete a VIS resource, a *user* or *user-assigned managed identity* requires the following role or permissions. 
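As a sketch of the underlying call (the API version is an assumption to verify; also note that deleting the VIS resource doesn't necessarily remove the underlying infrastructure):

```azurecli
# Hypothetical sketch: delete a Virtual Instance for SAP solutions (VIS) resource.
az rest --method delete \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Workloads/sapVirtualInstances/<SID>?api-version=2022-11-01-preview"
```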
++| Built-in roles for *users* | +| - | +| **Azure Center for SAP solutions administrator** | ++| Minimum permissions for *users* | +| - | +| `Microsoft.Workloads/sapVirtualInstances/delete` | +| `Microsoft.Workloads/sapVirtualInstances/read` | +| `Microsoft.Workloads/sapVirtualInstances/applicationInstances/read` | +| `Microsoft.Workloads/sapVirtualInstances/centralInstances/read` | +| `Microsoft.Workloads/sapVirtualInstances/databaseInstances/read` | ++| Built-in roles for *user-assigned managed identities* | +| - | +| This scenario isn't applicable to *user-assigned managed identities*. | ++| Minimum permissions for *user-assigned managed identities* | +| - | +| This scenario isn't applicable to *user-assigned managed identities*. | ++## Next steps ++- [Manage VIS resources in Azure Center for SAP solutions](manage-virtual-instance.md) |
center-sap-solutions | Monitor Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/monitor-portal.md | Title: Monitor SAP system from the Azure portal (preview) description: Learn how to monitor the health and status of your SAP system, along with important SAP metrics, using the Azure Center for SAP solutions within the Azure portal.-++ Last updated 10/19/2022 |
center-sap-solutions | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/overview.md | Title: Azure Center for SAP solutions (preview) description: Azure Center for SAP solutions is an Azure offering that makes SAP a top-level workload on Azure. You can use Azure Center for SAP solutions to deploy or manage SAP systems on Azure seamlessly.-++ Last updated 10/19/2022 |
center-sap-solutions | Prepare Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/prepare-network.md | Title: Prepare network for infrastructure deployment (preview) description: Learn how to prepare a network for use with an S/4HANA infrastructure deployment with Azure Center for SAP solutions through the Azure portal.-++ Last updated 10/19/2022 |
center-sap-solutions | Register Existing System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/register-existing-system.md | Title: Register existing SAP system (preview) description: Learn how to register an existing SAP system in Azure Center for SAP solutions through the Azure portal. You can visualize, manage, and monitor your existing SAP system through Azure Center for SAP solutions.-++ Previously updated : 10/19/2022 Last updated : 02/03/2023 #Customer intent: As a developer, I want to register my existing SAP system so that I can use the system with Azure Center for SAP solutions. The following SAP system configurations aren't supported in Azure Center for SAP - Dual stack (ABAP and Java) - Systems distributed across peered virtual networks - Systems using IPv6 addresses+- SAP systems with multiple Application Server instances on a single virtual machine +- SAP systems with a [clustered Application Server architecture](../virtual-machines/workloads/sap/high-availability-guide-rhel-with-dialog-instance.md) +- Multiple SIDs running on the same set of virtual machines. For example, two or more SIDs sharing a single VM for the ASCS instance. ## Enable resource permissions |
center-sap-solutions | Start Stop Sap Systems | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/start-stop-sap-systems.md | Title: Start and stop SAP systems (preview) description: Learn how to start or stop an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.-++ Last updated 10/19/2022 |
center-sap-solutions | View Cost Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/view-cost-analysis.md | Title: View post-deployment cost analysis in Azure Center for SAP solutions (preview) description: Learn how to view the cost of running an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.-++ Last updated 10/19/2022 |
cognitive-services | Call Read Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-read-api.md | It returns a JSON response that contains a **status** field with the following p You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 1 to 2 seconds to avoid exceeding the requests per second (RPS) rate. > [!NOTE]-> The free tier limits the request rate to 20 calls per minute. The paid tier allows 10 requests per second (RPS) that can be increased upon request. Note your Azure resource identfier and region, and open an Azure support ticket or contact your account team to request a higher request per second (RPS) rate. +> The free tier limits the request rate to 20 calls per minute. The paid tier allows 30 requests per second (RPS) that can be increased upon request. Note your Azure resource identifier and region, and open an Azure support ticket or contact your account team to request a higher request per second (RPS) rate. When the **status** field has the `succeeded` value, the JSON response contains the extracted text content from your image or document. The JSON response maintains the original line groupings of recognized words. It includes the extracted text lines and their bounding box coordinates. Each text line includes all extracted words with their coordinates and confidence scores.
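As a minimal sketch of the polling loop described above (the endpoint, key, and operation ID are placeholders, and `jq` is assumed to be installed):

```bash
# Poll the Read analyzeResults operation until it reaches a terminal state.
OPERATION_URL="https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/read/analyzeResults/<operationId>"
while true; do
  STATUS=$(curl -s -H "Ocp-Apim-Subscription-Key: <your-key>" "$OPERATION_URL" | jq -r '.status')
  echo "status: $STATUS"
  case "$STATUS" in
    succeeded|failed) break ;;
  esac
  sleep 2   # a 1-2 second interval stays under the RPS rate
done
``` |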
cognitive-services | How To Custom Speech Test And Train | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md | Text and audio that you use to test and train a custom model should include samp The following table lists accepted data types, when each data type should be used, and the recommended quantity. Not every data type is required to create a model. Data requirements will vary depending on whether you're creating a test or training a model. -| Data type | Used for testing | Recommended quantity | Used for training | Recommended quantity | +| Data type | Used for testing | Recommended for testing | Used for training | Recommended for training | |--|--|-|-|-| | [Audio only](#audio-data-for-training-or-testing) | Yes (visual inspection) | 5+ audio files | Yes (Preview for `en-US`) | 1-20 hours of audio | | [Audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing) | Yes (evaluation of accuracy) | 0.5-5 hours of audio | Yes | 1-20 hours of audio | |
cognitive-services | Speech Ssml Phonetic Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md | See the sections in this article for the phonemes that are specific to each loca ## zh-TW [!INCLUDE [zh-TW](./includes/phonetic-sets/text-to-speech/zh-tw.md)] +## Map X-SAMPA to IPA -*** |
cognitive-services | Speech Synthesis Markup Pronunciation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-pronunciation.md | Usage of the `phoneme` element's attributes are described in the following table | Attribute | Description | Required or optional | | - | - | - |-| `alphabet` | The phonetic alphabet to use when you synthesize the pronunciation of the string in the `ph` attribute. The string that specifies the alphabet must be specified in lowercase letters. The following options are the possible alphabets that you can specify:<ul><li>`ipa` – See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`sapi` – See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`ups` – See [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li></ul><br>The alphabet applies only to the `phoneme` in the element. | Optional | +| `alphabet` | The phonetic alphabet to use when you synthesize the pronunciation of the string in the `ph` attribute. The string that specifies the alphabet must be specified in lowercase letters. The following options are the possible alphabets that you can specify:<ul><li>`ipa` – See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`sapi` – See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`ups` – See [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li><li>`x-sampa` – See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md#map-x-sampa-to-ipa)</li></ul><br>The alphabet applies only to the `phoneme` in the element. | Optional | | `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, text-to-speech rejects the entire SSML document and produces none of the speech output specified in the document.<br/><br/>For `ipa`, to stress one syllable by placing stress symbol before this syllable, you need to mark all syllables for the word. Or else, the syllable before this stress symbol will be stressed. For `sapi`, if you want to stress one syllable, you need to place the stress symbol after this syllable, whether or not all syllables of the word are marked.| Required | ### phoneme examples The supported values for attributes of the `phoneme` element were [described pre </speak> ``` +```xml +<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"> + <voice name="en-US-JennyNeural"> + <phoneme alphabet='x-sampa' ph='he."lou'>hello</phoneme> + </voice> +</speak> +``` + ## Custom lexicon You can define how single entities (such as company, a medical term, or an emoji) are read in SSML by using the [phoneme](#phoneme-element) and [sub](#sub-element) elements. To define how multiple entities are read, create an XML structured custom lexicon file. Then you upload the custom lexicon XML file and reference it with the SSML `lexicon` element. |
cognitive-services | Create Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/create-resource.md | recommendations: false # Create a resource and deploy a model using Azure OpenAI -Use this article to get started with Azure OpenAI with step-by-step instructions to create a resource and deploy a model. While the steps for resource creation and model deployment can be completed in a few minutes, the actual deployment process itself can take more than hour. It is recommended to create your resource, start your deployment, and then check back in on your deployment later rather than actively waiting for the deployment to complete. +Use this article to get started with Azure OpenAI with step-by-step instructions to create a resource and deploy a model. While the steps for resource creation and model deployment can be completed in a few minutes, the actual deployment process itself can take more than an hour. You can create your resource, start your deployment, and then check back in on your deployment later rather than actively waiting for the deployment to complete. ::: zone pivot="web-portal" ::: zone-end ::: zone pivot="cli" ::: zone-end
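As a hedged sketch of the CLI pivot (the commands are from the `az cognitiveservices` group; the model name, region, and any required SKU or scale settings may differ by subscription and CLI version, so treat the flags as assumptions to verify with `--help`):

```azurecli
# Illustrative only: create an Azure OpenAI resource, then deploy a model to it.
az cognitiveservices account create \
    --name MyOpenAIResource --resource-group myResourceGroup \
    --kind OpenAI --sku S0 --location eastus

az cognitiveservices account deployment create \
    --name MyOpenAIResource --resource-group myResourceGroup \
    --deployment-name mydeployment \
    --model-name text-davinci-003 --model-version "1" --model-format OpenAI
``` |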
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quickstart.md | |
communication-services | Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md | -The Chat APIs provide an **auto-scaling** service for persistently storied text and data communication. Other key features include: +The Chat APIs provide an **auto-scaling** service for persistently stored text and data communication. Other key features include: - **Custom Identity and Addressing** - Azure Communication Services provides generic [identities](../identity-model.md) that are used to address communication endpoints. Clients use these identities to authenticate to the Azure service and communicate with each other in `chat threads` you control. - **Encryption** - Chat SDKs encrypt traffic and prevent tampering on the wire. |
communication-services | Calling Chat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/calling-chat.md | Title: Teams calling and chat interoperability description: Teams calling and chat interoperability--++ Last updated 10/15/2021 -> Calling and chat interoperability is in private preview, and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/F3WLqPjw0D) and we will review your scenario(s) and evaluate your participation in the preview. +> Calling and chat interoperability is in private preview, and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/F3WLqPjw0D), and we'll review your scenario(s) and evaluate your participation in the preview. >-> Private Preview APIs and SDKs are provided without a service-level agreement, and are not appropriate for production workloads and should only be used with test users and test data. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> Private Preview APIs and SDKs are provided without a service-level agreement, aren't appropriate for production workloads, and should only be used with test users and test data. Certain features may not be supported or have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). > -> For support, questions or to provide feedback or report issues, please use the [Teams Interop ad hoc calling and chat channel](https://teams.microsoft.com/l/channel/19%3abfc7d5e0b883455e80c9509e60f908fb%40thread.tacv2/Teams%2520Interop%2520ad%2520hoc%2520calling%2520and%2520chat?groupId=d78f76f3-4229-4262-abfb-172587b7a6bb&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47). You must be a member of the Azure Communication Service TAP team. +> For support, questions, or to provide feedback or report issues, please use the [Teams interop ad hoc calling and chat channel](https://teams.microsoft.com/l/channel/19%3abfc7d5e0b883455e80c9509e60f908fb%40thread.tacv2/Teams%2520Interop%2520ad%2520hoc%2520calling%2520and%2520chat?groupId=d78f76f3-4229-4262-abfb-172587b7a6bb&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47). You must be a member of the Azure Communication Service TAP team. -As part of this preview, the Azure Communication Services SDKs can be used to build applications that enable bring your own identity (BYOI) users to start 1:1 calls or 1:n chats with Teams users. [Standard Azure Communication Services pricing](https://azure.microsoft.com/pricing/details/communication-services/) applies to these users, but there's no extra fee for the interoperability capability itself. +As part of this preview, the Azure Communication Services SDKs can be used to build applications that enable bring your own identity (BYOI) users to start 1:1 calls or 1:n chats with Teams users. [Standard Azure Communication Services pricing](https://azure.microsoft.com/pricing/details/communication-services/) applies to these users, but there's no extra fee for the interoperability capability itself. 
Custom applications built with Azure Communication Services to connect and communicate with Teams users or Teams voice applications can be used by end users or by bots, and there's no differentiation in how they appear to Teams users in Teams applications unless explicitly indicated by the developer of the application with a display name. ++To enable calling and chat between your Communication Services users and your Teams tenant, request allowlisting of your tenant via the [form](https://forms.office.com/r/F3WLqPjw0D), and then enable the connection between the tenant and your Communication Services resource. ## Enabling calling and chat interoperability in your Teams tenant-An Azure AD user with the [Teams administrator role](/azure/active-directory/roles/permissions-reference#teams-administrator) can run PowerShell cmdlets from the MicrosoftTeams module to enable the Communication Services resource in the tenant. First, open PowerShell and validate the existence of the Teams module with the following command: ++```script +Get-Module *teams* +``` ++If you don't see the MicrosoftTeams module, you need to install it first. To install the module, you need to run PowerShell as an administrator. Then run the following command: ++```script + Install-Module -Name MicrosoftTeams +``` ++You're informed about the modules that will be installed, and you can confirm the installation with a `Y` or `A` answer. If the module is installed but is outdated, you can run the following command to update the module: ++```script + Update-Module MicrosoftTeams +``` ++When the module is installed and ready, you can connect to Microsoft Teams with the following command. You're prompted with an interactive window to sign in. The user account that you're going to use needs to have Teams administrator permissions. Otherwise, you might get an `access denied` response in the next steps. ++```script +Connect-MicrosoftTeams +``` ++After a successful sign-in, you can run the cmdlet [Set-CsTeamsAcsFederationConfiguration](/powershell/module/teams/set-csteamsacsfederationconfiguration) to enable the Communication Services resource in your tenant. Replace the text `IMMUTABLE_RESOURCE_ID` with the immutable resource ID of your Communication Services resource. You can find more details on how to get this information [here](../troubleshooting-info.md#getting-immutable-resource-id). ++```script +$allowlist = @('IMMUTABLE_RESOURCE_ID') +Set-CsTeamsAcsFederationConfiguration -EnableAcsUsers $True -AllowedAcsResources $allowlist +``` + -Custom applications built with Azure Communication Services to connect and communicate with Teams users can be used by end users or by bots, and there's no differentiation in how they appear to Teams users, unless explicitly indicated by the developer of the application. ## Get Teams user ID -To start a call or chat with a Teams user, the user's Azure Active Directory (Azure AD) object ID is required. This can be obtained using [Microsoft Graph API](/graph/api/resources/users) or from your on-premises directory if you are using [Azure AD Connect](../../../active-directory/hybrid/how-to-connect-sync-whatis.md) (or some other mechanism) to synchronize between your on-premises directory and Azure AD. 
+To start a call or chat with a Teams user or Teams Voice application, you need the identifier of the target. You have the following options to retrieve the ID: +- The user interface of [Azure AD](../troubleshooting-info.md?#getting-user-id), or on-premises directory synchronization with [Azure AD Connect](../../../active-directory/hybrid/how-to-connect-sync-whatis.md) +- Programmatically via the [Microsoft Graph API](/graph/api/resources/users) ## Calling-With the Calling SDK, a Communication Services user or endpoint can start a 1:1 call with Teams users, identified by their Azure Active Directory (Azure AD) object ID. You can easily modify an existing application that calls other Communication Services users to instead call a Teams user. +With the Calling SDK, a Communication Services user or endpoint can start a 1:1 call with Teams users, identified by their Azure Active Directory (Azure AD) object ID. You can easily modify an existing application that calls other Communication Services users to call Teams users. [Manage calls - An Azure Communication Services how-to guide | Microsoft Docs](../../how-tos/calling-sdk/manage-calls.md?pivots=platform-web) const teamsCallee = { microsoftTeamsUserId: '<Teams user Azure AD object ID>' }; // placeholder object ID const call = callAgent.startCall([teamsCallee]); [Communication Services voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) are raised for calls between a Communication Services user and Teams users. **Limitations and known issues**-- This functionality is not currently available in the .NET Calling SDK.+- This functionality isn't currently available in the .NET Calling SDK. - Teams users must be in "TeamsOnly" mode. Skype for Business users can't receive 1:1 calls from Communication Services users. - Escalation to a group call isn't supported. - Communication Services call recording isn't available for 1:1 calls.-- Advanced call routing capabilities such as call forwarding, group call pickup, simulring, and voice mail are not supported.+- Advanced call routing capabilities such as call forwarding, group call pickup, simultaneous ringing, and voice mail aren't supported. - Teams users can't set Communication Services users as forwarding/transfer targets.-- There are a number of features in the Teams client that do not work as expected during 1:1 calls with Communication Services users.+- There are many features in the Teams client that don't work as expected during 1:1 calls with Communication Services users. - Third-party [devices for Teams](/MicrosoftTeams/devices/teams-ip-phones) and [Skype IP phones](/skypeforbusiness/certification/devices-ip-phones) aren't supported. ## Chat-With the Chat SDK, Communication Services users or endpoints can have group chats with Teams users, identified by their Azure Active Directory (AAD) object ID. 
You can easily modify an existing application that creates chats with other Communication Services users to create chats with Teams users instead. Here is an example of how to use the Chat SDK to add Teams users as participants. To learn how to use the Chat SDK to send a message, manage participants, and more, see our [quickstart](../../quickstarts/chat/get-started.md?pivots=programming-language-javascript). Creating a chat with a Teams user: ```js
// Assumes a ChatClient from @azure/communication-chat, with a request whose
// participants include the Teams user's Azure AD object ID.
async function createChatThreadWithTeamsUser(chatClient, createChatThreadRequest, createChatThreadOptions) {
  const createChatThreadResult = await chatClient.createChatThread(
    createChatThreadRequest,
    createChatThreadOptions
  );
  const threadId = createChatThreadResult.chatThread.id;
  return threadId;
}
``` -To make it easier to test, we have published a sample app [here](https://github.com/Azure-Samples/communication-services-web-chat-hero/tree/teams-interop-chat-adhoc). Update the app with your Communication Services resource and interop enabled Teams tenant to get started. +To make testing easier, we've published a sample app [here](https://github.com/Azure-Samples/communication-services-web-chat-hero/tree/teams-interop-chat-adhoc). Update the app with your Communication Services resource and interop enabled Teams tenant to get started. **Limitations and known issues** </br>-While in private preview, a Communication Services user can do various actions using the Communication Services Chat SDK, including sending and receiving of plain and rich text messages, typing indicators, read receipts, real-time notifications and more. However, most of the Teams chat features aren't supported. Here are some key behaviors and known issues: -- Chats can only be initiated by Communication Services users. -- Communication Services users can't send or receive gifs, images, or files. Links to files and images can be shared.+While in private preview, a Communication Services user can do various actions using the Communication Services Chat SDK, including sending and receiving plain and rich text messages, typing indicators, read receipts, real-time notifications, and more. However, most of the Teams chat features aren't supported. Here are some key behaviors and known issues: +- Only Communication Services users can initiate chats. +- Communication Services users can't send or receive GIFs, images, or files. Links to files and images can be shared. - Communication Services users can delete the chat. This removes the Teams user from the chat thread and hides the message history from the Teams client.-- Known issue: Communication Services users aren't displayed correctly in the participant list. They are currently displayed as External but their people card might be inconsistent. +- Known issue: Communication Services users aren't displayed correctly in the participant list. They're currently displayed as External, but their people cards might be inconsistent. - Known issue: A chat can't be escalated to a call from within the Teams app. -- Known issue: Editing of messages by the Teams user is not supported. +- Known issue: Editing of messages by the Teams user isn't supported. ## Privacy-Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting. +Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chats. 
It is your responsibility to ensure that the users of your application are notified when recording or transcription is enabled in a Teams call or meeting. -Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation. +Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced. You must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred due to your failure to comply with this obligation. |
communication-services | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/security.md | + + Title: Azure AD API permissions for communication as Teams external user ++description: This article describes Azure AD API permissions for communication as a Teams external user with Azure Communication Services. +++++ Last updated : 02/02/2023+++++# Security of communication as Teams external user ++In this article, you'll learn about the security measures and frameworks implemented by Microsoft Teams and Azure Communication Services to provide a secure collaboration environment. The products implement data encryption, secure real-time communication, two-factor authentication, user authentication, and authorization to prevent common security threats. The security frameworks for these services are based on industry standards and best practices. ++## Microsoft Teams +Microsoft Teams handles security using a combination of technologies and processes to mitigate common security threats and provide a secure collaboration environment. Teams implements multiple layers of security, including data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and two-factor authentication for added protection. The security framework for Teams is built on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security covering all stages of development. Teams also undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Teams integrates with Microsoft's suite of security products and services, such as Azure Active Directory, to provide customers with a comprehensive security solution. You can learn more about [security in Microsoft Teams](/microsoftteams/teams-security-guide.md). ++Additionally, Microsoft Teams provides several policies and tenant configurations to control Teams external users' joining and in-meeting experience. Teams administrators can use settings in the Microsoft Teams admin center or PowerShell to control whether Teams external users can join Teams meetings, bypass the lobby, start a meeting, or participate in chat, and to control the default role assignment. You can learn more about the [policies here](./teams-administration.md). ++## Azure Communication Services +Azure Communication Services handles security by implementing various security measures to prevent and mitigate common security threats. These measures include data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and authentication mechanisms to verify the identity of users. The security framework for Azure Communication Services is based on industry standards and best practices. Azure also undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Azure Communication Services integrates with other Azure security services, such as Azure Active Directory, to provide customers with a comprehensive security solution. Customers can also control access to the services and manage their security settings through the Azure portal. 
You can learn more about the [Azure security baseline](/security/benchmark/azure/baselines/azure-communication-services-security-baseline?toc=/azure/communication-services/toc.json), and about the security of [call flows](../../call-flows.md) and [call flow topologies](../../detailed-call-flows.md). |
communication-services | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/security.md | + + Title: Security of communication as Teams user ++description: This article describes the security of communication as a Teams user with Azure Communication Services. +++++ Last updated : 02/02/2022+++++# Security of communication as Teams user +In this article, you'll learn about the security measures and frameworks implemented by Microsoft Teams, Azure Communication Services, and Azure Active Directory to provide a secure collaboration environment. The products implement data encryption, secure real-time communication, two-factor authentication, user authentication, and authorization to prevent common security threats. The security frameworks for these services are based on industry standards and best practices. ++## Microsoft Teams +Microsoft Teams handles security using a combination of technologies and processes to mitigate common security threats and provide a secure collaboration environment. Teams implements multiple layers of security, including data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and two-factor authentication for added protection. The security framework for Teams is built on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security covering all stages of development. Teams also undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Teams integrates with Microsoft's suite of security products and services, such as Azure Active Directory, to provide customers with a comprehensive security solution. You can learn more about [security in Microsoft Teams](/microsoftteams/teams-security-guide.md). ++## Azure Communication Services +Azure Communication Services handles security by implementing various security measures to prevent and mitigate common security threats. These measures include data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and authentication mechanisms to verify the identity of users. The security framework for Azure Communication Services is based on industry standards and best practices. Azure also undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Azure Communication Services integrates with other Azure security services, such as Azure Active Directory, to provide customers with a comprehensive security solution. Customers can also control access to the services and manage their security settings through the Azure portal. You can learn more about the [Azure security baseline](/security/benchmark/azure/baselines/azure-communication-services-security-baseline?toc=/azure/communication-services/toc.json), and about the security of [call flows](../../call-flows.md) and [call flow topologies](../../detailed-call-flows.md). ++## Azure Active Directory +Azure Active Directory provides a range of security features for Microsoft Teams to help handle common security threats and provide a secure collaboration environment. Azure AD helps to secure user authentication and authorization, allowing administrators to manage user access to Teams and other applications through a single, centralized platform. 
Azure AD also integrates with Teams to provide multi-factor authentication and conditional access policies, which can be used to enforce security policies and control access to sensitive information. The security framework for Azure Active Directory is based on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security that covers all stages of development. Azure AD undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Azure AD integrates with other Azure security services, such as Azure Information Protection, to provide customers with a comprehensive security solution. You can learn more about [Azure identity management security](/azure/security/fundamentals/identity-management-overview.md). |
cosmos-db | Index Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md | Azure Cosmos DB supports two indexing modes: > [!NOTE] > Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query an Azure Cosmos DB container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmoslazyindexing@microsoft.com (except if you are using an Azure Cosmos DB account in [serverless](serverless.md) mode which doesn't support lazy indexing). -By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index documents as they're written. +By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index items as they're written. ## <a id="index-size"></a>Index size Any indexing policy has to include the root path `/*` as either an included or a - If the indexing mode is set to **consistent**, the system properties `id` and `_ts` are automatically indexed. +- If an explicitly indexed path doesn't exist in an item, a value will be added to the index to indicate that the path is undefined. ++All explicitly included paths will have values added to the index for each item in the container, even if the path is undefined for a given item. + See [this section](how-to-manage-indexing-policy.md#indexing-policy-examples) for indexing policy examples for including and excluding paths. ## Include/exclude precedence Azure Cosmos DB, by default, won't create any spatial indexes. If you would like Queries that have an `ORDER BY` clause with two or more properties require a composite index. You can also define a composite index to improve the performance of many equality and range queries. By default, no composite indexes are defined so you should [add composite indexes](how-to-manage-indexing-policy.md#composite-index) as needed. -Unlike with included or excluded paths, you can't create a path with the `/*` wildcard. Every composite path has an implicit `/?` at the end of the path that you don't need to specify. Composite paths lead to a scalar value that is the only value included in the composite index. +Unlike with included or excluded paths, you can't create a path with the `/*` wildcard. Every composite path has an implicit `/?` at the end of the path that you don't need to specify. Composite paths lead to a scalar value that is the only value included in the composite index. If a path in a composite index doesn't exist in an item, a value will be added to the index to indicate that the path is undefined. When defining a composite index, you specify: |
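To ground the included-path, excluded-path, and composite-index concepts from the Index Policy entry above, here's a minimal sketch using the azure-cosmos Java SDK v4. The environment variables, the excluded path, and the `AzureSampleFamilyDB`/`FamilyContainer`/`lastName` names are illustrative assumptions, not taken from the article:

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.models.CompositePath;
import com.azure.cosmos.models.CompositePathSortOrder;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ExcludedPath;
import com.azure.cosmos.models.IncludedPath;
import com.azure.cosmos.models.IndexingMode;
import com.azure.cosmos.models.IndexingPolicy;

import java.util.Arrays;
import java.util.Collections;

public class IndexPolicySketch {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
            .endpoint(System.getenv("ACCOUNT_HOST")) // hypothetical environment variables
            .key(System.getenv("ACCOUNT_KEY"))
            .buildClient();

        CosmosContainerProperties properties =
            new CosmosContainerProperties("FamilyContainer", "/lastName");

        IndexingPolicy policy = new IndexingPolicy();
        policy.setIndexingMode(IndexingMode.CONSISTENT); // the default mode
        policy.setAutomatic(true);                       // index items as they're written
        // Include the required root path, and exclude a path you never query on.
        policy.setIncludedPaths(Collections.singletonList(new IncludedPath("/*")));
        policy.setExcludedPaths(Collections.singletonList(new ExcludedPath("/nonQueriedPath/*")));

        // Composite index to serve: ORDER BY c.lastName ASC, c.firstName ASC
        CompositePath byLastName = new CompositePath();
        byLastName.setPath("/lastName");
        byLastName.setOrder(CompositePathSortOrder.ASCENDING);
        CompositePath byFirstName = new CompositePath();
        byFirstName.setPath("/firstName");
        byFirstName.setOrder(CompositePathSortOrder.ASCENDING);
        policy.setCompositeIndexes(Collections.singletonList(Arrays.asList(byLastName, byFirstName)));

        properties.setIndexingPolicy(policy);
        client.createDatabaseIfNotExists("AzureSampleFamilyDB");
        client.getDatabase("AzureSampleFamilyDB").createContainerIfNotExists(properties);
        client.close();
    }
}
```

A composite index defined this way can serve a query such as `SELECT * FROM c ORDER BY c.lastName ASC, c.firstName ASC`, which otherwise requires a composite index as described in the entry above.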
cosmos-db | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md | As items are inserted into an Azure Cosmos DB container, the database grows hori ## Create a database account -Before you can create a document database, you need to create a API for NoSQL account with Azure Cosmos DB. +Before you can create a document database, you need to create an API for NoSQL account with Azure Cosmos DB. [!INCLUDE [cosmos-db-create-dbaccount](../includes/cosmos-db-create-dbaccount.md)] Before you can create a document database, you need to create a API for NoSQL ac ## Clone the sample application -Now let's switch to working with code. Let's clone a API for NoSQL app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically. +Now let's switch to working with code. Let's clone an API for NoSQL app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer. This step is optional. If you're interested in learning how the database resourc [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=QueryItems)] +## Run the app ++Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database. ++1. In the git terminal window, `cd` to the sample code folder. ++ ```bash + cd azure-cosmos-java-getting-started + ``` ++2. In the git terminal window, use the following command to install the required Java packages. ++ ```bash + mvn package + ``` ++3. In the git terminal window, use the following command to start the Java application (replace SYNCASYNCMODE with `sync` or `async` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from the portal) ++ ```bash + mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY + ``` ++ The terminal window displays a notification that the `AzureSampleFamilyDB` database was created. + +4. The app creates a database named `AzureSampleFamilyDB`. +5. The app creates a container named `FamilyContainer`. +6. The app will perform point reads using object IDs and partition key value (which is lastName in our sample). +7. The app will query items to retrieve all families with last name in ('Andersen', 'Wakefield', 'Johnson') +8. The app doesn't delete the created resources. Return to the Azure portal to [clean up the resources](#clean-up-resources) from your account so you don't incur charges. + # [Async API](#tab/async) ### Managing database resources using the asynchronous (async) API This step is optional. If you're interested in learning how the database resourc [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=QueryItems)] -- ## Run the app Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database. Now go back to the Azure portal to get your connection string information and la 6. 
The app will perform point reads using object IDs and partition key value (which is lastName in our sample). 7. The app will query items to retrieve all families with last name in ('Andersen', 'Wakefield', 'Johnson') +8. The app doesn't delete the created resources. Return to the Azure portal to [clean up the resources](#clean-up-resources) from your account so you don't incur charges. +++## [Passwordless Sync API](#tab/passwordlesssync) ++++## Authenticate using DefaultAzureCredential +++You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the `azure-identity` [dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` will automatically discover and use the account you signed in with in the previous step. ++### Managing database resources using the synchronous (sync) API ++* `CosmosClient` initialization. The `CosmosClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute requests against the service. + + [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=CreatePasswordlessSyncClient)] ++* Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container. ++ ```azurecli-interactive + # Create a SQL API database + az cosmosdb sql database create \ + --account-name msdocs-cosmos-nosql \ + --resource-group msdocs \ + --name AzureSampleFamilyDB + ``` + + ```azurecli-interactive + # Create a SQL API container + az cosmosdb sql container create \ + --account-name msdocs-cosmos-nosql \ + --resource-group msdocs \ + --database-name AzureSampleFamilyDB \ + --name FamilyContainer \ + --partition-key-path '/lastName' + ``` ++* Item creation by using the `createItem` method. ++ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=CreateItem)] + +* Point reads are performed using the `readItem` method. ++ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=ReadItem)] ++* SQL queries over JSON are performed using the `queryItems` method. ++ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=QueryItems)] ++## Run the app ++Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database. ++1. In the git terminal window, `cd` to the sample code folder. ++ ```bash + cd azure-cosmos-java-getting-started + ``` ++2. In the git terminal window, use the following command to install the required Java packages. ++ ```bash + mvn package + ``` ++3. In the git terminal window, use the following command to start the Java application. Replace `SYNCASYNCMODE` with `sync-passwordless` or `async-passwordless`, depending upon which sample code you'd like to run. Replace `YOUR_COSMOS_DB_HOSTNAME` with the quoted URI value from the portal, and replace `YOUR_COSMOS_DB_MASTER_KEY` with the quoted primary key from the portal. 
++ ```bash + mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY + ``` ++ The terminal window displays a notification that the `AzureSampleFamilyDB` database was created. ++4. The app will reference the database and container you created via Azure CLI earlier. + +5. The app will perform point reads using object IDs and partition key value (which is lastName in our sample). +6. The app will query items to retrieve all families with last name in ('Andersen', 'Wakefield', 'Johnson') +7. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account so that you don't incur charges. +## [Passwordless Async API](#tab/passwordlessasync) ++++## Authenticate using DefaultAzureCredential +++You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the `azure-identity` [dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` will automatically discover and use the account you signed in with in the previous step. ++### Managing database resources using the asynchronous (async) API ++* Async API calls return immediately, without waiting for a response from the server. In light of this, the following code snippets show proper design patterns for accomplishing all of the preceding management tasks using the async API. ++* `CosmosAsyncClient` initialization. The `CosmosAsyncClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute asynchronous requests against the service. + + [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncPasswordlessMain.java?name=CreatePasswordlessAsyncClient)] ++* Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container. ++ ```azurecli-interactive + # Create a SQL API database + az cosmosdb sql database create \ + --account-name msdocs-cosmos-nosql \ + --resource-group msdocs \ + --name AzureSampleFamilyDB + ``` + + ```azurecli-interactive + # Create a SQL API container + az cosmosdb sql container create \ + --account-name msdocs-cosmos-nosql \ + --resource-group msdocs \ + --database-name AzureSampleFamilyDB \ + --name FamilyContainer \ + --partition-key-path '/lastName' + ``` ++* As with the sync API, item creation is accomplished using the `createItem` method. This example shows how to efficiently issue numerous async `createItem` requests by subscribing to a Reactive Stream which issues the requests and prints notifications. Since this simple example runs to completion and terminates, `CountDownLatch` instances are used to ensure the program does not terminate during item creation. **The proper asynchronous programming practice is not to block on async calls - in realistic use-cases requests are generated from a main() loop that executes indefinitely, eliminating the need to latch on async calls.** ++ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncPasswordlessMain.java?name=CreateItem)] + +* As with the sync API, point reads are performed using the `readItem` method. 
++ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncPasswordlessMain.java?name=ReadItem)] ++* As with the sync API, SQL queries over JSON are performed using the `queryItems` method. ++ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncPasswordlessMain.java?name=QueryItems)] ++## Run the app ++Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database. ++1. In the git terminal window, `cd` to the sample code folder. ++ ```bash + cd azure-cosmos-java-getting-started + ``` ++2. In the git terminal window, use the following command to install the required Java packages. ++ ```bash + mvn package + ``` ++3. In the git terminal window, use the following command to start the Java application (replace SYNCASYNCMODE with `sync-passwordless` or `async-passwordless` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from the portal) ++ ```bash + mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY + ``` ++ The terminal window displays a notification that the `AzureSampleFamilyDB` database was created. ++4. The app will reference the database and container you created via Azure CLI earlier. + +5. The app will perform point reads using object IDs and partition key value (which is lastName in our sample). +6. The app will query items to retrieve all families with last name in ('Andersen', 'Wakefield', 'Johnson') ++7. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account so that you don't incur charges. ++++ ## Review SLAs in the Azure portal [!INCLUDE [cosmosdb-tutorial-review-slas](../includes/cosmos-db-tutorial-review-slas.md)] |
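As a rough sketch of what the `CreatePasswordlessSyncClient` snippet referenced in the entry above does, the following code builds a `CosmosClient` with `DefaultAzureCredential` instead of an account key. The endpoint environment variable, item ID, and partition key value are illustrative assumptions, not taken from the sample repository:

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.models.PartitionKey;
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.fasterxml.jackson.databind.JsonNode;

public class PasswordlessSketch {
    public static void main(String[] args) {
        // DefaultAzureCredential walks a chain of credential sources at runtime:
        // environment variables, managed identity, the Azure CLI sign-in, and so on.
        DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();

        CosmosClient client = new CosmosClientBuilder()
            .endpoint(System.getenv("ACCOUNT_HOST")) // e.g., https://<account>.documents.azure.com:443/
            .credential(credential)                  // no account key in code or config
            .buildClient();

        // Data-plane access is authorized through Azure RBAC role assignments,
        // which is why the database and container are created with the Azure CLI.
        JsonNode item = client.getDatabase("AzureSampleFamilyDB")
            .getContainer("FamilyContainer")
            .readItem("Andersen.1", new PartitionKey("Andersen"), JsonNode.class) // hypothetical item
            .getItem();
        System.out.println(item);

        client.close();
    }
}
```

Because the token-based client can't perform key-authorized management operations, the database and container creation stays in the Azure CLI steps shown in the entry above.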
cost-management-billing | Mca Understand Pricesheet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-understand-pricesheet.md | tags: billing Previously updated : 09/15/2021 Last updated : 02/03/2023 -If you are a billing profile Owner, Contributor, Reader, or Invoice Manager you can download your organization's price sheet from the Azure portal. See [View and download your organization's pricing](ea-pricing.md). +If you're a billing profile Owner, Contributor, Reader, or Invoice Manager, you can download your organization's price sheet from the Azure portal. See [View and download your organization's pricing](ea-pricing.md). ## Terms and descriptions in your price sheet The following section describes the important terms shown in your Microsoft Cust | meterId | Unique identifier for the meter. | | meterCategory | Name of the classification category for the meter. For example, _Cloud services_, _Networking_, etc. | | meterName | Name of the meter. The meter represents the deployable resource of an Azure service. |-| meterSubCategory | Name of the meter sub-classification category. | +| meterSubCategory | Name of the meter subclassification category. | | meterType | Name of the meter type. | | meterRegion | Name of the region where the meter for the service is available. Identifies the location of the datacenter for certain services that are priced based on datacenter location. |-| Product | Name of the product accruing the charges.Ex: Basic SQL DB vs Standard SQL DB | +| priceType | Price type for a product. For example, an Azure resource has its pay-as-you-go rate with priceType as *Consumption*. If the resource is eligible for a savings plan, it also has its savings plan rate with another priceType as *SavingsPlan*. | +| Product | Name of the product accruing the charges. For example, Basic SQL DB vs Standard SQL DB. | | productId | Unique identifier for the product whose meter is consumed. | | productOrderName | Name of the purchased product plan. |-| serviceFamily | Type of Azure service.Ex: Compute, Analytics, Security | +| serviceFamily | Type of Azure service. For example, Compute, Analytics, and Security. | +| Term | Duration associated with `priceType`. For example, SavingsPlan priceType has two commitment options: one year and three years. The Term will be *P1Y* for a one-year commitment and *P3Y* for a three-year commitment. | | tierMinimumUnits | Defines the lower bound of the tier range for which prices are defined. For example, if the range is 0 to 100, tierMinimumUnits would be 0. | | unitOfMeasure | Identifies the units of measure for billing for the service. For example, compute services are billed per hour. |-| unitPrice | Price per unit at the time of billing (not the effective blended price) as specific to a meter and product order name. Note: The unit price is not the same as the effective price in usage details downloads in case of services that have differential prices across tiers. In case of services with multi-tiered pricing, the effective price is a blended rate across the tiers and does not show a tier-specific unit price. The blended price or effective price is the net price for the consumed quantity spanning across the multiple tiers (where each tier has a specific unit price). | -+| unitPrice | Price per unit at the time of billing (not the effective blended price) as specific to a meter and product order name. 
Note: The unit price isn't the same as the effective price in usage details downloads when services have differential prices across tiers. If services have multi-tiered pricing, the effective price is a blended rate across the tiers and doesn't show a tier-specific unit price. The blended price or effective price is the net price for the consumed quantity spanning across the multiple tiers (where each tier has a specific unit price). | ## Check access to a Microsoft Customer Agreement [!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)] |
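To make the blended-rate note on `unitPrice` above concrete, here's a small illustrative Java sketch that computes the effective price for a consumed quantity spanning two tiers. The tier boundaries and prices are invented for the example, not real Azure rates:

```java
import java.util.LinkedHashMap;

public class BlendedPriceSketch {
    /**
     * Computes the cost of a consumed quantity that spans multiple pricing tiers.
     * Keys are the tierMinimumUnits lower bounds (in ascending order); values are
     * the per-unit prices for each tier.
     */
    static double blendedCost(LinkedHashMap<Double, Double> tiers, double quantity) {
        double cost = 0;
        Double[] bounds = tiers.keySet().toArray(new Double[0]);
        for (int i = 0; i < bounds.length; i++) {
            double lower = bounds[i];
            double upper = (i + 1 < bounds.length) ? bounds[i + 1] : Double.MAX_VALUE;
            if (quantity <= lower) break;
            double unitsInTier = Math.min(quantity, upper) - lower;
            cost += unitsInTier * tiers.get(bounds[i]);
        }
        return cost;
    }

    public static void main(String[] args) {
        // Hypothetical meter: first 100 units at $0.10, remainder at $0.08.
        LinkedHashMap<Double, Double> tiers = new LinkedHashMap<>();
        tiers.put(0.0, 0.10);
        tiers.put(100.0, 0.08);

        double quantity = 250;
        double cost = blendedCost(tiers, quantity); // 100*0.10 + 150*0.08 = 22.00
        // The effective (blended) price, 0.0880, matches neither tier's unitPrice.
        System.out.printf("cost=%.2f, effective price per unit=%.4f%n", cost, cost / quantity);
    }
}
```

This is exactly why the effective price in usage details downloads can differ from any single tier-specific `unitPrice` in the price sheet.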
data-factory | Concepts Change Data Capture Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md | The new Change Data Capture resource in ADF allows for full fidelity change data * Currently, when creating source/target mappings, each source and target is only allowed to be used once. * Continuous, real-time streaming is coming soon. * Allow schema drift is coming soon.+* Complex types are currently unsupported. For more information on known limitations and troubleshooting assistance, please reference [this troubleshooting guide](change-data-capture-troubleshoot.md). |
dev-box | Quickstart Configure Dev Box Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md | After you've completed this quickstart, you'll have a Dev Box configuration read ## Prerequisites To complete this quick start, make sure that you have:-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).+- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Owner or Contributor permissions on an Azure Subscription or a specific resource group. - Network Contributor permissions on an existing virtual network (owner or contributor) or permission to create a new virtual network and subnet. - User licenses. To use Dev Box, each user must be licensed for Windows 11 Enterprise or Windows 10 Enterprise, Microsoft Intune, and Azure Active Directory P1. |
event-hubs | Azure Event Hubs Kafka Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/azure-event-hubs-kafka-overview.md | Title: Use Azure Event Hubs from an Apache Kafka app -description: This article provides you the information on using Azure Event Hubs to stream data from Apache Kafka applications without setting up a Kafka cluster. + Title: Azure Event Hubs for Apache Kafka ecosystems +description: Learn how Apache Kafka application developers can use Azure Event Hubs instead of building and using their own Kafka clusters. Last updated 02/01/2023 keywords: "Kafka, Azure, topics, message-broker" -# Use Azure Event Hubs from Apache Kafka applications --This article provides information about using Azure Event Hubs to stream data from [Apache Kafka](https://kafka.apache.org) applications without setting up a Kafka cluster on your own. +# Azure Event Hubs for Apache Kafka ecosystems +Azure Event Hubs provides an Apache Kafka endpoint on an event hub, which enables users to connect to the event hub using the Kafka protocol. You can often use an event hub's Kafka endpoint from your applications without any code changes. You modify only the configuration, that is, update the connection string in configurations to point to the Kafka endpoint exposed by your event hub instead of pointing to a Kafka cluster. Then, you can start streaming events from your applications that use the Kafka protocol into event hubs, which are equivalent to Kafka topics. > [!NOTE]-> Event Hubs supports Apache Kafka's producer and consumer APIs clients at version 1.0 and above. ---## Azure Event Hubs for Apache Kafka overview +> Event Hubs for Kafka Ecosystems supports [Apache Kafka version 1.0](https://kafka.apache.org/10/documentation.html) and later. -The Event Hubs for Apache Kafka feature provides a protocol head on top of Azure Event Hubs that is protocol compatible with Apache Kafka clients built for Apache Kafka server versions 1.0 and later and supports for both reading from and writing to Event Hubs, which are equivalent to Apache Kafka topics. --You can often use the Event Hubs Kafka endpoint from your applications without code changes and only modify the configuration: Update the connection string in configurations to point to the Kafka endpoint exposed by your event hub instead of pointing to your Kafka cluster. Then, you can start streaming events from your applications that use the Kafka protocol into Event Hubs. --Conceptually, Kafka and Event Hubs are very similar: they're both partitioned logs built for streaming data, whereby the client controls which part of the retained log it wants to read. The following table maps concepts between Kafka and Event Hubs. +This article provides detailed information on using Azure Event Hubs to stream data from [Apache Kafka](https://kafka.apache.org) applications without setting up a Kafka cluster on your own. ### Kafka and Event Hubs conceptual mapping +Conceptually, Kafka and Event Hubs are very similar. They're both partitioned logs built for streaming data, whereby the client controls which part of the retained log it wants to read. The following table maps concepts between Kafka and Event Hubs. + | Kafka Concept | Event Hubs Concept| | | | | Cluster | Namespace | |
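To illustrate the configuration-only change described in the Event Hubs for Apache Kafka entry above, here's a minimal sketch of a standard Apache Kafka producer pointed at an event hub's Kafka endpoint. The namespace, event hub name, and connection string are placeholders for your own values:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventHubsKafkaProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Point the unmodified Kafka client at the event hub's Kafka endpoint (port 9093).
        props.put("bootstrap.servers", "mynamespace.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // Authenticate with the namespace connection string as the SASL password.
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"$ConnectionString\" "
            + "password=\"Endpoint=sb://mynamespace.servicebus.windows.net/;"
            + "SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>\";");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The Kafka topic name maps to the event hub name.
            producer.send(new ProducerRecord<>("my-event-hub", "key", "hello from a Kafka client"));
        }
    }
}
```

Only the `Properties` values change relative to a producer targeting a self-hosted Kafka cluster; the producer code itself stays the same, which is the point the article makes.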
expressroute | Expressroute Locations Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md | The following table shows connectivity locations and the service providers for e | **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Equinix, PacketFabric | | **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | Supported | DE-CIX, Interxion, Megaport, Telefonica | | **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt, DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect |-| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | Supported | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Orange, Telstra Corporation, TPG Telecom | +| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | Supported | AARNet, Devoli, Equinix, Megaport, NETSG, NEXTDC, Optus, Orange, Telstra Corporation, TPG Telecom | | **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | Supported | Claro, C3ntro, Equinix, Megaport, Neutrona Networks | | **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | Supported | Colt, Equinix, Fastweb, IRIDEOS, Retelit | | **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) | 1 | n/a | Supported | Cologix, Megaport | The following table shows connectivity locations and the service providers for e | **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | Supported |GlobalConnect, Megaport, Telenor | | **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | Sweden Central | Supported | Equinix, Interxion, Megaport, Telia Carrier | | **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ |-| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport, NextDC | +| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport, NETSG, NextDC | | **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom, Chunghwa Telecom, FarEasTone | | **Tel Aviv** | Bezeq International | 2 | n/a | Supported | | | **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon </br></br> | |
hdinsight | Hdinsight 40 Component Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md | This table lists certain HDInsight 4.0 cluster types that have retired or will b ||-||--| | HDInsight 4.0 Spark | 2.3 | June 30, 2020 | June 30, 2020 | | HDInsight 4.0 Kafka | 1.1 | Dec 31, 2020 | Dec 31, 2020 |-| HDInsight 4.0 Kafka | 2.1.0 * | Sep 30, 2022 | Oct 1, 2022 | --* Customers can't create new Kafka 2.1.0 clusters but existing 2.1.0 clusters won't be impacted and will get basic support until September 30, 2022. +| HDInsight 4.0 Kafka | 2.1.0 | Sep 30, 2022 | Oct 1, 2022 | ## Next steps |
healthcare-apis | Dicom Services Conformance Statement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md | The following parameters for each query are supported: We support searching the following attributes and search types. -| Attribute Keyword | Study | Series | Instance | -| :- | :: | :-: | :: | -| `StudyInstanceUID` | X | X | X | -| `PatientName` | X | X | X | -| `PatientID` | X | X | X | -| `PatientBirthDate` | X | X | X | -| `AccessionNumber` | X | X | X | -| `ReferringPhysicianName` | X | X | X | -| `StudyDate` | X | X | X | -| `StudyDescription` | X | X | X | -| `ModalitiesInStudy` | X | X | X | -| `SeriesInstanceUID` | | X | X | -| `Modality` | | X | X | -| `PerformedProcedureStepStartDate` | | X | X | -| `SOPInstanceUID` | | | X | +| Attribute Keyword | All Studies | All Series | All Instances | Study's Series | Study's Instances | Study Series' Instances | +| :- | :: | :-: | :: | :: | :-: | :: | +| `StudyInstanceUID` | X | X | X | | | | +| `PatientName` | X | X | X | | | | +| `PatientID` | X | X | X | | | | +| `PatientBirthDate` | X | X | X | | | | +| `AccessionNumber` | X | X | X | | | | +| `ReferringPhysicianName` | X | X | X | | | | +| `StudyDate` | X | X | X | | | | +| `StudyDescription` | X | X | X | | | | +| `ModalitiesInStudy` | X | X | X | | | | +| `SeriesInstanceUID` | | X | X | X | X | | +| `Modality` | | X | X | X | X | | +| `PerformedProcedureStepStartDate` | | X | X | X | X | | +| `ManufacturerModelName` | | X | X | X | X | | +| `SOPInstanceUID` | | | X | | X | X | #### Search matching |
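To show how the searchable attributes in the table above are used in practice, here's a sketch of a QIDO-RS "all studies" search using Java's built-in HTTP client. The service URL and access token are placeholders, and the exact base URL format for your DICOM service may differ:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QidoSearchSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical service URL and Azure AD token; substitute your own values.
        String serviceUrl = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1";
        String token = "<azure-ad-access-token>";

        // Search all studies by PatientName, asking for ModalitiesInStudy as an extra field.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(serviceUrl + "/studies?PatientName=Andersen&includefield=ModalitiesInStudy"))
            .header("Accept", "application/dicom+json")
            .header("Authorization", "Bearer " + token)
            .GET()
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body()); // DICOM JSON array of matching study attributes
    }
}
```

Series-level and instance-level searches follow the same pattern against the `/series` and `/instances` routes (or the nested `/studies/{uid}/series` forms), using the attribute columns shown in the table above.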
iot-dps | Concepts Device Oem Security Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-oem-security-practices.md | For more information, see [provisioning](about-iot-dps.md#provisioning-process) ## Resources In addition to the recommended security practices in this article, Azure IoT provides resources to help with selecting secure hardware and creating secure IoT deployments: -- Azure IoT [security recommendations](../iot-fundamentals/security-recommendations.md) to guide the deployment process. +- Azure IoT [security best practices](../iot-fundamentals/iot-security-best-practices.md) to guide the deployment process. - The [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/) offers a service to help create secure IoT deployments. - For help with evaluating your hardware environment, see the whitepaper [Evaluating your IoT Security](https://download.microsoft.com/download/D/3/9/D3948E3C-D5DC-474E-B22F-81BA8ED7A446/Evaluating_Your_IOT_Security_whitepaper_EN_US.pdf). - For help with selecting secure hardware, see [The Right Secure Hardware for your IoT Deployment](https://download.microsoft.com/download/C/0/5/C05276D6-E602-4BB1-98A4-C29C88E57566/The_right_secure_hardware_for_your_IoT_deployment_EN_US.pdf). |
iot-edge | How To Update Iot Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md | keywords: Previously updated : 11/29/2022 Last updated : 2/2/2023 The way that you update the IoT Edge agent and IoT Edge hub containers depends o Check the version of the IoT Edge agent and IoT Edge hub modules currently on your device using the commands `iotedge logs edgeAgent` or `iotedge logs edgeHub`. If you're using IoT Edge for Linux on Windows, you need to SSH into the Linux virtual machine to check the runtime module versions. -  ### Understand IoT Edge tags If you use specific tags in your deployment (for example, mcr.microsoft.com/azur 1. In the IoT Hub in the Azure portal, select your IoT Edge device, and select **Set Modules**. -1. In the **IoT Edge Modules** section, select **Runtime Settings**. +1. On the **Modules** tab, select **Runtime Settings**. -  + :::image type="content" source="./media/how-to-update-iot-edge/configure-runtime.png" alt-text="Screenshot that shows location of the Runtime Settings tab."::: -1. In **Runtime Settings**, update the **Image** value for **Edge Hub** with the desired version. Don't select **Save** yet. +1. In **Runtime Settings**, update the **Image URI** value in the **Edge Agent** section with the desired version. Don't select **Apply** yet. -  + :::image type="content" source="./media/how-to-update-iot-edge/runtime-settings-edgeagent.png" alt-text="Screenshot that shows where to update the image U R I with your version in the Edge Agent."::: -1. Collapse the **Edge Hub** settings, or scroll down, and update the **Image** value for **Edge Agent** with the same desired version. +1. Select the **Edge Hub** tab and update the **Image URI** value with the same desired version. -  + :::image type="content" source="./media/how-to-update-iot-edge/runtime-settings-edgehub.png" alt-text="Screenshot that shows where to update the image U R I with your version in the Edge Hub."::: -1. Select **Save**. +1. Select **Apply** to save changes. -1. Select **Review + create**, review the deployment, and select **Create**. +1. Select **Review + create**, review the deployment as seen in the JSON file, and select **Create**. ## Special case: Update from 1.0 or 1.1 to latest release Currently, there's no support for IoT Edge version 1.4 running on Windows device -Now that the IoT Edge service running on your devices has been updated, follow the steps in this article to also [Update the runtime containers](#update-the-runtime-containers). -+Now that the latest IoT Edge service is running on your devices, you also need to [Update the runtime containers](#update-the-runtime-containers) to the latest version. The updating process for runtime containers is the same as the updating process for the IoT Edge service. ## Next steps |
iot-edge | Iot Edge As Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-as-gateway.md | Custom or third-party modules that are often specific to the downstream device's There are two patterns for translation gateways: *protocol translation* and *identity translation*. - ### Protocol translation |
iot-edge | Iot Edge Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-runtime.md | The IoT Edge runtime is responsible for the following functions on IoT Edge devi - An IoT Edge device and the cloud - IoT Edge devices - The responsibilities of the IoT Edge runtime fall into two categories: communication and module management. These two roles are performed by two components that are part of the IoT Edge runtime. The *IoT Edge agent* deploys and monitors the modules, while the *IoT Edge hub* is responsible for communication. The IoT Edge hub isn't a full version of IoT Hub running locally. IoT Edge hub s To reduce the bandwidth that your IoT Edge solution uses, the IoT Edge hub optimizes how many actual connections are made to the cloud. IoT Edge hub takes logical connections from modules or downstream devices and combines them for a single physical connection to the cloud. The details of this process are transparent to the rest of the solution. Clients think they have their own connection to the cloud even though they're all being sent over the same connection. The IoT Edge hub can either use the AMQP or the MQTT protocol to communicate upstream with the cloud, independently from protocols used by downstream devices. However, the IoT Edge hub currently only supports combining logical connections into a single physical connection by using AMQP as the upstream protocol and its multiplexing capabilities. AMQP is the default upstream protocol. - IoT Edge hub can determine whether it's connected to IoT Hub. If the connection is lost, IoT Edge hub saves messages or twin updates locally. Once a connection is reestablished, it syncs all the data. The location used for this temporary cache is determined by a property of the IoT Edge hub's module twin. The size of the cache isn't capped and will grow as long as the device has storage capacity. For more information, see [Offline capabilities](offline-capabilities.md). IoT Edge hub facilitates local communication. It enables device-to-module and mo The brokering mechanism uses the same routing features as IoT Hub to specify how messages are passed between devices or modules. First, devices or modules specify the inputs on which they accept messages and the outputs to which they write messages. Then a solution developer can route messages between a source (for example, outputs), and a destination (for example, inputs), with potential filters. - Routing can be used by devices or modules built with the Azure IoT Device SDKs using the AMQP protocol. All IoT Hub messaging primitives (telemetry, direct methods, C2D messages, and twins) are supported, but communication over user-defined topics isn't supported. |
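To make the input/output routing model from the IoT Edge runtime entry above concrete, here's a sketch of a module sending telemetry to a named output with the Azure IoT Java device SDK. The output name, payload, and the route shown in the comment are illustrative assumptions, and the exact SDK surface can vary by version:

```java
import com.microsoft.azure.sdk.iot.device.IotHubClientProtocol;
import com.microsoft.azure.sdk.iot.device.Message;
import com.microsoft.azure.sdk.iot.device.ModuleClient;

public class RoutingSketch {
    public static void main(String[] args) throws Exception {
        // Inside an IoT Edge module container, the connection settings come from
        // environment variables that the IoT Edge runtime injects.
        ModuleClient client = ModuleClient.createFromEnvironment(IotHubClientProtocol.MQTT);
        client.open();

        // Write to a named output. A route declared in the deployment manifest,
        // for example: FROM /messages/modules/<module>/outputs/output1 INTO $upstream
        // decides where the IoT Edge hub delivers the message next.
        Message message = new Message("{\"temperature\": 21.3}");
        client.sendEventAsync(message,
            (responseStatus, callbackContext) -> System.out.println("Status: " + responseStatus),
            null,
            "output1");
    }
}
```

The module only names its output; wiring that output to `$upstream` or to another module's input is the solution developer's routing decision, which matches the separation the entry above describes.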
iot-edge | Production Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md | Before you put any device in production you should know how you're going to mana * IoT Edge * CA certificates -[Device Update for IoT Hub](../iot-hub-device-update/index.yml) (Preview) is a service that enables you to deploy over-the-air updates (OTA) for your IoT Edge devices. +[Device Update for IoT Hub](../iot-hub-device-update/index.yml) is a service that enables you to deploy over-the-air updates (OTA) for your IoT Edge devices. Alternative methods for updating IoT Edge require physical or SSH access to the IoT Edge device. For more information, see [Update the IoT Edge runtime](how-to-update-iot-edge.md). To update multiple devices, consider adding the update steps to a script or use an automation tool like Ansible. |
iot-fundamentals | Iot Security Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-security-architecture.md | Title: IoT Security Architecture -description: Security architecture guidelines and considerations for Azure IoT solutions -+ Title: Security architecture ++description: Security architecture guidelines and considerations for Azure IoT solutions illustrated using the IoT reference architecture + Previously updated : 08/26/2022- Last updated : 02/10/2023+ -# Internet of Things (IoT) security architecture +# Security architecture for IoT solutions ++When you design and architect an IoT solution, it's important to understand the potential threats and include appropriate defenses. Understanding how an attacker might compromise a system helps you to make sure that the appropriate mitigations are in place from the start. ++## Threat modeling ++Microsoft recommends using a threat modeling process as part of your IoT solution design. If you're not familiar with threat modeling and the secure development lifecycle, see: ++- [Threat modeling](https://www.microsoft.com/securityengineering/sdl/threatmodeling) +- [Secure development best practices on Azure](/azure/security/develop/secure-dev-overview) +- [Getting started guide](/azure/security/develop/threat-modeling-tool-getting-started) ++## Security in IoT ++It's helpful to divide your IoT architecture into several zones as part of the threat modeling exercise: ++- Device +- Field gateway +- Cloud gateway +- Service ++Each zone often has its own data and authentication and authorization requirements. You can also use zones to isolate damage and restrict the impact of low trust zones on higher trust zones. ++Each zone is separated by a _trust boundary_, shown as the dotted red line in the following diagram. It represents a transition of data from one source to another. During this transition, the data could be subject to the following threats: ++- Spoofing +- Tampering +- Repudiation +- Information disclosure +- Denial of service +- Elevation of privilege ++To learn more, see the [STRIDE model](/azure/security/develop/threat-modeling-tool-threats#stride-model). +++You can use STRIDE to model the threats to each component within each zone. The following sections elaborate on each of the components and specific security concerns and solutions that should be put into place. ++The remainder of this article discusses the threats and mitigations for these zones and components in more detail. ++## Device zone ++The device environment is the space around the device where physical access and local network digital access to the device is feasible. A local network is assumed to be distinct and insulated from, but potentially bridged to, the public internet. The device environment includes any short-range wireless radio technology that permits peer-to-peer communication of devices. It doesn't include any network virtualization technology creating the illusion of such a local network. It doesn't include public operator networks that require any two devices to communicate across public network space if they were to enter a peer-to-peer communication relationship. ++## Field gateway zone ++A field gateway is a device, appliance, or general-purpose server computer software that acts as a communication enabler and, potentially, as a device control system and device data processing hub. The field gateway zone includes the field gateway itself and all the devices attached to it. 
Field gateways act outside dedicated data processing facilities, are usually location bound, are potentially subject to physical intrusion, and have limited operational redundancy. A field gateway is typically a thing that an attacker could physically sabotage if they gained physical access. ++A field gateway differs from a traffic router in that it has an active role in managing access and information flow. The field gateway has two distinct surface areas. One faces the devices attached to it and represents the inside of the zone. The other faces all external parties and is the edge of the zone. ++## Cloud gateway zone ++A cloud gateway is a system that enables remote communication from and to devices or field gateways deployed in multiple sites. The cloud gateway typically enables a cloud-based control and data analysis system, or a federation of such systems. In some cases, a cloud gateway may immediately facilitate access to special-purpose devices from terminals such as tablets or phones. In the cloud gateway zone, operational measures prevent targeted physical access, and the zone isn't necessarily exposed to a public cloud infrastructure. ++A cloud gateway may be mapped into a network virtualization overlay to insulate the cloud gateway and all of its attached devices or field gateways from any other network traffic. The cloud gateway itself isn't a device control system or a processing or storage facility for device data; those facilities interface with the cloud gateway. The cloud gateway zone includes the cloud gateway itself along with all field gateways and devices directly or indirectly attached to it. The edge of the zone is a distinct surface area that all external parties communicate through. ++## Services zone ++A service in this context is any software component or module that interfaces with devices through a field or cloud gateway. A service can collect data from the devices and command and control those devices. A service is a mediator that acts under its identity towards gateways and other subsystems to: ++- Store and analyze data +- Issue commands to devices based on data insights or schedules +- Expose information and control capabilities to authorized end users ++## IoT devices ++IoT devices are often special-purpose devices that range from simple temperature sensors to complex factory production lines with thousands of components inside them. Example IoT device capabilities include: ++- Measuring and reporting environmental conditions +- Turning valves +- Controlling servos +- Sounding alarms +- Switching lights on or off ++The purpose of these devices dictates their technical design and the available budget for their production and scheduled lifetime operation. The combination of these factors constrains the available operational energy budget, physical footprint, and available storage, compute, and security capabilities. ++Things that can go wrong with an automated or remotely controlled IoT device include: ++- Physical defects +- Control logic defects +- Willful unauthorized intrusion and manipulation ++The consequences of these failures could be severe, such as destroyed production lots, buildings burnt down, or injury and death. Therefore, there's a high security bar for devices that make things move or that report sensor data that results in commands that cause things to move. 
++### Device control and device data interactions ++Connected special-purpose devices have a significant number of potential interaction surface areas and interaction patterns, all of which must be considered to provide a framework for securing digital access to those devices. _Digital access_ refers to operations that are carried out through software and hardware rather than through direct physical access to the device. For example, physical access could be controlled by putting the device into a room with a lock on the door. While physical access can't be denied using software and hardware, measures can be taken to prevent physical access from leading to system interference. ++As you explore the interaction patterns, look at _device control_ and _device data_ with the same level of attention. Device control refers to any information provided to a device with the intention of modifying its behavior. Device data refers to information that a device emits to any other party about its state and the observed state of its environment. ++## Threat modeling for the Azure IoT reference architecture ++This section uses the [Azure IoT reference architecture](/azure/architecture/reference-architectures/iot) to demonstrate how to think about threat modeling for IoT and how to address the threats identified: +++The following diagram provides a simplified view of the reference architecture by using a data flow diagram model: +++The architecture separates the device and field gateway capabilities. This approach enables you to use more secure field gateway devices. Field gateway devices can communicate with the cloud gateway using secure protocols, which typically require greater processing power than a simple device, such as a thermostat, could provide on its own. In the **Azure Services Zone** in the diagram, the Azure IoT Hub service is the cloud gateway. ++Based on the architecture outlined previously, the following sections show some threat modeling examples. The examples focus on the core elements of a threat model: ++- Processes +- Communication +- Storage ++### Processes ++Here are some examples of threats in the processes category. The threats are categorized based on the STRIDE model: ++**Spoofing**: An attacker may extract cryptographic keys from a device, either at the software or hardware level. The attacker then uses these keys to access the system from a different physical or virtual device by using the identity of the original device. ++**Denial of Service**: A device can be rendered incapable of functioning or communicating by interfering with radio frequencies or cutting wires. For example, a surveillance camera that had its power or network connection intentionally knocked out can't report data at all. ++**Tampering**: An attacker may partially or wholly replace the software on the device. If the device's cryptographic keys are available to the attacker's code, it can then use the identity of the device. ++**Tampering**: A surveillance camera that's showing a visible-spectrum picture of an empty hallway could be aimed at a photograph of such a hallway. A smoke or fire sensor could be reporting someone holding a lighter under it. In either case, the device may be technically fully trustworthy towards the system, but it reports manipulated information. ++**Tampering**: An attacker may use extracted cryptographic keys to intercept and suppress data sent from the device and replace it with false data that's authenticated with the stolen keys. 
++**Information Disclosure**: If the device is running manipulated software, such manipulated software could potentially leak data to unauthorized parties. ++**Information Disclosure**: An attacker may use extracted cryptographic keys to inject code into the communication path between the device and field gateway or cloud gateway to siphon off information. ++**Denial of Service**: The device can be turned off or turned into a mode where communication isn't possible (which is intentional in many industrial machines). ++**Tampering**: The device can be reconfigured to operate in a state unknown to the control system (outside of known calibration parameters) and thus provide data that can be misinterpreted. ++**Elevation of Privilege**: A device that does a specific function can be forced to do something else. For example, a valve that is programmed to open half way can be tricked into opening all the way. ++**Spoofing/Tampering/Repudiation**: If not secured (which is rarely the case with consumer remote controls), an attacker can manipulate the state of a device anonymously. A good illustration is a remote control that can turn off any TV. ++The following table shows example mitigations to these threats. The values in the threat column are abbreviations: ++- Spoofing (S) +- Tampering (T) +- Repudiation (R) +- Information disclosure (I) +- Denial of service (D) +- Elevation of privilege (E) ++| Component | Threat | Mitigation | Risk | Implementation | +| | | | | | +| Device |S |Assigning identity to the device and authenticating the device |Replacing device or part of the device with some other device. How do you know you're talking to the right device? |Authenticating the device, using Transport Layer Security (TLS) or IPSec. Infrastructure should support using pre-shared key (PSK) on those devices that can't handle full asymmetric cryptography. Use Azure AD, [OAuth](https://www.rfc-editor.org/pdfrfc/rfc6755.txt.pdf) | +|| TRID |Apply tamperproof mechanisms to the device, for example, by making it hard or impossible to extract keys and other cryptographic material from the device. |The risk is someone tampering with the device (physical interference). How can you be sure that the device hasn't been tampered with? |The most effective mitigation is a trusted platform module (TPM). A TPM stores keys in special on-chip circuitry from which the keys can't be read, but can only be used for cryptographic operations that use the key. Memory encryption of the device. Key management for the device. Signing the code. | +|| E |Having access control of the device. Authorization scheme. |If the device allows for individual actions to be performed based on commands from an outside source, or even compromised sensors, it allows the attacker to perform operations not otherwise accessible. |Having an authorization scheme for the device | +| Field Gateway |S |Authenticating the Field gateway to Cloud Gateway (such as cert based, PSK, or Claim based.) |If someone can spoof Field Gateway, then it can present itself as any device. |TLS RSA/PSK, IPSec, [RFC 4279](https://tools.ietf.org/html/rfc4279). All the same key storage and attestation concerns of devices in general apply; the best case is to use a TPM. 6LowPAN extension for IPSec to support Wireless Sensor Networks (WSN). | +|| TRID |Protect the Field Gateway against tampering (TPM) |Spoofing attacks that trick the cloud gateway into thinking it's talking to the field gateway could result in information disclosure and data tampering |Memory encryption, TPMs, authentication. 
| || E |Access control mechanism for Field Gateway | | | ++### Communication ++Here are some examples of threats in the communication category. The threats are categorized based on the STRIDE model: ++**Denial of Service**: Constrained devices are generally under DoS threat when they actively listen for inbound connections or unsolicited datagrams on a network. An attacker can open many connections in parallel and either not service them or service them slowly, or flood the device with unsolicited traffic. In both cases, the device can effectively be rendered inoperable on the network. ++**Spoofing, Information Disclosure**: Constrained devices and special-purpose devices often have one-for-all security facilities such as password or PIN protection. Sometimes they wholly rely on trusting the network, and grant access to information to any device on the same network. If the network is protected by a shared key that gets disclosed, an attacker could control the device or observe the data it transmits. ++**Spoofing**: An attacker may intercept or partially override the broadcast and spoof the originator. ++**Tampering**: An attacker may intercept or partially override the broadcast and send false information. ++**Information Disclosure:** An attacker may eavesdrop on a broadcast and obtain information without authorization. ++**Denial of Service:** An attacker may jam the broadcast signal and deny information distribution. ++The following table shows example mitigations to these threats: ++| Component | Threat | Mitigation | Risk | Implementation | +| | | | | | +| Device IoT Hub |TID |(D)TLS (PSK/RSA) to encrypt the traffic |Eavesdropping on or interfering with the communication between the device and the gateway |Security on the protocol level. With custom protocols, you need to figure out how to protect them. In most cases, the communication takes place from the device to the IoT Hub (device initiates the connection). | +| Device to Device |TID |(D)TLS (PSK/RSA) to encrypt the traffic. |Reading data in transit between devices. Tampering with the data. Overloading the device with new connections |Security on the protocol level (MQTT/AMQP/HTTP/CoAP). With custom protocols, you need to figure out how to protect them. The mitigation for the DoS threat is to peer devices through a cloud or field gateway and have them only act as clients towards the network. The peering may result in a direct connection between the peers after having been brokered by the gateway | +| External Entity Device |TID |Strong pairing of the external entity to the device |Eavesdropping on the connection to the device. Interfering with the communication with the device |Securely pairing the external entity to the device using NFC/Bluetooth LE. Controlling the operational panel of the device (Physical) | +| Field Gateway Cloud Gateway |TID |TLS (PSK/RSA) to encrypt the traffic. |Eavesdropping on or interfering with the communication between the device and the gateway |Security on the protocol level (MQTT/AMQP/HTTP/CoAP). With custom protocols, you need to figure out how to protect them. | +| Device Cloud Gateway |TID |TLS (PSK/RSA) to encrypt the traffic. |Eavesdropping on or interfering with the communication between the device and the gateway |Security on the protocol level (MQTT/AMQP/HTTP/CoAP). With custom protocols, you need to figure out how to protect them. 
| ++### Storage ++The following table shows example mitigations to the storage threats: ++| Component | Threat | Mitigation | Risk | Implementation | +| | | | | | +| Device storage |TRID |Storage encryption, signing the logs |Reading data from the storage, tampering with telemetry data. Tampering with queued or cached command control data. Tampering with configuration or firmware update packages while cached or queued locally can lead to OS and/or system components being compromised |Encryption, message authentication code (MAC), or digital signature. Where possible, strong access control through resource access control lists (ACLs) or permissions. | +| Device OS image |TRID | |Tampering with the OS/replacing OS components |Read-only OS partition, signed OS image, encryption | +| Field Gateway storage (queuing the data) |TRID |Storage encryption, signing the logs |Reading data from the storage, tampering with telemetry data, tampering with queued or cached command control data. Tampering with configuration or firmware update packages (destined for devices or field gateway) while cached or queued locally can lead to OS and/or system components being compromised |BitLocker | +| Field Gateway OS image |TRID | |Tampering with the OS/replacing OS components |Read-only OS partition, signed OS image, encryption | ## See also |
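As one concrete reading of the MAC mitigation in the device storage row above, here's a small illustrative Java sketch that tags a telemetry record with HMAC-SHA256 before it's written to local storage. The key handling and record format are invented for the example; a real device would keep the key in a TPM or another secure store rather than in code:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

public class TelemetryMacSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative key only; never hard-code a real device key.
        byte[] deviceKey = "example-device-key".getBytes(StandardCharsets.UTF_8);
        String telemetry = "{\"deviceId\":\"sensor-01\",\"temperature\":21.3}";

        // Compute an HMAC-SHA256 tag over the record before queuing it locally;
        // re-verifying the tag on read detects tampering with cached data.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(deviceKey, "HmacSHA256"));
        byte[] tag = mac.doFinal(telemetry.getBytes(StandardCharsets.UTF_8));

        // Store the record together with its tag (hypothetical record format).
        System.out.println(telemetry + "|" + HexFormat.of().formatHex(tag));
    }
}
```

A MAC protects integrity but not confidentiality, which is why the table pairs it with storage encryption and access control lists.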
iot-fundamentals | Iot Security Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-security-best-practices.md | Title: Internet of Things (IoT) security best practices -description: Best practices for securing your IoT data and infrastructure -+ Title: Security best practices ++description: Security best practices for building, deploying, and operating your IoT solution. Includes recommendations for devices, data, and infrastructure + Previously updated : 08/26/2022- Last updated : 02/10/2023+ -# Security best practices for Internet of Things (IoT) +# Security best practices for IoT solutions -## See also +You can divide security in an IoT solution into the following three areas: -Read about IoT Hub security in [Control access to IoT Hub](../iot-hub/iot-hub-devguide-security.md) in the IoT Hub developer guide. +- **Device security**: Securing the IoT device while it's deployed in the wild. ++- **Connection security**: Ensuring all data transmitted between the IoT device and IoT Hub is confidential and tamper-proof. ++- **Cloud security**: Providing a means to secure data while it moves through, and is stored in the cloud. ++Implementing the recommendations in this article will help you meet the security obligations described in the shared responsibility model. To learn more about what Microsoft does to fulfill service provider responsibilities, see [Shared responsibilities for cloud computing](../security/fundamentals/shared-responsibility.md). ++## Responsibilities ++You can develop and execute an IoT security strategy with the active participation of the various players involved in the manufacturing, development, and deployment of IoT devices and infrastructure. The following list is a high-level description of these players. ++- **Hardware manufacturer/integrator**: The manufacturers of IoT hardware you're deploying, the integrators assembling hardware from various manufacturers, or the suppliers providing the hardware. ++- **Solution developer**: The solution developer may be part of an in-house team or a system integrator specializing in this activity. The IoT solution developer can develop various components of the IoT solution from scratch, or integrate various off-the-shelf or open-source components. ++- **Solution deployer**: After an IoT solution is developed, it needs to be deployed in the field. This process involves deployment of hardware, interconnection of devices, and deployment of solutions in hardware devices or the cloud. ++- **Solution operator**: After the IoT solution is deployed, it requires long-term operations, monitoring, upgrades, and maintenance. These tasks can be done by an in-house team that monitors the correct behavior of overall IoT infrastructure. ++## Microsoft Defender for IoT ++Microsoft Defender for IoT can automatically monitor some of the recommendations included in this article. Microsoft Defender for IoT should be the first line of defense to protect your resources in Azure. Microsoft Defender for IoT periodically analyzes the security state of your Azure resources to identify potential security vulnerabilities. It then provides you with recommendations on how to address them. ++- To learn more about Microsoft Defender for IoT recommendations, see [Security recommendations in Microsoft Defender for IoT](../security-center/security-center-recommendations.md). 
+- To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT?](../security-center/security-center-introduction.md). ++## Device security ++- **Scope hardware to minimum requirements**: Select your device hardware to include the minimum features required for its operation, and nothing more. For example, only include USB ports if they're necessary for the operation of the device in your solution. Extra features can expose the device to unwanted attack vectors. ++- **Select tamper-proof hardware**: Select device hardware with built-in mechanisms to detect physical tampering, such as the opening of the device cover or the removal of a part of the device. These tamper signals can be part of the data stream uploaded to the cloud, which can alert operators to these events. ++- **Select secure hardware**: If possible, choose device hardware that includes security features such as secure and encrypted storage and boot functionality based on a Trusted Platform Module. These features make devices more secure and help protect the overall IoT infrastructure. ++- **Enable secure upgrades**: Firmware upgrades during the lifetime of the device are inevitable. Build devices with secure paths for upgrades and cryptographic assurance of firmware versions to secure your devices during and after upgrades. ++- **Follow a secure software development methodology**: The development of secure software requires you to consider security from the inception of the project all the way through implementation, testing, and deployment. The [Microsoft Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) provides a step-by-step approach to building secure software. ++- **Use device SDKs whenever possible**: Device SDKs implement various security features such as encryption and authentication that help you develop robust and secure device applications. To learn more, see [Understand and use Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md). ++- **Choose open-source software with care**: Open-source software provides an opportunity to quickly develop solutions. When you're choosing open-source software, consider the activity level of the community for each open-source component. An active community ensures that software is supported and that issues are discovered and addressed. An obscure and inactive open-source software project might not be supported and issues aren't likely to be discovered. ++- **Deploy hardware securely**: IoT deployments may require you to deploy hardware in unsecure locations, such as in public spaces or unsupervised locales. In such situations, ensure that hardware deployment is as tamper-proof as possible. For example, if the hardware has USB ports, ensure that they're covered securely. ++- **Keep authentication keys safe**: During deployment, each device requires device IDs and associated authentication keys generated by the cloud service. Keep these keys physically safe even after the deployment. Any compromised key can be used by a malicious device to masquerade as an existing device. ++- **Keep the system up-to-date**: Ensure that device operating systems and all device drivers are upgraded to the latest versions. Keeping operating systems up-to-date helps ensure that they're protected against malicious attacks. ++- **Protect against malicious activity**: If the operating system permits, install the latest antivirus and antimalware capabilities on each device operating system. 
++- **Audit frequently**: Auditing IoT infrastructure for security-related issues is key when responding to security incidents. Most operating systems provide built-in event logging that you should review frequently to make sure no security breach has occurred. A device can send audit information as a separate telemetry stream to the cloud service where it can be analyzed. ++- **Follow device manufacturer security and deployment best practices**: If the device manufacturer provides security and deployment guidance, follow that guidance in addition to the generic guidance listed in this article. ++- **Use a field gateway to provide security services for legacy or constrained devices**: Legacy and constrained devices might lack the capability to encrypt data, connect to the Internet, or provide advanced auditing. In these cases, a modern and secure field gateway can aggregate data from legacy devices and provide the security required for connecting these devices over the Internet. Field gateways can provide secure authentication, negotiation of encrypted sessions, receipt of commands from the cloud, and many other security features. ++## Connection security ++- **Use X.509 certificates to authenticate your devices to IoT Hub**: IoT Hub supports both X.509 certificate-based authentication and security tokens as methods for a device to authenticate with your IoT hub. If possible, use X.509-based authentication in production environments as it provides greater security. To learn more, see [Authenticating a device to IoT Hub](../iot-hub/iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub). ++- **Use Transport Layer Security (TLS) 1.2 to secure connections from devices**: IoT Hub uses TLS to secure connections from IoT devices and services. Three versions of the TLS protocol are currently supported: 1.0, 1.1, and 1.2. TLS 1.0 and 1.1 are considered legacy. To learn more, see [Transport Layer Security (TLS) support in IoT Hub](../iot-hub/iot-hub-tls-support.md). ++- **Ensure you have a way to update the TLS root certificate on your devices**: TLS root certificates are long-lived, but they still may expire or be revoked. If there's no way of updating the certificate on the device, the device may not be able to connect to IoT Hub or any other cloud service at a later date. ++- **Consider using Azure Private Link**: Azure Private Link lets you connect your devices to a private endpoint on your VNet, enabling you to block access to your IoT hub's public device-facing endpoints. To learn more, see [Ingress connectivity to IoT Hub using Azure Private Link](../iot-hub/virtual-network-support.md#ingress-connectivity-to-iot-hub-using-azure-private-link). ++## Cloud security ++- **Follow a secure software development methodology**: The development of secure software requires you to consider security from the inception of the project all the way through implementation, testing, and deployment. The [Microsoft Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) provides a step-by-step approach to building secure software. ++- **Choose open-source software with care**: Open-source software provides an opportunity to quickly develop solutions. When you're choosing open-source software, consider the activity level of the community for each open-source component. An active community ensures that software is supported and that issues are discovered and addressed. 
An obscure and inactive open-source software project might not be supported and issues aren't likely to be discovered. ++- **Integrate with care**: Many software security flaws exist at the boundary of libraries and APIs. Functionality that may not be required for the current deployment might still be available through an API layer. To ensure overall security, make sure to check all interfaces of components being integrated for security flaws. ++- **Protect cloud credentials**: An attacker can use the cloud authentication credentials you use to configure and operate your IoT deployment to gain access to and compromise your IoT system. Protect the credentials by changing the password frequently, and don't use these credentials on public machines. ++- **Define access controls for your IoT hub**: Understand and define the type of access that each component in your IoT Hub solution needs based on the required functionality. There are two ways you can grant permissions for the service APIs to connect to your IoT hub: [Azure Active Directory](../iot-hub/iot-hub-dev-guide-azure-ad-rbac.md) or [Shared Access signatures](../iot-hub/iot-hub-dev-guide-sas.md). ++- **Define access controls for backend services**: Other Azure services can consume the data your IoT Hub ingests from your devices by using the IoT hub's Event Hubs-compatible endpoint. You can also use IoT Hub message routing to deliver the data from your devices to other Azure services. Understand and configure appropriate access permissions for IoT Hub to connect to these services. To learn more, see [Read device-to-cloud messages from the built-in endpoint](../iot-hub/iot-hub-devguide-messages-read-builtin.md) and [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](../iot-hub/iot-hub-devguide-messages-d2c.md). ++- **Monitor your IoT solution from the cloud**: Monitor the overall health of your IoT Hub solution using the [metrics in Azure Monitor](../iot-hub/monitor-iot-hub.md). ++- **Set up diagnostics**: Monitor your operations by logging events in your solution, and then sending the diagnostic logs to Azure Monitor. To learn more, see [Monitor and diagnose problems in your IoT hub](../iot-hub/monitor-iot-hub.md). ++## Next steps ++Read about IoT Hub security in [Azure security baseline for Azure IoT Hub](/security/benchmark/azure/baselines/iot-hub-security-baseline?toc=/azure/iot-hub/TOC.json) and [Security in your IoT workload](/azure/architecture/framework/iot/iot-security). |
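To make the device SDK and TLS guidance above concrete, here's a minimal sketch using the Azure IoT device SDK for Node.js ([azure-iot-device](https://www.npmjs.com/package/azure-iot-device)). The environment variable name is an assumption for this example, and a connection string is used for brevity; the guidance above prefers X.509 certificate authentication in production.

```javascript
// Minimal sketch: send one telemetry message to IoT Hub over a TLS-secured
// MQTT connection using the Azure IoT device SDK for Node.js.
const { Client, Message } = require("azure-iot-device");
const { Mqtt } = require("azure-iot-device-mqtt");

const connectionString = process.env.IOTHUB_DEVICE_CONNECTION_STRING;
if (!connectionString) throw new Error("IOTHUB_DEVICE_CONNECTION_STRING is empty");

// The SDK negotiates TLS and handles authentication for the connection.
const client = Client.fromConnectionString(connectionString, Mqtt);

async function main() {
  await client.open();
  await client.sendEvent(new Message(JSON.stringify({ temperature: 21.7 })));
  await client.close();
}

main().catch((err) => console.error("telemetry failed:", err));
```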
iot-fundamentals | Iot Security Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-security-deployment.md | - Title: Secure your Azure Internet of Things (IoT) deployment | Microsoft Docs -description: This article details how to secure your Azure IoT deployment. It links to implementation level details for configuring and deploying each component. ----- Previously updated : 08/24/2022---# Secure your Internet of Things (IoT) deployment ---## See also --Read about IoT Hub security in [Control access to IoT Hub](../iot-hub/iot-hub-devguide-security.md) in the IoT Hub developer guide. |
iot-fundamentals | Iot Security Ground Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-security-ground-up.md | - Title: Security for Internet of Things (IoT) from the ground up -description: This article describes the built-in security features of the Microsoft Azure IoT solution accelerators ---- Previously updated : 08/24/2022---# Security for Internet of Things (IoT) from the ground up ---## Next steps --Read about IoT Hub security in [Control access to IoT Hub](../iot-hub/iot-hub-devguide-security.md) in the IoT Hub developer guide. |
iot-fundamentals | Security Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/security-recommendations.md | - Title: Security recommendations for Azure IoT | Microsoft Docs -description: This article summarizes additional steps to ensure security in your Azure IoT Hub solution. ---- Previously updated : 08/24/2022-----# Security recommendations for Azure Internet of Things (IoT) deployment --This article contains security recommendations for IoT. Implementing these recommendations will help you fulfill your security obligations as described in our shared responsibility model. For more information on what Microsoft does to fulfill service provider responsibilities, read [Shared responsibilities for cloud computing](../security/fundamentals/shared-responsibility.md). --Some of the recommendations included in this article can be automatically monitored by Microsoft Defender for IoT, the first line of defense in protecting your resources in Azure. It periodically analyzes the security state of your Azure resources to identify potential security vulnerabilities. It then provides you with recommendations on how to address them. --- For more information on Microsoft Defender for IoT recommendations, see [Security recommendations in Microsoft Defender for IoT](../security-center/security-center-recommendations.md).-- For information on Microsoft Defender for IoT see the [What is Microsoft Defender for IoT?](../security-center/security-center-introduction.md)--## General --| Recommendation | Comments | -|-|-| -| Stay up-to-date | Use the latest versions of supported platforms, programming languages, protocols, and frameworks. | -| Keep authentication keys safe | Keep the device IDs and their authentication keys physically safe after deployment. This will avoid a malicious device masquerade as a registered device. | -| Use device SDKs when possible | Device SDKs implement a variety of security features, such as, encryption, authentication, and so on, to assist you in developing a robust and secure device application. See [Understand and use Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md) for more information. | --## Identity and access management --| Recommendation | Comments | -|-|-| -| Define access control for the hub | [Understand and define the type of access](iot-security-deployment.md#securing-the-cloud) each component will have in your IoT Hub solution, based on the functionality. The allowed permissions are *Registry Read*, *RegistryReadWrite*, *ServiceConnect*, and *DeviceConnect*. Default [shared access policies in your IoT hub](../iot-hub/iot-hub-dev-guide-sas.md#access-control-and-permissions) can also help define the permissions for each component based on its role. | -| Define access control for backend services | Data ingested by your IoT Hub solution can be consumed by other Azure services such as [Azure Cosmos DB](../cosmos-db/index.yml), [Stream Analytics](../stream-analytics/index.yml), [App Service](../app-service/index.yml), [Logic Apps](../logic-apps/index.yml), and [Blob storage](../storage/blobs/storage-blobs-introduction.md). Make sure to understand and allow appropriate access permissions as documented for these services. 
| --## Data protection --| Recommendation | Comments | -|-|-| -| Secure device authentication | Ensure secure communication between your devices and your IoT hub, by using either [a unique identity key or security token](iot-security-deployment.md#iot-hub-security-tokens), or [an on-device X.509 certificate](iot-security-deployment.md#x509-certificate-based-device-authentication) for each device. Use the appropriate method to [use security tokens based on the chosen protocol (MQTT, AMQP, or HTTPS)](../iot-hub/iot-hub-dev-guide-sas.md). | -| Secure device communication | IoT Hub secures the connection to the devices using Transport Layer Security (TLS) standard, supporting versions 1.2 and 1.0. Use [TLS 1.2](https://tools.ietf.org/html/rfc5246) to ensure maximum security. | -| Secure service communication | IoT Hub provides endpoints to connect to backend services such as [Azure Storage](../storage/index.yml) or [Event Hubs](../event-hubs/index.yml) using only the TLS protocol, and no endpoint is exposed on an unencrypted channel. Once this data reaches these backend services for storage or analysis, make sure to employ appropriate security and encryption methods for that service, and protect sensitive information at the backend. | --## Networking --| Recommendation | Comments | -|-|-| -| Protect access to your devices | Keep hardware ports in your devices to a bare minimum to avoid unwanted access. Additionally, build mechanisms to prevent or detect physical tampering of the device. Read [IoT security best practices](iot-security-best-practices.md) for details. | -| Build secure hardware | Incorporate security features such as encrypted storage, or Trusted Platform Module (TPM), to keep devices and infrastructure more secure. Keep the device operating system and drivers upgraded to latest versions, and if space permits, install antivirus and antimalware capabilities. Read [IoT security architecture](iot-security-architecture.md) to understand how this can help mitigate several security threats. | --## Monitoring --| Recommendation | Comments | Supported by Microsoft Defender for IoT | -|-|-|--| -| Monitor unauthorized access to your devices | Use your device operating system's logging feature to monitor any security breaches or physical tampering of the device or its ports. | Yes | -| Monitor your IoT solution from the cloud | Monitor the overall health of your IoT Hub solution using the [metrics in Azure Monitor](../iot-hub/monitor-iot-hub.md). | Yes | -| Set up diagnostics | Closely watch your operations by logging events in your solution, and then sending the diagnostic logs to Azure Monitor to get visibility into the performance. Read [Monitor and diagnose problems in your IoT hub](../iot-hub/monitor-iot-hub.md) for more information. | Yes | --## Next steps --For advanced scenarios involving Azure IoT, you may need to consider additional security requirements. See [IoT security architecture](iot-security-architecture.md) for more guidance. |
iot-hub-device-update | Deploy Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/deploy-update.md | An Azure CLI environment: 1. Schedule your deployment to start immediately or in the future. > [!TIP]- > By default, the **Start** date and time is 24 hours from your current time. Be sure to select a different date and time if you want the deployment to begin earlier. + > By default, the **Start** date and time is set to Immediately. Be sure to select a different date and time if you want the deployment to begin later. :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Screenshot that shows the Create deployment screen" lightbox="media/deploy-update/create-deployment.png"::: |
iot-hub-device-update | Device Update Plug And Play | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md | Title: Understand how Device Update for IoT Hub uses IoT Plug and Play | Microsoft Docs description: Device Update for IoT Hub uses IoT Plug and Play to discover and manage devices that are over-the-air update capable. Previously updated : 1/26/2022 Last updated : 2/2/2023 The Device Update agent uses agent metadata fields to send information to Device |resultDetails|string|device to cloud|Customer-defined free form string to provide additional result details. Returned to the twin without parsing|| |stepResults|map|device to cloud|The result reported by the agent containing result code, extended result code, and result details for step updates. | "step_1": { "resultCode": 0,"extendedResultCode": 0, "resultDetails": ""}| |state|integer|device to cloud| An integer that indicates the current state of the Device Update agent. | See [State](#state) section for details. |-|workflow|complex|device to cloud| A set of values that indicate which deployment the agent is currently working on, ID of current deployment, and acknowledgment of any retry request sent from service to agent.|"workflow": {"action": 3,"ID": "11b6a7c3-6956-4b33-b5a9-87fdd79d2f01","retryTimestamp": "2022-01-26T11:33:29.9680598Z"}| +|workflow|complex|device to cloud| A set of values that indicate which deployment the agent is currently working on, ID of current deployment, and acknowledgment of any retry request sent from service to agent. Note that the workflow ID reports a "nodeployment" value once the deployment is cancelled. |"workflow": {"action": 3,"ID": "11b6a7c3-6956-4b33-b5a9-87fdd79d2f01","retryTimestamp": "2022-01-26T11:33:29.9680598Z"}| |installedUpdateId|string|device to cloud|An ID of the update that is currently installed (through Device Update). This value is a string capturing the Update ID JSON or null for a device that has never taken an update through Device Update.|installedUpdateID{\"provider\":\"contoso\",\"name\":\"image-update\",\"version\":\"1.0.0\"}"| #### Device properties |
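As a rough illustration of the cancellation behavior noted in the workflow row above, the reported `workflow` section of the device twin might look like the following after a deployment is cancelled. Only the `"nodeployment"` ID is taken from the documented behavior; the remaining fields follow the table's example and their exact values will differ in practice.

```json
{
  "workflow": {
    "action": 3,
    "ID": "nodeployment",
    "retryTimestamp": "2022-01-26T11:33:29.9680598Z"
  }
}
```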
key-vault | Quick Create Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-node.md | Title: Quickstart - Azure Key Vault certificate client library for JavaScript ( description: Learn how to create, retrieve, and delete certificates from an Azure key vault using the JavaScript client library Previously updated : 01/04/2023 Last updated : 02/01/2023 ms.devlang: javascript-+ -# Quickstart: Azure Key Vault certificate client library for JavaScript (version 4) +# Quickstart: Azure Key Vault certificate client library for JavaScript Get started with the Azure Key Vault certificate client library for JavaScript. [Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for certificates. You can securely store keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you learn how to create, retrieve, and delete certificates from an Azure key vault using the JavaScript client library This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) 1. Run the `login` command. - ```azurecli-interactive + ```azurecli az login ``` Create a Node.js application that uses your key vault. npm install @azure/keyvault-certificates ``` -1. Install the Azure Identity library, [@azure/identity](https://www.npmjs.com/package/@azure/identity) package to authenticate to a Key Vault. +1. Install the Azure Identity client library, [@azure/identity](https://www.npmjs.com/package/@azure/identity), to authenticate to a Key Vault. ```terminal npm install @azure/identity Create a Node.js application that uses your key vault. ## Grant access to your key vault -Create an access policy for your key vault that grants key permissions to your user account +Create a vault access policy for your key vault that grants certificate permissions to your user account. ```azurecli-az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --key-permissions delete get list create purge +az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --certificate-permissions delete get list create purge update ``` ## Set environment variables This application is using the key vault name as an environment variable called `KEY_VAULT_NAME`. -Windows +### [Windows](#tab/windows) + ```cmd set KEY_VAULT_NAME=<your-key-vault-name> ```` +### [PowerShell](#tab/powershell) + Windows PowerShell ```powershell $Env:KEY_VAULT_NAME="<your-key-vault-name>" ``` -macOS or Linux +### [macOS or Linux](#tab/linux) + ```cmd export KEY_VAULT_NAME=<your-key-vault-name> ```+++## Authenticate and create a client ++Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/javascript/api/@azure/identity/#@azure-identity-getdefaultazurecredential) method provided by the [Azure Identity client library](/javascript/api/@azure/identity) is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code. ++In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. 
When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview). ++In this code, the name of your key vault is used to create the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. For more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code). ## Code example -These code samples demonstrate how to create a client, set a certificate, retrieve a certificate, and delete a certificate. +This code uses the following [Key Vault Certificate classes and methods](/javascript/api/overview/azure/keyvault-certificates-readme): + +* [DefaultAzureCredential class](/javascript/api/@azure/identity/#@azure-identity-getdefaultazurecredential) +* [CertificateClient class](/javascript/api/@azure/keyvault-certificates/certificateclient) + * [beginCreateCertificate](/javascript/api/@azure/keyvault-certificates/certificateclient#@azure-keyvault-certificates-certificateclient-begincreatecertificate) + * [getCertificate](/javascript/api/@azure/keyvault-certificates/certificateclient#@azure-keyvault-certificates-certificateclient-getcertificate) + * [getCertificateVersion](/javascript/api/@azure/keyvault-certificates/certificateclient#@azure-keyvault-certificates-certificateclient-getcertificateversion) + * [updateCertificateProperties](/javascript/api/@azure/keyvault-certificates/certificateclient#@azure-keyvault-certificates-certificateclient-updatecertificateproperties) + * [updateCertificatePolicy](/javascript/api/@azure/keyvault-certificates/certificateclient#@azure-keyvault-certificates-certificateclient-updatecertificatepolicy) + * [beginDeleteCertificate](/javascript/api/@azure/keyvault-certificates/certificateclient#@azure-keyvault-certificates-certificateclient-begindeletecertificate) +* [PollerLike interface](/javascript/api/@azure/core-lro/pollerlike) + * [getResult](/javascript/api/@azure/core-lro/pollerlike#@azure-core-lro-pollerlike-getresult) + * [pollUntilDone](/javascript/api/@azure/core-lro/pollerlike#@azure-core-lro-pollerlike-polluntildone) + ### Set up the app framework These code samples demonstrate how to create a client, set a certificate, retrie // - AZURE_TENANT_ID: The tenant ID in Azure Active Directory // - AZURE_CLIENT_ID: The application (client) ID registered in the AAD tenant // - AZURE_CLIENT_SECRET: The client secret for the registered application- const url = process.env["AZURE_KEY_VAULT_URI"] || "<keyvault-url>"; const credential = new DefaultAzureCredential(); const keyVaultName = process.env["KEY_VAULT_NAME"];+ if(!keyVaultName) throw new Error("KEY_VAULT_NAME is empty"); const url = "https://" + keyVaultName + ".vault.azure.net"; const client = new CertificateClient(url, credential); |
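Building on the `client` created above, a minimal sketch of the certificate round trip might look like the following; the certificate name and self-signed policy are placeholders for this example.

```javascript
// Minimal sketch: create a self-signed certificate, read it back, then delete it.
async function certificateRoundTrip(client) {
  const certificateName = "MyCertificate"; // hypothetical name

  // beginCreateCertificate returns a poller for the long-running operation.
  const createPoller = await client.beginCreateCertificate(certificateName, {
    issuerName: "Self",
    subject: "cn=MyCertificate",
  });
  await createPoller.pollUntilDone();

  // Retrieve the latest version of the certificate.
  const certificate = await client.getCertificate(certificateName);
  console.log("created certificate:", certificate.name);

  // Deletion is also a long-running operation.
  const deletePoller = await client.beginDeleteCertificate(certificateName);
  await deletePoller.pollUntilDone();
}
```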
key-vault | Quick Create Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-node.md | Title: Quickstart - Azure Key Vault key client library for JavaScript (version description: Learn how to create, retrieve, and delete keys from an Azure key vault using the JavaScript client library Previously updated : 01/04/2023 Last updated : 02/02/2023 ms.devlang: javascript-+ -# Quickstart: Azure Key Vault key client library for JavaScript (version 4) +# Quickstart: Azure Key Vault key client library for JavaScript Get started with the Azure Key Vault key client library for JavaScript. [Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for cryptographic keys. You can securely store keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you learn how to create, retrieve, and delete keys from an Azure key vault using the JavaScript key client library This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) 1. Run the `login` command. - ```azurecli-interactive + ```azurecli az login ``` Create a Node.js application that uses your key vault. ## Install Key Vault packages -1. Using the terminal, install the Azure Key Vault secrets library, [@azure/keyvault-keys](https://www.npmjs.com/package/@azure/keyvault-keys) for Node.js. +1. Using the terminal, install the Azure Key Vault keys client library, [@azure/keyvault-keys](https://www.npmjs.com/package/@azure/keyvault-keys) for Node.js. ```terminal npm install @azure/keyvault-keys ``` -1. Install the Azure Identity library, [@azure/identity](https://www.npmjs.com/package/@azure/identity) package to authenticate to a Key Vault. +1. Install the Azure Identity client library, [@azure/identity](https://www.npmjs.com/package/@azure/identity) package to authenticate to a Key Vault. ```terminal npm install @azure/identity Create a Node.js application that uses your key vault. Create an access policy for your key vault that grants key permissions to your user account ```azurecli-az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --key-permissions delete get list create purge +az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --key-permissions delete get list create update purge ``` ## Set environment variables This application is using the key vault name as an environment variable called `KEY_VAULT_NAME`. -Windows +### [Windows](#tab/windows) + ```cmd set KEY_VAULT_NAME=<your-key-vault-name> ```` +### [PowerShell](#tab/powershell) + Windows PowerShell ```powershell $Env:KEY_VAULT_NAME="<your-key-vault-name>" ``` -macOS or Linux +### [macOS or Linux](#tab/linux) + ```cmd export KEY_VAULT_NAME=<your-key-vault-name> ```+++## Authenticate and create a client ++Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/javascript/api/@azure/identity/#@azure-identity-getdefaultazurecredential) method provided by the [Azure Identity client library](/javascript/api/@azure/identity) is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code. 
++In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview). ++In this code, the name of your key vault is used to create the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. For more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code). ## Code example -This code sample demonstrates how to create a client, set a key, retrieve a key, and delete a key. +The code samples below show you how to create a client, create a key, retrieve a key, and delete a key. ++This code uses the following [Key Vault Key classes and methods](/javascript/api/overview/azure/keyvault-keys-readme): + +* [DefaultAzureCredential class](/javascript/api/@azure/identity/#@azure-identity-getdefaultazurecredential) +* [KeyClient class](/javascript/api/@azure/keyvault-keys/keyclient) + * [createKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-createkey) + * [createEcKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-createeckey) + * [createRsaKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-creatersakey) + * [getKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-getkey) + * [listPropertiesOfKeys](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-listpropertiesofkeys) + * [updateKeyProperties](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-updatekeyproperties) + * [beginDeleteKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-begindeletekey) + * [getDeletedKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-getdeletedkey) + * [purgeDeletedKey](/javascript/api/@azure/keyvault-keys/keyclient#@azure-keyvault-keys-keyclient-purgedeletedkey) ### Set up the app framework This code sample demonstrates how to create a client, set a key, retrieve a key, const credential = new DefaultAzureCredential(); const keyVaultName = process.env["KEY_VAULT_NAME"];+ if(!keyVaultName) throw new Error("KEY_VAULT_NAME is empty"); const url = "https://" + keyVaultName + ".vault.azure.net"; const client = new KeyClient(url, credential); |
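As a sketch of how the listed methods fit together, assuming the `client` created above and a hypothetical key name:

```javascript
// Minimal sketch: create an RSA key, read it back, then delete and purge it.
async function keyRoundTrip(client) {
  const keyName = "MyKey"; // hypothetical name

  // createRsaKey is a convenience wrapper over createKey for RSA keys.
  const key = await client.createRsaKey(keyName, { keySize: 2048 });
  console.log("created key:", key.name, key.keyType);

  // Retrieve the latest version of the key.
  const fetched = await client.getKey(keyName);
  console.log("fetched key id:", fetched.id);

  // Deletion is a long-running operation; wait for it before purging.
  const poller = await client.beginDeleteKey(keyName);
  await poller.pollUntilDone();
  await client.purgeDeletedKey(keyName); // requires the purge permission granted earlier
}
```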
key-vault | Built In Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/built-in-roles.md | Managed HSM local RBAC has several built-in roles. You can assign these roles to > - All the data action names have a 'Microsoft.KeyVault/managedHsm' prefix, which is omitted in the tables for brevity. > - All role names have a prefix "Managed HSM" which is omitted in the below table for brevity. -|Data Action | Administrator | Crypto Officer | Crypto User | Policy Administrator | Crypto Service Encryption | Backup | Crypto Auditor| +|Data Action | Administrator | Crypto Officer | Crypto User | Policy Administrator | Crypto Service Encryption User | Backup | Crypto Auditor| ||||||||| |**Security Domain management**| /securitydomain/download/action|<center>X</center>|||||| |
key-vault | Quick Create Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-node.md | Title: Quickstart - Azure Key Vault secret client library for JavaScript (versi description: Learn how to create, retrieve, and delete secrets from an Azure key vault using the JavaScript client library Previously updated : 02/03/2022 Last updated : 02/02/2023 ms.devlang: javascript-+ -# Quickstart: Azure Key Vault secret client library for JavaScript (version 4) +# Quickstart: Azure Key Vault secret client library for JavaScript Get started with the Azure Key Vault secret client library for JavaScript. [Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets. You can securely store keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you learn how to create, retrieve, and delete secrets from an Azure key vault using the JavaScript client library This quickstart assumes you are running [Azure CLI](/cli/azure/install-azure-cli 1. Run the `login` command. - ```azurecli-interactive + ```azurecli az login ``` Create a Node.js application that uses your key vault. ## Install Key Vault packages -1. Using the terminal, install the Azure Key Vault secrets library, [@azure/keyvault-secrets](https://www.npmjs.com/package/@azure/keyvault-secrets) for Node.js. +1. Using the terminal, install the Azure Key Vault secrets client library, [@azure/keyvault-secrets](https://www.npmjs.com/package/@azure/keyvault-secrets) for Node.js. ```terminal npm install @azure/keyvault-secrets ``` -1. Install the Azure Identity library, [@azure/identity](https://www.npmjs.com/package/@azure/identity) package to authenticate to a Key Vault. +1. Install the Azure Identity client library, [@azure/identity](https://www.npmjs.com/package/@azure/identity) package to authenticate to a Key Vault. ```terminal npm install @azure/identity Create a Node.js application that uses your key vault. ## Grant access to your key vault -Create an access policy for your key vault that grants secret permissions to your user account with the [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy) command. +Create a vault access policy for your key vault that grants secret permissions to your user account with the [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy) command. ```azurecli-az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --secret-permissions delete get list set purge +az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --secret-permissions delete get list set purge update ``` ## Set environment variables This application is using key vault name as an environment variable called `KEY_VAULT_NAME`. -Windows +### [Windows](#tab/windows) + ```cmd set KEY_VAULT_NAME=<your-key-vault-name> ````++### [PowerShell](#tab/powershell) + Windows PowerShell ```powershell $Env:KEY_VAULT_NAME="<your-key-vault-name>" ``` -macOS or Linux +### [macOS or Linux](#tab/linux) + ```cmd export KEY_VAULT_NAME=<your-key-vault-name> ```++++## Authenticate and create a client ++Application requests to most Azure services must be authorized. 
Using the [DefaultAzureCredential](/javascript/api/@azure/identity/#@azure-identity-getdefaultazurecredential) method provided by the [Azure Identity client library](/javascript/api/@azure/identity) is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code. ++In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview). ++In this code, the name of your key vault is used to create the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. For more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code). ## Code example The code samples below will show you how to create a client, set a secret, retrieve a secret, and delete a secret. +This code uses the following [Key Vault Secret classes and methods](/javascript/api/overview/azure/keyvault-secrets-readme): + +* [DefaultAzureCredential](/javascript/api/@azure/identity/#@azure-identity-getdefaultazurecredential) +* [SecretClient class](/javascript/api/@azure/keyvault-secrets/secretclient) + * [setSecret](/javascript/api/@azure/keyvault-secrets/secretclient#@azure-keyvault-secrets-secretclient-setsecret) + * [getSecret](/javascript/api/@azure/keyvault-secrets/secretclient#@azure-keyvault-secrets-secretclient-getsecret) + * [updateSecretProperties](/javascript/api/@azure/keyvault-secrets/secretclient#@azure-keyvault-secrets-secretclient-updatesecretproperties) + * [beginDeleteSecret](/javascript/api/@azure/keyvault-secrets/secretclient#@azure-keyvault-secrets-secretclient-begindeletesecret) ++### Set up the app framework + 1. Create a new text file and paste the following code into the **index.js** file. ```javascript const { SecretClient } = require("@azure/keyvault-secrets"); const { DefaultAzureCredential } = require("@azure/identity"); - // Load the .env file if it exists - const dotenv = require("dotenv"); - dotenv.config(); - async function main() {+ // If you're using MSI, DefaultAzureCredential should "just work". 
+ // Otherwise, DefaultAzureCredential expects the following three environment variables: + // - AZURE_TENANT_ID: The tenant ID in Azure Active Directory + // - AZURE_CLIENT_ID: The application (client) ID registered in the AAD tenant + // - AZURE_CLIENT_SECRET: The client secret for the registered application const credential = new DefaultAzureCredential(); const keyVaultName = process.env["KEY_VAULT_NAME"];+ if(!keyVaultName) throw new Error("KEY_VAULT_NAME is empty"); const url = "https://" + keyVaultName + ".vault.azure.net"; const client = new SecretClient(url, credential); The code samples below will show you how to create a client, set a secret, retri }); console.log("updated secret: ", updatedSecret); - // Delete the secret - // If we don't want to purge the secret later, we don't need to wait until this finishes + // Delete the secret immediately without ability to restore or purge. await client.beginDeleteSecret(secretName); } |
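If you do want to permanently remove the secret rather than leave it soft-deleted, a small sketch under the same assumptions (the `client` and `secretName` from the sample above) is to wait for the delete operation to complete and then purge:

```javascript
// Minimal sketch: wait for deletion to finish, then purge the soft-deleted secret.
async function deleteAndPurgeSecret(client, secretName) {
  const poller = await client.beginDeleteSecret(secretName);
  await poller.pollUntilDone();
  // Purge permanently removes the secret; this uses the purge permission
  // granted by the access policy earlier in this quickstart.
  await client.purgeDeletedSecret(secretName);
}
```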
load-balancer | Backend Pool Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md | There are two ways of configuring a backend pool: * IP address -To preallocate a backend pool with an IP address range that later will contain virtual machines and virtual machine scale sets, configure the pool by IP address and virtual network ID. +To preallocate a backend pool with an IP address range that later will contain virtual machines and Virtual Machine Scale Sets, configure the pool by IP address and virtual network ID. This article focuses on configuration of backend pools by IP addresses. ## Configure backend pool by IP address and virtual network $net = @{ Name = 'myNic' ResourceGroupName = 'myResourceGroup' Location = 'eastus'- PrivateIpAddress = '10.0.0.4' + PrivateIpAddress = '10.0.0.5' Subnet = $virtualNetwork.Subnets[0] } $nic = New-AzNetworkInterface @net az vm create \ * IP based backends can only be used for Standard Load Balancers * The backend resources must be in the same virtual network as the load balancer for IP based LBs * A load balancer with IP based Backend Pool can't function as a Private Link service- * [Private endpoint resources](../private-link/private-endpoint-overview.md) can't be placed in a IP based backend pool + * [Private endpoint resources](../private-link/private-endpoint-overview.md) can't be placed in an IP based backend pool * ACI containers aren't currently supported by IP based LBs * Load balancers or services such as Application Gateway can't be placed in the backend pool of the load balancer * Inbound NAT Rules can't be specified by IP address * You can configure IP based and NIC based backend pools for the same load balancer. You can't create a single backend pool that mixes backend addresses targeted by NIC and IP addresses within the same pool.- * A virtual machine in the same virtual network as an internal load balancer cannot access the frontend of the ILB and its backend VMs simultaneously + * A virtual machine in the same virtual network as an internal load balancer can't access the frontend of the ILB and its backend VMs simultaneously >[!Important] > When a backend pool is configured by IP address, it will behave as a Basic Load Balancer with default outbound enabled. For secure by default configuration and applications with demanding outbound needs, configure the backend pool by NIC. |
load-balancer | Quickstart Basic Public Load Balancer Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-powershell.md | $lbrule = @{ IdleTimeoutInMinutes = '15' FrontendIpConfiguration = $feip BackendAddressPool = $bePool+ Probe = $probe } $rule = New-AzLoadBalancerRuleConfig @lbrule |
machine-learning | Concept Automl Forecasting Methods | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md | AutoML uses several methods to forecast time series values. These methods can be As an example, consider the problem of forecasting daily demand for a particular brand of orange juice from a grocery store. Let $y_t$ represent the demand for this brand on day $t$. A **time series model** predicts demand at $t+1$ using some function of historical demand, -$y_{t+1} = f(y_t, y_{t-1}, \cdots, y_{t-s})$. +$y_{t+1} = f(y_t, y_{t-1}, \ldots, y_{t-s})$. The function $f$ often has parameters that we tune using observed demand from the past. The amount of history that $f$ uses to make predictions, $s$, can also be considered a parameter of the model. Again, $g$ generally has a set of parameters, including those governing regularization. > [!IMPORTANT] > AutoML's forecasting regression models assume that all features provided by the user are known into the future, at least up to the forecast horizon. -AutoML's forecasting regression models can also be augmented to use historical values of the target and predictors. The result is a hybrid model with characteristics of a time series model and a pure regression model. Historical quantities are additional predictor variables in the regression and we refer to them as **lagged quantities**. The _order_ of the lag refers to how far back the value is known. For example, the current value of an order two lag of the target for our orange juice demand example is the observed juice demand from two days ago. +AutoML's forecasting regression models can also be augmented to use historical values of the target and predictors. The result is a hybrid model with characteristics of a time series model and a pure regression model. Historical quantities are additional predictor variables in the regression and we refer to them as **lagged quantities**. The _order_ of the lag refers to how far back the value is known. For example, the current value of an order-two lag of the target for our orange juice demand example is the observed juice demand from two days ago. Another notable difference between the time series models and the regression models is in the way they generate forecasts. Time series models are generally defined by recursion relations and produce forecasts one at a time. To forecast many periods into the future, they iterate up to the forecast horizon, feeding previous forecasts back into the model to generate the next one-period-ahead forecast as needed. In contrast, the regression models are so-called **direct forecasters** that generate _all_ forecasts up to the horizon in one go. Direct forecasters can be preferable to recursive ones because recursive models compound prediction error when they feed previous forecasts back into the model. When lag features are included, AutoML makes some important modifications to the training data so that the regression models can function as direct forecasters. See the [lag features article](./concept-automl-forecasting-lags.md) for more details. 
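As a small worked illustration of the difference, suppose the horizon is two periods and the model uses an order-one lag of the target. A direct forecaster fits separate functions $\hat{y}_{t+1} = g_1(x_{t+1}, y_t)$ and $\hat{y}_{t+2} = g_2(x_{t+2}, y_t)$, so both forecasts come from the actually observed value $y_t$ in one pass (recall that the feature values $x_{t+1}$ and $x_{t+2}$ are assumed known up to the horizon). A recursive time series model instead computes $\hat{y}_{t+1} = f(y_t)$ and then feeds that forecast back in to get $\hat{y}_{t+2} = f(\hat{y}_{t+1})$, so any error in $\hat{y}_{t+1}$ propagates into $\hat{y}_{t+2}$.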
The following table lists the forecasting models implemented in AutoML and what Time Series Models | Regression Models -| ---[Naive, Seasonal Naive, Average, Seasonal Average](https://otexts.com/fpp3/simple-methods.html), [ARIMA(X)](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html), [Exponential Smoothing](https://www.statsmodels.org/dev/generated/statsmodels.tsa.holtwinters.ExponentialSmoothing.html) | [Linear SGD](https://scikit-learn.org/stable/modules/linear_model.html#stochastic-gradient-descent-sgd), [LARS LASSO](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso), [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net), [Prophet](https://facebook.github.io/prophet/), [K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression), [Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression), [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests), [Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees), [Gradient Boosted Trees](https://scikit-learn.org/stable/modules/ensemble.html#regression), [LightGBM](https://lightgbm.readthedocs.io/en/latest/https://docsupdatetracker.net/index.html), [XGBoost](https://xgboost.readthedocs.io/en/latest/parameter.html), Temporal Convolutional Network +[Naive, Seasonal Naive, Average, Seasonal Average](https://otexts.com/fpp3/simple-methods.html), [ARIMA(X)](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html), [Exponential Smoothing](https://www.statsmodels.org/dev/generated/statsmodels.tsa.holtwinters.ExponentialSmoothing.html) | [Linear SGD](https://scikit-learn.org/stable/modules/linear_model.html#stochastic-gradient-descent-sgd), [LARS LASSO](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso), [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net), [Prophet](https://facebook.github.io/prophet/), [K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression), [Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression), [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests), [Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees), [Gradient Boosted Trees](https://scikit-learn.org/stable/modules/ensemble.html#regression), [LightGBM](https://lightgbm.readthedocs.io/en/latest/https://docsupdatetracker.net/index.html), [XGBoost](https://xgboost.readthedocs.io/en/latest/parameter.html), [ForecastTCN](./how-to-auto-train-forecast.md#enable-deep-learning) -The models in each category are listed roughly in order of the complexity of patterns they're able to incorporate, also known as the **model capacity**. A Naive model, which simply forecasts the last observed value, has low capacity while the Temporal Convolutional Network (TCN), a deep neural network with potentially millions of tunable parameters, has high capacity. +The models in each category are listed roughly in order of the complexity of patterns they're able to incorporate, also known as the **model capacity**. A Naive model, which simply forecasts the last observed value, has low capacity while the Temporal Convolutional Network (ForecastTCN), a deep neural network with potentially millions of tunable parameters, has high capacity. 
Importantly, AutoML also includes **ensemble** models that create weighted combinations of the best performing models to further improve accuracy. For forecasting, we use a [soft voting ensemble](https://scikit-learn.org/stable/modules/ensemble.html#voting-regressor) where composition and weights are found via the [Caruana Ensemble Selection Algorithm](http://www.niculescu-mizil.org/papers/shotgun.icml04.revised.rev2.pdf). Importantly, AutoML also includes **ensemble** models that create weighted combi ## How AutoML uses your data -AutoML accepts time series data in tabular, "wide" format; that is, each variable must have its own corresponding column. AutoML requires that one of the columns must be the time axis for the forecasting problem which is parsable into a datetime type. The simplest time series data set consists of a **time column** and a numeric **target column**. The target is the variable one intends to predict into the future. An example of the format in this simple case follows below: +AutoML accepts time series data in tabular, "wide" format; that is, each variable must have its own corresponding column. AutoML requires one of the columns to be the time axis for the forecasting problem. This column must be parsable into a datetime type. The simplest time series data set consists of a **time column** and a numeric **target column**. The target is the variable one intends to predict into the future. The following is an example of the format in this simple case: timestamp | quantity | -- timestamp | SKU | price | advertised | quantity In this example, there's a SKU, a retail price, and a flag indicating whether an item was advertised in addition to the timestamp and target quantity. There are evidently two series in this dataset - one for the JUICE1 SKU and one for the BREAD3 SKU; the `SKU` column is a **time series ID column** since grouping by it gives two groups containing a single series each. Before sweeping over models, AutoML does basic validation of the input configuration and data and adds engineered features. +### Data length requirements +To train a forecasting model, you must have a sufficient amount of historical data. This threshold quantity varies with the training configuration. If you've provided validation data, the minimum number of training observations required per time series is given by, ++$T_{\text{user validation}} = H + \text{max}(l_{\text{max}}, s_{\text{window}}) + 1$, ++where $H$ is the forecast horizon, $l_{\text{max}}$ is the maximum lag order, and $s_{\text{window}}$ is the window size for rolling aggregation features. If you're using cross-validation, the minimum number of observations is, ++ $T_{\text{CV}} = 2H + (n_{\text{CV}} - 1) n_{\text{step}} + \text{max}(l_{\text{max}}, s_{\text{window}}) + 1$, ++where $n_{\text{CV}}$ is the number of cross-validation folds and $n_{\text{step}}$ is the CV step size, or offset between CV folds. The basic logic behind these formulas is that you should always have at least a horizon of training observations for each time series, including some padding for lags and cross-validation splits. See [forecasting model selection](./concept-automl-forecasting-sweeping.md#model-selection) for more details on cross-validation for forecasting. + ### Missing data handling-AutoML's time series models generally require data with regularly spaced observations in time. Regularly spaced, here, includes cases like monthly or yearly observations where the number of days between observations may vary. 
+AutoML's time series models require regularly spaced observations in time. Regularly spaced, here, includes cases like monthly or yearly observations where the number of days between observations may vary. Prior to modeling, AutoML must ensure there are no missing series values _and_ that the observations are regular. Hence, there are two missing data cases:

* A value is missing for some cell in the tabular data
* A _row_ is missing which corresponds with an expected observation given the time series frequency

timestamp | quantity
... | ...
2013-12-31 | 347

-This series ostensibly has a daily frequency, but there's no observation for 2012-01-02. In this case, AutoML will attempt to fill in the data by adding a new row for 2012-01-02. The new value for the `quantity` column, and any other columns in the data, will then be imputed like other missing values. Clearly, AutoML must know the series frequency in order to fill in observation gaps like this. AutoML automatically detects this frequency, or, optionally, the user can provide it in the configuration.
+This series ostensibly has a daily frequency, but there's no observation for Jan. 2, 2012. In this case, AutoML will attempt to fill in the data by adding a new row for Jan. 2, 2012. The new value for the `quantity` column, and any other columns in the data, will then be imputed like other missing values. Clearly, AutoML must know the series frequency in order to fill in observation gaps like this. AutoML automatically detects this frequency, or, optionally, the user can provide it in the configuration.

-The imputation method for filling missing values can be configured in the input. The default methods are listed in the following table:
+The imputation method for filling missing values can be [configured](./how-to-auto-train-forecast.md#custom-featurization) in the input. The default methods are listed in the following table:

Column Type | Default Imputation Method
-- | --
Numeric Feature | Median value

Missing values for categorical features are handled during numerical encoding by including an additional category corresponding to a missing value. Imputation is implicit in this case.

### Automated feature engineering
-AutoML generally adds new columns to user data in an effort to increase modeling accuracy.
+AutoML generally adds new columns to user data to increase modeling accuracy.
Engineered features can include the following:

Feature Group | Default/Optional
-- | --
-Calendar features derived from the time index (for example, day of week) | Default
+[Calendar features](./concept-automl-forecasting-calendar-features.md) derived from the time index (for example, day of week) | Default
+Categorical features derived from time series IDs | Default
Encoding categorical types to numeric type | Default
Indicator features for holidays associated with a given country or region | Optional
-Lags of target quantity | Optional
+[Lags of target quantity](./concept-automl-forecasting-lags.md) | Optional
Lags of feature columns | Optional
Rolling window aggregations (for example, rolling average) of target quantity | Optional
-Seasonal decomposition (STL) | Optional
+Seasonal decomposition ([STL](https://otexts.com/fpp3/stl.html)) | Optional

+You can configure featurization from the AutoML SDK via the [ForecastingJob](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings) class or from the [AzureML Studio web interface](how-to-use-automated-ml-for-ml-models.md#customize-featurization).

+### Non-stationary time series detection and handling

+A time series where the mean and variance change over time is called **non-stationary**. For example, time series that exhibit stochastic trends are non-stationary by nature. To visualize this, the following image plots a series that is generally trending upward. Now, compute and compare the mean (average) values for the first and the second half of the series. Are they the same? Here, the mean of the series in the first half of the plot is significantly smaller than in the second half. The fact that the mean of the series depends on the time interval one is looking at is an example of time-varying moments. Here, the mean of a series is the first moment.

+Next, let's examine the following image, which plots the original series in first differences, $\Delta y_{t} = y_t - y_{t-1}$. The mean of the series is roughly constant over the time range while the variance appears to vary. Thus, this is an example of a first order stationary time series.

+AutoML regression models can't inherently deal with stochastic trends, or other well-known problems associated with non-stationary time series. As a result, out-of-sample forecast accuracy can be poor if such trends are present.

-The user can configure featurization from the AutoML SDK via the [ForecastingJob](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings) class or from the [AzureML Studio web interface](how-to-use-automated-ml-for-ml-models.md#customize-featurization).

+AutoML automatically analyzes the time series dataset to determine stationarity. When non-stationary time series are detected, AutoML applies a differencing transform automatically to mitigate the impact of non-stationary behavior.

### Model sweeping

After data has been prepared with missing data handling and feature engineering, AutoML sweeps over a set of models and hyper-parameters using a [model recommendation service](https://www.microsoft.com/research/publication/probabilistic-matrix-factorization-for-automated-machine-learning/). The models are ranked based on validation or cross-validation metrics and then, optionally, the top models may be used in an ensemble model. The best model, or any of the trained models, can be inspected, downloaded, or deployed to produce forecasts as needed.
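The ensemble step mentioned above can be pictured with a simplified greedy-selection sketch in the spirit of the Caruana algorithm; the candidate forecasts and validation values below are made up, and this is not AutoML's exact implementation:

```python
import numpy as np

def greedy_ensemble_selection(candidate_preds, y_valid, n_rounds=10):
    """Greedily add (with replacement) the candidate whose inclusion most
    improves validation RMSE; selection counts become soft-voting weights."""
    counts = np.zeros(len(candidate_preds))
    ensemble = np.zeros_like(y_valid, dtype=float)
    for r in range(1, n_rounds + 1):
        # RMSE of the running average if each candidate were added next
        rmses = [
            np.sqrt(np.mean(((ensemble * (r - 1) + p) / r - y_valid) ** 2))
            for p in candidate_preds
        ]
        best = int(np.argmin(rmses))
        ensemble = (ensemble * (r - 1) + candidate_preds[best]) / r
        counts[best] += 1
    return counts / counts.sum()  # soft-voting weights

# Made-up validation target and three candidate model forecasts
y_valid = np.array([10.0, 12.0, 11.0, 13.0])
candidates = [y_valid + 0.5, y_valid - 2.0, y_valid * 1.1]
print(greedy_ensemble_selection(candidates, y_valid))
```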
See the [model sweeping and selection](./concept-automl-forecasting-sweeping.md) article for more details.

When a dataset contains more than one time series, as in the given data example, the models can be applied to the series either individually or jointly, as summarized in the following table:

Each Series in Own Group (1:1) | All Series in Single Group (N:1)
-- | --
-Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, Temporal Convolutional Network
+Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, ForecastTCN

More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models - Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) and [Hierarchical time series - Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb). |
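As a rough illustration of the two groupings, assuming a long-format table like the earlier SKU example, the sketch below fits one model per series (1:1) versus one pooled model (N:1); `fit_model` is a hypothetical stand-in for any learner:

```python
import pandas as pd

# Made-up long-format data with a time series ID column, "SKU"
df = pd.DataFrame({
    "SKU": ["JUICE1", "JUICE1", "BREAD3", "BREAD3"],
    "timestamp": pd.to_datetime(["2012-01-01", "2012-01-02"] * 2),
    "quantity": [130, 128, 70, 75],
})

def fit_model(frame):
    """Placeholder for fitting any of the models in the table above."""
    return f"model trained on {len(frame)} rows"

# 1:1 - one model per series (typical for the time series models)
per_series_models = {sku: fit_model(g) for sku, g in df.groupby("SKU")}

# N:1 - one model trained on all series at once (typical for the regression models)
pooled_model = fit_model(df)
```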
machine-learning | Concept Automl Forecasting Sweeping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-sweeping.md | Naive, Seasonal Naive, Average, Seasonal Average | Time series | No sweeping within class
Exponential Smoothing, ARIMA(X) | Time series | Grid search for within-class sweeping
Prophet | Regression | No sweeping within class
Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost | Regression | AutoML's [model recommendation service](https://www.microsoft.com/research/publication/probabilistic-matrix-factorization-for-automated-machine-learning/) dynamically explores hyper-parameter spaces
-Temporal Convolutional Network | Regression | Static list of models followed by random search over network size, dropout ratio, and learning rate.
+ForecastTCN | Regression | Static list of models followed by random search over network size, dropout ratio, and learning rate.

For a description of the different model types, see the [forecasting models](./concept-automl-forecasting-methods.md#forecasting-models-in-automl) section of the methods overview article.

AutoML has two validation configurations - cross-validation and explicit validation data.

AutoML follows the usual cross-validation procedure, training a separate model on each fold and averaging validation metrics from all folds.

-Cross-validation for forecasting jobs is configured by setting the number of cross-validation folds and, optionally, the number of time periods between two consecutive cross-validation folds. See the [training and validation data](./how-to-auto-train-forecast.md#training-and-validation-data) guide for more information and an example of configuring cross-validation for forecasting.
+Cross-validation for forecasting jobs is configured by setting the number of cross-validation folds and, optionally, the number of time periods between two consecutive cross-validation folds. See the [custom cross-validation settings](./how-to-auto-train-forecast.md#custom-cross-validation-settings) guide for more information and an example of configuring cross-validation for forecasting.

You can also bring your own validation data. Learn more in the [configure data splits and cross-validation in AutoML](how-to-configure-cross-validation-data-splits.md#provide-validation-data) article. |
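To make the fold arithmetic concrete, here's a small illustrative sketch of how a fold count and step size carve rolling-origin validation windows out of a single series; it follows the description above but is not AutoML's internal splitter:

```python
def rolling_origin_folds(n_obs, horizon, n_folds, step_size):
    """Yield (train_end, (valid_start, valid_end)) index pairs; each fold's
    validation window is `horizon` long and folds are `step_size` apart."""
    # The latest fold ends at the end of the series; earlier folds step back
    for k in reversed(range(n_folds)):
        valid_end = n_obs - k * step_size
        valid_start = valid_end - horizon
        yield valid_start, (valid_start, valid_end)

# 100 daily observations, 14-day horizon, 5 folds, 7-day step between folds
for train_end, (v0, v1) in rolling_origin_folds(100, 14, 5, 7):
    print(f"train on [0, {train_end}), validate on [{v0}, {v1})")
```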
machine-learning | Concept Azure Machine Learning V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md | An Azure Machine Learning [component](concept-component.md) is a self-contained ## Next steps -* [How to migrate from v1 to v2](how-to-migrate-from-v1.md) +* [How to upgrade from v1 to v2](how-to-migrate-from-v1.md) * [Train models with the v2 CLI and SDK](how-to-train-model.md) |
machine-learning | Concept Mlflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md | Learn more at [Guidelines for deploying MLflow models](how-to-deploy-mlflow-mode
* [Deploy MLflow to Online Endpoints](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints using MLflow SDK.
* [Deploy MLflow to Online Endpoints with safe rollout](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints_progresive.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints using MLflow SDK with progressive rollout of models and the deployment of multiple model versions in the same endpoint.
* [Deploy MLflow to web services (V1)](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_web_service.ipynb): Demonstrates how to deploy models in MLflow format to web services (ACI/AKS v1) using MLflow SDK.
-* [Deploying models trained in Azure Databricks to Azure Machine Learning with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also includes how to handle cases where you also want to track the experiments with the MLflow instance in Azure Databricks.
+* [Deploying models trained in Azure Databricks to Azure Machine Learning with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also covers how to handle cases where you want to track the experiments with the MLflow instance in Azure Databricks.

## Training MLflow projects (preview) |
machine-learning | Concept V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-v2.md | The Azure Machine Learning Python SDK v1 doesn't have a planned deprecation date ## Next steps -* [How to migrate from v1 to v2](how-to-migrate-from-v1.md) +* [How to upgrade from v1 to v2](how-to-migrate-from-v1.md) * Get started with CLI v2 * [Install and set up CLI (v2)](how-to-configure-cli.md) |
machine-learning | How To Access Azureml Behind Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md | The following terms and information are used throughout this article:

* __Azure service tags__: A service tag is an easy way to specify the IP ranges used by an Azure service. For example, the `AzureMachineLearning` tag represents the IP addresses used by the Azure Machine Learning service.

> [!IMPORTANT]
-> Azure service tags are only supported by some Azure services. If you are using a non-Azure solution such as a 3rd party firewall, download a list of [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519). Extract the file and search for the service tag within the file. The IP addresses may change periodically.
+> Azure service tags are only supported by some Azure services. For a list of service tags supported with network security groups and Azure Firewall, see the [Virtual network service tags](/azure/virtual-network/service-tags-overview) article.
+>
+> If you are using a non-Azure solution such as a 3rd party firewall, download a list of [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519). Extract the file and search for the service tag within the file. The IP addresses may change periodically.

* __Region__: Some service tags allow you to specify an Azure region. This limits access to the service IP addresses in a specific region, usually the one that your service is in. In this article, when you see `<region>`, substitute your Azure region instead. For example, `BatchNodeManagement.<region>` would be `BatchNodeManagement.uswest` if your Azure Machine Learning workspace is in the US West region.

__Azure Machine Learning compute instance and compute cluster hosts__

> * The host for __Azure Key Vault__ is only needed if your workspace was created with the [hbi_workspace](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace) flag enabled.
> * Ports 8787 and 18881 for __compute instance__ are only needed when your Azure Machine Learning workspace has a private endpoint.
> * In the following table, replace `<storage>` with the name of the default storage account for your Azure Machine Learning workspace.
+> * In the following table, replace `<region>` with the Azure region that contains your Azure Machine Learning workspace.
> * Websocket communication must be allowed to the compute instance. If you block websocket traffic, Jupyter notebooks won't work correctly.
# [Azure public](#tab/public) __Azure Machine Learning compute instance and compute cluster hosts__ | Compute cluster/instance | `graph.windows.net` | TCP | 443 | | Compute instance | `*.instances.azureml.net` | TCP | 443 | | Compute instance | `*.instances.azureml.ms` | TCP | 443, 8787, 18881 |-| Compute instance | `*.tundra.azureml.ms` | UDP | 5831 | +| Compute instance | `<region>.tundra.azureml.ms` | UDP | 5831 | | Compute instance | `*.batch.azure.com` | ANY | 443 | | Compute instance | `*.service.batch.com` | ANY | 443 | | Microsoft storage access | `*.blob.core.windows.net` | TCP | 443 | __Azure Machine Learning compute instance and compute cluster hosts__ | Compute cluster/instance | `graph.windows.net` | TCP | 443 | | Compute instance | `*.instances.azureml.us` | TCP | 443 | | Compute instance | `*.instances.azureml.ms` | TCP | 443, 8787, 18881 |+| Compute instance | `<region>.tundra.azureml.us` | UDP | 5831 | | Microsoft storage access | `*.blob.core.usgovcloudapi.net` | TCP | 443 | | Microsoft storage access | `*.table.core.usgovcloudapi.net` | TCP | 443 | | Microsoft storage access | `*.queue.core.usgovcloudapi.net` | TCP | 443 | __Azure Machine Learning compute instance and compute cluster hosts__ | Compute cluster/instance | `graph.chinacloudapi.cn` | TCP | 443 | | Compute instance | `*.instances.azureml.cn` | TCP | 443 | | Compute instance | `*.instances.azureml.ms` | TCP | 443, 8787, 18881 |+| Compute instance | `<region>.tundra.azureml.cn` | UDP | 5831 | | Microsoft storage access | `*.blob.core.chinacloudapi.cn` | TCP | 443 | | Microsoft storage access | `*.table.core.chinacloudapi.cn` | TCP | 443 | | Microsoft storage access | `*.queue.core.chinacloudapi.cn` | TCP | 443 | |
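If you want to verify that a firewall actually permits the TCP rules in these tables, a quick reachability probe like the following can help; the host names below are illustrative stand-ins for the wildcard entries (substitute concrete instance, region, and storage names for your workspace), and UDP rules such as port 5831 can't be checked this way:

```python
import socket

# Illustrative subset of the required hosts from the tables above
hosts = [
    ("graph.windows.net", 443),
    ("myinstance.instances.azureml.ms", 443),  # stands in for *.instances.azureml.ms
    ("mystorage.blob.core.windows.net", 443),  # stands in for *.blob.core.windows.net
]

for host, port in hosts:
    try:
        # Attempt a TCP connection through the firewall with a short timeout
        with socket.create_connection((host, port), timeout=5):
            print(f"OK   {host}:{port}")
    except OSError as err:
        print(f"FAIL {host}:{port} -> {err}")
```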
machine-learning | How To Auto Train Forecast | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md | Title: Set up AutoML for time-series forecasting description: Set up Azure Machine Learning automated ML to train time-series forecasting models with the Azure Machine Learning Python SDK. --++ - Previously updated : 11/18/2021+ Last updated : 01/27/2023 show_latex: true # Set up AutoML to train a time-series forecasting model with Python ++> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning SDK you are using:"] +> * [v1](./v1/how-to-auto-train-forecast-v1.md) +> * [v2 (current version)](how-to-auto-train-forecast.md) -In this article, you learn how to set up AutoML training for time-series forecasting models with Azure Machine Learning automated ML in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/). +In this article, you'll learn how to set up AutoML training for time-series forecasting models with Azure Machine Learning automated ML in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ai-ml-readme). To do so, you: > [!div class="checklist"]-> * Prepare data for time series modeling. -> * Configure specific time-series parameters in an [`AutoMLConfig`](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) object. -> * Run predictions with time-series data. +> * Prepare data for training. +> * Configure specific time-series parameters in a [Forecasting Job](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob). +> * Get predictions from trained time-series models. For a low code experience, see the [Tutorial: Forecast demand with automated machine learning](tutorial-automated-ml-forecast.md) for a time-series forecasting example using automated ML in the [Azure Machine Learning studio](https://ml.azure.com/). -Unlike classical time series methods, in automated ML, past time-series values are "pivoted" to become additional dimensions for the regressor together with other predictors. This approach incorporates multiple contextual variables and their relationship to one another during training. Since multiple factors can influence a forecast, this method aligns itself well with real world forecasting scenarios. For example, when forecasting sales, interactions of historical trends, exchange rate, and price all jointly drive the sales outcome. +AutoML uses standard machine learning models along with well-known time series models to create forecasts. Our approach incorporates multiple contextual variables and their relationship to one another during training. Since multiple factors can influence a forecast, this method aligns itself well with real world forecasting scenarios. For example, when forecasting sales, interactions of historical trends, exchange rate, and price can all jointly drive the sales outcome. For more details, see our article on [forecasting methodology](./concept-automl-forecasting-methods.md). ## Prerequisites For this article you need, * An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). -* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns. 
-- [!INCLUDE [automl-sdk-version](../../includes/machine-learning-automl-sdk-version.md)] +* The ability to launch AutoML training jobs. Follow the [how-to guide for setting up AutoML](how-to-configure-auto-train.md) for details. ## Training and validation data -The most important difference between a forecasting regression task type and regression task type within automated ML is including a feature in your training data that represents a valid time series. A regular time series has a well-defined and consistent frequency and has a value at every sample point in a continuous time span. +Input data for AutoML forecasting must contain valid time series in tabular format. Each variable must have its own corresponding column in the data table. AutoML requires at least two columns: a **time column** representing the time axis and the **target column** which is the quantity to forecast. Other columns can serve as predictors. For more details, see [how AutoML uses your data](./concept-automl-forecasting-methods.md#how-automl-uses-your-data). > [!IMPORTANT]-> When training a model for forecasting future values, ensure all the features used in training can be used when running predictions for your intended horizon. <br> <br>For example, when creating a demand forecast, including a feature for current stock price could massively increase training accuracy. However, if you intend to forecast with a long horizon, you may not be able to accurately predict future stock values corresponding to future time-series points, and model accuracy could suffer. +> When training a model for forecasting future values, ensure all the features used in training can be used when running predictions for your intended horizon. <br> <br> For example, a feature for current stock price could massively increase training accuracy. However, if you intend to forecast with a long horizon, you may not be able to accurately predict future stock values corresponding to future time-series points, and model accuracy could suffer. -You can specify separate [training data and validation data](concept-automated-ml.md#training-validation-and-test-data) directly in the `AutoMLConfig` object. Learn more about the [AutoMLConfig](#configure-experiment). +AutoML forecasting jobs require that your training data is represented as an **MLTable** object. An MLTable specifies a data source and steps for loading the data. For more information and use cases, see the [MLTable how-to guide](./how-to-mltable.md). As a simple example, suppose your training data is contained in a CSV file in a local directory, `./train_data/timeseries_train.csv`. You can define a new MLTable by copying the following YAML code to a new file, `./train_data/MLTable`: -For time series forecasting, only **Rolling Origin Cross Validation (ROCV)** is used for validation by default. ROCV divides the series into training and validation data using an origin time point. Sliding the origin in time generates the cross-validation folds. This strategy preserves the time series data integrity and eliminates the risk of data leakage. +```yml +$schema: https://azuremlschemas.azureedge.net/latest/MLTable.schema.json +type: mltable +paths: + - file: ./timeseries_train.csv -Pass your training and validation data as one dataset to the parameter `training_data`. Set the number of cross validation folds with the parameter `n_cross_validations` and set the number of periods between two consecutive cross-validation folds with `cv_step_size`. 
You can also leave either or both parameters empty and AutoML will set them automatically. +transformations: + - read_delimited: + delimiter: ',' + encoding: ascii +``` +You can now define an input data object, which is required to start a training job, using the AzureML Python SDK as follows: ```python-automl_config = AutoMLConfig(task='forecasting', - training_data= training_data, - n_cross_validations="auto", # Could be customized as an integer - cv_step_size = "auto", # Could be customized as an integer - ... - **time_series_settings) +from azure.ai.ml.constants import AssetTypes +from azure.ai.ml import Input ++# Training MLTable defined locally, with local data to be uploaded +my_training_data_input = Input( + type=AssetTypes.MLTABLE, path="./train_data" +) ``` +You can specify [validation data](concept-automated-ml.md#training-validation-and-test-data) in a similar way, by creating a MLTable and an input data object. Alternatively, if you don't supply validation data, AutoML automatically creates cross-validation splits from your training data to use for model selection. See our article on [forecasting model selection](./concept-automl-forecasting-sweeping.md#model-selection) for more details. Also see [training data length requirements](./concept-automl-forecasting-methods.md#data-length-requirements) for details on how much training data you need to successfully train a forecasting model. -You can also bring your own validation data, learn more in [Configure data splits and cross-validation in AutoML](how-to-configure-cross-validation-data-splits.md#provide-validation-data). +Learn more about how AutoML applies cross validation to [prevent over fitting](concept-manage-ml-pitfalls.md#prevent-overfitting). -Learn more about how AutoML applies cross validation to [prevent over-fitting models](concept-manage-ml-pitfalls.md#prevent-overfitting). +## Compute to run experiment +AutoML uses AzureML Compute, which is a fully managed compute resource, to run the training job. In the following example, a compute cluster named `cpu-compute` is created: -## Configure experiment +[!notebook-python[] (~/azureml-examples-main/sdk/python/jobs/configuration.ipynb?name=create-cpu-compute)] -The [`AutoMLConfig`](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) object defines the settings and data necessary for an automated machine learning task. Configuration for a forecasting model is similar to the setup of a standard regression model, but certain models, configuration options, and featurization steps exist specifically for time-series data. +## Configure experiment -### Supported models +There are several options that you can use to configure your AutoML forecasting experiment. These configuration parameters are set in the automl.forecasting() task method. You can also set job training settings and exit criteria with the set_training() and set_limits() functions, respectively. -Automated machine learning automatically tries different models and algorithms as part of the model creation and tuning process. As a user, there is no need for you to specify the algorithm. For forecasting experiments, both native time-series and deep learning models are part of the recommendation system. 
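For reference, the `cpu-compute` cluster referenced by the notebook include in the compute section above can be created with a short v2 SDK snippet along these lines; this is a sketch in which the VM size is an assumed example and `ml_client` is assumed to be an authenticated `MLClient`:

```python
from azure.ai.ml.entities import AmlCompute

# Assumes ml_client is an authenticated MLClient for your workspace
cpu_cluster = AmlCompute(
    name="cpu-compute",
    size="STANDARD_DS3_V2",  # assumed VM size; pick one available in your region
    min_instances=0,         # scale to zero when idle
    max_instances=4,
)
ml_client.compute.begin_create_or_update(cpu_cluster).result()
```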
+The following example shows how to create a forecasting job with normalized root mean squared error as the primary metric and automatically configured cross-validation folds: ->[!Tip] -> Traditional regression models are also tested as part of the recommendation system for forecasting experiments. See a complete list of the [supported models](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels) in the SDK reference documentation. +```python +from azure.ai.ml import automl ++# note that the below is a code snippet -- you might have to modify the variable values to run it successfully +forecasting_job = automl.forecasting( + compute=compute_name, + experiment_name=exp_name, + training_data=my_training_data_input, + target_column_name=target_column_name, + primary_metric="NormalizedRootMeanSquaredError", + n_cross_validations="auto", +) +# Limits are all optional +forecasting_job.set_limits( + timeout_minutes=120, + trial_timeout_minutes=30, + max_concurrent_trials=4, +) +``` ### Configuration settings+Forecasting tasks have many settings that are specific to forecasting. Use the set_forecast_settings() method of a ForecastingJob to set forecasting parameters. In the following example, we provide the name of the time column in the training data and set the forecast horizon: -Similar to a regression problem, you define standard training parameters like task type, number of iterations, training data, and number of cross-validations. Forecasting tasks require the `time_column_name` and `forecast_horizon` parameters to configure your experiment. If the data includes multiple time series, such as sales data for multiple stores or energy data across different states, automated ML automatically detects this and sets the `time_series_id_column_names` parameter (preview) for you. You can also include additional parameters to better configure your run, see the [optional configurations](#optional-configurations) section for more detail on what can be included. +```python +# Forecasting specific configuration +forecasting_job.set_forecast_settings( + time_column_name=time_column_name, + forecast_horizon=24 +) +``` -> [!IMPORTANT] -> Automatic time series identification is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +The time column name is a required setting and you should generally set the forecast horizon according to your prediction scenario. If your data contains multiple time series, you can specify the names of the **time series ID columns**. These columns, when grouped, define the individual series. For example, suppose that you have data consisting of hourly sales from different stores and brands. The following sample shows how to set the time series ID columns assuming the data contains columns named "store" and "brand": -| Parameter name | Description | -|-|-| -|`time_column_name`|Used to specify the datetime column in the input data used for building the time series and inferring its frequency.| -|`forecast_horizon`|Defines how many periods forward you would like to forecast. The horizon is in units of the time series frequency. 
Units are based on the time interval of your training data, for example, monthly, weekly that the forecaster should predict out.| +```python +# Forecasting specific configuration +# Add time series IDs for store and brand +forecasting_job.set_forecast_settings( + ..., # other settings + time_series_id_column_names=['store', 'brand'] +) +``` -The following code, -* Leverages the [`ForecastingParameters`](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters) class to define the forecasting parameters for your experiment training -* Sets the `time_column_name` to the `day_datetime` field in the data set. -* Sets the `forecast_horizon` to 50 in order to predict for the entire test set. +AutoML tries to automatically detect time series ID columns in your data if none are specified. -```python -from azureml.automl.core.forecasting_parameters import ForecastingParameters +Other settings are optional and reviewed in the [optional settings](#optional-settings) section. -forecasting_parameters = ForecastingParameters(time_column_name='day_datetime', - forecast_horizon=50, - freq='W') - -``` +### Optional settings ++Optional configurations are available for forecasting tasks, such as enabling deep learning and specifying a target rolling window aggregation. A complete list of parameters is available in the [forecast_settings API doc](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings). -These `forecasting_parameters` are then passed into your standard `AutoMLConfig` object along with the `forecasting` task type, primary metric, exit criteria, and training data. +#### Model search settings ++There are two optional settings that control the model space where AutoML searches for the best model, `allowed_training_algorithms` and `blocked_training_algorithms`. To restrict the search space to a given set of model classes, use allowed_training_algorithms as in the following sample: ```python-from azureml.core.workspace import Workspace -from azureml.core.experiment import Experiment -from azureml.train.automl import AutoMLConfig -import logging --automl_config = AutoMLConfig(task='forecasting', - primary_metric='normalized_root_mean_squared_error', - experiment_timeout_minutes=15, - enable_early_stopping=True, - training_data=train_data, - label_column_name=label, - n_cross_validations="auto", # Could be customized as an integer - cv_step_size = "auto", # Could be customized as an integer - enable_ensembling=False, - verbosity=logging.INFO, - forecasting_parameters=forecasting_parameters) +# Only search ExponentialSmoothing and ElasticNet models +forecasting_job.set_training( + allowed_training_algorithms=["ExponentialSmoothing", "ElasticNet"] +) ``` -The amount of data required to successfully train a forecasting model with automated ML is influenced by the `forecast_horizon`, `n_cross_validations`, and `target_lags` or `target_rolling_window_size` values specified when you configure your `AutoMLConfig`. +In this case, the forecasting job _only_ searches over Exponential Smoothing and Elastic Net model classes. To remove a given set of model classes from the search space, use the blocked_training_algorithms as in the following sample: -The following formula calculates the amount of historic data that what would be needed to construct time series features. 
+```python +# Search over all model classes except Prophet +forecasting_job.set_training( + blocked_training_algorithms=["Prophet"] +) +``` -Minimum historic data required: (2x `forecast_horizon`) + #`n_cross_validations` + max(max(`target_lags`), `target_rolling_window_size`) +Now, the job searches over all model classes _except_ Prophet. For a list of forecasting model names that are accepted in `allowed_training_algorithms` and `blocked_training_algorithms`, see [supported forecasting models](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting) and [supported regression models](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression). -An `Error exception` is raised for any series in the dataset that does not meet the required amount of historic data for the relevant settings specified. +#### Enable deep learning -### Featurization steps +AutoML ships with a custom deep neural network (DNN) model called `ForecastTCN`. This model is a [temporal convolutional network](https://arxiv.org/abs/1803.01271), or TCN, that applies common imaging task methods to time series modeling. Namely, one-dimensional "causal" convolutions form the backbone of the network and enable the model to learn complex patterns over long durations in the training history. -In every automated machine learning experiment, automatic scaling and normalization techniques are applied to your data by default. These techniques are types of **featurization** that help *certain* algorithms that are sensitive to features on different scales. Learn more about default featurization steps in [Featurization in AutoML](how-to-configure-auto-features.md#automatic-featurization) -However, the following steps are performed only for `forecasting` task types: +The ForecastTCN often achieves higher accuracy than standard time series models when there are thousands or more observations in the training history. However, it also takes longer to train and sweep over ForecastTCN models due to their higher capacity. -* Detect time-series sample frequency (for example, hourly, daily, weekly) and create new records for absent time points to make the series continuous. -* Impute missing values in the target (via forward-fill) and feature columns (using median column values) -* Create features based on time series identifiers to enable fixed effects across different series -* Create time-based features to assist in learning seasonal patterns -* Encode categorical variables to numeric quantities -* Detect the non-stationary time series and automatically differencing them to mitigate the impact of unit roots. +You can enable the ForecastTCN in AutoML by setting the `enable_dnn_training` flag in the set_training() method as follows: -To view the full list of possible engineered features generated from time series data, see [TimeIndexFeaturizer Class](/python/api/azureml-automl-runtime/azureml.automl.runtime.featurizer.transformer.timeseries.time_index_featurizer). +```python +# Include ForecastTCN models in the model search +forecasting_job.set_training( + enable_dnn_training=True +) +``` -> [!NOTE] -> Automated machine learning featurization steps (feature normalization, handling missing data, -> converting text to numeric, etc.) become part of the underlying model. When using the model for -> predictions, the same featurization steps applied during training are applied to -> your input data automatically. 
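For intuition about the one-dimensional "causal" convolutions described above, here's a tiny NumPy sketch showing that each output depends only on current and past inputs; it only illustrates the causality property and is not the ForecastTCN architecture:

```python
import numpy as np

def causal_conv1d(x, kernel):
    """1-D causal convolution: output[t] uses x[t], x[t-1], ... only.
    Left-pads the input so no future values leak into the output."""
    k = len(kernel)
    x_padded = np.concatenate([np.zeros(k - 1), x])
    return np.array([x_padded[t:t + k] @ kernel[::-1] for t in range(len(x))])

x = np.array([1.0, 2.0, 3.0, 4.0])
print(causal_conv1d(x, np.array([0.5, 0.5])))  # averages of (previous, current) values
```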
+To enable DNN for an AutoML experiment created in the Azure Machine Learning studio, see the [task type settings in the studio UI how-to](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).

-#### Customize featurization

> [!NOTE]
+> * When you enable DNN for experiments created with the SDK, [best model explanations](how-to-machine-learning-interpretability-automl.md) are disabled.
+> * DNN support for forecasting in Automated Machine Learning is not supported for runs initiated in Databricks.
+> * GPU compute types are recommended when DNN training is enabled.

-You also have the option to customize your featurization settings to ensure that the data and features that are used to train your ML model result in relevant predictions.

+#### Target rolling window aggregation

-Supported customizations for `forecasting` tasks include:

+Recent values of the target are often impactful features in a forecasting model. Rolling window aggregations allow you to add rolling aggregations of data values as features. Generating and using these features as extra contextual data helps with the accuracy of the trained model.

-|Customization|Definition|
-|--|--|
-|**Column purpose update**|Override the auto-detected feature type for the specified column.|
-|**Transformer parameter update** |Update the parameters for the specified transformer. Currently supports *Imputer* (fill_value and median).|
-|**Drop columns** |Specifies columns to drop from being featurized.|

+Consider an energy demand forecasting scenario where weather data and historical demand are available.
+The table shows resulting feature engineering that occurs when window aggregation is applied over the most recent three hours. Columns for **minimum, maximum,** and **sum** are generated on a sliding window of three hours based on the defined settings. For instance, for the observation valid on September 8, 2017 4:00am, the maximum, minimum, and sum values are calculated using the **demand values** for September 8, 2017 1:00AM - 3:00AM. This window of three hours shifts along to populate data for the remaining rows.

-To customize featurizations with the SDK, specify `"featurization": FeaturizationConfig` in your `AutoMLConfig` object. Learn more about [custom featurizations](how-to-configure-auto-features.md#customize-featurization).

->[!NOTE]
-> The **drop columns** functionality is deprecated as of SDK version 1.19. Drop columns from your dataset as part of data cleansing, prior to consuming it in your automated ML experiment.

+You can enable rolling window aggregation features and set the window size through the set_forecast_settings() method. In the following sample, we set the window size to "auto" so that AutoML will automatically determine a good value for your data:

```python
-featurization_config = FeaturizationConfig()
-
-# `logQuantity` is a leaky feature, so we remove it.
-featurization_config.drop_columns = ['logQuantitity']
+forecasting_job.set_forecast_settings(
+    ...,  # other settings
+    target_rolling_window_size='auto'
+)
```

-# Force the CPWVOL5 feature to be of numeric type.
-featurization_config.add_column_purpose('CPWVOL5', 'Numeric')

-# Fill missing values in the target column, Quantity, with zeroes.
-featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})

+#### Short series handling

+Automated ML considers a time series a **short series** if there aren't enough data points to conduct the train and validation phases of model development.
See [training data length requirements](./concept-automl-forecasting-methods.md#data-length-requirements) for more details on length requirements. -# Fill mising values in the `INCOME` column with median value. -featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"}) -``` +AutoML has several actions it can take for short series. These actions are configurable with the `short_series_handling_config` setting. The default value is "auto." The following table describes the settings: -If you're using the Azure Machine Learning studio for your experiment, see [how to customize featurization in the studio](how-to-use-automated-ml-for-ml-models.md#customize-featurization). +|Setting|Description +|| +|`auto`| The default value for short series handling. <br> - _If all series are short_, pad the data. <br> - _If not all series are short_, drop the short series. +|`pad`| If `short_series_handling_config = pad`, then automated ML adds random values to each short series found. The following lists the column types and what they're padded with: <br> - Object columns with NaNs <br> - Numeric columns with 0 <br> - Boolean/logic columns with False <br> - The target column is padded with random values with mean of zero and standard deviation of 1. +|`drop`| If `short_series_handling_config = drop`, then automated ML drops the short series, and it will not be used for training or prediction. Predictions for these series will return NaN's. +|`None`| No series is padded or dropped -## Optional configurations +In the following example, we set the short series handling so that all short series are padded to the minimum length: -Additional optional configurations are available for forecasting tasks, such as enabling deep learning and specifying a target rolling window aggregation. A complete list of additional parameters is available in the [ForecastingParameters SDK reference documentation](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters). +```python +forecasting_job.set_forecast_settings( + ..., # other settings + short_series_handling_config='pad' +) +``` -### Frequency & target data aggregation +>[!WARNING] +>Padding may impact the accuracy of the resulting model, since we are introducing artificial data just to get past training without failures. If many of the series are short, then you may also see some impact in explainability results -Leverage the frequency, `freq`, parameter to help avoid failures caused by irregular data, that is data that doesn't follow a set cadence, like hourly or daily data. +#### Frequency & target data aggregation -For highly irregular data or for varying business needs, users can optionally set their desired forecast frequency, `freq`, and specify the `target_aggregation_function` to aggregate the target column of the time series. Leverage these two settings in your `AutoMLConfig` object can help save some time on data preparation. +Use the frequency and data aggregation options to avoid failures caused by irregular data. Your data is irregular if it doesn't follow a set cadence in time, like hourly or daily. Point-of-sales data is a good example of irregular data. In these cases, AutoML can aggregate your data to a desired frequency and then build a forecasting model from the aggregates. -Supported aggregation operations for target column values include: +You need to set the `frequency` and `target_aggregate_function` settings to handle irregular data. 
The frequency setting accepts [Pandas DateOffset strings](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) as input. Supported values for the aggregation function are:

|Function | Description |
| -- | -- |
|`min`| Minimum value of a target |
|`max`| Maximum value of a target |

-### Enable deep learning
-
-> [!NOTE]
-> DNN support for forecasting in Automated Machine Learning is in **preview** and not supported for local runs or runs initiated in Databricks.
-
-You can also apply deep learning with deep neural networks, DNNs, to improve the scores of your model. Automated ML's deep learning allows for forecasting univariate and multivariate time series data.

+* The target column values are aggregated according to the specified operation. Typically, sum is appropriate for most scenarios.
+* Numerical predictor columns in your data are aggregated by sum, mean, minimum value, and maximum value. As a result, automated ML generates new columns suffixed with the aggregation function name and applies the selected aggregate operation.
+* For categorical predictor columns, the data is aggregated by mode, the most prominent category in the window.
+* Date predictor columns are aggregated by minimum value, maximum value and mode.

-Deep learning models have three intrinsic capabilities:
-1. They can learn from arbitrary mappings from inputs to outputs
-1. They support multiple inputs and outputs
-1. They can automatically extract patterns in input data that spans over long sequences.
-
-To enable deep learning, set the `enable_dnn=True` in the `AutoMLConfig` object.

+The following example sets the frequency to hourly and the aggregation function to summation:

```python
-automl_config = AutoMLConfig(task='forecasting',
-                             enable_dnn=True,
-                             ...
-                             forecasting_parameters=forecasting_parameters)
+# Aggregate the data to hourly frequency
+forecasting_job.set_forecast_settings(
+    ...,  # other settings
+    frequency='H',
+    target_aggregate_function='sum'
+)
```

-> [!Warning]
-> When you enable DNN for experiments created with the SDK, [best model explanations](how-to-machine-learning-interpretability-automl.md) are disabled.
-
-To enable DNN for an AutoML experiment created in the Azure Machine Learning studio, see the [task type settings in the studio UI how-to](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).

-### Target rolling window aggregation
+#### Custom cross-validation settings

-Often the best information a forecaster can have is the recent value of the target. Target rolling window aggregations allow you to add a rolling aggregation of data values as features. Generating and using these features as extra contextual data helps with the accuracy of the train model.

+There are two customizable settings that control cross-validation for forecasting jobs: the number of folds, `n_cross_validations`, and the step size defining the time offset between folds, `cv_step_size`. See [forecasting model selection](./concept-automl-forecasting-sweeping.md#model-selection) for more information on the meaning of these parameters. By default, AutoML sets both settings automatically based on characteristics of your data, but advanced users may want to set them manually. For example, suppose you have daily sales data and you want your validation setup to consist of five folds with a seven-day offset between adjacent folds.
The following code sample shows how to set these: -For example, say you want to predict energy demand. You might want to add a rolling window feature of three days to account for thermal changes of heated spaces. In this example, create this window by setting `target_rolling_window_size= 3` in the `AutoMLConfig` constructor. +```python +from azure.ai.ml import automl -The table shows resulting feature engineering that occurs when window aggregation is applied. Columns for **minimum, maximum,** and **sum** are generated on a sliding window of three based on the defined settings. Each row has a new calculated feature, in the case of the timestamp for September 8, 2017 4:00am the maximum, minimum, and sum values are calculated using the **demand values** for September 8, 2017 1:00AM - 3:00AM. This window of three shifts along to populate data for the remaining rows. +# Create a job with five CV folds +forecasting_job = automl.forecasting( + ..., # other training parameters + n_cross_validations=5, +) - +# Set the step size between folds to seven days +forecasting_job.set_forecast_settings( + ..., # other settings + cv_step_size=7 +) +``` -View a Python code example applying the [target rolling window aggregate feature](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb). +### Custom featurization -### Short series handling +By default, AutoML augments training data with engineered features to increase the accuracy of the models. See [automated feature engineering](./concept-automl-forecasting-methods.md#automated-feature-engineering) for more information. Some of the preprocessing steps can be customized using the `set_featurization()` method of the forecasting job. -Automated ML considers a time series a **short series** if there are not enough data points to conduct the train and validation phases of model development. The number of data points varies for each experiment, and depends on the max_horizon, the number of cross validation splits, and the length of the model lookback, that is the maximum of history that's needed to construct the time-series features. +Supported customizations for forecasting include: -Automated ML offers short series handling by default with the `short_series_handling_configuration` parameter in the `ForecastingParameters` object. +|Customization|Description|Options +|--|--| +|**Column purpose update**|Override the auto-detected feature type for the specified column.|"Categorical", "DateTime", "Numeric" +|**Transformer parameter update**|Update the parameters for the specified imputer.|`{"strategy": "constant", "fill_value": <value>}`, `{"strategy": "median"}`, `{"strategy": "ffill"}` -To enable short series handling, the `freq` parameter must also be defined. To define an hourly frequency, we will set `freq='H'`. View the frequency string options by visiting the [pandas Time series page DataOffset objects section](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects). To change the default behavior, `short_series_handling_configuration = 'auto'`, update the `short_series_handling_configuration` parameter in your `ForecastingParameter` object. +For example, suppose you have a retail demand scenario where the data includes features like price, an "on sale" flag, and a product type. 
The following sample shows how you can set customized types and imputers for these features: ```python-from azureml.automl.core.forecasting_parameters import ForecastingParameters --forecast_parameters = ForecastingParameters(time_column_name='day_datetime', - forecast_horizon=50, - short_series_handling_configuration='auto', - freq = 'H', - target_lags='auto') +from azure.ai.ml.automl import ColumnTransformer ++# Customize imputation methods for price and is_on_sale features +# Median value imputation for price, constant value of zero for is_on_sale +transformer_params = { + "imputer": [ + ColumnTransformer(fields=["price"], parameters={"strategy": "median"}), + ColumnTransformer(fields=["is_on_sale"], parameters={"strategy": "constant", "fill_value": 0}), + ], +} ++# Set the featurization +# Ensure that product_type feature is interpreted as categorical +forecasting_job.set_featurization( + mode="custom", + transformer_params=transformer_params, + column_name_and_types={"product_type": "Categorical"}, +) ```-The following table summarizes the available settings for `short_series_handling_config`. - -|Setting|Description -|| -|`auto`| The following is the default behavior for short series handling <li> *If all series are short*, pad the data. <br> <li> *If not all series are short*, drop the short series. -|`pad`| If `short_series_handling_config = pad`, then automated ML adds random values to each short series found. The following lists the column types and what they are padded with: <li>Object columns with NaNs <li> Numeric columns with 0 <li> Boolean/logic columns with False <li> The target column is padded with random values with mean of zero and standard deviation of 1. -|`drop`| If `short_series_handling_config = drop`, then automated ML drops the short series, and it will not be used for training or prediction. Predictions for these series will return NaN's. -|`None`| No series is padded or dropped -->[!WARNING] ->Padding may impact the accuracy of the resulting model, since we are introducing artificial data just to get past training without failures. <br> <br> If many of the series are short, then you may also see some impact in explainability results --### Non-stationary time series detection and handling --A time series whose moments (mean and variance) change over time is called a **non-stationary**. For example, time series that exhibit stochastic trends are non-stationary by nature. To visualize this, the below image plots a series that is generally trending upward. Now, compute and compare the mean (average) values for the first and the second half of the series. Are they the same? Here, the mean of the series in the first half of the plot is significantly smaller than in the second half. The fact that the mean of the series depends on the time interval one is looking at, is an example of the time-varying moments. Here, the mean of a series is the first moment. - -Next, let's examine the image below, which plots the the original series in first differences, $x_t = y_t - y_{t-1}$ where $x_t$ is the change in retail sales and $y_t$ and $y_{t-1}$ represent the original series and its first lag, respectively. The mean of the series is roughly constant regardless the time frame one is looking at. This is an example of a first order stationary times series. The reason we added the first order term is because the first moment (mean) does not change with time interval, the same cannot be said about the variance, which is a second moment. 
-AutoML Machine learning models can not inherently deal with stochastic trends, or other well-known problems associated with non-stationary time series. As a result, their out of sample forecast accuracy will be "poor" if such trends are present.
-
-AutoML automatically analyzes time series dataset to check whether it is stationary or not. When non-stationary time series are detected, AutoML applies a differencing transform automatically to mitigate the impact of non-stationary time series.

+If you're using the Azure Machine Learning studio for your experiment, see [how to customize featurization in the studio](how-to-use-automated-ml-for-ml-models.md#customize-featurization).

## Run the experiment

-When you have your `AutoMLConfig` object ready, you can submit the experiment. After the model finishes, retrieve the best run iteration.
+After all settings are configured, you can launch the forecasting job via the `ml_client` as follows:

```python
-ws = Workspace.from_config()
-experiment = Experiment(ws, "Tutorial-automl-forecasting")
-local_run = experiment.submit(automl_config, show_output=True)
-best_run, fitted_model = local_run.get_output()
+# Submit the AutoML job
+returned_job = ml_client.jobs.create_or_update(
+    forecasting_job
+)

+print(f"Created job: {returned_job}")

+# Get a URL for the status of the job
+returned_job.services["Studio"].endpoint
```

-## Forecasting with best model
+## Forecasting with a trained model

-Use the best model iteration to forecast values for data that wasn't used to train the model.
+Once you've used AutoML to train and select a best model, the next step is to evaluate the model. If it meets your requirements, you can use it to generate forecasts into the future. This section shows how to write Python scripts for evaluation and prediction. For an example of deploying a trained model with an inference script, see our [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).

### Evaluating model accuracy with a rolling forecast

-Before you put a model into production, you should evaluate its accuracy on a test set held out from the training data. A best practice procedure is a so-called rolling evaluation which rolls the trained forecaster forward in time over the test set, averaging error metrics over several prediction windows to obtain statistically robust estimates for some set of chosen metrics. Ideally, the test set for the evaluation is long relative to the model's forecast horizon. Estimates of forecasting error may otherwise be statistically noisy and, therefore, less reliable.
+Before you put a model into production, you should evaluate its accuracy on a test set held out from the training data. A best practice procedure is a rolling evaluation that rolls the trained forecaster forward in time over the test set, averaging error metrics over several prediction windows. Ideally, the test set for the evaluation is long relative to the model's forecast horizon. Estimates of forecasting error may otherwise be statistically noisy and, therefore, less reliable.

-For example, suppose you train a model on daily sales to predict demand up to two weeks (14 days) into the future. If there is sufficient historic data available, you might reserve the final several months to even a year of the data for the test set. The rolling evaluation begins by generating a 14-day-ahead forecast for the first two weeks of the test set.
Then, the forecaster is advanced by some number of days into the test set and you generate another 14-day-ahead forecast from the new position. The process continues until you get to the end of the test set. +For example, suppose you train a model on daily sales to predict demand up to two weeks (14 days) into the future. If there's sufficient historic data available, you might reserve the final several months to even a year of the data for the test set. The rolling evaluation begins by generating a 14-day-ahead forecast for the first two weeks of the test set. Then, the forecaster is advanced by some number of days into the test set and you generate another 14-day-ahead forecast from the new position. The process continues until you get to the end of the test set.

-To do a rolling evaluation, you call the `rolling_forecast` method of the `fitted_model`, then compute desired metrics on the result. For example, assume you have test set features in a pandas DataFrame called `test_features_df` and the test set actual values of the target in a numpy array called `test_target`. A rolling evaluation using the mean squared error is shown in the following code sample: +To do a rolling evaluation, you call the `rolling_forecast` method of the `fitted_model`, then compute desired metrics on the result. A rolling evaluation inference script is shown in the following code sample:

```python-from sklearn.metrics import mean_squared_error
-rolling_forecast_df = fitted_model.rolling_forecast(
-    test_features_df, test_target, step=1)
-mse = mean_squared_error(
-    rolling_forecast_df[fitted_model.actual_column_name], rolling_forecast_df[fitted_model.forecast_column_name])
+"""
+This is the script that is executed on the compute instance. It relies
+on the model.pkl file which is uploaded along with this script to the
+compute instance.
+"""
+
+import os
+import pandas as pd
+
+import joblib
+
+
+def init():
+    global target_column_name
+    global fitted_model
+
+    target_column_name = os.environ["TARGET_COLUMN_NAME"]
+    # AZUREML_MODEL_DIR is an environment variable created during deployment
+    # It is the path to the model folder (./azureml-models)
+    # Please provide your model's folder name if there's one
+    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pkl")
+    try:
+        fitted_model = joblib.load(model_path)
+    except Exception:
+        print("Loading pickle failed. Trying torch.load()")
+
+        import torch
+        model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pt")
+        device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
+        fitted_model = torch.load(model_path, map_location=device)
+
+
+def run(mini_batch):
+    print(f"run method start: {__file__}, run({mini_batch})")
+    resultList = []
+    for test in mini_batch:
+        if not test.endswith(".csv"):
+            continue
+        X_test = pd.read_csv(test, parse_dates=[fitted_model.time_column_name])
+        y_test = X_test.pop(target_column_name).values
+
+        # Make a rolling forecast, advancing the forecast origin by 1 period on each iteration through the test set
+        X_rf = fitted_model.rolling_forecast(
+            X_test, y_test, step=1, ignore_data_errors=True
+        )
+
+        resultList.append(X_rf)
+
+    return pd.concat(resultList, sort=False, ignore_index=True)
```

-In this sample, the step size for the rolling forecast is set to one which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. 
The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb). +In this sample, the step size for the rolling forecast is set to one, which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` depends on the length of the test set and this step size. For more details and examples, see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).

### Prediction into the future

-The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. The forecast_quantiles() method by default generates a point forecast or a mean/median forecast which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb). +The [forecast_quantiles()](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-forecast-quantiles) method generates forecasts for given quantiles of the prediction distribution. This method thus provides a way to get a point forecast with a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).

In the following example, you first replace all values in `y_pred` with `NaN`. The forecast origin is at the end of training data in this case. However, if you replaced only the second half of `y_pred` with `NaN`, the function would leave the numerical values in the first half unmodified, but forecast the `NaN` values in the second half. The function returns both the forecasted values and the aligned features. 
You can also use the `forecast_destination` parameter in the `forecast_quantiles label_query = test_labels.copy().astype(float)
label_query.fill(np.nan)
label_fcst, data_trans = fitted_model.forecast_quantiles(-    test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8)) +    test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8)
+)
```

-Often customers want to understand the predictions at a specific quantile of the distribution. For example, when the forecast is used to control inventory like grocery items or virtual machines for a cloud service. In such cases, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". The following demonstrates how to specify which quantiles you'd like to see for your predictions, such as 50th or 95th percentile. If you don't specify a quantile, like in the aforementioned code example, then only the 50th percentile predictions are generated. +No quantiles are specified here, so only the point forecast is generated. You may want to understand the predictions at a specific quantile of the distribution. For example, when the forecast is used to control inventory like grocery items or virtual machines for a cloud service. In such cases, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". The following sample demonstrates how to specify forecast quantiles, such as 50th or 95th percentile:

```python-# specify which quantiles you would like
-fitted_model.quantiles = [0.05,0.5, 0.9]
+# Get forecasts for the 5th, 50th, and 90th percentiles
+fitted_model.quantiles = [0.05, 0.5, 0.9]
fitted_model.forecast_quantiles(-    test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8)) +    test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8)
+)
```

-You can calculate model metrics like, root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the models performance. See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example. +You can calculate model metrics like root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the model's performance; a short sketch of these calculations is shown below. See the Evaluate section of the [Bike share demand notebook](~/azureml-examples-main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example.

After the overall model accuracy has been determined, the most realistic next step is to use the model to forecast unknown future values.

-Supply a data set in the same format as the test set `test_dataset` but with future datetimes, and the resulting prediction set is the forecasted values for each time-series step. Assume the last time-series records in the data set were for 12/31/2018. To forecast demand for the next day (or as many periods as you need to forecast, <= `forecast_horizon`), create a single time series record for each store for 01/01/2019. +Supply a data set in the same format as the test set `test_dataset` but with future datetimes, and the resulting prediction set is the forecasted values for each time-series step. Assume the last records in the data set were for December 31, 2018. To forecast demand, create a time series record for each store starting on January 1, 2019. 
```output
day_datetime,store,week_of_year
01/01/2019,A,1
```

-Repeat the necessary steps to load this future data to a dataframe and then run `best_run.forecast_quantiles(test_dataset)` to predict future values. +Repeat the necessary steps to load this future data to a data frame and then run `best_run.forecast_quantiles(test_dataset)` to predict future values.

> [!NOTE]
> In-sample predictions are not supported for forecasting with automated ML when `target_lags` and/or `target_rolling_window_size` are enabled.

-## Forecasting at scale +## Forecasting at scale +++> [!IMPORTANT] +> Many models and hierarchical time series are currently only supported in AzureML v1. Support for AzureML v2 is forthcoming.

There are scenarios where a single machine learning model is insufficient and multiple machine learning models are needed. For instance, predicting sales for each individual store for a brand, or tailoring an experience to individual users. Building a model for each instance can lead to improved results on many machine learning problems.

-Grouping is a concept in time series forecasting that allows time series to be combined to train an individual model per group. This approach can be particularly helpful if you have time series which require smoothing, filling or entities in the group that can benefit from history or trends from other entities. Many models and hierarchical time series forecasting are solutions powered by automated machine learning for these large scale forecasting scenarios. +Grouping is a concept in time series forecasting that allows time series to be combined to train an individual model per group. This approach can be particularly helpful if you have time series that require smoothing or filling, or entities in the group that can benefit from history or trends from other entities. Many models and hierarchical time series forecasting are solutions powered by automated machine learning for these large scale forecasting scenarios.

### Many models

-The Azure Machine Learning many models solution with automated machine learning allows users to train and manage millions of models in parallel. Many models The solution accelerator leverages [Azure Machine Learning pipelines](concept-ml-pipelines.md) to train the model. Specifically, a [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline%28class%29) object and [ParalleRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunstep) are used and require specific configuration parameters set through the [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig). +The Azure Machine Learning many models solution with automated machine learning allows users to train and manage millions of models in parallel. The Many Models Solution Accelerator uses [Azure Machine Learning pipelines](concept-ml-pipelines.md) to train the model. Specifically, a [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline%28class%29) object and [ParallelRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunstep) are used and require specific configuration parameters set through the [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig).

The following diagram shows the workflow for the many models solution. 
mm_paramters = ManyModelsTrainParameters(automl_settings=automl_settings, partit ### Hierarchical time series forecasting -In most applications, customers have a need to understand their forecasts at a macro and micro level of the business; whether that be predicting sales of products at different geographic locations, or understanding the expected workforce demand for different organizations at a company. The ability to train a machine learning model to intelligently forecast on hierarchy data is essential. +In most applications, customers have a need to understand their forecasts at a macro and micro level of the business; whether that is predicting sales of products at different geographic locations, or understanding the expected workforce demand for different organizations at a company. The ability to train a machine learning model to intelligently forecast on hierarchy data is essential. -A hierarchical time series is a structure in which each of the unique series are arranged into a hierarchy based on dimensions such as, geography or product type. The following example shows data with unique attributes that form a hierarchy. Our hierarchy is defined by: the product type such as headphones or tablets, the product category which splits product types into accessories and devices, and the region the products are sold in. +A hierarchical time series is a structure in which the series have nested attributes. Geographic or product catalog attributes are natural examples. The following example shows data with unique attributes that form a hierarchy. Our hierarchy is defined by: the product type such as headphones or tablets, the product category which splits product types into accessories and devices, and the region the products are sold in.  hts_parameters = HTSTrainParameters( ## Example notebooks -See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml) for detailed code examples of advanced forecasting configuration including: +See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs) for detailed code examples of advanced forecasting configuration including: ++* [deep learning models](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb) +* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb) +* [manual configuration for lags and rolling window aggregation features](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb) -* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) -* [rolling-origin cross validation](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb) -* [configurable lags](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) -* [rolling window aggregate 
features](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb) ## Next steps * Learn more about [How to deploy an AutoML model to an online endpoint](how-to-deploy-automl-endpoint.md). * Learn about [Interpretability: model explanations in automated machine learning (preview)](how-to-machine-learning-interpretability-automl.md).-* Learn about [how AutoML builds forecasting models](./concept-automl-forecasting-methods.md). +* Learn about [how AutoML builds forecasting models](./concept-automl-forecasting-methods.md). +* Learn how to [configure AutoML for various forecasting scenarios](./how-to-automl-forecasting-faq.md#what-modeling-configuration-should-i-use). |
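The RMSE and MAPE calculations mentioned earlier reduce to a few lines; a minimal sketch, assuming hypothetical arrays of actual and forecasted values:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Hypothetical actuals and forecasts collected from a rolling evaluation
actuals = np.array([112.0, 98.0, 105.0, 120.0])
forecasts = np.array([108.0, 101.0, 99.0, 117.0])

rmse = np.sqrt(mean_squared_error(actuals, forecasts))
mape = np.mean(np.abs((actuals - forecasts) / actuals)) * 100

print(f"RMSE: {rmse:.2f}, MAPE: {mape:.2f}%")
```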
machine-learning | How To Automl Forecasting Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-automl-forecasting-faq.md | -1. [Bike share example](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) -2. [Forecasting using deep learning](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb) +1. [Bike share example](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb) +2. [Forecasting using deep learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb) 3. [Many models](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) 4. [Forecasting Recipes](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) 5. [Advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb) One common source of slow runtime is training AutoML with default settings on da ## How can I make AutoML faster? See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer to understand why it may be slow in your case. Consider the following configuration changes that may speed up your job:-- Block time series models like ARIMA and Prophet+- [Block time series models](./how-to-auto-train-forecast.md#model-search-settings) like ARIMA and Prophet - Turn off look-back features like lags and rolling windows - Reduce - number of trials/iterations Consider the following configuration changes that may speed up your job: ## What modeling configuration should I use? There are four basic configurations supported by AutoML forecasting:- -1. **Default AutoML** is recommended if the dataset has a small number of time series that have roughly similar historic behavior. -- Advantages: - - Simple to configure from code/SDK or AzureML Studio - - AutoML has the chance to cross-learn across different time series since the regression models pool all series together in training. See the [model grouping](./concept-automl-forecasting-methods.md#model-grouping) section for more information. --- Disadvantages: -- - Regression models may be less accurate if the time series in the training data have divergent behavior - - Time series models may take a long time to train if there are a large number of series in the training data. See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer for more information. --2. **AutoML with deep learning** is recommended for datasets with more than 1000 observations and, potentially, numerous time series exhibiting complex patterns. When enabled, AutoML will sweep over temporal convolutional neural network (TCN) models during training. See the [enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning) section for more information. 
-- Advantages - - Simple to configure from code/SDK or AzureML Studio - - Cross-learning opportunities since the TCN pools data over all series - - Potentially higher accuracy due to the large capacity of DNN models. See the [forecasting models in AutoML](./concept-automl-forecasting-methods.md#forecasting-models-in-automl) section for more information. -- Disadvantages - - Training can take much longer due to the complexity of DNN models - - > [!NOTE] - > We recommend using compute nodes with GPUs when deep learning is enabled to best take advantage of high DNN capacity. Training time can be much faster in comparison to nodes with only CPUs. See the [GPU optimized compute](../virtual-machines/sizes-gpu.md) article for more information. -3. **Many Models** is recommended if you need to train and manage a large number of forecasting models in a scalable way. See the [forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) section for more information. - - Advantages: - - Scalable - - Potentially higher accuracy when time series have divergent behavior from one another. - - Disadvantages: - - No cross-learning across time series - - You can't configure or launch Many Models jobs from AzureML Studio, only the code/SDK experience is currently available. +|Configuration|Scenario|Pros|Cons| +|--|--|--|--| +|**Default AutoML**|Recommended if the dataset has a small number of time series that have roughly similar historic behavior.|- Simple to configure from code/SDK or AzureML Studio <br><br> - AutoML has the chance to cross-learn across different time series since the regression models pool all series together in training. See the [model grouping](./concept-automl-forecasting-methods.md#model-grouping) section for more information.|- Regression models may be less accurate if the time series in the training data have divergent behavior <br> <br> - Time series models may take a long time to train if there are a large number of series in the training data. See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer for more information.| +|**AutoML with deep learning**|Recommended for datasets with more than 1000 observations and, potentially, numerous time series exhibiting complex patterns. When enabled, AutoML will sweep over temporal convolutional neural network (TCN) models during training. See the [enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning) section for more information.|- Simple to configure from code/SDK or AzureML Studio <br> <br> - Cross-learning opportunities since the TCN pools data over all series <br> <br> - Potentially higher accuracy due to the large capacity of DNN models. See the [forecasting models in AutoML](./concept-automl-forecasting-methods.md#forecasting-models-in-automl) section for more information.|- Training can take much longer due to the complexity of DNN models <br> <br> - Series with small amounts of history are unlikely to benefit from these models.| +|**Many Models**|Recommended if you need to train and manage a large number of forecasting models in a scalable way. 
See the [forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) section for more information.|- Scalable <br> <br> - Potentially higher accuracy when time series have divergent behavior from one another.|- No cross-learning across time series <br> <br> - You can't configure or launch Many Models jobs from AzureML Studio, only the code/SDK experience is currently available.| +|**Hierarchical Time Series**|HTS is recommended if the series in your data have nested, hierarchical structure and you need to train or make forecasts at aggregated levels of the hierarchy. See the [hierarchical time series forecasting](how-to-auto-train-forecast.md#hierarchical-time-series-forecasting) section for more information.|- Training at aggregated levels can reduce noise in the leaf node time series and potentially lead to higher accuracy models. <br> <br> - Forecasts can be retrieved for any level of the hierarchy by aggregating or dis-aggregating forecasts from the training level.|- You need to provide the aggregation level for training. AutoML doesn't currently have an algorithm to find an optimal level.| -4. **Hierarchical Time Series**, or HTS, is recommended if the series in your data have nested, hierarchical structure and you need to train or make forecasts at aggregated levels of the hierarchy. See the [hierarchical time series forecasting](how-to-auto-train-forecast.md#hierarchical-time-series-forecasting) section for more information. -- Advantages - - Training at aggregated levels can reduce noise in the leaf node time series and potentially lead to higher accuracy models - - Forecasts can be retrieved for any level of the hierarchy by aggregating or disaggregating forecasts from the training level. - - Disadvantages - - You need to provide the aggregation level for training. AutoML doesn't currently have an algorithm to find an optimal level. +> [!NOTE] +> We recommend using compute nodes with GPUs when deep learning is enabled to best take advantage of high DNN capacity. Training time can be much faster in comparison to nodes with only CPUs. See the GPU optimized compute article for more information. - > [!NOTE] - > HTS is designed for tasks where training or prediction is required at aggregated levels in the hierarchy. For hierarchical data requiring only leaf node training and prediction, use [Many Models](./how-to-auto-train-forecast.md#many-models) instead. +> [!NOTE] +> HTS is designed for tasks where training or prediction is required at aggregated levels in the hierarchy. For hierarchical data requiring only leaf node training and prediction, use [Many Models](./how-to-auto-train-forecast.md#many-models) instead. ## How can I prevent over-fitting and data leakage? AutoML uses machine learning best practices, such as cross-validated model selec - The input data contains **feature columns that are derived from the target with a simple formula**. For example, a feature that is an exact multiple of the target can result in a nearly perfect training score. The model, however, will likely not generalize to out-of-sample data. We advise you to explore the data prior to model training and to drop columns that "leak" the target information. - The training data uses **features that are not known into the future**, up to the forecast horizon. AutoML's regression models currently assume all features are known to the forecast horizon. 
We advise you to explore your data prior to training and remove any feature columns that are only known historically.-- There are **significant structural differences - regime changes - between the training, validation, or test portions of the data**. For example, consider the effect of the COVID-19 pandemic on demand for almost any good during 2020 and 2021; this is a classic example of a regime change. Over-fitting due to regime change is the most challenging issue to address because it's highly scenario dependent and can require deep knowledge to identify. As a first line of defense, try to reserve 10 - 20% of the total history for validation, or cross-validation, data. This is not always possible if the training history is short, but is generally a best practice. See our guide on [configuring validation](./how-to-auto-train-forecast.md#training-and-validation-data) for more information. +- There are **significant structural differences - regime changes - between the training, validation, or test portions of the data**. For example, consider the effect of the COVID-19 pandemic on demand for almost any good during 2020 and 2021; this is a classic example of a regime change. Over-fitting due to regime change is the most challenging issue to address because it's highly scenario dependent and can require deep knowledge to identify. As a first line of defense, try to reserve 10 - 20% of the total history for validation, or cross-validation, data. It isn't always possible to reserve this amount of validation data if the training history is short, but it is a best practice. See our guide on [configuring validation](./how-to-auto-train-forecast.md#training-and-validation-data) for more information. + ## What if my time series data doesn't have regularly spaced observations? AutoML's forecasting models all require that training data have regularly spaced observations with respect to the calendar. This requirement includes cases like monthly or yearly observations where the number of days between observations may vary. There are two cases where time dependent data may not meet this requirement: -- The data has a well defined frequency, but **there are missing observations that create gaps in the series**. In this case, AutoML will attempt to detect the frequency, fill in new observations for the gaps, and impute missing target and feature values therein. The imputation methods can be optionally configured by the user via SDK settings or through the Web UI. See the [custom featurization](./how-to-auto-train-forecast.md#customize-featurization) +- The data has a well defined frequency, but **there are missing observations that create gaps in the series**. In this case, AutoML will attempt to detect the frequency, fill in new observations for the gaps, and impute missing target and feature values therein. The imputation methods can be optionally configured by the user via SDK settings or through the Web UI. See the [custom featurization](./how-to-auto-train-forecast.md#custom-featurization) guide for more information on configuring imputation. -- **The data does not have a well defined frequency**. That is, the duration between observations does not have a discernible pattern. Transactional data, like that from a point-of-sales system, is one example. In this case, you can set AutoML to aggregate your data to a chosen frequency. You can choose a regular frequency that best suites the data and the modeling objectives. 
See the [data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation) section for more information.+- **The data doesn't have a well defined frequency**. That is, the duration between observations doesn't have a discernible pattern. Transactional data, like that from a point-of-sales system, is one example. In this case, you can set AutoML to aggregate your data to a chosen frequency. You can choose a regular frequency that best suits the data and the modeling objectives. See the [data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation) section for more information. ## How do I choose the primary metric? The primary metric is very important since its value on validation data determin > We do not recommend using the R2 score, or _R_<sup>2</sup>, as a primary metric for forecasting. > [!NOTE]-> AutoML does not support custom, or user-provided functions for the primary metric. You must choose one of the predefined primary metrics that AutoML supports. +> AutoML doesn't support custom, or user-provided functions for the primary metric. You must choose one of the predefined primary metrics that AutoML supports. ## How can I improve the accuracy of my model? The primary metric is very important since its value on validation data determin - Add new features that may help predict the target. Subject matter expertise can help greatly when selecting training data. - Compare validation and test metric values and determine if the selected model is under-fitting or over-fitting the data. This knowledge can guide you to a better training configuration. For example, you might determine that you need to use more cross-validation folds in response to over-fitting. -### How do I fix an Out-Of-Memory error? +## Will AutoML always select the same best model given the same training data and configuration? ++[AutoML's model search process](./concept-automl-forecasting-sweeping.md#model-sweeping) is not deterministic, so it does not always select the same model given the same data and configuration. ++## How do I fix an Out-Of-Memory error? There are two types of memory issues: - RAM Out-of-Memory For default AutoML settings, RAM Out-of-Memory may be fixed by using compute nod Disk Out-of-Memory errors may be resolved by deleting the compute cluster and creating a new one. -### What advanced forecasting scenarios are supported by AutoML? +## What advanced forecasting scenarios are supported by AutoML? We support the following advanced prediction scenarios: - Quantile forecasts - Robust model evaluation via [rolling forecasts](./how-to-auto-train-forecast.md#evaluating-model-accuracy-with-a-rolling-forecast) - Forecasting beyond the forecast horizon-- Forecasting when there is a gap in time between training and forecasting periods.+- Forecasting when there's a gap in time between training and forecasting periods. -See the [advanced forecasting scenarios notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for examples and details. +See the [advanced forecasting scenarios notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for examples and details. ## How do I view metrics from forecasting training jobs? 
-See our [metrics in studio UI](./v1/how-to-log-view-metrics.md#view-run-metrics-in-the-studio-ui) guide for finding training and validation metric values. Note that you can view metrics for any forecasting model trained in AutoML by navigating to a model from the AutoML job UI in the studio and clicking on the "metrics" tab. +See our [metrics in studio UI](how-to-log-view-metrics.md#view-jobsruns-information-in-the-studio) guide for finding training and validation metric values. You can view metrics for any forecasting model trained in AutoML by navigating to a model from the AutoML job UI in the studio and clicking on the "metrics" tab. :::image type="content" source="media/how-to-automl-forecasting-faq/metrics_UI.png" alt-text="A view of the metric interface for an AutoML forecasting model."::: ## How do I debug failures with forecasting training jobs? -If your AutoML forecasting job fails, you will see an error message in the studio UI that may help to diagnose and fix the problem. The best source of information about the failure beyond the error message is the driver log for the job. Check out the [run logs](./v1/how-to-log-view-metrics.md#view-and-download-log-files-for-a-run) guide for instructions on finding driver logs. +If your AutoML forecasting job fails, you'll see an error message in the studio UI that may help to diagnose and fix the problem. The best source of information about the failure beyond the error message is the driver log for the job. Check out the [run logs](how-to-log-view-metrics.md#view-and-download-diagnostic-logs) guide for instructions on finding driver logs. > [!NOTE] > For Many Models or HTS job, training is usually on multi-node compute clusters. Logs for these jobs are present for each node IP address. You will need to search for error logs in each node in this case. The error logs, along with the driver logs, are in the `user_logs` folder for each node IP. -### What is a workspace / environment / experiment/ compute instance / compute target? +## What is a workspace / environment / experiment/ compute instance / compute target? If you aren't familiar with Azure Machine Learning concepts, start with the ["What is AzureML"](overview-what-is-azure-machine-learning.md) article and the [workspaces](./concept-workspace.md) article. |
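As a complement to the studio metrics view described above, metrics for a completed run can also be read programmatically with MLflow; a minimal sketch, assuming the tracking URI already points at the workspace and using a placeholder run ID:

```python
import mlflow

# Assumes the MLflow tracking URI has already been set to the workspace URI,
# for example: mlflow.set_tracking_uri(azureml_mlflow_uri)
run = mlflow.get_run(run_id="<automl-trial-run-id>")  # placeholder run ID

for name, value in run.data.metrics.items():
    print(name, value)
```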
machine-learning | How To Configure Auto Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-features.md | Guardrail|Status|Condition for trigger You can customize your featurization settings to ensure that the data and features that are used to train your ML model result in relevant predictions. -To customize featurizations, specify `"featurization": FeaturizationConfig` in your `AutoMLConfig` object. If you're using the Azure Machine Learning studio for your experiment, see the [how-to article](how-to-use-automated-ml-for-ml-models.md#customize-featurization). To customize featurization for forecastings task types, refer to the [forecasting how-to](how-to-auto-train-forecast.md#customize-featurization). +To customize featurizations, specify `"featurization": FeaturizationConfig` in your `AutoMLConfig` object. If you're using the Azure Machine Learning studio for your experiment, see the [how-to article](how-to-use-automated-ml-for-ml-models.md#customize-featurization). To customize featurization for forecasting task types, refer to the [forecasting how-to](v1/how-to-auto-train-forecast-v1.md#customize-featurization). A minimal `FeaturizationConfig` sketch is shown below. Supported customizations include: |
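To make the `FeaturizationConfig` usage concrete, here's a minimal SDK v1 sketch; the column names and imputation choices are hypothetical:

```python
from azureml.automl.core.featurization import FeaturizationConfig

featurization_config = FeaturizationConfig()

# Treat a numeric-looking column as categorical (column name is hypothetical)
featurization_config.add_column_purpose("store_id", "Categorical")

# Impute missing values in a hypothetical 'price' column with the median
featurization_config.add_transformer_params("Imputer", ["price"], {"strategy": "median"})

# Pass the object to AutoMLConfig, for example:
# AutoMLConfig(..., featurization=featurization_config)
```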
machine-learning | How To Log Mlflow Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md | accuracy = accuracy_score(y_test, y_pred) ``` > [!TIP]-> If you are using Machine Learning pipelines, like for instance [Scikit-Learn pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), use the `autolog` functionality of that flavor for logging models. Models are automatically logged when the `fit()` method is called on the pipeline object. The notebook [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb) demonstrates how to log a model with preprocessing using pipelines. +> If you are using Machine Learning pipelines, like for instance [Scikit-Learn pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), use the `autolog` functionality of that flavor for logging models. Models are automatically logged when the `fit()` method is called on the pipeline object. The notebook [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb) demonstrates how to log a model with preprocessing using pipelines. A minimal autolog sketch is shown below.

## Logging models with a custom signature, environment or samples |
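A minimal sketch of the autologging pattern described in the tip above, using a small scikit-learn pipeline (the dataset and estimator choices are illustrative):

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

mlflow.sklearn.autolog()  # flavor-level autologging

X, y = load_iris(return_X_y=True)
pipeline = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=200))])

with mlflow.start_run():
    pipeline.fit(X, y)  # the fitted pipeline, params, and metrics are logged automatically
```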
machine-learning | How To Migrate From V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md | Title: 'Migrate from v1 to v2' + Title: 'Upgrade from v1 to v2' -description: Migrate from v1 to v2 of Azure Machine Learning REST APIs, CLI extension, and Python SDK. +description: Upgrade from v1 to v2 of Azure Machine Learning REST APIs, CLI extension, and Python SDK. This section gives an overview of specific resources and assets in Azure ML. See ### Workspace -Workspaces don't need to be migrated with v2. You can use the same workspace, regardless of whether you're using v1 or v2. +Workspaces don't need to be upgraded with v2. You can use the same workspace, regardless of whether you're using v1 or v2. -If you create workspaces using automation, do consider migrating the code for creating a workspace to v2. Typically Azure resources are managed via Azure Resource Manager (and Bicep) or similar resource provisioning tools. Alternatively, you can use the [CLI (v2) and YAML files](how-to-manage-workspace-cli.md#create-a-workspace). +If you create workspaces using automation, do consider upgrading the code for creating a workspace to v2. Typically Azure resources are managed via Azure Resource Manager (and Bicep) or similar resource provisioning tools. Alternatively, you can use the [CLI (v2) and YAML files](how-to-manage-workspace-cli.md#create-a-workspace). For a comparison of SDK v1 and v2 code, see [Workspace management in SDK v1 and SDK v2](migrate-to-v2-resource-workspace.md). For a comparison of SDK v1 and v2 code, see [Compute management in SDK v1 and SD ### Endpoint and deployment (endpoint and web service in v1) -With SDK/CLI v1, you can deploy models on ACI or AKS as web services. Your existing v1 model deployments and web services will continue to function as they are, but using SDK/CLI v1 to deploy models on ACI or AKS as web services is now considered as **legacy**. For new model deployments, we recommend upgrading to v2. In v2, we offer [managed endpoints or Kubernetes endpoints](./concept-endpoints.md). The following table guides our recommendation: -|Endpoint type in v2|Migrate from|Notes| +|Endpoint type in v2|Upgrade from|Notes| |-|-|-| |Local|ACI|Quick test of model deployment locally; not for production.| |Managed online endpoint|ACI, AKS|Enterprise-grade managed model deployment infrastructure with near real-time responses and massive scaling for production.| With SDK/CLI v1, you can deploy models on ACI or AKS as web services. Your exist |Azure Arc Kubernetes|N/A|Manage your own Kubernetes cluster(s) in other clouds or on-premises, giving flexibility and granular control at the cost of IT overhead.| For a comparison of SDK v1 and v2 code, see [Upgrade deployment endpoints to SDK v2](migrate-to-v2-deploy-endpoints.md).-For upgrade steps from your existing ACI web services to managed online endpoints, see our [upgrade guide article](migrate-to-v2-managed-online-endpoints.md) and [blog](https://aka.ms/acimoemigration). 
+For migration steps from your existing ACI web services to managed online endpoints, see our [upgrade guide article](migrate-to-v2-managed-online-endpoints.md) and [blog](https://aka.ms/acimoemigration). ### Jobs (experiments, runs, pipelines in v1) For details about Key Vault, see [Use authentication credential secrets in Azure ## Scenarios across the machine learning lifecycle -There are a few scenarios that are common across the machine learning lifecycle using Azure ML. We'll look at a few and give general recommendations for migrating to v2. +There are a few scenarios that are common across the machine learning lifecycle using Azure ML. We'll look at a few and give general recommendations for upgrading to v2. ### Azure setup The solution accelerator for MLOps with v2 is being developed at https://github. ### A note on GitOps with v2 -A key paradigm with v2 is serializing machine learning entities as YAML files for source control with `git`, enabling better GitOps approaches than were possible with v1. For instance, you could enforce policy by which only a service principal used in CI/CD pipelines can create/update/delete some or all entities, ensuring changes go through a governed process like pull requests with required reviewers. Since the files in source control are YAML, they're easy to diff and track changes over time. You and your team may consider shifting to this paradigm as you migrate to v2. +A key paradigm with v2 is serializing machine learning entities as YAML files for source control with `git`, enabling better GitOps approaches than were possible with v1. For instance, you could enforce policy by which only a service principal used in CI/CD pipelines can create/update/delete some or all entities, ensuring changes go through a governed process like pull requests with required reviewers. Since the files in source control are YAML, they're easy to diff and track changes over time. You and your team may consider shifting to this paradigm as you upgrade to v2. You can obtain a YAML representation of any entity with the CLI via `az ml <entity> show --output yaml`. Note that this output will have system-generated properties, which can be ignored or deleted. |
machine-learning | How To Secure Training Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md | In this article you learn how to secure the following training compute resources The following configurations are in addition to those listed in the [Prerequisites](#prerequisites) section, and are specific to **creating** a compute instances/clusters configured for no public IP: -+ Your workspace must use a private endpoint to connect to the VNet. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md). ++ You must use a workspace private endpoint for the compute resource to communicate with Azure Machine Learning services from the VNet. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md). + In your VNet, allow **outbound** traffic to the following service tags or fully qualified domain names (FQDN): The following configurations are in addition to those listed in the [Prerequisit - [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md). - [Azure's outbound connectivity methods](/azure/load-balancer/load-balancer-outbound-connections#scenarios). + For more information on service tags that can be used with Azure Firewall, see the [Virtual network service tags](/azure/virtual-network/service-tags-overview) article. + Use the following information to create a compute instance or cluster with no public IP address: # [Azure CLI](#tab/cli) |
machine-learning | How To Submit Spark Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md | If the CLI or SDK code defines an option to use managed identity, Azure Machine ``` ### Attach user assigned managed identity using `ARMClient`-1. Install [DMClient](https://github.com/projectkudu/ARMClient), a simple command line tool that invokes the Azure Resource Manager API. +1. Install [ARMClient](https://github.com/projectkudu/ARMClient), a simple command line tool that invokes the Azure Resource Manager API. 1. Create a JSON file that defines the user-assigned managed identity that should be attached to the workspace: ```json { ml_client.jobs.stream(returned_spark_job.name) > To use an attached Synapse Spark pool, define the `compute` parameter in the `azure.ai.ml.spark` function, instead of `resources`. # [Studio UI](#tab/ui)-This functionality isn't available in the Studio UI. The Studio UI doesn't support this feature. -- ### Submit a standalone Spark job from Azure Machine Learning Studio UI To submit a standalone Spark job using the Azure Machine Learning Studio UI: To submit a standalone Spark job using the Azure Machine Learning Studio UI: 1. Review the job specification before submitting it. 1. Select **Create** to submit the standalone Spark job. ++ ## Spark component in a pipeline job A Spark component offers the flexibility to use the same component in multiple [Azure Machine Learning pipelines](./concept-ml-pipelines.md), as a pipeline step. |
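A minimal sketch of defining and submitting a standalone Spark job with the `azure.ai.ml.spark` function referenced above; the entry file, sizing values, and instance type are illustrative assumptions:

```python
from azure.ai.ml import MLClient, spark
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

spark_job = spark(
    display_name="data-wrangling-spark-job",  # illustrative name
    code="./src",                             # folder containing the entry script
    entry={"file": "process_data.py"},        # hypothetical entry file
    driver_cores=1,
    driver_memory="2g",
    executor_cores=2,
    executor_memory="2g",
    executor_instances=2,
    # Managed (Automatic) Spark compute; use compute="<synapse-pool>" instead of
    # resources to target an attached Synapse Spark pool
    resources={"instance_type": "Standard_E8S_V3", "runtime_version": "3.2.0"},
)

returned_spark_job = ml_client.jobs.create_or_update(spark_job)
ml_client.jobs.stream(returned_spark_job.name)
```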
machine-learning | How To Track Experiments Mlflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md | To compare and evaluate the quality of your jobs and models in AzureML Studio, u The [MLflow with Azure ML notebooks](https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mlflow) demonstrate and expand upon concepts presented in this article. - * [Training and tracking a classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments using MLflow, log models and combine multiple flavors into pipelines. - * [Manage experiments and runs with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters and artifacts from Azure ML using MLflow. + * [Training and tracking a classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments using MLflow, log models and combine multiple flavors into pipelines. + * [Manage experiments and runs with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/runs-management/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters and artifacts from Azure ML using MLflow. A minimal query sketch is shown below. ## Support matrix for querying runs and experiments |
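The querying workflow those notebooks demonstrate reduces to a few MLflow calls; a minimal sketch with a placeholder experiment name (assumes a reasonably recent MLflow version that supports `experiment_names`):

```python
import mlflow

# Assumes the tracking URI already points at the Azure ML workspace
runs = mlflow.search_runs(experiment_names=["my-experiment"])  # placeholder name
print(runs.head())  # a DataFrame with one row per run, plus params.* and metrics.* columns
```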
machine-learning | How To Use Mlflow Azure Databricks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md | In this article, you will learn: ### Example notebooks -The [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb) demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also includes how to handle cases where you also want to track the experiments and models with the MLflow instance in Azure Databricks and leverage Azure ML for deployment. +The [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb) demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also includes how to handle cases where you also want to track the experiments and models with the MLflow instance in Azure Databricks and leverage Azure ML for deployment. ## Install libraries mlflow.set_registry_uri(azureml_mlflow_uri) > [!NOTE] > The value of `azureml_mlflow_uri` was obtained in the same way it was demonstrated in [Set MLflow Tracking to only track in your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace). A minimal sketch is shown below. -For a complete example about this scenario please check the example [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb). +For a complete example about this scenario, please check the example [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb). ## Deploying and consuming models registered in Azure Machine Learning Models registered in Azure Machine Learning Service using MLflow can be consumed You can leverage the `azureml-mlflow` plugin to deploy a model to your Azure Machine Learning workspace. Check [How to deploy MLflow models](how-to-deploy-mlflow-models.md) page for a complete detail about how to deploy models to the different targets. > [!IMPORTANT]-> Models need to be registered in Azure Machine Learning registry in order to deploy them. If your models happen to be registered in the MLflow instance inside Azure Databricks, you will have to register them again in Azure Machine Learning. If this is you case, please check the example [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb) +> Models need to be registered in Azure Machine Learning registry in order to deploy them. If your models happen to be registered in the MLflow instance inside Azure Databricks, you will have to register them again in Azure Machine Learning. If this is your case, please check the example [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb) ### Deploy models to ADB for batch scoring using UDFs |
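A minimal sketch of obtaining `azureml_mlflow_uri` and pointing both tracking and registry at Azure Machine Learning; the workspace identifiers are placeholders:

```python
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder identifiers for the target workspace
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(azureml_mlflow_uri)   # track runs in Azure ML
mlflow.set_registry_uri(azureml_mlflow_uri)   # register models in Azure ML
```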
machine-learning | Interactive Data Wrangling With Apache Spark Azure Ml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml.md | Data in the Azure storage account should become accessible once the user identit Azure Machine Learning offers Managed (Automatic) Spark compute, and [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md), for interactive data wrangling with Apache Spark, in Azure Machine Learning Notebooks. The Managed (Automatic) Spark compute does not require creation of resources in the Azure Synapse workspace. Instead, a fully managed automatic Spark compute becomes directly available in the Azure Machine Learning Notebooks. Using a Managed (Automatic) Spark compute is the easiest approach to access a Spark cluster in Azure Machine Learning. -### Create and configure Managed (Automatic) Spark compute in Azure Machine Learning Notebooks +### Managed (Automatic) Spark compute in Azure Machine Learning Notebooks -We can create a Managed (Automatic) Spark compute from the Machine Learning Notebooks user interface. To create a notebook, a first-time user should select **Notebooks** from the left panel in Azure Machine Learning studio, and then select **Start with an empty notebook**. Azure Machine Learning studio offers additional options to upload existing notebooks, and to clone notebooks from a git repository. +A Managed (Automatic) Spark compute is available in Azure Machine Learning Notebooks by default. To access it in a notebook, select **AzureML Spark Compute** under **Azure Machine Learning Spark** from the **Compute** selection menu. --To create and configure a Managed (Automatic) Spark compute in an open notebook: --1. Select the ellipses **(…)** next to the **Compute** selection menu. -1. Select **+ Create Azure ML compute**. Sometimes, the ellipses may not appear. In this case, directly select the **+** icon next to the **Compute** selection menu. -- :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/create-azure-ml-compute-resource-in-a-notebook.png" alt-text="Screenshot highlighting the Create Azure ML compute option of a specific Azure Notebook tab."::: --1. Select **Azure Machine Learning Spark**. -1. Select **Create**. -- :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/add-azure-machine-learning-spark-compute-type.png" alt-text="Screenshot highlighting the Azure Machine Learning Spark option at the Add new compute type screen."::: --1. Under **Azure Machine Learning Spark**, select **AzureML Spark Compute** from the **Compute** selection menu -- :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/select-azure-ml-spark-compute.png" alt-text="Screenshot highlighting the selected Azure Machine Learning Spark option at the Add new compute type screen."::: The Notebooks UI also provides options for Spark session configuration, for the Managed (Automatic) Spark compute. To configure a Spark session: |
machine-learning | Migrate To V2 Command Job | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-command-job.md | In SDK v2, "experiments" and "runs" are consolidated into jobs. A job has a type. Most jobs are command jobs that run a `command`, like `python main.py`. What runs in a job is agnostic to any programming language, so you can run `bash` scripts, invoke `python` interpreters, run a bunch of `curl` commands, or anything else. -To upgrade, you'll need to change your code for submitting jobs to SDK v2. What you run _within_ the job doesn't need to be migrated to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more details, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md). +To upgrade, you'll need to change your code for submitting jobs to SDK v2. What you run _within_ the job doesn't need to be upgraded to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more details, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md). This article gives a comparison of scenario(s) in SDK v1 and SDK v2. |
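As a hedged sketch of what the v2 submission code looks like after this change, the following submits a command job with the v2 SDK (`azure.ai.ml`); the subscription, resource group, workspace, compute, and environment names are placeholders.

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Connect to the workspace (placeholder identifiers).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Define a command job; the training script itself needs no SDK v2 code.
job = command(
    code="./src",                      # folder containing main.py
    command="python main.py",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",  # placeholder environment
    compute="cpu-cluster",             # placeholder compute target
    display_name="train-model",
)

returned_job = ml_client.jobs.create_or_update(job)
```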
machine-learning | Migrate To V2 Execution Automl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-automl.md | Title: Upgrade AutoML to SDK v2 -description: Migrate AutoML from v1 to v2 of Azure Machine Learning SDK +description: Upgrade AutoML from v1 to v2 of Azure Machine Learning SDK |
machine-learning | Migrate To V2 Execution Hyperdrive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-hyperdrive.md | A job has a type. Most jobs are command jobs that run a `command`, like `python A sweep job is another type of job, which defines sweep settings and can be initiated by calling the sweep method of command. -To migrate, you'll need to change your code for defining and submitting your hyperparameter tuning experiment to SDK v2. What you run _within_ the job doesn't need to be migrated to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more information, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md). +To upgrade, you'll need to change your code for defining and submitting your hyperparameter tuning experiment to SDK v2. What you run _within_ the job doesn't need to be upgraded to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more information, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md). This article gives a comparison of scenario(s) in SDK v1 and SDK v2. |
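For orientation, here's a hedged sketch of the v2 pattern this article upgrades to: parameterize a command job, then call its `sweep` method. The script, environment, compute, metric name, and search space values are placeholders.

```python
from azure.ai.ml import command
from azure.ai.ml.sweep import Choice, Uniform

# A command job whose inputs become the search space (placeholder script and environment).
base_job = command(
    code="./src",
    command="python train.py --learning_rate ${{inputs.learning_rate}} --batch_size ${{inputs.batch_size}}",
    inputs={"learning_rate": 0.01, "batch_size": 32},
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="cpu-cluster",
)

# Override the inputs with distributions, then call sweep() to create a sweep job.
job_for_sweep = base_job(
    learning_rate=Uniform(min_value=0.001, max_value=0.1),
    batch_size=Choice(values=[16, 32, 64]),
)
sweep_job = job_for_sweep.sweep(
    sampling_algorithm="random",
    primary_metric="accuracy",   # must match a metric the training script logs (via MLflow)
    goal="Maximize",
    max_total_trials=20,
)
```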
machine-learning | Migrate To V2 Execution Parallel Run Step | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-parallel-run-step.md | In SDK v2, "Parallel run step" is consolidated into job concept as `parallel job To upgrade your current SDK v1 parallel run step to v2, you'll need to - Use `parallel_run_function` to create a parallel job by replacing `ParallelRunConfig` and `ParallelRunStep` in v1.-- Migrate your v1 pipeline to v2. Then invoke your v2 parallel job as a step in your v2 pipeline. See [how to migrate pipeline from v1 to v2](migrate-to-v2-execution-pipeline.md) for the details about pipeline migration.+- Upgrade your v1 pipeline to v2. Then invoke your v2 parallel job as a step in your v2 pipeline. See [how to upgrade pipeline from v1 to v2](migrate-to-v2-execution-pipeline.md) for the details about pipeline upgrade. -> Note: User __entry script__ is compatible between v1 parallel run step and v2 parallel job. So you can keep using the same entry_script.py when you migrate your parallel run job. +> Note: The user __entry script__ is compatible between the v1 parallel run step and the v2 parallel job, so you can keep using the same `entry_script.py` when you upgrade your parallel run job. This article gives a comparison of scenario(s) in SDK v1 and SDK v2. In the following examples, we'll build a parallel job to predict input data in a pipeline job. You'll see how to build a parallel job, and how to use it in a pipeline job for both SDK v1 and SDK v2. |
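As a hedged illustration of the v2 side of this upgrade, the sketch below builds a parallel job with `parallel_run_function` while reusing the v1 entry script unchanged; the names, paths, and environment are placeholders.

```python
from azure.ai.ml import Input, Output
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.parallel import parallel_run_function, RunFunction

# Rough v2 equivalent of v1 ParallelRunConfig + ParallelRunStep; the v1 entry script is reused as-is.
batch_inference = parallel_run_function(
    name="batch_score",
    inputs={"input_data": Input(type=AssetTypes.MLTABLE)},
    outputs={"scores": Output(type=AssetTypes.URI_FOLDER)},
    input_data="${{inputs.input_data}}",
    instance_count=2,
    mini_batch_size="10",
    task=RunFunction(
        code="./scripts",                # placeholder folder holding the entry script
        entry_script="entry_script.py",  # unchanged v1 entry script
        environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",  # placeholder
        append_row_to="${{outputs.scores}}",
    ),
)
```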
machine-learning | Migrate To V2 Execution Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md | A job has a type. Most jobs are command jobs that run a `command`, like `python A `pipeline` is another type of job, which defines child jobs that may have input/output relationships, forming a directed acyclic graph (DAG). -To migrate, you'll need to change your code for defining and submitting the pipelines to SDK v2. What you run _within_ the child job doesn't need to be migrated to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more information, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md). +To upgrade, you'll need to change your code for defining and submitting the pipelines to SDK v2. What you run _within_ the child job doesn't need to be upgraded to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more information, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md). This article gives a comparison of scenario(s) in SDK v1 and SDK v2. In the following examples, we'll build three steps (train, score and evaluate) into a dummy pipeline job. This demonstrates how to build pipeline jobs using SDK v1 and SDK v2, and how to consume data and transfer data between steps. |
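For orientation, here's a hedged v2 sketch of such a three-step DAG using the `@pipeline` decorator; the component YAML paths, input/output names, and compute are placeholders (output names depend on how your components are defined).

```python
from azure.ai.ml import load_component
from azure.ai.ml.dsl import pipeline

# Load the step components from local YAML definitions (placeholder paths).
train = load_component(source="./components/train.yml")
score = load_component(source="./components/score.yml")
eval_model = load_component(source="./components/eval.yml")

@pipeline(default_compute="cpu-cluster")  # placeholder compute target
def train_score_eval(pipeline_input_data):
    """Three-step DAG: outputs of one step feed the inputs of the next."""
    train_step = train(training_data=pipeline_input_data)
    score_step = score(model=train_step.outputs.model_output, test_data=pipeline_input_data)
    eval_step = eval_model(scoring_result=score_step.outputs.score_output)
    return {"eval_report": eval_step.outputs.eval_output}
```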
machine-learning | Reference Machine Learning Cloud Parity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md | The information in the rest of this document provides information on what featur | Managed compute Instances for integrated Notebooks | GA | YES | YES | | Jupyter, JupyterLab Integration | GA | YES | YES | | Virtual Network (VNet) support | GA | YES | YES |+| [Configure Apache Spark pools to perform data wrangling](apache-spark-azure-ml-concepts.md) | Public Preview | No | No | | **SDK support** | | | | | [Python SDK support](/python/api/overview/azure/ml/) | GA | YES | YES | | **[Security](concept-enterprise-security.md)** | | | | |
machine-learning | Tutorial Azure Ml In A Day | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-azure-ml-in-a-day.md | |
machine-learning | How To Auto Train Forecast V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-forecast-v1.md | + + Title: Set up AutoML for time-series forecasting (SDKv1) ++description: Set up Azure Machine Learning automated ML to train time-series forecasting models with the Azure Machine Learning Python SDKv1. ++++++++ Last updated : 11/18/2021+show_latex: true +++# Set up AutoML to train a time-series forecasting model with Python (SDKv1) +++> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning SDK you are using:"] +> * [v1](how-to-auto-train-forecast-v1.md) +> * [v2 (current version)](../how-to-auto-train-forecast.md) ++In this article, you learn how to set up AutoML training for time-series forecasting models with Azure Machine Learning automated ML in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/). ++To do so, you: ++> [!div class="checklist"] +> * Prepare data for time series modeling. +> * Configure specific time-series parameters in an [`AutoMLConfig`](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) object. +> * Run predictions with time-series data. ++For a low-code experience, see the [Tutorial: Forecast demand with automated machine learning](../tutorial-automated-ml-forecast.md) for a time-series forecasting example using automated ML in the [Azure Machine Learning studio](https://ml.azure.com/). ++Unlike classical time series methods, in automated ML, past time-series values are "pivoted" to become additional dimensions for the regressor together with other predictors. This approach incorporates multiple contextual variables and their relationship to one another during training. Since multiple factors can influence a forecast, this method aligns well with real-world forecasting scenarios. For example, when forecasting sales, interactions of historical trends, exchange rate, and price all jointly drive the sales outcome. ++## Prerequisites ++For this article, you need: ++* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](../quickstart-create-resources.md). ++* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [how-to](../how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns. ++ [!INCLUDE [automl-sdk-version](../../../includes/machine-learning-automl-sdk-version.md)] ++## Training and validation data ++The most important difference between a forecasting regression task type and a regression task type within automated ML is including a feature in your training data that represents a valid time series. A regular time series has a well-defined and consistent frequency and has a value at every sample point in a continuous time span. ++> [!IMPORTANT] +> When training a model for forecasting future values, ensure all the features used in training can be used when running predictions for your intended horizon. For example, when creating a demand forecast, including a feature for current stock price could massively increase training accuracy. However, if you intend to forecast with a long horizon, you may not be able to accurately predict future stock values corresponding to future time-series points, and model accuracy could suffer.
++You can specify separate [training data and validation data](concept-automated-ml-v1.md#training-validation-and-test-data) directly in the `AutoMLConfig` object. Learn more about the [AutoMLConfig](#configure-experiment). ++For time series forecasting, only **Rolling Origin Cross Validation (ROCV)** is used for validation by default. ROCV divides the series into training and validation data using an origin time point. Sliding the origin in time generates the cross-validation folds. This strategy preserves the time series data integrity and eliminates the risk of data leakage. +++Pass your training and validation data as one dataset to the parameter `training_data`. Set the number of cross validation folds with the parameter `n_cross_validations` and set the number of periods between two consecutive cross-validation folds with `cv_step_size`. You can also leave either or both parameters empty and AutoML will set them automatically. +++```python +automl_config = AutoMLConfig(task='forecasting', + training_data= training_data, + n_cross_validations="auto", # Could be customized as an integer + cv_step_size = "auto", # Could be customized as an integer + ... + **time_series_settings) +``` +++You can also bring your own validation data. Learn more in [Configure data splits and cross-validation in AutoML](../how-to-configure-cross-validation-data-splits.md#provide-validation-data). ++Learn more about how AutoML applies cross validation to [prevent over-fitting models](../concept-manage-ml-pitfalls.md#prevent-overfitting). ++## Configure experiment ++The [`AutoMLConfig`](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) object defines the settings and data necessary for an automated machine learning task. Configuration for a forecasting model is similar to the setup of a standard regression model, but certain models, configuration options, and featurization steps exist specifically for time-series data. ++### Supported models ++Automated machine learning automatically tries different models and algorithms as part of the model creation and tuning process. As a user, you don't need to specify the algorithm. For forecasting experiments, both native time-series and deep learning models are part of the recommendation system. ++>[!Tip] +> Traditional regression models are also tested as part of the recommendation system for forecasting experiments. See a complete list of the [supported models](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels) in the SDK reference documentation. +++### Configuration settings ++Similar to a regression problem, you define standard training parameters like task type, number of iterations, training data, and number of cross-validations. Forecasting tasks require the `time_column_name` and `forecast_horizon` parameters to configure your experiment. If the data includes multiple time series, such as sales data for multiple stores or energy data across different states, automated ML automatically detects this and sets the `time_series_id_column_names` parameter (preview) for you. You can also include additional parameters to better configure your run. See the [optional configurations](#optional-configurations) section for more detail on what can be included. ++> [!IMPORTANT] +> Automatic time series identification is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities.
For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ++| Parameter name | Description | +|-|-| +|`time_column_name`|Used to specify the datetime column in the input data used for building the time series and inferring its frequency.| +|`forecast_horizon`|Defines how many periods forward you would like to forecast. The horizon is in units of the time series frequency. Units are based on the time interval of your training data, for example, monthly or weekly, that the forecaster should predict out.| ++The following code, +* Leverages the [`ForecastingParameters`](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters) class to define the forecasting parameters for your experiment training +* Sets the `time_column_name` to the `day_datetime` field in the data set. +* Sets the `forecast_horizon` to 50 in order to predict for the entire test set. ++```python +from azureml.automl.core.forecasting_parameters import ForecastingParameters ++forecasting_parameters = ForecastingParameters(time_column_name='day_datetime', + forecast_horizon=50, + freq='W') + +``` ++These `forecasting_parameters` are then passed into your standard `AutoMLConfig` object along with the `forecasting` task type, primary metric, exit criteria, and training data. ++```python +from azureml.core.workspace import Workspace +from azureml.core.experiment import Experiment +from azureml.train.automl import AutoMLConfig +import logging ++automl_config = AutoMLConfig(task='forecasting', + primary_metric='normalized_root_mean_squared_error', + experiment_timeout_minutes=15, + enable_early_stopping=True, + training_data=train_data, + label_column_name=label, + n_cross_validations="auto", # Could be customized as an integer + cv_step_size = "auto", # Could be customized as an integer + enable_ensembling=False, + verbosity=logging.INFO, + forecasting_parameters=forecasting_parameters) +``` ++The amount of data required to successfully train a forecasting model with automated ML is influenced by the `forecast_horizon`, `n_cross_validations`, and `target_lags` or `target_rolling_window_size` values specified when you configure your `AutoMLConfig`. ++The following formula calculates the amount of historic data that would be needed to construct time series features. ++Minimum historic data required: (2x `forecast_horizon`) + #`n_cross_validations` + max(max(`target_lags`), `target_rolling_window_size`) ++An `Error exception` is raised for any series in the dataset that does not meet the required amount of historic data for the relevant settings specified. ++### Featurization steps ++In every automated machine learning experiment, automatic scaling and normalization techniques are applied to your data by default. These techniques are types of **featurization** that help *certain* algorithms that are sensitive to features on different scales. Learn more about default featurization steps in [Featurization in AutoML](../how-to-configure-auto-features.md#automatic-featurization) ++However, the following steps are performed only for `forecasting` task types: ++* Detect time-series sample frequency (for example, hourly, daily, weekly) and create new records for absent time points to make the series continuous.
+* Impute missing values in the target (via forward-fill) and feature columns (using median column values) +* Create features based on time series identifiers to enable fixed effects across different series +* Create time-based features to assist in learning seasonal patterns +* Encode categorical variables to numeric quantities +* Detect non-stationary time series and automatically difference them to mitigate the impact of unit roots. ++To view the full list of possible engineered features generated from time series data, see [TimeIndexFeaturizer Class](/python/api/azureml-automl-runtime/azureml.automl.runtime.featurizer.transformer.timeseries.time_index_featurizer). ++> [!NOTE] +> Automated machine learning featurization steps (feature normalization, handling missing data, +> converting text to numeric, etc.) become part of the underlying model. When using the model for +> predictions, the same featurization steps applied during training are applied to +> your input data automatically. ++#### Customize featurization ++You also have the option to customize your featurization settings to ensure that the data and features that are used to train your ML model result in relevant predictions. ++Supported customizations for `forecasting` tasks include: ++|Customization|Definition| +|--|--| +|**Column purpose update**|Override the auto-detected feature type for the specified column.| +|**Transformer parameter update** |Update the parameters for the specified transformer. Currently supports *Imputer* (fill_value and median).| +|**Drop columns** |Specifies columns to drop from being featurized.| ++To customize featurizations with the SDK, specify `"featurization": FeaturizationConfig` in your `AutoMLConfig` object. Learn more about [custom featurizations](../how-to-configure-auto-features.md#customize-featurization). ++>[!NOTE] +> The **drop columns** functionality is deprecated as of SDK version 1.19. Drop columns from your dataset as part of data cleansing, prior to consuming it in your automated ML experiment. ++```python +featurization_config = FeaturizationConfig() ++# `logQuantity` is a leaky feature, so we remove it. +featurization_config.drop_columns = ['logQuantity'] ++# Force the CPWVOL5 feature to be of numeric type. +featurization_config.add_column_purpose('CPWVOL5', 'Numeric') ++# Fill missing values in the target column, Quantity, with zeroes. +featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0}) ++# Fill missing values in the `INCOME` column with the median value. +featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"}) +``` ++If you're using the Azure Machine Learning studio for your experiment, see [how to customize featurization in the studio](../how-to-use-automated-ml-for-ml-models.md#customize-featurization). ++## Optional configurations ++Additional optional configurations are available for forecasting tasks, such as enabling deep learning and specifying a target rolling window aggregation. A complete list of additional parameters is available in the [ForecastingParameters SDK reference documentation](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters). ++### Frequency & target data aggregation ++Leverage the frequency, `freq`, parameter to help avoid failures caused by irregular data, that is, data that doesn't follow a set cadence, like hourly or daily data. ++For highly irregular data or for varying business needs, users can optionally set their desired forecast frequency, `freq`, and specify the `target_aggregation_function` to aggregate the target column of the time series. Leveraging these two settings in your `AutoMLConfig` object can help save some time on data preparation. ++Supported aggregation operations for target column values include: ++|Function | Description | +|--|--| +|`sum`| Sum of target values | +|`mean`| Mean or average of target values | +|`min`| Minimum value of a target | +|`max`| Maximum value of a target |
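As a hedged illustration of these two settings together, the sketch below requests a daily forecast frequency and sums the target values within each day; the column name and horizon are placeholders.

```python
from azureml.automl.core.forecasting_parameters import ForecastingParameters

# Aggregate irregular observations up to a daily cadence by summing the target column.
forecasting_parameters = ForecastingParameters(
    time_column_name="day_datetime",       # placeholder datetime column
    forecast_horizon=14,                   # placeholder horizon
    freq="D",                              # desired forecast frequency: daily
    target_aggregation_function="sum",     # how target values within each day are combined
)
```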
++### Enable deep learning ++> [!NOTE] +> DNN support for forecasting in Automated Machine Learning is in **preview** and not supported for local runs or runs initiated in Databricks. ++You can also apply deep learning with deep neural networks, DNNs, to improve the scores of your model. Automated ML's deep learning allows for forecasting univariate and multivariate time series data. ++Deep learning models have three intrinsic capabilities: +1. They can learn from arbitrary mappings from inputs to outputs +1. They support multiple inputs and outputs +1. They can automatically extract patterns in input data that span long sequences. ++To enable deep learning, set the `enable_dnn=True` in the `AutoMLConfig` object. ++```python +automl_config = AutoMLConfig(task='forecasting', + enable_dnn=True, + ... + forecasting_parameters=forecasting_parameters) +``` +> [!Warning] +> When you enable DNN for experiments created with the SDK, [best model explanations](how-to-machine-learning-interpretability-automl.md) are disabled. ++To enable DNN for an AutoML experiment created in the Azure Machine Learning studio, see the [task type settings in the studio UI how-to](../how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment). +++### Target rolling window aggregation ++Often the best information a forecaster can have is the recent value of the target. Target rolling window aggregations allow you to add a rolling aggregation of data values as features. Generating and using these features as extra contextual data helps with the accuracy of the trained model. ++For example, say you want to predict energy demand. You might want to add a rolling window feature of three days to account for thermal changes of heated spaces. In this example, create this window by setting `target_rolling_window_size= 3` in the `AutoMLConfig` constructor. ++The table shows resulting feature engineering that occurs when window aggregation is applied. Columns for **minimum, maximum,** and **sum** are generated on a sliding window of three based on the defined settings. Each row has a new calculated feature; in the case of the timestamp for September 8, 2017 4:00AM, the maximum, minimum, and sum values are calculated using the **demand values** for September 8, 2017 1:00AM - 3:00AM. This window of three shifts along to populate data for the remaining rows. ++ ++View a Python code example applying the [target rolling window aggregate feature](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb). ++### Short series handling ++Automated ML considers a time series a **short series** if there are not enough data points to conduct the train and validation phases of model development.
The number of data points varies for each experiment, and depends on the max_horizon, the number of cross validation splits, and the length of the model lookback, that is, the maximum amount of history that's needed to construct the time-series features. ++Automated ML offers short series handling by default with the `short_series_handling_configuration` parameter in the `ForecastingParameters` object. ++To enable short series handling, the `freq` parameter must also be defined. To define an hourly frequency, we will set `freq='H'`. View the frequency string options by visiting the [pandas Time series page DataOffset objects section](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects). To change the default behavior (`short_series_handling_configuration = 'auto'`), update the `short_series_handling_configuration` parameter in your `ForecastingParameters` object. ++```python +from azureml.automl.core.forecasting_parameters import ForecastingParameters ++forecast_parameters = ForecastingParameters(time_column_name='day_datetime', + forecast_horizon=50, + short_series_handling_configuration='auto', + freq='H', + target_lags='auto') +``` +The following table summarizes the available settings for `short_series_handling_config`. + +|Setting|Description| +|--|--| +|`auto`| The default value for short series handling. <br> - _If all series are short_, pad the data. <br> - _If not all series are short_, drop the short series. | +|`pad`| If `short_series_handling_config = pad`, then automated ML adds random values to each short series found. The following lists the column types and what they're padded with: <br> - Object columns with NaNs <br> - Numeric columns with 0 <br> - Boolean/logic columns with False <br> - The target column is padded with random values with mean of zero and standard deviation of 1. | +|`drop`| If `short_series_handling_config = drop`, then automated ML drops the short series, and it will not be used for training or prediction. Predictions for these series will return NaNs. | +|`None`| No series is padded or dropped | ++>[!WARNING] +>Padding may impact the accuracy of the resulting model, since we are introducing artificial data just to get past training without failures. If many of the series are short, then you may also see some impact in explainability results. ++### Non-stationary time series detection and handling ++A time series whose moments (mean and variance) change over time is called **non-stationary**. For example, time series that exhibit stochastic trends are non-stationary by nature. To visualize this, the following image plots a series that is generally trending upward. Now, compute and compare the mean (average) values for the first and the second half of the series. Are they the same? Here, the mean of the series in the first half of the plot is significantly smaller than in the second half. The fact that the mean of the series depends on the time interval one is looking at is an example of time-varying moments. Here, the mean of a series is the first moment. +++Next, let's examine the image below, which plots the original series in first differences, $x_t = y_t - y_{t-1}$ where $x_t$ is the change in retail sales and $y_t$ and $y_{t-1}$ represent the original series and its first lag, respectively. The mean of the series is roughly constant regardless of the time frame one is looking at. This is an example of a first-order stationary time series. We used the first difference because, while the first moment (mean) no longer changes with the time interval, the same cannot be said about the variance, which is a second moment. ++++AutoML machine learning models can't inherently deal with stochastic trends, or other well-known problems associated with non-stationary time series. As a result, their out-of-sample forecast accuracy will be "poor" if such trends are present. ++AutoML automatically analyzes the time series dataset to check whether it's stationary. When non-stationary time series are detected, AutoML applies a differencing transform automatically to mitigate their impact. ++## Run the experiment ++When you have your `AutoMLConfig` object ready, you can submit the experiment. After the model finishes, retrieve the best run iteration. +++```python +ws = Workspace.from_config() +experiment = Experiment(ws, "Tutorial-automl-forecasting") +local_run = experiment.submit(automl_config, show_output=True) +best_run, fitted_model = local_run.get_output() +``` + +## Forecasting with best model ++Use the best model iteration to forecast values for data that wasn't used to train the model. + +### Evaluating model accuracy with a rolling forecast ++Before you put a model into production, you should evaluate its accuracy on a test set held out from the training data. A best practice procedure is a so-called rolling evaluation, which rolls the trained forecaster forward in time over the test set, averaging error metrics over several prediction windows to obtain statistically robust estimates for some set of chosen metrics. Ideally, the test set for the evaluation is long relative to the model's forecast horizon. Estimates of forecasting error may otherwise be statistically noisy and, therefore, less reliable. ++For example, suppose you train a model on daily sales to predict demand up to two weeks (14 days) into the future. If there is sufficient historic data available, you might reserve the final several months to even a year of the data for the test set. The rolling evaluation begins by generating a 14-day-ahead forecast for the first two weeks of the test set. Then, the forecaster is advanced by some number of days into the test set and you generate another 14-day-ahead forecast from the new position. The process continues until you get to the end of the test set. ++To do a rolling evaluation, you call the `rolling_forecast` method of the `fitted_model`, then compute desired metrics on the result. For example, assume you have test set features in a pandas DataFrame called `test_features_df` and the test set actual values of the target in a numpy array called `test_target`. A rolling evaluation using the mean squared error is shown in the following code sample: ++```python +from sklearn.metrics import mean_squared_error +rolling_forecast_df = fitted_model.rolling_forecast( + test_features_df, test_target, step=1) +mse = mean_squared_error( + rolling_forecast_df[fitted_model.actual_column_name], rolling_forecast_df[fitted_model.forecast_column_name]) +``` ++In this sample, the step size for the rolling forecast is set to one, which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size.
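To make the differencing idea concrete, here's a minimal, hypothetical pandas sketch of the transform described above; the series values are invented for illustration.

```python
import pandas as pd

# A trending (non-stationary) series: its mean drifts upward over time.
y = pd.Series(
    [100, 108, 115, 127, 140, 151, 166, 180],
    index=pd.date_range("2019-01-01", periods=8, freq="MS"),
)

# First differences, x_t = y_t - y_{t-1}; the differenced series has a roughly constant mean.
x = y.diff().dropna()
print(x.mean(), x.std())
```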
For more details and examples, see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb). + ### Prediction into the future ++The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function lets you specify when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. By default, the forecast_quantiles() method generates a point forecast or a mean/median forecast, which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ++In the following example, you first replace all values in `y_pred` with `NaN`. The forecast origin is at the end of training data in this case. However, if you replaced only the second half of `y_pred` with `NaN`, the function would leave the numerical values in the first half unmodified, but forecast the `NaN` values in the second half. The function returns both the forecasted values and the aligned features. ++You can also use the `forecast_destination` parameter in the `forecast_quantiles()` function to forecast values up to a specified date. ++```python +label_query = test_labels.copy().astype(float) +label_query.fill(np.nan) +label_fcst, data_trans = fitted_model.forecast_quantiles( + test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8)) +``` ++Often, customers want to understand the predictions at a specific quantile of the distribution, for example, when the forecast is used to control inventory like grocery items or virtual machines for a cloud service. In such cases, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". The following demonstrates how to specify which quantiles you'd like to see for your predictions, such as 50th or 95th percentile. If you don't specify a quantile, like in the aforementioned code example, then only the 50th percentile predictions are generated. ++```python +# specify which quantiles you would like +fitted_model.quantiles = [0.05, 0.5, 0.9] +fitted_model.forecast_quantiles( + test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8)) +``` ++You can calculate model metrics like root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the model's performance. See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example.
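As a hedged sketch of that metric calculation, assuming arrays of actual and forecasted target values are already in hand (the values below are invented for illustration):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Placeholder arrays of actual and forecasted target values.
y_true = np.array([120.0, 135.0, 128.0, 140.0])
y_pred = np.array([118.0, 131.0, 133.0, 137.0])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100  # in percent; assumes no zero actuals

print(f"RMSE: {rmse:.2f}, MAPE: {mape:.2f}%")
```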
++After the overall model accuracy has been determined, the most realistic next step is to use the model to forecast unknown future values. ++Supply a data set in the same format as the test set `test_dataset` but with future datetimes, and the resulting prediction set is the forecasted values for each time-series step. Assume the last time-series records in the data set were for 12/31/2018. To forecast demand for the next day (or as many periods as you need to forecast, <= `forecast_horizon`), create a single time series record for each store for 01/01/2019. ++```output +day_datetime,store,week_of_year +01/01/2019,A,1 +01/01/2019,B,1 +``` ++Repeat the necessary steps to load this future data to a dataframe and then run `best_run.forecast_quantiles(test_dataset)` to predict future values. ++> [!NOTE] +> In-sample predictions are not supported for forecasting with automated ML when `target_lags` and/or `target_rolling_window_size` are enabled. ++## Forecasting at scale ++There are scenarios where a single machine learning model is insufficient and multiple machine learning models are needed. For instance, predicting sales for each individual store for a brand, or tailoring an experience to individual users. Building a model for each instance can lead to improved results on many machine learning problems. ++Grouping is a concept in time series forecasting that allows time series to be combined to train an individual model per group. This approach can be particularly helpful if you have time series that require smoothing or filling, or entities in the group that can benefit from the history or trends of other entities. Many models and hierarchical time series forecasting are solutions powered by automated machine learning for these large-scale forecasting scenarios. ++### Many models ++The Azure Machine Learning many models solution with automated machine learning allows users to train and manage millions of models in parallel. The many models solution accelerator leverages [Azure Machine Learning pipelines](../concept-ml-pipelines.md) to train the model. Specifically, a [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline%28class%29) object and [ParallelRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunstep) are used and require specific configuration parameters set through the [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig). +++The following diagram shows the workflow for the many models solution. ++ ++The following code demonstrates the key parameters users need to set up their many models run. See the [Many Models - Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) for a many models forecasting example. ++```python +from azureml.train.automl.runtime._many_models.many_models_parameters import ManyModelsTrainParameters ++partition_column_names = ['Store', 'Brand'] +automl_settings = {"task" : 'forecasting', + "primary_metric" : 'normalized_root_mean_squared_error', + "iteration_timeout_minutes" : 10, # This needs to be changed based on the dataset. Explore how long training is taking before setting this value
+ "iterations" : 15, + "experiment_timeout_hours" : 1, + "label_column_name" : 'Quantity', + "n_cross_validations" : "auto", # Could be customized as an integer + "cv_step_size" : "auto", # Could be customized as an integer + "time_column_name": 'WeekStarting', + "max_horizon" : 6, + "track_child_runs": False, + "pipeline_fetch_max_batch_size": 15,} ++mm_parameters = ManyModelsTrainParameters(automl_settings=automl_settings, partition_column_names=partition_column_names) ++``` ++### Hierarchical time series forecasting ++In most applications, customers have a need to understand their forecasts at a macro and micro level of the business, whether that's predicting sales of products at different geographic locations or understanding the expected workforce demand for different organizations at a company. The ability to train a machine learning model to intelligently forecast on hierarchy data is essential. ++A hierarchical time series is a structure in which the unique series are arranged into a hierarchy based on dimensions such as geography or product type. The following example shows data with unique attributes that form a hierarchy. Our hierarchy is defined by the product type (such as headphones or tablets), the product category, which splits product types into accessories and devices, and the region the products are sold in. ++ + +To further visualize this, the leaf levels of the hierarchy contain all the time series with unique combinations of attribute values. Each higher level in the hierarchy considers one less dimension for defining the time series and aggregates each set of child nodes from the lower level into a parent node. + + ++The hierarchical time series solution is built on top of the many models solution and shares a similar configuration setup. ++The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs. See the [Hierarchical time series - Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb) for an end-to-end example. ++```python ++from azureml.train.automl.runtime._hts.hts_parameters import HTSTrainParameters ++model_explainability = True ++engineered_explanations = False ++# Define your hierarchy. Adjust the settings below based on your dataset. +hierarchy = ["state", "store_id", "product_category", "SKU"] +training_level = "SKU" ++# Set your forecast parameters. Adjust the settings below based on your dataset. +time_column_name = "date" +label_column_name = "quantity" +forecast_horizon = 7 +++automl_settings = {"task" : "forecasting", + "primary_metric" : "normalized_root_mean_squared_error", + "label_column_name": label_column_name, + "time_column_name": time_column_name, + "forecast_horizon": forecast_horizon, + "hierarchy_column_names": hierarchy, + "hierarchy_training_level": training_level, + "track_child_runs": False, + "pipeline_fetch_max_batch_size": 15, + "model_explainability": model_explainability, + # The following settings are specific to this sample and should be adjusted according to your own needs.
+ "iteration_timeout_minutes" : 10, + "iterations" : 10, + "n_cross_validations" : "auto", # Could be customized as an integer + "cv_step_size" : "auto", # Could be customized as an integer + } ++hts_parameters = HTSTrainParameters( + automl_settings=automl_settings, + hierarchy_column_names=hierarchy, + training_level=training_level, + enable_engineered_explanations=engineered_explanations +) +``` ++## Example notebooks ++See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml) for detailed code examples of advanced forecasting configuration including: ++* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) +* [rolling-origin cross validation](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb) +* [configurable lags](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) +* [rolling window aggregate features](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb) +++## Next steps ++* Learn more about [How to deploy an AutoML model to an online endpoint](../how-to-deploy-automl-endpoint.md). +* Learn about [Interpretability: model explanations in automated machine learning (preview)](how-to-machine-learning-interpretability-automl.md). +* Learn about [how AutoML builds forecasting models](../concept-automl-forecasting-methods.md). + |
machine-learning | How To Secure Training Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-training-vnet.md | For more information on using Azure Databricks in a virtual network, see [Deploy The following configurations are in addition to those listed in the [Prerequisites](#prerequisites) section, and are specific to **creating** a compute instances/clusters configured for no public IP: -+ Your workspace must use a private endpoint to connect to the VNet. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md). ++ You must use a workspace private endpoint for the compute resource to communicate with Azure Machine Learning services from the VNet. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md). + In your VNet, allow **outbound** traffic to the following service tags or fully qualified domain names (FQDN): |
mysql | Concepts Networking Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md | Here are some concepts to be familiar with when using virtual networks with MySQ - If you use Azure API, an Azure Resource Manager template (ARM template), or Terraform, create private DNS zones that end with `mysql.database.azure.com` and use them while configuring flexible servers with private access. For more information, see the [private DNS zone overview](../../dns/private-dns-overview.md). > [!IMPORTANT] - > Private DNS zone names must end with `mysql.database.azure.com`. If you are connecting to an Azure Database for MySQL flexible sever with SSL and you're using an option to perform full verification (sslmode=VERTIFY_IDENTITY) with certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string. + > Private DNS zone names must end with `mysql.database.azure.com`. If you are connecting to an Azure Database for MySQL flexible server with SSL and you're using an option to perform full verification (sslmode=VERIFY_IDENTITY) with the certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string. Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md). |
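To make the full-verification guidance concrete, here's a hedged Python sketch using mysql-connector-python; the server, user, database, and CA file path are placeholders, and the host name must be the `<servername>.mysql.database.azure.com` name that matches the certificate subject.

```python
import mysql.connector

# Connect with full certificate verification; the host must be the
# <servername>.mysql.database.azure.com name matching the certificate subject.
conn = mysql.connector.connect(
    host="myserver.mysql.database.azure.com",        # placeholder server name
    user="mysqladmin",                               # placeholder user
    password="<password>",
    database="mydb",                                 # placeholder database
    ssl_ca="/path/to/DigiCertGlobalRootCA.crt.pem",  # placeholder CA bundle path
    ssl_verify_identity=True,                        # full verification, like VERIFY_IDENTITY
)
conn.close()
```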
mysql | How To Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-azure-ad.md | mysql -h mydb.mysql.database.azure.com \ - Launch MySQL Workbench and select the Database option, then select **Connect to database**. - In the hostname field, enter the MySQL FQDN, for example, mysql.database.azure.com.-- In the username field, enter the MySQL Azure Active Directory administrator name and append this with the MySQL server name, not the FQDN for example, user@tenant.onmicrosoft.com.+- In the username field, enter the MySQL Azure Active Directory administrator name. For example, user@tenant.onmicrosoft.com. - In the password field, select **Store in Vault** and paste in the access token from the file, for example, C:\temp\MySQLAccessToken.txt. - Select the advanced tab and ensure that you check **Enable Cleartext Authentication Plugin**. - Select OK to connect to the database. |
networking | Networking Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md | For more information about different types of VPN connections, see [What is VPN Azure Virtual WAN is a networking service that provides optimized and automated branch connectivity to, and through, Azure. Azure regions serve as hubs that you can choose to connect your branches to. You can leverage the Azure backbone to also connect branches for branch-to-VNet connectivity. Azure Virtual WAN brings together many Azure cloud connectivity services such as site-to-site VPN, ExpressRoute, and point-to-site user VPN into a single operational interface. Connectivity to Azure VNets is established by using virtual network connections. For more information, see [What is Azure Virtual WAN?](../../virtual-wan/virtual-wan-about.md). ### <a name="dns"></a>Azure DNS Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services. For more information, see [What is Azure DNS?](../../dns/dns-overview.md). |
partner-solutions | Dynatrace Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md | Use the Azure portal to find the Azure Native Dynatrace Service application. 1. When creating the Dynatrace resource, you can set up automatic log forwarding for three types of logs: - - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription. + - **Send subscription activity logs** - Subscription activity logs provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription. - - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type. - - - **Azure Active Directory logs** – The global administrator or security administrator for your Azure Active Directory (Azure AD) tenant can enable Azure AD logs so that you can route the audit, sign-in, and provisioning logs to Dynatrace. The details are listed in [Azure AD activity logs in Azure Monitor](../../active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md). + - **Send Azure resource logs for all defined sources** - Azure resource logs provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type. ++ - **Send Azure Active Directory logs** – Azure Active Directory logs allow you to route the audit, sign-in, and provisioning logs to Dynatrace. The details are listed in [Azure AD activity logs in Azure Monitor](/azure/active-directory/reports-monitoring/concept-activity-logs-azure-monitor). The global administrator or security administrator for your Azure Active Directory (AAD) tenant can enable AAD logs. 1. To send subscription-level logs to Dynatrace, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription-level logs are sent to Dynatrace. |
partner-solutions | Dynatrace How To Configure Prereqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-configure-prereqs.md | Title: Configure pre-deployment to use Azure Native Dynatrace Service description: This article describes how to complete the prerequisites for Dynatrace on the Azure portal. Previously updated : 02/04/2023 Last updated : 02/02/2023 |
partner-solutions | Dynatrace How To Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md | Title: Manage your Azure Native Dynatrace Service integration description: This article describes how to manage Dynatrace on the Azure portal. Previously updated : 02/04/2023 Last updated : 02/02/2023 |
partner-solutions | Dynatrace Link To Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md | Title: Linking to an existing Azure Native Dynatrace Service resource description: This article describes how to use the Azure portal to link to an instance of Dynatrace. Previously updated : 02/04/2023 Last updated : 02/02/2023 |
partner-solutions | Dynatrace Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md | Title: Azure Native Dynatrace Service overview description: Learn about using the Dynatrace Cloud-Native Observability Platform in the Azure Marketplace. Previously updated : 02/04/2023 Last updated : 02/02/2023 |
partner-solutions | Dynatrace Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md | Title: Troubleshooting Azure Native Dynatrace Service description: This article provides information about troubleshooting Dynatrace for Azure Previously updated : 02/04/2023 Last updated : 02/02/2023 |
postgresql | Concepts Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md | An example that uses an FQDN as a host name is `hostname = servername.postgres.d Azure Database for PostgreSQL - Flexible Server enforces connecting your client applications to the PostgreSQL service by using Transport Layer Security (TLS). TLS is an industry-standard protocol that ensures encrypted network connections between your database server and client applications. TLS is an updated protocol of Secure Sockets Layer (SSL). -Azure Database for PostgreSQL supports TLS 1.2 and later. In [RFC 8996](https://datatracker.ietf.org/doc/rfc8996/), the Internet Engineering Task Force (IETF) explicitly states that TLS 1.0 and TLS 1.1 must not be used. Both protocols were deprecated by the end of 2019. +There are several government entities worldwide that maintain guidelines for TLS with regard to network security, including the Department of Health and Human Services (HHS) and the National Institute of Standards and Technology (NIST) in the United States. The level of security that TLS provides is most affected by the TLS protocol version and the supported cipher suites. A cipher suite is a set of algorithms, including a cipher, a key-exchange algorithm, and a hashing algorithm, which are used together to establish a secure TLS connection. Most TLS clients and servers support multiple alternatives, so they have to negotiate when establishing a secure connection to select a common TLS version and cipher suite. ++Azure Database for PostgreSQL supports TLS version 1.2 and later. In [RFC 8996](https://datatracker.ietf.org/doc/rfc8996/), the Internet Engineering Task Force (IETF) explicitly states that TLS 1.0 and TLS 1.1 must not be used. Both protocols were deprecated by the end of 2019. All incoming connections that use earlier versions of the TLS protocol, such as TLS 1.0 and TLS 1.1, will be denied by default. All incoming connections that use earlier versions of the TLS protocol, such as > SSL and TLS certificates certify that your connection is secured with state-of-the-art encryption protocols. By encrypting your connection on the wire, you prevent unauthorized access to your data while in transit. This is why we strongly recommend using latest versions of TLS to encrypt your connections to Azure Database for PostgreSQL - Flexible Server. > Although it's not recommended, if needed, you have an option to disable TLS\SSL for connections to Azure Database for PostgreSQL - Flexible Server by updating the **require_secure_transport** server parameter to OFF. You can also set the TLS version by setting **ssl_min_protocol_version** and **ssl_max_protocol_version** server parameters. +[Certificate authentication](https://www.postgresql.org/docs/current/auth-cert.html) is performed using **SSL client certificates**. In this scenario, the PostgreSQL server compares the CN (common name) attribute of the presented client certificate against the requested database user. +**Azure Database for PostgreSQL - Flexible Server does not support SSL certificate-based authentication at this time.** +++ ## Next steps * Learn how to create a flexible server by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md). |
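As a hedged client-side illustration of encrypted connections, a Python client can request TLS with psycopg2 as sketched below; the server, user, and database names are placeholders, and `sslmode="require"` only refuses unencrypted transport (the server parameters above still govern which TLS versions are negotiated).

```python
import psycopg2

# Require TLS on the wire; the server's ssl_min_protocol_version /
# ssl_max_protocol_version parameters control the TLS versions actually negotiated.
conn = psycopg2.connect(
    host="myserver.postgres.database.azure.com",  # placeholder server name
    port=5432,
    user="myadmin",                                # placeholder user
    password="<password>",
    dbname="postgres",                             # placeholder database
    sslmode="require",
)
conn.close()
```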
postgresql | How To Configure Sign In Azure Ad Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md | If the service principal exists, you'll see the following output. ```output ObjectId AppId DisplayName -- -- ---0049e2e2-fcea-4bc4-af90-bdb29a9bbe98 5657e26c-cc92-45d9-bc47-9da6cfdb4ed9 Azure OSSRDBMS PostgreSQL Flexible Server +0049e2e2-fcea-4bc4-af90-bdb29a9bbe98 5657e26c-cc92-45d9-bc47-9da6cfdb4ed9 FSPG MS Graph App ``` > [!IMPORTANT] -> If you are not a **Global Administrator**, **Privileged Role Administrator**, **Tenant Creator**,**Application Owner** you can't proceed past this step. +> If you are not a **Global Administrator**, **Tenant Creator**, or **Application Owner**, you can't proceed past this step. -### Grant read access +### Create Azure Database for PostgreSQL Flexible Server service principal and grant read access -Grant Azure Database for PostgreSQL - Flexible Server Service Principal read access to a customer tenant to request Graph API tokens for Azure AD validation tasks: +If the Azure Database for PostgreSQL Flexible Server service principal doesn't exist, the following command creates it and grants it read access to your customer tenant to request Graph API tokens for Azure AD validation tasks: ```powershell New-AzureADServicePrincipal -AppId 5657e26c-cc92-45d9-bc47-9da6cfdb4ed9 You're now authenticated to your Azure Database for PostgreSQL server through Az To enable an Azure AD group to access your database, use the same mechanism you used for users, but specify the group name instead. For example: ```sql-select * from pgAzure ADauth_create_principal('Prod DB Readonly', false, false). +select * from pgaadauth_create_principal('Prod DB Readonly', false, false). ``` When group members sign in, they use their access tokens but specify the group name as the username. |
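As a hedged sketch of that sign-in flow, assuming the `Prod DB Readonly` group principal created above and placeholder server and database names, a group member can fetch an Azure AD access token and pass it as the password while using the group name as the user:

```azurecli
# Hedged sketch: sign in as a member of the 'Prod DB Readonly' group.
# The Azure AD token is used as the password; names are placeholders.
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms \
  --query accessToken --output tsv)
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user='Prod DB Readonly' sslmode=require"
```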
postgresql | How To Create Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-users.md | -# Create users in Azure Database for PostgreSQL - Flexible Server Preview +# Create users in Azure Database for PostgreSQL - Flexible Server [!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)] The server admin user account can be used to create more users and grant those u 1. Edit and run the following SQL code. Replace the placeholder value <new_user> with your new user name, and replace the placeholder password with your own strong password. ```sql- CREATE ROLE <new_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>'; + CREATE USER <new_user> CREATEDB CREATEROLE PASSWORD '<StrongPassword!>'; GRANT azure_pg_admin TO <new_user>; ``` The server admin user account can be used to create more users and grant those u 1. Edit and run the following SQL code. Replace the placeholder value `<db_user>` with your intended new user name and the placeholder value `<newdb>` with your own database name. Replace the placeholder password with your own strong password. - This sql code syntax creates a new database named testdb, for example, purposes. Then it creates a new user in the PostgreSQL service and grants connect privileges to the new database for that user. + The SQL code below creates a new database, then creates a new user in the PostgreSQL instance and grants the connect privilege on the new database to that user. ```sql CREATE DATABASE <newdb>; - CREATE ROLE <db_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB NOCREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>'; + CREATE USER <db_user> PASSWORD '<StrongPassword!>'; GRANT CONNECT ON DATABASE <newdb> TO <db_user>; ``` The server admin user account can be used to create more users and grant those u GRANT ALL PRIVILEGES ON DATABASE <newdb> TO <db_user>; ``` - If a user creates a table "role," the table belongs to that user. If another user needs access to the table, you must grant privileges to the other user on the table level. + If a user creates a table, the table belongs to that user (role). If another user needs access to the table, you must grant privileges to the other user at the table level. For example: The server admin user account can be used to create more users and grant those u 1. Sign in to your server, specifying the designated database, using the new username and password. This example shows the psql command line. With this command, you're prompted for the password for the user name. Replace the server name, database name, and user name with your own. ```shell- psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=db_user@mydemoserver --dbname=newdb + psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=db_user --dbname=newdb ``` ## Next steps |
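As a hedged illustration of such a table-level grant, assuming a table named `mytable` in `<newdb>` and an admin role `<admin_user>` (both placeholders not taken from the article):

```shell
# Hedged sketch: grant read-only access on a single table to <db_user>,
# run as the owning or admin role. Names are placeholders.
psql --host=mydemoserver.postgres.database.azure.com --port=5432 \
  --username=<admin_user> --dbname=<newdb> \
  --command="GRANT SELECT ON mytable TO <db_user>;"
```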
private-5g-core | Commission Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md | To view all the running pods, run: Additionally, your AKS cluster should now be visible from your Azure Stack Edge resource in the portal. +## Collect variables for the Kubernetes extensions ++Collect each of the values in the table below. ++| Value | Variable name | +|--|--| +|The ID of the Azure subscription in which the Azure resources are deployed. |**SUBSCRIPTION_ID**| +|The name of the resource group in which the AKS cluster is deployed. This can be found by using the **Manage** button in the **Azure Kubernetes Service** pane of the Azure portal. |**RESOURCE_GROUP_NAME**| +|The name of the AKS cluster resource. This can be found by using the **Manage** button in the **Azure Kubernetes Service** pane of the Azure portal. |**RESOURCE_NAME**| +|The region in which the Azure resources are deployed. This must match the region into which the mobile network will be deployed, which must be one of the regions supported by AP5GC: **EastUS** or **WestEurope**.</br></br>This value must be the [region's code name](region-code-names.md); see [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/) for a list of supported regions. |**LOCATION**| +|The name of the **Custom location** resource to be created for the AKS cluster. </br></br>This value must start and end with alphanumeric characters, and must contain only alphanumeric characters, `-` or `.`. |**CUSTOM_LOCATION**| + ## Install Kubernetes extensions -The Azure Private 5G Core private mobile network requires a custom location and specific Kubernetes extensions that you need to set up using the Azure CLI in Azure Cloud Shell. +The Azure Private 5G Core private mobile network requires a custom location and specific Kubernetes extensions that you need to configure using the Azure CLI in Azure Cloud Shell. -You can obtain the *\<resource name\>* (the name of the AKS cluster) by using the **Manage** link in the **Azure Kubernetes Service** pane in the Azure portal. +> [!TIP] +> The commands in this section require the `k8s-extension` and `customlocation` extensions to the Azure CLI tool to be installed. If you do not already have them, a prompt will appear to install these when you run commands that require them. See [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview) for more information on automatic extension installation. 1. Sign in to the Azure CLI using Azure Cloud Shell. |
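If you prefer to add the CLI extensions explicitly rather than relying on the automatic prompt, a minimal sketch:

```azurecli
# Hedged sketch: pre-install the CLI extensions used by the commands in this section.
az extension add --name k8s-extension
az extension add --name customlocation
```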
private-5g-core | Enable Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-azure-active-directory.md | If your deployment contains multiple sites, you can use the same two redirect UR | **Authorization URL** | In the local monitoring app registration Overview page, select **Endpoints**. Copy the contents of the **OAuth 2.0 authorization endpoint (v2)** field. | `auth_url` | | **Token URL** | In the local monitoring app registration Overview page, select **Endpoints**. Copy the contents of the **OAuth 2.0 token endpoint (v2)** field. | `token_url` | | **Client secret** | You collected this when creating the client secret in the previous step. | `client_secret` |- | **Distributed tracing redirect URI root** | Make a note of the following part of the redirect URI: **https://*\<local monitoring domain\>*/**. | `redirect_uri_root` | + | **Distributed tracing redirect URI root** | Make a note of the following part of the redirect URI: **https://*\<local monitoring domain\>***. | `redirect_uri_root` | | **Packet core dashboards redirect URI root** | Make a note of the following part of the packet core dashboards redirect URI: **https://*\<local monitoring domain\>*/grafana**. | `root_url` | ## Create Kubernetes Secret Objects |
private-link | Create Private Link Service Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-cli.md | Create a load balancer rule with [az network lb rule create](/cli/azure/network/ --enable-tcp-reset true ``` +## Disable network policy ++Before a private link service can be created in the virtual network, the setting `privateLinkServiceNetworkPolicies` must be disabled. ++* Disable the network policy with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update). ++```azurecli-interactive +az network vnet subnet update \ + --name mySubnet \ + --vnet-name MyVnet \ + --resource-group CreatePrivLinkService-rg \ + --disable-private-link-service-network-policies yes +``` + ## Create a private link service In this section, create a private link service that uses the Azure Load Balancer created in the previous step. az network private-link-service create \ Your private link service is created and can receive traffic. If you want to see traffic flows, configure your application behind your standard load balancer. - ## Create private endpoint In this section, you'll map the private link service to a private endpoint. A virtual network contains the private endpoint for the private link service. This virtual network contains the resources that will access your private link service. When no longer needed, use the [az group delete](/cli/azure/group#az-group-delet In this quickstart, you: * Created a virtual network and internal Azure Load Balancer.+ * Created a private link service To learn more about Azure Private endpoint, continue to: |
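To confirm the policy change took effect before creating the private link service, one option (a sketch reusing the placeholder names from the commands above) is to query the subnet:

```azurecli
# Hedged sketch: verify that private link service network policies are
# disabled on the subnet; the expected output is "Disabled".
az network vnet subnet show \
  --name mySubnet \
  --vnet-name MyVnet \
  --resource-group CreatePrivLinkService-rg \
  --query privateLinkServiceNetworkPolicies
```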
private-link | Disable Private Link Service Network Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-link-service-network-policy.md | $vnet | Set-AzVirtualNetwork This section describes how to disable subnet private link service network policies using Azure CLI. ```azurecli-az network vnet subnet update \ - --name default \ - --resource-group myResourceGroup \ - --vnet-name myVNet \ - --disable-private-link-service-network-policies true +az network vnet subnet update \ + --name default \ + --vnet-name MyVnet \ + --resource-group myResourceGroup \ + --disable-private-link-service-network-policies yes ``` # [**JSON**](#tab/private-link-network-policy-json) |
reliability | Availability Zones Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md | -# Availability zone migration guidance overview +# Availability zone migration guidance overview for Microsoft Azure products and services Azure services that support availability zones, including zonal and zone-redundant offerings, are continually expanding. For that reason, resources that don't currently have availability zone support may have an opportunity to gain that support. The Migration Guides section offers a collection of guides for each service that requires certain procedures to move a resource from non-availability zone support to availability zone support. You'll find information on prerequisites for migration, download requirements, important migration considerations, and recommendations. |
reliability | Reliability Guidance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md | + + Title: Reliability guidance overview for Microsoft Azure products and services +description: Reliability guidance overview for Microsoft Azure products and services +++ Last updated : 02/03/2023+++++# Reliability guidance overview ++Azure reliability guidance is a collection of service-specific reliability guides. Each guide can cover both intra-regional resiliency with [availability zones](availability-zones-overview.md) and information on [cross-region resiliency with disaster recovery](cross-region-replication-azure.md). For a more detailed overview of reliability principles in Azure, see [Reliability in Microsoft Azure Well-Architected Framework](/azure/architecture/framework/resiliency/). ++## Azure services reliability guides +++###  Foundational services ++| **Products** | +| | +| [Azure Cosmos DB](../cosmos-db/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Event Hubs](../event-hubs/event-hubs-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zones)| +[Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Load Balancer](../load-balancer/load-balancer-standard-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Public IP](../virtual-network/ip-services/public-ip-addresses.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zone)| +[Azure Service Bus](../service-bus-messaging/service-bus-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zones)| +[Azure Service Fabric](../service-fabric/service-fabric-cross-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Site Recovery](../site-recovery/site-recovery-overview.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure SQL](/azure/azure-sql/database/high-availability-sla?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Storage: Blob Storage](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Virtual Machine Scale Sets](../virtual-machines/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Virtual Machines](../virtual-machines/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| ++###  Mainstream services ++| **Products** | +| | +| [Azure API Management](../api-management/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure App 
Configuration](../azure-app-configuration/faq.yml?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-does-app-configuration-ensure-high-data-availability)| +[Azure App Service](/azure/architecture/framework/services/compute/azure-app-service/reliability?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#why-use-app-service)| +[Azure App Service- App Service Environment](/azure/architecture/reference-architectures/enterprise-integration/ase-high-availability-deployment?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Application Gateway (V2)](../application-gateway/application-gateway-autoscaling-zone-redundant.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Batch](../batch/create-pool-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Bot Service](reliability-bot.md)| +[Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Cognitive Search](../search/search-performance-optimization.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Communications Gateway](../communications-gateway/reliability-communications-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Container Instances](reliability-containers.md)| +[Azure Container Registry](../container-registry/zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Data Factory](../data-factory/concepts-data-redundancy.md?bc=%2fazure%2freliability%2fbreadcrumb%2ftoc.json&toc=%2fazure%2freliability%2ftoc.json)| +[Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Database for PostgreSQL - Flexible Server](../postgresql/single-server/concepts-high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure DDoS Protection](../ddos-protection/ddos-faq.yml?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Disk Encryption](../virtual-machines/disks-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure DNS - Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure DNS - Azure DNS Private Resolver](../dns/dns-private-resolver-get-started-portal.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Energy Data Services](reliability-energy-data-services.md )| +[Azure Event Grid](../event-grid/availability-zones-disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Firewall](../firewall/deploy-availability-zone-powershell.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Firewall Manager](../firewall-manager/quick-firewall-policy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Functions](reliability-functions.md)| +[Azure 
HDInsight](../hdinsight/hdinsight-use-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure IoT Hub](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Kubernetes Service (AKS)](../aks/availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Logic Apps](../logic-apps/set-up-zone-redundancy-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Monitor](../azure-monitor/logs/availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Network Watcher](../network-watcher/frequently-asked-questions.yml?bc=%2fazure%2freliability%2fbreadcrumb%2ftoc.json&toc=%2fazure%2freliability%2ftoc.json#service-availability-and-redundancy)| +[Azure Notification Hubs](../notification-hubs/availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Private 5G Core](../private-5g-core/reliability-private-5g-core.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Private Link](../private-link/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Route Server](../route-server/route-server-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +[Azure Virtual WAN](../virtual-wan/virtual-wan-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-are-availability-zones-and-resiliency-handled-in-virtual-wan)| +[Azure Web Application Firewall](../firewall/deploy-availability-zone-powershell.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| ++## Next steps +++> [!div class="nextstepaction"] +> [Azure services and regions with availability zones](availability-zones-service-support.md) ++> [!div class="nextstepaction"] +> [Availability of service by category](availability-service-by-category.md) ++> [!div class="nextstepaction"] +> [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/) ++> [!div class="nextstepaction"] +> [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability) |
service-bus-messaging | Service Bus Sas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-sas.md | Shared Access Signatures are a claims-based authorization mechanism using simple SAS authentication in Service Bus is configured with named [Shared Access Authorization Policies](#shared-access-authorization-policies) having associated access rights, and a pair of primary and secondary cryptographic keys. The keys are 256-bit values in Base64 representation. You can configure rules at the namespace level, on Service Bus [queues](service-bus-messaging-overview.md#queues) and [topics](service-bus-messaging-overview.md#topics). +> [!NOTE] +> These keys are plain text strings using a Base64 representation, and must not be decoded before they are used. + The Shared Access Signature token contains the name of the chosen authorization policy, the URI of the resource that shall be accessed, an expiry instant, and an HMAC-SHA256 cryptographic signature computed over these fields using either the primary or the secondary cryptographic key of the chosen authorization rule. ## Shared Access Authorization Policies |
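As a minimal sketch of how those fields combine into a token, assuming a placeholder namespace, entity, policy name, and key (and using `python3` only for URL encoding), the signature is an HMAC-SHA256 over the URL-encoded resource URI and the expiry, keyed with the Base64 key text used as-is:

```shell
# Hedged sketch: generate a Service Bus SAS token. All angle-bracket values
# are placeholders. Note the key is the plain Base64 text, not decoded.
RESOURCE_URI="https://<namespace>.servicebus.windows.net/<queue>"
POLICY_NAME="<shared-access-policy-name>"
KEY="<primary-or-secondary-key>"

ENCODED_URI=$(python3 -c 'import sys,urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$RESOURCE_URI")
EXPIRY=$(( $(date +%s) + 3600 ))  # token valid for one hour

# String to sign: URL-encoded resource URI, newline, expiry (seconds since epoch).
SIGNATURE=$(printf '%s\n%s' "$ENCODED_URI" "$EXPIRY" \
  | openssl dgst -sha256 -hmac "$KEY" -binary | base64)
ENCODED_SIGNATURE=$(python3 -c 'import sys,urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$SIGNATURE")

echo "SharedAccessSignature sr=$ENCODED_URI&sig=$ENCODED_SIGNATURE&se=$EXPIRY&skn=$POLICY_NAME"
```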
storage | Storage Blob Block Blob Premium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md | Premium block blob storage accounts are ideal for workloads that require fast an ## Cost effectiveness -Premium block blob storage accounts have a higher storage cost but a lower transaction cost as compared to standard general-purpose v2 accounts. If your applications and workloads execute a large number of transactions, premium blob blob storage can be cost-effective, especially if the workload is write-heavy. +Premium block blob storage accounts have a higher storage cost but a lower transaction cost as compared to standard general-purpose v2 accounts. If your applications and workloads execute a large number of transactions, premium block blob storage can be cost-effective, especially if the workload is write-heavy. In most cases, workloads executing more than 35 to 40 transactions per second per terabyte (TPS/TB) are good candidates for this type of account. For example, if your workload executes 500 million read operations and 100 million write operations in a month, then you can calculate the TPS/TB as follows: |
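A hedged sketch of that calculation, assuming a 30-day month and a placeholder capacity of 5 TB:

- Total transactions = 500 million reads + 100 million writes = 600 million
- Seconds in the month = 30 × 24 × 3,600 = 2,592,000
- TPS = 600,000,000 ÷ 2,592,000 ≈ 231
- TPS/TB = 231 ÷ 5 ≈ 46

Because 46 TPS/TB exceeds the 35 to 40 TPS/TB guideline, a workload like this could be a good candidate for a premium block blob storage account.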
storage | Storage Blob Container Delete Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-java.md | The following example finds a deleted container, gets the version of that delete :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerDelete.java" id="Snippet_RestoreContainer"::: -## See also +## Resources ++To learn more about deleting a container using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for deleting or restoring a container use the following REST API operations: -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerDelete.java)-- [Quickstart: Azure Blob Storage client library for Java](storage-quickstart-blobs-java.md) - [Delete Container](/rest/api/storageservices/delete-container) (REST API)+- [Restore Container](/rest/api/storageservices/restore-container) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerDelete.java) +++### See also + - [Soft delete for containers](soft-delete-container-overview.md) - [Enable and manage soft delete for containers](soft-delete-container-enable.md) |
storage | Storage Blob Container Lease Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md | When a lease expires, the lease ID is maintained by the Blob service until the c If a lease expires rather than being explicitly released, a client may need to wait up to one minute before a new lease can be acquired for the container. However, the client can renew the lease with the expired lease ID immediately. +## Resources ++To learn more about leasing a container using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for leasing a container use the following REST API operation: ++- [Lease Container](/rest/api/storageservices/lease-container) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerLease.java) ++ ## See also -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerLease.java)-- [Lease Container](/rest/api/storageservices/lease-container)-- [Lease Blob](/rest/api/storageservices/lease-blob) - [Managing Concurrency in Blob storage](concurrency-manage.md) |
storage | Storage Blob Container Properties Metadata Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md | The following example reads in metadata values: :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerPropertiesMetadata.java" id="Snippet_ReadContainerMetadata"::: -## See also +## Resources ++To learn more about setting and retrieving container properties and metadata using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for setting and retrieving properties and metadata use the following REST API operations: -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerPropertiesMetadata.java)-- [Quickstart: Azure Blob Storage client library for Java](storage-quickstart-blobs-java.md) - [Get Container Properties](/rest/api/storageservices/get-container-properties) (REST API) - [Set Container Metadata](/rest/api/storageservices/set-container-metadata) (REST API)-- [Get Container Metadata](/rest/api/storageservices/get-container-metadata) (REST API)+- [Get Container Metadata](/rest/api/storageservices/get-container-metadata) (REST API) ++The `getProperties` method retrieves container properties and metadata by calling both the [Get Container Properties](/rest/api/storageservices/get-container-properties) operation and the [Get Container Metadata](/rest/api/storageservices/get-container-metadata) operation. ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerPropertiesMetadata.java) + |
storage | Storage Blob Containers List Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md | You can also return a smaller set of results, by specifying the size of the page :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerList.java" id="Snippet_ListContainersWithPaging"::: -## See also +## Resources ++To learn more about listing containers using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for listing containers use the following REST API operation: -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerList.java)-- [Quickstart: Azure Blob Storage client library for Java](storage-quickstart-blobs-java.md) - [List Containers](/rest/api/storageservices/list-containers2) (REST API)++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerList.java) +++## See also + - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources) |
storage | Storage Blob Copy Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-java.md | To copy a blob, use the following method: - [copyFromUrl](/java/api/com.azure.storage.blob.specialized.blobclientbase) -This method synchronously copies the data at the source URL to a blob and waits for the copy to complete before returning a response. The source must be a block blob no larger than 256 MB. The source URL must include a SAS token that provides permissions to read the source blob. To learn more about the underlying operation, see [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url). +This method synchronously copies the data at the source URL to a blob and waits for the copy to complete before returning a response. The source must be a block blob no larger than 256 MB. The source URL must include a SAS token that provides permissions to read the source blob. To learn more about the underlying operation, see [REST API operations](#rest-api-operations). The following code example gets a `BlobClient` object representing an existing blob and copies it to a new blob in a different container. This example also gets a lease on the source blob before copying so that no other client can modify the blob until the copy is complete and the lease is broken. You can also copy a blob using the following method: - [beginCopy](/java/api/com.azure.storage.blob.specialized.blobclientbase) -This method triggers a long-running, asynchronous operation. The source may be another blob or an Azure File resource. If the source is in another storage account, the source must either be public or authorized with a SAS token. To learn more about the underlying operation, see [Copy Blob](/rest/api/storageservices/copy-blob). +This method triggers a long-running, asynchronous operation. The source may be another blob or an Azure File resource. If the source is in another storage account, the source must either be public or authorized with a SAS token. To learn more about the underlying operation, see [REST API operations](#rest-api-operations). :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java" id="Snippet_CopyBlobBeginCopy"::: The following example stops a pending copy and leaves a destination blob with ze :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java" id="Snippet_AbortCopy"::: -## See also +## Resources ++To learn more about copying blobs using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. 
The client library methods for copying blobs use the following REST API operations: -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java) - [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)-- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)+- [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) (REST API) +- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java) + |
storage | Storage Blob Delete Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md | This method restores the content and metadata of a soft-deleted blob and any ass :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDelete.java" id="Snippet_RestoreBlobVersion"::: -## See also +## Resources ++To learn more about how to delete blobs and restore deleted blobs using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for deleting blobs and restoring deleted blobs use the following REST API operations: -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDelete.java)-- [Quickstart: Azure Blob Storage client library for Java](storage-quickstart-blobs-java.md) - [Delete Blob](/rest/api/storageservices/delete-blob) (REST API) - [Undelete Blob](/rest/api/storageservices/undelete-blob) (REST API)-- [Soft delete for blobs](soft-delete-blob-overview.md)++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDelete.java) +++### See also ++- [Soft delete for blobs](soft-delete-blob-overview.md) +- [Blob versioning](versioning-overview.md) |
storage | Storage Blob Download Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md | The following example downloads a blob by opening a `BlobInputStream` and readin :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDownload.java" id="Snippet_ReadBlobStream"::: -## See also +## Resources -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDownload.java)-- [Quickstart: Azure Blob Storage client library for Java](storage-quickstart-blobs-java.md)-- [Get Blob](/rest/api/storageservices/get-blob) (REST API)+To learn more about how to download blobs using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for downloading blobs use the following REST API operation: ++- [Get Blob](/rest/api/storageservices/get-blob) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDownload.java) + |
storage | Storage Blob Lease Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md | If a lease expires rather than being explicitly released, a client may need to w A lease can't be granted for a blob snapshot, since snapshots are read-only. Requesting a lease against a snapshot results in status code `400 (Bad Request)`. -## See also +## Resources ++To learn more about managing blob leases using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for managing blob leases use the following REST API operation: -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobLease.java)-- [Lease Container](/rest/api/storageservices/lease-container) - [Lease Blob](/rest/api/storageservices/lease-blob)++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobLease.java) +++### See also + - [Managing Concurrency in Blob storage](concurrency-manage.md) |
storage | Storage Blob Properties Metadata Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md | The following code example reads metadata on a blob and prints each key/value pa :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobPropertiesMetadataTags.java" id="Snippet_ReadBlobMetadata"::: -## See also +## Resources ++To learn more about how to manage system properties and user-defined metadata using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for managing system properties and user-defined metadata use the following REST API operations: -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobPropertiesMetadataTags.java) - [Set Blob Properties](/rest/api/storageservices/set-blob-properties) (REST API) - [Get Blob Properties](/rest/api/storageservices/get-blob-properties) (REST API) - [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) (REST API)-- [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) (REST API)+- [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobPropertiesMetadataTags.java) + |
storage | Storage Blob Tags Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md | The following example finds all blobs tagged as an image: :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobPropertiesMetadataTags.java" id="Snippet_FindBlobsByTag"::: -## See also +## Resources ++To learn more about how to use index tags to manage and find data using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for managing and using blob index tags use the following REST API operations: -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobPropertiesMetadataTags.java)-- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) - [Get Blob Tags](/rest/api/storageservices/get-blob-tags) (REST API) - [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API)-- [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) (REST API)+- [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobPropertiesMetadataTags.java) +++### See also ++- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) +- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md) |
storage | Storage Blob Upload Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md | The following example uploads a block blob with index tags set using `BlobUpload :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobUpload.java" id="Snippet_UploadBlobTags"::: -## See also +## Resources ++To learn more about uploading blobs using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for uploading blobs use the following REST API operations: -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobUpload.java)-- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)-- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md) - [Put Blob](/rest/api/storageservices/put-blob) (REST API)-- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)+- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobUpload.java) +++### See also ++- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) +- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md) |
storage | Storage Blobs List Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-java.md | Blob name: folderA/folderB/file3.txt > [!NOTE] > Blob snapshots cannot be listed in a hierarchical listing operation. -## Next steps +## Resources ++To learn more about how to list blobs using the Azure Blob Storage client library for Java, see the following resources. ++### REST API operations ++The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for listing blobs use the following REST API operation: -- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobList.java) - [List Blobs](/rest/api/storageservices/list-blobs) (REST API)++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobList.java) +++### See also + - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources) - [Blob versioning](versioning-overview.md) |
storage | File Sync Deployment Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md | description: Learn how to deploy Azure File Sync from start to finish using the Previously updated : 06/03/2022 Last updated : 02/03/2023 We strongly recommend that you read [Planning for an Azure Files deployment](../ $PSVersionTable.PSVersion ``` - If your **PSVersion** value is less than 5.1.\*, as will be the case with most fresh installations of Windows Server 2012 R2, you can easily upgrade by downloading and installing [Windows Management Framework (WMF) 5.1](https://www.microsoft.com/download/details.aspx?id=54616). The appropriate package to download and install for Windows Server 2012 R2 is **Win8.1AndW2K12R2-KB\*\*\*\*\*\*\*-x64.msu**. + If your **PSVersion** value is less than 5.1.\*, as will be the case with most fresh installations of Windows Server 2012 R2, you'll need to upgrade by downloading and installing [Windows Management Framework (WMF) 5.1](https://www.microsoft.com/download/details.aspx?id=54616). The appropriate package to download and install for Windows Server 2012 R2 is **Win8.1AndW2K12R2-KB\*\*\*\*\*\*\*-x64.msu**. PowerShell 6+ can be used with any supported system, and can be downloaded via its [GitHub page](https://github.com/PowerShell/PowerShell#get-powershell). - > [!IMPORTANT] - > If you plan to use the Server Registration UI, rather than registering directly from PowerShell, you must use PowerShell 5.1. - 6. If you have opted to use PowerShell 5.1, ensure that at least .NET 4.7.2 is installed. Learn more about [.NET Framework versions and dependencies](/dotnet/framework/migration-guide/versions-and-dependencies) on your system. > [!IMPORTANT] |
storage | File Sync Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md | description: Plan for a deployment with Azure File Sync, a service that allows y Previously updated : 06/01/2022 Last updated : 02/03/2023 Azure File Sync is supported with the following versions of Windows Server: | Windows Server 2022 | Azure, Datacenter, Standard, and IoT | Full and Core | | Windows Server 2019 | Datacenter, Standard, and IoT | Full and Core | | Windows Server 2016 | Datacenter, Standard, and Storage Server | Full and Core |-| Windows Server 2012 R2 | Datacenter, Standard, and Storage Server | Full and Core | +| Windows Server 2012 R2* | Datacenter, Standard, and Storage Server | Full and Core | ++*Requires downloading and installing [Windows Management Framework (WMF) 5.1](https://www.microsoft.com/download/details.aspx?id=54616). The appropriate package to download and install for Windows Server 2012 R2 is **Win8.1AndW2K12R2-KB\*\*\*\*\*\*\*-x64.msu**. Future versions of Windows Server will be added as they are released. |
synapse-analytics | Synapse Workspace Synapse Rbac Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md | The following table describes the built-in roles and the scopes at which they ca |Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace |Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without additional permissions.|Workspace |Synapse Compute Operator |Submit Spark jobs and notebooks and view logs.  Includes canceling Spark jobs submitted by any user. Requires additional use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime|-|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for pipeline runs and completed notebooks. Includes ability to list and view details of serverless SQL pools, Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace | +|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for pipeline runs and completed notebooks. Includes ability to list and view details of Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace | |Synapse Credential User|Runtime and configuration-time use of secrets within credentials and linked services in activities like pipeline runs. To run pipelines, this role is required, scoped to the workspace system identity. </br></br>_Scoped to a credential, permits access to data via a linked service that is protected by the credential (may also require compute use permission) </br>Allows execution of pipelines protected by the workspace system identity credential_|Workspace </br>Linked Service</br>Credential |Synapse Linked Data Manager|Creation and management of managed private endpoints, linked services, and credentials. Can create managed private endpoints that use linked services protected by credentials|Workspace| |Synapse User|List and view details of SQL pools, Apache Spark pools, Integration runtimes, and published linked services and credentials. Doesn't include other published code artifacts.  Can create new artifacts but can't run or publish without additional permissions. </br></br>_Can list and read Spark pools, Integration runtimes._|Workspace, Spark pool</br>Linked service </br>Credential| |
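As a hedged sketch of assigning one of these built-in roles at workspace scope, assuming a recent Azure CLI and placeholder workspace and user names:

```azurecli
# Hedged sketch: grant a user the Synapse Monitoring Operator role on a workspace.
az synapse role assignment create \
  --workspace-name myworkspace \
  --role "Synapse Monitoring Operator" \
  --assignee user@contoso.com
```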
virtual-desktop | Autoscale Scaling Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md | Title: Create an autoscale scaling plan for Azure Virtual Desktop description: How to create an autoscale scaling plan to optimize deployment costs. Previously updated : 01/28/2023 Last updated : 02/03/2023 To learn more about autoscale, see [Autoscale scaling plans and example scenario > - Azure Virtual Desktop (classic) doesn't support autoscale. > - Autoscale doesn't support Azure Virtual Desktop for Azure Stack HCI. > - Autoscale doesn't support scaling of ephemeral disks.-> - Autoscale doesn't support scaling of generalized VMs. +> - Autoscale doesn't support scaling of generalized or sysprepped VMs with machine-specific information removed. For more information, see [Remove machine-specific information by generalizing a VM before creating an image](../virtual-machines/generalize.md). > - You can't use autoscale and [scale session hosts using Azure Automation and Azure Logic Apps](scaling-automation-logic-apps.md) on the same host pool. You must use one or the other. > - Autoscale is available in Azure and Azure Government. |
virtual-desktop | Autoscale Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scenarios.md | Title: Autoscale scaling plans and example scenarios in Azure Virtual Desktop description: Information about autoscale and a collection of four example scenarios that illustrate how various parts of autoscale for Azure Virtual Desktop work. Previously updated : 08/15/2022 Last updated : 02/03/2023 Autoscale lets you scale your session host virtual machines (VMs) in a host pool > - Azure Virtual Desktop (classic) doesn't support autoscale. > - Autoscale doesn't support Azure Virtual Desktop for Azure Stack HCI. > - Autoscale doesn't support scaling of ephemeral disks.-> - Autoscale doesn't support scaling of generalized VMs. +> - Autoscale doesn't support scaling of generalized or sysprepped VMs with machine-specific information removed. For more information, see [Remove machine-specific information by generalizing a VM before creating an image](../virtual-machines/generalize.md). > - You can't use autoscale and [scale session hosts using Azure Automation](set-up-scaling-script.md) on the same host pool. You must use one or the other. > - Autoscale is available in Azure and Azure Government. |
virtual-desktop | Configure Host Pool Personal Desktop Assignment Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-host-pool-personal-desktop-assignment-type.md | To directly assign a user to a session host in the Azure portal: ## How to unassign a personal desktop -To unassign a personal desktop, run the following PowerShell cmdlet: --```powershell -Update-AzWvdSessionHost -HostPoolName <hostpoolname> -Name <sessionhostname> -ResourceGroupName <resourcegroupname> -AssignedUser "" -Force -``` -->[!IMPORTANT] -> - Azure Virtual Desktop will not delete any VHD or profile data for unassigned personal desktops. -> - You must include the _-Force_ parameter when running the PowerShell cmdlet to unassign a personal desktop. If you don't include the _-Force_ parameter, you'll receive an error message. -> - There must be no existing user sessions on the session host when you unassign the user from the personal desktop. If there's an existing user session on the session host while you're unassigning it, you won't be able to unassign the personal desktop successfully. -> - If the session host has no user assignment, nothing will happen when you run this cmdlet. - To unassign a personal desktop in the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Enter **Azure Virtual Desktop** into the search bar. To unassign a personal desktop in the Azure portal: ## How to reassign a personal desktop -To reassign a personal desktop, run the following PowerShell cmdlet: --```powershell -Update-AzWvdSessionHost -HostPoolName <hostpoolname> -Name <sessionhostname> -ResourceGroupName <resourcegroupname> -AssignedUser <userupn> -Force -``` -->[!IMPORTANT] -> - Azure Virtual Desktop will not delete any VHD or profile data for reassigned personal desktops. -> - You must include the _-Force_ parameter when running the PowerShell cmdlet to reassign a personal desktop. If you don't include the _-Force_ parameter, you'll receive an error message. -> - There must be no existing user sessions on the session host when you reassign a personal desktop. If there's an existing user session on the session host while you're reassigning it, you won't be able to reassign the personal desktop successfully. -> - If the user principal name (UPN) you enter for the _-AssignedUser_ parameter is the same as the UPN currently assigned to the personal desktop, the cmdlet won't do anything. -> - If the session host currently has no user assignment, the personal desktop will be assigned to the provided UPN. - To reassign a personal desktop in the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Enter **Azure Virtual Desktop** into the search bar. |
virtual-desktop | Environment Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/environment-setup.md | Title: Azure Virtual Desktop terminology - Azure description: Learn about the basic elements of Azure Virtual Desktop, like host pools, app groups, and workspaces. Previously updated : 11/12/2022 Last updated : 02/03/2023 An app group can be one of two types: - RemoteApp, where users access the RemoteApps you individually select and publish to the app group. Available with pooled host pools only. - Desktop, where users access the full desktop. Available with pooled or personal host pools. -Pooled host pools have a preferred app group type that dictates whether users see RemoteApp or Desktop apps in their feed if both resources have been published to the same user. By default, Azure Virtual Desktop automatically creates a Desktop app group with the friendly name **Default Desktop** whenever you create a host pool and sets the host pool's preferred app group type to **Desktop**. You can remove the Desktop app group at any time. If you want your users to only see RemoteApps in their feed, you should set the **Application group type** value to **RemoteApp**. You can't create another Desktop app group in a host pool while a Desktop app group exists. +Pooled host pools have a preferred app group type that dictates whether users see RemoteApp or Desktop apps in their feed if both resources have been published to the same user. By default, Azure Virtual Desktop automatically creates a Desktop app group with the friendly name **Default Desktop** whenever you create a host pool and sets the host pool's preferred app group type to **Desktop**. You can remove the Desktop app group at any time. If you want your users to only see RemoteApps in their feed, you should set the **preferred application group type** value to **RemoteApp**. If you want your users to only see session desktops in their feed, you should set the **preferred application group type** value to **Desktop**. You can't create another Desktop app group in a host pool while a Desktop app group exists. To publish resources to users, you must assign them to app groups. When assigning users to app groups, consider the following things: To publish resources to users, you must assign them to app groups. When assignin - Personal host pools only allow and support Desktop app groups. >[!NOTE]->If your host pool's *application group type* is set to **Undefined**, that means that you haven't set the value yet. You must finish configuring your host pool by setting its *application group type* before you start using it to prevent app incompatibility and session host overload issues. +>If your host pool's *preferred application group type* is set to **Undefined**, that means you haven't set the value yet. You must finish configuring your host pool by setting its *preferred application group type* before you start using it to prevent app incompatibility and session host overload issues. ## Workspaces |
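As a sketch of setting this value programmatically (hypothetical names, assuming the Az.DesktopVirtualization module), the host pool's preferred app group type can be updated with `Update-AzWvdHostPool`; note that the Desktop Virtualization API names the RemoteApp option `RailApplications`.

```powershell
# Hypothetical names; requires the Az.DesktopVirtualization module.
# "RailApplications" is the API value for RemoteApp; use "Desktop" for session desktops.
Update-AzWvdHostPool -ResourceGroupName "myResourceGroup" `
    -Name "myHostPool" `
    -PreferredAppGroupType "RailApplications"
```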
virtual-machine-scale-sets | Virtual Machine Scale Sets Orchestration Modes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md | The following Virtual Machine Scale Set parameters aren't currently supported wi - Port Forwarding via Standard Load Balancer NAT Pool - you can configure NAT rules to specific instances -## Troubleshoot scale sets with Flexible orchestration -Find the right solution to your troubleshooting scenario. --<!-- error --> -### InvalidParameter. The specified fault domain count 3 must fall in the range 1 to 2. --``` -InvalidParameter. The specified fault domain count 3 must fall in the range 1 to 2. -``` --**Cause:** The `platformFaultDomainCount` parameter is invalid for the region or zone selected. --**Solution:** You must select a valid `platformFaultDomainCount` value. For zonal deployments, the maximum `platformFaultDomainCount` value is 1. For regional deployments where no zone is specified, the maximum `platformFaultDomainCount` varies depending on the region. See [Manage the availability of VMs for scripts](../virtual-machines/availability.md) to determine the maximum fault domain count per region. ---<!-- error --> -### OperationNotAllowed. Deletion of Virtual Machine Scale Set isn't allowed as it contains one or more VMs. Please delete or detach the VM(s) before deleting the Virtual Machine Scale Set. --``` -OperationNotAllowed. Deletion of Virtual Machine Scale Set isn't allowed as it contains one or more VMs. Please delete or detach the VM(s) before deleting the Virtual Machine Scale Set. -``` --**Cause:** Trying to delete a scale set in Flexible orchestration mode that is associated with one or more virtual machines. --**Solution:** Delete all of the virtual machines associated with the scale set in Flexible orchestration mode, then you can delete the scale set. ---<!-- error --> -### InvalidParameter. The value 'True' of parameter 'singlePlacementGroup' is not allowed. Allowed values are: False. --``` -InvalidParameter. The value 'True' of parameter 'singlePlacementGroup' is not allowed. Allowed values are: False. -``` -**Cause:** The `singlePlacementGroup` parameter is set to *True*. --**Solution:** The `singlePlacementGroup` must be set to *False*. ---<!-- error --> -### OutboundConnectivityNotEnabledOnVM. No outbound connectivity configured for virtual machine. --``` -OutboundConnectivityNotEnabledOnVM. No outbound connectivity configured for virtual machine. -``` -**Cause:** Trying to create a Virtual Machine Scale Set in Flexible Orchestration Mode with no outbound internet connectivity. --**Solution:** Enable secure outbound access for your Virtual Machine Scale Set in the manner best suited for your application. Outbound access can be enabled with a NAT Gateway on your subnet, adding instances to a Load Balancer backend pool, or adding an explicit public IP per instance. For highly secure applications, you can specify custom User Defined Routes through your firewall or virtual network applications. See [Default Outbound Access](../virtual-network/ip-services/default-outbound-access.md) for more details. - ## Get started with Flexible orchestration mode Register and get started with [Flexible orchestration mode](..\virtual-machines\flexible-virtual-machine-scale-sets.md) for your Virtual Machine Scale Sets. |
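As a minimal sketch of getting started with Azure PowerShell (hypothetical names; a sketch under the constraints above, not a definitive recipe): the fault domain count must be valid for the target region or zone (1 for zonal deployments), and single placement group must remain false for Flexible orchestration.

```powershell
# Hypothetical names; create a scale set in Flexible orchestration mode.
$cred = Get-Credential   # local administrator credentials for the instances
New-AzVmss -ResourceGroupName "myResourceGroup" `
    -VMScaleSetName "myFlexScaleSet" `
    -OrchestrationMode "Flexible" `
    -Location "EastUS" `
    -PlatformFaultDomainCount 1 `
    -Credential $cred
```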
virtual-machines | Dedicated Host Retirement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-retirement.md | A: ### Q: What will happen to my Azure Reservation? -A: You'll need to [exchange your reservation](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md#how-to-exchange-or-refund-an-existing-reservation) through the Azure portal to match the new Dedicated Host SKU. +A: You'll need to [exchange your reservation](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md#how-to-exchange-or-refund-an-existing-reservation) through the Azure portal to match the new Dedicated Host SKU. ++### Q: What will happen to my host if I don't migrate by March 31, 2023? ++A: After March 31, 2023, any dedicated host running on a SKU that's marked for retirement will be set to the 'Host Pending Deallocate' state and eventually deallocated. For additional assistance, reach out to Azure support. ++### Q: What will happen to my VMs if a host is automatically deallocated? ++A: If the underlying host is deallocated, the VMs that were running on it are deallocated but not deleted. You can then either create a new host (of the same VM family) and allocate the VMs to it, or run the VMs on multi-tenant infrastructure. |
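As a sketch of the replacement path, a new dedicated host can be created with `New-AzHost` (the same cmdlet used elsewhere in these articles) before moving VMs across. The resource names and the successor SKU below are hypothetical; substitute the SKU your retirement notice recommends.

```powershell
# Hypothetical names and SKU: create a replacement host in the same host group,
# then reallocate the deallocated VMs onto it.
New-AzHost -ResourceGroupName "myDHResourceGroup" `
    -HostGroupName "myHostGroup" `
    -Name "myReplacementHost" `
    -Location "eastus" `
    -Sku "DSv3-Type4" `
    -PlatformFaultDomain 1
```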
virtual-machines | Dedicated Hosts How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts-how-to.md | If you set a fault domain count for your host group, you'll need to specify the 1. Select *myDedicatedHostsRG* as the **Resource group**. 1. In **Instance details**, type *myHost* for the **Name** and select *East US* for the location. 1. In **Hardware profile**, select *Standard Es3 family - Type 1* for the **Size family**, select *myHostGroup* for the **Host group** and then select *1* for the **Fault domain**. Leave the defaults for the rest of the fields.+1. Leave the **Automatically replace host on failure** setting *Enabled* to automatically service heal the host in case of any host-level failure. 1. When you're done, select **Review + create** and wait for validation. 1. Once you see the **Validation passed** message, select **Create** to create the host. az vm host create \ --name myHost \ --sku DSv3-Type1 \ --platform-fault-domain 1 \+ --auto-replace true \ -g myDHResourceGroup ``` $dHost = New-AzHost ` -Location $location -Name myHost ` -ResourceGroupName $rgName ` -Sku DSv3-Type1 `- -AutoReplaceOnFailure 1 ` + -AutoReplaceOnFailure True ` -PlatformFaultDomain 1 ``` |
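To confirm how a host was created, its properties can be read back with `Get-AzHost`; the resource names below are hypothetical, and the property names follow the Az.Compute output model, so verify them in your shell.

```powershell
# Hypothetical names; inspect the host after creation.
$dHost = Get-AzHost -ResourceGroupName "myDHResourceGroup" `
    -HostGroupName "myHostGroup" `
    -Name "myHost"
$dHost.AutoReplaceOnFailure   # expected: True when auto-replace is enabled
$dHost.PlatformFaultDomain
```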
virtual-machines | Dedicated Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts.md | Known issues and limitations when using automatic VM placement: - You won't be able to redeploy your VM. - You won't be able to use DCv2, Lsv2, NVasv4, NVsv3, Msv2, or M-series VMs with dedicated hosts. +## Host Service Healing ++Failures in the underlying node, network connectivity, or software can push the host and the VMs on it into a non-healthy state, causing disruption and downtime for your workloads. By default, Azure automatically service heals the impacted host to a healthy node and moves all VMs to the healthy host. Once the VMs are service healed and restarted, the impacted host is deallocated. During the service healing process, the host and VMs are unavailable, incurring a brief downtime. ++The newly created host has all the same constraints as the old host: + - Resource group + - Region + - Fault domain + - Host group + - ADH SKU + - Auto replace on failure setting ++Users with compliance requirements might need a strong affinity between the host and the underlying node and might not want automatic service healing. In such scenarios, you can opt out of auto service healing at the host level by disabling the **Automatically replace host on failure** setting. ++### Implications ++If you disable auto service healing and the underlying node encounters a failure, your host state changes to 'Host Pending Deallocate' and the host is eventually deallocated. ++To avoid deallocation, you need to manually redeploy the host by creating a new dedicated host and moving all the VMs from the old host to the new one. ++The auto replace host setting is a create-time setting and can't be changed once the host is created. VMs that you manually stop or deallocate on the impacted host aren't moved as part of automatic service healing. ## Virtual Machine Scale Set support |
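Because the setting can only be chosen at creation time, opting out has to happen when the host is created. A minimal sketch with hypothetical names follows; the `-Parameter:$value` colon syntax is used so the flag is set explicitly at create time.

```powershell
# Hypothetical names; opt out of auto service healing at create time.
New-AzHost -ResourceGroupName "myDHResourceGroup" `
    -HostGroupName "myHostGroup" `
    -Name "myPinnedHost" `
    -Location "eastus" `
    -Sku "DSv3-Type1" `
    -PlatformFaultDomain 1 `
    -AutoReplaceOnFailure:$false
```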
virtual-machines | Dv2 Dsv2 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv2-dsv2-series.md | |
virtual-machines | Key Vault Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md | The Azure PowerShell can be used to deploy the Key Vault VM extension to an exis ```powershell # Build settings- $settings = ".\settings.json" + $settings = (get-content -raw ".\settings.json") $extName = "KeyVaultForWindows" $extPublisher = "Microsoft.Azure.KeyVault" $extType = "KeyVaultForWindows" |
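With the settings loaded as a raw JSON string, deployment typically completes with `Set-AzVMExtension`. The sketch below assumes hypothetical VM and resource group names and a type handler version of "3.0"; verify the current extension version before use.

```powershell
# Continues from the snippet above; hypothetical VM and resource group names.
Set-AzVMExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Name $extName `
    -Publisher $extPublisher `
    -ExtensionType $extType `
    -TypeHandlerVersion "3.0" `
    -SettingString $settings
```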
virtual-machines | Automation Bom Get Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-bom-get-files.md | |
virtual-machines | Automation Bom Prepare | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-bom-prepare.md | |
virtual-machines | Automation Bom Templates Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-bom-templates-db.md | |
virtual-machines | Automation Configure Control Plane | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-control-plane.md | |
virtual-machines | Automation Configure Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md | |
virtual-machines | Automation Configure Extra Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-extra-disks.md | |
virtual-machines | Automation Configure Sap Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-sap-parameters.md | |
virtual-machines | Automation Configure System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md | |
virtual-machines | Automation Configure Webapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-webapp.md | |
virtual-machines | Automation Configure Workload Zone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md | |
virtual-machines | Automation Deploy Control Plane | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-control-plane.md | |
virtual-machines | Automation Deploy System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-system.md | |
virtual-machines | Automation Deploy Workload Zone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-workload-zone.md | |
virtual-machines | Automation Deployment Framework | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deployment-framework.md | |
virtual-machines | Automation Devops Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-devops-tutorial.md | |
virtual-machines | Automation Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-get-started.md | |
virtual-machines | Automation Manual Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-manual-deployment.md | |
virtual-machines | Automation Naming Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-naming-module.md | |
virtual-machines | Automation Naming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-naming.md | |
virtual-machines | Automation New Vs Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-new-vs-existing.md | |
virtual-machines | Automation Plan Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-plan-deployment.md | |
virtual-machines | Automation Reference Bash | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-reference-bash.md | |
virtual-machines | Automation Reference Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-reference-powershell.md | |
virtual-machines | Automation Run Ansible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-run-ansible.md | |
virtual-machines | Automation Software | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-software.md | |
virtual-machines | Automation Supportability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-supportability.md | |
virtual-machines | Automation Tools Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-tools-configuration.md | |
virtual-machines | Automation Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-tutorial.md | |
virtual-machines | Azure Monitor Alerts Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/azure-monitor-alerts-portal.md | description: Learn how to use a browser method for configuring alerts in Azure M --+ Last updated 10/19/2022 #Customer intent: As a developer, I want to configure alerts in Azure Monitor for SAP solutions so that I can receive alerts and notifications about my SAP systems. |
virtual-machines | Azure Monitor Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/azure-monitor-providers.md | Title: What are providers in Azure Monitor for SAP solutions? (preview) description: This article provides answers to frequently asked questions about Azure Monitor for SAP solutions providers. --+ Last updated 10/19/2022 |
virtual-machines | Azure Monitor Sap Quickstart Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/azure-monitor-sap-quickstart-powershell.md | description: Deploy Azure Monitor for SAP solutions with Azure PowerShell --+ Last updated 10/19/2022 ms.devlang: azurepowershell |
virtual-machines | Azure Monitor Sap Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/azure-monitor-sap-quickstart.md | description: Learn how to use a browser method for deploying Azure Monitor for S --+ Last updated 10/19/2022 # Customer intent: As a developer, I want to deploy Azure Monitor for SAP solutions in the Azure portal so that I can configure providers. |
virtual-machines | Automation Advanced_State_Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/bash/automation-advanced_state_management.md | |
virtual-machines | Automation Install_Deployer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/bash/automation-install_deployer.md | |
virtual-machines | Automation Install_Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/bash/automation-install_library.md | |
virtual-machines | Automation Install_Workloadzone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/bash/automation-install_workloadzone.md | |
virtual-machines | Automation Installer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/bash/automation-installer.md | |
virtual-machines | Automation Prepare Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/bash/automation-prepare-region.md | |
virtual-machines | Automation Remove Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/bash/automation-remove-region.md | |
virtual-machines | Automation Remover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/bash/automation-remover.md | |
virtual-machines | Automation Set Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/bash/automation-set-secrets.md | |
virtual-machines | Automation Update_Sas_Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/bash/automation-update_sas_token.md | |
virtual-machines | Business One Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/business-one-azure.md | Title: SAP Business One on Azure Virtual Machines | Microsoft Docs description: SAP Business One on Azure. -+ Last updated 02/11/2022 |
virtual-machines | Businessobjects Deployment Guide Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/businessobjects-deployment-guide-linux.md | |
virtual-machines | Businessobjects Deployment Guide Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/businessobjects-deployment-guide-windows.md | |
virtual-machines | Businessobjects Deployment Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/businessobjects-deployment-guide.md | |
virtual-machines | Cal Ides Erp6 Erp7 Sp3 Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-ides-erp6-erp7-sp3-sql.md | Title: Deploy SAP IDES EHP7 SP3 for SAP ERP 6.0 on Azure | Microsoft Docs description: Deploy SAP IDES EHP7 SP3 for SAP ERP 6.0 on Azure -+ Last updated 09/16/2016 |
virtual-machines | Cal S4h | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md | tags: azure-resource-manager keywords: '' ms.assetid: 44bbd2b6-a376-4b5c-b824-e76917117fa9-+ vm-linux The online library is continuously updated with Appliances for demo, proof of co | Appliance Template | Date | Description | Creation Link | | | - | -- | - |
-| [**SAP S/4HANA 2021 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) | July 19 2022 | This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Management (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP S/4HANA 2021 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) | April 26 2022 | This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Management (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP S/4HANA 2022, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4e6b3ba-ba8f-485f-813f-be27ed5c8311) | December 15 2022 | This appliance contains SAP S/4HANA 2022 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=f4e6b3ba-ba8f-485f-813f-be27ed5c8311&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP HANA, Platform Edition 2.0 SPS06 rev63**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3e897d4f-e384-49b4-9fca-f15a888a8e3f) | December 06 2022 | Invent new possibilities with SAP HANA, a completely re-imagined, modern platform for real-time business: Run your business in real time. SAP HANA can help you dramatically accelerate analytics, business processes, and predictive analysis – all on a single in-memory computing platform. Important: please observe that usage of this virtual appliance is governed by Terms and Conditions as published in the Pricing section. | [Create Appliance](https://cal.sap.com/registration?sguid=3e897d4f-e384-49b4-9fca-f15a888a8e3f&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP Focused Run 4.0 SP00, unconfigured**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/8caeae4b-8521-45b1-a70c-d800834a01e4) | December 21 2022 | SAP Focused Run is designed specifically for businesses that need high-volume system and application monitoring, alerting, and analytics. It's a powerful solution for service providers, who want to host all their customers in one central, scalable, safe, and automated environment. It also addresses customers with advanced needs regarding system management, user monitoring, integration monitoring, and configuration and security analytics. | [Create Appliance](https://cal.sap.com/registration?sguid=8caeae4b-8521-45b1-a70c-d800834a01e4&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP NetWeaver 7.5 SP15 on SAP ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) | January 20 2020 | SAP NetWeaver 7.5 SP15 on SAP ASE | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP S/4HANA 2022, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4e6b3ba-ba8f-485f-813f-be27ed5c8311) | December 15 2022 | This appliance contains SAP S/4HANA 2022 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=f4e6b3ba-ba8f-485f-813f-be27ed5c8311&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP ABAP Platform 1909, Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/7bd4548f-a95b-4ee9-910a-08c74b4f6c37) | June 21 2021 | The SAP ABAP Platform on SAP HANA gives you access to SAP ABAP Platform 1909 Developer Edition on SAP HANA. Note that this solution is preconfigured with many additional elements – including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and preconfigured frontend / backend connections, etc. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Create Appliance](https://cal.sap.com/registration?sguid=7bd4548f-a95b-4ee9-910a-08c74b4f6c37&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP S/4HANA 2021 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) | July 19 2022 | This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Management (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP NetWeaver 7.5 SP15 on SAP ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) | January 3 2018 | SAP NetWeaver 7.5 SP15 on SAP ASE | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP S/4HANA 2022**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/c4aff915-1af8-4d45-b370-0b38a079f9bc) | December 4 2022 | This solution comes as a standard S/4HANA system installation including a remote desktop for easy frontend access. It contains a pre-configured and activated SAP S/4HANA Fiori UI in client 100, with prerequisite components activated as per SAP note 3166600, Composite SAP note: Rapid Activation for SAP Fiori in SAP S/4HANA 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=f4e6b3ba-ba8f-485f-813f-be27ed5c8311&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP Focused Run 4.0 SP00 (configured)**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4a6643f-2731-486c-af82-0508396650b7) | January 19 2023 | SAP Focused Run is designed specifically for businesses that need high-volume system and application monitoring, alerting, and analytics. It's a powerful solution for service providers, who want to host all their customers in one central, scalable, safe, and automated environment. It also addresses customers with advanced needs regarding system management, user monitoring, integration monitoring, and configuration and security analytics. | [Create Appliance](https://cal.sap.com/registration?sguid=f4a6643f-2731-486c-af82-0508396650b7&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | |
virtual-machines | Certifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/certifications.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: -+ vm-linux |
virtual-machines | Configure Db 2 Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-db-2-azure-monitor-sap-solutions.md | Title: Create IBM Db2 provider for Azure Monitor for SAP solutions (preview) description: This article provides details to configure an IBM DB2 provider for Azure Monitor for SAP solutions. --+ Last updated 12/03/2022 |
virtual-machines | Configure Ha Cluster Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-ha-cluster-azure-monitor-sap-solutions.md | Title: Create a High Availability Pacemaker cluster provider for Azure Monitor for SAP solutions (preview) description: Learn how to configure High Availability (HA) Pacemaker cluster providers for Azure Monitor for SAP solutions. --+ Last updated 01/05/2023 |
virtual-machines | Configure Hana Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-hana-azure-monitor-sap-solutions.md | Title: Configure SAP HANA provider for Azure Monitor for SAP solutions (preview) description: Learn how to configure the SAP HANA provider for Azure Monitor for SAP solutions through the Azure portal. --+ Last updated 10/19/2022 |
virtual-machines | Configure Linux Os Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-linux-os-azure-monitor-sap-solutions.md | Title: Configure Linux provider for Azure Monitor for SAP solutions (preview) description: This article explains how to configure a Linux OS provider for Azure Monitor for SAP solutions. --+ Last updated 01/05/2023 |
virtual-machines | Configure Netweaver Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-netweaver-azure-monitor-sap-solutions.md | Title: Configure SAP NetWeaver for Azure Monitor for SAP solutions (preview) description: Learn how to configure SAP NetWeaver for use with Azure Monitor for SAP solutions. --+ Last updated 10/19/2022 |
virtual-machines | Configure Sql Server Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-sql-server-azure-monitor-sap-solutions.md | Title: Configure Microsoft SQL Server provider for Azure Monitor for SAP solutions (preview) description: Learn how to configure a Microsoft SQL Server provider for use with Azure Monitor for SAP solutions. --+ Last updated 10/19/2022 |
virtual-machines | Create Network Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/create-network-azure-monitor-sap-solutions.md | Title: Set up network for Azure Monitor for SAP solutions (preview) description: Learn how to set up an Azure virtual network for use with Azure Monitor for SAP solutions. --+ Last updated 10/19/2022 |
virtual-machines | Dbms Guide General | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms-guide-general.md | Title: Considerations for Azure Virtual Machines DBMS deployment for SAP workload | Microsoft Docs description: Considerations for Azure Virtual Machines DBMS deployment for SAP workload -+ Last updated 09/22/2020 |
virtual-machines | Dbms Guide Ha Ibm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms-guide-ha-ibm.md | Title: Set up IBM Db2 HADR on Azure virtual machines (VMs) | Microsoft Docs description: Establish high availability of IBM Db2 LUW on Azure virtual machines (VMs). -+ Last updated 12/06/2022 |
virtual-machines | Dbms Guide Ibm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms-guide-ibm.md | |
virtual-machines | Dbms Guide Maxdb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms-guide-maxdb.md | description: SAP MaxDB, liveCache, and Content Server deployment on Azure tags: azure-resource-manager-+ Last updated 08/24/2022 |
virtual-machines | Dbms Guide Oracle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms-guide-oracle.md | |
virtual-machines | Dbms Guide Sapase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms-guide-sapase.md | description: SAP ASE Azure Virtual Machines DBMS deployment for SAP workload tags: azure-resource-manager-+ Last updated 11/30/2022 |
virtual-machines | Dbms Guide Sapiq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms-guide-sapiq.md | |
virtual-machines | Dbms Guide Sqlserver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms-guide-sqlserver.md | |
virtual-machines | Deployment Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/deployment-checklist.md | description: Checklist for planning SAP workload deployments to Azure and deploy tags: azure-resource-manager-+ Last updated 11/21/2022 |
virtual-machines | Deployment Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/deployment-guide.md | |
virtual-machines | Disaster Recovery Overview Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/disaster-recovery-overview-guide.md | |
virtual-machines | Disaster Recovery Sap Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/disaster-recovery-sap-guide.md | |
virtual-machines | Enable Tls Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/enable-tls-azure-monitor-sap-solutions.md | Title: Enable TLS 1.2 or higher description: Learn what is secure communication with TLS 1.2 or higher in Azure Monitor for SAP solutions. --+ Last updated 12/14/2022 |
virtual-machines | Exchange Online Integration Sap Email Outbound | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/exchange-online-integration-sap-email-outbound.md | Title: Exchange Online Integration for Email-Outbound from SAP NetWeaver | Micro description: Learn about Exchange Online integration for email outbound from SAP NetWeaver. -+ Last updated 03/11/2022 |
virtual-machines | Expose Sap Odata To Power Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-odata-to-power-query.md | Title: Enable SAP Principal Propagation for live OData feeds with Power Query description: Learn about configuring SAP Principal Propagation for live OData feeds with Power Query -+ Last updated 06/10/2022 |
virtual-machines | Expose Sap Process Orchestration On Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-process-orchestration-on-azure.md | Title: Expose SAP legacy middleware securely with Azure PaaS description: Learn about securely exposing SAP Process Orchestration on Azure. -+ Last updated 07/19/2022 |
virtual-machines | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md | Title: Get started with SAP on Azure VMs | Microsoft Docs description: Learn about SAP solutions that run on virtual machines (VMs) in Microsoft Azure -+ documentationcenter: '' |
virtual-machines | Ha Setup With Fencing Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/ha-setup-with-fencing-device.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Additional Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-additional-network-requirements.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-architecture.md | documentationcenter: editor: ''-+ vm-linux |
virtual-machines | Hana Available Skus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-available-skus.md | |
virtual-machines | Hana Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-backup-restore.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Certification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-certification.md | documentationcenter: editor: ''-+ vm-linux |
virtual-machines | Hana Concept Preparation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-concept-preparation.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Connect Azure Vm Large Instances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-connect-azure-vm-large-instances.md | |
virtual-machines | Hana Connect Vnet Express Route | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-connect-vnet-express-route.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Data Tiering Extension Nodes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-data-tiering-extension-nodes.md | documentationcenter: editor: ''-+ vm-linux |
virtual-machines | Hana Example Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-example-installation.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Failover Procedure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-failover-procedure.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-get-started.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: c51a2a06-6e97-429b-a346-b433a785c9f0-+ vm-linux |
virtual-machines | Hana Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-installation.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Know Terms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-know-terms.md | documentationcenter: editor: ''-+ vm-linux |
virtual-machines | Hana Large Instance Enable Kdump | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-large-instance-enable-kdump.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Large Instance Virtual Machine Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-large-instance-virtual-machine-migration.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Li Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-li-portal.md | description: Describes the way how you can identify and interact with Azure HANA tags: azure-resource-manager-+ Last updated 07/01/2021 |
virtual-machines | Hana Monitor Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-monitor-troubleshoot.md | |
virtual-machines | Hana Network Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-network-architecture.md | documentationcenter: editor: ''-+ vm-linux |
virtual-machines | Hana Onboarding Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-onboarding-requirements.md | documentationcenter: editor: ''-+ vm-linux |
virtual-machines | Hana Operations Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-operations-model.md | documentationcenter: editor: ''-+ vm-linux |
virtual-machines | Hana Overview Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-overview-architecture.md | documentationcenter: editor: ''-+ vm-linux |
virtual-machines | Hana Overview High Availability Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Overview Infrastructure Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-overview-infrastructure-connectivity.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Setup Smt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-setup-smt.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Sizing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-sizing.md | documentationcenter: editor: ''-+ vm-linux |
virtual-machines | Hana Storage Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-storage-architecture.md | documentationcenter: editor: ''-+ vm-linux |
virtual-machines | Hana Supported Scenario | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-supported-scenario.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Hana Vm Operations Netapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-netapp.md | |
virtual-machines | Hana Vm Operations Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-storage.md | |
virtual-machines | Hana Vm Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations.md | description: Operations guide for SAP HANA systems that are deployed on Azure vi tags: azure-resource-manager-+ Last updated 08/30/2022 |
virtual-machines | Hana Vm Premium Ssd V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-premium-ssd-v1.md | |
virtual-machines | Hana Vm Premium Ssd V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-premium-ssd-v2.md | |
virtual-machines | Hana Vm Troubleshoot Scale Out Ha On Sles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-troubleshoot-scale-out-ha-on-sles.md | description: Guide to check and troubleshoot a complex SAP HANA scale-out high-a -+ vm-linux |
virtual-machines | Hana Vm Ultra Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-ultra-disk.md | |
virtual-machines | High Availability Guide Rhel Glusterfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-glusterfs.md | |
virtual-machines | High Availability Guide Rhel Ibm Db2 Luw | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-ibm-db2-luw.md | |
virtual-machines | High Availability Guide Rhel Multi Sid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-multi-sid.md | |
virtual-machines | High Availability Guide Rhel Netapp Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files.md | description: Establish high availability for SAP NW on Azure virtual machines (V tags: azure-resource-manager-+ Last updated 12/06/2022 |
virtual-machines | High Availability Guide Rhel Nfs Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-nfs-azure-files.md | description: Establish high availability for SAP NW on Azure virtual machines (V tags: azure-resource-manager-+ Last updated 12/06/2022 |
virtual-machines | High Availability Guide Rhel Pacemaker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-pacemaker.md | |
virtual-machines | High Availability Guide Rhel With Dialog Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-with-dialog-instance.md | documentationcenter: saponazure tags: azure-resource-manager-+ vm-linux |
virtual-machines | High Availability Guide Rhel With Hana Ascs Ers Dialog Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-with-hana-ascs-ers-dialog-instance.md | documentationcenter: saponazure tags: azure-resource-manager-+ vm-linux |
virtual-machines | High Availability Guide Rhel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel.md | description: Azure Virtual Machines high availability for SAP NetWeaver on Red H tags: azure-resource-manager-+ Last updated 12/06/2022 |
virtual-machines | High Availability Guide Standard Load Balancer Outbound Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-standard-load-balancer-outbound-connections.md | |
virtual-machines | High Availability Guide Suse Multi Sid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid.md | |
virtual-machines | High Availability Guide Suse Netapp Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87-+ vm-windows |
virtual-machines | High Availability Guide Suse Nfs Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs-azure-files.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87-+ vm-windows |
virtual-machines | High Availability Guide Suse Nfs Simple Mount | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs-simple-mount.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87-+ vm-windows |
virtual-machines | High Availability Guide Suse Nfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs.md | |
virtual-machines | High Availability Guide Suse Pacemaker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md | |
virtual-machines | High Availability Guide Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87-+ vm-windows |
virtual-machines | High Availability Guide Windows Azure Files Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-windows-azure-files-smb.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87-+ vm-windows |
virtual-machines | High Availability Guide Windows Dfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-windows-dfs.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87-+ vm-windows |
virtual-machines | High Availability Guide Windows Netapp Files Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-windows-netapp-files-smb.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87-+ vm-windows |
virtual-machines | High Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-zones.md | |
virtual-machines | Integration Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/integration-get-started.md | Title: Get started with SAP and Azure integration scenarios description: Learn about the various integration points in the Microsoft ecosystem for SAP workloads.-+ Last updated 12/15/2022 |
virtual-machines | Lama Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/lama-installation.md | |
virtual-machines | Large Instance High Availability Rhel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/large-instance-high-availability-rhel.md | Title: Azure Large Instances high availability for SAP on RHEL description: Learn how to automate an SAP HANA database failover using a Pacemaker cluster in Red Hat Enterprise Linux. -+ Last updated 04/19/2021 |
virtual-machines | Large Instance Os Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/large-instance-os-backup.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Automation New Sapautomationregion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/module/automation-new-sapautomationregion.md | |
virtual-machines | Automation New Sapdeployer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/module/automation-new-sapdeployer.md | |
virtual-machines | Automation New Saplibrary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/module/automation-new-saplibrary.md | |
virtual-machines | Automation New Sapsystem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/module/automation-new-sapsystem.md | |
virtual-machines | Automation New Sapworkloadzone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/module/automation-new-sapworkloadzone.md | |
virtual-machines | Automation Remove Sapautomationregion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/module/automation-remove-sapautomationregion.md | |
virtual-machines | Automation Remove Sapsystem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/module/automation-remove-sapsystem.md | |
virtual-machines | Automation Set Sapsecrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/module/automation-set-sapsecrets.md | |
virtual-machines | Automation Update Tfstate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/module/automation-update-tfstate.md | |
virtual-machines | Monitor Sap On Azure Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure-reference.md | description: Important reference material needed when you monitor SAP on Azure. --+ Last updated 10/19/2022 |
virtual-machines | Monitor Sap On Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure.md | Title: What is Azure Monitor for SAP solutions? (preview) description: Learn about how to monitor your SAP resources on Azure for availability, performance, and operation. --+ Last updated 10/19/2022 |
virtual-machines | Os Backup Hli Type Ii Skus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/os-backup-hli-type-ii-skus.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Os Compatibility Matrix Hana Large Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/os-compatibility-matrix-hana-large-instance.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Os Upgrade Hana Large Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/os-upgrade-hana-large-instance.md | documentationcenter: editor:-+ vm-linux |
virtual-machines | Planning Guide Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide-storage.md | |
virtual-machines | Planning Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide.md | description: Azure Virtual Machines planning and implementation for SAP NetWeave tags: azure-resource-manager-+ vm-linux |
virtual-machines | Planning Supported Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-supported-configurations.md | |
virtual-machines | Proximity Placement Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/proximity-placement-scenarios.md | description: Describes SAP deployment scenarios with Azure proximity placement g tags: azure-resource-manager-+ Last updated 12/18/2022 |
virtual-machines | Rise Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/rise-integration.md | |
virtual-machines | Sap Ascs Ha Multi Sid Wsfc Azure Shared Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: cbf18abe-41cb-44f7-bdec-966f32c89325-+ vm-windows |
virtual-machines | Sap Ascs Ha Multi Sid Wsfc File Share | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-ascs-ha-multi-sid-wsfc-file-share.md | |
virtual-machines | Sap Ascs Ha Multi Sid Wsfc Shared Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-ascs-ha-multi-sid-wsfc-shared-disk.md | editor: '' tags: azure-resource-manager keywords: '' ms.assetid: cbf18abe-41cb-44f7-bdec-966f32c89325-+ vm-windows |
virtual-machines | Sap Hana Availability Across Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-availability-across-regions.md | description: An overview of availability considerations when running SAP HANA on tags: azure-resource-manager-+ Last updated 09/12/2018 |
virtual-machines | Sap Hana Availability One Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-availability-one-region.md | description: Describes SAP HANA operations on Azure native VMs in one Azure regi tags: azure-resource-manager-+ Last updated 07/27/2018 |
virtual-machines | Sap Hana Availability Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-availability-overview.md | description: Describes SAP HANA operations on Azure native VMs. tags: azure-resource-manager-+ Last updated 03/05/2018 |
virtual-machines | Sap Hana High Availability Netapp Files Red Hat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md | |
virtual-machines | Sap Hana High Availability Netapp Files Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md | documentationcenter: saponazure tags: azure-resource-manager-+ vm-linux |
virtual-machines |