Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Custom Policies Series Hello World | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-hello-world.md | If you haven't already done so, create the following encryption keys. To automat ```xml <UserJourney Id="HelloWorldJourney">- <OrchestrationSteps> - <OrchestrationStep Order="1" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" /> - </OrchestrationSteps> -</UserJourney> + <OrchestrationSteps> + <OrchestrationStep Order="1" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" /> + </OrchestrationSteps> + </UserJourney> ``` We've added a [UserJourney](userjourneys.md). The user journey specifies the business logic the end user goes through as Azure AD B2C processes a request. This user journey has only one step that issues a JWT token with the claims that you'll define in the next step. |
active-directory | Concept Authentication Default Enablement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md | The following table lists each setting that can be set to Microsoft managed and | [Registration campaign](how-to-mfa-registration-campaign.md) | Beginning in July, 2023, enabled for SMS and voice call users with free and trial subscriptions. | | [Location in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled | | [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |-| [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Disabled | +| [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Enabled | | [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Enabled | As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using SMS and voice calls, which led to default enablement for the registration campaign to help users to set up Authenticator for modern authentication. |
active-directory | Concept Certificate Based Authentication Certificateuserids | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md | -Users in Azure AD can have a multivalued attribute named **certificateUserIds**. The attribute allows up to four values, and each value can be of 120-character length. It can store any value, and doesn't require email ID format. It can store non-routable User Principal Names (UPNs) like _bob@woodgrove_ or _bob@local_. +Users in Azure AD can have a multivalued attribute named **certificateUserIds**. The attribute allows up to four values, and each value can be of 120-character length. It can store any value and doesn't require email ID format. It can store non-routable User Principal Names (UPNs) like _bob@woodgrove_ or _bob@local_. ## Supported patterns for certificate user IDs The values stored in **certificateUserIds** should be in the format described in ## Roles to update certificateUserIds -For cloud only users, only users with roles **Global Administrators**, **Privileged Authentication Administrator** can write into certificateUserIds. -For sync'd users, AD users with role **Hybrid Identity Administrator** can write into the attribute. +For cloud-only users, only users with roles **Global Administrators**, **Privileged Authentication Administrator** can write into certificateUserIds. +For synched users, AD users with role **Hybrid Identity Administrator** can write into the attribute. >[!NOTE]->Active Directory Administrators (including accounts with delegated administrative privilege over sync'd user accounts as well as administrative rights over the Azure >AD Connect Servers) can make changes that impact the certificateUserIds value in Azure AD for any sync'd accounts. +>Active Directory Administrators (including accounts with delegated administrative privilege over synched user accounts as well as administrative rights over the Azure >AD Connect Servers) can make changes that impact the certificateUserIds value in Azure AD for any synched accounts. ## Update certificate user IDs in the Azure portal Tenant admins can use the following steps Azure portal to update certificate use 1. Enter the value and click **Save**. You can add up to four values, each of 120 characters. :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/save.png" alt-text="Screenshot of a value to enter for CertificateUserId.":::- ++## Update certificateUserIds using Microsoft Graph queries ++**Look up certificateUserIds** ++Authorized callers can run Microsoft Graph queries to find all the users with a given certificateUserId value. On the Microsoft Graph [user](/graph/api/resources/user) object, the collection of certificateUserIds is stored in the **authorizationInfo** property. ++To retrieve all user objects that have the value 'bob@contoso.com' in certificateUserIds: ++```msgraph-interactive +GET https://graph.microsoft.com/v1.0/users?$filter=authorizationInfo/certificateUserIds/any(x:x eq 'bob@contoso.com')&$count=true +ConsistencyLevel: eventual +``` ++You can also use the `not` and `startsWith` operators to match the filter condition. To filter against the certificateUserIds object, the request must include the `$count=true` query string and the **ConsistencyLevel** header set to `eventual`. ++**Update certificateUserIds** ++Run a PATCH request to update the certificateUserIds for a given user. 
++#### Request body: ++```http +PATCH https://graph.microsoft.com/v1.0/users/{id} +Content-Type: application/json +{ + "authorizationInfo": { + "certificateUserIds": [ + "X509:<PN>123456789098765@mil" + ] + } +} +``` +## Update certificateUserIds using PowerShell commands ++For the configuration, you can use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation): ++1. Start Windows PowerShell with administrator privileges. +1. Install and import the Microsoft Graph PowerShell SDK ++ ```powershell + Install-Module Microsoft.Graph -Scope AllUsers + Import-Module Microsoft.Graph.Authentication + Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser + ``` +1. Connect to the tenant and consent to the requested permissions ++ ```powershell + Connect-MGGraph -Scopes "Directory.ReadWrite.All", "User.ReadWrite.All" -TenantId <tenantId> + ``` +1. List CertificateUserIds attribute of a given user ++ ```powershell + $results = Invoke-MGGraphRequest -Method get -Uri 'https://graph.microsoft.com/v1.0/users/<userId>?$select=authorizationinfo' -OutputType PSObject -Headers @{'ConsistencyLevel' = 'eventual' } + #list certificateUserIds + $results.authorizationInfo + ``` +1. Create a variable with CertificateUserIds values + + ```powershell + #Create a new variable to prepare the change. Ensure that you list any existing values you want to keep as this operation will overwrite the existing value + $params = @{ +      authorizationInfo = @{ +            certificateUserIds = @( +            "X509:<SKI>eec6b88788d2770a01e01775ce71f1125cd6ad0f", +            "X509:<PN>user@contoso.com" +            ) +      } + } + ``` +1. Update CertificateUserIds attribute ++ ```powershell + $results = Invoke-MGGraphRequest -Method patch -Uri 'https://graph.microsoft.com/v1.0/users/<UserId>/?$select=authorizationinfo' -OutputType PSObject -Headers @{'ConsistencyLevel' = 'eventual' } -Body $params + ``` ++**Update CertificateUserIds using user object** ++1. Get the user object ++ ```powershell + $userObjectId = "6b2d3bd3-b078-4f46-ac53-f862f35e10b6" + $user = get-mguser -UserId $userObjectId -Property AuthorizationInfo + ``` ++1. Update the CertificateUserIds attribute of the user object ++ ```powershell + $user.AuthorizationInfo.certificateUserIds = @("X509:<SKI>eec6b88788d2770a01e01775ce71f1125cd6ad0f", "X509:<PN>user1@contoso.com") + Update-MgUser -UserId $userObjectId -AuthorizationInfo $user.AuthorizationInfo + ``` + ## Update certificate user IDs using Azure AD Connect To update certificate user IDs for federated users, configure Azure AD Connect to sync userPrincipalName to certificateUserIds. To synchronize X509:\<PN>PrincipalNameValue, create an outbound synchronization ### Synchronize X509:\<RFC822>RFC822Name -To synchronize X509:\<RFC822>RFC822Name, create an outbound synchronization rule, choose **Expression** in the flow type. Choose the target attribute as **certificateUserIds**, and in the source field, add the following expression. If your source attribute isn't userPrincipalName, you can change the expression accordingly. +To synchronize X509:\<RFC822>RFC822Name, create an outbound synchronization rule and choose **Expression** in the flow type. Choose the target attribute as **certificateUserIds**, and in the source field, add the following expression. If your source attribute isn't userPrincipalName, you can change the expression accordingly. 
``` "X509:\<RFC822>"&[userPrincipalName] alt-security-identity-add. |Option | Value | |-|-| |Name | Descriptive name of the rule, such as: Out to AAD - certificateUserIds |- |Connected System | Your Azure AD doamin | + |Connected System | Your Azure AD domain | |Connected System Object Type | user | |Metaverse Object Type | person | |Precedence | Choose a random high number not currently used | IIF(IsPresent([alternativeSecurityId]), ) ``` -## Look up certificateUserIds using Microsoft Graph queries --Authorized callers can run Microsoft Graph queries to find all the users with a given certificateUserId value. On the Microsoft Graph [user](/graph/api/resources/user) object, the collection of certificateUserIds are stored in the **authorizationInfo** property. - -To retrieve all user objects that have the value 'bob@contoso.com' in certificateUserIds: --```msgraph-interactive -GET https://graph.microsoft.com/v1.0/users?$filter=authorizationInfo/certificateUserIds/any(x:x eq 'bob@contoso.com')&$count=true -ConsistencyLevel: eventual -``` --You can also use the `not` and `startsWith` operators to match the filter condition. To filter against the certificateUserIds object, the request must include the `$count=true` query string and the **ConsistencyLevel** header set to `eventual`. - -## Update certificateUserIds using Microsoft Graph queries --Run a PATCH request to update the certificateUserIds for a given user. --#### Request body: --```http -PATCH https://graph.microsoft.com/v1.0/users/{id} -Content-Type: application/json --{ - "authorizationInfo": { - "certificateUserIds": [ - "X509:<PN>123456789098765@mil" - ] - } -} -``` -- ## Next steps - [Overview of Azure AD CBA](concept-certificate-based-authentication.md) |
active-directory | Concept System Preferred Multifactor Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-system-preferred-multifactor-authentication.md | System-preferred MFA is a Microsoft managed setting, which is a [tristate policy After system-preferred MFA is enabled, the authentication system does all the work. Users don't need to set any authentication method as their default because the system always determines and presents the most secure method they registered. >[!NOTE]->System-preferred MFA is an important security enhancement for users authenticating by using telecom transports. Starting July 07, 2023, the Microsoft managed value of system-preferred MFA will change from **Disabled** to **Enabled**. If you don't want to enable system-peeferred MFA, change the state from **Default** to **Disabled**, or exclude users and groups from the policy. +>System-preferred MFA is an important security enhancement for users authenticating by using telecom transports. Starting July 07, 2023, the Microsoft managed value of system-preferred MFA will change from **Disabled** to **Enabled**. If you don't want to enable system-preferred MFA, change the state from **Default** to **Disabled**, or exclude users and groups from the policy. ## Enable system-preferred MFA in the Azure portal |
active-directory | How To Mfa Authenticator Lite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md | Users receive a notification in Outlook mobile to approve or deny sign-in, or th ## Prerequisites -- Your organization needs to enable Microsoft Authenticator (second factor) push notifications for some users or groups by using the modern Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API.+- Your organization needs to enable Microsoft Authenticator (second factor) push notifications for some users or groups by using the modern Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API. Organizations with an active MFA server or that have not started migration from per-user MFA are not eligible for this feature. >[!TIP] >We recommend that you also enable [system-preferred multifactor authentication (MFA)](concept-system-preferred-multifactor-authentication.md) when you enable Authenticator Lite. With system-preferred MFA enabled, users try to sign-in with Authenticator Lite before they try less secure telephony methods like SMS or voice call. To disable Authenticator Lite in the Azure portal, complete the following steps: 2. On the Enable and Target tab, click Yes and All users to enable the Authenticator policy for everyone or add selected users and groups. Set the Authentication mode for these users/groups to Any or Push. - Only users who are enabled for Microsoft Authenticator here can be enabled to use Authenticator Lite for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see the feature. Users who have Microsoft Authenticator downloaded on the same device Outlook is downloaded on will not be prompted to register for Authenticator Lite in Outlook. + Only users who are enabled for Microsoft Authenticator here can be enabled to use Authenticator Lite for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see the feature. Users who have Microsoft Authenticator downloaded on the same device Outlook is downloaded on will not be prompted to register for Authenticator Lite in Outlook. Android users utilizing a personal and work profile on their device may be prompted to register if Authenticator is present on a different profile from the Outlook application. <img width="1112" alt="Entra portal Authenticator settings" src="https://user-images.githubusercontent.com/108090297/228603771-52c5933c-f95e-4f19-82db-eda2ba640b94.png"> |
active-directory | Daemon Quickstart Portal Netcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-netcore.md | -> ->  +>  > > ### MSAL.NET > |
active-directory | Licensing Service Plan Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Microsoft 365 Business Standard | O365_BUSINESS_PREMIUM | f245ecc8-75af-4f8e-b61f-27d8114de5f3 | CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>STREAM_O365_SMB (3c53ea51-d578-46fa-a4c0-fd0a92809a60)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4) | Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Business (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Kaizala Pro (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 1) (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Stream for Office 365 (3c53ea51-d578-46fa-a4c0-fd0a92809a60)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer Enterprise 
(7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Viva Engage Core (a82fbf69-b4d7-49f4-83a6-915b2cf354f4) | | Microsoft 365 Business Standard - Prepaid Legacy | SMB_BUSINESS_PREMIUM | ac5cef5d-921b-4f97-9ef3-c99076e5470f | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | | Microsoft 365 Business Premium | SPB | cbdc14ab-d96c-4c30-b9f4-6ada7cdc1d46 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_SMB (bfc1bbd9-981b-4f71-9b82-17c35fd0e2a4)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT 
(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE_SHARED_COMPUTER_ACTIVATION (276d6e8a-f056-4f70-b7e8-4fc27f79f809)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINBIZ (8e229017-d77b-43d5-9305-903395523b99)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Exchange Online Archiving (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Business (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Business (bfc1bbd9-981b-4f71-9b82-17c35fd0e2a4)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Kaizala Pro (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Office Shared Computer Activation (276d6e8a-f056-4f70-b7e8-4fc27f79f809)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SharePoint (Plan 1) 
(c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Business (8e229017-d77b-43d5-9305-903395523b99)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>Microsoft Stream for Office 365 E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>Viva Engage Core (a82fbf69-b4d7-49f4-83a6-915b2cf354f4) |-| Microsoft 365 Business Voice | BUSINESS_VOICE_MED2 | a6051f20-9cbc-47d2-930d-419183bf6cf1 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | -| Microsoft 365 Business Voice (US) | BUSINESS_VOICE_MED2_TELCO | 08d7bce8-6e16-490e-89db-1d508e5e9609 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | -| Microsoft 365 Business Voice (without calling plan) | BUSINESS_VOICE_DIRECTROUTING | d52db95a-5ecb-46b6-beb0-190ab5cda4a8 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | -| Microsoft 365 Business Voice (without Calling Plan) for US | BUSINESS_VOICE_DIRECTROUTING_MED | 8330dae3-d349-44f7-9cad-1b23c64baabe | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | | Microsoft 365 Domestic Calling Plan (120 Minutes) | MCOPSTN_5 | 11dee6af-eca8-419f-8061-6864517c1875 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | MICROSOFT 365 DOMESTIC CALLING PLAN (120 min) (54a152dc-90de-4996-93d2-bc47e670fc06) | | Microsoft 365 Domestic Calling 
Plan for GCC | MCOPSTN_1_GOV | 923f58ab-fca1-46a1-92f9-89fda21238a8 | MCOPSTN1_GOV (3c8a8792-7866-409b-bb61-1b20ace0368b)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8) | Domestic Calling for Government (3c8a8792-7866-409b-bb61-1b20ace0368b)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8) | | Microsoft 365 E3 | SPE_E3 | 05e9a617-0261-4cee-bb44-138d3ef5d965 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 
Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Viva Engage Core (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | |
active-directory | Tenant Restrictions V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md | To configure tenant restrictions, you'll need the following: - Azure AD Premium P1 or P2 - Account with a role of Global administrator or Security administrator-- Windows devices running Windows 10, Windows 11, or Windows Server 2022 with the latest updates+- Windows devices running Windows 10, Windows 11 with the latest updates ## Step 1: Configure default tenant restrictions V2 Suppose you use tenant restrictions to block access by default, but you want to :::image type="content" source="media/tenant-restrictions-v2/add-app-save.png" alt-text="Screenshot showing the selected application."::: +> [!NOTE] + > + > Blocking MSA tenant will not block + > - user-less traffic for devices. This includes traffic for Autopilot, Windows Update, and organizational telemetry. + > - B2B authentication of consumer accounts. + > - "Passthrough" authentication, used by many Azure apps and Office.com, where apps use Azure AD to sign in consumer users in a consumer context. + ## Step 3: Enable tenant restrictions on Windows managed devices After you create a tenant restrictions V2 policy, you can enforce the policy on each Windows 10, Windows 11, and Windows Server 2022 device by adding your tenant ID and the policy ID to the device's **Tenant Restrictions** configuration. When tenant restrictions are enabled on a Windows device, corporate proxies aren't required for policy enforcement. Devices don't need to be Azure AD managed to enforce tenant restrictions V2; domain-joined devices that are managed with Group Policy are also supported. |
active-directory | What Is Deprecated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/what-is-deprecated.md | Use the following table to learn about changes including deprecations, retiremen |Functionality, feature, or service|Change|Change date | |||:|-|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Sometime after GA| -|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Jun 30, 2023| +|[System-preferred authentication methods](../authentication/concept-system-preferred-multifactor-authentication.md)|Feature change|Sometime after GA| |[Azure AD Graph API](https://aka.ms/aadgraphupdate)|Start of phased retirement|Jul 2023|-|[My Apps improvements](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jun 30, 2023| |[Terms of Use experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jul 2023| |[Azure AD PowerShell and MSOnline PowerShell](https://aka.ms/aadgraphupdate)|Deprecation|Mar 30, 2024| |[Azure AD MFA Server](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Sep 30, 2024| Use the following table to learn about changes including deprecations, retiremen |Functionality, feature, or service|Change|Change date | |||:|+|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Jun 30, 2023| +|[My Apps improvements](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jun 30, 2023| |[Microsoft Authenticator Lite for Outlook mobile](../../active-directory/authentication/how-to-mfa-authenticator-lite.md)|Feature change|Jun 9, 2023| |[My Groups experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023| |[My Apps browser extension](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023| |
active-directory | Delegate By Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md | You can further restrict permissions by assigning roles at smaller scopes or by > [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |-> | Submit support ticket | [Service Support Administrator](permissions-reference.md#service-support-administrator) | [Application Administrator](permissions-reference.md#application-administrator)<br/>[Azure Information Protection Administrator](permissions-reference.md#azure-information-protection-administrator)<br/>[Billing Administrator](permissions-reference.md#billing-administrator)<br/>[Cloud Application Administrator](permissions-reference.md#cloud-application-administrator)<br/>[Compliance Administrator](permissions-reference.md#compliance-administrator)<br/>[Dynamics 365 Administrator](permissions-reference.md#dynamics-365-administrator)<br/>[Desktop Analytics Administrator](permissions-reference.md#desktop-analytics-administrator)<br/>[Exchange Administrator](permissions-reference.md#exchange-administrator)<br/>[Intune Administrator](permissions-reference.md#intune-administrator)<br/>[Password Administrator](permissions-reference.md#password-administrator)<br/>[Power BI Administrator](permissions-reference.md#power-bi-administrator)<br/>[Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator)<br/>[SharePoint Administrator](permissions-reference.md#sharepoint-administrator)<br/>[Skype for Business Administrator](permissions-reference.md#skype-for-business-administrator)<br/>[Teams Administrator](permissions-reference.md#teams-administrator)<br/>[Teams Communications Administrator](permissions-reference.md#teams-communications-administrator)<br/>[User Administrator](permissions-reference.md#user-administrator) | +> | Submit support ticket | [Service Support Administrator](permissions-reference.md#service-support-administrator) | [Application Administrator](permissions-reference.md#application-administrator)<br/>[Azure Information Protection Administrator](permissions-reference.md#azure-information-protection-administrator)<br/>[Billing Administrator](permissions-reference.md#billing-administrator)<br/>[Cloud Application Administrator](permissions-reference.md#cloud-application-administrator)<br/>[Compliance Administrator](permissions-reference.md#compliance-administrator)<br/>[Dynamics 365 Administrator](permissions-reference.md#dynamics-365-administrator)<br/>[Desktop Analytics Administrator](permissions-reference.md#desktop-analytics-administrator)<br/>[Exchange Administrator](permissions-reference.md#exchange-administrator)<br/>[Intune Administrator](permissions-reference.md#intune-administrator)<br/>[Password Administrator](permissions-reference.md#password-administrator)<br/>[Fabric Administrator](permissions-reference.md#fabric-administrator)<br/>[Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator)<br/>[SharePoint Administrator](permissions-reference.md#sharepoint-administrator)<br/>[Skype for Business Administrator](permissions-reference.md#skype-for-business-administrator)<br/>[Teams Administrator](permissions-reference.md#teams-administrator)<br/>[Teams Communications Administrator](permissions-reference.md#teams-communications-administrator)<br/>[User Administrator](permissions-reference.md#user-administrator) | ## Next steps |
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | This article lists the Azure AD built-in roles you can assign to allow managemen > | [External ID User Flow Administrator](#external-id-user-flow-administrator) | Can create and manage all aspects of user flows. | 6e591065-9bad-43ed-90f3-e9424366d2f0 | > | [External ID User Flow Attribute Administrator](#external-id-user-flow-attribute-administrator) | Can create and manage the attribute schema available to all user flows. | 0f971eea-41eb-4569-a71e-57bb8a3eff1e | > | [External Identity Provider Administrator](#external-identity-provider-administrator) | Can configure identity providers for use in direct federation. | be2f45a1-457d-42af-a067-6ec1fa63bc45 |+> | [Fabric Administrator](#fabric-administrator) | Can manage all aspects of the Fabric and Power BI products. | a9ea8996-122f-4c74-9520-8edcd192826c | > | [Global Administrator](#global-administrator) | Can manage all aspects of Azure AD and Microsoft services that use Azure AD identities. | 62e90394-69f5-4237-9190-012177145e10 | > | [Global Reader](#global-reader) | Can read everything that a Global Administrator can, but not update anything. | f2ef992c-3afb-46b9-b7cf-a126ee74c451 | > | [Groups Administrator](#groups-administrator) | Members of this role can create/manage groups, create/manage groups settings like naming and expiration policies, and view groups activity and audit reports. | fdd7a751-b60b-444a-984c-02652fe8fa1c | This article lists the Azure AD built-in roles you can assign to allow managemen > | [Partner Tier2 Support](#partner-tier2-support) | Do not use - not intended for general use. | e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8 | > | [Password Administrator](#password-administrator) | Can reset passwords for non-administrators and Password Administrators. | 966707d0-3269-4727-9be2-8c3a10f19b9d | > | [Permissions Management Administrator](#permissions-management-administrator) | Manage all aspects of Entra Permissions Management. | af78dc32-cf4d-46f9-ba4e-4428526346b5 |-> | [Power BI Administrator](#power-bi-administrator) | Can manage all aspects of the Power BI product. | a9ea8996-122f-4c74-9520-8edcd192826c | > | [Power Platform Administrator](#power-platform-administrator) | Can create and manage all aspects of Microsoft Dynamics 365, Power Apps and Power Automate. | 11648597-926c-4cf3-9c36-bcebb0ba8dcc | > | [Printer Administrator](#printer-administrator) | Can manage all aspects of printers and printer connectors. | 644ef478-e28f-4e28-b9dc-3fdde9aa0b1f | > | [Printer Technician](#printer-technician) | Can register and unregister printers and update printer status. | e8cef6f1-e4bd-4ea8-bc07-4b8d950f4477 | This administrator manages federation between Azure AD organizations and externa > | microsoft.directory/domains/federation/update | Update federation property of domains | > | microsoft.directory/identityProviders/allProperties/allTasks | Read and configure identity providers in Azure Active Directory B2C | +## Fabric Administrator ++Users with this role have global permissions within Microsoft Fabric and Power BI, when the service is present, as well as the ability to manage support tickets and monitor service health. For more information, see [Understanding Fabric admin roles](/fabric/admin/roles). 
++> [!div class="mx-tableFixed"] +> | Actions | Description | +> | | | +> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | +> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | +> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | +> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | +> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | +> | microsoft.powerApps.powerBI/allEntities/allTasks | Manage all aspects of Fabric and Power BI | + ## Global Administrator Users with this role have access to all administrative features in Azure Active Directory, as well as services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft Purview compliance portal, Exchange Online, SharePoint Online, and Skype for Business Online. Global Administrators can view Directory Activity logs. Furthermore, Global Administrators can [elevate their access](../../role-based-access-control/elevate-access-global-admin.md) to manage all Azure subscriptions and management groups. This allows Global Administrators to get full access to all Azure resources using the respective Azure AD Tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators. A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has zero Global Administrators. Users with this role have access to all administrative features in Azure Active > | microsoft.office365.yammer/allEntities/allProperties/allTasks | Manage all aspects of Yammer | > | microsoft.permissionsManagement/allEntities/allProperties/allTasks | Manage all aspects of Entra Permissions Management | > | microsoft.powerApps/allEntities/allTasks | Manage all aspects of Power Apps |-> | microsoft.powerApps.powerBI/allEntities/allTasks | Manage all aspects of Power BI | +> | microsoft.powerApps.powerBI/allEntities/allTasks | Manage all aspects of Fabric and Power BI | > | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams | > | microsoft.virtualVisits/allEntities/allProperties/allTasks | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app | > | microsoft.windows.defenderAdvancedThreatProtection/allEntities/allTasks | Manage all aspects of Microsoft Defender for Endpoint | Users with the Modern Commerce User role typically have administrative permissio **When is the Modern Commerce User role assigned?** -* **Self-service purchase in Microsoft 365 admin center** – Self-service purchase gives users a chance to try out new products by buying or signing up for them on their own. These products are managed in the admin center. Users who make a self-service purchase are assigned a role in the commerce system, and the Modern Commerce User role so they can manage their purchases in admin center. Admins can block self-service purchases (for Power BI, Power Apps, Power automate) through [PowerShell](/microsoft-365/commerce/subscriptions/allowselfservicepurchase-powershell). 
For more information, see [Self-service purchase FAQ](/microsoft-365/commerce/subscriptions/self-service-purchase-faq). +* **Self-service purchase in Microsoft 365 admin center** – Self-service purchase gives users a chance to try out new products by buying or signing up for them on their own. These products are managed in the admin center. Users who make a self-service purchase are assigned a role in the commerce system, and the Modern Commerce User role so they can manage their purchases in admin center. Admins can block self-service purchases (for Fabric, Power BI, Power Apps, Power automate) through [PowerShell](/microsoft-365/commerce/subscriptions/allowselfservicepurchase-powershell). For more information, see [Self-service purchase FAQ](/microsoft-365/commerce/subscriptions/self-service-purchase-faq). * **Purchases from Microsoft commercial marketplace** – Similar to self-service purchase, when a user buys a product or service from Microsoft AppSource or Azure Marketplace, the Modern Commerce User role is assigned if they don’t have the Global Administrator or Billing Administrator role. In some cases, users might be blocked from making these purchases. For more information, see [Microsoft commercial marketplace](../../marketplace/marketplace-faq-publisher-guide.yml#what-could-block-a-customer-from-completing-a-purchase-). * **Proposals from Microsoft** – A proposal is a formal offer from Microsoft for your organization to buy Microsoft products and services. When the person who is accepting the proposal doesn’t have a Global Administrator or Billing Administrator role in Azure AD, they are assigned both a commerce-specific role to complete the proposal and the Modern Commerce User role to access admin center. When they access the admin center they can only use features that are authorized by their commerce-specific role. * **Commerce-specific roles** – Some users are assigned commerce-specific roles. If a user isn't a Global Administrator or Billing Administrator, they get the Modern Commerce User role so they can access the admin center. Learn more about Permissions Management roles and polices at [View information a > | | | > | microsoft.permissionsManagement/allEntities/allProperties/allTasks | Manage all aspects of Entra Permissions Management | -## Power BI Administrator --Users with this role have global permissions within Microsoft Power BI, when the service is present, as well as the ability to manage support tickets and monitor service health. For more information, see [Understanding Power BI administrator roles](/power-bi/admin/service-admin-role). --> [!NOTE] -> In the Microsoft Graph API and Azure AD PowerShell, this role is named Power BI Service Administrator. In the [Azure portal](../../azure-portal/azure-portal-overview.md), it is named Power BI Administrator. 
--> [!div class="mx-tableFixed"] -> | Actions | Description | -> | | | -> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | -> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | -> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | -> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | -> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | -> | microsoft.powerApps.powerBI/allEntities/allTasks | Manage all aspects of Power BI | - ## Power Platform Administrator Users in this role can create and manage all aspects of environments, Power Apps, Flows, Data Loss Prevention policies. Additionally, users with this role have the ability to manage support tickets and monitor service health. Users with this role can manage role assignments in Azure Active Directory, as w ## Reports Reader -Users with this role can view usage reporting data and the reports dashboard in Microsoft 365 admin center and the adoption context pack in Power BI. Additionally, the role provides access to all sign-in logs, audit logs, and activity reports in Azure AD and data returned by the Microsoft Graph reporting API. A user assigned to the Reports Reader role can access only relevant usage and adoption metrics. They don't have any admin permissions to configure settings or access the product-specific admin centers like Exchange. This role has no access to view, create, or manage support tickets. +Users with this role can view usage reporting data and the reports dashboard in Microsoft 365 admin center and the adoption context pack in Fabric and Power BI. Additionally, the role provides access to all sign-in logs, audit logs, and activity reports in Azure AD and data returned by the Microsoft Graph reporting API. A user assigned to the Reports Reader role can access only relevant usage and adoption metrics. They don't have any admin permissions to configure settings or access the product-specific admin centers like Exchange. This role has no access to view, create, or manage support tickets. > [!div class="mx-tableFixed"] > | Actions | Description | Users with this role **cannot** do the following: Users with this role can do the following tasks: - Manage and configure all aspects of Virtual Visits in Bookings in the Microsoft 365 admin center, and in the Teams EHR connector-- View usage reports for Virtual Visits in the Teams admin center, Microsoft 365 admin center, and Power BI+- View usage reports for Virtual Visits in the Teams admin center, Microsoft 365 admin center, Fabric, and Power BI - View features and settings in the Microsoft 365 admin center, but can't edit any settings Virtual Visits are a simple way to schedule and manage online and video appointments for staff and attendees. For example, usage reporting can show how sending SMS text messages before appointments can reduce the number of people who don't show up for appointments. 
All custom roles | | | :heavy_check_mark: | :heavy_check_mark: - [Assign Azure AD roles to groups](groups-assign-role.md) - [Understand the different roles](../../role-based-access-control/rbac-and-directory-admin-roles.md)-- [Assign a user as an administrator of an Azure subscription](../../role-based-access-control/role-assignments-portal-subscription-admin.md)+- [Assign a user as an administrator of an Azure subscription](../../role-based-access-control/role-assignments-portal-subscription-admin.md) |
active-directory | Airbase Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airbase-tutorial.md | + + Title: Azure Active Directory SSO integration with Airbase +description: Learn how to configure single sign-on between Azure Active Directory and Airbase. ++++++++ Last updated : 07/11/2023+++++# Azure Active Directory SSO integration with Airbase ++In this article, you'll learn how to integrate Airbase with Azure Active Directory (Azure AD). All-in-one spend management platform designed to deliver more control, visibility, and automation to today's finance teams that need an efficient way to scale controls and accounting operations. When you integrate Airbase with Azure AD, you can: ++* Control in Azure AD who has access to Airbase. +* Enable your users to be automatically signed-in to Airbase with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Airbase in a test environment. Airbase supports both **SP** and **IDP** initiated single sign-on. ++## Prerequisites ++To integrate Azure Active Directory with Airbase, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Airbase single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Airbase application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Airbase from the Azure AD gallery ++Add Airbase from the Azure AD application gallery to configure single sign-on with Airbase. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Airbase** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. 
In the **Identifier** textbox, type a URL using the following pattern: + `https://auth.airbase.io/<ID>` ++ b. In the **Reply URL** textbox, type a URL using one of the following patterns: + + | **Reply URL** | + || + | `https://auth.airbase.io/login/callback?connection=<ID>` | + | `https://auth.workos.com/sso/saml/acs/<ID>` | ++1. Perform the following step, if you wish to configure the application in **SP** initiated mode: ++ In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<ENVIRONMENT>.airbase.io` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Airbase support team](mailto:integrations@airbase.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ++  ++## Configure Airbase SSO ++To configure single sign-on on **Airbase** side, you need to send the **App Federation Metadata Url** to [Airbase support team](mailto:integrations@airbase.io). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Airbase test user ++In this section, you create a user called Britta Simon at Airbase SSO. Work with [Airbase support team](mailto:integrations@airbase.io) to add the users in the Airbase SSO platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++#### SP initiated: ++* Click on **Test this application** in Azure portal. This will redirect to Airbase Sign-on URL where you can initiate the login flow. ++* Go to Airbase Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in Azure portal and you should be automatically signed in to the Airbase for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the Airbase tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Airbase for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Airbase you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Couchbase Capella Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/couchbase-capella-sso-tutorial.md | + + Title: Azure Active Directory SSO integration with Couchbase Capella - SSO +description: Learn how to configure single sign-on between Azure Active Directory and Couchbase Capella - SSO. ++++++++ Last updated : 07/11/2023+++++# Azure Active Directory SSO integration with Couchbase Capella - SSO ++In this article, you'll learn how to integrate Couchbase Capella - SSO with Azure Active Directory (Azure AD). The purpose of this app is to integrate Couchbase's Capella cloud database platform with Azure SSO. ItΓÇÖs the easiest and fastest way to begin with Couchbase. When you integrate Couchbase Capella - SSO with Azure AD, you can: ++* Control in Azure AD who has access to Couchbase Capella - SSO. +* Enable your users to be automatically signed-in to Couchbase Capella - SSO with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Couchbase Capella - SSO in a test environment. Couchbase Capella - SSO supports **SP** initiated single sign-on and **Just In Time** user provisioning. ++## Prerequisites ++To integrate Azure Active Directory with Couchbase Capella - SSO, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Couchbase Capella - SSO single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Couchbase Capella - SSO application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Couchbase Capella - SSO from the Azure AD gallery ++Add Couchbase Capella - SSO from the Azure AD application gallery to configure single sign-on with Couchbase Capella - SSO. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Couchbase Capella - SSO** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. 
On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `urn:auth0:couchbase-capella:<Connection_UUID>` ++ b. In the **Reply URL** textbox, type one of the following URL/pattern: ++ | **Reply URL** | + || + | `https://couchbase-capella.us.auth0.com/login/callback` | + |` https://couchbase-capella.us.auth0.com/login/callback?connection=<Connection_UUID>` | ++ c. In the **Sign on URL** textbox, type the URL: + `https://cloud.couchbase.com/enterprise-sso` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Couchbase Capella - SSO support team](mailto:support@couchbase.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. Couchbase Capella - SSO application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++  ++1. In addition to above, Couchbase Capella - SSO application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements. ++ | Name | Source Attribute| + | | | + | email | user.mail | + | family_name | user.surname | ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up Couchbase Capella - SSO** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure Couchbase Capella - SSO ++To configure single sign-on on **Couchbase Capella - SSO** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Couchbase Capella - SSO support team](mailto:support@couchbase.com). They set this setting to have the SAML SSO connection set properly on both sides ++### Create Couchbase Capella - SSO test user ++In this section, a user called B.Simon is created in Couchbase Capella - SSO. Couchbase Capella - SSO supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Couchbase Capella - SSO, a new one is commonly created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to Couchbase Capella - SSO Sign-on URL where you can initiate the login flow. ++* Go to Couchbase Capella - SSO Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the Couchbase Capella - SSO tile in the My Apps, this will redirect to Couchbase Capella - SSO Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). 
++## Next steps ++Once you configure Couchbase Capella - SSO you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
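Because Couchbase Capella - SSO provisions users just in time, a user is created on the Couchbase side only after a successful sign-in, and if the enterprise application requires assignment, only assigned users and groups can sign in at all. If you want to confirm the current assignments before testing, the following Microsoft Graph request is a minimal sketch; the service principal object ID shown is a placeholder that you replace with the object ID of your own Couchbase Capella - SSO enterprise application.

```msgraph-interactive
GET https://graph.microsoft.com/v1.0/servicePrincipals/00000000-0000-0000-0000-000000000000/appRoleAssignedTo
```

You can look up the object ID with a query such as `GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq 'Couchbase Capella - SSO'`.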
active-directory | Netskope Cloud Exchange Administration Console Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netskope-cloud-exchange-administration-console-tutorial.md | + + Title: Azure Active Directory SSO integration with Netskope Cloud Exchange Administration Console +description: Learn how to configure single sign-on between Azure Active Directory and Netskope Cloud Exchange Administration Console. ++++++++ Last updated : 07/11/2023+++++# Azure Active Directory SSO integration with Netskope Cloud Exchange Administration Console ++In this article, you'll learn how to integrate Netskope Cloud Exchange Administration Console with Azure Active Directory (Azure AD). The Netskope Cloud Exchange (CE) gives customers powerful integration capabilities to leverage investments across their security and IT stacks. When you integrate Netskope Cloud Exchange Administration Console with Azure AD, you can: ++* Control in Azure AD who has access to Netskope Cloud Exchange Administration Console. +* Enable your users to be automatically signed-in to Netskope Cloud Exchange Administration Console with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Netskope Cloud Exchange Administration Console in a test environment. Netskope Cloud Exchange Administration Console supports **SP** initiated single sign-on. ++## Prerequisites ++To integrate Azure Active Directory with Netskope Cloud Exchange Administration Console, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Netskope Cloud Exchange Administration Console single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Netskope Cloud Exchange Administration Console application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Netskope Cloud Exchange Administration Console from the Azure AD gallery ++Add Netskope Cloud Exchange Administration Console from the Azure AD application gallery to configure single sign-on with Netskope Cloud Exchange Administration Console. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). 
++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Netskope Cloud Exchange Administration Console** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a URL using the following pattern: + `https://<Cloud_Exchange_FQDN>.com/api/metadata` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://<Cloud_Exchange_FQDN>/api/ssoauth?acs=true` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<Cloud_Exchange_FQDN>/login` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Netskope Cloud Exchange Administration Console support team](mailto:support@netskope.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. Netskope Cloud Exchange Administration Console application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++  ++1. In addition to above, Netskope Cloud Exchange Administration Console application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements. ++ | Name | Source Attribute| + | | | + | username | user.mail | + | roles | user.assignedroles | ++ > [!NOTE] + > Please click [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui) to know how to configure Role in Azure AD. ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up Netskope Cloud Exchange Administration Console** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure Netskope Cloud Exchange Administration Console SSO ++To configure single sign-on on **Netskope Cloud Exchange Administration Console** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Netskope Cloud Exchange Administration Console support team](mailto:support@netskope.com). They set this setting to have the SAML SSO connection set properly on both sides ++### Create Netskope Cloud Exchange Administration Console test user ++In this section, you create a user called Britta Simon at Netskope Cloud Exchange Administration Console SSO. Work with [Netskope Cloud Exchange Administration Console support team](mailto:support@netskope.com) to add the users in the Netskope Cloud Exchange Administration Console SSO platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. 
This will redirect to Netskope Cloud Exchange Administration Console Sign-on URL where you can initiate the login flow. ++* Go to Netskope Cloud Exchange Administration Console Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the Netskope Cloud Exchange Administration Console tile in the My Apps, this will redirect to Netskope Cloud Exchange Administration Console Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Netskope Cloud Exchange Administration Console you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
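The Netskope Cloud Exchange Administration Console tutorial maps the `roles` claim to `user.assignedroles`, so the app roles defined on the application determine the value each user sends in the SAML response. As a minimal sketch only, the following Microsoft Graph request adds a hypothetical role named `admin` to the app registration; the application object ID and the role GUID are placeholders, and because this PATCH replaces the whole `appRoles` collection, include any roles that already exist on the application in the body.

```msgraph-interactive
PATCH https://graph.microsoft.com/v1.0/applications/11111111-1111-1111-1111-111111111111
Content-Type: application/json

{
  "appRoles": [
    {
      "id": "22222222-2222-2222-2222-222222222222",
      "allowedMemberTypes": [ "User" ],
      "displayName": "admin",
      "description": "Example role passed to Netskope Cloud Exchange in the roles claim",
      "value": "admin",
      "isEnabled": true
    }
  ]
}
```

The portal-based steps in the linked app roles article achieve the same result and are usually the safer option for gallery applications.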
active-directory | Pwc Identity Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pwc-identity-tutorial.md | + + Title: 'Tutorial: Azure AD SSO integration with PwC Identity' +description: Learn how to configure single sign-on between Azure Active Directory and PwC Identity. ++++++++ Last updated : 07/12/2023+++++# Tutorial: Azure AD SSO integration with PwC Identity ++In this tutorial, you'll learn how to integrate PwC Identity with Azure Active Directory (Azure AD). When you integrate PwC Identity with Azure AD, you can: ++* Control in Azure AD who has access to PwC Identity. +* Enable your users to be automatically signed-in to PwC Identity with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++## Prerequisites ++To get started, you need the following items: ++* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* PwC Identity single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Azure AD SSO in a test environment. ++* PwC Identity supports **SP** initiated SSO. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Add PwC Identity from the gallery ++To configure the integration of PwC Identity into Azure AD, you need to add PwC Identity from the gallery to your list of managed SaaS apps. ++1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. +1. On the left navigation pane, select the **Azure Active Directory** service. +1. Navigate to **Enterprise Applications** and then select **All Applications**. +1. To add new application, select **New application**. +1. In the **Add from the gallery** section, type **PwC Identity** in the search box. +1. Select **PwC Identity** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Azure AD SSO for PwC Identity ++Configure and test Azure AD SSO with PwC Identity using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in PwC Identity. ++To configure and test Azure AD SSO with PwC Identity, perform the following steps: ++1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. + 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. + 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on. +1. **[Configure PwC Identity SSO](#configure-pwc-identity-sso)** - to configure the single sign-on settings on application side. + 1. **[Create PwC Identity test user](#create-pwc-identity-test-user)** - to have a counterpart of B.Simon in PwC Identity that is linked to the Azure AD representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. 
++## Configure Azure AD SSO ++Follow these steps to enable Azure AD SSO in the Azure portal. ++1. In the Azure portal, on the **PwC Identity** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier (Entity ID)** text box, type the value: + `urn:pwcid:saml:sp:p` ++ b. In the **Reply URL** textbox, type the URL: + `https://login.pwc.com/openam/PWCIAuthConsumer/metaAlias/pwc/sp3` ++ c. In the **Sign on URL** text box, type the URL: + `https://researchcreditsolution.pwc.com/` ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up PwC Identity** section, copy the appropriate URL(s) based on your requirement. ++  ++### Create an Azure AD test user ++In this section, you'll create a test user in the Azure portal called B.Simon. ++1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. +1. Select **New user** at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Name** field, enter `B.Simon`. + 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Click **Create**. ++### Assign the Azure AD test user ++In this section, you'll enable B.Simon to use Azure single sign-on by granting access to PwC Identity. ++1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. +1. In the applications list, select **PwC Identity**. +1. In the app's overview page, find the **Manage** section and select **Users and groups**. +1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. +1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. +1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. +1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure PwC Identity SSO ++To configure single sign-on on **PwC Identity** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [PwC Identity support team](https://www.pwc.com/us/en/services/tax/specialized-tax/research-development-credit.html). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create PwC Identity test user ++In this section, you create a user called Britta Simon in PwC Identity. Work with [PwC Identity support team](https://www.pwc.com/us/en/services/tax/specialized-tax/research-development-credit.html) to add the users in the PwC Identity platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. 
++* Click on **Test this application** in Azure portal. This will redirect to PwC Identity Sign on URL where you can initiate the login flow. ++* Go to PwC Identity Sign on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the PwC Identity tile in the My Apps, this will redirect to PwC Identity Sign on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure PwC Identity you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad). |
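If you would rather script the creation of the B.Simon test user described in the PwC Identity tutorial instead of using the portal, the following Microsoft Graph request is a minimal sketch; the user principal name, mail nickname, and password value are placeholders for your own tenant, not values that PwC Identity requires.

```msgraph-interactive
POST https://graph.microsoft.com/v1.0/users
Content-Type: application/json

{
  "accountEnabled": true,
  "displayName": "B.Simon",
  "mailNickname": "B.Simon",
  "userPrincipalName": "B.Simon@contoso.com",
  "passwordProfile": {
    "forceChangePasswordNextSignIn": true,
    "password": "<placeholder-strong-password>"
  }
}
```

After the user exists, you still assign it to the PwC Identity application as described in the tutorial.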
active-directory | R And D Tax Credit Services Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/r-and-d-tax-credit-services-tutorial.md | - Title: 'Tutorial: Azure AD SSO integration with R and D Tax Credit Services : 10-wk Implementation' -description: Learn how to configure single sign-on between Azure Active Directory and R and D Tax Credit Services. -------- Previously updated : 11/21/2022-----# Tutorial: Azure AD SSO integration with R and D Tax Credit Services : 10-wk Implementation --In this tutorial, you'll learn how to integrate R and D Tax Credit Services with Azure Active Directory (Azure AD). When you integrate R and D Tax Credit Services with Azure AD, you can: --* Control in Azure AD who has access to R and D Tax Credit Services. -* Enable your users to be automatically signed-in to R and D Tax Credit Services with their Azure AD accounts. -* Manage your accounts in one central location - the Azure portal. --## Prerequisites --To get started, you need the following items: --* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). -* R and D Tax Credit Services single sign-on (SSO) enabled subscription. --## Scenario description --In this tutorial, you configure and test Azure AD SSO in a test environment. --* R and D Tax Credit Services supports **SP** initiated SSO. --> [!NOTE] -> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. --## Add R and D Tax Credit Services from the gallery --To configure the integration of R and D Tax Credit Services into Azure AD, you need to add R and D Tax Credit Services from the gallery to your list of managed SaaS apps. --1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. -1. On the left navigation pane, select the **Azure Active Directory** service. -1. Navigate to **Enterprise Applications** and then select **All Applications**. -1. To add new application, select **New application**. -1. In the **Add from the gallery** section, type **R and D Tax Credit Services** in the search box. -1. Select **R and D Tax Credit Services** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) --## Configure and test Azure AD SSO for R and D Tax Credit Services --Configure and test Azure AD SSO with R and D Tax Credit Services using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in R and D Tax Credit Services. --To configure and test Azure AD SSO with R and D Tax Credit Services, perform the following steps: --1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. - 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. - 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on. -1. 
**[Configure R and D Tax Credit Services SSO](#configure-r-and-d-tax-credit-services-sso)** - to configure the single sign-on settings on application side. - 1. **[Create R and D Tax Credit Services test user](#create-r-and-d-tax-credit-services-test-user)** - to have a counterpart of B.Simon in R and D Tax Credit Services that is linked to the Azure AD representation of user. -1. **[Test SSO](#test-sso)** - to verify whether the configuration works. --## Configure Azure AD SSO --Follow these steps to enable Azure AD SSO in the Azure portal. --1. In the Azure portal, on the **R and D Tax Credit Services** application integration page, find the **Manage** section and select **single sign-on**. -1. On the **Select a single sign-on method** page, select **SAML**. -1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. --  --1. On the **Basic SAML Configuration** section, perform the following steps: -- a. In the **Identifier (Entity ID)** text box, type the value: - `urn:pwcid:saml:sp:p` -- b. In the **Reply URL** textbox, type the URL: - `https://login.pwc.com/openam/PWCIAuthConsumer/metaAlias/pwc/sp3` -- c. In the **Sign on URL** text box, type the URL: - `https://researchcreditsolution.pwc.com/` --1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. --  --1. On the **Set up R and D Tax Credit Services** section, copy the appropriate URL(s) based on your requirement. --  --### Create an Azure AD test user --In this section, you'll create a test user in the Azure portal called B.Simon. --1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. -1. Select **New user** at the top of the screen. -1. In the **User** properties, follow these steps: - 1. In the **Name** field, enter `B.Simon`. - 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. - 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. - 1. Click **Create**. --### Assign the Azure AD test user --In this section, you'll enable B.Simon to use Azure single sign-on by granting access to R and D Tax Credit Services. --1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. -1. In the applications list, select **R and D Tax Credit Services**. -1. In the app's overview page, find the **Manage** section and select **Users and groups**. -1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. -1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. -1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. -1. In the **Add Assignment** dialog, click the **Assign** button. --## Configure R and D Tax Credit Services SSO --To configure single sign-on on **R and D Tax Credit Services** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [R and D Tax Credit Services support team](https://www.pwc.com/us/en/services/tax/specialized-tax/research-development-credit.html). 
They set this setting to have the SAML SSO connection set properly on both sides. --### Create R and D Tax Credit Services test user --In this section, you create a user called Britta Simon in R and D Tax Credit Services. Work with [R and D Tax Credit Services support team](https://www.pwc.com/us/en/services/tax/specialized-tax/research-development-credit.html) to add the users in the R and D Tax Credit Services platform. Users must be created and activated before you use single sign-on. --## Test SSO --In this section, you test your Azure AD single sign-on configuration with following options. --* Click on **Test this application** in Azure portal. This will redirect to R and D Tax Credit Services Sign-on URL where you can initiate the login flow. --* Go to R and D Tax Credit Services Sign-on URL directly and initiate the login flow from there. --* You can use Microsoft My Apps. When you click the R and D Tax Credit Services tile in the My Apps, this will redirect to R and D Tax Credit Services Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). --## Next steps --Once you configure R and D Tax Credit Services you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad). |
active-directory | Sap Analytics Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-analytics-cloud-provisioning-tutorial.md | Title: 'Tutorial: Configure SAP Analytics Cloud for automatic user provisioning with Azure Active Directory' -description: Learn how to automatically provision and de-provision user accounts from Azure AD to SAP Analytics Cloud. + Title: 'Tutorial: Configure SAP Analytics Cloud for automatic user provisioning with Microsoft Entra ID' +description: Learn how to automatically provision and deprovision user accounts from Microsoft Entra ID to SAP Analytics Cloud. documentationcenter: '' -This tutorial describes the steps you need to perform in both SAP Analytics Cloud and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [SAP Analytics Cloud](https://www.sapanalytics.cloud/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +This tutorial describes the steps you need to perform in both SAP Analytics Cloud and Microsoft Entra ID to configure automatic user provisioning. When configured, Microsoft Entra ID automatically provisions and deprovisions users and groups to [SAP Analytics Cloud](https://www.sapanalytics.cloud/) using the Microsoft Entra ID Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md). +> [!NOTE] +> We are working with SAP to deploy a new gallery application that provides a single point to configure your SAP Analytics Cloud application. ## Capabilities supported > [!div class="checklist"] > * Create users in SAP Analytics Cloud > * Remove users in SAP Analytics Cloud when they do not require access anymore-> * Keep user attributes synchronized between Azure AD and SAP Analytics Cloud +> * Keep user attributes synchronized between Microsoft Entra ID and SAP Analytics Cloud > * [Single sign-on](sapboc-tutorial.md) to SAP Analytics Cloud (recommended) ## Prerequisites The scenario outlined in this tutorial assumes that you already have the following prerequisites: -* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) -* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* [A Microsoft Entra ID tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Microsoft Entra ID with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). * A SAP Analytics Cloud tenant * A user account on SAP Identity Provisioning admin console with Admin permissions. Make sure you have access to the proxy systems in the Identity Provisioning admin console. If you don't see the **Proxy Systems** tile, create an incident for component **BC-IAM-IPS** to request access to this tile. 
* An OAuth client with authorization grant Client Credentials in SAP Analytics Cloud. To learn how, see: [Managing OAuth Clients and Trusted Identity Providers](https://help.sap.com/viewer/00f68c2e08b941f081002fd3691d86a7/release/en-US/4f43b54398fc4acaa5efa32badfe3df6.html) > [!NOTE]-> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud. +> This integration is also available to use from Microsoft Entra ID US Government Cloud environment. You can find this application in the Microsoft Entra ID US Government Cloud Application Gallery and configure it in the same way as you do from public cloud. ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). -3. Determine what data to [map between Azure AD and SAP Analytics Cloud](../app-provisioning/customize-application-attributes.md). --## Step 2. Configure SAP Analytics Cloud to support provisioning with Azure AD --1. Sign into the SAP Identity Provisioning admin console with your administrator account and then select **Proxy Systems**. --  --2. Select **Properties**. --  --3. Copy the **URL** and append `/api/v1/scim` to the URL. Save this for later to use in the **Tenant URL** field. --  --4. Use [POSTMAN](https://www.postman.com/) to perform a POST HTTPS call to the address: `<Token URL>?grant_type=client_credentials` where `Token URL` is the URL in the **OAuth2TokenServiceURL** field. This step is needed to generate an access token to be used in the Secret Token field when configuring automatic provisioning. --  --5. In Postman, use **Basic Authentication**, and set the OAuth client ID as the user and the secret as the password. This call returns an access token. Keep this copied for later to use in the **Secret Token** field. --  --## Step 3. Add SAP Analytics Cloud from the Azure AD application gallery --Add SAP Analytics Cloud from the Azure AD application gallery to start managing provisioning to SAP Analytics Cloud. If you have previously setup SAP Analytics Cloud for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). --## Step 4. Define who will be in scope for provisioning --The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). --* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. 
When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). --* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. ---## Step 5. Configure automatic user provisioning to SAP Analytics Cloud --This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. --### To configure automatic user provisioning for SAP Analytics Cloud in Azure AD: --1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. --  --2. In the applications list, select **SAP Analytics Cloud**. --  --3. Select the **Provisioning** tab. --  --4. Set the **Provisioning Mode** to **Automatic**. --  --5. Under the **Admin Credentials** section, input the tenant URL value retrieved earlier in **Tenant URL**. Input the access token value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to SAP Analytics Cloud. If the connection fails, ensure your SAP Analytics Cloud account has Admin permissions and try again. --  --6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. --  --7. Select **Save**. --8. Under the **Mappings** section, select **Provision Azure Active Directory Users**. --9. Review the user attributes that are synchronized from Azure AD to SAP Analytics Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SAP Analytics Cloud for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the SAP Analytics Cloud API supports filtering users based on that attribute. Select the **Save** button to commit any changes. -- |Attribute|Type|Supported for filtering| - |||| - |userName|String|✓| - |name.givenName|String| - |name.familyName|String| - |active|Boolean| - |emails[type eq "work"].value|String| - |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String| +2. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +3. Determine what data to [map between Microsoft Entra ID and SAP Analytics Cloud](../app-provisioning/customize-application-attributes.md). -10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +## Step 2. Configure SAP Analytics Cloud to support SSO with Microsoft Entra ID -11. To enable the Azure AD provisioning service for SAP Analytics Cloud, change the **Provisioning Status** to **On** in the **Settings** section. +Follow the set of instructions available for our SAP Cloud analytics SSO [tutorial](sapboc-tutorial.md) -  -12. Define the users and/or groups that you would like to provision to SAP Analytics Cloud by choosing the desired values in **Scope** in the **Settings** section. +## Step 3. 
Create Microsoft Entra ID Groups for your SAP business roles -  +Create Microsoft Entra ID groups for your SAP business roles -13. When you are ready to provision, click **Save**. -  +## Step 4. Map the created groups to your SAP business roles -This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. +Go to [SAP Help Portal](https://help.sap.com/docs/identity-provisioning/identity-provisioning/microsoft-azure-active-directory) to map the created groups to your business roles. If you get stuck, you can get further guidance from [SAP Blogs](https://blogs.sap.com/2022/02/04/provision-users-from-microsoft-azure-ad-to-sap-cloud-identity-services-identity-authentication/) -## Step 6. Monitor your deployment -Once you've configured provisioning, use the following resources to monitor your deployment: -1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully -2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion -3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). +## Step 5. Assign Users as members of the Microsoft Entra ID Groups -## Additional resources +Assign users as members of the Microsoft Entra ID Groups and give them app role assignments -* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) -* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* Start small. Test with a small set of users and groups before rolling out to everyone. -## Next steps +Check the users have the right access in SAP downstream targets and when they sign in, they have the right roles. -* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
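For Step 3 and Step 5 of the SAP Analytics Cloud provisioning tutorial, you can create the Microsoft Entra ID groups that represent your SAP business roles and populate them by script as well as in the portal. The following Microsoft Graph request is a minimal sketch that creates one security group; the display name, description, and mail nickname are example values, not names that SAP Analytics Cloud requires.

```msgraph-interactive
POST https://graph.microsoft.com/v1.0/groups
Content-Type: application/json

{
  "displayName": "SAC-BI-Admin",
  "description": "Example group mapped to an SAP Analytics Cloud business role",
  "mailEnabled": false,
  "mailNickname": "sac-bi-admin",
  "securityEnabled": true
}
```

You can then add each user to the group with `POST https://graph.microsoft.com/v1.0/groups/{group-id}/members/$ref` and an `@odata.id` reference to the user object.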
active-directory | Sso For Jama Connect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sso-for-jama-connect-tutorial.md | + + Title: Azure Active Directory SSO integration with SSO for Jama Connect® +description: Learn how to configure single sign-on between Azure Active Directory and SSO for Jama Connect®. ++++++++ Last updated : 07/11/2023+++++# Azure Active Directory SSO integration with SSO for Jama Connect® ++In this article, you learn how to integrate SSO for Jama Connect® with Azure Active Directory (Azure AD). Jama Software®’s industry-leading platform helps teams manage requirements with Live Traceability™ through the systems development process for proven cycle time reduction and quality improvement. When you integrate SSO for Jama Connect® with Azure AD, you can: ++* Control in Azure AD who has access to SSO for Jama Connect®. +* Enable your users to be automatically signed-in to SSO for Jama Connect® with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for SSO for Jama Connect® in a test environment. SSO for Jama Connect® supports both **SP** and **IDP** initiated single sign-on and also **Just In Time** user provisioning. ++## Prerequisites ++To integrate Azure Active Directory with SSO for Jama Connect®, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* SSO for Jama Connect® single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the SSO for Jama Connect® application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add SSO for Jama Connect® from the Azure AD gallery ++Add SSO for Jama Connect® from the Azure AD application gallery to configure single sign-on with SSO for Jama Connect®. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **SSO for Jama Connect®** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. 
On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `urn:auth0:<First_Part_of_Auth0_Domain>:<TenantID>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://<Auth0_Domain>/login/callback?connection=<TenantID>` ++1. Perform the following step, if you wish to configure the application in **SP** initiated mode: ++ In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<Tenant_Name>.jamacloud.com/login.req` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [SSO for Jama Connect® support team](mailto:support@jamasoftware.zendesk.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ++  ++## Configure SSO for Jama Connect® SSO ++To configure single sign-on on **SSO for Jama Connect®** side, you need to send the **App Federation Metadata Url** to [SSO for Jama Connect® support team](mailto:support@jamasoftware.zendesk.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create SSO for Jama Connect® test user ++In this section, a user called B.Simon is created in SSO for Jama Connect®. SSO for Jama Connect® supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in SSO for Jama Connect®, a new one is created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++#### SP initiated: ++* Click on **Test this application** in Azure portal. This will redirect to SSO for Jama Connect® Sign-on URL where you can initiate the login flow. ++* Go to SSO for Jama Connect® Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in Azure portal and you should be automatically signed in to the SSO for Jama Connect® for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the SSO for Jama Connect® tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the SSO for Jama Connect® for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure SSO for Jama Connect® you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Worthix App Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/worthix-app-tutorial.md | + + Title: Azure Active Directory SSO integration with Worthix App +description: Learn how to configure single sign-on between Azure Active Directory and Worthix App. ++++++++ Last updated : 07/11/2023+++++# Azure Active Directory SSO integration with Worthix App ++In this article, you'll learn how to integrate Worthix App with Azure Active Directory (Azure AD). Worthix App is a Customer Value Alignment platform that uses AI to engage your company's customers in dialogue and collect their perceptions of your company's value. When you integrate Worthix App with Azure AD, you can: ++* Control in Azure AD who has access to Worthix App. +* Enable your users to be automatically signed-in to Worthix App with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Worthix App in a test environment. Worthix App supports **IDP** initiated single sign-on and **Just In Time** user provisioning. ++## Prerequisites ++To integrate Azure Active Directory with Worthix App, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Worthix App single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Worthix App application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Worthix App from the Azure AD gallery ++Add Worthix App from the Azure AD application gallery to configure single sign-on with Worthix App. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Worthix App** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. 
In the **Identifier** textbox, type a value using the following pattern: + `urn:auth0:production-worthix:<Company_Name>Saml` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://production-worthix.us.auth0.com/login/callback?connection=<Company_Name>Saml` + + > [!NOTE] + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Worthix App support team](mailto:support@worthix.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up Worthix App** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure Worthix App SSO ++To configure single sign-on on **Worthix App** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Worthix App support team](mailto:support@worthix.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Worthix App test user ++In this section, a user called B.Simon is created in Worthix App. Worthix App supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Worthix App, a new one is created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal and you should be automatically signed in to the Worthix App for which you set up the SSO. ++* You can use Microsoft My Apps. When you click the Worthix App tile in the My Apps, you should be automatically signed in to the Worthix App for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Worthix App you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
aks | Api Server Authorized Ip Ranges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-authorized-ip-ranges.md | az aks create \ > [!NOTE] > You should add these ranges to an allow list: >-> - The firewall public IP address +> - The cluster egress IP address (firewall, NAT gateway, or other address, depending on your [outbound type][egress-outboundtype]). > - Any range that represents networks that you'll administer the cluster from > > The upper limit for the number of IP ranges you can specify is 200. In this article, you enabled API server authorized IP ranges. This approach is o [az-network-public-ip-list]: /cli/azure/network/public-ip#az_network_public_ip_list [concepts-clusters-workloads]: concepts-clusters-workloads.md [concepts-security]: concepts-security.md+[egress-outboundtype]: egress-outboundtype.md [install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-cluster-security]: operator-best-practices-cluster-security.md [route-tables]: ../virtual-network/manage-route-table.md [standard-sku-lb]: load-balancer-standard.md-[azure-devops-allowed-network-cfg]: /azure/devops/organizations/security/allow-list-ip-url +[azure-devops-allowed-network-cfg]: /azure/devops/organizations/security/allow-list-ip-url |
api-management | Validate Azure Ad Token Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md | The `validate-azure-ad-token` policy enforces the existence and validity of a JS | Element | Description | Required | | - | -- | -- |-| audiences | Contains a list of acceptable audience claims that can be present on the token. If multiple audience values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one audience must be specified. | No | -| backend-application-ids | Contains a list of acceptable backend application IDs. This is only required in advanced cases for the configuration of options and can generally be removed. | No | -| client-application-ids | Contains a list of acceptable client application IDs. If multiple application-id elements are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one application-id must be specified. | Yes | -| required-claims | Contains a list of `claim` elements for claim values expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. | No | +| audiences | Contains a list of acceptable audience claims that can be present on the token. If multiple audience values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one audience must be specified. Policy expressions are allowed. | No | +| backend-application-ids | Contains a list of acceptable backend application IDs. This is only required in advanced cases for the configuration of options and can generally be removed. Policy expressions aren't allowed. | No | +| client-application-ids | Contains a list of acceptable client application IDs. If multiple application-id elements are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one application-id must be specified. Policy expressions aren't allowed. | Yes | +| required-claims | Contains a list of `claim` elements for claim values expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. Policy expressions are allowed. | No | ### claim attributes |
api-management | Validate Jwt Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md | The `validate-jwt` policy enforces existence and validity of a supported JSON we | Attribute | Description | Required | Default | | - | | -- | | | id | String. Identifier used to match `kid` claim presented in JWT. | No | N/A |-| certificate-id | Identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management, used to specify the public key to verify an RS256 signed token. | No | N/A | -| n | Modulus of the public key used to verify the issuer of an RS256 signed token. Must be specified with the value of the exponent `e`.| No | N/A| -| e | Exponent of the public key used to verify the issuer an RS256 signed token. Must be specified with the value of the modulus `n`. | No | N/A| +| certificate-id | Identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management, used to specify the public key to verify an RS256 signed token. | No | N/A | +| n | Modulus of the public key used to verify the issuer of an RS256 signed token. Must be specified with the value of the exponent `e`. Policy expressions aren't allowed. | No | N/A| +| e | Exponent of the public key used to verify the issuer of an RS256 signed token. Must be specified with the value of the modulus `n`. Policy expressions aren't allowed. | No | N/A| |
app-service | Deploy Staging Slots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md | For more information, see [New-AzWebAppSlot](/powershell/module/az.websites/new- The new deployment slot has no content, even if you clone the settings from a different slot. For example, you can [publish to this slot with Git](./deploy-local-git.md). You can deploy to the slot from a different repository branch or a different repository. The [Get publish profile from Azure App Service](/visualstudio/azure/how-to-get-publish-profile-from-azure-app-service) article explains how to get the information required to deploy to the slot. The profile can be imported into Visual Studio to deploy content to the slot. -The slot's URL has the format `http://sitename-slotname.azurewebsites.net`. To keep the URL length within necessary DNS limits, the site name is truncated at 40 characters, the slot name is truncated at 19 characters, and 4 extra random characters are appended to ensure the resulting domain name is unique. +The slot's URL has the format `http://sitename-slotname.azurewebsites.net`. To keep the URL length within necessary DNS limits, the combined site name and slot name must be fewer than 59 characters. <a name="AboutConfiguration"></a> |
application-gateway | Configuration Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md | You may block all other incoming traffic by using a deny-all rule. **Outbound rules** -1. **Outbound to the Internet** - Allow outbound traffic to the Internet for all destinations. This rule is created by default for [network security group](../virtual-network/network-security-groups-overview.md), and you must not override it with a manual Deny rule to ensure smooth operations of your application gateway. +1. **Outbound to the Internet** - Allow outbound traffic to the Internet for all destinations. This rule is created by default for [network security group](../virtual-network/network-security-groups-overview.md), and you must not override it with a manual Deny rule to ensure smooth operations of your application gateway. Outbound NSG rules that deny any outbound connectivity must not be created. | Source | Source ports | Destination | Destination ports | Protocol | Access | ||||||| |
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | This article highlights capabilities, features, and enhancements recently releas ### Image tag -`v1.20.0_2023-07-11` +`v1.21.0_2023-07-11` For complete release version information, review [Version log](version-log.md#july-11-2023). |
azure-functions | Functions Bindings Event Grid Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md | $message = $Request.Query.Message Push-OutputBinding -Name outputEvent -Value  @{     id = "1"-    EventType = "testEvent" -    Subject = "testapp/testPublish" -    EventTime = "2020-08-27T21:03:07+00:00" -    Data = @{ +    eventType = "testEvent" +    subject = "testapp/testPublish" +    eventTime = "2020-08-27T21:03:07+00:00" +    data = @{         Message = $message     }-    DataVersion = "1.0" +    dataVersion = "1.0" } Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ |
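For context, the `outputEvent` binding that this PowerShell sample writes to is declared in the function's *function.json*. A minimal sketch of just that output binding is shown below; the two app setting names are placeholders for whichever application settings hold your custom topic endpoint and access key, and the HTTP trigger and `Response` bindings used by the sample are omitted:

```json
{
  "type": "eventGrid",
  "direction": "out",
  "name": "outputEvent",
  "topicEndpointUri": "MyEventGridTopicUriSetting",
  "topicKeySetting": "MyEventGridTopicKeySetting"
}
```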
azure-functions | Functions Bindings Register | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md | An extension bundle reference is defined by the `extensionBundle` section in a * [!INCLUDE [functions-extension-bundles-json](../../includes/functions-extension-bundles-json.md)] -The following table lists the currently available versions of the default *Microsoft.Azure.Functions.ExtensionBundle* bundle and links to the extensions they include. +The following table lists the currently available version ranges of the default *Microsoft.Azure.Functions.ExtensionBundle* bundles and links to the extensions they include. | Bundle version | Version in host.json | Included extensions | | | | |-| 1.x | `[1.*, 2.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/v1.x/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle | -| 2.x | `[2.*, 3.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/v2.x/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle | -| 3.x | `[3.3.0, 4.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/4f5934a18989353e36d771d0a964f14e6cd17ac3/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle<sup>1</sup> | -| 4.x | `[4.0.0, 5.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/v4.x/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle<sup>1</sup> | --<sup>1</sup> Version 4.x of the extension bundle currently doesn't include the [Web PubSub bindings](https://learn.microsoft.com/azure/azure-web-pubsub/reference-functions-bindings?tabs=csharp#add-to-your-functions-app ). If your app requires Web PubSub, you'll need to continue using the 3.x version for now. -+| 1.x | `[1.*, 2.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/v1.x/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle. | +| 2.x | `[2.*, 3.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/v2.x/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle. | +| 3.x | `[3.3.0, 4.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/4f5934a18989353e36d771d0a964f14e6cd17ac3/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle. | +| 4.x | `[4.0.0, 5.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/v4.x/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle. | > [!NOTE]-> Even though host.json supports custom ranges for `version`, you should use a version value from this table. +> Even though host.json supports custom ranges for `version`, you should use a version range value from this table, such as `[3.3.0, 4.0.0)`. ## Explicitly install extensions |
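For example, a minimal *host.json* that pins a function app to the 4.x bundle range could look like the following sketch; swap in whichever range from the table matches the extensions your app needs:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.0.0, 5.0.0)"
  }
}
```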
azure-functions | Functions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md | In Azure Functions, specific functions share a few core technical concepts and c This article assumes that you've already read the [Azure Functions overview](functions-overview.md). ## Function code-A *function* is the primary concept in Azure Functions. A function contains two important pieces - your code, which can be written in a variety of languages, and some config, the function.json file. For compiled languages, this config file is generated automatically from annotations in your code. For scripting languages, you must provide the config file yourself. +A *function* is the primary concept in Azure Functions. A function contains two important pieces - your code, which can be written in various languages, and some config, the function.json file. For compiled languages, this config file is generated automatically from annotations in your code. For scripting languages, you must provide the config file yourself. The function.json file defines the function's trigger, bindings, and other configuration settings. Every function has one and only one trigger. The runtime uses this config file to determine the events to monitor and how to pass data into and return data from a function execution. The following is an example function.json file. The `bindings` property is where you configure both triggers and bindings. Each | name | Function identifier.<br><br>For example, `myQueue`. | string | The name that is used for the bound data in the function. For C#, this is an argument name; for JavaScript, it's the key in a key/value list. | ## Function app-A function app provides an execution context in Azure in which your functions run. As such, it is the unit of deployment and management for your functions. A function app is composed of one or more individual functions that are managed, deployed, and scaled together. All of the functions in a function app share the same pricing plan, deployment method, and runtime version. Think of a function app as a way to organize and collectively manage your functions. To learn more, see [How to manage a function app](functions-how-to-use-azure-function-app-settings.md). +A function app provides an execution context in Azure in which your functions run. As such, it's the unit of deployment and management for your functions. A function app is composed of one or more individual functions that are managed, deployed, and scaled together. All of the functions in a function app share the same pricing plan, deployment method, and runtime version. Think of a function app as a way to organize and collectively manage your functions. To learn more, see [How to manage a function app](functions-how-to-use-azure-function-app-settings.md). > [!NOTE] > All functions in a function app must be authored in the same language. In [previous versions](functions-versions.md) of the Azure Functions runtime, this wasn't required. When multiple triggering events occur faster than a single-threaded function run ## Functions runtime versioning -You can configure the version of the Functions runtime using the `FUNCTIONS_EXTENSION_VERSION` app setting. For example, the value "~3" indicates that your function app will use 3.x as its major version. Function apps are upgraded to each new minor version as they are released. 
For more information, including how to view the exact version of your function app, see [How to target Azure Functions runtime versions](set-runtime-version.md). +You can configure the version of the Functions runtime using the `FUNCTIONS_EXTENSION_VERSION` app setting. For example, the value "~4" indicates that your function app uses 4.x as its major version. Function apps are upgraded to each new minor version as they're released. For more information, including how to view the exact version of your function app, see [How to target Azure Functions runtime versions](set-runtime-version.md). ## Repositories The code for Azure Functions is open source and stored in GitHub repositories: The code for Azure Functions is open source and stored in GitHub repositories: * [Azure WebJobs SDK Extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/) ## Bindings-Here is a table of all supported bindings. +Here's a table of all supported bindings. [!INCLUDE [dynamic compute](../../includes/functions-bindings.md)] Having issues with errors coming from the bindings? Review the [Azure Functions ## Connections -Your function project references connection information by name from its configuration provider. It does not directly accept the connection details, allowing them to be changed across environments. For example, a trigger definition might include a `connection` property. This might refer to a connection string, but you cannot set the connection string directly in a `function.json`. Instead, you would set `connection` to the name of an environment variable that contains the connection string. +Your function project references connection information by name from its configuration provider. It doesn't directly accept the connection details, allowing them to be changed across environments. For example, a trigger definition might include a `connection` property. This might refer to a connection string, but you can't set the connection string directly in a `function.json`. Instead, you would set `connection` to the name of an environment variable that contains the connection string. The default configuration provider uses environment variables. These might be set by [Application Settings](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) when running in the Azure Functions service, or from the [local settings file](functions-develop-local.md#local-settings-file) when developing locally. When the connection name resolves to a single exact value, the runtime identifie However, a connection name can also refer to a collection of multiple configuration items, useful for configuring [identity-based connections](#configure-an-identity-based-connection). Environment variables can be treated as a collection by using a shared prefix that ends in double underscores `__`. The group can then be referenced by setting the connection name to this prefix. -For example, the `connection` property for an Azure Blob trigger definition might be "Storage1". As long as there is no single string value configured by an environment variable named "Storage1", an environment variable named `Storage1__blobServiceUri` could be used to inform the `blobServiceUri` property of the connection. The connection properties are different for each service. Refer to the documentation for the component that uses the connection. +For example, the `connection` property for an Azure Blob trigger definition might be `Storage1`. 
As long as there's no single string value configured by an environment variable named `Storage1`, an environment variable named `Storage1__blobServiceUri` could be used to inform the `blobServiceUri` property of the connection. The connection properties are different for each service. Refer to the documentation for the component that uses the connection. > [!NOTE] > When using [Azure App Configuration](../azure-app-configuration/quickstart-azure-functions-csharp.md) or [Key Vault](../key-vault/general/overview.md) to provide settings for Managed Identity connections, setting names should use a valid key separator such as `:` or `/` in place of the `__` to ensure names are resolved correctly. For example, the `connection` property for an Azure Blob trigger definition migh ### Configure an identity-based connection -Some connections in Azure Functions can be configured to use an identity instead of a secret. Support depends on the extension using the connection. In some cases, a connection string may still be required in Functions even though the service to which you are connecting supports identity-based connections. For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). +Some connections in Azure Functions can be configured to use an identity instead of a secret. Support depends on the extension using the connection. In some cases, a connection string may still be required in Functions even though the service to which you're connecting supports identity-based connections. For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). +++> [!NOTE] +> When running in a Consumption or Elastic Premium plan, your app uses the [`WEBSITE_AZUREFILESCONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings when connecting to Azure Files on the storage account used by your function app. Azure Files doesn't support using managed identity when accessing the file share. For more information, see [Azure Files supported authentication scenarios](../storage/files/storage-files-active-directory-overview.md#supported-authentication-scenarios) + The following components support identity-based connections: An identity-based connection for an Azure service accepts the following common p | Property | Environment variable template | Description | |||||-| Token Credential | `<CONNECTION_NAME_PREFIX>__credential` | Defines how a token should be obtained for the connection. This setting should be set to "managedidentity" if your deployed Azure Function intends to use managed identity authentication. This value is only valid when a managed identity is available in the hosting environment. | -| Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to "managedidentity", this property can be set to specify the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. It is invalid to specify both a Resource ID and a client ID. If not specified, the system-assigned identity is used. 
This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` should not be set. | -| Resource ID | `<CONNECTION_NAME_PREFIX>__managedIdentityResourceId` | When `credential` is set to "managedidentity", this property can be set to specify the resource Identifier to be used when obtaining a token. The property accepts a resource identifier corresponding to the resource ID of the user-defined managed identity. It is invalid to specify both a resource ID and a client ID. If neither are specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` should not be set. +| Token Credential | `<CONNECTION_NAME_PREFIX>__credential` | Defines how a token should be obtained for the connection. This setting should be set to `managedidentity` if your deployed Azure Function intends to use managed identity authentication. This value is only valid when a managed identity is available in the hosting environment. | +| Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to `managedidentity`, this property can be set to specify the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. It is invalid to specify both a Resource ID and a client ID. If not specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set. | +| Resource ID | `<CONNECTION_NAME_PREFIX>__managedIdentityResourceId` | When `credential` is set to `managedidentity`, this property can be set to specify the resource Identifier to be used when obtaining a token. The property accepts a resource identifier corresponding to the resource ID of the user-defined managed identity. It's invalid to specify both a resource ID and a client ID. If neither are specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set. Additional options may be supported for a given connection type. Refer to the documentation for the component making the connection. Additional options may be supported for a given connection type. Refer to the do > [!NOTE] > Local development with identity-based connections requires updated versions of the [Azure Functions Core Tools](./functions-run-local.md). You can check your currently installed version by running `func -v`. For Functions v3, use version `3.0.3904` or later. For Functions v4, use version `4.0.3904` or later. -When you are running your function project locally, the above configuration tells the runtime to use your local developer identity. The connection attempts to get a token from the following locations, in order: +When you're running your function project locally, the above configuration tells the runtime to use your local developer identity. The connection attempts to get a token from the following locations, in order: - A local cache shared between Microsoft applications - The current user context in Visual Studio - The current user context in Visual Studio Code - The current user context in the Azure CLI -If none of these options are successful, an error will occur. 
+If none of these options are successful, an error occurs. -Your identity may already have some role assignments against Azure resources used for development, but those roles may not provide the necessary data access. Management roles like [Owner](../role-based-access-control/built-in-roles.md#owner) are not sufficient. Double-check what permissions are required for connections for each component, and make sure that you have them assigned to yourself. +Your identity may already have some role assignments against Azure resources used for development, but those roles may not provide the necessary data access. Management roles like [Owner](../role-based-access-control/built-in-roles.md#owner) aren't sufficient. Double-check what permissions are required for connections for each component, and make sure that you have them assigned to yourself. In some cases, you may wish to specify use of a different identity. You can add configuration properties for the connection that point to the alternate identity based on a client ID and client Secret for an Azure Active Directory service principal. **This configuration option is not supported when hosted in the Azure Functions service.** To use an ID and secret on your local machine, define the connection with the following additional properties: In some cases, you may wish to specify use of a different identity. You can add | Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | The client (application) ID of an app registration in the tenant. | | Client secret | `<CONNECTION_NAME_PREFIX>__clientSecret` | A client secret that was generated for the app registration. | -Here is an example of `local.settings.json` properties required for identity-based connection to Azure Blobs: +Here's an example of `local.settings.json` properties required for identity-based connection to Azure Blobs: ```json { Here is an example of `local.settings.json` properties required for identity-bas #### Connecting to host storage with an identity -The Azure Functions host uses the "AzureWebJobsStorage" connection for core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This can be configured to leverage an identity as well. +The Azure Functions host uses the `AzureWebJobsStorage` connection for core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This can be configured to use an identity as well. > [!CAUTION]-> Other components in Functions rely on "AzureWebJobsStorage" for default behaviors. You should not move it to an identity-based connection if you are using older versions of extensions that do not support this type of connection, including triggers and bindings for Azure Blobs, Event Hubs, and Durable Functions. Similarly, `AzureWebJobsStorage` is used for deployment artifacts when using server-side build in Linux Consumption, and if you enable this, you will need to deploy via [an external deployment package](run-functions-from-deployment-package.md). +> Other components in Functions rely on `AzureWebJobsStorage` for default behaviors. You should not move it to an identity-based connection if you are using older versions of extensions that do not support this type of connection, including triggers and bindings for Azure Blobs, Event Hubs, and Durable Functions. 
Similarly, `AzureWebJobsStorage` is used for deployment artifacts when using server-side build in Linux Consumption, and if you enable this, you will need to deploy via [an external deployment package](run-functions-from-deployment-package.md). >-> In addition, some apps reuse "AzureWebJobsStorage" for other storage connections in their triggers, bindings, and/or function code. Make sure that all uses of "AzureWebJobsStorage" are able to use the identity-based connection format before changing this connection from a connection string. +> In addition, some apps reuse `AzureWebJobsStorage` for other storage connections in their triggers, bindings, and/or function code. Make sure that all uses of `AzureWebJobsStorage` are able to use the identity-based connection format before changing this connection from a connection string. -To use an identity-based connection for "AzureWebJobsStorage", configure the following app settings: +To use an identity-based connection for `AzureWebJobsStorage`, configure the following app settings: | Setting | Description | Example value | |--|--|| To use an identity-based connection for "AzureWebJobsStorage", configure the fol [Common properties for identity-based connections](#common-properties-for-identity-based-connections) may also be set as well. -If you are configuring "AzureWebJobsStorage" using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The endpoints for each storage service will be inferred for this account. This will not work if the storage account is in a sovereign cloud or has a custom DNS. +If you're configuring `AzureWebJobsStorage` using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The endpoints for each storage service will be inferred for this account. This won't work if the storage account is in a sovereign cloud or has a custom DNS. | Setting | Description | Example value | |--|--||-| `AzureWebJobsStorage__accountName` | The account name of a storage account, valid only if the account is not in a sovereign cloud and does not have a custom DNS. This syntax is unique to "AzureWebJobsStorage" and cannot be used for other identity-based connections. | <storage_account_name> | +| `AzureWebJobsStorage__accountName` | The account name of a storage account, valid only if the account isn't in a sovereign cloud and doesn't have a custom DNS. This syntax is unique to `AzureWebJobsStorage` and can't be used for other identity-based connections. | <storage_account_name> | [!INCLUDE [functions-azurewebjobsstorage-permissions](../../includes/functions-azurewebjobsstorage-permissions.md)] |
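As a rough sketch, the shorthand setting from the table above could be used in a *local.settings.json* for local development like this; the worker runtime value and account name are placeholders, and the account is assumed to be in global Azure with default DNS:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureWebJobsStorage__accountName": "<storage_account_name>"
  }
}
```

When deployed to Azure, the same `AzureWebJobsStorage__accountName` value would be set as an application setting on the function app instead.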
azure-maps | Map Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-events.md | Title: Handle map events | Microsoft Azure Maps + Title: Handle map events + description: Learn which events are fired when users interact with maps. View a list of all supported map events. See how to use the Azure Maps Web SDK to handle events. -# Interact with the map +# Handle map events -This article shows you how to use [map events class](/javascript/api/azure-maps-control/atlas.map#events). The property highlight events on the map and on different layers of the map. You can also highlight events when you interact with an HTML marker. +This article shows you how to use [map events class]. The property highlight events on the map and on different layers of the map. You can also highlight events when you interact with an HTML marker. ## Interact with the map The following table lists all supported map class events. | `boxzoomend` | Fired when a "box zoom" interaction ends.| | `boxzoomstart` | Fired when a "box zoom" interaction starts.| | `click` | Fired when a pointing device is pressed and released at the same point on the map.|-| `close` | Fired when the popup is closed manually or programatically.| +| `close` | Fired when the popup is closed manually or programmatically.| | `contextmenu` | Fired when the right button of the mouse is clicked.| | `data` | Fired when any map data loads or changes. | | `dataadded` | Fired when shapes are added to the `DataSource`.| The following table lists all supported map class events. | `error` | Fired when an error occurs.| | `idle` | <p>Fired after the last frame rendered before the map enters an "idle" state:<ul><li>No camera transitions are in progress.</li><li>All currently requested tiles have loaded.</li><li>All fade/transition animations have completed.</li></ul></p>| | `keydown` | Fired when a key is pressed down.|-| `keypress` | Fired when a key that produces a typable character (an ANSI key) is pressed.| +| `keypress` | Fired when a key that produces a typeable character (an ANSI key) is pressed.| | `keyup` | Fired when a key is released.| | `layeradded` | Fired when a layer is added to the map.| | `layerremoved` | Fired when a layer is removed from the map.| The following table lists all supported map class events. | `move` | Fired repeatedly during an animated transition from one view to another, as the result of either user interaction or methods.| | `moveend` | Fired just after the map completes a transition from one view to another, as the result of either user interaction or methods.| | `movestart` | Fired just before the map begins a transition from one view to another, as the result of either user interaction or methods.|-| `open` | Fired when the popup is opened manually or programatically.| +| `open` | Fired when the popup is opened manually or programmatically.| | `pitch` | Fired whenever the map's pitch (tilt) changes as the result of either user interaction or methods.| | `pitchend` | Fired immediately after the map's pitch (tilt) finishes changing as the result of either user interaction or methods.| | `pitchstart` | Fired whenever the map's pitch (tilt) begins a change as the result of either user interaction or methods.| The following table lists all supported map class events. 
See the following articles for full code examples: > [!div class="nextstepaction"]-> [Using the Azure Maps Services module](./how-to-use-services-module.md) +> [Using the Azure Maps Services module] > [!div class="nextstepaction"]-> [Code samples](/samples/browse/?products=azure-maps) +> [Code samples] +[map events class]: /javascript/api/azure-maps-control/atlas.map#events [Map Events]: https://samples.azuremaps.com/map/map-events [Layer Events]: https://samples.azuremaps.com/symbol-layer/symbol-layer-events [HTML marker layer events]: https://samples.azuremaps.com/html-markers/html-marker-layer-events See the following articles for full code examples: [Map Events source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Map/Map%20Events/Map%20Events.html [Layer Events source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Symbol%20Layer/Symbol%20layer%20events/Symbol%20layer%20events.html [HTML marker layer events source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/HTML%20Markers/HTML%20marker%20layer%20events/HTML%20marker%20layer%20events.html+[Using the Azure Maps Services module]: how-to-use-services-module.md +[Code samples]: /samples/browse/?products=azure-maps |
azure-monitor | Java Standalone Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md | Sampling is based on request, which means that if a request is captured (sampled Sampling is also based on trace ID to help ensure consistent sampling decisions across different services. +Sampling only applies to logs emitted inside of a request. Logs that aren't inside of a request (for example, startup logs) are always collected by default. +If you want to sample those logs, you can use [Sampling overrides](./java-standalone-sampling-overrides.md). + ### Rate-limited sampling Starting from 3.4.0, rate-limited sampling is available and is now the default. |
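For reference, fixed-percentage sampling is configured in the `sampling` section of the *applicationinsights.json* configuration file; the following sketch uses a purely illustrative 10 percent value (rate-limited sampling uses a `requestsPerSecond` setting in the same section instead of `percentage`):

```json
{
  "sampling": {
    "percentage": 10
  }
}
```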
azure-monitor | Java Standalone Sampling Overrides | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md | If no sampling overrides match: * If this is the first span in the trace, then the [top-level sampling configuration](./java-standalone-config.md#sampling) is used.-* If this is not the first span in the trace, then the parent sampling decision is used. +* If this isn't the first span in the trace, then the parent sampling decision is used. ## Example: Suppress collecting telemetry for health checks To see the exact set of attributes captured by Application Insights Java for you [self-diagnostics level to debug](./java-standalone-config.md#self-diagnostics), and look for debug messages starting with the text "exporting span". -Note that only attributes set at the start of the span are available for sampling, -so attributes such as `http.status_code` which are captured later on cannot be used for sampling. +>[!Note] +> Only attributes set at the start of the span are available for sampling, so attributes such as `http.status_code`, which are captured later on, can't be used for sampling. ++## Troubleshooting ++If you use `regexp` and the sampling override doesn't work, try the `.*` regex first. If sampling now works, the problem is in your original regular expression; see the [Java regex documentation](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html). ++If the override doesn't work even with `.*`, there may be a syntax issue in your `applicationinsights.json` file. Check the Application Insights logs for warning messages. + |
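To make the troubleshooting advice concrete, here's a sketch of a `regexp`-based override that suppresses health-check requests. It assumes the preview override schema and that your request spans carry an `http.url` attribute; verify both the schema for your agent version and the actual attribute names your app emits by using the debug logging tip above:

```json
{
  "preview": {
    "sampling": {
      "overrides": [
        {
          "telemetryType": "request",
          "attributes": [
            {
              "key": "http.url",
              "value": "https?://[^/]+/health-check",
              "matchType": "regexp"
            }
          ],
          "percentage": 0
        }
      ]
    }
  }
}
```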
azure-monitor | Opencensus Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md | Our [Service Updates](https://azure.microsoft.com/updates/?service=application-i ## Next steps -* [Tracking incoming requests](./opencensus-python-dependency.md) -* [Tracking outgoing requests](./opencensus-python-request.md) -* [Application map](./app-map.md) -* [End-to-end performance monitoring](../app/tutorial-performance.md) +* To enable usage experiences, [enable web or browser user monitoring](javascript.md) +* [Track incoming requests](./opencensus-python-dependency.md). +* [Track outgoing requests](./opencensus-python-request.md). +* Check out the [Application map](./app-map.md). +* Learn how to do [End-to-end performance monitoring](../app/tutorial-performance.md). ### Alerts |
azure-monitor | Metrics Aggregation Explained | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-aggregation-explained.md | |
azure-monitor | Metrics Custom Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md | Title: Custom metrics in Azure Monitor (preview) description: Learn about custom metrics in Azure Monitor and how they're modeled.---+++ Last updated 06/01/2021 |
azure-monitor | Cost Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md | Usage on the Standalone pricing tier is billed by the ingested data volume. It's ### Per Node pricing tier -The Per Node pricing tier charges per monitored VM (node) on an hour granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. The Per Node pricing tier should only be used by customers with active Operations Management Suite (OMS) licenses. +The Per Node pricing tier charges per monitored VM (node) on an hour granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. The Per Node pricing tier is a legacy tier that's available only to existing subscriptions that fulfill the requirement for [legacy pricing tiers](#legacy-pricing-tiers). On your bill, the service will be **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Workspaces in the Per Node pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Per Node pricing tier don't support the use of [Basic Logs](basic-logs-configure.md). Usage is reported on three meters: |
azure-monitor | Logs Data Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md | Title: Log Analytics workspace data export in Azure Monitor -description: Log Analytics workspace data export in Azure Monitor lets you continuously export data per selected tables in your workspace. You can export to an Azure Storage account or Azure Event Hubs as it's collected. +description: Log Analytics workspace data export in Azure Monitor lets you continuously export data per selected tables in your workspace. You can export to an Azure Storage Account or Azure Event Hubs as it's collected. Last updated 06/29/2023 # Log Analytics workspace data export in Azure Monitor-Data export in a Log Analytics workspace lets you continuously export data per selected tables in your workspace. You can export to an Azure Storage account or Azure Event Hubs as the data arrives to an Azure Monitor pipeline. This article provides details on this feature and steps to configure data export in your workspaces. +Data export in a Log Analytics workspace lets you continuously export data per selected tables in your workspace. You can export to an Azure Storage Account or Azure Event Hubs as the data arrives to an Azure Monitor pipeline. This article provides details on this feature and steps to configure data export in your workspaces. ## Overview Data in Log Analytics is available for the retention period defined in your workspace. It's used in various experiences provided in Azure Monitor and Azure services. There are cases where you need to use other tools: -* **Tamper-protected store compliance:** Data can't be altered in Log Analytics after it's ingested, but it can be purged. Export to a storage account set with [immutability policies](../../storage/blobs/immutable-policy-configure-version-scope.md) to keep data tamper protected. -* **Integration with Azure services and other tools:** Export to event hubs as data arrives and is processed in Azure Monitor. -* **Long-term retention of audit and security data:** Export to a storage account in the workspace's region. Or you can replicate data to other regions by using any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) including GRS and GZRS. +* **Tamper-protected store compliance:** Data can't be altered in Log Analytics after it's ingested, but it can be purged. Export to a Storage Account set with [immutability policies](../../storage/blobs/immutable-policy-configure-version-scope.md) to keep data tamper protected. +* **Integration with Azure services and other tools:** Export to Event Hubs as data arrives and is processed in Azure Monitor. +* **Long-term retention of audit and security data:** Export to a Storage Account in the workspace's region. Or you can replicate data to other regions by using any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) including GRS and GZRS. -After you've configured data export rules in a Log Analytics workspace, new data for tables in rules is exported from the Azure Monitor pipeline to your storage account or event hubs as it arrives. +After you've configured data export rules in a Log Analytics workspace, new data for tables in rules is exported from the Azure Monitor pipeline to your Storage Account or Event Hubs as it arrives. [](media/logs-data-export/data-export-overview.png#lightbox) Data is exported without a filter. 
For example, when you configure a data export Log Analytics workspace data export continuously exports data that's sent to your Log Analytics workspace. There are other options to export data for particular scenarios: - Configure diagnostic settings in Azure resources. Logs are sent to a destination directly. This approach has lower latency compared to data export in Log Analytics.-- Schedule export of data based on a log query you define with the [Log Analytics query API](/rest/api/loganalytics/dataaccess/query/execute). Use Azure Data Factory, Azure Functions, or Azure Logic Apps to orchestrate queries in your workspace and export data to a destination. This method is similar to the data export feature, but you can use it to export historical data from your workspace by using filters and aggregation. This method is subject to [log query limits](../service-limits.md#log-analytics-workspaces) and isn't intended for scale. For more information, see [Export data from a Log Analytics workspace to a storage account by using Logic Apps](logs-export-logic-app.md).+- Schedule export of data based on a log query you define with the [Log Analytics query API](/rest/api/loganalytics/dataaccess/query/execute). Use Azure Data Factory, Azure Functions, or Azure Logic Apps to orchestrate queries in your workspace and export data to a destination. This method is similar to the data export feature, but you can use it to export historical data from your workspace by using filters and aggregation. This method is subject to [log query limits](../service-limits.md#log-analytics-workspaces) and isn't intended for scale. For more information, see [Export data from a Log Analytics workspace to a Storage Account by using Logic Apps](logs-export-logic-app.md). - One-time export to a local machine by using a PowerShell script. For more information, see [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport). ## Limitations Log Analytics workspace data export continuously exports data that's sent to you - Data export will gradually support more tables, but is currently limited to tables specified in the [supported tables](#supported-tables) section. - You can define up to 10 enabled rules in your workspace, each can include multiple tables. You can create more rules in workspace in disabled state. - Destinations must be in the same region as the Log Analytics workspace.-- The storage account must be unique across rules in the workspace.-- Table names can be 60 characters long when you're exporting to a storage account. They can be 47 characters when you're exporting to event hubs. Tables with longer names won't be exported.+- The Storage Account must be unique across rules in the workspace. +- Table names can be 60 characters long when you're exporting to a Storage Account. They can be 47 characters when you're exporting to Event Hubs. Tables with longer names won't be exported. - Export to Premium Storage Account isn't supported. ## Data completeness For more information, including the data export billing timeline, see [Azure Mon The data export destination must be available before you create export rules in your workspace. Destinations don't have to be in the same subscription as your workspace. When you use Azure Lighthouse, it's also possible to send data to destinations in another Azure Active Directory tenant. -You need to have write permissions to both workspace and destination to configure a data export rule on any table in a workspace. 
The shared access policy for the Event Hubs namespace defines the permissions that the streaming mechanism has. Streaming to event hubs requires manage, send, and listen permissions. To update the export rule, you must have the ListKey permission on that event hubs authorization rule. +You need to have write permissions to both workspace and destination to configure a data export rule on any table in a workspace. The shared access policy for the Event Hubs namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires manage, send, and listen permissions. To update the export rule, you must have the ListKey permission on that Event Hubs authorization rule. -### Storage account +### Storage Account -Don't use an existing storage account that has other non-monitoring data to better control access to the data and prevent reaching storage ingress rate limit, failures, and latency. +Avoid using existing Storage Account that has other non-monitoring data, to better control access to the data, prevent reaching storage ingress rate limit failures, and latency. -To send data to an immutable storage account, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this article, including enabling protected append blobs writes. +To send data to an immutable Storage Account, set the immutable policy for the Storage Account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this article, including enabling protected append blobs writes. -The Storage Account can't be Premium, must be StorageV1 or later, and located in the same region as your workspace. If you need to replicate your data to other storage accounts in other regions, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region), including GRS and GZRS. +The Storage Account can't be Premium, must be StorageV1 or later, and located in the same region as your workspace. If you need to replicate your data to other Storage Accounts in other regions, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region), including GRS and GZRS. -Data is sent to storage accounts as it reaches Azure Monitor and exported to destinations located in a workspace region. A container is created for each table in the storage account with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would send to a container named *am-SecurityEvent*. +Data is sent to Storage Accounts as it reaches Azure Monitor and exported to destinations located in a workspace region. A container is created for each table in the Storage Account with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would send to a container named *am-SecurityEvent*. -Blobs are stored in 5-minute folders in the following path structure: *WorkspaceResourceId=/subscriptions/subscription-id/resourcegroups/\<resource-group\>/providers/microsoft.operationalinsights/workspaces/\<workspace\>/y=\<four-digit numeric year\>/m=\<two-digit numeric month\>/d=\<two-digit numeric day\>/h=\<two-digit 24-hour clock hour\>/m=\<two-digit 60-minute clock minute\>/PT05M.json*. 
Appends to blobs are limited to 50-K writes. More blobs will be added in the folder as *PT05M_#.json**, where # is the incremental blob count. +Blobs are stored in 5-minute folders in the following path structure: *WorkspaceResourceId=/subscriptions/subscription-id/resourcegroups/\<resource-group\>/providers/microsoft.operationalinsights/workspaces/\<workspace\>/y=\<four-digit numeric year\>/m=\<two-digit numeric month\>/d=\<two-digit numeric day\>/h=\<two-digit 24-hour clock hour\>/m=\<two-digit 60-minute clock minute\>/PT05M.json*. Appends to blobs are limited to 50-K writes. More blobs will be added in the folder as *PT05M_#.json*, where '#' is the incremental blob count. -The format of blobs in a storage account is in [JSON lines](/previous-versions/azure/azure-monitor/essentials/resource-logs-blob-format), where each record is delimited by a new line, with no outer records array and no commas between JSON records. +> [!NOTE] +> Appends to blobs are written based on the "TimeGenerated" field and occur when the source data is received. Data that arrives in Azure Monitor with a delay, or that is retried after destination throttling, is written to blobs according to its TimeGenerated value. ++The format of blobs in a Storage Account is in [JSON lines](/previous-versions/azure/azure-monitor/essentials/resource-logs-blob-format), where each record is delimited by a new line, with no outer records array and no commas between JSON records. [](media/logs-data-export/storage-data-expand.png#lightbox) -### Event hubs +### Event Hubs -Don't use an existing event hub that has non-monitoring data to prevent reaching the Event Hubs namespace ingress rate limit, failures, and latency. +Avoid using an existing Event Hub that has non-monitoring data, to prevent reaching the Event Hubs namespace ingress rate limit, failures, and latency. -Data is sent to your event hub as it reaches Azure Monitor and is exported to destinations located in a workspace region. You can create multiple export rules to the same Event Hubs namespace by providing a different `event hub name` in the rule. When an `event hub name` isn't provided, a default event hub is created for tables that you export with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would be sent to an event hub named *am-SecurityEvent*. +Data is sent to your Event Hub as it reaches Azure Monitor and is exported to destinations located in a workspace region. You can create multiple export rules to the same Event Hubs namespace by providing a different `Event Hub name` in the rule. When an `Event Hub name` isn't provided, a default Event Hub is created for tables that you export with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would be sent to an Event Hub named *am-SecurityEvent*. -The [number of supported event hubs in Basic and Standard namespace tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When you're exporting more than 10 tables to these tiers, either split the tables between several export rules to different Event Hubs namespaces or provide an event hub name to export all tables to it. +The [number of supported Event Hubs in Basic and Standard namespace tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When you're exporting more than 10 tables to these tiers, either split the tables between several export rules to different Event Hubs namespaces or provide an Event Hub name to export all tables to it. 
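To confirm which event hubs an export rule actually created in the target namespace, you can list them and look for the *am-\<table\>* naming pattern. This is a minimal sketch, assuming the Azure CLI `eventhubs` commands are available; the resource group and namespace names are placeholders.

```azurecli
# List the event hubs in the target namespace.
# Auto-created hubs follow the am-<table> naming pattern (for example, am-SecurityEvent).
az eventhubs eventhub list \
  --resource-group resource-group-name \
  --namespace-name namespaces-name \
  --query "[].name" \
  --output table
```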
> [!NOTE]-> - The Basic Event Hubs namespace tier is limited. It supports [lower event size](../../event-hubs/event-hubs-quotas.md#basic-vs-standard-vs-premium-vs-dedicated-tiers) and no [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) option to automatically scale up and increase the number of throughput units. Because data volume to your workspace increases over time and as a consequence event hub scaling is required, use Standard, Premium, or Dedicated Event Hubs tiers with the **Auto-inflate** feature enabled. For more information, see [Automatically scale up Azure Event Hubs throughput units](../../event-hubs/event-hubs-auto-inflate.md). -> - Data export can't reach Event Hubs resources when virtual networks are enabled. You have to select the **Allow Azure services on the trusted services list to access this storage account** checkbox to bypass this firewall setting in an event hub to grant access to your event hubs. +> - The Basic Event Hubs namespace tier is limited. It supports [lower event size](../../event-hubs/event-hubs-quotas.md#basic-vs-standard-vs-premium-vs-dedicated-tiers) and no [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) option to automatically scale up and increase the number of throughput units. Because data volume to your workspace increases over time and as a consequence Event Hub scaling is required, use Standard, Premium, or Dedicated Event Hubs tiers with the **Auto-inflate** feature enabled. For more information, see [Automatically scale up Azure Event Hubs throughput units](../../event-hubs/event-hubs-auto-inflate.md). +> - Data export can't reach Event Hubs resources when virtual networks are enabled. You have to select the **Allow Azure services on the trusted services list to access this Storage Account** checkbox to bypass this firewall setting in an Event Hub to grant access to your Event Hubs. ## Query exported data Register-AzResourceProvider -ProviderNamespace Microsoft.insights ``` ### Allow trusted Microsoft services-If you've configured your storage account to allow access from selected networks, you need to add an exception to allow Azure Monitor to write to the account. From **Firewalls and virtual networks** for your storage account, select **Allow Azure services on the trusted services list to access this storage account**. +If you've configured your Storage Account to allow access from selected networks, you need to add an exception to allow Azure Monitor to write to the account. From **Firewalls and virtual networks** for your Storage Account, select **Allow Azure services on the trusted services list to access this Storage Account**. [](media/logs-data-export/storage-account-network.png#lightbox) ### Monitor destinations > [!IMPORTANT]-> Export destinations have limits and should be monitored to minimize throttling, failures, and latency. For more information, see [Storage account scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts) and [Event Hubs namespace quotas](../../event-hubs/event-hubs-quotas.md). +> Export destinations have limits and should be monitored to minimize throttling, failures, and latency. For more information, see [Storage Account scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts) and [Event Hubs namespace quotas](../../event-hubs/event-hubs-quotas.md). -#### Monitor a storage account +#### Monitor a Storage Account -1. Use a separate storage account for export. 
+1. Use a separate Storage Account for export. 1. Configure an alert on the metric: | Scope | Metric namespace | Metric | Aggregation | Threshold | If you've configured your storage account to allow access from selected networks | storage-name | Account | Ingress | Sum | 80% of maximum ingress per alert evaluation period. For example, the limit is 60 Gbps for general-purpose v2 in West US. The alert threshold is 1676 GiB per 5-minute evaluation period. | 1. Alert remediation actions:- - Use a separate storage account for export that isn't shared with non-monitoring data. + - Use a separate Storage Account for export that isn't shared with non-monitoring data. - Azure Storage Standard accounts support higher ingress limit by request. To request an increase, contact [Azure Support](https://azure.microsoft.com/support/faq/).- - Split tables between more storage accounts. + - Split tables between more Storage Accounts. -#### Monitor event hubs +#### Monitor Event Hubs 1. Configure alerts on the [metrics](../../event-hubs/monitor-event-hubs-reference.md): If you've configured your storage account to allow access from selected networks - Use Premium or Dedicated tiers for higher throughput. ### Create or update a data export rule-A data export rule defines the destination and tables for which data is exported. You can create 10 rules in the **Enabled** state in your workspace. More rules are allowed in the **Disabled** state. The storage account must be unique across rules in the workspace. Multiple rules can use the same Event Hubs namespace when you're sending to separate event hubs. +A data export rule defines the destination and tables for which data is exported. You can create 10 rules in the **Enabled** state in your workspace. More rules are allowed in the **Disabled** state. The Storage Account must be unique across rules in the workspace. Multiple rules can use the same Event Hubs namespace when you're sending to separate Event Hubs. > [!NOTE] > - You can include tables that aren't yet supported in rules, but no data will be exported for them until the tables are supported.-> - Export to a storage account: A separate container is created in the storage account for each table. -> - Export to event hubs: If an event hub name isn't provided, a separate event hub is created for each table. The [number of supported event hubs in Basic and Standard namespace tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When you're exporting more than 10 tables to these tiers, either split the tables between several export rules to different Event Hubs namespaces or provide an event hub name in the rule to export all tables to it. +> - Export to a Storage Account: A separate container is created in the Storage Account for each table. +> - Export to Event Hubs: If an Event Hub name isn't provided, a separate Event Hub is created for each table. The [number of supported Event Hubs in Basic and Standard namespace tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When you're exporting more than 10 tables to these tiers, either split the tables between several export rules to different Event Hubs namespaces or provide an Event Hub name in the rule to export all tables to it. # [Azure portal](#tab/portal) A data export rule defines the destination and tables for which data is exported # [PowerShell](#tab/powershell) -Use the following command to create a data export rule to a storage account by using PowerShell. 
A separate container is created for each table. +Use the following command to create a data export rule to a Storage Account by using PowerShell. A separate container is created for each table. ```powershell $storageAccountResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Storage/storageAccounts/storage-account-name' New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -ResourceId $storageAccountResourceId ``` -Use the following command to create a data export rule to a specific event hub by using PowerShell. All tables are exported to the provided event hub name and can be filtered by the **Type** field to separate tables. +Use the following command to create a data export rule to a specific Event Hub by using PowerShell. All tables are exported to the provided Event Hub name and can be filtered by the **Type** field to separate tables. ```powershell $eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name' New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -ResourceId $eventHubResourceId -EventHubName EventhubName ``` -Use the following command to create a data export rule to an event hub by using PowerShell. When a specific event hub name isn't provided, a separate container is created for each table, up to the [number of event hubs supported in each Event Hubs tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). To export more tables, provide an event hub name in the rule. Or you can set another rule and export the remaining tables to another Event Hubs namespace. +Use the following command to create a data export rule to an Event Hub by using PowerShell. When a specific Event Hub name isn't provided, a separate container is created for each table, up to the [number of Event Hubs supported in each Event Hubs tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). To export more tables, provide an Event Hub name in the rule. Or you can set another rule and export the remaining tables to another Event Hubs namespace. ```powershell $eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name' New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -Worksp # [Azure CLI](#tab/azure-cli) -Use the following command to create a data export rule to a storage account by using the CLI. A separate container is created for each table. +Use the following command to create a data export rule to a Storage Account by using the CLI. A separate container is created for each table. ```azurecli $storageAccountResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Storage/storageAccounts/storage-account-name' az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $storageAccountResourceId ``` -Use the following command to create a data export rule to a specific event hub by using the CLI. All tables are exported to the provided event hub name and can be filtered by the **Type** field to separate tables. 
+Use the following command to create a data export rule to a specific Event Hub by using the CLI. All tables are exported to the provided Event Hub name and can be filtered by the **Type** field to separate tables. ```azurecli $eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name' az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $eventHubResourceId ``` -Use the following command to create a data export rule to an event hub by using the CLI. When a specific event hub name isn't provided, a separate container is created for each table up to the [number of supported event hubs for your Event Hubs tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you have more tables to export, provide an event hub name to export any number of tables. Or you can set another rule to export the remaining tables to another Event Hubs namespace. +Use the following command to create a data export rule to an Event Hub by using the CLI. When a specific Event Hub name isn't provided, a separate container is created for each table up to the [number of supported Event Hubs for your Event Hubs tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you have more tables to export, provide an Event Hub name to export any number of tables. Or you can set another rule to export the remaining tables to another Event Hubs namespace. ```azurecli $eventHubsNamespacesResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name' az monitor log-analytics workspace data-export create --resource-group resourceG # [REST](#tab/rest) -Use the following request to create a data export rule to a storage account by using the REST API. A separate container is created for each table. The request should use bearer token authorization and content type application/json. +Use the following request to create a data export rule to a Storage Account by using the REST API. A separate container is created for each table. The request should use bearer token authorization and content type application/json. ```rest PUT https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.operationalInsights/workspaces/<workspace-name>/dataexports/<data-export-name>?api-version=2020-08-01 The body of the request specifies the table's destination. The following example } ``` -The following example is a sample body for the REST request for an event hub. +The following example is a sample body for the REST request for an Event Hub. ```json { The following example is a sample body for the REST request for an event hub. } ``` -The following example is a sample body for the REST request for an event hub where the event hub name is provided. In this case, all exported data is sent to it. +The following example is a sample body for the REST request for an Event Hub where the Event Hub name is provided. In this case, all exported data is sent to it. ```json { The following example is a sample body for the REST request for an event hub whe # [Template](#tab/json) -Use the following command to create a data export rule to a storage account by using an Azure Resource Manager template. 
+Use the following command to create a data export rule to a Storage Account by using an Azure Resource Manager template. ``` { Use the following command to create a data export rule to a storage account by u } ``` -Use the following command to create a data export rule to an event hub by using a Resource Manager template. A separate event hub is created for each table. +Use the following command to create a data export rule to an Event Hub by using a Resource Manager template. A separate Event Hub is created for each table. ``` { Use the following command to create a data export rule to an event hub by using } ``` -Use the following command to create a data export rule to a specific event hub by using a Resource Manager template. All tables are exported to it. +Use the following command to create a data export rule to a specific Event Hub by using a Resource Manager template. All tables are exported to it. ``` { |
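After you create or update export rules with any of the methods above, it can help to read the rules back to confirm the destination and table list. A minimal sketch using the same `az monitor log-analytics workspace data-export` command group shown earlier; the resource group, workspace, and rule names are placeholders.

```azurecli
# List all data export rules in the workspace.
az monitor log-analytics workspace data-export list \
  --resource-group resourceGroupName \
  --workspace-name workspaceName \
  --output table

# Show one rule, including its destination resource ID and exported tables.
az monitor log-analytics workspace data-export show \
  --resource-group resourceGroupName \
  --workspace-name workspaceName \
  --name ruleName
```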
azure-monitor | Logs Dedicated Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md | -Linking a Log Analytics workspace to a dedicated cluster in Azure Monitor provides advanced capabilities and higher query utilization. Clusters require a minimum ingestion commitment of 100 GB per day. You can link and unlink workspaces from a dedicated cluster without any data loss or service interruption. +Linking a Log Analytics workspace to a dedicated cluster in Azure Monitor provides advanced capabilities and higher query utilization. Clusters require a minimum ingestion commitment of 500 GB per day. You can link and unlink workspaces from a dedicated cluster without any data loss or service interruption. ## Advanced capabilities Capabilities that require dedicated clusters: eligible for commitment tier discount. - **[Ingest from Azure Event Hubs](../logs/ingest-logs-event-hub.md)** - Lets you ingest data directly from an event hub into a Log Analytics workspace. A dedicated cluster lets you use this capability when the combined ingestion from all linked workspaces meets the commitment tier. ## Cluster pricing model-Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 100 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. +Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. ## Required permissions Provide the following properties when creating a new dedicated cluster: - **ClusterName**: Must be unique for the resource group. - **ResourceGroupName**: Use a central IT resource group because many teams in the organization usually share clusters. For more design considerations, review [Design a Log Analytics workspace configuration](../logs/workspace-design.md). - **Location**-- **SkuCapacity**: You can set the commitment tier (formerly called capacity reservations) to 100, 200, 300, 400, 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicate clusters](./cost-logs.md#dedicated-clusters). +- **SkuCapacity**: You can set the commitment tier (formerly called capacity reservations) to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicated clusters](./cost-logs.md#dedicated-clusters). - **Managed identity**: Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): - System-assigned managed identity - Generated automatically with the cluster creation when identity `type` is set to "*SystemAssigned*". This identity can be used later to grant storage access to your Key Vault for wrap and unwrap operations. 
Content-type: application/json }, "sku": { "name": "capacityReservation",- "Capacity": 100 + "Capacity": 500 }, "properties": { "billingType": "Cluster", Send a GET request on the cluster resource and look at the *provisioningState* v }, "sku": { "name": "capacityreservation",- "capacity": 100 + "capacity": 500 }, "properties": { "provisioningState": "ProvisioningAccount", Send a GET request on the cluster resource and look at the *provisioningState* v "isAvailabilityZonesEnabled": false, "capacityReservationProperties": { "lastSkuUpdate": "last-sku-modified-date",- "minCapacity": 100 + "minCapacity": 500 } }, "id": "/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.OperationalInsights/clusters/cluster-name", Authorization: Bearer <token> }, "sku": { "name": "capacityreservation",- "capacity": 100 + "capacity": 500 }, "properties": { "provisioningState": "Succeeded", Authorization: Bearer <token> "isAvailabilityZonesEnabled": false, "capacityReservationProperties": { "lastSkuUpdate": "last-sku-modified-date",- "minCapacity": 100 + "minCapacity": 500 } }, "id": "/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.OperationalInsights/clusters/cluster-name", The same as for 'clusters in a resource group', but in subscription scope. ## Update commitment tier in cluster -When the data volume to linked workspaces changes over time, you can update the Commitment Tier level appropriately to optimize cost. The tier is specified in units of Gigabytes (GB) and can have values of 100, 200, 300, 400, 500, 1000, 2000 or 5000 GB per day. You don't have to provide the full REST request body, but you must include the sku. +When the data volume to linked workspaces changes over time, you can update the Commitment Tier level appropriately to optimize cost. The tier is specified in units of Gigabytes (GB) and can have values of 500, 1000, 2000 or 5000 GB per day. You don't have to provide the full REST request body, but you must include the sku. During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period. During the commitment period, you can change to a higher commitment tier, which ```azurecli az account set --subscription "cluster-subscription-id" -az monitor log-analytics cluster update --resource-group "resource-group-name" --name "cluster-name" --sku-capacity 100 +az monitor log-analytics cluster update --resource-group "resource-group-name" --name "cluster-name" --sku-capacity 500 ``` #### [PowerShell](#tab/powershell) az monitor log-analytics cluster update --resource-group "resource-group-name" - ```powershell Select-AzSubscription "cluster-subscription-id" -Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -SkuCapacity 100 +Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -SkuCapacity 500 ``` #### [REST API](#tab/restapi) Authorization: Bearer <token> - 400--The body of the request is null or in bad format. - 400--SKU name is invalid. Set SKU name to capacityReservation. - 400--Capacity was provided but SKU isn't capacityReservation. Set SKU name to capacityReservation.-- 400--Missing Capacity in SKU. Set Capacity value to 100, 200, 300, 400, 500, 1000, 2000 or 5000 GB/day.+- 400--Missing Capacity in SKU. 
Set Capacity value to 500, 1000, 2000 or 5000 GB/day. - 400--Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.-- 400--No SKU was set. Set the SKU name to capacityReservation and Capacity value to 100, 200, 300, 400, 500, 1000, 2000 or 5000 GB/day.+- 400--No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day. - 400--Identity is null or empty. Set Identity with systemAssigned type. - 400--KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation. - 400--Operation can't be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed. |
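If you prefer the Azure CLI over raw REST calls for the checks described above, the cluster resource can also be read back to verify its provisioning state and current commitment tier. A hedged sketch with placeholder names; inspect `provisioningState` and `sku.capacity` in the output.

```azurecli
# Show the dedicated cluster; check provisioningState (expect "Succeeded")
# and sku.capacity (for example, 500) in the returned JSON.
az monitor log-analytics cluster show \
  --resource-group resource-group-name \
  --name cluster-name
```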
azure-monitor | Tutorial Logs Ingestion Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-code.md | The following sample code uses the [Azure Monitor Ingestion client library for J 3. Replace the variables in the following sample code with values from your DCE and DCR. You might also want to replace the sample data with your own. ```javascript- const { isAggregateLogsUploadError, DefaultAzureCredential } = require("@azure/identity"); - const { LogsIngestionClient } = require("@azure/monitor-ingestion"); + const { DefaultAzureCredential } = require("@azure/identity"); + const { LogsIngestionClient, isAggregateLogsUploadError } = require("@azure/monitor-ingestion"); require("dotenv").config(); |
azure-netapp-files | Azure Netapp Files Solution Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md | This section provides references to SAP on Azure solutions. * [SAP Oracle 19c System Refresh Guide on Azure VMs using Azure NetApp Files Snapshots with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-oracle-19c-system-refresh-guide-on-azure-vms-using-azure/ba-p/3708172) * [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload using Azure NetApp Files](../virtual-machines/workloads/sap/dbms_guide_ibm.md#using-azure-netapp-files) * [DB2 Installation Guide on Azure NetApp Files](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/db2-installation-guide-on-anf/ba-p/3709437)+* [Manual Recovery Guide for SAP DB2 on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-db2-on-azure-vms-from-azure-netapp/ba-p/3865379) * [SAP ASE 16.0 on Azure NetApp Files for SAP Workloads on SLES15](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-ase-16-0-on-azure-netapp-files-for-sap-workloads-on-sles15/ba-p/3729496) ### SAP IQ-NLS |
azure-resource-manager | Deployment Stacks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md | Title: Create & deploy deployment stacks in Bicep description: Describes how to create deployment stacks in Bicep. Previously updated : 07/10/2023 Last updated : 07/12/2023 # Deployment stacks (Preview) Deployment stacks provide the following benefits: - [What-if](./deploy-what-if.md) isn't available in the preview. - Management group scoped deployment stacks can only deploy the template to the subscription scope. - When using the Azure CLI create command to modify an existing stack, the deployment process continues regardless of whether you choose _n_ for a prompt. To halt the procedure, use _[CTRL] + C_.+- There is an issue with the Azure CLI create command when the value `none` is passed to the `deny-settings-mode` parameter. Until the issue is fixed, use `denyDelete` instead of `none` (see the CLI sketch after this entry). - If you create or modify a deployment stack in the Azure portal, deny settings will be overwritten (support for deny settings in the Azure portal is currently in progress).-- Management group deployment stacks not yet available in the Azure portal.+- Management group deployment stacks are not yet available in the Azure portal. ## Create deployment stacks |
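As a hedged illustration of the `deny-settings-mode` workaround noted above, the following sketch creates a resource-group-scoped stack with `denyDelete` instead of `none`. The stack, resource group, and template file names are placeholders, and the exact parameter set can vary across preview versions of the Azure CLI.

```azurecli
# Create (or update) a resource group scoped deployment stack.
# Workaround: pass denyDelete rather than none to --deny-settings-mode until the CLI issue is fixed.
az stack group create \
  --name demoStack \
  --resource-group demoRg \
  --template-file azuredeploy.json \
  --deny-settings-mode denyDelete
```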
azure-vmware | Deploy Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-azure-vmware-solution.md | description: Learn how to use the information gathered in the planning stage to Previously updated : 4/12/2023 Last updated : 7/13/2023 You should have connectivity between the Azure Virtual Network where the Express 1. If you want to log into both vCenter Server and NSX-T Manager, open a web browser and log into the same virtual machine used for network route validation. - You can identify the vCenter Server and NSX-T Manager console's IP addresses and credentials in the Azure portal. Select your private cloud and then **Manage** > **Identity**. + You can identify the vCenter Server and NSX-T Manager console's IP addresses and credentials in the Azure portal. Select your private cloud and then **Manage** > **VMware credentials**. :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot showing the private cloud vCenter and NSX Manager URLs and credentials." border="true"::: |
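If you'd rather script the lookup than open the **VMware credentials** blade, the `az vmware` CLI extension exposes the same cloudadmin credentials. This is a hedged sketch that assumes the extension version you have installed provides the `list-admin-credentials` command; the resource group and private cloud names are placeholders.

```azurecli
# Requires the VMware extension: az extension add --name vmware
# Returns the vCenter Server and NSX-T Manager credentials for the private cloud.
az vmware private-cloud list-admin-credentials \
  --resource-group resource-group-name \
  --private-cloud private-cloud-name
```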
azure-vmware | Install Vmware Hcx | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md | description: Install VMware HCX in your Azure VMware Solution private cloud. Previously updated : 2/14/2023 Last updated : 7/13/2023 # Install and activate VMware HCX in Azure VMware Solution [VMware HCX](https://docs.vmware.com/en/VMware-HCX/https://docsupdatetracker.net/index.html) is an application mobility platform designed for simplifying application migration, rebalancing workloads, and optimizing disaster recovery across data centers and clouds. -VMware HCX has two component +VMware HCX has two component This article shows you how to install and activate the VMware HCX Cloud Manager and VMware HCX Connector components. After HCX is deployed, follow the recommended [Next steps](#next-steps). 1. Select **Get started** for **HCX Workload Mobility**. - :::image type="content" source="media/tutorial-vmware-hcx/add-hcx-workload-mobility.png" alt-text="Screenshot showing the Get started button for HCX Workload Mobility." lightbox="media/tutorial-vmware-hcx/add-hcx-workload-mobility.png"::: + :::image type="content" source="media/tutorial-vmware-hcx/add-hcx-workload-mobility.png" alt-text="Screenshot showing the Get started button for VMware HCX Workload Mobility." lightbox="media/tutorial-vmware-hcx/add-hcx-workload-mobility.png"::: 1. Select the **I agree with terms and conditions** checkbox and then select **Install**. After HCX is deployed, follow the recommended [Next steps](#next-steps). > [!IMPORTANT] > If you don't see the HCX key after installing, click the **ADD** button to generate the key which you can then use for site pairing. - :::image type="content" source="media/tutorial-vmware-hcx/configure-hcx-appliance-for-migration-using-hcx-tab.png" alt-text="Screenshot showing the Migration using HCX tab under Connectivity." lightbox="media/tutorial-vmware-hcx/configure-hcx-appliance-for-migration-using-hcx-tab.png"::: + :::image type="content" source="media/tutorial-vmware-hcx/configure-hcx-appliance-for-migration-using-hcx-tab.png" alt-text="Screenshot showing the Migration using VMware HCX tab under Connectivity." lightbox="media/tutorial-vmware-hcx/configure-hcx-appliance-for-migration-using-hcx-tab.png"::: -## HCX license edition +## VMware HCX license edition HCX offers various [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html#GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED) based on the type of license installed with the system. Advanced delivers basic connectivity and mobility services to enable hybrid interconnect and migration services. HCX Enterprise offers more services than what standard licenses provide. Some of those services include; Mobility Groups, Replication assisted vMotion (RAV), Mobility Optimized Networking, Network Extension High availability, OS assisted Migration, and others. HCX offers various [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user 1. Under **Manage** in the left navigation, select **Add-ons**, then the **Migration using HCX** tab. 2. Select the **Upgrade to HCX Enterprise** button to enable HCX Enterprise edition. - :::image type="content" source="media/tutorial-vmware-hcx/upgrade-to-hcx-enterprise-on-migration-using-hcx-tab.png" alt-text="Screenshot showing upgrade to HCX Enterprise using HCX tab under Add-ons." 
lightbox="media/tutorial-vmware-hcx/upgrade-to-hcx-enterprise-on-migration-using-hcx-tab.png"::: + :::image type="content" source="media/tutorial-vmware-hcx/upgrade-to-hcx-enterprise-on-migration-using-hcx-tab.png" alt-text="Screenshot showing upgrade to VMware HCX Enterprise using VMware HCX tab under Add-ons." lightbox="media/tutorial-vmware-hcx/upgrade-to-hcx-enterprise-on-migration-using-hcx-tab.png"::: 3. Confirm the update to HCX Enterprise edition by selecting **Yes**. - :::image type="content" source="media/tutorial-vmware-hcx/update-to-hcx-enterprise-edition-on-migration-using-hcx-tab.png" alt-text="Screenshot showing confirmation to update to HCX Enterprise using HCX tab under Add-ons." lightbox="media/tutorial-vmware-hcx/update-to-hcx-enterprise-edition-on-migration-using-hcx-tab.png"::: + :::image type="content" source="media/tutorial-vmware-hcx/update-to-hcx-enterprise-edition-on-migration-using-hcx-tab.png" alt-text="Screenshot showing confirmation to update to VMware HCX Enterprise using VMware HCX tab under Add-ons." lightbox="media/tutorial-vmware-hcx/update-to-hcx-enterprise-edition-on-migration-using-hcx-tab.png"::: >[!IMPORTANT]- > If you upgraded HCX from advanced to Enterprise, enable the new features in the compute profile and perform resync in service mesh to select a new feature like, Replication Assisted vMotion (RAV). + > If you upgraded VMware HCX from advanced to Enterprise, enable the new features in the compute profile and perform resync in service mesh to select a new feature like, Replication Assisted vMotion (RAV). 4. Change Compute profile after HCX upgrade to HCX Enterprise. 1. On HCX UI, select **Infrastructure** > **Interconnect**, then select **Edit**.- 2. Select services you want activated like, Replication Assisted vMotion (RAV) and OS assisted Migration, which is available with HCX Enterprise only. -- 3. Select **Continue**, review the settings, then select **Finish** to create the Compute Profile. + 1. Select services you want activated like, Replication Assisted vMotion (RAV) and OS assisted Migration, which is available with VMware HCX Enterprise only. + 1. Select **Continue**, review the settings, then select **Finish** to create the Compute Profile. 5. If compute profile is being used in service mesh(es), resync service mesh. HCX offers various [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user 1. Verify that you've reverted to an HCX Advanced configuration state and you aren't using the Enterprise features. 1. If you plan to downgrade, verify that no scheduled migrations, [Enterprise services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html#GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED) like RAV and HCX MON, etc. are in use. Open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to request downgrade. -## Download and deploy the VMware HCX Connector in on-premises +## Download and deploy the VMware HCX Connector on-premises Use the following steps to download the VMware HCX Connector OVA file, and then deploy the VMware HCX Connector to your on-premises vCenter Server. After deploying the VMware HCX Connector OVA on-premises and starting the applia After the services restart, you'll see vCenter Server displayed as green on the screen that appears. Both vCenter Server and SSO must have the appropriate configuration parameters, which should be the same as the previous screen. ## Next steps Continue to the next tutorial to configure the VMware HCX Connector. 
After you've configured the VMware HCX Connector, you'll have a production-ready environment for creating virtual machines (VMs) and migration. |
azure-web-pubsub | Tutorial Subprotocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-subprotocol.md | This will be useful if you want to stream a large amount of data to other client ```javascript const WebSocket = require('ws');- const fetch = require('node-fetch'); + const fetch = (...args) => import('node-fetch').then(({default: fetch}) => fetch(...args)); async function main() { let res = await fetch(`http://localhost:8080/negotiate`); |
backup | Azure Backup Architecture For Sap Hana Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-architecture-for-sap-hana-backup.md | In the following sections you'll learn about different SAP HANA setups and their :::image type="content" source="./media/azure-backup-architecture-for-sap-hana-backup/azure-network-with-udr-and-nva-or-azure-firewall-and-private-endpoint-or-service-endpoint.png" alt-text="Diagram showing the SAP HANA setup if Azure network with UDR + NVA / Azure Firewall + Private Endpoint or Service Endpoint."::: -### Backup architecture for database with HANA System Replication (preview) +### Backup architecture for database with HANA System Replication The backup service resides in both the physical nodes of the HSR setup. Once you confirm that these nodes are in a replication group (using the [pre-registration script](sap-hana-database-with-hana-system-replication-backup.md#run-the-preregistration-script)), Azure Backup groups the nodes logically, and creates a single backup item during protection configuration. This section provides you with an understanding about the backup process of an H - Learn about the supported configurations and scenarios in the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md). - Learn about how to [backup SAP HANA databases in Azure VMs](backup-azure-sap-hana-database.md).-- Learn about how to [backup SAP HANA System Replication databases in Azure VMs (preview)](sap-hana-database-with-hana-system-replication-backup.md).+- Learn about how to [backup SAP HANA System Replication databases in Azure VMs](sap-hana-database-with-hana-system-replication-backup.md). - Learn about how to [backup SAP HANA databases' snapshot instances in Azure VMs (preview)](sap-hana-database-instances-backup.md). |
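Because Azure Backup can only group the two HSR nodes into a single logical backup item once both are registered, it can be useful to confirm registration from the command line. A minimal sketch, assuming placeholder vault and resource group names.

```azurecli
# List the workload containers (registered HANA VMs/nodes) in the Recovery Services vault.
az backup container list \
  --resource-group resource-group-name \
  --vault-name vault-name \
  --backup-management-type AzureWorkload \
  --output table
```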
backup | Backup Azure Arm Restore Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md | Title: Restore VMs by using the Azure portal description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 06/13/2023 Last updated : 07/13/2023 Azure Backup provides several ways to restore a VM. | **Create a new VM** | Quickly creates and gets a basic VM up and running from a restore point.<br/><br/> You can specify a name for the VM and select the resource group and virtual network (VNet) in which it will be placed. The new VM must be created in the same region as the source VM.<br><br>If a VM restore fails because an Azure VM SKU wasn't available in the specified region of Azure, or because of any other issues, Azure Backup still restores the disks in the specified resource group. **Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell.-**Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk. The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> When choosing a Vault-Standard recovery point, a VHD file with the content of the chosen recovery point is also created in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). +**Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk. The snapshot is copied to the vault and retained in accordance with the retention policy. 
<br/><br/> When you choose a Vault-Standard recovery point, a VHD file with the content of the chosen recovery point is also created in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins. **Cross Subscription Restore (preview)** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [unmanaged VMs](#restoring-unmanaged-vms-and-disks-as-managed), [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup). **Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). 
<br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup). Azure Backup provides several ways to restore a VM. Some details about storage accounts: -- **Create VM**: When creating a new VM with managed disks, nothing is placed in the storage account you specify. If using unmanaged disks, the VHD files for the VM's disks will be placed in the storage account you specify.+- **Create VM**: When you create a new VM with managed disks, nothing is placed in the storage account you specify. If using unmanaged disks, the VHD files for the VM's disks will be placed in the storage account you specify. - **Restore disk**: The restore job generates a template that you can download and use to specify custom VM settings. This template is placed in the specified storage account. VHD files are also copied to the storage account when you restore managed disks from a Vault-Standard recovery point if the disk size is less than 4 TB, or when you restore unmanaged disks. - **Replace disk**: When you replace a managed disk from a Vault-Standard recovery point and the disk size is less than 4 TB, a VHD file with the data from the chosen recovery point is created in the specified storage account. After the replace disk operation, the disks of the source Azure VM are left in the specified Resource group for your operation and the VHDs are stored in the specified storage account. You can choose to delete or retain these VHDs and disks. - **Storage account location**: The storage account must be in the same region as the vault. Only these accounts are displayed. If there are no storage accounts in the location, you need to create one. If you don't have permissions, you can [restore a disk](#restore-disks), and the 1. Specify settings for your selected restore option. +>[!Note] +>Use the **Replace existing** option only when the **Transfer Data to Vault** subtask in the job details shows successfully completed. Otherwise, use the **Create New** option for the latest recovery point restoration. + ## Create a VM As one of the [restore options](#restore-options), you can create a VM quickly with basic settings from a restore point. There are a few things to note after restoring a VM: - Extensions present during the backup configuration are installed, but not enabled. If you see an issue, reinstall the extensions. In the case of disk replacement, reinstallation of extensions is not required. - If the backed-up VM had a static IP address, the restored VM will have a dynamic IP address to avoid conflict. You can [add a static IP address to the restored VM](/powershell/module/az.network/set-aznetworkinterfaceipconfig#description). - A restored VM doesn't have an availability set. 
If you use the restore disk option, then you can [specify an availability set](../virtual-machines/windows/tutorial-availability-sets.md) when you create a VM from the disk using the provided template or PowerShell.-- If you use a cloud-init-based Linux distribution, such as Ubuntu, for security reasons the password is blocked after the restore. Use the VMAccess extension on the restored VM to [reset the password](/troubleshoot/azure/virtual-machines/reset-password). We recommend using SSH keys on these distributions, so you don't need to reset the password after the restore.+- If you use a cloud-init-based Linux distribution, such as Ubuntu, for security reasons the password is blocked after the restore. Use the `VMAccess` extension on the restored VM to [reset the password](/troubleshoot/azure/virtual-machines/reset-password). We recommend using SSH keys on these distributions, so you don't need to reset the password after the restore. - If you're unable to access a VM once restored because the VM has a broken relationship with the domain controller, then follow the steps below to bring up the VM: - Attach OS disk as a data disk to a recovered VM. - Manually install VM agent if Azure Agent is found to be unresponsive by following this [link](/troubleshoot/azure/virtual-machines/install-vm-agent-offline). |
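For the **Restore disk** option described in the table above, a roughly equivalent CLI call for an IaaS VM recovery point looks like the following hedged sketch; the container, item, recovery point, staging storage account, and target resource group names are placeholders.

```azurecli
# Restore the disks of a backed-up Azure VM from a recovery point into a target resource group,
# using a storage account as the staging location for the template and any VHDs.
az backup restore restore-disks \
  --resource-group resource-group-name \
  --vault-name vault-name \
  --container-name container-name \
  --item-name vm-name \
  --rp-name recovery-point-name \
  --storage-account staging-storage-account \
  --target-resource-group target-resource-group-name
```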
backup | Quick Backup Hana Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-hana-cli.md | -# Quickstart: Back up SAP HANA System Replication on Azure VMs using Azure CLI (preview) +# Quickstart: Back up SAP HANA System Replication on Azure VMs using Azure CLI This quickstart describes how to protect SAP HANA System Replication (HSR) using Azure CLI. |
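As a hedged companion to this quickstart, once protection is configured you can confirm that the databases under the HSR logical container appear as protected items; the vault and resource group names are placeholders.

```azurecli
# List protected workload items (including SAP HANA databases) in the vault.
az backup item list \
  --resource-group resource-group-name \
  --vault-name vault-name \
  --backup-management-type AzureWorkload \
  --output table
```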
backup | Quick Restore Hana Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-restore-hana-cli.md | -# Quickstart: Restore SAP HANA System Replication on Azure VMs using Azure CLI (preview) +# Quickstart: Restore SAP HANA System Replication on Azure VMs using Azure CLI This quickstart describes how to restore SAP HANA System Replication (HSR) using Azure CLI. |
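A typical first step in a CLI-driven restore is to list the recovery points available for the protected database. A minimal sketch with placeholder names for the vault, container, and item.

```azurecli
# List recovery points for a protected SAP HANA database.
az backup recoverypoint list \
  --resource-group resource-group-name \
  --vault-name vault-name \
  --container-name container-name \
  --item-name item-name \
  --backup-management-type AzureWorkload \
  --output table
```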
backup | Sap Hana Database About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-about.md | You can use [an Azure VM backup](backup-azure-vms-introduction.md) to back up th 1. Restore the database into the VM from the [Azure SAP HANA database backup](sap-hana-db-restore.md#restore-to-a-point-in-time-or-to-a-recovery-point) to your intended point in time. -## Back up a HANA system with replication enabled (preview) +## Back up a HANA system with replication enabled Azure Backup now supports backing up databases that have HSR enabled. This means that backups are managed automatically when a failover occurs, which eliminates the necessity for manual intervention. Backup also offers immediate protection with no remedial full backups, so you can protect HANA instances or HSR setup nodes as a single HSR container. |
backup | Sap Hana Database Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-restore.md | Title: Restore SAP HANA databases on Azure VMs description: In this article, you'll learn how to restore SAP HANA databases that are running on Azure virtual machines. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 06/20/2023 Last updated : 07/14/2023 -Azure Backup now supports backup and restore of SAP HANA System Replication (HSR) instance (preview). +Azure Backup now supports backup and restore of SAP HANA System Replication (HSR) instance. >[!Note] >- The restore process for HANA databases with HSR is the same as the restore process for HANA databases without HSR. As per SAP advisories, you can restore databases with HSR mode as *standalone* databases. If the target system has the HSR mode enabled, first disable the mode, and then restore the database.->- Original Location Recovery (OLR) is currently not supported for HSR. +>- Original Location Recovery (OLR) is currently not supported for HSR. Select **Alternate location** restore, and then select the source VM as your *Host* from the list. >- Restore to HSR instance isn't supported. However, restore only to HANA instance is supported. For information about the supported configurations and scenarios, see the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md). To restore a database, you need the following permissions: :::image type="content" source="./media/sap-hana-db-restore/hana-restore-configuration.png" alt-text="Screenshot that shows where to restore the configuration."::: +>[!Note] +>During restore (applicable to the Virtual IP/Load balancer frontend IP scenario only), if you're trying to restore a backup to the target node after changing the HSR mode to standalone or breaking HSR before restore, as recommended by SAP, ensure that the load balancer points to the target node. +> +>**Example scenarios**: +> +>- If you're using *hdbuserstore set SYSTEMKEY localhost* in your preregistration script, there will be no issues during restore. +>- If you're using `hdbuserstore set SYSTEMKEY <load balancer host/ip>` in your preregistration script and you're trying to restore the backup to the target node, ensure that the load balancer points to the target node that needs to be restored. +> +> + ### Restore to an alternate location 1. On the **Restore** pane, under **Where and how to Restore?**, select **Alternate Location**. |
backup | Sap Hana Database With Hana System Replication Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-with-hana-system-replication-backup.md | Title: Back up SAP HANA System Replication databases on Azure VMs description: In this article, discover how to back up SAP HANA databases with HANA System Replication enabled. Previously updated : 07/11/2023 Last updated : 07/14/2023 -# Back up SAP HANA System Replication databases on Azure VMs (preview) +# Back up SAP HANA System Replication databases on Azure VMs SAP HANA databases are critical workloads that require a low recovery-point objective (RPO) and long-term retention. This article describes how you can back up SAP HANA databases that are running on Azure virtual machines (VMs) to an Azure Backup Recovery Services vault by using [Azure Backup](backup-overview.md). -You can also switch the protection of SAP HANA database on Azure VM (standalone) on Azure Backup to HSR. [Learn more](#switch-database-protection-from-standalone-to-hsr-on-azure-backup). +You can also switch the protection of SAP HANA database on Azure VM (standalone) on Azure Backup to HSR. [Learn more](#possible-scenarios-to-protect-hsr-nodes-on-azure-backup). >[!Note] >- The support for **HSR + DR** scenario is currently not available because there is a restriction to have VM and Vault in the same region. When a failover occurs, the users are replicated to the new primary, but *hdbuse hdbuserstore set SYSTEMKEY <load balancer host/ip>:30013@SYSTEMDB <custom-user> '<some-password>' ``` + :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/pass-custom-backup-user-key-to-script-as-parameter-architecture.png" alt-text="Diagram that explains the flow to pass the custom backup user key to the script as a parameter." lightbox="./media/sap-hana-database-with-hana-system-replication-backup/pass-custom-backup-user-key-to-script-as-parameter-architecture.png"::: + >[!Note] >You can create a custom backup key using the load balancer host/IP instead of local host to use Virtual IP (VIP). To discover the HSR database, follow these steps: To view the details about all the databases of each discovered VM, select **View details** under the **Step 1: Discover DBs in VMs section**. +>[!Note] +>During discovery or configuration of backup on the secondary node, ignore the status if the **Backup Readiness** state appears as **Not Ready**, because this is an expected state for the secondary node on HSR. +> +> :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/backup-readiness-state.png" alt-text="Screenshot that shows the different backup readiness states." lightbox="./media/sap-hana-database-with-hana-system-replication-backup/backup-readiness-state.png"::: ## Configure backup To enable the backup, follow these steps: Backups run in accordance with the policy schedule. Learn how to [run an on-dema You can run an on-demand backup using SAP HANA native clients to the local file system instead of Backint. Learn more about how to [manage operations using SAP native clients](sap-hana-database-manage.md#manage-operations-using-sap-hana-native-clients). -## Switch database protection from standalone to HSR on Azure Backup ++## Possible scenarios to protect HSR nodes on Azure Backup + You can now switch the protection of SAP HANA database on Azure VM (standalone) on Azure Backup to HSR. 
If you've already configured HSR and are protecting only the primary node using Azure Backup, you can modify the configuration to protect both primary and secondary nodes. -Follow these steps: +### Two standalone/HSR nodes never protected using SAP HANA Database backup on Azure VM ++1. (Mandatory) [Run the latest preregistration script on both primary and secondary VM nodes](#run-the-preregistration-script). ++ >[!Note] + >HSR-based attributes are added to the latest preregistration script - ++1. Configure HSR manually or using any clustering tools, such as **pacemaker**. -1. On standalone VM, Primary node, or Secondary node (once protected using Azure Backup), go to *vault* > **Backup Items** > **SAP HANA in Azure VM** > **View Details** > **Stop backup**, and then select **Retain backup data** > **Stop backup** to stop backup and retain data. + Skip to the next step if HSR configuration is already complete. -2. (Mandatory) [Run the latest preregistration script](sap-hana-database-with-hana-system-replication-backup.md#run-the-preregistration-script) on both primary and condary VM nodes +1. Discover and configure backup for those VMs. ++ >[!Note] + >For HSR deployments, the Protected Instance cost is charged to the HSR logical container; the two nodes (primary and secondary) form a single HSR logical container. - The preregistration script contains the HSR attributes. +1. Before a planned failover, [ensure that both VMs/Nodes are registered to the vault (physical and logical registration)](sap-hana-database-manage.md#verify-the-registration-status-of-vms-or-nodes-to-the-vault). -3. [Configure HSR manually](sap-hana-database-with-hana-system-replication-backup.md#configure-backup). -You can also configure the backup with clustering tools, such as **Pacemaker**. ++### Two standalone VMs/ One standalone VM already protected using SAP HANA Database backup on Azure VM ++1. To stop backup and retain data, go to the *vault* > **Backup Items** > **SAP HANA in Azure VM**, and then select **View Details** > **Stop backup** > **Retain backup data** > **Stop backup**. +1. (Mandatory) [Run the latest preregistration script on both primary and secondary VM nodes](#run-the-preregistration-script). ++ >[!Note] + >HSR-based attributes are added to the latest preregistration script - //link here ) - Skip this step if HSR configuration is complete. +1. Configure HSR manually or using any clustering tools like pacemaker. -4. Add the Primary and secondary nodes to Azure Backup, [rediscover the databases](sap-hana-database-with-hana-system-replication-backup.md#discover-the-databases), and [resume protection](sap-hana-database-manage.md#resume-protection-for-an-sap-hana-database-or-hana-instance). >[!Note]- >For HSR deployments, Protected Instance cost is charged to HSR container. Two nodes (primary and secondary) will form a single HSR logical container and storage cost is charged as applicable. + >For HSR deployments, the Protected Instance cost is charged to the HSR logical container; the two nodes (primary and secondary) form a single HSR logical container. -5. Before a planned failover, [ensure that both VMs/Nodes are registered to the vault (physical and logical registration)](sap-hana-database-manage.md#verify-the-registration-status-of-vms-or-nodes-to-the-vault). +1. 
Before a planned failover, [ensure that both VMs/Nodes are registered to the vault (physical and logical registration)](sap-hana-database-manage.md#verify-the-registration-status-of-vms-or-nodes-to-the-vault). ## Next steps -- [Restore SAP HANA System Replication databases on Azure VMs (preview)](sap-hana-database-restore.md)-- [About backing up SAP HANA System Replication databases on Azure VMs (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled-preview)+- [Restore SAP HANA System Replication databases on Azure VMs](sap-hana-database-restore.md) +- [About backing up SAP HANA System Replication databases on Azure VMs](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled) |
backup | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md | Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 07/05/2023 Last updated : 07/14/2023 You can learn more about the new releases by bookmarking this page or by [subscr ## Updates summary - July 2023+ - [SAP HANA System Replication database backup support is now generally available](#sap-hana-system-replication-database-backup-support-is-now-generally-available) - [Cross Region Restore for PostgreSQL (preview)](#cross-region-restore-for-postgresql-preview) - April 2023 - [Microsoft Azure Backup Server v4 is now generally available](#microsoft-azure-backup-server-v4-is-now-generally-available) You can learn more about the new releases by bookmarking this page or by [subscr - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview) +## SAP HANA System Replication database backup support is now generally available ++Azure Backup now supports backup of HANA databases with HANA System Replication. Log backups from the new primary node are accepted immediately, which provides continuous, automatic database protection. ++This eliminates the need for manual intervention to continue backups on the new primary node during a failover. Because full backups no longer need to be triggered for every failover, you save costs and reduce the time needed to resume protection. ++For more information, see [Back up a HANA system with replication enabled](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled). + ## Cross Region Restore for PostgreSQL (preview) Azure Backup allows you to replicate your backups to an additional Azure paired region by using Geo-redundant Storage (GRS) to protect your backups from regional outages. When you enable the backups with GRS, the backups in the secondary region become accessible only when Microsoft declares an outage in the primary region. -For more information, see [Cross Region Restore support for PostgreSQL using Azure Backup (preview)](backup-vault-overview.md#cross-region-restore-support-for-postgresql-using-azure-backup-preview). +For more information, see [Cross Region Restore support for PostgreSQL using Azure Backup](backup-vault-overview.md#cross-region-restore-support-for-postgresql-using-azure-backup-preview). ## Microsoft Azure Backup Server v4 is now generally available Azure Backup now supports backup of HANA databases with HANA System Replication. This eliminates the need for manual intervention to continue backups on the new primary node during a failover. Because full backups no longer need to be triggered for every failover, you save costs and reduce the time needed to resume protection. -For more information, see [Back up a HANA system with replication enabled (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled-preview). +For more information, see [Back up a HANA system with replication enabled (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled). ## Built-in Azure Monitor alerting for Azure Backup is now generally available |
cognitive-services | Abuse Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/abuse-monitoring.md | description: Learn about the abuse monitoring capabilities of Azure OpenAI Servi + Last updated 06/16/2023 |
cognitive-services | Advanced Prompt Engineering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/advanced-prompt-engineering.md | description: Learn about the options for how to use prompt engineering with GPT- + Last updated 04/20/2023 |
cognitive-services | Legacy Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/legacy-models.md | Title: Azure OpenAI Service legacy models description: Learn about the legacy models in Azure OpenAI. + Last updated 07/06/2023 |
cognitive-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md | Title: Azure OpenAI Service models description: Learn about the different model capabilities that are available with Azure OpenAI. + Last updated 07/12/2023 |
cognitive-services | Prompt Engineering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/prompt-engineering.md | Title: Azure OpenAI Service | Introduction to Prompt engineering description: Learn how to use prompt engineering to optimize your work with Azure OpenAI Service. + Last updated 03/21/2023 |
cognitive-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/use-your-data.md | There are some caveats about document structure and how it might affect the qual Azure OpenAI on your data does not currently support private endpoints. +## Azure Role-based access controls (Azure RBAC) ++To add a new data source to your Azure OpenAI resource, you need the following Azure RBAC roles. +++|Azure RBAC role |Needed when | +||| +|[Cognitive Services OpenAI Contributor](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) | You want to use Azure OpenAI on your data. | +|[Search Index Data Contributor](/azure/role-based-access-control/built-in-roles#search-index-data-contributor) | You have an existing Azure Cognitive Search index that you want to use, instead of creating a new one. | +|[Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) | You have an existing Blob storage container that you want to use, instead of creating a new one. | + ## Recommended settings Use the following sections to help you configure Azure OpenAI on your data for optimal results. Avoid asking long questions and break them down into multiple questions if possi * If you have documents in multiple languages, we recommend building a new index for each language and connecting them separately to Azure OpenAI. +### Using the web app ++You can use the available web app to interact with your model using a graphical user interface, which you can deploy using either [Azure OpenAI studio](../use-your-data-quickstart.md?pivots=programming-language-studio#deploy-a-web-app) or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT). ++ ++You can also customize the app's frontend and backend logic. For example, you could change the icon that appears in the center of the app by updating `/frontend/src/assets/Azure.svg` and then redeploying the app [using the Azure CLI](https://github.com/microsoft/sample-app-aoai-chatGPT#deploy-with-the-azure-cli). See the source code for the web app, and more information [on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT). ++When customizing the app, we recommend: ++- Resetting the chat session (clear chat) if the user changes any settings. Notify the user that their chat history will be lost. ++- Clearly communicating the impact on the user experience that each setting you implement will have. ++- When you rotate API keys for your Azure OpenAI or Azure Cognitive Search resource, be sure to update the app settings for each of your deployed apps to use the new keys. ++- Pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes and improvements. + ### Using the API Consider setting the following parameters even if they are optional for using the API. When chatting with a model, providing a history of the chat will help the model ## Next steps * [Get started using your data with Azure OpenAI](../use-your-data-quickstart.md)+ * [Introduction to prompt engineering](./prompt-engineering.md)++ |
cognitive-services | Chatgpt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/chatgpt.md | description: Learn about the options for how to use the GPT-35-Turbo and GPT-4 m + Last updated 05/15/2023 |
cognitive-services | Switching Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/switching-endpoints.md | description: Learn about the changes you need to make to your code to swap back + Last updated 05/24/2023 |
cognitive-services | Use Your Data Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/use-your-data-quickstart.md | In this quickstart you can use your own data with Azure OpenAI models. Using Azu Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. [See Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext) for more information. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - An Azure OpenAI resource with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md).+- Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) role for the Azure OpenAI resource. + > [!div class="nextstepaction"] > [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=OVERVIEW&Pillar=AOAI&Product=ownData&Page=quickstart&Section=Prerequisites) |
communication-services | Troubleshooting Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md | Console.WriteLine($"Email operation id = {emailSendOperation.Id}"); # [JavaScript](#tab/javascript) The Azure Communication Services Calling SDK relies internally on [@azure/logger](https://www.npmjs.com/package/@azure/logger) library to control logging.-Use the `setLogLevel` method from the `@azure/logger` package to configure the log output: +Use the `setLogLevel` method from the `@azure/logger` package to configure the log output level. Create a logger and pass it into the CallClient constructor: ```javascript-import { setLogLevel } from '@azure/logger'; +import { setLogLevel, createClientLogger, AzureLogger } from '@azure/logger'; setLogLevel('verbose');-const callClient = new CallClient(); +let logger = createClientLogger('ACS'); +const callClient = new CallClient({ logger }); ``` You can use AzureLogger to redirect the logging output from Azure SDKs by overriding the `AzureLogger.log` method: This value may be useful if you want to redirect logs to a location other than console. ```javascript-import { AzureLogger } from '@azure/logger'; // redirect log output AzureLogger.log = (...args) => {- console.log(...args); // to console, file, buffer, REST API.. + console.log(...args); // to console, file, buffer, REST API, etc... }; ``` |
communication-services | Manage Call Quality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/manage-call-quality.md | + + Title: Azure Communication Services Manage Calling Quality ++description: Learn how to improve and manage calling quality with Azure Communication Services +++++ Last updated : 7/10/2023++++++++# Improve and manage call quality ++This article introduces key tools you can use to monitor, troubleshoot, +and improve call quality in Azure Communication Services. The following materials help you plan for the best end-user experience. Ensure you read our calling overview materials first to familiarize yourself. ++- Voice and Video Calling - [Azure Communication Services Calling SDK + overview](calling-sdk-features.md) ++- Phone Calling - [Public Switched Telephone Network (PSTN) integration + concepts](../telephony/telephony-concept.md) ++## Prepare your network and prioritize important network traffic using QoS ++As your users start using Azure Communication Services for calls and meetings, they may experience a caller's voice breaking up or cutting in and out of a call or meeting. Shared video may freeze, or pixelate, or fail altogether. This is due to the IP packets that represent voice and video traffic encountering network congestion and arriving out of sequence or not at all. If this happens (or to prevent it from happening in the first place), use Quality of Service (QoS) by following our +[network recommendations](network-requirements.md). ++With QoS, you prioritize delay-sensitive network traffic (for example, voice or video streams), allowing it to "cut in line" in front of +traffic that is less sensitive (like downloading a new app, where an extra second to download isn't a big deal). QoS identifies and marks all packets in real-time streams using Windows Group Policy Objects and a routing feature called Port-based Access Control Lists, which instructs your network to give voice, video, and screen sharing their own dedicated network bandwidth. ++Ideally, you implement QoS on your internal network while getting ready to roll out your Azure Communication Services solution, but you can do it anytime. If your organization is small enough, you might not need QoS. ++For detailed guidance, see: [Network optimization](network-requirements.md#network-optimization). ++## Prepare your deployment for quality and reliability investigations ++Quality has different definitions depending on the real-time +communication use case and perspective of the end users. There are many +variables that affect the perceived quality of a real-time calling +experience; an improvement in one variable may cause negative changes +in another variable. For example, increasing the frame rate and +resolution of a video call increases network bandwidth utilization +and processing power. ++Therefore, you need to determine your customer’s use cases and +requirements before starting your development. For example, a customer +who needs to monitor dozens of security camera feeds simultaneously may +not need the maximum resolution and frame rate that each video stream +can provide. In this scenario, you could utilize our [Video constraints](video-constraints.md) capability to limit the amount of bandwidth used by each video stream.
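As an illustrative sketch only (the constraint shape under `videoOptions.constraints.send` is an assumption; confirm the exact property names in the [Video constraints](video-constraints.md) article for your SDK version), capping the bandwidth of an outgoing video stream when starting a call might look like this:

```javascript
import { CallClient } from '@azure/communication-calling';
import { AzureCommunicationTokenCredential } from '@azure/communication-common';

const callClient = new CallClient();
const callAgent = await callClient.createCallAgent(
  new AzureCommunicationTokenCredential('<user-access-token>') // placeholder token
);

// Cap outgoing video bandwidth, resolution, and frame rate for constrained networks.
// The property names below (bitrate, frameHeight, frameRate) are assumptions based on
// the video constraints article; verify them against the SDK version you use.
const call = callAgent.startCall(
  [{ communicationUserId: '<acs-user-id>' }], // placeholder callee
  {
    videoOptions: {
      constraints: {
        send: {
          bitrate: { max: 400000 },  // bits per second
          frameHeight: { max: 240 }, // pixels
          frameRate: { max: 15 }     // frames per second
        }
      }
    }
  }
);
```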
++## Implement existing quality and reliability capabilities before deployment ++Before you launch and scale your Azure Communication Services calling +solution, implement the following capabilities to support a high-quality calling experience. These tools help prevent common quality and reliability calling issues from happening and diagnose issues if they occur. Keep in mind that some of this call data isn't created or stored unless you implement these capabilities. ++The following sections detail the tools to implement at different phases of a call: +- **Before a call** +- **During a call** +- **After a call** ++## Before a call +**Pre-call readiness** – By using the pre-call checks ACS provides, + you can learn a user’s connection status before the call and take + proactive action on their behalf. For example, if you learn a user’s + connection is poor, you can suggest they turn off their video before + joining the call to have a better audio connection. ++<!-- This is not possible yet ... ~~You could also + have callers with poor network conditions join from [PSTN (Public + Switched Telephone Network) voice + calling](https://learn.microsoft.com/en-us/azure/communication-services/concepts/telephony/telephony-concept).~~ --> +++<!-- TODO need to add a Permissions section. - filippos for input ++- needs OS level permissions. ++- needs device permission. ++- needs to return true for both Audio and Video. If false then know issues. review the Blog post on this best practice . . . --> +++### Network Diagnostic Tool ++The Network Diagnostic Tool provides a hosted experience for + developers to validate call readiness during development. You can + check if a user’s device and network conditions are optimal for + connecting to the service to ensure a great call experience. The tool + performs diagnostics on the network, devices, and call quality. ++ - By using the network diagnostic tool, you can encourage users to resolve reliability issues and improve their network connection before joining a call. ++++- For more information, please see: [Network Diagnostics Tool](../developer-tools/network-diagnostic.md). + <!-- + Tool](https://azurecommdiagnostics.net/) --> ++++#### Pre-Call Diagnostics API ++Maybe you want to build your own Network Diagnostic Tool or to perform a deeper integration of this tool into your application. If so, you can use the Pre-Call diagnostic APIs that run the Network Diagnostic Tool for the calling SDK. The Pre-Call Diagnostics API lets you customize the experience in your user interface. You can then run the same series of tests that the Network Diagnostic Tool uses to ensure compatibility, connectivity, and device permissions with a test call. You can decide the best way to tell users how to correct issues before calls begin. You can also perform specific checks when troubleshooting quality and reliability issues. ++ <!-- + join their audio from [PSTN (Public Switched Telephone Network) + voice + calling](https://learn.microsoft.com/en-us/azure/communication-services/concepts/telephony/telephony-concept) + before they join.~~ --> ++ - For example, if a user's hardware test has an issue, you can notify the users + involved to manage expectations and make changes before future calls. ++- For more information, please see: [Pre-Call diagnostic](pre-call-diagnostics.md). ++<!-- NOTE - developers can run a separate browser test now, but there's no use case specific to just doing that check we should highlight here. 
++### Browser support ++When user's use unsupported browsers it can be difficult to diagnose call issues after they occur. To optimize call quality check if an application is running a supported browser before user's join to + ensure they can properly support audio and video calling. ++- To learn more, see: [How to verify if your application is running in a web browser supported by Azure Communication Services](../../how-tos/calling-sdk/browser-support.md). --> +++### Conflicting call clients ++Because Azure Communication Services voice and video calls run on web and mobile browsers, your users may have multiple browser tabs running separate instances of the Azure + Communication Services calling SDK. This can happen for various reasons. Maybe the user forgot to close their previous tab. Maybe the user couldn't join a call without a meeting organizer present and they reattempt to open the meeting join URL, which opens a separate mobile browser tab. No matter how a user ends up with multiple call browser tabs at the same time, it causes disruptions to audio and video + behavior on the call they're trying to participate in, referred to as the target call. You should make sure there aren't multiple browser tabs open before a call starts, and also monitor during the whole call lifecycle. You can proactively notify customers to close their excess tabs, or help them join a call correctly with useful messaging if they're unable to join a call initially. ++ - To check if a user has multiple instances + of ACS running in a browser, see: [How to detect if an application using Azure Communication Services' SDK is active in multiple tabs of a browser](../../how-tos/calling-sdk/is-sdk-active-in-multiple-tabs.md). ++## During a call ++**In-call communication** – During a call, a user’s network conditions + can worsen or they may run into reliability and compatibility issues, all of which can result in a poor calling experience. This section helps you apply capabilities to manage issues in a call and communicate with your users. ++### User Facing Diagnostics (UFDs) ++When a user is in a call, it's important to proactively notify them in real-time about issues on their call. User Facing Diagnostics (UFDs) provide real-time flags for issues to the user, such as having their + microphone muted while talking or having poor network quality. You can nudge or act on their behalf. In addition to messaging, you can consider proactive approaches to protect the limited bandwidth a user has. You can tailor your user interface messages to best suit your scenarios. If you find users + don’t consistently turn off their video upon receiving a notification + from you, then you can proactively turn a user’s video off to + prioritize their audio connection, or even hide video capability from + customers in your user interface before they join a call. ++**For example:** ++- If a network issue is identified, you can prompt users to + turn off their video, change networks, or move to a location with a better network condition or connection. +- If a device issue is identified, you can nudge the user to switch + devices. +++- For more information, please see: [User Facing Diagnostics](user-facing-diagnostics.md). 
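As a minimal sketch of reacting to these flags (assuming the `Features.UserFacingDiagnostics` feature and its `diagnosticChanged` events described in the User Facing Diagnostics article), you might subscribe like this:

```javascript
import { Features } from '@azure/communication-calling';

// `call` is assumed to be an established Call object obtained from callAgent.startCall or join.
const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics);

// Surface network-related flags (for example, poor network quality) in your UI.
userFacingDiagnostics.network.on('diagnosticChanged', (diagnosticInfo) => {
    console.log(`Network diagnostic changed: ${diagnosticInfo.diagnostic} = ${diagnosticInfo.value}`);
});

// Surface media-related flags (for example, speaking while muted) in your UI.
userFacingDiagnostics.media.on('diagnosticChanged', (diagnosticInfo) => {
    console.log(`Media diagnostic changed: ${diagnosticInfo.diagnostic} = ${diagnosticInfo.value}`);
});
```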
+++### Video constraints ++Video streams consume large amounts of network bandwidth. If you know your users have limited network bandwidth or poor network conditions, you can control the network usage of a user's video connection with video constraints. When you limit the amount of bandwidth a user's video stream can consume, you can protect the bandwidth needed for good audio quality in poor network environments. ++- To learn more, see: [Video constraints](video-constraints.md). +++### Volume indicator ++Sometimes users can't hear each other: maybe the speaker is too quiet, the listener's device doesn't receive the audio packets, or there's an audio device issue blocking the sound. Users don't know when they're speaking too quietly, or when the other person can't hear them. You can use the input and output indicator to show whether a user’s volume is low or absent and prompt a user to speak louder or investigate an audio device issue through your user interface. ++- For more information, please see: [Add volume indicator to your web calling](../../quickstarts/voice-video-calling/get-started-volume-indicator.md) +++### Detailed media statistics +++Since network conditions can change during a call, users can report poor audio and video quality even if they started the call without issue. Our media statistics give you detailed quality metrics on each inbound and outbound audio, video, and screen share stream. These detailed insights help you monitor calls in progress, show users their network quality status throughout a call, and debug individual calls. ++- These metrics help indicate issues with the media streams the ACS client SDK sends and receives. As an example, you can actively monitor the outgoing video stream's `availableBitrate`, notice a persistent drop below the recommended 1.5 Mbps, and notify the user their video quality is degraded. ++- It's important to note that our Server Log data only give you an overall summary of the call after it ends. Our detailed media statistics provide low-level metrics throughout the call duration for use during the call and afterwards for deeper analysis. +- To learn more, see: [Media quality statistics](media-quality-sdk.md) +++### Optimal video count +During a group call with two or more participants, a user's video quality can fluctuate due to changes in network conditions and their specific hardware limitations. By using the Optimal Video Count API, you can improve user call quality by understanding how many video streams their local endpoint can render at a time without worsening quality. By implementing this feature, you can preserve the call quality and bandwidth of local endpoints that would otherwise attempt to render video poorly. The API exposes the `optimalVideoCount` property, which dynamically changes in response to the network and hardware capabilities of a local endpoint. This information is available at runtime and updates throughout the call, letting you adjust a user’s visual experience as network and hardware conditions change. ++- To implement, visit web platform guidance [Manage Video](/azure/communication-services/how-tos/calling-sdk/manage-video?pivots=platform-web) and review the section titled Remote Video Quality. ++<!-- NOTE - cannot link the URL to a sub-header within a pivoted document --> +### End of Call Survey ++Customer feedback is invaluable. The End of Call Survey provides you with a tool to understand how your end users perceive the overall quality and reliability of your JavaScript / Web SDK calling solution. The survey can be adapted to various survey formats if you already have a survey solution in place. After publishing survey data, you can view the survey results in Azure Monitor for analysis and improvements. 
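As a hedged sketch (assuming the `Features.CallSurvey` feature and `submitSurvey` method shown in the End of Call Survey tutorial; confirm the exact survey shape for your SDK version), submitting a basic overall rating might look like this:

```javascript
import { Features } from '@azure/communication-calling';

// `call` is assumed to be the Call object that is ending or has just ended.
// The rating shape (overallRating.score on a 1-5 scale) is an assumption taken from
// the End of Call Survey tutorial; verify it for your SDK version.
call.feature(Features.CallSurvey)
    .submitSurvey({ overallRating: { score: 4 } })
    .then(() => console.log('End of Call Survey submitted'))
    .catch((e) => console.error('Failed to submit End of Call Survey', e));
```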
Azure Communication Services also uses the survey API results to monitor and improve your quality and reliability. ++- To learn more, see: [End of Call Survey overview](end-of-call-survey-concept.md) +- To implement, see: [Tutorial: Use End of Call Survey to collect user feedback](../../tutorials/end-of-call-survey-tutorial.md) ++++## After a call +**Monitor and troubleshoot call quality and reliability** - Before you release and scale your Azure Communication Services calling +solution, implement these quality and reliability monitoring capabilities +to ensure you're collecting the available logs and metrics. This call data isn't stored unless you implement these capabilities. +### Call Summary and Call Diagnostics Logs ++After a call ends, call logs are created to help you investigate individual calls and monitor your overall call quality and reliability. The following fields provide useful insight into users' call quality and reliability. +++- For more information, see: [Azure Communication Services Voice Calling and Video Calling logs](../analytics/logs/voice-and-video-logs.md). +++<!-- #### sdkVersion ++- Allows you to monitor the deployment of client versions. See our guidance <u>on **Client Versions**</u> to learn how old client versions can impact quality --> ++#### Call errors ++- The `participantEndReason` is the reason a participant ends a connection. This data helps you identify common trends leading to unplanned call ends (when relevant). See our guidance on [Calling SDK error codes](../troubleshooting-info.md#calling-sdk-error-codes) +++<!-- #### transportType ++- A UDP connection is better than a TCP connection. See our guidance on **<u>UDP vs. TCP</u>** to learn how TCP connections can result in poor quality. --> ++<!-- #### <span class="mark">DRAFT UIHint later – what is added quality value with Device, skd, custom tag?</span> --> ++#### Summarized Media Quality logs ++- These three fields give you insight into the average media quality during the call. + <!-- See our guidance on **<u>Media Quality</u>** to learn more. --> ++ - `roundTripTimeAvg` ++ - `jitterAvg` ++ - `packetLossRateAvg` +++### Start collecting call logs ++Review this documentation to start collecting call logs: [Enable logs via Diagnostic Settings in Azure Monitor](../analytics/enable-logging.md) ++- Choose the category group "allLogs" and choose the destination detail of "Send to Log Analytics workspace" in order to view and analyze the data in Azure Monitor. ++<!-- To enable call logs review this documentation + [Enable and Access Call Summary and Call Diagnostic Logs](../call-logs-azure-monitor-access.md). Then follow these steps: [Enable logs via Diagnostic Settings in Azure Monitor](../analytics/enable-logging.md) --> ++### Examine call quality with Voice and Video Insights Preview ++Once you have enabled logs, you can view call insights in your Azure Resource using visualization examples: [Voice and video Insights](../analytics/insights/voice-and-video-insights.md) ++- You can modify the existing workbooks or even create your own: [Azure Workbooks](../../../azure-monitor/visualize/workbooks-overview.md) ++- For examples of deeper suggested analysis, see our [Query call logs](../analytics/query-call-logs.md) ++<!-- #### Detailed Media Statistics --> +++#### End of Call Survey +Once you enable diagnostic settings to capture your survey data, you can use our sample [call log queries](../analytics/query-call-logs.md) in Azure Log Analytics to analyze your users' perceived quality experience. 
User feedback can show you call issues you didn't know you had and help you prioritize your quality improvements. ++### Analyze your call data +By collecting call data such as Media Statistics, User Facing Diagnostics, and pre-call API information, you can review calls with + poor quality to conduct root cause analysis when troubleshooting issues. For example, a user may have an hour-long call and report poor audio at one point in the call. ++The call may have fired a User Facing Diagnostic indicating a severe problem with the incoming or outgoing media stream quality. By storing the [detailed media statistics](media-quality-sdk.md) from the call, you can review when the UFD occurred to see if there were high levels of packet loss, jitter, or latency around that time, indicating a poor network condition. You can then explore whether the network was impacted by an external client's unmanaged network, unnecessary network traffic due to improper Quality of Service (QoS) network prioritization policies, or an unnecessary Virtual Private Network (VPN), for example. ++> [!NOTE] +> As a rule, we recommend prioritizing a user’s audio connection bandwidth before their video connection and both audio and video before other network traffic. When a network is unable to support both audio and video, you can proactively disable a user’s video or nudge a user to disable their video. ++### Other considerations +<!-- + - [Azure logs and metrics for Teams external users](../interop/guest/monitor-logs-metrics.md) --> ++- If you don't have access to your customer’s Azure portal to view data tied to their Azure Resource ID, you can query their workspaces to improve quality on their behalf: + - [Create a log query across multiple workspaces and apps in Azure Monitor](../../../azure-monitor/logs/cross-workspace-query.md) +++## Next steps ++- To continue learning about other best practices, see: [Best practices: Azure Communication Services calling SDKs](../best-practices.md) ++- Learn how to use the Log Analytics workspace, see: [Log Analytics Tutorial](../../../../articles/azure-monitor/logs/log-analytics-tutorial.md) ++- Create your own queries in Log Analytics, see: [Get Started Queries](../../../../articles/azure-monitor/logs/get-started-queries.md) +++<!-- Comment this out - add to the toc.yml file at row 583. ++ - name: Monitor and manage call quality + items: + - name: Manage call quality + href: concepts/voice-video-calling/manage-call-quality.md + displayName: diagnostics, Survey, feedback, quality, reliability, users, end, call, quick + - name: End of Call Survey + href: concepts/voice-video-calling/end-of-call-survey-concept.md + displayName: diagnostics, Survey, feedback, quality, reliability, users, end, call, quick + --> |
container-apps | Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md | The *Is Configurable* column in the following tables denotes a feature maximum m | Feature | Scope | Default | Is Configurable | Remarks | |--|--|--|--|--|-| Environments | Region | Up to 15 | Yes | Limit up to 15 environments per subscription, per region.<br><br>For example, if you deploy to three regions you can get up to 45 environments for a single subscription. | +| Environments | Region | Up to 15 | Yes | Limit up to 15 environments per subscription, per region. | +| Environments | Global | Up to 20 | Yes | Limit up to 20 environments per subscription across all regions. | | Container Apps | Environment | Unlimited | n/a | | | Revisions | Container app | 100 | No | | | Replicas | Revision | 300 | Yes | | |
cosmos-db | Get Started Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-started-change-data-capture.md | Now create and configure a source to flow data from the Azure Cosmos DB account' | Capture intermediate updates | Enable this option if you would like to capture the history of changes to items including the intermediate changes between change data capture reads. | | Capture Deletes | Enable this option to capture user-deleted records and apply them on the Sink. Deletes can't be applied on Azure Data Explorer and Azure Cosmos DB Sinks. | | Capture Transactional store TTLs | Enable this option to capture Azure Cosmos DB transactional store (time-to-live) TTL deleted records and apply on the Sink. TTL-deletes can't be applied on Azure Data Explorer and Azure Cosmos DB sinks. |-| Batchsize in bytes | Specify the size in bytes if you would like to batch the change data capture feeds | +| Batchsize in bytes | Despite the name, this setting is specified in **gigabytes**. Specify the size in gigabytes if you would like to batch the change data capture feeds | | Extra Configs | Extra Azure Cosmos DB analytical store configs and their values. (ex: `spark.cosmos.allowWhiteSpaceInFieldNames -> true`) | ### Working with source options After a data flow has been published, you can add a new pipeline to move and tra > [!NOTE] > The initial cluster startup time may take up to three minutes. To avoid cluster startup time in the subsequent change data capture executions, configure the Dataflow cluster **Time to live** value. For more information about the integration runtime and TTL, see [integration runtime in Azure Data Factory](../data-factory/concepts-integration-runtime.md). +## Concurrent jobs ++The batch size in the source options, or situations when the sink is slow to ingest the stream of changes, may cause the execution of multiple jobs at the same time. To avoid this situation, set the **Concurrency** option to 1 in the pipeline settings to make sure that new executions are not triggered until the current execution completes. ++ ## Next steps - Review the [overview of Azure Cosmos DB analytical store](analytical-store-introduction.md) |
cosmos-db | How To Setup Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md | description: Learn how to configure role-based access control with Azure Active Previously updated : 04/14/2023 Last updated : 07/12/2023 |
cosmos-db | Best Practice Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practice-dotnet.md | services.AddSingleton<CosmosClient>(serviceProvider => ## Best practices when using Gateway mode -Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value. +Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `CosmosClientOptions.GatewayModeMaxConnectionLimit` to a higher value. ## Best practices for write-heavy workloads For workloads that have heavy create payloads, set the `EnableContentResponseOnW > [!IMPORTANT] > Setting `EnableContentResponseOnWrite` to `false` will also disable the response from a trigger operation. +## Best practices for multi-tenant applications ++Applications that distribute usage across multiple tenants where each tenant is represented by a different database, container, or partition key **within the same Azure Cosmos DB account** should use a single client instance. A single client instance can interact with all the databases, containers, and partition keys within an account, and it's best practice to use the [singleton pattern](performance-tips-dotnet-sdk-v3.md#sdk-usage). ++However, when each tenant is represented by a **different Azure Cosmos DB account**, it's required to create a separate client instance per account. The singleton pattern still applies for each client (one client for each account for the lifetime of the application), but if the volume of tenants is high, the number of clients can be difficult to manage. [Connections](sdk-connection-modes.md#direct-mode) can increase beyond the limits of the compute environment and cause [connectivity issues](conceptual-resilient-sdk-applications.md#client-instances-and-connections). ++It's recommended in these cases to: ++* Understand the limitations of the compute environment (CPU and connection resources). We recommend using VMs with at least 4-cores and 8-GB memory whenever possible. +* Based on the limitations of the compute environment, determine the number of client instances (and therefore number of tenants) a single compute instance can handle. You can [estimate the number of connections](sdk-connection-modes.md#volume-of-connections) that will be opened per client depending on the connection mode chosen. +* Evaluate tenant distribution across instances. 
If each compute instance can successfully handle a certain limited amount of tenants, load balancing and routing of tenants to different compute instances would allow for scaling as the number of tenants grow. +* For sparse workloads, consider using a Least Frequently Used cache as the structure to hold the client instances and dispose clients for tenants that haven't been accessed within a time window. One option in .NET is [MemoryCacheEntryOptions](/dotnet/api/microsoft.extensions.caching.memory.memorycacheentryoptions), where [RegisterPostEvictionCallback](/dotnet/api/microsoft.extensions.caching.memory.memorycacheentryextensions.registerpostevictioncallback) can be used to **dispose inactive clients** and [SetSlidingExpiration](/dotnet/api/microsoft.extensions.caching.memory.memorycacheentryextensions.setslidingexpiration) can be used to define the maximum time to hold inactive connections. +* Evaluate using [Gateway mode](sdk-connection-modes.md#available-connectivity-modes) to reduce the number of network connections. +* When using [Direct mode](sdk-connection-modes.md#direct-mode) consider adjusting [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout) and [CosmosClientOptions.PortReuseMode](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode) on the [direct mode configuration](tune-connection-configurations-net-sdk-v3.md) to close unused connections and keep the [volume of connections](sdk-connection-modes.md#volume-of-connections) under control. + ## Next steps For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md). For a sample application that's used to evaluate Azure Cosmos DB for high-perfor To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md). Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.-* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) |
cosmos-db | Performance Tips Dotnet Sdk V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-dotnet-sdk-v3.md | When it's running on the TCP protocol, the client optimizes for latency by using In scenarios where you have sparse access, and if you notice a higher connection count when compared to Gateway mode access, you can: * Configure the [CosmosClientOptions.PortReuseMode](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode) property to `PrivatePortPool` (effective with framework versions 4.6.1 and later and .NET Core versions 2.0 and later). This property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints.-* Configure the [CosmosClientOptions.IdleConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout) property as greater than or equal to 10 minutes. The recommended values are from 20 minutes to 24 hours. +* Configure the [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout) property as greater than or equal to 10 minutes. The recommended values are from 20 minutes to 24 hours. <a id="same-region"></a> Middle-tier applications that don't consume responses directly from the SDK but **Use a singleton Azure Cosmos DB client for the lifetime of your application** -Each `CosmosClient` instance is thread-safe and performs efficient connection management and address caching when it operates in Direct mode. To allow efficient connection management and better SDK client performance, we recommend that you use a single instance per `AppDomain` for the lifetime of the application. +Each `CosmosClient` instance is thread-safe and performs efficient connection management and address caching when it operates in Direct mode. To allow efficient connection management and better SDK client performance, we recommend that you use a single instance per `AppDomain` for the lifetime of the application for each account your application interacts with. ++For multi-tenant applications handling multiple accounts, see the [related best practices](best-practice-dotnet.md#best-practices-for-multi-tenant-applications). When you're working on Azure Functions, instances should also follow the existing [guidelines](../../azure-functions/manage-connections.md#static-clients) and maintain a single instance. |
cosmos-db | Iif | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/iif.md | Evaluates a boolean expression and returns the result of one of two expressions ## Syntax ```sql-IIF(<bool_expr>, <true_expr>, <false_expr>) +IIF(<bool_expr>, <true_expr>, <not_true_expr>) ``` ## Arguments IIF(<bool_expr>, <true_expr>, <false_expr>) | | | | **`bool_expr`** | A boolean expression, which is evaluated and used to determine which of the two supplemental expressions to use. | | **`true_expr`** | The expression to return if the boolean expression evaluated to `true`. |-| **`false_expr`** | The expression to return if the boolean expression evaluated to `false`. | +| **`not_true_expr`** | The expression to return if the boolean expression evaluated to **NOT** `true`. | ## Return types This first example evaluates a static boolean expression and returns one of two ```sql SELECT VALUE { evalTrue: IIF(true, 123, 456),- evalFalse: IIF(false, 123, 456) + evalFalse: IIF(false, 123, 456), + evalNumberNotTrue: IIF(123, 123, 456), + evalStringNotTrue: IIF("ABC", 123, 456), + evalArrayNotTrue: IIF([1,2,3], 123, 456), + evalObjectNotTrue: IIF({"name": "Alice", "age": 20}, 123, 456) } ``` SELECT VALUE { [ { "evalTrue": 123,- "evalFalse": 456 + "evalFalse": 456, + "evalNumberNotTrue": 456, + "evalStringNotTrue": 456, + "evalArrayNotTrue": 456, + "evalObjectNotTrue": 456 } ] ``` |
cost-management-billing | Reservation Exchange Policy Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-exchange-policy-changes.md | You purchase a three-year compute reservation after on or after January 1, 2024. You can always trade in the reservation for a savings plan. There's no time limit for trade-ins. +### Scenario 5 ++You purchase a three-year compute reservation with a quantity of 10 before January 2024. You exchange two quantities of the compute reservation on or after January 1, 2024. You can still exchange the remaining eight quantities on the original reservation after January 1, 2024. ++You can always trade in the reservation for a savings plan. There's no time limit for trade-ins. + ## Next steps - Learn more about [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md).-- Learn more about [Self-service trade-in for Azure savings plans](../savings-plan/reservation-trade-in.md).+- Learn more about [Self-service trade-in for Azure savings plans](../savings-plan/reservation-trade-in.md). |
data-factory | Better Understand Different Integration Runtime Charges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/better-understand-different-integration-runtime-charges.md | |
data-factory | Concepts Data Flow Udf | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-udf.md | |
data-factory | Connector Amazon Rds For Oracle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-rds-for-oracle.md | |
data-factory | Connector Amazon Rds For Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-rds-for-sql-server.md | |
data-factory | Connector Amazon Redshift | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-redshift.md | |
data-factory | Connector Asana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-asana.md | |
data-factory | Connector Azure Cosmos Db Mongodb Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md | |
data-factory | Connector Azure Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-explorer.md | |
data-factory | Connector Azure Database For Mariadb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mariadb.md | |
data-factory | Connector Azure Database For Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-postgresql.md | |
data-factory | Connector Azure Databricks Delta Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-databricks-delta-lake.md | |
data-factory | Connector Azure File Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-file-storage.md | |
data-factory | Connector Azure Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-search.md | |
data-factory | Connector Azure Table Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-table-storage.md | |
data-factory | Connector Dataworld | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dataworld.md | |
data-factory | Connector Db2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-db2.md | |
data-factory | Connector Drill | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-drill.md | |
data-factory | Connector Google Adwords | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-adwords.md | |
data-factory | Connector Google Bigquery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery.md | |
data-factory | Connector Mariadb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mariadb.md | |
data-factory | Connector Odata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odata.md | |
data-factory | Connector Salesforce Service Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-service-cloud.md | |
data-factory | Connector Salesforce | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md | The following properties are supported for the Salesforce linked service. } ``` +**Example: Store credentials in Key Vault, as well as environmentUrl and username** ++Note that by doing so, you will no longer be able to use the UI to edit settings. The ***Specify dynamic contents in JSON format*** checkbox will be checked, and you will have to edit this configuration entirely by hand. The advantage is that you can derive all configuration settings from the key vault instead of parameterizing anything here. ++```json +{ + "name": "SalesforceLinkedService", + "properties": { + "type": "Salesforce", + "typeProperties": { + "environmentUrl": { + "type": "AzureKeyVaultSecret", + "secretName": "<secret name of environment URL in AKV>", + "store": { + "referenceName": "<Azure Key Vault linked service>", + "type": "LinkedServiceReference" + } + }, + "username": { + "type": "AzureKeyVaultSecret", + "secretName": "<secret name of username in AKV>", + "store": { + "referenceName": "<Azure Key Vault linked service>", + "type": "LinkedServiceReference" + } + }, + "password": { + "type": "AzureKeyVaultSecret", + "secretName": "<secret name of password in AKV>", + "store":{ + "referenceName": "<Azure Key Vault linked service>", + "type": "LinkedServiceReference" + } + }, + "securityToken": { + "type": "AzureKeyVaultSecret", + "secretName": "<secret name of security token in AKV>", + "store":{ + "referenceName": "<Azure Key Vault linked service>", + "type": "LinkedServiceReference" + } + } + }, + "connectVia": { + "referenceName": "<name of Integration Runtime>", + "type": "IntegrationRuntimeReference" + } + } +} +``` + ## Dataset properties For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. This section provides a list of properties supported by the Salesforce dataset. To learn details about the properties, check [Lookup activity](control-flow-look ## Next steps-For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats). +For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats). |
data-factory | Connector Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md | |
data-factory | Connector Square | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-square.md | |
data-factory | Connector Troubleshoot Dynamics Dataverse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-dynamics-dataverse.md | |
data-factory | Connector Troubleshoot Sap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-sap.md | |
data-factory | Connector Twilio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-twilio.md | |
data-factory | Continuous Integration Delivery Sample Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-sample-script.md | Install the latest Azure PowerShell modules by following instructions in [How to >[!WARNING] >Make sure to use **PowerShell Core** in ADO task to run the script -## Pre- and post-deployment script +## Pre- and post-deployment script The sample scripts to stop/ start triggers and update global parameters during release process (CICD) are located in the [Azure Data Factory Official GitHub page](https://github.com/Azure/Azure-DataFactory/tree/main/SamplesV2/ContinuousIntegrationAndDelivery). > [!NOTE] The sample scripts to stop/ start triggers and update global parameters during r The following sample script can be used to stop triggers before deployment and restart them afterward. The script also includes code to delete resources that have been removed. Save the script in an Azure DevOps git repository and reference it via an Azure PowerShell task using the latest Azure PowerShell version. -When running a pre-deployment script, you will need to specify a variation of the following parameters in the **Script Arguments** field. +When running a predeployment script, you need to specify a variation of the following parameters in the **Script Arguments** field. `-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $true -deleteDeployment $false` -When running a post-deployment script, you will need to specify a variation of the following parameters in the **Script Arguments** field. +When running a postdeployment script, you need to specify a variation of the following parameters in the **Script Arguments** field. `-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $false -deleteDeployment $true` When running a post-deployment script, you will need to specify a variation of t :::image type="content" source="media/continuous-integration-delivery/continuous-integration-image11.png" alt-text="Azure PowerShell task"::: +## Script execution and parameters - YAML Pipelines +The following YAML code executes a script that can be used to stop triggers before deployment and restart them afterward. The script also includes code to delete resources that have been removed. If you're following the steps outlined in [New CI/CD Flow](continuous-integration-delivery-improvements.md), this script is exported as part of the artifact created via the npm publish package. 
+
+### Stop ADF Triggers
+```
+  - task: AzurePowerShell@5
+    displayName: Stop ADF Triggers
+    inputs:
+      scriptType: 'FilePath'
+      ConnectedServiceNameARM: AzureDevServiceConnection
+      scriptPath: ../ADFTemplates/PrePostDeploymentScript.ps1
+      ScriptArguments: -armTemplate "<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $true -deleteDeployment $false
+      errorActionPreference: stop
+      FailOnStandardError: False
+      azurePowerShellVersion: azurePowerShellVersion
+      preferredAzurePowerShellVersion: 3.1.0
+      pwsh: False
+      workingDirectory: ../
+```
+
+### Start ADF Triggers
+```
+  - task: AzurePowerShell@5
+    displayName: Start ADF Triggers
+    inputs:
+      scriptType: 'FilePath'
+      ConnectedServiceNameARM: AzureDevServiceConnection
+      scriptPath: ../ADFTemplates/PrePostDeploymentScript.ps1
+      ScriptArguments: -armTemplate "<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $false -deleteDeployment $true
+      errorActionPreference: stop
+      FailOnStandardError: False
+      azurePowerShellVersion: azurePowerShellVersion
+      preferredAzurePowerShellVersion: 3.1.0
+      pwsh: False
+      workingDirectory: ../
+```
## Next steps |
data-factory | Control Flow Lookup Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-lookup-activity.md | |
data-factory | Create Self Hosted Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md | |
data-factory | Data Access Strategies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-access-strategies.md | |
data-factory | Data Factory Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md | |
data-factory | Data Factory Tutorials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-tutorials.md | |
data-factory | Data Factory Ux Troubleshoot Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-ux-troubleshoot-guide.md | |
data-factory | Data Flow Aggregate Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-aggregate-functions.md | |
data-factory | Data Flow Aggregate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-aggregate.md | |
data-factory | Data Flow Array Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-array-functions.md | |
data-factory | Data Flow Assert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-assert.md | |
data-factory | Data Flow Cached Lookup Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-cached-lookup-functions.md | |
data-factory | Data Flow Cast | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-cast.md | |
data-factory | Data Flow Conditional Split | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conditional-split.md | |
data-factory | Data Flow Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-create.md | |
data-factory | Data Flow Date Time Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-date-time-functions.md | |
data-factory | Data Flow Derived Column | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-derived-column.md | |
data-factory | Data Flow Exists | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-exists.md | |
data-factory | Data Flow Expression Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expression-functions.md | |
data-factory | Data Flow External Call | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-external-call.md | |
data-factory | Data Flow Filter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-filter.md | |
data-factory | How To Create Event Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-event-trigger.md | This section shows you how to create a storage event trigger within the Azure Da 1. Select whether or not your trigger ignores blobs with zero bytes. -1. After you configure you trigger, click on **Next: Data preview**. This screen shows the existing blobs matched by your storage event trigger configuration. Make sure you've specific filters. Configuring filters that are too broad can match a large number of files created/deleted and may significantly impact your cost. Once your filter conditions have been verified, click **Finish**. +1. After you configure your trigger, click on **Next: Data preview**. This screen shows the existing blobs matched by your storage event trigger configuration. Make sure you've specified filters. Configuring filters that are too broad can match a large number of files created/deleted and may significantly impact your cost. Once your filter conditions have been verified, click **Finish**. :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-3.png" alt-text="Screenshot of storage event trigger preview page."::: This section shows you how to create a storage event trigger within the Azure Da In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder _event-testing_ in the container _sample-data_. The **folderPath** and **fileName** properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path sample-data/event-testing, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `moviesDB.csv`. These values are mapped, in the example, to the pipeline parameters `sourceFolder` and `sourceFile`, which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile` respectively. - > [!NOTE] - > If you are creating your pipeline and trigger in [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md), you must use `@trigger().outputs.body.fileName` and `@trigger().outputs.body.folderPath` as parameters. Those two properties capture blob information. Use those properties instead of using `@triggerBody().fileName` and `@triggerBody().folderPath`. - 1. Click **Finish** once you are done. ## JSON schema |
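To make the mapping concrete, here's a minimal sketch of how a storage event trigger can pass these values to a pipeline in its JSON definition. The trigger name, pipeline name, scope, and paths below are placeholders, not values taken from the article.

```json
{
    "name": "<storage event trigger name>",
    "properties": {
        "type": "BlobEventsTrigger",
        "typeProperties": {
            "blobPathBeginsWith": "/sample-data/blobs/event-testing/",
            "blobPathEndsWith": ".csv",
            "ignoreEmptyBlobs": true,
            "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
            "events": [ "Microsoft.Storage.BlobCreated" ]
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "<pipeline name>",
                    "type": "PipelineReference"
                },
                "parameters": {
                    "sourceFolder": "@triggerBody().folderPath",
                    "sourceFile": "@triggerBody().fileName"
                }
            }
        ]
    }
}
```

Inside the pipeline, those values are then available as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile`, as described above.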
data-factory | How To Send Notifications To Teams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-notifications-to-teams.md | Before you can send notifications to Teams from your pipelines, you must create "targets": [ { "os": "default",- "uri": "@{concat('https://synapse.azure.com/monitoring/pipelineruns/',pipeline().parameters.runId,'?factory=/subscriptions/',pipeline().parameters.subscription,'/resourceGroups/',pipeline().parameters.resourceGroup,'/providers/Microsoft.DataFactory/factories/',pipeline().DataFactory)}" + "uri": "@{concat('https://web.azuresynapse.net/monitoring/pipelineruns/',pipeline().parameters.runId,'?workspace=%2Fsubscriptions%2F',pipeline().parameters.subscription,'%2FresourceGroups%2F',pipeline().parameters.resourceGroup,'%2Fproviders%2FMicrosoft.Synapse%2Fworkspaces%2F',pipeline().DataFactory)}" } ] } |
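The fragment above shows only the `targets` array whose monitoring deep link was corrected. For context, a `targets` array of this shape normally sits inside an `OpenUri` action of an Office 365 connector card; the wrapper below is an assumption for orientation, not structure quoted from the article, and the action name is hypothetical.

```json
{
    "@type": "OpenUri",
    "name": "View pipeline run",
    "targets": [
        {
            "os": "default",
            "uri": "<deep link built with the concat() expression shown above>"
        }
    ]
}
```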
data-factory | Monitor Visually | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-visually.md | |
data-factory | Transform Data Synapse Notebook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-synapse-notebook.md | |
data-factory | Transform Data Synapse Spark Job Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-synapse-spark-job-definition.md | |
data-factory | Tutorial Incremental Copy Change Data Capture Feature Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md | In this tutorial, you create a pipeline that performs the following operations: If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. ## Prerequisites-* **Azure SQL Database Managed Instance**. You use the database as the **source** data store. If you don't have an Azure SQL Database Managed Instance, see the [Create an Azure SQL Database Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart) article for steps to create one. +* **Azure SQL Managed Instance**. You use the database as the **source** data store. If you don't have an Azure SQL Managed Instance, see the [Create an Azure SQL Database Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart) article for steps to create one. * **Azure Storage account**. You use the blob storage as the **sink** data store. If you don't have an Azure storage account, see the [Create a storage account](../storage/common/storage-account-create.md) article for steps to create one. Create a container named **raw**. ### Create a data source table in Azure SQL Database If you don't have an Azure subscription, create a [free](https://azure.microsoft EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'customers', - @role_name = 'null', + @role_name = NULL, @supports_net_changes = 1 ``` 5. Insert data into the customers table by running the following command: |
data-factory | Tutorial Pipeline Failure Error Handling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-failure-error-handling.md | Last updated 01/09/2023 ## Conditional paths -Azure Data Factory and Synapse Pipeline orchestration allows conditional logic and enables user to take different based upon outcomes of a previous activity. Using different paths allow users to build robust pipelines and incorporates error handling in ETL/ELT logic. In total, we allow four conditional paths, +Azure Data Factory and Synapse Pipeline orchestration allows conditional logic and enables the user to take a different path based upon outcomes of a previous activity. Using different paths allows users to build robust pipelines and incorporate error handling in ETL/ELT logic. In total, we allow four conditional paths: | Name | Explanation | | | | |
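As a sketch of how one of these conditional paths is expressed in pipeline JSON, an activity can declare an Upon Failure dependency through `dependsOn` and `dependencyConditions`. The activity names and alerting endpoint below are hypothetical.

```json
{
    "name": "NotifyOnCopyFailure",
    "type": "WebActivity",
    "dependsOn": [
        {
            "activity": "CopyData",
            "dependencyConditions": [ "Failed" ]
        }
    ],
    "typeProperties": {
        "url": "<alerting endpoint>",
        "method": "POST",
        "body": "CopyData activity failed"
    }
}
```

Swapping `Failed` for `Succeeded`, `Completed`, or `Skipped` selects the other conditional paths.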
defender-for-cloud | Defender For Apis Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-manage.md | -1. Next to the API you want to offboard from Defender for APIs, select the ellipsis (...) > **Remove**. +1. Next to the API you want to offboard from Defender for APIs, select the ***ellipsis*** (...) > **Remove**. :::image type="content" source="media/defender-for-apis-manage/api-remove.png" alt-text="Screenshot of the review API information in Cloud Security Explorer." lightbox="media/defender-for-apis-manage/api-remove.png"::: +## Query your APIs with the cloud security explorer ++You can use the cloud security explorer to run graph-based queries on the cloud security graph. By utilizing the cloud security explorer, you can proactively identify potential security risks to your APIs. ++There are three types of APIs you can query: ++- **API Collections** - A group of all types of API collections. ++- **API Endpoints** - A group of all types of API endpoints. ++- **API Management** services - API management services are platforms that provide tools and infrastructure for managing APIs, typically through a web-based interface. They often include features such as: API gateway, API portal, API analytics and API security. ++**To query APIs in the cloud security graph**: ++1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Navigate to **Microsoft Defender for Cloud** > **Cloud Security Explorer**. ++1. From the drop down menu, select APIs. ++ :::image type="content" source="media/defender-for-apis-manage/cloud-explorer-apis.png" alt-text="Screenshot of Defender for Cloud's cloud security explorer that shows how to select APIs." lightbox="media/defender-for-apis-manage/cloud-explorer-apis.png"::: ++1. Select all relevant options. ++1. Select **Done**. ++1. Add any other conditions. ++1. Select **Search**. +You can learn more about how to [build queries with cloud security explorer](how-to-manage-cloud-security-explorer.md). ## Next steps |
defender-for-cloud | Express Configuration Sql Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/express-configuration-sql-commands.md | This article contains the PowerShell wrapper for SQL vulnerability assessment ex You should make a local copy of the script and save the file with the following file name `SqlVulnerabilityAssessmentCommands.psm1`. - After you have made a local copy of the wrapper you should use the [Express configuration PowerShell commands reference](express-configuration-powershell-commands.md). ## SqlVulnerabilityAssessmentCommands.psm1 function Invoke-SqlVulnerabilityAssessmentScan([parameter(mandatory)] [string] $ Content : {"operation":"ExecuteDatabaseVulnerabilityAssessmentScan","startTime":"2023-05-15T10:58:48.367Z"} #> if ($DatabaseName -eq 'master') {- $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/sqlVulnerabilityAssessments/defualt/initiateScan?api-version=2022-02-01-preview&systemDatabaseName=master" + $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/sqlVulnerabilityAssessments/default/initiateScan?api-version=2022-02-01-preview&systemDatabaseName=master" } else {- $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/databases/$DatabaseName/sqlVulnerabilityAssessments/defualt/initiateScan?api-version=2022-02-01-preview" + $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/databases/$DatabaseName/sqlVulnerabilityAssessments/default/initiateScan?api-version=2022-02-01-preview" } SendRestRequest -Method "Post" -Uri $Uri } |
defender-for-cloud | Powershell Sample Vulnerability Assessment Azure Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-azure-sql.md | function SetSqlVulnerabilityAssessmentBaselineOnSystemDatabase($SubscriptionId, } function RunSqlVulnerabilityAssessmentScanOnUserDatabase($SubscriptionId, $ResourceGroupName, $ServerName, $DatabaseName) {- $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/databases/$DatabaseName/sqlVulnerabilityAssessments/defualt/initiateScan?api-version=2022-02-01-preview" + $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/databases/$DatabaseName/sqlVulnerabilityAssessments/default/initiateScan?api-version=2022-02-01-preview" SendRestRequest -Method "Post" -Uri $Uri } function RunSqlVulnerabilityAssessmentScanOnSystemDatabase($SubscriptionId, $ResourceGroupName, $ServerName, $DatabaseName) {- $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/sqlVulnerabilityAssessments/defualt/initiateScan?api-version=2022-02-01-preview&systemDatabaseName=$DatabaseName" + $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/sqlVulnerabilityAssessments/default/initiateScan?api-version=2022-02-01-preview&systemDatabaseName=$DatabaseName" SendRestRequest -Method "Post" -Uri $Uri } function GetSqlVulnerabilityAssessmentScanOnUserDatabase($SubscriptionId, $ResourceGroupName, $ServerName, $DatabaseName) {- $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/databases/$DatabaseName/sqlVulnerabilityAssessments/defualt/scans/latest?api-version=2022-02-01-preview" + $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/databases/$DatabaseName/sqlVulnerabilityAssessments/default/scans/latest?api-version=2022-02-01-preview" return SendRestRequest -Method "Get" -Uri $Uri } function GetSqlVulnerabilityAssessmentScanOnSystemDatabase($SubscriptionId, $ResourceGroupName, $ServerName, $DatabaseName) {- $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/sqlVulnerabilityAssessments/defualt/scans/latest?api-version=2022-02-01-preview&systemDatabaseName=$DatabaseName" + $Uri = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/sqlVulnerabilityAssessments/default/scans/latest?api-version=2022-02-01-preview&systemDatabaseName=$DatabaseName" return SendRestRequest -Method "Get" -Uri $Uri } if ($haveVaSetting) { LogMessage -LogMessage "Finished to fetch baseline for $($database.DatabaseName) database." } catch {- LogError "An error occurred: $($_.Exception.Message) while hndling $($database.DatabaseName) database." + LogError "An error occurred: $($_.Exception.Message) while handling $($database.DatabaseName) database." $canRemoveVa = $false } if ($haveVaSetting) { LogMessage -LogMessage "Finished to fetch baseline for database master." } catch {- LogError "An error occurred: $($_.Exception.Message) while hndling master database." + LogError "An error occurred: $($_.Exception.Message) while handling master database." 
$canRemoveVa = $false } if ($haveVaSetting) { foreach ($database in $Databases) { $i += 1 $completed = ($i/$Databases.count) * 100- Write-Progress -Activity "Proccessing" -Status "Progress:" -PercentComplete $completed + Write-Progress -Activity "Processing" -Status "Progress:" -PercentComplete $completed LogMessage -LogMessage "Clear Vulnerability Assessment setting for '$($database.DatabaseName)' database." Clear-AzSqlDatabaseVulnerabilityAssessmentSetting -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $database.DatabaseName } $i = 0 foreach ($database in $Databases) { $i += 1 $completed = ($i/$Databases.count) * 100- Write-Progress -Activity "Proccessing" -Status "Progress:" -PercentComplete $completed + Write-Progress -Activity "Processing" -Status "Progress:" -PercentComplete $completed LogMessage -LogMessage "Run scan on '$($database.DatabaseName)' database." Retry -action { RunSqlVulnerabilityAssessmentScanOnUserDatabase -SubscriptionId $SubscriptionId -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $database.DatabaseName } } $i = 0 foreach ($database in $Databases) { $i += 1 $completed = ($i/$Databases.count) * 100- Write-Progress -Activity "Proccessing" -Status "Progress:" -PercentComplete $completed + Write-Progress -Activity "Processing" -Status "Progress:" -PercentComplete $completed try { LogMessage -LogMessage "Waiting for results for $($database.DatabaseName) database." Retry -action { GetSqlVulnerabilityAssessmentScanOnUserDatabase -SubscriptionId $SubscriptionId -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $database.DatabaseName } else { foreach ($database in $Databases) { $i += 1 $completed = ($i/$Databases.count) * 100- Write-Progress -Activity "Proccessing" -Status "Progress:" -PercentComplete $completed + Write-Progress -Activity "Processing" -Status "Progress:" -PercentComplete $completed try { if (![string]::IsNullOrEmpty($baselines[$database.DatabaseName])) { LogMessage -LogMessage "Applying baseline for '$($database.DatabaseName)' database." if ($successMigration.Count -eq 0) { LogMessage -LogMessage "You can revert back to classic configuration. For more information: https://learn.microsoft.com/en-us/azure/defender-for-cloud/sql-azure-vulnerability-assessment-manage?tabs=express#revert-back-to-the-classic-configuration" } elseif ($failedMigration.Count -eq 0) {- LogMessage -LogMessage "The migration process completed successfuly." + LogMessage -LogMessage "The migration process completed successfully." } else {- LogMessage -LogMessage "The migration process completed. The migration was seccessful for $($successMigration -join ',') and unseccessful for $($failedMigration -join ',')" + LogMessage -LogMessage "The migration process completed. The migration was successful for $($successMigration -join ',') and unsuccessful for $($failedMigration -join ',')" LogMessage -LogMessage "You can revert back to classic configuration. For more information: https://learn.microsoft.com/en-us/azure/defender-for-cloud/sql-azure-vulnerability-assessment-manage?tabs=express#revert-back-to-the-classic-configuration" } ``` |
defender-for-cloud | Quickstart Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md | Last updated 06/28/2023 # Connect your AWS account to Microsoft Defender for Cloud -Workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Amazon Web Services (AWS), but you need to set up the connection between them to your Azure subscription. +Workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Amazon Web Services (AWS), but you need to set up the connection between them and Defender for Cloud. If you're connecting an AWS account that you previously connected by using the classic connector, you must [remove it](how-to-use-the-classic-connector.md#remove-classic-aws-connectors) first. Using an AWS account that's connected by both the classic and native connectors can produce duplicate recommendations. |
defender-for-cloud | Quickstart Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md | Last updated 06/28/2023 # Connect your GCP project to Microsoft Defender for Cloud -Workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Google Cloud Platform (GCP), but you need to set up the connection between them to your Azure subscription. +Workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Google Cloud Platform (GCP), but you need to set up the connection between them and Defender for Cloud. If you're connecting a GCP project that you previously connected by using the classic connector, you must [remove it](how-to-use-the-classic-connector.md#remove-classic-gcp-connectors) first. Using a GCP project that's connected by both the classic and native connectors can produce duplicate recommendations. |
event-grid | Communication Services Email Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-email-events.md | -This article provides the properties and schema for communication services telephony and SMS events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md). +This article provides the properties and schema for communication services email events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md). ## Events types |
event-hubs | Schema Registry Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-overview.md | Last updated 05/04/2022 -# Azure Schema Registry in Azure Event Hubs +# Use Azure Schema Registry in Event Hubs from Apache Kafka and other apps In many event streaming and messaging scenarios, the event or message payload contains structured data. Schema-driven formats such as [Apache Avro](https://avro.apache.org/) are often used to serialize or deserialize such structured data. An event producer uses a schema to serialize event payload and publish it to an event broker such as Event Hubs. Event consumers read event payload from the broker and deserialize it using the same schema. So, both producers and consumers can validate the integrity of the data with the same schema. An event producer uses a schema to serialize event payload and publish it to an :::image type="content" source="./media/schema-registry-overview/schema-registry.svg" alt-text="Schema Registry" border="false"::: -With schema-driven serialization frameworks like Apache Avro, moving serialization metadata into shared schemas can also help with **reducing the per-message overhead**. That's because each message won't need to have the metadata (type information and field names) as it's the case with tagged formats such as JSON. +With schema-driven serialization frameworks like Apache Avro, moving serialization metadata into shared schemas can also help with **reducing the per-message overhead**. This is because each message doesn't need to carry the metadata (type information and field names), as is the case with tagged formats such as JSON. > [!NOTE] > The feature isn't available in the **basic** tier. |
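To illustrate why a shared schema removes per-message metadata, here's a minimal Avro record schema; the record and field names are illustrative only. Once producers and consumers agree on this schema through the registry, each serialized message carries just the encoded field values rather than repeating the field names and types.

```json
{
    "type": "record",
    "name": "OrderEvent",
    "namespace": "com.contoso.events",
    "fields": [
        { "name": "orderId", "type": "string" },
        { "name": "amount", "type": "double" },
        { "name": "createdAtEpochMillis", "type": "long" }
    ]
}
```

A JSON-encoded equivalent of the same event would repeat `orderId`, `amount`, and `createdAtEpochMillis` as literal keys in every single message.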
expressroute | Expressroute Locations Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md | The following table shows connectivity locations and the service providers for e |--|--|--|--|--|--| | **Abu Dhabi** | Etisalat KDC | 3 | UAE Central | Supported | | | **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>Colt<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>InterCloud<br/>Interxion<br/>KPN<br/>IX Reach<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>Tata Communications<br/>Telefonica<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Zayo |-| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>Interxion<br/>NL-IX<br/>NOS<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Vodafone | +| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>Interxion<br/>Megaport<br/>NL-IX<br/>NOS<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Vodafone | | **Atlanta** | [Equinix AT2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at2/) | 1 | n/a | Supported | Equinix<br/>Megaport | | **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | Supported | Devoli<br/>Kordia<br/>Megaport<br/>REANNZ<br/>Spark NZ<br/>Vocus Group NZ | | **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | Supported | AIS<br/>National Telecom UIH | |
expressroute | Expressroute Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md | The following table shows locations by service provider. If you want to view ava | **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** | Supported | Supported | Newport(Wales) | | **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** | Supported | Supported | Melbourne<br/>Perth<br/>Sydney<br/>Sydney2 | | **NL-IX** | Supported | Supported | Amsterdam2<br/>Dublin2 |-| **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** | Supported | Supported | Amsterdam2<br/>Madrid | +| **[NOS](https://www.nos.pt/empresas/solucoes/cloud/cloud-publica/nos-cloud-connect)** | Supported | Supported | Amsterdam2<br/>Madrid | | **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** | Supported | Supported | Amsterdam<br/>Hong Kong<br/>London<br/>Los Angeles<br/>New York<br/>Osaka<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC | | **NTT Communications India Network Services Pvt Ltd** | Supported | Supported | Chennai<br/>Mumbai | | **NTT Communications - Flexible InterConnect** |Supported |Supported | Jakarta<br/>Osaka<br/>Singapore2<br/>Tokyo<br/>Tokyo2 | Azure national clouds are isolated from each other and from global commercial Az To learn more<br/>see [ExpressRoute in China](https://www.azure.cn/home/features/expressroute/). -### Germany --| Service provider | Microsoft Azure | Office 365 | Locations | -| | | | | -| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Not Supported |Frankfurt | -| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Not Supported |Frankfurt | -| **e-shelter** |Supported |Not Supported |Berlin | -| **Interxion** |Supported |Not Supported |Frankfurt | -| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported | Not Supported | Berlin | -| **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Not Supported |Berlin | - ## Connectivity through Exchange providers If your connectivity provider isn't listed in previous sections, you can still create a connection. 
If you're remote and don't have fiber connectivity, or you want to explore other | **[Nianet](https://www.globalconnect.dk/)** |Equinix | Amsterdam<br/>Frankfurt | | **[Oncore Cloud Service Inc](https://www.oncore.cloud/services/ue-for-expressroute)**| Equinix | Montreal<br/>Toronto | | **[POST Telecom Luxembourg](https://business.post.lu/grandes-entreprises/telecom-ict/telecom)**| Equinix | Amsterdam |-| **[Proximus](https://www.proximus.be/en/id_b_cl_proximus_external_cloud_connect/companies-and-public-sector/discover/magazines/expert-blog/proximus-external-cloud-connect.html)**| Equinix | Amsterdam<br/>Dublin<br/>London<br/>Paris | +| **[Proximus](https://www.proximus.be/en/id_cl_explore/companies-and-public-sector/networks/corporate-networks/explore.html)**| Equinix | Amsterdam<br/>Dublin<br/>London<br/>Paris | | **[QSC AG](https://www2.qbeyond.de/en/)** |Interxion | Frankfurt | | **[RETN](https://retn.net/products/cloud-connect)** | Equinix | Amsterdam | | **Rogers** | Cologix<br/>Equinix | Montreal<br/>Toronto | Enabling private connectivity to fit your needs can be challenging, based on the | **[FlexManage](https://www.flexmanage.com/cloud)** | North America | | **[Lightstream](https://www.lightstream.tech/partners/microsoft-azure/)** | North America | | **[The IT Consultancy Group](https://itconsult.com.au/)** | Australia |-| **[MOQdigital](https://www.moqdigital.com/insights)** | Australia | +| **[MOQdigital](https://www.brennanit.com.au/solutions/cloud-services/)** | Australia | | **[MSG Services](https://www.msg-services.de/it-services/managed-services/cloud-outsourcing/)** | Europe (Germany) | | **[Nelite](https://www.exakis-nelite.com/offres/)** | Europe | | **[New Signature](https://www.cognizant.com/us/en/services/cloud-solutions/microsoft-business-group)** | Europe | |
external-attack-surface-management | Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/index.md | Microsoft Defender External Attack Surface Management contains both global data For security purposes, Microsoft collects users' IP addresses when they log in. This data is stored for up to 30 days but may be stored longer if needed to investigate potential fraudulent or malicious use of the product. -In the case of a region down scenario, customers should see no downtime as Defender EASM uses technologies that replicate data to a backup region. Defender EASM processes customer data. By default, customer data is replicated to the paired region. +In the case of a region down scenario, only the customers in the affected region will experience downtime. The Microsoft compliance framework requires that all customer data be deleted within 180 days of that organization no longer being a customer of Microsoft. This also includes storage of customer data in offline locations, such as database backups. Once a resource is deleted, it cannot be restored by our teams. The customer data will be retained in our data stores for 75 days; however, the actual resource cannot be restored. After the 75-day period, customer data will be permanently deleted. The Microsoft compliance framework requires that all customer data be deleted wi - [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md) - [Understanding inventory assets](understanding-inventory-assets.md) - [What is discovery?](what-is-discovery.md)+ |
governance | Policy For Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md | must enable the **Microsoft.PolicyInsights** resource providers. 1. If limited preview policy definitions were installed, remove the add-on with the **Disable** button on your AKS cluster under the **Policies** page. -1. The AKS cluster must be version _1.14_ or higher. Use the following script to validate your AKS +1. The AKS cluster must be a [supported AKS cluster version](https://learn.microsoft.com/azure/aks/supported-kubernetes-versions?tabs=azure-cli). Use the following script to validate your AKS cluster version: ```azurecli-interactive |
hdinsight | Apache Hadoop Use Hive Ambari View | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-ambari-view.md | description: Learn how to use the Hive View from your web browser to submit Hive Previously updated : 06/09/2022 Last updated : 07/12/2023 # Use Apache Ambari Hive View with Apache Hadoop in HDInsight |
hdinsight | Hdinsight Linux Ambari Ssh Tunnel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-linux-ambari-ssh-tunnel.md | description: Learn how to use an SSH tunnel to securely browse web resources hos Previously updated : 06/09/2022 Last updated : 07/12/2023 # Use SSH tunneling to access Apache Ambari web UI, JobHistory, NameNode, Apache Oozie, and other UIs |
hdinsight | Hdinsight Use External Metadata Stores | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-external-metadata-stores.md | description: Use external metadata stores with Azure HDInsight clusters. Previously updated : 06/08/2022 Last updated : 07/12/2023 # Use external metadata stores in Azure HDInsight |
hdinsight | Apache Spark Jupyter Notebook Use External Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-notebook-use-external-packages.md | description: Step-by-step instructions on how to configure Jupyter Notebooks ava Previously updated : 06/10/2022 Last updated : 07/12/2023 # Use external packages with Jupyter Notebooks in Apache Spark clusters on HDInsight |
hdinsight | Apache Spark Load Data Run Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-load-data-run-query.md | description: Tutorial - Learn how to load data and run interactive queries on Sp Previously updated : 06/08/2022 Last updated : 07/12/2023 #Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to load data into a Spark cluster, so I can run interactive SQL queries against the data. |
hdinsight | Apache Spark Troubleshoot Application Stops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-application-stops.md | Title: Apache Spark Streaming application stops after 24 days in Azure HDInsight description: An Apache Spark Streaming application stops after executing for 24 days and there are no errors in the log files. Previously updated : 07/10/2023 Last updated : 07/12/2023 # Scenario: Apache Spark Streaming application stops after executing for 24 days in Azure HDInsight |
hdinsight | Spark Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/spark-best-practices.md | Title: Apache Spark guidelines on Azure HDInsight description: Learn guidelines for using Apache Spark in Azure HDInsight. Previously updated : 07/10/2023 Last updated : 07/12/2023 # Apache Spark guidelines |
healthcare-apis | How To Run A Reindex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-run-a-reindex.md | content-type: application/fhir+json "resourceType": "Parameters", "parameter": [+ { "name": "targetSearchParameterTypes", "valueString": "{url of custom search parameter. In case of multiple custom search parameters, url list can be comma separated.}"-+ } ] } |
healthcare-apis | How To Run A Reindex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-run-a-reindex.md | content-type: application/fhir+json "resourceType": "Parameters", "parameter": [+ { "name": "targetSearchParameterTypes", "valueString": "{url of custom search parameter. In case of multiple custom search parameters, url list can be comma separated.}"-+ } ] } |
healthcare-apis | Deploy Bicep Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-bicep-powershell-cli.md | When deployment is completed, the following resources and access roles are creat * Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles: - * For the event hub, the **Azure Events Hubs Data Receiver** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub. + * For the event hub, the **Azure Event Hubs Data Receiver** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub. * For the FHIR service, the **FHIR Data Writer** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. |
healthcare-apis | Overview Of Device Data Processing Stages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-data-processing-stages.md | Transform is the next stage where normalized messages are processed using the us > [!NOTE] > All identity look ups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients, it is advised you create a virtual device resource that is specific to the patient and send the virtual device identifier in the device message payload. The virtual device can be linked to the actual device resource as a parent. -If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [**Resolution type**](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to **Lookup**, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to **Create**, the MedTech service creates minimal Device and Patient resources in the FHIR service. +If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [**Resolution type**](deploy-manual-portal.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to **Lookup**, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to **Create**, the MedTech service creates minimal Device and Patient resources in the FHIR service. > [!NOTE] > The **Resolution type** can also be adjusted post deployment of the MedTech service if a different **Resolution type** is later required. |
iot-dps | Concepts Control Access Dps Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps-azure-ad.md | For more information, see the [Azure IoT extension for Azure CLI release page](h - [Azure IoT SDKs for Node.js Provisioning Service](https://aka.ms/IoTDPSNodeJSSDKRBAC) - [Sample](https://aka.ms/IoTDPSNodeJSSDKRBACSample) - [Azure IoT SDK for Java Preview Release ](https://aka.ms/IoTDPSJavaSDKRBAC)- - [Sample](https://aka.ms/IoTDPSJavaSDKRBACSample) + - [Sample](https://github.com/Azure/azure-iot-sdk-java/tree/preview/provisioning/provisioning-service-client-samples) - [Microsoft Azure IoT SDKs for .NET Preview Release](https://aka.ms/IoTDPScsharpSDKRBAC) ## Azure AD access from the Azure portal |
lab-services | Troubleshoot Lab Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot-lab-creation.md | In this article, you learn how to resolve common issues with creating a lab in A ## Prerequisites -- To change settings for the lab plan, your Azure account needs the Owner or Contributor Azure Active Directory role on the lab plan. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).+- To change settings for the lab plan, your Azure account needs the Owner or Contributor [RBAC](../role-based-access-control/overview.md) role on the lab plan. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles). ## Virtual machine image is not available When your lab plan uses advanced networking, the lab plan and all labs must be i For more information about setting up and managing labs, see: - [Manage lab plans](how-to-manage-lab-plans.md) -- [Lab setup guide](setup-guide.md)+- [Lab setup guide](setup-guide.md) |
load-testing | How To Test Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md | When you start the load test, Azure Load Testing service injects the following A These resources are ephemeral and exist only during the load test run. If you restrict access to your virtual network, you need to [configure your virtual network](#configure-virtual-network) to enable communication between Azure Load Testing and the injected VMs. +> [!NOTE] +> Virtual network support for Azure Load Testing is available in the following Azure regions: Australia East, East Asia, East US, East US 2, North Europe, South Central US, Sweden Central, UK South, West Europe, West US 2 and West US 3. +> + ## Prerequisites - Your Azure account has the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions. |
logic-apps | Logic Apps Limits And Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md | For Azure Logic Apps to receive incoming communication through your firewall, yo | Azure Government region | Azure Logic Apps IP | |-||-| US Gov Arizona | 52.244.67.164, 52.244.67.64, 52.244.66.82, 52.126.52.254, 52.126.53.145 | +| US Gov Arizona | 52.244.67.164, 52.244.67.64, 52.244.66.82, 52.126.52.254, 52.126.53.145, 52.182.49.105, 52.182.49.175 | | US Gov Texas | 52.238.119.104, 52.238.112.96, 52.238.119.145, 52.245.171.151, 52.245.163.42 | | US Gov Virginia | 52.227.159.157, 52.227.152.90, 23.97.4.36, 13.77.239.182, 13.77.239.190 | | US DoD Central | 52.182.49.204, 52.182.52.106 | This section lists the outbound IP addresses that Azure Logic Apps requires in y | Region | Azure Logic Apps IP | |--||-| US DoD Central | 52.182.48.215, 52.182.92.143 | +| US DoD Central | 52.182.48.215, 52.182.92.143, 52.182.53.147, 52.182.52.212, 52.182.49.162, 52.182.49.151 | | US Gov Arizona | 52.244.67.143, 52.244.65.66, 52.244.65.190, 52.126.50.197, 52.126.49.223, 52.126.53.144, 52.126.36.100 | | US Gov Texas | 52.238.114.217, 52.238.115.245, 52.238.117.119, 20.141.120.209, 52.245.171.152, 20.141.123.226, 52.245.163.1 | | US Gov Virginia | 13.72.54.205, 52.227.138.30, 52.227.152.44, 13.77.239.177, 13.77.239.140, 13.77.239.187, 13.77.239.184 | |
machine-learning | How To Access Data Batch Endpoints Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md | |
machine-learning | How To Authenticate Batch Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md | |
machine-learning | How To Batch Scoring Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md | |
machine-learning | How To Change Storage Access Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-change-storage-access-key.md | |
machine-learning | How To Configure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md | |
machine-learning | How To Create Workspace Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md | |
machine-learning | How To Devops Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md | jobs: scriptType: bash inlineScript: | - # submit component job and get the run name - job_out=$(az ml job create --file single-job-pipeline.yml -g $(resource-group) -w $(workspace) --query name) + # submit component job and get the run name + job_name=$(az ml job create --file single-job-pipeline.yml -g $(resource-group) -w $(workspace) --query name --output tsv) - # Remove quotes around job name - job_name=$(sed -e 's/^"//' -e 's/"$//' <<<"$job_out") - echo $job_name -- # Set output variable for next task - echo "##vso[task.setvariable variable=JOB_NAME;isOutput=true;]$job_name" + # Set output variable for next task + echo "##vso[task.setvariable variable=JOB_NAME;isOutput=true;]$job_name" ``` # [Using generic service connection](#tab/generic) jobs: scriptType: bash inlineScript: | - # submit component job and get the run name - job_out=$(az ml job create --file single-job-pipeline.yml -g $(resource-group) -w $(workspace) --query name) + # submit component job and get the run name + job_name=$(az ml job create --file single-job-pipeline.yml -g $(resource-group) -w $(workspace) --query name --output tsv) - # Remove quotes around run name - job_name=$(sed -e 's/^"//' -e 's/"$//' <<<"$job_out") - echo $job_name - # Set output variable for next task - echo "##vso[task.setvariable variable=JOB_NAME;isOutput=true;]$job_name" + # Set output variable for next task + echo "##vso[task.setvariable variable=JOB_NAME;isOutput=true;]$job_name" - # Get a bearer token to authenticate the request in the next job - export aadToken=$(az account get-access-token --resource=https://management.azure.com --query accessToken -o tsv) - echo "##vso[task.setvariable variable=AAD_TOKEN;isOutput=true;issecret=true]$aadToken" + # Get a bearer token to authenticate the request in the next job + export aadToken=$(az account get-access-token --resource=https://management.azure.com --query accessToken -o tsv) + echo "##vso[task.setvariable variable=AAD_TOKEN;isOutput=true;issecret=true]$aadToken" ``` |
machine-learning | How To Manage Compute Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-compute-instance.md | Learn how to manage a [compute instance](concept-compute-instance.md) in your Az Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#training-compute-targets). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace. -In this article, you learn how to start, stop, restart, delete) a compute instance. See [Create an Azure Machine Learning compute instance](how-to-create-compute-instance.md) to learn how to create a compute instance. +In this article, you learn how to start, stop, restart, and delete a compute instance. See [Create an Azure Machine Learning compute instance](how-to-create-compute-instance.md) to learn how to create a compute instance. > [!NOTE]-> This article shows CLI v2 in the sections below. If you are still using CLI v1, see [Create an Azure Machine Learning compute cluster CLI v1)](v1/how-to-create-manage-compute-instance.md?view=azureml-api-1&preserve-view=true). +> This article shows CLI v2 in the sections below. If you are still using CLI v1, see [Create an Azure Machine Learning compute instance (CLI v1)](v1/how-to-create-manage-compute-instance.md?view=azureml-api-1&preserve-view=true). ## Prerequisites |
machine-learning | How To Manage Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md | |
machine-learning | How To Manage Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md | |
machine-learning | How To Manage Workspace Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-terraform.md | |
machine-learning | How To Mlflow Batch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md | |
machine-learning | How To Move Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-move-workspace.md | |
machine-learning | How To R Train Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-train-model.md | This article explains how to take the R script that you [adapted to run in produ - An [Azure Machine Learning workspace](quickstart-create-resources.md). - [A registered data asset](how-to-create-data-assets.md) that your training job will use.-- Azure [CLI and ml extension installed](how-to-configure-cli.md). Or use a [compute instance in your workspace](quickstart-create-resources.md), which has the CLI pre-installed.+- Azure [CLI and ml extension installed](how-to-configure-cli.md). Or use a [compute instance in your workspace](quickstart-create-resources.md), which has the CLI preinstalled. - [A compute cluster](how-to-create-attach-compute-cluster.md) or [compute instance](quickstart-create-resources.md#create-a-compute-instance) to run your training job. - [An R environment](how-to-r-modify-script-for-production.md#create-an-environment) for the compute cluster to use to run the job. To submit the job, run the following commands in a terminal window: az account set --subscription "<SUBSCRIPTION-NAME>" ``` -1. Now use CLI to submit the job. If you are doing this on a compute instance in your workspace, you can use environment variables for the workspace name and resource group as show in the following code. If you are not on a compute instance, replace these values with your workspace name and resource group. +1. Now use CLI to submit the job. If you're doing this on a compute instance in your workspace, you can use environment variables for the workspace name and resource group as show in the following code. If you aren't on a compute instance, replace these values with your workspace name and resource group. ```azurecli az ml job create -f job.yml --workspace-name $CI_WORKSPACE --resource-group $CI_RESOURCE_GROUP Once you've submitted the job, you can check the status and results in studio: Finally, once the training job is complete, register your model if you want to deploy it. Start in the studio from the page showing your job details. +1. Once your job completes, select **Outputs + logs** to view outputs of the job. +1. Open the **models** folder to verify that **crate.bin** and **MLmodel** are present. If not, check the logs to see if there was an error. 1. On the toolbar at the top, select **+ Register model**.-1. Select **Unspecified type** for the **Model type**. -1. Select the folder which contains the model. ++ :::image type="content" source="media/how-to-r-train-model/register-model.png" alt-text="Screenshot shows the Job section of studio with the Outputs section open."::: ++1. For **Model type**, change the default from **MLflow** to **Unspecified type**. +1. For **Job output**, select **models**, the folder that contains the model. 1. Select **Next**. 1. Supply the name you wish to use for your model. Add **Description**, **Version**, and **Tags** if you wish. 1. Select **Next**. 1. Review the information. 1. Select **Register**. -You'll see a confirmation that the model is registered. +At the top of the page, you'll see a confirmation that the model is registered. The confirmation looks similar to this: +++Select **Click here to go to this model.** if you wish to view the registered model details. ## Next steps |
machine-learning | How To Troubleshoot Batch Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md | |
machine-learning | How To Use Batch Azure Data Factory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-azure-data-factory.md | |
machine-learning | How To Use Low Priority Batch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-low-priority-batch.md | |
machine-learning | How To Create Manage Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md | If you don't have a compute instance, create a new one: [Create and manage an Azu > [!NOTE] > - We are going to perform an automatic restart of your compute instance. Please ensure that you do not have any tasks or jobs running on it, as they may be affected by the restart.- > - To build your custom environment, please use an image from public docker hub. We do not support custom environments built with images from ACR at this time. 1. To use an existing custom application as a runtime, choose the option "existing". This option is available if you have previously created a custom application on a compute instance. For more information on how to create and use a custom application as a runtime, learn more about [how to create custom application as runtime](how-to-customize-environment-runtime.md#create-a-custom-application-on-compute-instance-that-can-be-used-as-prompt-flow-runtime). You can also assign these permissions manually through the UI. > [!NOTE] > This operation may take several minutes to take effect.+ > If your compute instance is behind a VNet, please follow [Compute instance behind VNet](#compute-instance-behind-vnet) to configure the network. To learn more: - [Manage access to an Azure Machine Learning workspace](../how-to-assign-roles.md?view=azureml-api-2&tabs=labeler&preserve-view=true) Go to the runtime detail page and select the update button at the top. You can change ne :::image type="content" source="./media/how-to-create-manage-runtime/runtime-update-env.png" alt-text="Screenshot of the runtime detail page with updated selected. " lightbox = "./media/how-to-create-manage-runtime/runtime-update-env.png"::: -If you used a custom environment, you need to rebuild it using latest Prompt flow image first, and then update your runtime with the new custom environment. +> [!NOTE] +> If you used a custom environment, you need to rebuild it using the latest prompt flow image first, and then update your runtime with the new custom environment. ## Troubleshooting guide for runtime If you just assigned the permissions, it will take a few minutes to take effect. :::image type="content" source="./media/how-to-create-manage-runtime/ci-failed-runtime-not-ready.png" alt-text="Screenshot of a failed run on the runtime detail page. " lightbox = "./media/how-to-create-manage-runtime/ci-failed-runtime-not-ready.png"::: -First, go to the Compute Instance terminal and run `docker ps` to find the root cause. You can follow the steps in the [Manually customize conda packages in CI runtime](how-to-customize-environment-runtime.md#manually-customize-conda-packages-in-ci-runtime) section. +First, go to the Compute Instance terminal and run `docker ps` to find the root cause. Use `docker images` to check if the image was pulled successfully. If your image was pulled successfully, check if the Docker container is running. If it's already running, locate this runtime, which will attempt to restart the runtime and compute instance. Go to the compute instance terminal and run `docker logs -<runtime_container_na This is because you're cloning a flow from others that is using compute instance as runtime. As compute instance runtime is user isolated, you need to create your own compute instance runtime or select a managed online deployment/endpoint runtime, which can be shared with others. 
-#### Compute instance behind vnet +#### Compute instance behind VNet If your compute instance is behind a VNet, you need to make the following changes to ensure that your compute instance can be used in prompt flow: - Please follow [required-public-internet-access](../how-to-secure-workspace-vnet.md#required-public-internet-access) to set your CI network configuration. |
machine-learning | How To Customize Environment Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-customize-environment-runtime.md | Last updated 06/30/2023 # Customize environment for runtime (preview) -We have following approaches to customize environment for runtime: --- Manually customize conda packages in CI runtime-- Customize environment with docker context for runtime--Meanwhile, you can also create custom application on compute instance and managed online endpoint then use them as runtime. > [!IMPORTANT] > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). -## Manually customize conda packages in CI runtime --1. Go to runtime list page find the compute instance linked with runtime. -- :::image type="content" source="./media/how-to-customize-environment-runtime/runtime-creation-ci-runtime-list-compute-link.png" alt-text="Screenshot of flows highlighting the linked compute column. " lightbox = "./media/how-to-customize-environment-runtime/runtime-creation-ci-runtime-list-compute-link.png"::: --1. Under `applications` in the detail page of the compute instance, select `terminal`. -- :::image type="content" source="./media/how-to-customize-environment-runtime/runtime-creation-ci-runtime-list-compute-terminal.png" alt-text="Screenshot of compute detail page with terminal highlighted. " lightbox = "./media/how-to-customize-environment-runtime/runtime-creation-ci-runtime-list-compute-terminal.png"::: --1. Jump to terminal on this compute instance -- :::image type="content" source="./media/how-to-customize-environment-runtime/runtime-creation-ci-runtime-list-compute-jump-to-terminal.png" alt-text="Screenshot of notebooks with the compute highlighted. " lightbox = "./media/how-to-customize-environment-runtime/runtime-creation-ci-runtime-list-compute-jump-to-terminal.png"::: --1. Retrieve the container name of the runtime using the command `docker ps`. -- :::image type="content" source="./media/how-to-customize-environment-runtime/runtime-creation-ci-runtime-list-compute-terminal-docker-ps.png" alt-text="Screenshot of notebooks highlighting the container name of the runtime. " lightbox = "./media/how-to-customize-environment-runtime/runtime-creation-ci-runtime-list-compute-terminal-docker-ps.png"::: --1. Jump into the container using the command `docker exec -it <container_id/container_name> /bin/bash`. -- :::image type="content" source="./media/how-to-customize-environment-runtime/runtime-creation-ci-runtime-list-compute-terminal-docker-exec.png" alt-text="Screenshot of notebooks showing the docker command. " lightbox = "./media/how-to-customize-environment-runtime/runtime-creation-ci-runtime-list-compute-terminal-docker-exec.png"::: --1. You can now install packages using `conda install` or `pip install` in this conda environment. --> [!NOTE] -> Any package installed in this way may be lost after a compute instance restart. If you want to keep these packages,follow the instructions in the section titled [Customize Environment with Docker Context for Runtime](#customize-environment-with-docker-context-for-runtime). 
- ## Customize environment with docker context for runtime This section assumes you have knowledge of [Docker](https://www.docker.com/) and [Azure Machine Learning environments](../concept-environments.md). RUN pip install -r requirements.txt ``` > [!NOTE]-> This docker image should be built from Prompt flow base image that is `mcr.microsoft.com/azureml/promptflow/promptflow-runtime:<newest_version>`. If possible use the [latest version of the base image](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime/tags/list). +> This docker image should be built from prompt flow base image that is `mcr.microsoft.com/azureml/promptflow/promptflow-runtime:<newest_version>`. If possible use the [latest version of the base image](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime/tags/list). ### Step 2: Use Azure Machine Learning environment to build image Find the image in ACR. ### Step 3: Create a custom Azure Machine Learning environment for runtime -#### Create a custom Azure Machine Learning environment for runtime in docker hub - compute instance runtime --> [!NOTE] -> Compute instance only support image in public docker hub or MCR, so you need push the image to docker hub or MCR, then use them to create custom environment. --```shell -docker login <the_acr_you_build_image_in_previous_step> -docker pull <image_build_in_acr> -docker login <your_public_docker_hub> -docker tag <image_build_in_acr> <image_in_your_public_docker_hub> -docker push <image_in_your_public_docker_hub> -``` --Open the `environment.yaml` file and add the following content. Replace the `<environment_name>` placeholder with your desired environment name and change `<image_build_in_acr>` to the ACR image found in the previous step. --```yaml -$schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json -name: <environment_name> -image: <image_in_your_public_docker_hub> -inference_config: - liveness_route: - port: 8080 - path: /health - readiness_route: - port: 8080 - path: /health - scoring_route: - port: 8080 - path: /score -``` --Using following CLI command to create the environment: --```bash -cd image_build # optional if you already in this folder -az login(optional) -az ml environment create -f environment.yaml --subscription <sub-id> -g <resource-group> -w <workspace> -``` --#### Create a custom Azure Machine Learning environment for runtime in ACR - Managed online deployment runtime - Open the `environment.yaml` file and add the following content. Replace the `<environment_name>` placeholder with your desired environment name and change `<image_build_in_acr>` to the ACR image found in the step 2.3. ```yaml |
machine-learning | How To Integrate With Langchain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-integrate-with-langchain.md | Create a custom connection that stores all your LLM API KEY or other required cr :::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-2.png" alt-text="Screenshot of add custom connection point to the add key-value pairs button. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-2.png"::: > [!NOTE]-> - You can set one Key-Value pair as **secret** by **is secret** checked, which will be encrypted and stored in your key value. -> - You can also set the whole connection as **workspace level key**, which will be shared to all members in the workspace. If not set as workspace level key, it can only be accessed by the creator. +> - You can set one Key-Value pair as secret by **is secret** checked, which will be encrypted and stored in your key value. +> - Make sure at least one key-value pair is set as secret, otherwise the connection will not be created successfully. Then this custom connection is used to replace the key and credential you explicitly defined in LangChain code. If you already have a LangChain integration Prompt flow, you can jump to [Configure connection, input and output](#configure-connection-input-and-output). After you have a properly structured flow and are done moving the code to specif To utilize a [custom connection](#create-a-custom-connection) that stores all the required keys and credentials, follow these steps: -1. In the python tools, need to access LLM Key and other credentials, import custom connection library `from promptflow.connections import CustomConnection`. +1. In the python tools, import custom connection library `from promptflow.connections import CustomConnection`, and define an input parameter of type `CustomConnection` in the tool function. :::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-python-node-1.png" alt-text="Screenshot of doc search chain node highlighting the custom connection. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-python-node-1.png":::-1. Add an input parameter of type `connection` to the tool function. +1. Parse the input to the input section, then select your target custom connection in the value dropdown. :::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-python-node-2.png" alt-text="Screenshot of the chain node highlighting the connection. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-python-node-2.png"::: 1. Replace the environment variables that originally defined the key and credential with the corresponding key added in the connection. 1. Save and return to the authoring page, and configure the connection parameter in the node input. |
machine-learning | Python Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/python-tool.md | Title: Python tool in Azure Machine Learning prompt flow (preview) -description: Users are empowered by the Python Tool to offer customized code snippets as self-contained executable nodes in Prompt flow. +description: The Python Tool empowers users to offer customized code snippets as self-contained executable nodes in Prompt flow. Last updated 06/30/2023 # Python tool (preview) -Users are empowered by the Python Tool to offer customized code snippets as self-contained executable nodes in Prompt flow. Users can effortlessly create Python tools, edit code, and verify results with ease. +The Python Tool empowers users to offer customized code snippets as self-contained executable nodes in Prompt flow. Users can effortlessly create Python tools, edit code, and verify results with ease. > [!IMPORTANT] > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. Outputs: ```python "hello world"-``` +``` +++## How to consume custom connection in Python Tool? ++If you are developing a python tool that requires calling external services with authentication, you can use the custom connection in prompt flow. It allows you to securely store the access key then retrieve it in your python code. ++### Create a custom connection ++Create a custom connection that stores all your LLM API KEY or other required credentials. ++1. Go to Prompt flow in your workspace, then go to **connections** tab. +2. Select **Create** and select **Custom**. + :::image type="content" source="../media/how-to-integrate-with-langchain/custom-connection-1.png" alt-text="Screenshot of flows on the connections tab highlighting the custom button in the drop-down menu. " lightbox = "../media/how-to-integrate-with-langchain/custom-connection-1.png"::: +1. In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**. + :::image type="content" source="../media/how-to-integrate-with-langchain/custom-connection-2.png" alt-text="Screenshot of add custom connection point to the add key-value pairs button. " lightbox = "../media/how-to-integrate-with-langchain/custom-connection-2.png"::: ++> [!NOTE] +> - You can set one Key-Value pair as secret by **is secret** checked, which will be encrypted and stored in your key value. +> - Make sure at least one key-value pair is set as secret, otherwise the connection will not be created successfully. +++### Consume custom connection in Python ++To consume a custom connection in your python code, follow these steps: ++1. In the code section in your python node, import custom connection library `from promptflow.connections import CustomConnection`, and define an input parameter of type `CustomConnection` in the tool function. + :::image type="content" source="../media/how-to-integrate-with-langchain/custom-connection-python-node-1.png" alt-text="Screenshot of doc search chain node highlighting the custom connection. " lightbox = "../media/how-to-integrate-with-langchain/custom-connection-python-node-1.png"::: +1. Parse the input to the input section, then select your target custom connection in the value dropdown. 
+ :::image type="content" source="../media/how-to-integrate-with-langchain/custom-connection-python-node-2.png" alt-text="Screenshot of the chain node highlighting the connection. " lightbox = "../media/how-to-integrate-with-langchain/custom-connection-python-node-2.png"::: ++For example: ++```python +from promptflow import tool +from promptflow.connections import CustomConnection ++@tool +def my_python_tool(message:str, myconn:CustomConnection) -> str: + # Get authentication key-values from the custom connection + connection_key1_value = myconn.key1 + connection_key2_value = myconn.key2 + # Return a string so the declared return type is satisfied (placeholder) + return message +``` |
machine-learning | Concept Designer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-designer.md | |
machine-learning | How To Authenticate Web Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-authenticate-web-service.md | |
machine-learning | How To Configure Auto Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-features.md | |
machine-learning | How To Configure Cross Validation Data Splits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-cross-validation-data-splits.md | |
machine-learning | How To Configure Databricks Automl Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-databricks-automl-environment.md | |
machine-learning | How To Consume Web Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-consume-web-service.md | |
machine-learning | How To Data Ingest Adf | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-data-ingest-adf.md | |
machine-learning | How To Debug Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-debug-visual-studio-code.md | |
machine-learning | How To Deploy And Where | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-and-where.md | |
machine-learning | How To Deploy Azure Container Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-azure-container-instance.md | |
machine-learning | How To Deploy Azure Kubernetes Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-azure-kubernetes-service.md | |
machine-learning | How To Deploy Fpga Web Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-fpga-web-service.md | |
machine-learning | How To Deploy Inferencing Gpus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-inferencing-gpus.md | |
machine-learning | How To Deploy Model Cognitive Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-model-cognitive-search.md | |
machine-learning | How To Deploy Package Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-package-models.md | |
machine-learning | How To Deploy Profile Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-profile-model.md | |
machine-learning | How To Deploy Update Web Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-update-web-service.md | Title: Update deployed web services description: Learn how to refresh a web service that is already deployed in Azure Machine Learning. You can update settings such as model, environment, and entry script. -+ |
machine-learning | How To Designer Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-designer-python.md | |
machine-learning | How To Designer Transform Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-designer-transform-data.md | |
machine-learning | How To Extend Prebuilt Docker Image Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-extend-prebuilt-docker-image-inference.md | |
machine-learning | How To Generate Automl Training Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-generate-automl-training-code.md | |
machine-learning | How To High Availability Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-high-availability-machine-learning.md | description: Learn how to plan for disaster recovery and maintain business conti -+ |
machine-learning | How To Machine Learning Fairness Aml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-machine-learning-fairness-aml.md | |
machine-learning | How To Machine Learning Interpretability Aml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-machine-learning-interpretability-aml.md | |
machine-learning | How To Prebuilt Docker Images Inference Python Extensibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prebuilt-docker-images-inference-python-extensibility.md | |
machine-learning | How To Retrain Designer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-retrain-designer.md | |
machine-learning | How To Run Batch Predictions Designer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-run-batch-predictions-designer.md | |
machine-learning | How To Track Designer Experiments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-track-designer-experiments.md | |
machine-learning | How To Troubleshoot Auto Ml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-auto-ml.md | |
machine-learning | How To Troubleshoot Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-deployment.md | |
machine-learning | How To Troubleshoot Prebuilt Docker Image Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-prebuilt-docker-image-inference.md | |
machine-learning | How To Use Pipeline Parameter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-pipeline-parameter.md | |
machine-learning | Migrate Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-overview.md | description: Migrate from Studio (classic) to Azure Machine Learning for a moder -+ |
machine-learning | Migrate Rebuild Integrate With Client App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-rebuild-integrate-with-client-app.md | |
machine-learning | Migrate Rebuild Web Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-rebuild-web-service.md | description: Rebuild Studio (classic) web services as pipeline endpoints in Azur -+ |
machine-learning | Migrate Register Dataset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-register-dataset.md | description: Rebuild Studio (classic) datasets in Azure Machine Learning designe -+ |
machine-learning | Samples Designer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/samples-designer.md | The sample datasets are available under **Datasets**-**Samples** category. You c | Dataset name | Dataset description | |-|:--|-| Adult Census Income Binary Classification dataset | A subset of the 1994 Census database, using working adults over the age of 16 with an adjusted income index of > 100.<br/>**Usage**: Classify people using demographics to predict whether a person earns over 50K a year.<br/> **Related Research**: Kohavi, R., Becker, B., (1996). [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml). Irvine, CA: University of California, School of Information and Computer Science| -|Automobile price data (Raw)|Information about automobiles by make and model, including the price, features such as the number of cylinders and MPG, as well as an insurance risk score.<br/> The risk score is initially associated with auto price. It is then adjusted for actual risk in a process known to actuaries as symboling. A value of +3 indicates that the auto is risky, and a value of -3 that it is probably safe.<br/>**Usage**: Predict the risk score by features, using regression or multivariate classification.<br/>**Related Research**: Schlimmer, J.C. (1987). [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml). Irvine, CA: University of California, School of Information and Computer Science. | +| Adult Census Income Binary Classification dataset | A subset of the 1994 Census database, using working adults over the age of 16 with an adjusted income index of > 100.<br/>**Usage**: Classify people using demographics to predict whether a person earns over 50K a year.<br/> **Related Research**: Kohavi, R., Becker, B., (1996). [UCI Machine Learning Repository](https://archive.ics.uci.edu/). Irvine, CA: University of California, School of Information and Computer Science| +|Automobile price data (Raw)|Information about automobiles by make and model, including the price, features such as the number of cylinders and MPG, as well as an insurance risk score.<br/> The risk score is initially associated with auto price. It is then adjusted for actual risk in a process known to actuaries as symboling. A value of +3 indicates that the auto is risky, and a value of -3 that it is probably safe.<br/>**Usage**: Predict the risk score by features, using regression or multivariate classification.<br/>**Related Research**: Schlimmer, J.C. (1987). [UCI Machine Learning Repository](https://archive.ics.uci.edu/). Irvine, CA: University of California, School of Information and Computer Science. | | CRM Appetency Labels Shared |Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train_appetency.labels](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train_appetency.labels)).| |CRM Churn Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train_churn.labels](https://www.kdd.org/kdd-cup/view/kdd-cup-2009/Datas)).| |CRM Dataset Shared | This data comes from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train.data.zip](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train.data.zip)). <br/>The dataset contains 50K customers from the French Telecom company Orange. Each customer has 230 anonymized features, 190 of which are numeric and 40 are categorical. The features are very sparse. 
| |CRM Upselling Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_large_train_upselling.labels](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train_upselling.labels)| |Flight Delays Data|Passenger flight on-time performance data taken from the TranStats data collection of the U.S. Department of Transportation ([On-Time](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time)).<br/>The dataset covers the time period April-October 2013. Before uploading to the designer, the dataset was processed as follows: <br/>- The dataset was filtered to cover only the 70 busiest airports in the continental US <br/>- Canceled flights were labeled as delayed by more than 15 minutes <br/>- Diverted flights were filtered out <br/>- The following columns were selected: Year, Month, DayofMonth, DayOfWeek, Carrier, OriginAirportID, DestAirportID, CRSDepTime, DepDelay, DepDel15, CRSArrTime, ArrDelay, ArrDel15, Canceled|-|German Credit Card UCI dataset|The UCI Statlog (German Credit Card) dataset ([Statlog+German+Credit+Data](https://archive.ics.uci.edu/ml/datasets/Statlog+(German+Credit+Data))), using the german.data file.<br/>The dataset classifies people, described by a set of attributes, as low or high credit risks. Each example represents a person. There are 20 features, both numerical and categorical, and a binary label (the credit risk value). High credit risk entries have label = 2, low credit risk entries have label = 1. The cost of misclassifying a low risk example as high is 1, whereas the cost of misclassifying a high risk example as low is 5.| +|German Credit Card UCI dataset|The UCI Statlog (German Credit Card) dataset ([Statlog+German+Credit+Data](https://archive.ics.uci.edu/dataset/144/statlog+german+credit+data)), using the german.data file.<br/>The dataset classifies people, described by a set of attributes, as low or high credit risks. Each example represents a person. There are 20 features, both numerical and categorical, and a binary label (the credit risk value). High credit risk entries have label = 2, low credit risk entries have label = 1. The cost of misclassifying a low risk example as high is 1, whereas the cost of misclassifying a high risk example as low is 5.| |IMDB Movie Titles|The dataset contains information about movies that were rated in Twitter tweets: IMDB movie ID, movie name, genre, and production year. There are 17K movies in the dataset. The dataset was introduced in the paper "S. Dooms, T. De Pessemier and L. Martens. MovieTweetings: a Movie Rating Dataset Collected From Twitter. Workshop on Crowdsourcing and Human Computation for Recommender Systems, CrowdRec at RecSys 2013."| |Movie Ratings|The dataset is an extended version of the Movie Tweetings dataset. The dataset has 170K ratings for movies, extracted from well-structured tweets on Twitter. Each instance represents a tweet and is a tuple: user ID, IMDB movie ID, rating, timestamp, number of favorites for this tweet, and number of retweets of this tweet. The dataset was made available by A. Said, S. Dooms, B. Loni and D. Tikk for Recommender Systems Challenge 2014.| |Weather Dataset|Hourly land-based weather observations from NOAA ([merged data from 201304 to 201310](https://az754797.vo.msecnd.net/data/WeatherDataset.csv)).<br/>The weather data covers observations made from airport weather stations, covering the time period April-October 2013. 
Before uploading to the designer, the dataset was processed as follows: <br/> - Weather station IDs were mapped to corresponding airport IDs <br/> - Weather stations not associated with the 70 busiest airports were filtered out <br/> - The Date column was split into separate Year, Month, and Day columns <br/> - The following columns were selected: AirportID, Year, Month, Day, Time, TimeZone, SkyCondition, Visibility, WeatherType, DryBulbFarenheit, DryBulbCelsius, WetBulbFarenheit, WetBulbCelsius, DewPointFarenheit, DewPointCelsius, RelativeHumidity, WindSpeed, WindDirection, ValueForWindCharacter, StationPressure, PressureTendency, PressureChange, SeaLevelPressure, RecordType, HourlyPrecip, Altimeter| |Wikipedia SP 500 Dataset|Data is derived from Wikipedia (https://www.wikipedia.org/) based on articles of each S&P 500 company, stored as XML data. <br/>Before uploading to the designer, the dataset was processed as follows: <br/> - Extract text content for each specific company <br/> - Remove wiki formatting <br/> - Remove non-alphanumeric characters <br/> - Convert all text to lowercase <br/> - Known company categories were added <br/>Note that for some companies an article could not be found, so the number of records is less than 500.|-|Restaurant Feature Data| A set of metadata about restaurants and their features, such as food type, dining style, and location. <br/>**Usage**: Use this dataset, in combination with the other two restaurant datasets, to train and test a recommender system.<br/> **Related Research**: Bache, K. and Lichman, M. (2013). [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml). Irvine, CA: University of California, School of Information and Computer Science.| -|Restaurant Ratings| Contains ratings given by users to restaurants on a scale from 0 to 2.<br/>**Usage**: Use this dataset, in combination with the other two restaurant datasets, to train and test a recommender system. <br/>**Related Research**: Bache, K. and Lichman, M. (2013). [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml). Irvine, CA: University of California, School of Information and Computer Science.| -|Restaurant Customer Data| A set of metadata about customers, including demographics and preferences. <br/>**Usage**: Use this dataset, in combination with the other two restaurant datasets, to train and test a recommender system. <br/> **Related Research**: Bache, K. and Lichman, M. (2013). [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml) Irvine, CA: University of California, School of Information and Computer Science.| +|Restaurant Feature Data| A set of metadata about restaurants and their features, such as food type, dining style, and location. <br/>**Usage**: Use this dataset, in combination with the other two restaurant datasets, to train and test a recommender system.<br/> **Related Research**: Bache, K. and Lichman, M. (2013). [UCI Machine Learning Repository](https://archive.ics.uci.edu/). Irvine, CA: University of California, School of Information and Computer Science.| +|Restaurant Ratings| Contains ratings given by users to restaurants on a scale from 0 to 2.<br/>**Usage**: Use this dataset, in combination with the other two restaurant datasets, to train and test a recommender system. <br/>**Related Research**: Bache, K. and Lichman, M. (2013). [UCI Machine Learning Repository](https://archive.ics.uci.edu/). 
Irvine, CA: University of California, School of Information and Computer Science.| +|Restaurant Customer Data| A set of metadata about customers, including demographics and preferences. <br/>**Usage**: Use this dataset, in combination with the other two restaurant datasets, to train and test a recommender system. <br/> **Related Research**: Bache, K. and Lichman, M. (2013). [UCI Machine Learning Repository](https://archive.ics.uci.edu/) Irvine, CA: University of California, School of Information and Computer Science.| ## Clean up resources |
machine-learning | Tutorial Designer Automobile Price Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-designer-automobile-price-deploy.md | |
machine-learning | Tutorial Designer Automobile Price Train Score | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-designer-automobile-price-train-score.md | |
mariadb | Concepts Connectivity Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity-architecture.md | The following table lists the gateway IP addresses of the Azure Database for Mar | Germany West Central | 51.116.152.0 | | | India Central | 104.211.96.159 | | | | India South | 104.211.224.146 | | |-| India West | 104.211.160.80 | | | +| India West | 104.211.144.32 |104.211.160.80 | | | Japan East | 40.79.192.23, 40.79.184.8 | 13.78.61.196 | | | Japan West | 191.238.68.11, 40.74.96.6, 40.74.96.7 | 104.214.148.156 | | | Korea Central | 52.231.17.13 | 52.231.32.42 | | |
migrate | Tutorial Migrate Hyper V | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md | Before you begin this tutorial, you should: 1. [Review](migrate-support-matrix-hyper-v-migration.md#hyper-v-vms) the requirements for Hyper-V VMs that you want to migrate to Azure. 1. We recommend that you [assess Hyper-V VMs](tutorial-assess-hyper-v.md) before migrating them to Azure, but you don't have to. 1. Go to the already created project or [create a new project.](./create-manage-projects.md)-1. Verify permissions for your Azure account - Your Azure account needs permissions to create a VM, and write to an Azure managed disk. +1. Verify permissions for your Azure account - Your Azure account needs permissions to create a VM, write to an Azure managed disk, and manage failover operations for the Recovery Services Vault associated with your Azure Migrate project. ## Download the provider For migrating Hyper-V VMs, the Migration and modernization tool installs softwar 1. In **Discover machines** > **Are your machines virtualized?**, select **Yes, with Hyper-V**. 1. In **Target region**, select the Azure region to which you want to migrate the machines. 1. Select **Confirm that the target region for migration is region-name**.-1. Click **Create resources**. This creates an Azure Site Recovery vault in the background. +1. Click **Create resources**. This creates a Recovery Services Vault in the background. - If you've already set up migration with the Migration and modernization tool, this option won't appear since resources were set up previously. - You can't change the target region for this project after clicking this button. - All subsequent migrations are to this region. Run the provider setup file on each host, as described below: 1. Select **AzureSiteRecoveryProvider.exe** file. - In the provider installation wizard, ensure **On (recommended)** is checked, and then select **Next**. - Select **Install** to accept the default installation folder.- - Select **Register** to register this server in Azure Site Recovery vault. + - Select **Register** to register this server in the Recovery Services Vault. - Select **Browse**. - Locate the registration key and select **Open**. - Select **Next**. |
mysql | Concepts Version Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md | Azure Database for MySQL currently supports the following major and minor versio | Version | [Single Server](single-server/overview.md)<br />Current minor version | [Flexible Server](flexible-server/overview.md)<br />Current minor version | | : | : | : | | MySQL Version 5.7 | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.40](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-40.html) |-| MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.31](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-31.html) | +| MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.32](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-32.html) | > [!NOTE] > In the Single Server deployment option, a gateway redirects the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. If your application has a requirement to connect to a specific major version, say v5.7 or v8.0, you can do so by changing the port in your server connection string as explained in our documentation [here.](concepts-supported-versions.md#connect-to-a-gateway-node-that-is-running-a-specific-mysql-version) |
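As the note above explains, the gateway can report a different MySQL version than the one running on the server instance, so checking from the MySQL prompt is the reliable way. A minimal sketch, assuming the `mysql` client is installed and the server and user names are placeholders:

```bash
# Connect to the server and print the engine version actually running on the instance.
# The gateway may advertise a different version in the client banner.
mysql -h <servername>.mysql.database.azure.com -u <username> -p -e "SELECT VERSION();"
```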
mysql | Concepts Connectivity Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md | The following table lists the gateway IP addresses of the Azure Database for MyS | Germany West Central | 51.116.152.0 | | | | India Central | 20.192.96.33 | 104.211.96.159 | | | India South | 40.78.192.32 | 104.211.224.146 | |-| India West | 104.211.160.80 | | | +| India West | 104.211.144.32 | 104.211.160.80 | | | Japan East | 40.79.192.23, 40.79.184.8 | 13.78.61.196 | | | Japan West | 191.238.68.11, 40.74.96.6, 40.74.96.7, 40.74.96.32 | 104.214.148.156 | | | Korea Central | 52.231.17.13 | 52.231.32.42 | | |
nat-gateway | Nat Gateway Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-gateway-resource.md | The total number of connections that a NAT gateway can support at any given time - NAT Gateway doesn't support Public IP addresses with routing configuration type **internet**. To see a list of Azure services that do support routing configuration **internet** on public IPs, see [supported services for routing over the public internet](/azure/virtual-network/ip-services/routing-preference-overview#supported-services). +- Public IPs with DDoS protection enabled are not supported with NAT gateway. See [DDoS limitations](/azure/ddos-protection/ddos-protection-sku-comparison#limitations) for more information. + ## Next steps - Review [Azure NAT Gateway](nat-overview.md). |
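Relating to the DDoS limitation noted above, you may want to check whether a public IP already has DDoS protection configured before associating it with a NAT gateway. This is a hedged sketch; the `ddosSettings` property and its exact shape depend on the API version, and the resource names are placeholders.

```azurecli
# Show the DDoS-related settings on an existing public IP. Property availability
# varies by API version; an empty result means no per-IP DDoS configuration.
az network public-ip show \
  --resource-group <resource-group> \
  --name <public-ip-name> \
  --query "ddosSettings"
```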
nat-gateway | Tutorial Dual Stack Outbound Nat Load Balancer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer.md | -NAT gateway supports the use of IPv4 public IP addresses for outbound connectivity whereas load balancer supports both IPv4 and IPv6 public IP addresses. When NAT gateway with an IPv4 public IP is present with a load balancer using an IPv4 public IP address, NAT gateway takes precedence over load balancer for providing outbound connectivity. When a NAT gateway is deployed in a dual-stack network with a IPv6 load balancer, IPv4 outbound traffic is handled by the NAT gateway, and IPv6 outbound traffic is handled by the load balancer. +NAT gateway supports the use of IPv4 public IP addresses for outbound connectivity whereas load balancer supports both IPv4 and IPv6 public IP addresses. When NAT gateway with an IPv4 public IP is present with a load balancer using an IPv4 public IP address, NAT gateway takes precedence over load balancer for providing outbound connectivity. When a NAT gateway is deployed in a dual-stack network with a IPv6 load balancer, IPv4 outbound traffic uses the NAT gateway, and IPv6 outbound traffic uses the load balancer. In this tutorial, you learn how to: In this tutorial, you learn how to: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -# [**CLI**](#tab/dual-stack-outbound--cli) +# [**CLI**](#tab/dual-stack-outbound-cli) [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] In this tutorial, you learn how to: -## Create virtual network --In this section, create a virtual network for the virtual machine and load balancer. +## Sign in to Azure # [**Portal**](#tab/dual-stack-outbound-portal) -1. Sign-in to the [Azure portal](https://portal.azure.com). --1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. --1. Select **+ Create**. --1. In the **Basics** tab of **Create virtual network**, enter or select the following information. -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **Create new**. </br> Enter **TutorialIPv6NATLB-rg**. </br> Select **OK**. | - | **Instance details** | | - | Name | Enter **myVNet**. | - | Region | Select **West US 2**. | --1. Select the **IP Addresses** tab, or **Next: IP Addresses**. --1. Leave the default IPv4 address space of **10.1.0.0/16**. If the default is absent or different, enter an IPv4 address space of **10.1.0.0/16**. --1. Select **default** under **Subnet name**. If default is missing, select **+ Add subnet**. --1. In **Subnet name**, enter **myBackendSubnet**. --1. Leave the default IPv4 subnet of **10.1.0.0/24**. +Sign in to the [Azure portal](https://portal.azure.com) with your Azure account. -1. Select **Save**. If creating a subnet, select **Add**. +# [**CLI**](#tab/dual-stack-outbound-cli) -1. Select the **Security** tab or select **Next: Security**. +Use [az login](/cli/azure/reference-index#az-login) to sign in to Azure. -1. In **BastionHost**, select **Enable**. +```azurecli-interactive +az login +``` + -1. 
Enter or select the following information: +## Create virtual network - | Setting | Value | - | - | -- | - | Bastion name | **myBastion** | - | AzureBastionSubnet address space | Enter **10.1.1.0/26**. | - | Public IP address | Select **Create new**. </br> Enter **myPublicIP-Bastion** in **Name**. </br> Select **OK**. | +In this section, create a virtual network for the virtual machine and load balancer. -1. Select the **Review + create**. +# [**Portal**](#tab/dual-stack-outbound-portal) -1. Select **Create**. -# [**CLI**](#tab/dual-stack-outbound--cli) +# [**CLI**](#tab/dual-stack-outbound-cli) ### Create a resource group The NAT gateway provides the outbound connectivity for the IPv4 portion of the v # [**Portal**](#tab/dual-stack-outbound-portal) -1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results. --1. Select **+ Create**. --1. In the **Basics** tab of **Create network address translation (NAT) gateway**, enter or select the following information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **TutorialIPv6NATLB-rg**. | - | **Instance details** | | - | NAT gateway name | Enter **myNATgateway**. | - | Region | Select **West US 2**. | - | Availability zone | Select a zone or **No Zone**. | - | TCP idle timeout (minutes) | Leave the default of **4**. | --1. Select **Next: Outbound IP**. --1. In **Public IP addresses**, select **Create a new public IP address**. -1. Enter **myPublicIP-NAT** in **Name**. Select **OK**. --1. Select **Next: Subnet**. --1. In **Virtual network**, select **myVNet**. --1. In the list of subnets, select the box for **myBackendSubnet**. --1. Select **Review + create**. --1. Select **Create**. ---# [**CLI**](#tab/dual-stack-outbound--cli) +# [**CLI**](#tab/dual-stack-outbound-cli) Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public IPv4 address for the NAT gateway. The addition of IPv6 to the virtual network must be done after the NAT gateway i 1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -1. Select **myVNet**. +1. Select **vnet-1**. 1. In **Settings**, select **Address space**. The addition of IPv6 to the virtual network must be done after the NAT gateway i 1. Select **Subnets** in **Settings**. -1. Select **myBackendSubnet** in the list of subnets. +1. Select **subnet-1** in the list of subnets. 1. Select the box next to **Add IPv6 address space**. The addition of IPv6 to the virtual network must be done after the NAT gateway i 1. Select **Save**. -# [**CLI**](#tab/dual-stack-outbound--cli) +# [**CLI**](#tab/dual-stack-outbound-cli) Use [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update) to add the IPv6 address space to the virtual network. The network configuration of the virtual machine has IPv4 and IPv6 configuration # [**Portal**](#tab/dual-stack-outbound-portal) -1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. --1. Select **+ Create** then **Azure virtual machine**. --1. In the **Basics** tab of **Create a virtual machine**, enter or select the following information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **TutorialIPv6NATLB-rg**. 
| - | **Instance details** | | - | Virtual machine name | Enter **myVM**. | - | Region | Select **(US) West US 2**. | - | Availability options | Leave the default of **No infrastructure redundancy required**. | - | Security type | Leave the default of **Standard**. | - | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. | - | Size | Select a size. | - | **Administrator account** | | - | Username | Enter a username. | - | Password | Enter a password. | - | Confirm password | Confirm password. | - | **Inbound port rules** | | - | Public inbound ports | Select **None**. | --1. Select the **Networking** tab, or **Next: Disks** then **Next: Networking**. --1. In the **Networking tab**, enter or select the following information: -- | Setting | Value | - | - | -- | - | **Network interface** | | - | Virtual network | Select **myVNet**. | - | Subnet | Select **myBackendSubnet (10.1.0.0/24,2404:f800:8000:122::/64)**. | - | Public IP | Select **None**. | - | NIC network security group | Select **Basic**. | - | Public inbound ports | Select **None**. | --1. Select **Review + create**. --1. Select **Create**. Wait for the virtual machine to finish deploying before continuing on to the next steps. The support IPv6, the virtual machine must have a IPv6 network configuration add 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -1. Select **myVM**. +1. Select **vm-1**. 1. In **Settings** select **Networking**. -1. Select the name of the network interface in the **Network Interface:** field. The name of the network interface is the virtual machine name plus a random number. In this example, it's **myVM202**. +1. Select the name of the network interface in the **Network Interface:** field. The name of the network interface is the virtual machine name plus a random number. In this example, it's **vm-1202**. 1. In the network interface properties, select **IP configurations** in **Settings**. The support IPv6, the virtual machine must have a IPv6 network configuration add | Setting | Value | | - | -- |- | Name | Enter **ipv6config**. | + | Name | Enter **ipconfig-ipv6**. | | IP version | Select **IPv6**. | -1. Leave the rest of the settings at the defaults and select **OK**. +1. Leave the rest of the settings at the defaults and select **Add**. -# [**CLI**](#tab/dual-stack-outbound--cli) +# [**CLI**](#tab/dual-stack-outbound-cli) ### Create NSG The public load balancer has a front-end IPv6 address and outbound rule for the | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialIPv6NATLB-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Name | Enter **myLoadBalancer**. | - | Region | Select **West US 2**. | + | Name | Enter **load-balancer**. | + | Region | Select **East US 2**. | | SKU | Leave the default of **Standard**. | | Type | Select **Public**. |+ | Tier | Leave the default of **Regional**. | 1. Select **Next: Frontend IP configuration**. The public load balancer has a front-end IPv6 address and outbound rule for the | Setting | Value | | - | -- |- | Name | Enter **myFrontend-IPv6**. | + | Name | Enter **frontend-ipv6**. | | IP version | Select **IPv6**. | | IP type | Select **IP address**. |- | Public IP address | Select **Create new**. </br> In **Name** enter **myPublicIP-IPv6**. </br> Select **OK**. | + | Public IP address | Select **Create new**. </br> In **Name** enter **public-ip-ipv6**. </br> Select **OK**. | 1. 
Select **Add**. The public load balancer has a front-end IPv6 address and outbound rule for the | Setting | Value | | - | -- |- | Name | Enter **myBackendPool**. | - | Virtual network | Select **myVNet (TutorialIPv6NATLB-rg)**. | + | Name | Enter **backend-pool**. | + | Virtual network | Select **vnet-1 (test-rg)**. | | Backend Pool Configuration | Leave the default of **NIC**. | 1. Select **Save**. The public load balancer has a front-end IPv6 address and outbound rule for the | Setting | Value | | - | -- |- | Name | Enter **myOutboundRule**. | + | Name | Enter **outbound-rule**. | | IP Version | Select **IPv6**. |- | Frontend IP address | Select **myFrontend-IPv6**. | + | Frontend IP address | Select **frontend-ipv6**. | | Protocol | Leave the default of **All**. | | Idle timeout (minutes) | Leave the default of **4**. | | TCP Reset | Leave the default of **Enabled**. |- | Backend pool | Select **myBackendPool**. | + | Backend pool | Select **backend-pool**. | | **Port allocation** | | | Port allocation | Select **Manually choose number of outbound ports**. | | **Outbound ports** | | Wait for the load balancer to finish deploying before proceeding to the next ste 1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results. -1. Select **myLoadBalancer**. +1. Select **load-balancer**. 1. In **Settings** select **Backend pools**. -1. Select **myBackendPool**. +1. Select **backend-pool**. -1. In **Virtual network** select **myVNet (TutorialIPv6NATLB-rg)**. +1. In **Virtual network** select **vnet-1 (test-rg)**. 1. In **IP configurations** select **+ Add**. -1. Select the checkbox for **myVM** that corresponds with the **IP configuration** of **ipv6config**. Don't select **ipconfig1**. +1. Select the checkbox for **vm-1** that corresponds with the **IP configuration** of **ipconfig-ipv6**. Don't select **ipconfig1**. 1. Select **Add**. 1. Select **Save**. -# [**CLI**](#tab/dual-stack-outbound--cli) +# [**CLI**](#tab/dual-stack-outbound-cli) Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public IPv6 address for the frontend IP address of the load balancer. Before you can validate outbound connectivity, make not of the IPv4, and IPv6 pu 1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results. -1. Select **myPublicIP-NAT**. +1. Select **public-ip-nat**. 1. Make note of the address in **IP address**. In this example, it's **20.230.191.5**. 1. Return to **Public IP addresses**. -1. Select **myPublicIP-IPv6**. +1. Select **public-ip-ipv6**. 1. Make note of the address in **IP address**. In this example, it's **2603:1030:c02:8::14**. -# [**CLI**](#tab/dual-stack-outbound--cli) +# [**CLI**](#tab/dual-stack-outbound-cli) Use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-ip-show) to obtain the IPv4 and IPv6 public IP addresses. Make note of both IP addresses. Use the IPs to verify the outbound connectivity 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -1. Select **myVM**. +1. Select **vm-1**. -1. In the **Overview** of **myVM**, select **Connect** then **Bastion**. +1. In the **Overview** of **myVM**, select **Connect** then **Bastion**. Select **Use Bastion** 1. Enter the username and password you created when you created the virtual machine. 1. Select **Connect**. -1. 
On the desktop of **myVM**, open **Microsoft Edge**. +1. At the command line, enter the following command to verify the IPv4 address. -1. To confirm the IPv4 address, enter `http://v4.testmyipv6.com` in the address bar. + ```bash + curl -4 icanhazip.com + ``` -1. You should see the IPv4 address displayed. In this example, the IP of **20.230.191.5** is displayed. + ```output + azureuser@vm-1:~$ curl -4 icanhazip.com + 20.230.191.5 + ``` - :::image type="content" source="./media/tutorial-dual-stack-outbound-nat-load-balancer/portal-verify-ipv4.png" alt-text="Screenshot of outbound IPv4 public IP address from portal steps."::: +1. At the command line, enter the following command to verify the IPv6 address. -1. In the address bar, enter `http://v6.testmyipv6.com` + ```bash + curl -6 icanhazip.com + ``` -1. You should see the IPv6 address displayed. In this example, the IP of **2603:1030:c02:8::14** is displayed. + ```output + azureuser@vm-1:~$ curl -6 icanhazip.com + 2603:1030:c02:8::14 + ``` - :::image type="content" source="./media/tutorial-dual-stack-outbound-nat-load-balancer/portal-verify-ipv6.png" alt-text="Screenshot of outbound IPv6 public IP address from portal steps."::: --1. Close the bastion connection to **myVM**. +1. Close the bastion connection to **vm-1**. --# [**CLI**](#tab/dual-stack-outbound--cli) +# [**CLI**](#tab/dual-stack-outbound-cli) 1. Sign in to the [Azure portal](https://portal.azure.com). Make note of both IP addresses. Use the IPs to verify the outbound connectivity 1. Close the bastion connection to **myVM**. + ## Clean up resources When you're finished with the resources created in this article, delete the resource group and all of the resources it contains. # [**Portal**](#tab/dual-stack-outbound-portal) -1. In the search box at the top of the portal, enter **TutorialIPv6NATLB-rg**. Select **TutorialIPv6NATLB-rg** in the search results in **Resource groups**. --1. Select **Delete resource group**. --1. Enter **TutorialIPv6NATLB-rg** for **TYPE THE RESOURCE GROUP NAME** and select **Delete**. -# [**CLI**](#tab/dual-stack-outbound--cli) +# [**CLI**](#tab/dual-stack-outbound-cli) Use [az group delete](/cli/azure/group#az-group-delete) to delete the resource group and the resources it contains. |
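A minimal sketch of the cleanup command referenced above, assuming the `test-rg` resource group name used in the updated steps of this tutorial:

```azurecli
# Delete the resource group and everything it contains; --yes skips the
# confirmation prompt and --no-wait returns without waiting for completion.
az group delete --name test-rg --yes --no-wait
```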
openshift | Howto Create Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-service-principal.md | AZ_SUB_ID=$(az account show --query id -o tsv) az ad sp create-for-rbac -n "test-aro-SP" --role contributor --scopes "/subscriptions/${AZ_SUB_ID}/resourceGroups/${AZ_RG}" ``` +> [!NOTE] +> +> Service principals must be unique per Azure Red Hat OpenShift (ARO) cluster. + The output is similar to the following example: ``` |
openshift | Howto Restrict Egress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md | az network firewall ip-config create -g $RESOURCEGROUP -f aro-private -n fw-conf ### Capture Azure Firewall IPs for a later use ```azurecli FWPUBLIC_IP=$(az network public-ip show -g $RESOURCEGROUP -n fw-ip --query "ipAddress" -o tsv)-FWPRIVATE_IP=$(az network firewall show -g $RESOURCEGROUP -n aro-private --query "ipConfigurations[0].privateIpAddress" -o tsv) +FWPRIVATE_IP=$(az network firewall show -g $RESOURCEGROUP -n aro-private --query "ipConfigurations[0].privateIPAddress" -o tsv) echo $FWPUBLIC_IP echo $FWPRIVATE_IP |
partner-solutions | Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/partners.md | Azure Native ISV Services are available through the Marketplace. ## Observability -|Partner |Description | -||-| -|[Datadog - An Azure Native ISV Service](datadog/overview.md) | Monitoring and analytics platform for large scale applications. | -|[Elastic](elastic/overview.md) | Build modern search experiences and maximize visibility into health, performance, and security of your infrastructure, applications, and data. | -|[Logz.io](logzio/overview.md) | Observability platform that centralizes log, metric, and tracing analytics. | -|[Azure Native Dynatrace Service](dynatrace/dynatrace-overview.md) | Provides deep cloud observability, advanced AIOps, and continuous runtime application security. | -|[Azure Native New Relic Service Preview](new-relic/new-relic-overview.md) | A cloud-based end-to-end observability platform for analyzing and troubleshooting the performance of applications, infrastructure, logs, real-user monitoring, and more. | +|Partner |Description | | Get started on| +||-|-|-| +|[Datadog - An Azure Native ISV Service](datadog/overview.md) | Monitoring and analytics platform for large scale applications. | [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Datadog%2Fmonitors) | [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadog1591740804488.dd_liftr_v2?tab=Overview) | +|[Elastic](elastic/overview.md) | Build modern search experiences and maximize visibility into health, performance, and security of your infrastructure, applications, and data. | [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Elastic%2Fmonitors) | [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/elastic.ec-azure-pp?tab=Overview) | +|[Azure Native Dynatrace Service](dynatrace/dynatrace-overview.md) | Provides deep cloud observability, advanced AIOps, and continuous runtime application security. | [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Dynatrace.Observability%2Fmonitors) | [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview) | +|[Azure Native New Relic Service](new-relic/new-relic-overview.md) | A cloud-based end-to-end observability platform for analyzing and troubleshooting the performance of applications, infrastructure, logs, real-user monitoring, and more. | [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NewRelic.Observability%2Fmonitors) | [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/newrelicinc1635200720692.newrelic_liftr_payg?tab=Overview) | +|[Logz.io](logzio/overview.md) | Observability platform that centralizes log, metric, and tracing analytics. | [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Logz%2Fmonitors) | [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/logz.logzio_via_liftr?tab=Overview) | ## Data and storage -|Partner |Description | -||-| -|[Apache Kafka for Confluent Cloud](apache-kafka-confluent-cloud/overview.md) | Fully managed event streaming platform powered by Apache Kafka. 
| -|[Azure Native Qumulo Scalable File Service](qumulo/qumulo-overview.md) | Multi-petabyte scale, single namespace, multi-protocol file data platform with the performance, security, and simplicity to meet the most demanding enterprise workloads. | +|Partner |Description || Get started on| +||-||-| +|[Apache Kafka for Confluent Cloud](apache-kafka-confluent-cloud/overview.md) | Fully managed event streaming platform powered by Apache Kafka. | [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Confluent%2Forganizations) | [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/confluentinc.confluent-cloud-azure-prod?tab=Overview) | +|[Azure Native Qumulo Scalable File Service](qumulo/qumulo-overview.md) | Multi-petabyte scale, single namespace, multi-protocol file data platform with the performance, security, and simplicity to meet the most demanding enterprise workloads. | [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Qumulo.Storage%2FfileSystems) | [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp?tab=Overview) | ## Networking and security -|Partner |Description | -||-| -|[NGINXaaS - Azure Native ISV Service](nginx/nginx-overview.md) | Use NGINXaaS as a reverse proxy within your Azure environment. | -|[Cloud NGFW by Palo Alto Networks Preview](palo-alto/palo-alto-overview.md) | Use Palo Alto Networks as a firewall in the Azure environment. | +|Partner |Description || Get started on | +||-||-| +|[NGINXaaS - Azure Native ISV Service](nginx/nginx-overview.md) | Use NGINXaaS as a reverse proxy within your Azure environment. | [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NGINX.NGINXPLUS%2FnginxDeployments) | [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/f5-networks.f5-nginx-for-azure?tab=Overview) | +|[Cloud NGFW by Palo Alto Networks Preview](palo-alto/palo-alto-overview.md) | Use Palo Alto Networks as a firewall in the Azure environment. | [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/PaloAltoNetworks.Cloudngfw%2Ffirewalls) | [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/paloaltonetworks.pan_swfw_cloud_ngfw?tab=Overview) | |
postgresql | Concepts Supported Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md | Last updated 08/25/2022 Azure Database for PostgreSQL - Flexible Server currently supports the following major versions: -## PostgreSQL version 15 (Preview) +## PostgreSQL version 15 -PostgreSQL version 15 is now available in public preview in limited regions (West Europe, East US, West US2, South East Asia, UK South, North Europe, Japan east).The current minor release is **15.2**.Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/15.2/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. +PostgreSQL version 15 is now generally available in all Azure regions. The current minor release is **15.2**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/15.2/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. ## PostgreSQL version 14 |
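With version 15 generally available, a new server can be pinned to that major version at creation time. A minimal sketch with the Azure CLI; the resource group and server names are placeholders:

```azurecli
# Create a flexible server on PostgreSQL major version 15
az postgres flexible-server create \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --version 15
```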
postgresql | Howto Alert On Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-alert-on-metrics.md | To [set up a new metric alert rule](../../azure-monitor/alerts/alerts-create-new > [!IMPORTANT] > The resources you select must be within the same resource type, location, and subscription. Resources that do not fit these criteria are not selectable. -You can also use [Azure Resource Manager templates](../../azure-monitor/alerts/alerts-create-new-alert-rule.md#create-a-new-alert-rule-using-an-arm-template) to deploy multi-resource metric alerts. Learn more in our documentation, [Understand how metric alerts work in Azure Monitor](../../azure-monitor/alerts/alerts-types.md). +You can also use [Azure Resource Manager templates](../../azure-monitor/alerts/alerts-create-new-alert-rule.md#create-a-new-alert-rule-using-an-arm-template) to deploy multi-resource metric alerts. To learn more about multi-resource alerts, refer to our blog [Scale Monitoring with Azure PostgreSQL Multi-Resource Alert](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/scale-monitoring-with-azure-postgresql-multi-resource-alerts/ba-p/3866526). ## Manage your alerts |
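Besides ARM templates, a single metric alert rule can also be created from the CLI. A hedged sketch for a CPU alert on a flexible server; the names, threshold, and scope are illustrative and not taken from the commit above:

```azurecli
# Alert when the server's average CPU stays above 80 percent
az monitor metrics alert create \
    --name cpu-over-80 \
    --resource-group myresourcegroup \
    --scopes $(az postgres flexible-server show -g myresourcegroup -n mydemoserver --query id -o tsv) \
    --condition "avg cpu_percent > 80" \
    --description "Average CPU above 80 percent"
```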
postgresql | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md | Last updated 05/10/2023 [!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)] -This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL +This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant to Flexible Server - PostgreSQL + ## Release: July 2023 * Support for [minor versions](./concepts-supported-versions.md) 15.3 (preview), 14.8, 13.11, 12.15, 11.20 <sup>$</sup>+* General Availability of PostgreSQL 15 for Azure Database for PostgreSQL – Flexible Server. ## Release: June 2023 * Support for [minor versions](./concepts-supported-versions.md) 15.2 (preview), 14.7, 13.10, 12.14, 11.19 <sup>$</sup> This page provides latest news and updates regarding feature additions, engine v ## Release: May 2023 * Public preview of [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL – Flexible Server.-* Postgres 15 is now available in public preview for Azure Database for PostgreSQL – Flexible Server in limited regions (West Europe, East US, West US2, South East Asia, UK South, North Europe, Japan east). +* PostgreSQL 15 is now available in public preview for Azure Database for PostgreSQL – Flexible Server in limited regions (West Europe, East US, West US2, South East Asia, UK South, North Europe, Japan east). * General availability: [Pgvector extension](how-to-use-pgvector.md) for Azure Database for PostgreSQL - Flexible Server. * General availability: [Azure Key Vault Managed HSM](./concepts-data-encryption.md#using-azure-key-vault-managed-hsm) with Azure Database for PostgreSQL- Flexible server * General availability [32 TB Storage](./concepts-compute-storage.md) with Azure Database for PostgreSQL- Flexible server |
postgresql | Common Errors And Special Scenarios Fms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/common-errors-and-special-scenarios-fms.md | This article explains common errors and special scenarios for PostgreSQL Single - Mitigation/Resolution - Customers need to go to the server parameters of the flexible server and allowlist all the extensions they intend to use. At least the ones mentioned in the error message should be allowed to be listed. + Customers need to go to the server parameters of the flexible server and allowlist all the extensions they intend to use. At least the ones mentioned in the error message should be allowlisted. To add extensions to the allowlist, you can edit the list of the `azure.extensions` parameter in the Server parameters for your flexible server. ## No pg_hba.conf entry for host |
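The `azure.extensions` list mentioned above can also be set from the CLI instead of the portal. A minimal sketch; the server name, resource group, and extension names are examples only:

```azurecli
# Allowlist the extensions named in the error message (comma-separated, uppercase names)
az postgres flexible-server parameter set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name azure.extensions \
    --value POSTGIS,PG_TRGM
```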
sap | Dbms Guide Oracle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-oracle.md | The specific scenario of SAP applications using Oracle Databases is supported as ### General Recommendations for running SAP on Oracle on Azure -When installing or migrating existing SAP on Oracle systems to Azure, the following deployment pattern should be followed: +Installing or migrating existing SAP on Oracle systems to Azure, the following deployment pattern should be followed: 1. Use the most [recent Oracle Linux](https://docs.oracle.com/en/operating-systems/oracle-linux/8/) version available (Oracle Linux 8.6 or higher) 2. Use the most recent Oracle Database version available with the latest SAP Bundle Patch (SBP) (Oracle 19 Patch 15 or higher) [2799920 - Patches for 19c: Database](https://launchpad.support.sap.com/#/notes/2799920) 3. Use Automatic Storage Management (ASM) for small, medium and large sized databases on block storage-4. Azure Premium Storage SSD should be used. Do not use Standard or other storage types. +4. Azure Premium Storage SSD should be used. Don't use Standard or other storage types. 5. ASM removes the requirement for Mirror Log. Follow the guidance from Oracle in Note [888626 - Redo log layout for high-end systems](https://launchpad.support.sap.com/#/notes/888626)-6. Use ASMLib and do not use udev +6. Use ASMLib and don't use udev 7. Azure NetApp Files deployments should use Oracle dNFS (Oracle’s own high performance Direct NFS solution) 8. Large databases benefit greatly from large SGA sizes. Large customers should deploy on Azure M-series with 4 TB or more RAM size. - Set Linux Huge Pages to 75% of Physical RAM size - Set SGA to 90% of Huge Page size 9. Oracle Home should be located outside of the “root” volume or disk. Use a separate disk or ANF volume. The disk holding the Oracle Home should be 64GB or larger-10. The size of the boot disk for large high performance Oracle database servers is important. As a minimum a P10 disk should be used for M-series or E-series. Do not use small disks such as P4 or P6. A small disk can cause performance issues. +10. The size of the boot disk for large high performance Oracle database servers is important. As a minimum a P10 disk should be used for M-series or E-series. Don't use small disks such as P4 or P6. A small disk can cause performance issues. 11. Accelerated Networking must be enabled on all VMs. Upgrade to the latest OL release if there are any problems enabling Accelerated Networking 12. Check for updates in this documentation and SAP note [2039619 - SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions - SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2039619) For information about which Oracle versions and corresponding OS versions are supported for running SAP on Oracle on Azure Virtual Machines, see SAP Note [<u>2039619</u>](https://launchpad.support.sap.com/#/notes/2039619). -General information about running SAP Business Suite on Oracle can be found in the [<u>SAP on Oracle community page</u>](https://www.sap.com/community/topic/oracle.html). SAP on Oracle on Azure is only supported on Oracle Linux (and not Suse or Red Hat). Oracle RAC is not supported on Azure because RAC would require Multicast networking. +General information about running SAP Business Suite on Oracle can be found in the [<u>SAP on Oracle community page</u>](https://www.sap.com/community/topic/oracle.html). 
SAP on Oracle on Azure is only supported on Oracle Linux (and not Suse or Red Hat). Oracle RAC isn't supported on Azure because RAC would require Multicast networking. ## Storage configuration Checklist for Oracle Automatic Storage Management: 4. No Mirror Log is required for ASM [888626 - Redo log layout for high-end systems](https://launchpad.support.sap.com/#/notes/888626) 5. ASM Disk Groups configured as per Variant 1, 2 or 3 below 6. ASM Allocation Unit size = 4MB (default). VLDB OLAP systems such as BW may benefit from larger ASM Allocation Unit size. Change only after confirming with Oracle support-7. ASM Sector Size and Logical Sector Size = default (UDEV is not recommended but requires 4k) +7. ASM Sector Size and Logical Sector Size = default (UDEV isn't recommended but requires 4k) 8. Appropriate ASM Variant is used. Production systems should use Variant 2 or 3 ### Oracle Automatic Storage Management Disk Groups Oracle ASM disk group recommendation: |ASM Disk Group Name |Stores | Azure Storage | |-||--| | **+DATA** |All data files |3-6 x P 30 (1 TiB) |-| |Control file (first copy) | To increase DB size add extra P30 disks | +| |Control file (first copy) | To increase DB size, add extra P30 disks | | |Online redo logs (first copy) | | | **+ARCH** |Control file (second copy) | 2 x P20 (512 GiB) | | |Archived redo logs | | Oracle ASM disk group recommendation: Customer has medium to large sized databases where backup and/or restore + -recovery of all databases cannot be accomplished in a timely fashion. +recovery of all databases can't be accomplished in a timely fashion. -Usually customers will use RMAN, Azure Backup for Oracle and/or disk snap techniques in combination. +Usually customers are using RMAN, Azure Backup for Oracle and/or disk snap techniques in combination. Major differences to Variant 1 are: Major differences to Variant 1 are: ### Variant 3 – huge data and data change volumes more than 5 TB, restore time crucial -Customer has a huge database where backup and/or restore + recovery of a single databases cannot be accomplished in a timely fashion. +Customer has a huge database where backup and/or restore + recovery of a single database can't be accomplished in a timely fashion. -Usually customers will use RMAN, Azure Backup for Oracle and/or disk snap techniques in combination. In this variant, each relevant database file type is separated to different Oracle ASM disk groups. +Usually customers are using RMAN, Azure Backup for Oracle and/or disk snap techniques in combination. In this variant, each relevant database file type is separated to different Oracle ASM disk groups. |ASM Disk Group Name | Stores | Azure Storage | |||| Usually customers will use RMAN, Azure Backup for Oracle and/or disk snap techni ### Adding Space to ASM + Azure Disks -Oracle ASM Disk Groups can either be extended by adding extra disks or by extending current disks. It is recommended to add extra disks rather than extending existing disks. Review these MOS articles and links MOS Notes 1684112.1 and 2176737.1 +Oracle ASM Disk Groups can either be extended by adding extra disks or by extending current disks. It's recommended to add extra disks rather than extending existing disks. Review these MOS articles and links MOS Notes 1684112.1 and 2176737.1 -ASM will add a disk to the disk group: +ASM adds a disk to the disk group: `asmca -silent -addDisk -diskGroupName DATA -disk '/dev/sdd1'` -ASM will automatically rebalance the data. +ASM automatically rebalances the data. 
To check rebalancing run this command. `ps -ef | grep rbal` Documentation is available with: ### Monitoring SAP on Oracle ASM Systems on Azure -Run an Oracle AWR report as the first step when troubleshooting a performance problem. Disk performance metrics will be detailed in the AWR report. +Run an Oracle AWR report as the first step when troubleshooting a performance problem. Disk performance metrics are detailed in the AWR report. Disk performance can be monitored from inside Oracle Enterprise Manager and via external tools. Documentation which might help is available here: - [Using Views to Display Oracle ASM Information](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/views-asm-info.html#GUID-23E1F0D8-ECF5-4A5A-8C9C-11230D2B4AD4) - [ASMCMD Disk Group Management Commands (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/asmcmd-diskgroup-commands.html#GUID-55F7A91D-2197-467C-9847-82A3308F0392) -OS level monitoring tools cannot monitor ASM disks as there is no recognizable file system. Freespace monitoring must be done from within Oracle. +OS level monitoring tools can't monitor ASM disks as there is no recognizable file system. Freespace monitoring must be done from within Oracle. ### Training Resources on Oracle Automatic Storage Management (ASM) Oracle DBAs that are not familiar with Oracle ASM follow the training materials ## Azure NetApp Files (ANF) with Oracle dNFS (Direct NFS) -The combination of Azure VM’s and ANF is a robust and proven combination implemented by many customers on an exceptionally large scale. +The combination of Azure VMs and ANF is a robust and proven combination implemented by many customers on an exceptionally large scale. Databases of 100+ TB are already running productive on this combination. To start, we wrote a detailed blog on how to set up this combination: Databases of 100+ TB are already running productive on this combination. To star More general information -- [TR-3633: Oracle Databases on NetApp ONTAP \| NetApp](https://www.netapp.com/pdf.html?item=/media/8744-tr3633pdf.pdf)-- [NFS best practice and implementation guide \| TR-4067 (netapp.com)](https://www.netapp.com/media/10720-tr-4067.pdf)+- [Solution architectures using Azure NetApp Files | Oracle](../../azure-netapp-files/azure-netapp-files-solution-architectures.md#oracle) +- [Solution architectures using Azure NetApp Files | SAP on anyDB](../../azure-netapp-files/azure-netapp-files-solution-architectures.md#sap-anydb) Mirror Log is required on dNFS ANF Production systems. Even though the ANF is highly redundant, Oracle still requires a mirrored redo-logfile volume. The recommendation is to create two separate volumes and configure origlogA together with mirrlogB and origlogB together with mirrlogA. In this case, you make use of a distributed load balancing of the redo-logfiles. -The mount option “nconnect” is NOT recommended when the dNFS client is configured. dNFS manages the IO channel and makes use of multiple sessions, so this option is obsolete and can cause manifold issues. The dNFS client will ignore the mount options and will handle the IO directly. +The mount option “nconnect” is NOT recommended when the dNFS client is configured. dNFS manages the IO channel and makes use of multiple sessions, so this option is obsolete and can cause manifold issues. The dNFS client is going to ignore the mount options and is going to handle the IO directly. 
Both NFS versions (v3 and v4.1) with ANF are supported for the Oracle binaries, data- and log-files. -We highly recommend using the Oracle dNFS clint for all Oracle volumes. +We highly recommend using the Oracle dNFS client for all Oracle volumes. Recommended mount options are: -| NFS Vers | Mount Options | +| NFS Version | Mount Options | |-|| | **NFSv3** | rw,vers=3,rsize=262144,wsize=262144,hard,timeo=600,noatime | | | | or other backup tools. ## SAP on Oracle on Azure with LVM -ASM is the default recommendation from Oracle for all SAP systems of any size on Azure. Performance, Reliability and Support will be better for customers using ASM. Oracle provide documentation and training for DBAs to transition to ASM and every customer who has migrated to ASM has been pleased with the benefits. In cases where the Oracle DBA team do not follow the recommendation from Oracle, Microsoft and SAP to use ASM the following LVM configuration should be used. +ASM is the default recommendation from Oracle for all SAP systems of any size on Azure. Performance, Reliability and Support are better for customers using ASM. Oracle provide documentation and training for DBAs to transition to ASM and every customer who has migrated to ASM has been pleased with the benefits. In cases where the Oracle DBA team doesn't follow the recommendation from Oracle, Microsoft and SAP to use ASM the following LVM configuration should be used. Note that: when creating LVM the “-i” option must be used to evenly distribute data across the number of disks in the LVM group. Mirror Log is required when running LVM. | Oracle Home, saptrace, ... | Premium | None | None | 1. Striping: LVM stripe using RAID0-2. During R3load migrations the Host Cache option for SAPDATA should be set to None +2. During R3load migrations, the Host Cache option for SAPDATA should be set to None 3. oraarch: LVM is optional The disk selection for hosting Oracle's online redo logs should be driven by IOPS requirements. It's possible to store all sapdata1...n (tablespaces) on a single mounted disk as long as the volume, IOPS, and throughput satisfy the requirements. The disk selection for hosting Oracle's online redo logs should be driven by IOP | Oracle Home, saptrace, ... | Premium | None | None | 1. Striping: LVM stripe using RAID0-2. During R3load migrations the Host Cache option for SAPDATA should be set to None +2. During R3load migrations, the Host Cache option for SAPDATA should be set to None 3. oraarch: LVM is optional ## Azure Infra: VM Throughput Limits & Azure Disk Storage Options Another good Oracle whitepaper [Setting up Oracle 12c Data Guard for SAP Custome VLDB SAP on Oracle on Azure deployments apply SGA sizes in excess of 3TB.  Modern versions of Oracle handle large SGA sizes well and significantly reduce IO.  Review the AWR report and increase the SGA size to reduce read IO.  -As general guidance Linux Huge Pages should be configured to approximately 75% of the VM RAM size.  The SGA size can be set to 90% of the Huge Page size.  A approximate example would be a m192ms VM with 4 TB of RAM would have Huge Pages set proximately 3 TB.  The SGA can be set to a value a little less such as 2.95 TB. +As general guidance Linux Huge Pages should be configured to approximately 75% of the VM RAM size.  The SGA size can be set to 90% of the Huge Page size.  An approximate example would be a m192ms VM with 4 TB of RAM would have Huge Pages set proximately 3 TB.  The SGA can be set to a value a little less such as 2.95 TB. 
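As a back-of-the-envelope sketch of the 75% huge page guidance above (the 4 TB VM size is illustrative and the default 2 MiB huge page size is assumed): 75% of 4 TiB is 3 TiB, which corresponds to 1,572,864 huge pages.

```bash
# 0.75 * 4096 GiB = 3072 GiB; 3072 GiB / 2 MiB per page = 1,572,864 huge pages
echo "vm.nr_hugepages = 1572864" >> /etc/sysctl.conf
sysctl -p                      # apply the setting
grep Huge /proc/meminfo        # verify HugePages_Total
```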
Large SAP customers running on High Memory Azure VMs greatly benefit from HugePages as described in this [article](https://www.carajandb.com/en/blog/2016/7-easy-steps-to-configure-hugepages-for-your-oracle-database-server/) SAP on Oracle on Azure also supports Windows. The recommendations for Windows de Windows Server 2022 (only from Oracle Database 19.13.0 on) Windows Server 2019 (only from Oracle Database 19.5.0 on) 2. There is no support for ASM on Windows. Windows Storage Spaces should be used to aggregate disks for optimal performance-3. Install the Oracle Home on a dedicated independent disk (do not install Oracle Home on the C: Drive) +3. Install the Oracle Home on a dedicated independent disk (don't install Oracle Home on the C: Drive) 4. All disks must be formatted NTFS 5. Follow the Windows Tuning guide from Oracle and enable large pages, lock pages in memory and other Windows specific settings -At the time, of writing ASM for Windows customers on Azure is not supported. SWPM for Windows does not support ASM currently. VLDB SAP on Oracle migrations to Azure have required ASM and have therefore selected Oracle Linux. +At the time, of writing ASM for Windows customers on Azure isn't supported. SWPM for Windows does not support ASM currently. VLDB SAP on Oracle migrations to Azure have required ASM and have therefore selected Oracle Linux. ## Storage Configurations for SAP on Oracle on Windows At the time, of writing ASM for Windows customers on Azure is not supported. SWP | I:\Oracle Home, saptrace, ... | Premium | None | None | 1. Striping: Windows Storage Spaces-2. During R3load migrations the Host Cache option for SAPDATA should be set to None +2. During R3load migrations, the Host Cache option for SAPDATA should be set to None 3. oraarch: Windows Storage Spaces is optional The disk selection for hosting Oracle's online redo logs should be driven by IOPS requirements. It's possible to store all sapdata1...n (tablespaces) on a single mounted disk as long as the volume, IOPS, and throughput satisfy the requirements. The disk selection for hosting Oracle's online redo logs should be driven by IOP | K:\Oracle Home, saptrace, ... | Premium | None | None | 1. Striping: Windows Storage Spaces-2. During R3load migrations the Host Cache option for SAPDATA should be set to None +2. During R3load migrations, the Host Cache option for SAPDATA should be set to None 3. oraarch: Windows Storage Spaces is optional ### Links for Oracle on Windows |
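Referring back to the Linux LVM note in the entry above, the "-i" option mentioned there stripes the logical volume across every disk in the volume group. A minimal sketch assuming four data disks; the device names, stripe size, volume names, and file system choice are illustrative:

```bash
pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
vgcreate vg_sapdata /dev/sdc /dev/sdd /dev/sde /dev/sdf
# -i 4 stripes extents evenly across the four physical volumes, -I sets the stripe size
lvcreate -i 4 -I 256k -l 100%FREE -n lv_sapdata vg_sapdata
mkfs.xfs /dev/vg_sapdata/lv_sapdata
```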
sap | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md | In the SAP workload documentation space, you can find the following areas: ## Change Log +- July 13, 2023: Clarifying differences in zonal replication between NFS on AFS and ANF in table in [Azure Storage types for SAP workload](./planning-guide-storage.md) +- July 13, 2023: Statement that 512byte and 4096 sector size for Premium SSD v2 do not show any performance difference in [SAP HANA Azure virtual machine Ultra Disk storage configurations](./hana-vm-ultra-disk.md) +- July 13, 2023: Replaced links in ANF section of [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md#) to new ANF related documentation - July 11, 2023: Add a note about Azure NetApp Files application volume group for SAP HANA in [HA for HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [HANA scale-out with standby node with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for HANA Scale-out HA on SLES](sap-hana-high-availability-scale-out-hsr-suse.md), [HA for HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) and [HA for HANA scale-out on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) - June 29, 2023: Update important considerations and sizing information in [HA for HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) - June 26, 2023: Update important considerations and sizing information in [HA for HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md) and [HANA scale-out with standby node with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md). - June 23, 2023: Updated Azure scheduled events for SLES in [Pacemaker set up guide](./high-availability-guide-suse-pacemaker.md#configure-pacemaker-for-azure-scheduled-events).+- June 22, 2023: Statement that 512byte and 4096 sector size for Premium SSD v2 do not show any performance difference in [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md) - June 1, 2023: Included virtual machine scale set with flexible orchestration guidelines in SAP workload [planning guide](./planning-guide.md). - June 1, 2023: Updated high availability guidelines in [HA architecture and scenarios](./sap-high-availability-architecture-scenarios.md), and added additional deployment option in [configuring optimal network latency with SAP applications](./proximity-placement-scenarios.md). - June 1, 2023: Release of [virtual machine scale set with flexible orchestration support for SAP workload](./virtual-machine-scale-set-sap-deployment-guide.md). |
sap | Hana Vm Ultra Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-ultra-disk.md | Other advantages of Ultra disk can be the better read latency in comparison to p > [!NOTE] > Ultra disk might not be present in all the Azure regions. For detailed information where Ultra disk is available and which VM families are supported, check the article [What disk types are available in Azure?](../../virtual-machines/disks-types.md#ultra-disks). +> [!IMPORTANT] +> You can define the sector size of Ultra disk as 512 Bytes or 4096 Bytes. The default sector size is 4096 Bytes. Tests conducted with HCMT did not reveal any significant differences in performance and throughput between the different sector sizes. This sector size is different from the stripe sizes that you need to define when using a logical volume manager. + ## Production recommended storage solution with pure Ultra disk configuration In this configuration, you keep the **/hana/data** and **/hana/log** volumes separately. The suggested values are derived out of the KPIs that SAP has to certify VM types for SAP HANA and storage configurations as recommended in the [SAP TDI Storage Whitepaper](https://www.sap.com/documents/2017/09/e6519450-d47c-0010-82c7-eda71af511fa.html). |
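The sector size called out in the note above is chosen when the Ultra disk is created. A hedged sketch with the Azure CLI; the disk name, size, zone, and performance targets are illustrative, and `--logical-sector-size` accepts 512 or 4096:

```azurecli
az disk create \
    --resource-group myresourcegroup \
    --name hana-log-ultra \
    --sku UltraSSD_LRS \
    --size-gb 512 \
    --logical-sector-size 512 \
    --disk-iops-read-write 20000 \
    --disk-mbps-read-write 500 \
    --zone 1
```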
sap | Planning Guide Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide-storage.md | Before going into the details, we're presenting the summary and recommendations | Usage scenario | Standard HDD | Standard SSD | Premium Storage | Premium SSD v2 | Ultra disk | Azure NetApp Files | Azure Premium Files | | | | | | | | | | | OS disk | Not suitable | Restricted suitable (non-prod) | Recommended | Not possible | Not possible | Not possible | Not possible |-| Global transport Directory | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Recommended | -| /sapmnt | Not suitable | Restricted suitable (non-prod) | Recommended | Recommended | Recommended | Recommended | Recommended | +| Global transport Directory | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Highly Recommended | +| /sapmnt | Not suitable | Restricted suitable (non-prod) | Recommended | Recommended | Recommended | Recommended | Highly Recommended | | DBMS Data volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended<sup>2</sup> | Not supported | | DBMS log volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended<sup>1</sup> | Recommended | Recommended | Recommended<sup>2</sup> | Not supported | | DBMS Data volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended<sup>2</sup> | Not supported | Characteristics you can expect from the different storage types list like: | Disk snapshots possible | Yes | Yes | Yes | No | No | Yes | No | | Allocation of disks on different storage clusters when using availability sets | Through managed disks | Through managed disks | Through managed disks | Disk type not supported with VMs deployed through availability sets | Disk type not supported with VMs deployed through availability sets | No<sup>3</sup> | No | | Aligned with Availability Zones | Yes | Yes | Yes | Yes | Yes | In public preview | No |-| Zonal redundancy | Not for managed disks | Not for managed disks | Not supported for DBMS | No | No | No | Yes | +| Synchronous Zonal redundancy | Not for managed disks | Not for managed disks | Not supported for DBMS | No | No | No | Yes | +| Asynchronous Zonal redundancy | Not for managed disks | Not for managed disks | Not supported for DBMS | No | No | In preview | No | | Geo redundancy | Not for managed disks | Not for managed disks | No | No | No | Possible | No | |
search | Performance Benchmarks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/performance-benchmarks.md | You can also see that performance can vary drastically between scenarios. If you Now that you've seen the performance benchmarks, you can learn more about how to analyze Cognitive Search's performance and key factors that influence performance. -> [!div class="nextstepaction"] -> [Analyze performance](search-performance-analysis.md) -> [Tips for better performance](search-performance-tips.md) ++ [Analyze performance](search-performance-analysis.md)++ [Tips for better performance](search-performance-tips.md)++ [Case Study: Use Cognitive Search to Support Complex AI Scenarios](https://techcommunity.microsoft.com/t5/azure-ai/case-study-effectively-using-cognitive-search-to-support-complex/ba-p/2804078) |
search | Resource Partners Knowledge Mining | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-partners-knowledge-mining.md | Get expert help from Microsoft partners who build comprehensive solutions that i |  | [**BA Insight Search for Workplace**](https://www.bainsight.com/azure-search/) is a complete enterprise search solution powered by Azure Cognitive Search. It is the first of its kind solution, bringing the internet to enterprises for secure, "askable", powerful search to help organizations get a return on information. It delivers a web-like search experience, connects to 80+ enterprise systems and provides automated and intelligent meta tagging. | [Product page](https://www.bainsight.com/azure-search/) | |  | [**BlueGranite**](https://www.bluegranite.com/) offers 25 years of experience in Modern Business Intelligence, Data Platforms, and AI solutions across multiple industries. Their Knowledge Mining services enable organizations to obtain unique insights from structured and unstructured data sources. Modular AI capabilities perform searches on numerous file types to index data and associate that data with more traditional data sources. Analytics tools extract patterns and trends from the enriched data and showcase results to users at all levels. | [Product page](https://www.bluegranite.com/knowledge-mining) | |  | [**Enlighten Designs**](https://www.enlighten.co.nz) is an award-winning innovation studio that has been enabling client value and delivering digitally transformative experiences for over 22 years. We are pushing the boundaries of the Microsoft technology toolbox, harnessing Cognitive Search, application development, and advanced Azure services that have the potential to transform our world. As experts in Power BI and data visualization, we hold the titles for the most viewed, and the most downloaded Power BI visuals in the world and are Microsoft’s Data Journalism agency of record when it comes to data storytelling. | [Product page](https://www.enlighten.co.nz/Services/Data-Visualisation/Azure-Cognitive-Search) |-|  | [**Neal Analytics**](https://nealanalytics.com/) offers over 10 years of cloud, data, and AI expertise on Azure. Its experts have recognized in-depth expertise across the Azure AI and ML services. Neal can help customers customize and implement Cognitive Search across a wide variety of use cases. Neal Analytics expertise ranges from enterprise-level search, form, and process automation to domain mapping for data extraction and analytics, plagiarism detection, and more. | [Product page](https://go.nealanalytics.com/cognitive-search)| |  | [**Neudesic**](https://www.neudesic.com/) is the trusted technology partner in business innovation, delivering impactful business results to clients through digital modernization and evolution. Our consultants bring business and technology expertise together, offering a wide range of cloud and data-driven solutions, including custom application development, data and artificial intelligence, comprehensive managed services, and business software products. Founded in 2002, Neudesic is a privately held company headquartered in Irvine, California.
| [Product page](https://www.neudesic.com/services/modern-workplace/document-intelligence-platform-schedule-demo/)| |  | [**OrangeNXT**](https://orangenxt.com/) offers expertise in data consolidation, data modeling, and building skillsets that include custom logic developed for specific use-cases.</br></br>digitalNXT Search is an OrangeNXT solution that combines AI, optical character recognition (OCR), and natural language processing in Azure Cognitive Search pipeline to help you extract search results from multiple structured and unstructured data sources. Integral to digitalNXT Search is advanced custom cognitive skills for interpreting and correlating selected data.</br></br>| [Product page](https://orangenxt.com/solutions/digitalnxt/digitalnxt-search/)| |  | [**Plain Concepts**](https://www.plainconcepts.com/contact/) is a Microsoft Partner with over 15 years of cloud, data, and AI expertise on Azure, and more than 12 Microsoft MVP awards. We specialize in the creation of new data relationships among heterogeneous information sources, which combined with our experience with Artificial Intelligence, Machine Learning, and Cognitive Services, exponentially increases the productivity of both machines and human teams. We help customers to face the digital revolution with the AI-based solutions that best suits their company requirements.| [Product page](https://www.plainconcepts.com/artificial-intelligence/) | |
search | Search Create Service Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md | Paid (or billable) search occurs when you choose a billable tier (Basic or above 1. Click the plus sign (**"+ Create Resource"**) in the top-left corner. -1. Use the search bar to find "Azure Cognitive Search" or navigate to the resource through **Web** > **Azure Cognitive Search**. +1. Use the search bar to find "Azure Cognitive Search". :::image type="content" source="media/search-create-service-portal/find-search3.png" lightbox="media/search-create-service-portal/find-search3.png" alt-text="Screenshot of the Create Resource page in the portal." border="true"::: |
search | Search Performance Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-analysis.md | Review these articles related to analyzing service performance. + [Performance tips](search-performance-tips.md) + [Choose a service tier](search-sku-tier.md) + [Manage capacity](search-capacity-planning.md)++ [Case Study: Use Cognitive Search to Support Complex AI Scenarios](https://techcommunity.microsoft.com/t5/azure-ai/case-study-effectively-using-cognitive-search-to-support-complex/ba-p/2804078) |
search | Search Reliability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-reliability.md | Title: Reliability in Azure Cognitive Search description: Find out about reliability in Azure Cognitive Search. --++ Previously updated : 06/20/2023 Last updated : 07/11/2023 # Reliability in Azure Cognitive Search -Across Azure, [reliability](../reliability/overview.md) means maintaining resiliency and availability if there's a service outage or degradation. In Cognitive Search, reliability is achieved when you: +Across Azure, [reliability](../reliability/overview.md) means resiliency and availability if there's a service outage or degradation. In Cognitive Search, reliability can be achieved within a single service or through multiple search services in separate regions. -+ Deploy a single search service with multiple replicas to scale indexing and query workloads. Within a region, replicas run within availability zones for extra reliability. ++ Deploy a single search service and scale up for high availability. You can add multiple replicas to handle higher indexing and query workloads. If your search service [supports availability zones](#availability-zone-support), replicas are automatically provisioned in different physical data centers for extra resiliency. -+ Deploy multiple search services across different geographic regions. ++ Deploy multiple search services across different geographic regions. All search workloads are fully contained within a single service that runs in a single geographic region, but in a multi-service scenario, you have options for synchronizing content so that it's the same across all services. You can also set up a load balancing solution to redistribute requests or fail over if there's a service outage. -All search workloads are fully contained within a single service that runs in a single geographic region. On a service, you can configure multiple replicas that automatically run in different availability zones. This capability is how you achieve high availability. --For business continuity and recovery from disasters at a regional level, you should develop a strategy that includes a cross-regional topology, consisting of multiple search services having identical configuration and content. Your custom script or code provides the "fail over" mechanism to an alternate search service if one suddenly becomes unavailable. +For business continuity and recovery from disasters at a regional level, plan on a cross-regional topology, consisting of multiple search services having identical configuration and content. Your custom script or code provides the "fail over" mechanism to an alternate search service if one suddenly becomes unavailable. <a name="scale-for-availability"></a> No SLA is provided for the Free tier. For more information, see [SLA for Azure C ## Availability zone support -[Availability zones](../availability-zones/az-overview.md) are an Azure platform capability that divides a region's data centers into distinct physical location groups to provide high-availability, within the same region. If you use availability zones for Cognitive Search, individual replicas are the units for zone assignment. A search service runs within one region; its replicas run in different zones. +[Availability zones](../availability-zones/az-overview.md) are an Azure platform capability that divides a region's data centers into distinct physical location groups to provide high-availability, within the same region. 
In Cognitive Search, individual replicas are the units for zone assignment. A search service runs within one region; its replicas run in different physical data centers (or zones) within that region. -You can utilize availability zones with Azure Cognitive Search by adding two or more replicas to your search service. Each replica is placed in a different availability zone within the region. If you have more replicas than availability zones, the replicas are distributed across availability zones as evenly as possible. There's no specific action on your part, except to [create a search service](search-create-service-portal.md) in a region that provides availability zones, and then to configure the service to [use multiple replicas](search-capacity-planning.md#adjust-capacity). +Availability zones are used when you add two or more replicas to your search service. Each replica is placed in a different availability zone within the region. If you have more replicas than available zones in the search service region, the replicas are distributed across zones as evenly as possible. There's no specific action on your part, except to [create a search service](search-create-service-portal.md) in a region that provides availability zones, and then to configure the service to [use multiple replicas](search-capacity-planning.md#adjust-capacity). ### Prerequisites -As noted, you must have multiple replicas for high availability: two for read-only query workloads, three for read-write workloads that include indexing. ++ Service tier must be Standard or higher.++ Service region must be in a region that has available zones (listed in the following table).++ Configuration must include multiple replicas: two for read-only query workloads, three for read-write workloads that include indexing. -Your service must be deployed in a region that supports availability zones. Azure Cognitive Search currently supports availability zones for Standard tier or higher, in one of the following regions: +Availability zones for Cognitive Search are supported in the following regions: | Region | Roll out | |--|--| Your service must be deployed in a region that supports availability zones. Azur | West US 2 | January 30, 2021 or later | | West US 3 | June 02, 2021 or later | -Availability zones don't impact the [Azure Cognitive Search Service Level Agreement](https://azure.microsoft.com/support/legal/sla/search/v1_0/). You still need three or more replicas for query high availability. +> [!NOTE] +> Availability zones don't change the terms of the [Azure Cognitive Search Service Level Agreement](https://azure.microsoft.com/support/legal/sla/search/v1_0/). You still need three or more replicas for query high availability. ## Multiple services in separate geographic regions -Service redundancy is necessary if operational requirements include: +Service redundancy is necessary if your operational requirements include: -+ [Business continuity and disaster recovery (BCDR)](../availability-zones/cross-region-replication-azure.md) (Cognitive Search doesn't provide instant failover in the event of an outage). ++ [Business continuity and disaster recovery (BCDR) requirements](../availability-zones/cross-region-replication-azure.md) (Cognitive Search doesn't provide instant failover if there's an outage). -+ Global availability. If query and indexing requests come from all over the world, users who are closest to the host data center will have faster performance. 
Creating more services in regions with close proximity to these users can equalize performance for all users. ++ Fast performance for a globally distributed application. If query and indexing requests come from all over the world, users who are closest to the host data center experience faster performance. Creating more services in regions with close proximity to these users can equalize performance for all users. -If you need two or more search services, creating them in different regions can meet application requirements for continuity and recovery, as well as faster response times for a global user base. +If you need two or more search services, creating them in different regions can meet application requirements for continuity and recovery, and faster response times for a global user base. Azure Cognitive Search doesn't provide an automated method of replicating search indexes across geographic regions, but there are some techniques that can make this process simple to implement and manage. These techniques are outlined in the next few sections. You can implement this architecture by creating multiple services and designing ### Synchronize data across multiple services -There are two options for keeping two or more distributed search services in sync: +There are two options for keeping two or more distinct search services in sync: + Pull content updates into a search index by using an [indexer](search-indexer-overview.md). + Push content into an index using the [Add or Update Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents) API or an Azure SDK equivalent API. +To configure either option, we recommend using the [sample Bicep script in the azure-search-multiple-region](https://github.com/Azure-Samples/azure-search-multiple-regions) repository, modified to your regions and indexing strategies. + #### Option 1: Use indexers for updating content on multiple services -If you're already using indexer on one service, you can configure a second indexer on a second service to use the same data source object, pulling data from the same location. Each service in each region has its own indexer and a target index (your search index isn't shared, which means data is duplicated), but each indexer references the same data source. +If you're already using indexer on one service, you can configure a second indexer on a second service to use the same data source object, pulling data from the same location. Each service in each region has its own indexer and a target index (your search index isn't shared, which means each index has its own copy of the data), but each indexer references the same data source. Here's a high-level visual of what that architecture would look like. Here's a high-level visual of what that architecture would look like. If you're using the Azure Cognitive Search REST API to [push content to your search index](tutorial-optimize-indexing-push-api.md), you can keep your various search services in sync by pushing changes to all search services whenever an update is required. In your code, make sure to handle cases where an update to one search service fails but succeeds for other search services. 
-### Use Azure Traffic Manager and Azure Application Gateway to coordinate requests +### Fail over or redirect query requests ++If you need redundancy at the request level, Azure provides several [load balancing options](/azure/architecture/guide/technology-choices/load-balancing-overview): +++ [Azure Traffic Manager](/azure/traffic-manager/traffic-manager-overview), used to route requests to multiple geo-located websites that are then backed by multiple search services. ++ [Application Gateway](/azure/application-gateway/overview), used to load balance between servers in a region at the application layer.++ [Azure Front Door](/azure/frontdoor/front-door-overview), used to optimize global routing of web traffic and provide global failover.++Some points to keep in mind when evaluating load balancing options: +++ Search is a backend service that accepts query and indexing requests from a client. -[Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) allows you to route requests to multiple geo-located websites that are then backed by multiple search services. Azure Traffic Manager is primarily used for routing network traffic across different endpoints based on specific routing methods (such as priority, performance, or geographic location). It acts at the DNS level to direct incoming requests to the appropriate endpoint. It doesn't have inherent knowledge of the health or availability of specific services like Azure Cognitive Search. ++ Requests from the client to a search service must be authenticated. For access to search operations, the caller must have role-based permissions or provide an API key on the request. -To add health checks and failover capabilities for Azure Cognitive Search, you would typically use Azure Application Gateway or a load balancer in combination with Azure Traffic Manager. [Azure Application Gateway](/azure/application-gateway/overview) supports health probes, which can be configured to check the availability of specific backend services and perform load balancing accordingly. ++ Search endpoints are reached through a public internet connection by default. If you set up a private endpoint for client connections that originate from within a virtual network, use [Application Gateway](/azure/application-gateway/overview). -The architecture for this solution would consist of search-enabled client apps that connect to Application Gateway through Azure Traffic Manager, where each gateway endpoint connects to a backend search service in a specific region. ++ Cognitive Search accepts requests addressed to the `<your-search-service-name>.search.windows.net` endpoint. If you reach the same endpoint using a different DNS name in the host header, such as a CNAME, the request is rejected. -## Data residency +Cognitive Search provides a [multi-region deployment sample](https://github.com/Azure-Samples/azure-search-multiple-regions) that uses Azure Traffic Manager for request redirection if the primary endpoint fails. This solution is useful when you route to a search-enabled client that only calls a search service in the same region. -When you deploy multiple search services in various geographic regions, your content is stored in your chosen region. +Azure Traffic Manager is primarily used for routing network traffic across different endpoints based on specific routing methods (such as priority, performance, or geographic location). It acts at the DNS level to direct incoming requests to the appropriate endpoint. 
If an endpoint that Traffic Manager is servicing begins refusing requests, traffic is routed to another endpoint. -Azure Cognitive Search won't store data outside of your specified region without your authorization. Specifically, the following features write to an Azure Storage resource: [enrichment cache](cognitive-search-incremental-indexing-conceptual.md), [debug session](cognitive-search-debug-session.md), [knowledge store](knowledge-store-concept-intro.md). The storage account is one that you provide, and it could be in any region. +Traffic Manager doesn't provide an endpoint for a direct connection to Cognitive Search, which means you can't put a search service directly behind Traffic Manager. Instead, the assumption is that requests flow to Traffic Manager, then to a search-enabled web client, and finally to a search service on the backend. The client and service are located in the same region. If one search service goes down, the search client starts failing, and Traffic Manager redirects to the remaining client. -If both the storage account and the search service are in the same region, network traffic between search and storage uses a private IP address and occurs over the Microsoft backbone network. Because private IP addresses are used, you can't configure IP firewalls or a private endpoint for network security. Instead, use the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) as an alternative when both services are in the same region. +![Search apps connecting through Azure Traffic Manager][4] -## Disaster recovery and service outages +## About data residency in a multi-region deployment ++When you deploy multiple search services in various geographic regions, your content is stored in the region you chose for each search service. ++Azure Cognitive Search won't store data outside of your specified region without your authorization. Authorization is implicit when you use features that write to an Azure Storage resource: [enrichment cache](cognitive-search-incremental-indexing-conceptual.md), [debug session](cognitive-search-debug-session.md), [knowledge store](knowledge-store-concept-intro.md). In all cases, the storage account is one that you provide, in the region of your choice. ++> [!NOTE] +> If both the storage account and the search service are in the same region, network traffic between search and storage uses a private IP address and occurs over the Microsoft backbone network. Because private IP addresses are used, you can't configure IP firewalls or a private endpoint for network security. Instead, use the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) as an alternative when both services are in the same region. ++## About service outages and catastrophic events As stated in the [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/search/v1_0/), Microsoft guarantees a high level of availability for index query requests when an Azure Cognitive Search service instance is configured with two or more replicas, and index update requests when an Azure Cognitive Search service instance is configured with three or more replicas. However, there's no built-in mechanism for disaster recovery. If continuous service is required in the event of a catastrophic failure outside of Microsoft’s control, we recommend provisioning a second service in a different region and implementing a geo-replication strategy to ensure indexes are fully redundant across all services. 
-Customers who use [indexers](search-indexer-overview.md) to populate and refresh indexes can handle disaster recovery through geo-specific indexers that retrieve data from the same data source. Two services in different regions, each running an indexer, could index the same data source to achieve geo-redundancy. If you're indexing from data sources that are also geo-redundant, be aware that Azure Cognitive Search indexers can only perform incremental indexing (merging updates from new, modified, or deleted documents) from primary replicas. In a failover event, be sure to redirect the indexer to the new primary replica. +Customers who use [indexers](search-indexer-overview.md) to populate and refresh indexes can handle disaster recovery through geo-specific indexers that retrieve data from the same data source. Two services in different regions, each running an indexer, could index the same data source to achieve geo-redundancy. If you're indexing from data sources that are also geo-redundant, remember that Azure Cognitive Search indexers can only perform incremental indexing (merging updates from new, modified, or deleted documents) from primary replicas. In a failover event, be sure to redirect the indexer to the new primary replica. If you don't use indexers, you would use your application code to push objects and data to different search services in parallel. For more information, see [Keep data synchronized across multiple services](#data-sync). ## Back up and restore alternatives -Because Azure Cognitive Search isn't a primary data storage solution, Microsoft doesn't provide a formal mechanism for self-service backup and restore. However, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-samples) to back up your index definition and snapshot to a series of JSON files, and then use these files to restore the index, if needed. This tool can also move indexes between service tiers. +A business continuity strategy for the data layer usually includes a restore-from-backup step. Because Azure Cognitive Search isn't a primary data storage solution, Microsoft doesn't provide a formal mechanism for self-service backup and restore. However, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-samples) to back up your index definition and snapshot to a series of JSON files, and then use these files to restore the index, if needed. This tool can also move indexes between service tiers. Otherwise, your application code used for creating and populating an index is the de facto restore option if you delete an index by mistake. To rebuild an index, you would delete it (assuming it exists), recreate the index in the service, and reload by retrieving data from your primary data store. ## Next steps + Review [Service limits](search-limits-quotas-capacity.md) to learn more about the pricing tiers and services limits for each one.- + Review [Plan for capacity](search-capacity-planning.md) to learn more about partition and replica combinations.--+ Review [Case Study: Use Cognitive Search to Support Complex AI Scenarios](https://techcommunity.microsoft.com/t5/azure-ai/case-study-effectively-using-cognitive-search-to-support-complex/ba-p/2804078) for real-world tips. 
++ Review [Case Study: Use Cognitive Search to Support Complex AI Scenarios](https://techcommunity.microsoft.com/t5/azure-ai/case-study-effectively-using-cognitive-search-to-support-complex/ba-p/2804078) for more configuration guidance. <!--Image references--> [1]: ./media/search-reliability/geo-redundancy.png [2]: ./media/search-reliability/scale-indexers.png-[3]: ./media/search-reliability/geo-search-traffic-mgr.png +[3]: ./media/search-reliability/geo-search-traffic-mgr.png +[4]: ./media/search-reliability/azure-function-search-traffic-mgr.png |
search | Vector Search How To Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md | api-key: {{admin-api-key}} } ``` -## Query syntax for cross-field vector query +## Query syntax for vector query over multiple fields -You can set "vector.fields" property to multiple vector fields. For example, the Postman collection has vector fields named titleVector and contentVector. Your query can include both titleVector and contentVector. +You can set "vector.fields" property to multiple vector fields. For example, the Postman collection has vector fields named titleVector and contentVector. Your vector query executes over both the titleVector and contentVector fields, which must have the same embedding space since they share the same query vector. ```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}} api-key: {{admin-api-key}} } ``` -## Query syntax for multi-modal vector queries +## Query syntax for multiple vector queries -You can issue a search request with multiple query vectors using the `vectors` query parameter. The queries execute concurrently over the same embedding space in the search index, looking for similarities in each of the vector fields. The result set is a union of the documents that matched all vector queries. A common example of this query request is when using models such as [CLIP](https://openai.com/research/clip) for a multi-modal vector search. +You can issue a search request containing multiple query vectors using the `vectors` query parameter. The queries execute concurrently in the search index, each one looking for similarities in the target vector fields. The result set is a union of the documents that matched both vector queries. A common example of this query request is when using models such as [CLIP](https://openai.com/research/clip) for a multi-modal vector search where the same model can vectorize image and non-image content. -You must use REST for this scenario. Currently, there isn't support for multiple vector fields in the alpha SDKs. +You must use REST for this scenario. Currently, there isn't support for multiple vector queries in the alpha SDKs. + `vectors.value` property contains the vector query generated from the embedding model used to create image and text vectors in the search index. -+ `vectors.fields` contains the image vectors and text vectors in the search index. ++ `vectors.fields` contains the image vectors and text vectors in the search index. This is the searchable data. + `vectors.k` is the number of nearest neighbor matches to include in results. ```http { "vectors": [ {- "value": [1.0, 2.0], + "value": [ + -0.001111111, + 0.018708462, + -0.013770515, + . . . + ], "fields": "myimagevector", "k": 5 }, {- "value": [1.0, 2.0, 3.0], + "value": [ + -0.002222222, + 0.018708462, + -0.013770515, + . . . + ], "fields": "mytextvector", "k": 5 } |
search | Vector Search Index Size | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-index-size.md | To estimate the total size of your vector index, use the following calculation: **`(raw_size) * (1 + algorithm_overhead (in percent)) * (1 + deleted_docs_ratio (in percent))`** -For example, to calculate the **raw_size**, let's assume you're using a popular Azure OpenAI model, `text-embedding-ada-002` with 1,536 dimensions. This means one document would consume 1,536 `Edm.Single` (floats), or 6,144 bytes since each `Edm.Single` is 4 bytes. 1,000 documents with a single, 1,536-dimensional vector field would consume in total 100 docs x 1536 floats/doc = 1,536,000 floats, or 6,144,000 bytes. +For example, to calculate the **raw_size**, let's assume you're using a popular Azure OpenAI model, `text-embedding-ada-002` with 1,536 dimensions. This means one document would consume 1,536 `Edm.Single` (floats), or 6,144 bytes since each `Edm.Single` is 4 bytes. 1,000 documents with a single, 1,536-dimensional vector field would consume in total 1000 docs x 1536 floats/doc = 1,536,000 floats, or 6,144,000 bytes. If you have multiple vector fields, you need to perform this calculation for each vector field within your index and add them all together. For example, 1,000 documents with **two** 1,536-dimensional vector fields, consume 1000 docs x **2 fields** x 1536 floats/doc x 4 bytes/float = 12,288,000 bytes. |
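As a quick cross-check of the estimation formula above, the following Python sketch reproduces the 12,288,000-byte raw size and then applies placeholder values for the algorithm overhead and deleted-document ratio. The 10% figures are assumptions for illustration, not values from the article; substitute measured values from your own service.

```python
# Vector index size estimate:
# (raw_size) * (1 + algorithm_overhead) * (1 + deleted_docs_ratio)
doc_count = 1_000
vector_fields = 2        # two 1,536-dimensional vector fields per document
dimensions = 1_536
bytes_per_float = 4      # each Edm.Single is 4 bytes

raw_size = doc_count * vector_fields * dimensions * bytes_per_float
print(raw_size)          # 12288000 bytes, matching the example above

algorithm_overhead = 0.10    # placeholder assumption
deleted_docs_ratio = 0.10    # placeholder assumption

estimated_total = raw_size * (1 + algorithm_overhead) * (1 + deleted_docs_ratio)
print(round(estimated_total))   # 14868480 bytes under these assumptions
```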
search | Vector Search Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md | Scenarios for vector search include: + **Hybrid search**. For text data, combine the best of vector retrieval and keyword retrieval to obtain the best results. Use with [semantic search (preview)](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing. -+ **Vector database**. Use Cognitive Search as a vector store to server as long-term memory or an external knowledge base for Large Language Models (LLMs), or other applications. ++ **Vector database**. Use Cognitive Search as a vector store to serve as long-term memory or an external knowledge base for Large Language Models (LLMs), or other applications. ## Azure integration and related services In order to create effective embeddings for vector search, it's important to tak ### What is the embedding space? -*Embedding space* is the corpus for vector search. Machine learning models create the embedding space by mapping individual words, phrases, or documents (for natural language processing), images, or other forms of data into a representation comprised of a vector of real numbers representing a coordinate in a high-dimensional space. In this embedding space, similar items are located close together, and dissimilar items are located farther apart. +*Embedding space* is the corpus for vector queries. Within a search index, it's all of the vector fields populated with embeddings from the same embedding model. Machine learning models create the embedding space by mapping individual words, phrases, or documents (for natural language processing), images, or other forms of data into a representation comprised of a vector of real numbers representing a coordinate in a high-dimensional space. In this embedding space, similar items are located close together, and dissimilar items are located farther apart. For example, documents that talk about different species of dogs would be clustered close together in the embedding space. Documents about cats would be close together, but farther from the dogs cluster while still being in the neighborhood for animals. Dissimilar concepts such as cloud computing would be much farther away. In practice, these embedding spaces are abstract and don't have well-defined, human-interpretable meanings, but the core idea stays the same. |
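As a toy illustration of "close together" versus "farther apart" in an embedding space, the following sketch compares made-up three-dimensional vectors with cosine similarity. Real embeddings have hundreds or thousands of dimensions and come from a trained model, but the distance intuition is the same.

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|); values near 1.0 mean the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up toy vectors standing in for real embeddings.
beagle = [0.9, 0.1, 0.0]
terrier = [0.8, 0.2, 0.1]
cloud_computing = [0.0, 0.2, 0.9]

print(cosine_similarity(beagle, terrier))          # high: both are dog documents
print(cosine_similarity(beagle, cloud_computing))  # low: unrelated concept
```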
service-bus-messaging | Service Bus Messaging Exceptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-exceptions.md | - Title: Azure Service Bus - messaging exceptions | Microsoft Docs -description: This article provides a list of Azure Service Bus messaging exceptions and suggested actions to taken when the exception occurs. - Previously updated : 02/17/2023---# Service Bus messaging exceptions --This article lists the .NET exceptions generated by .NET Framework APIs. --## Exception categories --The messaging APIs generate exceptions that can fall into the following categories, along with the associated action you can take to try to fix them. The meaning and causes of an exception can vary depending on the type of messaging entity: --1. User coding error ([System.ArgumentException](/dotnet/api/system.argumentexception), [System.InvalidOperationException](/dotnet/api/system.invalidoperationexception), [System.OperationCanceledException](/dotnet/api/system.operationcanceledexception), [System.Runtime.Serialization.SerializationException](/dotnet/api/system.runtime.serialization.serializationexception)). General action: try to fix the code before proceeding. -2. Setup/configuration error ([Microsoft.ServiceBus.Messaging.MessagingEntityNotFoundException](/dotnet/api/microsoft.azure.servicebus.messagingentitynotfoundexception), [System.UnauthorizedAccessException](/dotnet/api/system.unauthorizedaccessexception). General action: review your configuration and change if necessary. -3. Transient exceptions ([Microsoft.ServiceBus.Messaging.MessagingException](/dotnet/api/microsoft.servicebus.messaging.messagingexception), [Microsoft.ServiceBus.Messaging.ServerBusyException](/dotnet/api/microsoft.azure.servicebus.serverbusyexception), [Microsoft.ServiceBus.Messaging.MessagingCommunicationException](/dotnet/api/microsoft.servicebus.messaging.messagingcommunicationexception)). General action: retry the operation or notify users. The `RetryPolicy` class in the client SDK can be configured to handle retries automatically. For more information, see [Retry guidance](/azure/architecture/best-practices/retry-service-specific#service-bus). -4. Other exceptions ([System.Transactions.TransactionException](/dotnet/api/system.transactions.transactionexception), [System.TimeoutException](/dotnet/api/system.timeoutexception), [Microsoft.ServiceBus.Messaging.MessageLockLostException](/dotnet/api/microsoft.azure.servicebus.messagelocklostexception), [Microsoft.ServiceBus.Messaging.SessionLockLostException](/dotnet/api/microsoft.azure.servicebus.sessionlocklostexception)). General action: specific to the exception type; refer to the table in the following section: --> [!IMPORTANT] -> - Azure Service Bus doesn't retry an operation in case of an exception when the operation is in a transaction scope. -> - For retry guidance specific to Azure Service Bus, see [Retry guidance for Service Bus](/azure/architecture/best-practices/retry-service-specific#service-bus). ---## Exception types --The following table lists messaging exception types, and their causes, and notes suggested action you can take. 
--| **Exception Type** | **Description/Cause/Examples** | **Suggested Action** | **Note on automatic/immediate retry** | -| | | | | -| [TimeoutException](/dotnet/api/system.timeoutexception) |The server didn't respond to the requested operation within the specified time, which is controlled by [OperationTimeout](/dotnet/api/microsoft.servicebus.messaging.messagingfactorysettings). The server may have completed the requested operation. It can happen because of network or other infrastructure delays. |Check the system state for consistency and retry if necessary. See [Timeout exceptions](#timeoutexception). |Retry might help in some cases; add retry logic to code. | -| [InvalidOperationException](/dotnet/api/system.invalidoperationexception) |The requested user operation isn't allowed within the server or service. See the exception message for details. For example, [Complete()](/dotnet/api/microsoft.azure.servicebus.queueclient.completeasync) generates this exception if the message was received in [ReceiveAndDelete](/dotnet/api/microsoft.azure.servicebus.receivemode) mode. |Check the code and the documentation. Make sure the requested operation is valid. |Retry doesn't help. | -| [OperationCanceledException](/dotnet/api/system.operationcanceledexception) |An attempt is made to invoke an operation on an object that has already been closed, aborted, or disposed. In rare cases, the ambient transaction is already disposed. |Check the code and make sure it doesn't invoke operations on a disposed object. |Retry doesn't help. | -| [UnauthorizedAccessException](/dotnet/api/system.unauthorizedaccessexception) |The [TokenProvider](/dotnet/api/microsoft.servicebus.tokenprovider) object couldn't acquire a token, the token is invalid, or the token doesn't contain the claims required to do the operation. |Make sure the token provider is created with the correct values. Check the configuration of the Access Control Service. |Retry might help in some cases; add retry logic to code. | -| [ArgumentException](/dotnet/api/system.argumentexception)<br /> [ArgumentNullException](/dotnet/api/system.argumentnullexception)<br />[ArgumentOutOfRangeException](/dotnet/api/system.argumentoutofrangeexception) |One or more arguments supplied to the method are invalid.<br /> The URI supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) contains path segment(s).<br /> The URI scheme supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) is invalid. <br />The property value is larger than 32 KB. |Check the calling code and make sure the arguments are correct. |Retry doesn't help. | -| [MessagingEntityNotFoundException](/dotnet/api/microsoft.azure.servicebus.messagingentitynotfoundexception) |Entity associated with the operation doesn't exist or it has been deleted. |Make sure the entity exists. |Retry doesn't help. | -| [MessageNotFoundException](/dotnet/api/microsoft.servicebus.messaging.messagenotfoundexception) |Attempt to receive a message with a particular sequence number. This message isn't found. |Make sure the message hasn't been received already. Check the deadletter queue to see if the message has been deadlettered. |Retry doesn't help. | -| [MessagingCommunicationException](/dotnet/api/microsoft.servicebus.messaging.messagingcommunicationexception) |Client isn't able to establish a connection to Service Bus. 
|Make sure the supplied host name is correct and the host is reachable. <p>If your code runs in an environment with a firewall/proxy, ensure that the traffic to the Service Bus domain/IP address and ports isn't blocked.</p>|Retry might help if there are intermittent connectivity issues. | -| [ServerBusyException](/dotnet/api/microsoft.azure.servicebus.serverbusyexception) |Service isn't able to process the request at this time. |Client can wait for a period of time, then retry the operation. |Client may retry after certain interval. If a retry results in a different exception, check retry behavior of that exception. | -| [MessagingException](/dotnet/api/microsoft.servicebus.messaging.messagingexception) |Generic messaging exception that may be thrown in the following cases:<p>An attempt is made to create a [QueueClient](/dotnet/api/microsoft.azure.servicebus.queueclient) using a name or path that belongs to a different entity type (for example, a topic).</p><p>An attempt is made to send a message larger than 256 KB. </p>The server or service encountered an error during processing of the request. See the exception message for details. It's usually a transient exception.</p><p>The request was terminated because the entity is being throttled. Error code: 50001, 50002, 50008. </p> | Check the code and ensure that only serializable objects are used for the message body (or use a custom serializer). <p>Check the documentation for the supported value types of the properties and only use supported types.</p><p> Check the [IsTransient](/dotnet/api/microsoft.servicebus.messaging.messagingexception) property. If it's **true**, you can retry the operation. </p>| If the exception is due to throttling, wait for a few seconds and retry the operation again. Retry behavior is undefined and might not help in other scenarios.| -| [MessagingEntityAlreadyExistsException](/dotnet/api/microsoft.servicebus.messaging.messagingentityalreadyexistsexception) |Attempt to create an entity with a name that is already used by another entity in that service namespace. |Delete the existing entity or choose a different name for the entity to be created. |Retry doesn't help. | -| [QuotaExceededException](/dotnet/api/microsoft.azure.servicebus.quotaexceededexception) |The messaging entity has reached its maximum allowable size, or the maximum number of connections to a namespace has been exceeded. |Create space in the entity by receiving messages from the entity or its subqueues. See [QuotaExceededException](#quotaexceededexception). |Retry might help if messages have been removed in the meantime. | -| [RuleActionException](/dotnet/api/microsoft.servicebus.messaging.ruleactionexception) |Service Bus returns this exception if you attempt to create an invalid rule action. Service Bus attaches this exception to a deadlettered message if an error occurs while processing the rule action for that message. |Check the rule action for correctness. |Retry doesn't help. | -| [FilterException](/dotnet/api/microsoft.servicebus.messaging.filterexception) |Service Bus returns this exception if you attempt to create an invalid filter. Service Bus attaches this exception to a deadlettered message if an error occurred while processing the filter for that message. |Check the filter for correctness. |Retry doesn't help. | -| [SessionCannotBeLockedException](/dotnet/api/microsoft.servicebus.messaging.sessioncannotbelockedexception) |Attempt to accept a session with a specific session ID, but the session is currently locked by another client. 
|Make sure the session is unlocked by other clients. |Retry might help if the session has been released in the interim. | -| [TransactionSizeExceededException](/dotnet/api/microsoft.servicebus.messaging.transactionsizeexceededexception) |Too many operations are part of the transaction. |Reduce the number of operations that are part of this transaction. |Retry doesn't help. | -| [MessagingEntityDisabledException](/dotnet/api/microsoft.azure.servicebus.messagingentitydisabledexception) |Request for a runtime operation on a disabled entity. |Activate the entity. |Retry might help if the entity has been activated in the interim. | -| [NoMatchingSubscriptionException](/dotnet/api/microsoft.servicebus.messaging.nomatchingsubscriptionexception) |Service Bus returns this exception if you send a message to a topic that has pre-filtering enabled and none of the filters match. |Make sure at least one filter matches. |Retry doesn't help. | -| [MessageSizeExceededException](/dotnet/api/microsoft.servicebus.messaging.messagesizeexceededexception) |A message payload exceeds the 256-KB limit. The 256-KB limit is the total message size, which can include system properties and any .NET overhead. |Reduce the size of the message payload, then retry the operation. |Retry doesn't help. | -| [TransactionException](/dotnet/api/system.transactions.transactionexception) |The ambient transaction (`Transaction.Current`) is invalid. It may have been completed or aborted. Inner exception may provide additional information. | |Retry doesn't help. | -| [TransactionInDoubtException](/dotnet/api/system.transactions.transactionindoubtexception) |An operation is attempted on a transaction that is in doubt, or an attempt is made to commit the transaction and the transaction becomes in doubt. |Your application must handle this exception (as a special case), as the transaction may have already been committed. |- | --## QuotaExceededException --[QuotaExceededException](/dotnet/api/microsoft.azure.servicebus.quotaexceededexception) indicates that a quota for a specific entity has been exceeded. --> [!NOTE] -> For Service Bus quotas, see [Quotas](service-bus-quotas.md). --### Queues and topics --For queues and topics, it's often the size of the queue. The error message property contains further details, as in the following example: --```output -Microsoft.ServiceBus.Messaging.QuotaExceededException -Message: The maximum entity size has been reached or exceeded for Topic: 'xxx-xxx-xxx'. - Size of entity in bytes:1073742326, Max entity size in bytes: -1073741824..TrackingId:xxxxxxxxxxxxxxxxxxxxxxxxxx, TimeStamp:3/15/2013 7:50:18 AM -``` --The message states that the topic exceeded its size limit, in this case 1 GB (the default size limit). --### Namespaces --For namespaces, [QuotaExceededException](/dotnet/api/microsoft.azure.servicebus.quotaexceededexception) can indicate that an application has exceeded the maximum number of connections to a namespace. For example: --```output -Microsoft.ServiceBus.Messaging.QuotaExceededException: ConnectionsQuotaExceeded for namespace xxx. -<tracking-id-guid>_G12 > -System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: -ConnectionsQuotaExceeded for namespace xxx. -``` --### Common causes --There are two common causes for this error: the dead-letter queue, and non-functioning message receivers. --1. **[Dead-letter queue](service-bus-dead-letter-queues.md)** - A reader is failing to complete messages and the messages are returned to the queue/topic when the lock expires. 
It can happen if the reader encounters an exception that prevents it from calling [BrokeredMessage.Complete](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.complete). After a message has been read 10 times, it moves to the dead-letter queue by default. This behavior is controlled by the [QueueDescription.MaxDeliveryCount](/dotnet/api/microsoft.servicebus.messaging.queuedescription.maxdeliverycount) property and has a default value of 10. As messages pile up in the dead letter queue, they take up space. -- To resolve the issue, read and complete the messages from the dead-letter queue, as you would from any other queue. You can use the [FormatDeadLetterPath](/dotnet/api/microsoft.azure.servicebus.entitynamehelper.formatdeadletterpath) method to help format the dead-letter queue path. -2. **Receiver stopped**. A receiver has stopped receiving messages from a queue or subscription. The way to identify this is to look at the [QueueDescription.MessageCountDetails](/dotnet/api/microsoft.servicebus.messaging.messagecountdetails) property, which shows the full breakdown of the messages. If the [ActiveMessageCount](/dotnet/api/microsoft.servicebus.messaging.messagecountdetails.activemessagecount) property is high or growing, then the messages aren't being read as fast as they are being written. --## TimeoutException --A [TimeoutException](/dotnet/api/system.timeoutexception) indicates that a user-initiated operation is taking longer than the operation timeout. --You should check the value of the [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) property, as hitting this limit can also cause a [TimeoutException](/dotnet/api/system.timeoutexception). --Timeouts are expected to happen during or in-between maintenance operations such as Service Bus service updates (or) OS updates on resources that run the service. During OS updates, entities are moved around and nodes are updated or rebooted, which can cause timeouts. For service level agreement (SLA) details for the Azure Service Bus service, see [SLA for Service Bus](https://azure.microsoft.com/support/legal/sla/service-bus/). --### Queues and topics --For queues and topics, the timeout is specified either in the [MessagingFactorySettings.OperationTimeout](/dotnet/api/microsoft.servicebus.messaging.messagingfactorysettings) property, as part of the connection string, or through [ServiceBusConnectionStringBuilder](/dotnet/api/microsoft.azure.servicebus.servicebusconnectionstringbuilder). The error message itself might vary, but it always contains the timeout value specified for the current operation. --## MessageLockLostException --### Cause --The **MessageLockLostException** is thrown when a message is received using the [PeekLock](message-transfers-locks-settlement.md#peeklock) Receive mode and the lock held by the client expires on the service side. --The lock on a message may expire due to various reasons: -- * The lock timer has expired before it was renewed by the client application. - * The client application acquired the lock, saved it to a persistent store and then restarted. Once it restarted, the client application looked at the inflight messages and tried to complete these. --You may also receive this exception in the following scenarios: --* Service Update -* OS update -* Changing properties on the entity (queue, topic, subscription) while holding the lock. 
--### Resolution --In the event of a **MessageLockLostException**, the client application can no longer process the message. The client application may optionally consider logging the exception for analysis, but the client *must* dispose off the message. --Since the lock on the message has expired, it would go back on the Queue (or Subscription) and can be processed by the next client application which calls receive. --If the **MaxDeliveryCount** has exceeded then the message may be moved to the **DeadLetterQueue**. --## SessionLockLostException --### Cause --The **SessionLockLostException** is thrown when a session is accepted and the lock held by the client expires on the service side. --The lock on a session may expire due to various reasons: -- * The lock timer has expired before it was renewed by the client application. - * The client application acquired the lock, saved it to a persistent store and then restarted. Once it restarted, the client application looked at the inflight sessions and tried to process the messages in those sessions. --You may also receive this exception in the following scenarios: --* Service Update -* OS update -* Changing properties on the entity (queue, topic, subscription) while holding the lock. --### Resolution --In the event of a **SessionLockLostException**, the client application can no longer process the messages on the session. The client application may consider logging the exception for analysis, but the client *must* dispose off the message. --Since the lock on the session has expired, it would go back on the Queue (or Subscription) and can be locked by the next client application which accepts the session. Since the session lock is held by a single client application at any given time, the in-order processing is guaranteed. --## SocketException --### Cause --A **SocketException** is thrown in the following cases: -- * When a connection attempt fails because the host did not properly respond after a specified time (TCP error code 10060). - * An established connection failed because connected host has failed to respond. - * There was an error processing the message or the timeout is exceeded by the remote host. - * Underlying network resource issue. --### Resolution --The **SocketException** errors indicate that the VM hosting the applications is unable to convert the name `<mynamespace>.servicebus.windows.net` to the corresponding IP address. --Check to see if below command succeeds in mapping to an IP address. --```powershell -PS C:\> nslookup <mynamespace>.servicebus.windows.net -``` --which should provide an output as below --```bash -Name: <cloudappinstance>.cloudapp.net -Address: XX.XX.XXX.240 -Aliases: <mynamespace>.servicebus.windows.net -``` --If the above name **does not resolve** to an IP and the namespace alias, check which the network administrator to investigate further. Name resolution is done through a DNS server typically a resource in the customer network. If the DNS resolution is done by Azure DNS please contact Azure support. --If name resolution **works as expected**, check if connections to Azure Service Bus is allowed [here](service-bus-troubleshooting-guide.md#connectivity-certificate-or-timeout-issues) --## MessagingException --### Cause --**MessagingException** is a generic exception that may be thrown for various reasons. Some of the reasons are listed below. -- * An attempt is made to create a **QueueClient** on a **Topic** or a **Subscription**. 
- * The size of the message sent is greater than the limit for the given tier. Read more about the Service Bus [quotas and limits](service-bus-quotas.md). - * Specific data plane request (send, receive, complete, abandon) was terminated due to throttling. - * Transient issues caused due to service upgrades and restarts. --> [!NOTE] -> The above list of exceptions is not exhaustive. --### Resolution --The resolution steps depend on what caused the **MessagingException** to be thrown. -- * For **transient issues** (where ***isTransient*** is set to ***true***) or for **throttling issues**, retrying the operation may resolve it. The default retry policy on the SDK can be leveraged for this. - * For other issues, the details in the exception indicate the issue and resolution steps can be deduced from the same. --## StorageQuotaExceededException --### Cause --The **StorageQuotaExceededException** is generated when the total size of entities in a premium namespace exceeds the limit of 1 TB per [messaging unit](service-bus-premium-messaging.md). --### Resolution --- Increase the number of messaging units assigned to the premium namespace-- If you are already using maximum allowed messaging units for a namespace, create a separate namespace. --## Next steps --For the complete Service Bus .NET API reference, see the [Azure .NET API reference](/dotnet/api/overview/azure/service-bus). -For troubleshooting tips, see the [Troubleshooting guide](service-bus-troubleshooting-guide.md) |
spring-apps | How To Enterprise Configure Apm Integration And Ca Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-configure-apm-integration-and-ca-certificates.md | Azure Spring Apps supports CA certificates for all language family buildpacks, b | Buildpack | ApplicationInsights | New Relic | AppDynamics | Dynatrace | ElasticAPM | |-||--|-|--|| | Java | ✔ | ✔ | ✔ | ✔ | ✔ |-| Dotnet | | | | ✔ | | +| .NET | | ✔ | | ✔ | | | Go | | | | ✔ | | | Python | | | | | | | NodeJS | | ✔ | ✔ | ✔ | ✔ | This section lists the supported languages and required environment variables fo Supported languages: - Java+ - .NET - Node.js Environment variables required for buildpack binding: |
spring-apps | How To Enterprise Deploy Polyglot Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-polyglot-apps.md | The following features aren't supported in Azure Spring Apps due to the limitati > [!NOTE] > In the following different language build and deploy configuration sections, `--build-env` means the environment is used in the build phase. `--env` means the environment is used in the runtime phase.+> +> We recommend that you specify the language version in case the default version changes. For example, use `--build-env BP_JVM_VERSION=11.*` to specify Java 11 as the JDK version. For other languages, you can get the environment variable name in the following descriptions for each language. ### Deploy Java applications The following table lists the features supported in Azure Spring Apps: ||--|--|--| | Configure the .NET Core runtime version. | Supports *Net6.0* and *Net7.0*. <br> You can configure through a *runtimeconfig.json* or MSBuild Project file. <br> The default runtime is *6.0.\**. | N/A | N/A | | Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |-| Integrate with Dynatrace APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | +| Integrate with the Dynatrace and New Relic APM agents. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` | ### Deploy Python applications The following table lists the features supported in Azure Spring Apps: | Feature description | Comment | Environment variable | Usage | ||--|--|-|-| Specify a Go version. | Supports *1.18.\**, *1.19.\**. The default value is *1.18.\**.<br> The Go version is automatically detected from the appΓÇÖs *go.mod* file. You can override this version by setting the `BP_GO_VERSION` environment variable at build time. | `BP_GO_VERSION` | `--build-env BP_GO_VERSION=1.19.*` | +| Specify a Go version. | Supports *1.19.\**, *1.20.\**. The default value is *1.19.\**.<br> The Go version is automatically detected from the appΓÇÖs *go.mod* file. You can override this version by setting the `BP_GO_VERSION` environment variable at build time. | `BP_GO_VERSION` | `--build-env BP_GO_VERSION=1.20.*` | | Configure multiple targets. | Specifies multiple targets for a Go build. | `BP_GO_TARGETS` | `--build-env BP_GO_TARGETS=./some-target:./other-target` | | Add CA certificates to the system trust store at build and runtime. 
| See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Integrate with Dynatrace APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | The following table lists the features supported in Azure Spring Apps: | Feature description | Comment | Environment variable | Usage | |-|--|--|--|-| Specify a Node version. | Supports *12.\**, *14.\**, *16.\**, *18.\**, *19.\**. The default value is *18.\**. <br>You can specify the Node version via an *.nvmrc* or *.node-version* file at the application directory root. `BP_NODE_VERSION` overrides the settings. | `BP_NODE_VERSION` | `--build-env BP_NODE_VERSION=18.*` | +| Specify a Node version. | Supports *14.\**, *16.\**, *18.\**, *19.\**. The default value is *18.\**. <br>You can specify the Node version via an *.nvmrc* or *.node-version* file at the application directory root. `BP_NODE_VERSION` overrides the settings. | `BP_NODE_VERSION` | `--build-env BP_NODE_VERSION=19.*` | | Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Integrate with Dynatrace, Elastic, New Relic, App Dynamic APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` | |
static-web-apps | Assign Roles Microsoft Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/assign-roles-microsoft-graph.md | In this tutorial, you learn to: > [!NOTE] > This tutorial requires you to [use a function to assign roles](authentication-custom.md#manage-roles). Function-based role management is currently in preview. +There's a function named *GetRoles* in the app's API. This function uses the user's access token to query Active Directory from Microsoft Graph. If the user is a member of any groups defined in the app, then the corresponding custom roles are mapped to the user. + ## Prerequisites -- **Active Azure account:** If you don't have one, you can [create an account for free](https://azure.microsoft.com/free/).-- You must have sufficient permissions to create an Azure Active Directory application.+| Requirement | Comments | +||| +| Active Azure account | If you don't have one, you can [create an account for free](https://azure.microsoft.com/free/). | +| Azure Active Directory permissions | You must have sufficient permissions to create an Azure Active Directory application. | ## Create a GitHub repository -1. Go to the following location to create a new repository: - - [https://github.com/staticwebdev/roles-function/generate](https://github.com/login?return_to=/staticwebdev/roles-function/generate) +1. Generate a repository based on the roles function template. Go to the following location to create a new repository. ++ [https://github.com/staticwebdev/roles-function/generate](https://github.com/login?return_to=/staticwebdev/roles-function/generate) 1. Name your repository **my-custom-roles-app**. In this tutorial, you learn to: ## Deploy the static web app to Azure -1. In a new browser window, go to the [Azure portal](https://portal.azure.com) and sign in with your Azure account. +1. In a new browser window, open the [Azure portal](https://portal.azure.com). -1. Select **Create a resource** in the top left corner. +1. From the top left corner, select **Create a resource**. -1. Type **static web apps** in the search box. +1. In the search box, type **static web apps**. -1. Select **Static Web App**. +1. Select **Static Web Apps**. 1. Select **Create**. -1. Configure your Azure Static Web App with the following information: +1. Configure your static web app with the following information: - | **Input** | **Value** | **Notes** | - | - | - | - | - | _Subscription_ | Select your Azure subscription | | - | _Resource group_ | Create a new one named **my-custom-roles-app-group** | | - | _Name_ | **my-custom-roles-app** | | - | _Hosting plan_ | **Standard** | Customizing authentication and assigning roles using a function require the Standard plan | - | _Region_ | Select a region closest to you | | - | _Deployment details_ | Select **GitHub** as the source | | + | Setting | Value | Notes | + |||| + | Subscription | Select your Azure subscription. | | + | Resource group | Create a new group named **my-custom-roles-app-group**. | | + | Name | **my-custom-roles-app** | | + | Plan type | **Standard** | Customizing authentication and assigning roles using a function require the *Standard* plan. | + | Region for API | Select the region closest to you. | -1. Select **Sign-in with GitHub** and authenticate with GitHub. +1. In the *Deployment details* section: -1. Select the name of the _Organization_ where you created the repository. + | Setting | Value | + ||| + | Source | Select **GitHub**. 
| + | Organization | Select the organization where you generated the repository. | + | Repository | Select **my-custom-roles-app**. | + | Branch | Select **main**. | -1. Select **my-custom-roles-app** from the _Repository_ drop-down. +1. In the _Build Details_ section, add the configuration details for this app. -1. Select **main** from the _Branch_ drop-down. + | Setting | Value | Notes | + |||| + | Build presets | Select **Custom**. | | + | App location | Enter **/frontend**. | This folder contains the front end application. | + | API location | **/api** | Folder in the repository containing the API functions. | + | Output location | Leave blank. | This app has no build output. | -1. In the _Build Details_ section, add configuration details for this app. +1. Select **Review + create**. - | **Input** | **Value** | **Notes** | - | - | - | - | - | _Build presets_ | **Custom** | | - | _App location_ | **frontend** | Folder in the repository containing the app | - | _API location_ | **api** | Folder in the repository containing the API | - | _Output location_ | | This app has no build output | +1. Select **Create** initiate the first deployment. -1. Select **Review + create**. Then select **Create** to create the static web app and initiate the first deployment. +1. Once the process is complete, select **Go to resource** to open your new static web app. -1. Select **Go to resource** to open your new static web app. --1. In the overview section, locate your application's **URL**. Copy this value into a text editor as you'll need this URL to set up Active Directory authentication and test the app. +1. In the overview section, locate your application's **URL**. Copy this value into a text editor to use in upcoming steps to set up Active Directory authentication. ## Create an Azure Active Directory application 1. In the Azure portal, search for and go to *Azure Active Directory*. -1. In the menu bar, select **App registrations**. --1. Select **+ New registration** to open the *Register an application* page +1. From the *Manage* menu, select **App registrations**. -1. Enter a name for the application. For example, **MyStaticWebApp**. +1. Select **New registration** to open the *Register an application* window. Enter the following values: -1. For *Supported account types*, select **Accounts in this organizational directory only**. --1. For *Redirect URIs*, select **Web** and enter the Azure Active Directory login [authentication callback](authentication-custom.md#authentication-callbacks) of your static web app. For example, `<YOUR_SITE_URL>/.auth/login/aad/callback`. -- Replace `<YOUR_SITE_URL>` with the URL of your static web app. + | Setting | Value | Notes | + |||| + | Name | Enter **MyStaticWebApp**. | | + | Supported account types | Select **Accounts in this organizational directory only**. || + | Redirect URI | Select **Web** and enter the Azure Active Directory [authentication callback](authentication-custom.md#authentication-callbacks) URL of your static web app. Replace `<YOUR_SITE_URL>` in `<YOUR_SITE_URL>/.auth/login/aad/callback` with the URL of your static web app. | This URL is what you copied to a text editor in an earlier step. | :::image type="content" source="media/assign-roles-microsoft-graph/create-app-registration.png" alt-text="Create an app registration"::: 1. Select **Register**. -1. After the app registration is created, copy the **Application (client) ID** and **Directory (tenant) ID** in the *Essentials* section to a text editor. 
You'll need these values to configure Active Directory authentication in your static web app. +1. After the app registration is created, copy the **Application (client) ID** and **Directory (tenant) ID** in the *Essentials* section to a text editor. ++ You need these values to configure Active Directory authentication in your static web app. ### Enable ID tokens -1. Select *Authentication* in the menu bar. +1. From the app registration settings, select **Authentication** under *Manage*. 1. In the *Implicit grant and hybrid flows* section, select **ID tokens (used for implicit and hybrid flows)**. - :::image type="content" source="media/assign-roles-microsoft-graph/enable-id-tokens.png" alt-text="Enable ID tokens"::: - - This configuration is required by Static Web Apps to authenticate your users. + The Static Web Apps runtime requires this configuration to authenticate your users. 1. Select **Save**. ### Create a client secret -1. Select *Certificates & secrets* in the menu bar. +1. In the app registration settings, select **Certificates & secrets** under *Manage*. -1. In the *Client secrets* section, select **+ New client secret**. +1. In the *Client secrets* section, select **New client secret**. -1. Enter a name for the client secret. For example, **MyStaticWebApp**. +1. For the *Description* field, enter **MyStaticWebApp**. -1. Leave the default of _6 months_ for the *Expires* field. +1. For the *Expires* field, leave the default value of _6 months_. > [!NOTE] > You must rotate the secret before the expiration date by generating a new secret and updating your app with its value. 1. Select **Add**. -1. Note the **Value** of the client secret you created. You'll need this value to configure Active Directory authentication in your static web app. +1. Copy the **Value** of the client secret you created to a text editor. ++ You need this value to configure Active Directory authentication in your static web app. :::image type="content" source="media/assign-roles-microsoft-graph/create-client-secret.png" alt-text="Create a client secret"::: ## Configure Active Directory authentication -1. In a browser, open the GitHub repository containing the static web app you deployed. Go to the app's configuration file at *frontend/staticwebapp.config.json*. It contains the following section: +1. In a browser, open the GitHub repository containing the static web app you deployed. ++ Go to the app's configuration file at *frontend/staticwebapp.config.json*. This file contains the following section: ```json "auth": { In this tutorial, you learn to: }, ``` - > [!NOTE] - > To obtain an access token for Microsoft Graph, the `loginParameters` field must be configured with `resource=https://graph.microsoft.com`. + This configuration is made up of the following settings: ++ | Properties | Description | + ||| + | `rolesSource` | The URL where the login process gets a list of available roles. For the sample application the URL is `/api/GetRoles`. | + | `userDetailsClaim` | The URL of the schema used to validate the login request. | + | `openIdIssuer` | The Azure Active Directory login route, appended with your tenant ID. | + | `clientIdSettingName` | Your Azure Active Directory tenant ID. | + | `clientSecretSettingName` | Your Azure Active Directory client secret value. | + | `loginParameters` | To obtain an access token for Microsoft Graph, the `loginParameters` field must be configured with `resource=https://graph.microsoft.com`. | -2. Select **Edit** to update the file. +1. Select **Edit** to update the file. 
-3. Update the *openIdIssuer* value of `https://login.microsoftonline.com/<YOUR_AAD_TENANT_ID>` by replacing `<YOUR_AAD_TENANT_ID>` with the directory (tenant) ID of your Azure Active Directory. +1. Update the *openIdIssuer* value of `https://login.microsoftonline.com/<YOUR_AAD_TENANT_ID>` by replacing `<YOUR_AAD_TENANT_ID>` with the directory (tenant) ID of your Azure Active Directory. -4. Select **Commit directly to the main branch** and select **Commit changes**. +1. Select **Commit changes...**. -5. A GitHub Actions run triggers to update the static web app. +1. Enter a commit message, and select **Commit changes**. -6. Go to your static web app resource in the Azure portal. + Committing these changes initiates a GitHub Actions run to update the static web app. -7. Select **Configuration** in the menu bar. +1. Go to your static web app resource in the Azure portal. -8. In the *Application settings* section, add the following settings: +1. Select **Configuration** in the menu bar. ++1. In the *Application settings* section, add the following settings: | Name | Value |- ||-| - | `AAD_CLIENT_ID` | *Your Active Directory application (client) ID* | - | `AAD_CLIENT_SECRET` | *Your Active Directory application client secret value* | + ||| + | `AAD_CLIENT_ID` | Your Active Directory application (client) ID. | + | `AAD_CLIENT_SECRET` | Your Active Directory application client secret value. | ++1. Select **Save**. -9. Select **Save**. +## Create roles ++1. Open your Active Directory app registration in the Azure portal. ++1. Under *Manage*, select **App roles**. ++1. Select **Create app role** and enter the following values: ++ | Setting | Value | + ||| + | Display name | Enter **admin**. | + | Allowed member types | Select **Users/Groups**. | + | Value | Enter **admin**. | + | Description | Enter **Administrator**. | ++1. Check the box for **Do you want to enable this app role?** ++1. Select **Apply**. ++1. Now repeat the same process for a role named **reader**. ++1. Copy the *ID* values for each role and set them aside in a text editor. ## Verify custom roles -The sample application contains a serverless function (*api/GetRoles/index.js*) that queries Microsoft Graph to determine if a user is in a pre-defined group. Based on the user's group memberships, the function assigns custom roles to the user. The application is configured to restrict certain routes based on these custom roles. +The sample application contains an API function (*api/GetRoles/index.js*) that queries Microsoft Graph to determine if a user is in a predefined group. ++Based on the user's group memberships, the function assigns custom roles to the user. The application is configured to restrict certain routes based on these custom roles. ++1. In your GitHub repository, go to the *GetRoles* function located at *api/GetRoles/index.js*. ++    Near the top, there's a `roleGroupMappings` object that maps custom user roles to Azure Active Directory groups. -1. In your GitHub repository, go to the *GetRoles* function located at *api/GetRoles/index.js*. Near the top, there is a `roleGroupMappings` object that maps custom user roles to Azure Active Directory groups. -2. Select **Edit**. +1. Select **Edit**. -3. Update the object with group IDs from your Azure Active Directory tenant. 
For instance, if you have groups with IDs `6b0b2fff-53e9-4cff-914f-dd97a13bfbd6` and `b6059db5-9cef-4b27-9434-bb793aa31805`, you would update the object to: The sample application contains a serverless function (*api/GetRoles/index.js*)     }; ``` -    The *GetRoles* function is called whenever a user is successfully authenticated with Azure Active Directory. The function uses the user's access token to query their Active Directory group membership from Microsoft Graph. If the user is a member of any groups defined in the `roleGroupMappings` object , the corresponding custom roles are returned by the function. -    -    In the above example, if a user is a member of the Active Directory group with ID `b6059db5-9cef-4b27-9434-bb793aa31805`, they are granted the `reader` role. +    The *GetRoles* function is called whenever a user is successfully authenticated with Azure Active Directory. The function uses the user's access token to query their Active Directory group membership from Microsoft Graph. If the user is a member of any groups defined in the `roleGroupMappings` object, then the corresponding custom roles are returned. ++    In the above example, if a user is a member of the Active Directory group with ID `b6059db5-9cef-4b27-9434-bb793aa31805`, they're granted the `reader` role. ++1. Select **Commit changes...**. ++1. Add a commit message and select **Commit changes**. -4. Select **Commit directly to the main branch** and select **Commit changes**. +    Making these changes initiates a build to update the static web app. -5. A GitHub Actions run triggers to update the static web app. +1. When the deployment is complete, you can verify your changes by navigating to the app's URL. -6. When the deployment is complete, you can verify your changes by navigating to the app's URL. +1. Sign in to your static web app using Azure Active Directory. -7. Log in to your static web app using Azure Active Directory. +1. When you're logged in, the sample app displays the list of roles that you're assigned based on your identity's Active Directory group membership. -8. When you are logged in, the sample app displays the list of roles that you are assigned based on your identity's Active Directory group membership. Depending on these roles, you are permitted or prohibited to access some of the routes in the app. +    Depending on these roles, you're permitted or prohibited to access some of the routes in the app. > [!NOTE]-> Some queries against Microsoft Graph return multiple pages of data. When more than one query request is required, Microsoft Graph returns an `@odata.nextLink` property in the response which contains a URL to the next page of results. For more details please refer to [Paging Microsoft Graph data in your app](/graph/paging) +> Some queries against Microsoft Graph return multiple pages of data. When more than one query request is required, Microsoft Graph returns an `@odata.nextLink` property in the response which contains a URL to the next page of results. For more information, see [Paging Microsoft Graph data in your app](/graph/paging) ## Clean up resources |
storage-mover | Agent Register | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-register.md | STATUS: IN REVIEW CONTENT: final -REVIEW Stephen/Fabian: not reviewed +REVIEW Stephen/Fabian: COMPLETE REVIEW Engineering: not reviewed+EDIT PASS: COMPLETE ++Initial doc score: 86 +Current doc score: 100 (1654 words and 0 issues) !######################################################## --> REVIEW Engineering: not reviewed The Azure Storage Mover service utilizes agents that carry out the migration jobs you configure in the service. The agent is a virtual machine / appliance that you run on a virtualization host, close to the source storage. -In this article, you'll learn how to successfully register a previously deployed Storage Mover agent VM. Registration creates a trust relationship with your cloud service and enables the agent to receive migration jobs. +You need to register an agent to create a trust relationship with your Storage Mover resource. This trust enables your agent to securely receive migration jobs and report progress. Agent registration can occur over either the public or private endpoint of your Storage Mover resource. A private endpoint, also known as the private link to a resource, can be deployed in an Azure virtual network (VNet). ++You can connect to an Azure VNET from other networks, like an on-premises corporate network. This type of connection is made through a VPN connection such as Azure Express Route. To learn more about this approach, refer to the [Azure ExpressRoute documentation](/azure/expressroute/) and [Azure Private Link](/azure/private-link) documentation. ++[!IMPORTANT] Currently, Storage Mover can be configured to route migration data from the agent to the destination storage account over Private Link. Hybrid Compute heartbeats and certificates can also be routed to a private Azure Arc service endpoint in your virtual network (VNet). Some Storage Mover traffic can't be routed through Private Link and is routed over the public endpoint of a storage mover resource. This data includes control messages, progress telemetry, and copy logs. ++In this article, you learn how to successfully register a previously deployed Storage Mover agent virtual machine (VM). ## Prerequisites There are two prerequisites before you can register an Azure Storage Mover agent Registration creates trust between the agent and the cloud resource. It allows you to remotely manage the agent and to give it migration jobs to execute. -Registration is always initiated from the agent. For security purposes, trust can only be created by the agent reaching out to the Storage Mover service. The registration procedure utilizes your Azure credentials and permissions on the storage mover resource you've previously deployed. If you don't have a storage mover cloud resource or an agent VM deployed yet, refer to the [prerequisites section](#prerequisites). +Registration is always initiated from the agent. In the interest of security, only the agent can establish trust by reaching out to the Storage Mover service. The registration procedure utilizes your Azure credentials and permissions on the storage mover resource you've previously deployed. If you don't have a storage mover cloud resource or an agent VM deployed yet, refer to the [prerequisites section](#prerequisites). 
## Step 1: Connect to the agent VM However, the agent VM is a Linux based appliance and copy/paste often doesn't wo ## Step 2: Test network connectivity -Your agent needs to be connected to the internet. <!-- The article **<!!!!! ARTICLE AND LINK NEEDED !!!!!>** showcases connectivity requirements and options. --> +Your agent needs to be connected to the internet. When logged into the administrative shell, you can test the agents connectivity state: Choice: 3 ``` Select menu item 3) *Test network connectivity*. -<!-- The **<!!!!! ARTICLE AND LINK NEEDED !!!!!>** article can help troubleshoot in case you've encountered any issues. --> + > [!IMPORTANT] > Only proceed to the registration step when your network connectivity test returns no issues. ## Step 3: Register the agent -In this step, you'll register your agent with the storage mover resource you've deployed in an Azure subscription. +In this step, you register your agent with the storage mover resource you've deployed in an Azure subscription. [Connect to the administrative shell](#step-1-connect-to-the-agent-vm) of your agent, then select menu item *4) Register*: ```StorageMoverAgent-AdministrativeShell In this step, you'll register your agent with the storage mover resource you've xdmsh> 4 ```-You'll be prompted for: +You're prompted for: - Subscription ID - Resource group name - Storage mover resource name-- Agent name: This name will be shown for the agent in the Azure portal. Select a name that clearly identifies this agent VM for you. Refer to the [resource naming convention](../azure-resource-manager/management/resource-name-rules.md#microsoftstoragesync) to choose a supported name.+- Agent name: This name is shown for the agent in the Azure portal. Select a name that clearly identifies this agent VM for you. Refer to the [resource naming convention](../azure-resource-manager/management/resource-name-rules.md#microsoftstoragesync) to choose a supported name. +- Private Link Scope: Provide the fully qualified resource ID of your Private Link Scope if you're utilizing private networking. You can find more information on Azure Private Link in the [Azure Private Link documentation](/azure/private-link/) article. -Once you've supplied these values, the agent will attempt registration, and requires you to sign into Azure with the credentials that have permissions to the supplied subscription and storage mover resource. +After you've supplied these values, the agent will attempt registration. During the registration process, you're required to sign into Azure with credentials that have permissions to your subscription and storage mover resource. > [!IMPORTANT] > The Azure credentials you use for registration must have owner permissions to the specified resource group and storage mover resource. For authentication, the agent utilizes the [device authentication flow](../active-directory/develop/msal-authentication-flows.md#device-code) with Azure Active Directory. -The agent will display the device auth URL: [https://microsoft.com/devicelogin](https://microsoft.com/devicelogin) and a unique sign-in code. Navigate to the displayed URL on an internet connected machine, enter the code, and sign into Azure with your credentials. +The agent displays the device auth URL: [https://microsoft.com/devicelogin](https://microsoft.com/devicelogin) and a unique sign-in code. Navigate to the displayed URL on an internet connected machine, enter the code, and sign into Azure with your credentials. -The agent will display detailed progress. 
Once the registration is complete, you'll be able to see the agent in the Azure portal. It will be under *Registered agents* in the storage mover resource you've registered the agent with. +The agent displays detailed progress. Once the registration is complete, you're able to see the agent in the Azure portal. It is under *Registered agents* in the storage mover resource you've registered the agent with. ## Authentication and Authorization The agent is also registered with the [Azure ARC service](../azure-arc/overview. Azure Storage Mover uses a system-assigned managed identity. A managed identity is a service principal of a special type that can only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is also automatically removed. -The process of deletion is automatically initiated when you unregister the agent. However, there are other ways to remove this identity. Doing so will incapacitate the registered agent and require the agent to be unregistered. Only the registration process can get an agent to obtain and maintain its Azure identity properly. +The process of deletion is automatically initiated when you unregister the agent. However, there are other ways to remove this identity. Doing so incapacitates the registered agent and require the agent to be unregistered. Only the registration process can get an agent to obtain and maintain its Azure identity properly. > [!NOTE] > During public preview, there is a side effect of the registration with the Azure ARC service. A separate resource of the type *Server-Azure Arc* is also deployed in the same resource group as your storage mover resource. You won't be able to manage the agent through this resource. The process of deletion is automatically initiated when you unregister the agent The registered agent needs to be authorized to access several services and resources in your subscription. The managed identity is its way to prove its identity. The Azure service or resource can then decide if the agent is authorized to access it. -The agent is automatically authorized to converse with the Storage Mover service. You won't be able to see or influence this authorization short of destroying the managed identity, for instance by unregistering the agent. +The agent is automatically authorized to converse with the Storage Mover service. You aren't able to see or influence this authorization short of destroying the managed identity, for instance by unregistering the agent. #### Just-in-time authorization Perhaps the most important resource the agent needs to be authorized for access is the Azure Storage that is the target for a migration job. Authorization takes place through [Role-based access control](../role-based-access-control/overview.md). For an Azure blob container as a target, the registered agent's managed identity is assigned to the built-in role "Storage Blob Data Contributor" of the target container (not the whole storage account). -This assignment is made in the admin's sign-in context in the Azure portal. Therefore, the admin must be a member of the role-based access control (RBAC) control plane role "Owner" for the target container. This assignment is made just-in-time when you start a migration job. It is at this point that you've selected an agent to execute a migration job. As part of this start action, the agent is given permissions to the data plane of the target container. 
The agent won't be authorized to perform any management plane actions, such as deleting the target container or configuring any features on it. +This assignment is made in the admin's sign-in context in the Azure portal. Therefore, the admin must be a member of the role-based access control (RBAC) control plane role "Owner" for the target container. This assignment is made just-in-time when you start a migration job. It is at this point that you've selected an agent to execute a migration job. As part of this start action, the agent is given permissions to the data plane of the target container. The agent isn't authorized to perform any management plane actions, such as deleting the target container or configuring any features on it. > [!WARNING] > Access is granted to a specific agent just-in-time for running a migration job. However, the agent's authorization to access the target is not automatically removed. You must either manually remove the agent's managed identity from a specific target or unregister the agent to destroy the service principal. This action removes all target storage authorization as well as the ability of the agent to communicate with the Storage Mover and Azure ARC services. |
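The warning in this entry notes that the agent's authorization on a target container isn't removed automatically. As a rough, hedged sketch (the subscription, resource group, storage account, container, and object ID below are placeholders, not values from the article), revoking such a container-scoped assignment with Azure PowerShell might look like this:

```azurepowershell-interactive
# Hedged sketch: revoke the agent managed identity's data-plane role on one target container.
# All names and IDs are placeholders to replace with your own values.
$containerScope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>"
Remove-AzRoleAssignment -ObjectId "<agent-managed-identity-object-id>" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope $containerScope
```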
storage | Access Tiers Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md | description: Azure storage offers different access tiers so that you can store y Previously updated : 06/23/2023 Last updated : 07/13/2023 The following table summarizes the features of the hot, cool, cold, and archive | | **Hot tier** | **Cool tier** | **Cold tier (preview)** |**Archive tier** | |--|--|--|--|--|-| **Availability** | 99.9% | 99% | 99% | Offline | -| **Availability** <br> **(RA-GRS reads)** | 99.99% | 99.99% | 99.9% | Offline | +| **Availability** | 99.9% | 99% | 99% | 99% | +| **Availability** <br> **(RA-GRS reads)** | 99.99% | 99.999% | 99.999% | 99.999% | | **Usage charges** | Higher storage costs, but lower access and transaction costs | Lower storage costs, but higher access and transaction costs | Lower storage costs, but higher access and transaction costs | Lowest storage costs, but highest access, and transaction costs | | **Minimum recommended data retention period** | N/A | 30 days<sup>1</sup> | 90 days<sup>1</sup> | 180 days | | **Latency** <br> **(Time to first byte)** | Milliseconds | Milliseconds | Milliseconds | Hours<sup>2</sup> | |
storage | Authorize Data Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md | The following table describes the options that Azure Storage offers for authoriz |--|--|--|--|--|--|--| | Azure Blobs | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../blobs/authorize-access-azure-active-directory.md) | Not supported | [Supported but not recommended](../blobs/anonymous-read-access-overview.md) | [Supported, only for SFTP](../blobs/secure-file-transfer-protocol-support-how-to.md) | | Azure Files (SMB) | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | Not supported | Supported, only with [Azure AD Domain Services](../files/storage-files-identity-auth-active-directory-domain-service-enable.md) for cloud-only or [Azure AD Kerberos](../files/storage-files-identity-auth-azure-active-directory-enable.md) for hybrid identities | [Supported, credentials must be synced to Azure AD](../files/storage-files-active-directory-overview.md) | Not supported | Not supported |-| Azure Files (REST) | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported (preview)](../files/authorize-oauth-rest.md) | Not supported | Not supported | Not supported | +| Azure Files (REST) | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../files/authorize-oauth-rest.md) | Not supported | Not supported | Not supported | | Azure Queues | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../queues/authorize-access-azure-active-directory.md) | Not Supported | Not supported | Not supported | | Azure Tables | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../tables/authorize-access-azure-active-directory.md) | Not supported | Not supported | Not supported | |
storage | Authorize Data Operations Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/authorize-data-operations-portal.md | There are two new built-in roles that have the required permissions to access fi - [Storage File Data Privileged Reader](../../role-based-access-control/built-in-roles.md#storage-file-data-privileged-reader) - [Storage File Data Privileged Contributor](../../role-based-access-control/built-in-roles.md#storage-file-data-privileged-contributor) -For information about the built-in roles that support access to file data, see [Access Azure file shares using Azure Active Directory with Azure Files OAuth over REST (preview)](authorize-oauth-rest.md). +For information about the built-in roles that support access to file data, see [Access Azure file shares using Azure Active Directory with Azure Files OAuth over REST](authorize-oauth-rest.md). ++> [!NOTE] +> The **Storage File Data Privileged Contributor** role has permissions to read, write, delete, and modify ACLs/NTFS permissions on files/directories in Azure file shares. Modifying ACLs/NTFS permissions isn't supported via the Azure portal. Custom roles can support different combinations of the same permissions provided by the built-in roles. For more information about creating Azure custom roles, see [Azure custom roles](../../role-based-access-control/custom-roles.md) and [Understand role definitions for Azure resources](../../role-based-access-control/role-definitions.md). To update this setting for an existing storage account, follow these steps: ## See also -- [Access Azure file shares using Azure AD with Azure Files OAuth over REST (preview)](authorize-oauth-rest.md)+- [Access Azure file shares using Azure AD with Azure Files OAuth over REST](authorize-oauth-rest.md) - [Authorize access to data in Azure Storage](../common/authorize-data-access.md) |
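For context on how the built-in roles called out in this entry are typically granted, here's a hedged Azure PowerShell sketch; the sign-in name, subscription, resource group, and storage account are placeholders rather than values from the article:

```azurepowershell-interactive
# Hedged sketch: assign the Storage File Data Privileged Contributor role at storage-account scope.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Storage File Data Privileged Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```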
storage | Authorize Oauth Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/authorize-oauth-rest.md | Title: Enable admin-level read and write access to Azure file shares using Azure Active Directory with Azure Files OAuth over REST (preview) + Title: Enable admin-level read and write access to Azure file shares using Azure Active Directory with Azure Files OAuth over REST description: Authorize access to Azure file shares and directories via the OAuth authentication protocol over REST APIs using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access files with an Azure AD account. Previously updated : 05/11/2023 Last updated : 07/13/2023 -# Access Azure file shares using Azure Active Directory with Azure Files OAuth over REST (preview) +# Access Azure file shares using Azure Active Directory with Azure Files OAuth over REST -Azure Files OAuth over REST (preview) enables admin-level read and write access to Azure file shares for users and applications via the [OAuth](https://oauth.net/) authentication protocol, using Azure Active Directory (Azure AD) for REST API based access. Users, groups, first-party services such as Azure portal, and third-party services and applications using REST interfaces can now use OAuth authentication and authorization with an Azure AD account to access data in Azure file shares. PowerShell cmdlets and Azure CLI commands that call REST APIs can also use OAuth to access Azure file shares. +Azure Files OAuth over REST enables admin-level read and write access to Azure file shares for users and applications via the [OAuth](https://oauth.net/) authentication protocol, using Azure Active Directory (Azure AD) for REST API based access. Users, groups, first-party services such as Azure portal, and third-party services and applications using REST interfaces can now use OAuth authentication and authorization with an Azure AD account to access data in Azure file shares. PowerShell cmdlets and Azure CLI commands that call REST APIs can also use OAuth to access Azure file shares. > [!IMPORTANT] > You must call the REST API using an explicit header to indicate your intent to use the additional privilege. This is also true for Azure PowerShell and Azure CLI access. ## Limitations -Azure Files OAuth over REST (preview) only supports the FileREST Data APIs that support operations on files and directories. OAuth isn't supported on FilesREST data plane APIs that manage FileService and FileShare resources. These management APIs are called using the Storage Account Key or SAS token, and are exposed through the data plane for legacy reasons. We recommend using the control plane APIs (the storage resource provider - Microsoft.Storage) that support OAuth for all management activities related to FileService and FileShare resources. +Azure Files OAuth over REST only supports the FileREST Data APIs that support operations on files and directories. OAuth isn't supported on FilesREST data plane APIs that manage FileService and FileShare resources. These management APIs are called using the Storage Account Key or SAS token, and are exposed through the data plane for legacy reasons. We recommend using the control plane APIs (the storage resource provider - Microsoft.Storage) that support OAuth for all management activities related to FileService and FileShare resources. Authorizing file data operations with Azure AD is supported only for REST API versions 2022-11-02 and later. 
See [Versioning for Azure Storage](/rest/api/storageservices/versioning-for-the-azure-storage-services). To use the Azure Files OAuth over REST feature, there are additional permissions Users, groups, or service principals that call the REST API with OAuth must have either the `readFileBackupSemantics` or `writeFileBackupSemantics` action assigned to the role that allows data access. This is a requirement to use this feature. For details on the permissions required to call specific File service operations, see [Permissions for calling data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations). -This preview provides two new built-in roles that include these new actions. +This feature provides two new built-in roles that include these new actions. | **Role** | **Data actions** | |-|| |
storage | Storage Files Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md | Title: Frequently asked questions (FAQ) for Azure Files description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. Previously updated : 05/16/2023 Last updated : 07/12/2023 -### AD DS & Azure AD DS Authentication +### Identity-based authentication * <a id="ad-support-devices"></a> **Does Azure Active Directory Domain Services (Azure AD DS) support SMB access using Azure AD credentials from devices joined to or registered with Azure AD?** No, this scenario isn't supported. * <a id="ad-file-mount-cname"></a>-**Can I use the canonical name (CNAME) to mount an Azure file share while using identity-based authentication (AD DS or Azure AD DS)?** +**Can I use the canonical name (CNAME) to mount an Azure file share while using identity-based authentication?** - No, this scenario isn't currently supported in single-forest AD environments. As an alternative to CNAME, you can use DFS Namespaces with SMB Azure file shares. To learn more, see [How to use DFS Namespaces with Azure Files](files-manage-namespaces.md). + No, this scenario isn't currently supported in single-forest AD environments. This is because when receiving the mount request, Azure Files depends on the Kerberos ticket's server name field to determine what storage account the request is intended for. If `storageaccount.file.core.windows.net` isn't present in the Kerberos ticket as the server name, then the service can't decide which storage account the request is for and is therefore unable to set up an SMB session for the user. ++ As an alternative to CNAME, you can use DFS Namespaces with SMB Azure file shares. To learn more, see [How to use DFS Namespaces with Azure Files](files-manage-namespaces.md). ++ As a workaround for mounting the file share, see the instructions in [Mount the file share from a non-domain-joined VM](storage-files-identity-ad-ds-mount-file-share.md#mount-the-file-share-from-a-non-domain-joined-vm). * <a id="ad-vm-subscription"></a> **Can I access Azure file shares with Azure AD credentials from a VM under a different subscription?** |
storage | Storage Files Identity Ad Ds Mount File Share | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md | description: Learn how to mount an Azure file share to your on-premises Active D Previously updated : 04/07/2023 Last updated : 07/12/2023 recommendations: false Sign in to the client using the credentials of the identity that you granted per Before you can mount the Azure file share, make sure you've gone through the following prerequisites: -- If you're mounting the file share from a client that has previously connected to the file share using your storage account key, make sure that you've disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions on how to remove cached credentials with storage account key and delete existing SMB connections before initializing a new connection with AD DS or Azure AD credentials, follow the two-step process on the [FAQ page](./storage-files-faq.md#ad-ds--azure-ad-ds-authentication).+- If you're mounting the file share from a client that has previously connected to the file share using your storage account key, make sure that you've disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions on how to remove cached credentials with storage account key and delete existing SMB connections before initializing a new connection with AD DS or Azure AD credentials, follow the two-step process on the [FAQ page](./storage-files-faq.md#identity-based-authentication). - Your client must have line of sight to your AD DS. If your machine or VM is outside of the network managed by your AD DS, you'll need to enable VPN to reach AD DS for authentication. +> [!NOTE] +> Using the canonical name (CNAME) to mount an Azure file share isn't currently supported while using identity-based authentication in single-forest AD environments. + ## Mount the file share from a domain-joined VM Run the PowerShell script below or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to persistently mount the Azure file share and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md). |
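The mounting step this entry references checks TCP port 445 before mapping the share. A hedged sketch of that flow, with placeholder storage account and share names (the linked article's full script also handles credential cleanup and error cases):

```powershell
# Hedged sketch: confirm SMB connectivity, then map the share to drive Z: with the signed-in AD DS identity.
$connectTest = Test-NetConnection -ComputerName "<storage-account>.file.core.windows.net" -Port 445
if ($connectTest.TcpTestSucceeded) {
    New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account>.file.core.windows.net\<share-name>" -Persist
}
```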
storage | Storage Files Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-overview.md | description: Learn how to migrate to Azure file shares and find your migration g Previously updated : 05/30/2023 Last updated : 07/13/2023 Here are the two basic components of a file: - **File metadata**: The file metadata has these subcomponents: * File attributes like read-only * File permissions, which can be referred to as *NTFS permissions* or *file and folder ACLs*- * Timestamps, most notably the creation, and last-modified timestamps + * Timestamps, most notably the creation and last-modified timestamps * An alternative data stream, which is a space to store larger amounts of nonstandard properties File fidelity in a migration can be defined as the ability to: Taking the previous information into account, you can see that the target storag Unlike object storage in Azure blobs, an Azure file share can natively store file metadata. Azure file shares also preserve the file and folder hierarchy, attributes, and permissions. NTFS permissions can be stored on files and folders because they're on-premises. +> [!IMPORTANT] +> If you're migrating on-premises file servers to Azure File Sync, set the ACLs for the root directory of the file share **before** copying a large number of files, as changes to permissions for root ACLs can take up to a day to propagate if done after a large file migration. + A user of Active Directory, which is their on-premises domain controller, can natively access an Azure file share. So can a user of Azure Active Directory Domain Services (Azure AD DS). Each uses their current identity to get access based on share permissions and on file and folder ACLs. This behavior is similar to a user connecting to an on-premises file share. The alternative data stream is the primary aspect of file fidelity that currently can't be stored on a file in an Azure file share. It's preserved on-premises when Azure File Sync is used. |
traffic-manager | Traffic Manager Create Rum Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-create-rum-visual-studio.md | Title: Real User Measurements with Visual Studio Mobile Center - Azure Traffic Manager -description: Set up your mobile application developed using Visual Studio Mobile Center to send Real User Measurements to Traffic Manager + Title: Real User Measurements with Visual Studio App Center - Azure Traffic Manager +description: Set up your mobile application developed using Visual Studio App Center to send Real User Measurements to Traffic Manager documentationcenter: traffic-manager -# How to send Real User Measurements to Traffic Manager with Visual Studio Mobile Center +# How to send Real User Measurements to Traffic Manager with Visual Studio App Center -You can set up your mobile application developed using Visual Studio Mobile Center to send Real User Measurements to Traffic Manager by following the steps: +You can set up your mobile application developed using Visual Studio App Center to send Real User Measurements to Traffic Manager by following the steps: >[!NOTE] > Currently, sending Real User Measurements to Traffic manager is only supported for Android. To obtain the RUM Key using Azure portal using the following procedure: 6. Click the **Copy** button to copy the RUM Key. -## Step 2: Instrument your app with the RUM package of Mobile Center SDK +## Step 2: Instrument your app with the RUM package of App Center SDK -If you're new to Visual Studio Mobile Center, visit its [website](https://mobile.azure.com). For detailed instructions on SDK integration, see +If you're new to Visual Studio App Center, visit its [website](https://mobile.azure.com). For detailed instructions on SDK integration, see [Getting Started with the Android SDK](/mobile-center/sdk/getting-started/Android). To use Real User Measurements, complete the following procedure: To use Real User Measurements, complete the following procedure: ```java RealUserMeasurements.setRumKey("<Your RUM Key>");- MobileCenter.start(getApplication(), "<Your Mobile Center AppSecret>", RealUserMeasurements.class); + MobileCenter.start(getApplication(), "<Your App Center AppSecret>", RealUserMeasurements.class); ``` ## Next steps - Learn more about [Real User Measurements](traffic-manager-rum-overview.md) - Learn [how Traffic Manager works](traffic-manager-overview.md)-- Learn more about [Mobile Center](/mobile-center/)-- [Sign up](https://mobile.azure.com) for Mobile Center+- Learn more about [App Center](/appcenter) +- [Set up](/appcenter/dashboard/#set-up-your-app-center-account) an App Center account - Learn more about the [traffic-routing methods](traffic-manager-routing-methods.md) supported by Traffic Manager - Learn how to [create a Traffic Manager profile](./quickstart-create-traffic-manager-profile.md) |
update-center | Manage Vms Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-vms-programmatically.md | Invoke-AzVMPatchAssessment -ResourceGroupName "myRG" -VMName "myVM" To trigger an update deployment to your Azure VM, specify the following POST request: ```rest-POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.Compute/machines/virtualMachineName/installPatches?api-version=2020-12-01` +POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.Compute/virtualMachines/virtualMachineName/installPatches?api-version=2020-12-01` ``` #### Request body |
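To illustrate the corrected endpoint, here's a hedged sketch that calls it with `Invoke-AzRestMethod`; the payload is an illustrative minimal body, and the subscription, resource group, and VM names are placeholders rather than values from the article:

```azurepowershell-interactive
# Hedged sketch: trigger an install-patches run against the corrected virtualMachines path.
$payload = @'
{
  "maximumDuration": "PT2H",
  "rebootSetting": "IfRequired",
  "windowsParameters": {
    "classificationsToInclude": [ "Critical", "Security" ]
  }
}
'@
Invoke-AzRestMethod -Method POST -Payload $payload `
    -Path "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>/installPatches?api-version=2020-12-01"
```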
update-center | Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md | To schedule recurring updates on a single VM, follow these steps: - Start on - Maintenance window (in hours) > [!NOTE]- > The upper maintenance window is 3.55 hours. + > The upper maintenance window is 3 hours and 55 minutes. - Repeats (monthly, daily or weekly) - Add end date - Schedule summary |
virtual-desktop | Start Virtual Machine Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md | To use Start VM on Connect, make sure you follow these guidelines: ## Assign the Desktop Virtualization Power On Contributor role with the Azure portal -Before you can configure Start VM on Connect, you'll need to assign the *Desktop Virtualization Power On Contributor* role-based access control (RBAC) role with your Azure subscription as the assignable scope. Assigning this role at any level lower than your subscription, such as the resource group, host pool, or VM, will prevent Start VM on Connect from working properly. You'll need to add each Azure subscription as an assignable scope that contains host pools and session host VMs you want to use with Start VM on Connect. This role and assignment will allow Azure Virtual Desktop to power on VMs, check their status, and report diagnostic information in those subscriptions. +Before you can configure Start VM on Connect, you'll need to assign the *Desktop Virtualization Power On Contributor* role-based access control (RBAC) role with your Azure subscription as the assignable scope. Assigning this role at any level lower than your subscription, such as the resource group, host pool, or VM, will prevent Start VM on Connect from working properly. ++You'll need to add each Azure subscription as an assignable scope that contains host pools and session host VMs you want to use with Start VM on Connect. This role and assignment will allow Azure Virtual Desktop to power on VMs, check their status, and report diagnostic information in those subscriptions. To learn how to assign the *Desktop Virtualization Power On Contributor* role to the Azure Virtual Desktop service principal, see [Assign RBAC roles to the Azure Virtual Desktop service principal](service-principal-assign-roles.md). If you run into any issues with Start VM On Connect, we recommend you use the Az If the session host VM doesn't turn on, you'll need to check the health of the VM you tried to turn on as a first step. +> [!NOTE] +> Connecting to a session host outside of Azure Virtual Desktop that is powered off, such as using the MSTSC client, won't start the VM. + For other questions, check out the [Start VM on Connect FAQ](start-virtual-machine-connect-faq.md). ## Next steps |
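As a hedged illustration of the subscription-scoped assignment this entry describes (the display name lookup and subscription ID are assumptions to verify in your tenant; the linked article remains the authoritative procedure):

```azurepowershell-interactive
# Hedged sketch: assign the role to the Azure Virtual Desktop service principal at subscription scope.
$avdPrincipal = Get-AzADServicePrincipal -DisplayName "Azure Virtual Desktop"
New-AzRoleAssignment -ObjectId $avdPrincipal.Id `
    -RoleDefinitionName "Desktop Virtualization Power On Contributor" `
    -Scope "/subscriptions/<subscription-id>"
```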
virtual-desktop | What Is App Attach | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/what-is-app-attach.md | -MSIX app attach is a way to deliver MSIX applications to both physical and virtual machines. However, MSIX app attach is different from regular MSIX because it's made especially for supported products, such as Azure Virtual Desktop. This article will describe what MSIX app attach is and what it can do for you. +MSIX app attach is a way to deliver MSIX applications to Azure Virtual Desktop virtual machines. MSIX app attach is different from regular MSIX because it's made especially for supported products, such as Azure Virtual Desktop. This article describes what MSIX app attach is and what it can do for you. ## Terminology |
virtual-machine-scale-sets | Spot Priority Mix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-priority-mix.md | You can refer to this [ARM template example](https://paste.microsoft.com/f84d2f8 ### [Portal](#tab/portal-1) -You can set your Spot Priority Mix in the Scaling tab of the Virtual Machine Scale Sets creation process in the Azure portal. The following steps instruct you on how to access this feature during that process. +You can set your Spot Priority Mix in the Spot tab of the Virtual Machine Scale Sets creation process in the Azure portal. The following steps instruct you on how to access this feature during that process. 1. Log in to the [Azure portal](https://portal.azure.com). 1. In the search bar, search for and select **Virtual Machine Scale Sets**. 1. Select **Create** on the **Virtual Machine Scale Sets** page. 1. In the **Basics** tab, fill out the required fields, select **Flexible** as the **Orchestration** mode, and select the checkbox for **Run with Azure Spot discount**.-1 In the **Spot** tab, select the check-box next to *Scale with VMs and Spot VMs* option under the **Scale with VMs and discounted Spot VMs** section. +1. In the **Spot** tab, select the check-box next to *Scale with VMs and Spot VMs* option under the **Scale with VMs and discounted Spot VMs** section. 1. Fill out the **Base VM (uninterruptible) count** and **Instance distribution** fields to configure your percentage split between Spot and Standard VMs. 1. Continue through the Virtual Machine Scale Set creation process. New-AzVmss ` ``` -- ## Updating your Spot Priority Mix Should your ideal percentage split of Spot and Standard VMs change, you can update your Spot Priority Mix after your scale set has been deployed. Updating your Spot Priority Mix will apply for all scale set actions *after* the change is made, existing VMs will remain as is. ### [Portal](#tab/portal-2) You can update your existing Spot Priority Mix in the Configuration tab of the Virtual Machine Scale Set resource page in the Azure portal. The following steps instruct you on how to access this feature during that process. Note: in Portal, you can only update the Spot Priority Mix for scale sets that already have Spot Priority Mix enabled. +You can update your existing Spot Priority Mix in the Configuration tab of the Virtual Machine Scale Set resource page in the Azure portal. The following steps instruct you on how to access this feature during that process. Note: in Portal, you can only update the Spot Priority Mix for scale sets that already have Spot Priority Mix enabled. + 1. Navigate to the specific virtual machine scale set that you're adjusting the Spot Priority Mix on. 1. In the left side bar, scroll down to and select **Configuration**. 1. Your current Spot Priority Mix should be visible. Here you can change the **Base VM (uninterruptible) count** and **Instance distribution** of Spot and Standard VMs. Update-AzVmss ` ``` -- ## Examples The following examples have scenario assumptions, a table of actions, and walk-through of results to help you understand how Spot Priority Mix configuration works. The following scenario assumptions apply to this example: | Scale out | 120 | 10 | 27 | 83 (73 running VMs, 10 Stop-Deallocated VMs) | +++++++++ Example walk-through: 1. With the initial creation of the Virtual Machine Scale Set and Spot Priority Mix, you have 20 VMs. 
- 10 of those VMs are the Base (standard) VMs, 2 extra standard VMs, and 8 Spot priority VMs for your 25% *regularPriorityPercentageAboveBase*. If Spot Priority Mix isn't available to you, be sure to configure the `priorityM ## FAQs ### Q: I changed the Spot Priority Mix settings, why aren't my existing VMs changing?+ Spot Priority Mix applies for scale actions on the scale set. Changing the percentage split of Spot and Standard VMs won't rebalance existing scale set. You'll see the actual percentage split change as you scale the scale set. ### Q: Is Spot Priority Mix enabled for Uniform orchestration mode? Spot VMs, and therefore Spot Priority Mix, are available in all global Azure reg > [!div class="nextstepaction"] > [Learn more about Spot virtual machines](../virtual-machines/spot-vms.md)++ |
virtual-machine-scale-sets | Tutorial Use Disks Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-disks-powershell.md | Update-AzVmss ` -VirtualMachineScaleSet $vmss ``` -Alternatively, if you want to add a data disk to an individual instance in a scale set, use [Add-AzVMDataDisk](/powershell/module/az.compute/add-azvmdatadisk). +Alternatively, if you want to add a data disk to an individual instance in a scale set, use [Add-AzVmssVMDataDisk](/powershell/module/az.compute/add-azvmssvmdatadisk). ```azurepowershell-interactive-$VirtualMachine = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myScaleSet_Instance1" -Add-AzVMDataDisk -VM $VirtualMachine -Name "disk1" -LUN 2 -Caching ReadOnly -DiskSizeinGB 1 -CreateOption Empty -Update-AzVM -ResourceGroupName "myResourceGroup" -VM $VirtualMachine +$VirtualMachine = Get-AzVmssVM -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -InstanceId 1 +Add-AzVmssVMDataDisk -VirtualMachineScaleSetVM $VirtualMachine -LUN 2 -DiskSizeInGB 1 -CreateOption Empty -StorageAccountType Standard_LRS +Update-AzVmssVM -VirtualMachineScaleSetVM $VirtualMachine ``` ## List attached disks Update-AzVmss ` -VirtualMachineScaleSet $vmss ``` -Alternatively, if you want to remove a data disk to an individual instance in a scale set, use [Remove-AzVMDataDisk](/powershell/module/az.compute/remove-azvmdatadisk). +Alternatively, if you want to remove a data disk from an individual instance in a scale set, use [Remove-AzVmssVMDataDisk](/powershell/module/az.compute/remove-azvmssvmdatadisk). ```azurepowershell-interactive-$VirtualMachine = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myScaleSet_c91dfbd9" -Remove-AzVMDataDisk -VM $VirtualMachine -Name "myScaleSet_c91dfbd9_disk3_65c5d7a8f5ae40cb8d5f80c04b7b3d2e" -Update-AzVM -ResourceGroupName "myResourceGroup" -VM $VirtualMachine +$VirtualMachine = Get-AzVmssVM -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -InstanceId "c91dfbd9" +Remove-AzVmssVMDataDisk -VirtualMachineScaleSetVM $VirtualMachine -Lun 2 +Update-AzVmssVM -VirtualMachineScaleSetVM $VirtualMachine ``` ## Clean up resources |
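After attaching or removing a disk on a single instance as shown in this entry, a quick hedged check (placeholder resource names) can confirm what's currently attached:

```azurepowershell-interactive
# Hedged sketch: list the data disks attached to one scale set instance.
$instance = Get-AzVmssVM -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -InstanceId 1
$instance.StorageProfile.DataDisks | Select-Object Lun, Name, DiskSizeGB
```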
virtual-machine-scale-sets | Virtual Machine Scale Sets Automatic Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md | The following example describes how to set automatic OS upgrades on a scale set "MaxUnhealthyInstancePercent": 25, "MaxUnhealthyUpgradedInstancePercent": 25, "PauseTimeBetweenBatches": "PT0S"- "automaticOSUpgradePolicy": { - "enableAutomaticOSUpgrade": true, - "useRollingUpgradePolicy": true, - "disableAutomaticRollback": false - } - } - }, + }, + "automaticOSUpgradePolicy": { + "enableAutomaticOSUpgrade": true, + "useRollingUpgradePolicy": true, + "disableAutomaticRollback": false + } + }, "imagePublisher": { "type": "string", "defaultValue": "MicrosoftWindowsServer" The following example describes how to set automatic OS upgrades on a scale set "defaultValue": "latest" } }+ ``` ### Bicep |
virtual-machine-scale-sets | Virtual Machine Scale Sets Design Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview.md | Generally, scale sets are useful for deploying highly available infrastructure w Some features are currently only available in VMs: -- You can capture an image from an individual VM, but not from a VM in a scale set.-- You can migrate an individual VM from native disks to managed disks, but you cannot migrate VM instances in a scale set.+- You can capture an image from a VM in a flexible scale set, but not from a VM in a uniform scale set. +- You can migrate an individual VM from classic disks to managed disks, but you cannot migrate VM instances in a uniform scale set. - You can assign IPv6 public IP addresses to individual VM virtual network interface cards (NICs), but cannot do so for VM instances in a scale set. You can assign IPv6 public IP addresses to load balancers in front of either individual VMs or scale set VMs. ## Storage |
virtual-machines | Dcasccv5 Dcadsccv5 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcasccv5-dcadsccv5-series.md | The DCas_cc_v5-series sizes offer a combination of vCPU and memory for most prod [Premium Storage](premium-storage-performance.md): Supported <br> [Premium Storage caching](premium-storage-performance.md): Supported <br>-[Live Migration](maintenance-and-updates.md): Supported <br> -[Memory Preserving Updates](maintenance-and-updates.md): Supported <br> -[VM Generation Support](generation-2.md): Generation 1 and 2 <br> -[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> +[Live Migration](maintenance-and-updates.md): Not Supported <br> +[Memory Preserving Updates](maintenance-and-updates.md): Not Supported <br> +[VM Generation Support](generation-2.md): Generation 2 <br> +[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported only for Marketplace Windows image <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br> |
virtual-machines | Disks Shared | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md | description: Learn about sharing Azure managed disks across multiple Linux VMs. Previously updated : 06/19/2023 Last updated : 07/12/2023 +Shared disks require a cluster manager, like Windows Server Failover Cluster (WSFC), or Pacemaker, that handles cluster node communication and write locking. Shared managed disks don't natively offer a fully managed file system that can be accessed using SMB/NFS. + ## How it works VMs in the cluster can read or write to their attached disk based on the reservation chosen by the clustered application using [SCSI Persistent Reservations](https://www.t10.org/members/w_spc3.htm) (SCSI PR). SCSI PR is an industry standard used by applications running on Storage Area Network (SAN) on-premises. Enabling SCSI PR on a managed disk allows you to migrate these applications to Azure as-is. Shared managed disks offer shared block storage that can be accessed from multiple VMs, these are exposed as logical unit numbers (LUNs). LUNs are then presented to an initiator (VM) from a target (disk). These LUNs look like direct-attached-storage (DAS) or a local drive to the VM. -Shared managed disks don't natively offer a fully managed file system that can be accessed using SMB/NFS. You need to use a cluster manager, like Windows Server Failover Cluster (WSFC), or Pacemaker, that handles cluster node communication and write locking. - ## Limitations [!INCLUDE [virtual-machines-disks-shared-limitations](../../includes/virtual-machines-disks-shared-limitations.md)] |
virtual-machines | Disks Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md | Title: Select a disk type for Azure IaaS VMs - managed disks description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 06/13/2023 Last updated : 07/12/2023 To deploy a Premium SSD v2, see [Deploy a Premium SSD v2](disks-deploy-premium-v ## Premium SSDs -Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. To take advantage of the speed and performance of Premium SSDs, you can migrate existing VM disks to Premium SSDs. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. +Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. To take advantage of the speed and performance of Premium SSDs, you can migrate existing VM disks to Premium SSDs. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. Premium SSDs only supports 512E sector size. To learn more about individual Azure VM types and sizes for Windows or Linux, including size compatibility for premium storage, see [Sizes for virtual machines in Azure](sizes.md). You'll need to check each individual VM size article to determine if it's premium storage-compatible. For Premium SSDs, each I/O operation less than or equal to 256 kB of throughput ## Standard SSDs -Azure standard SSDs are optimized for workloads that need consistent performance at lower IOPS levels. They're an especially good choice for customers with varying workloads supported by on-premises hard disk drive (HDD) solutions. Compared to standard HDDs, standard SSDs deliver better availability, consistency, reliability, and latency. Standard SSDs are suitable for web servers, low IOPS application servers, lightly used enterprise applications, and non-production workloads. Like standard HDDs, standard SSDs are available on all Azure VMs. +Azure standard SSDs are optimized for workloads that need consistent performance at lower IOPS levels. They're an especially good choice for customers with varying workloads supported by on-premises hard disk drive (HDD) solutions. Compared to standard HDDs, standard SSDs deliver better availability, consistency, reliability, and latency. Standard SSDs are suitable for web servers, low IOPS application servers, lightly used enterprise applications, and non-production workloads. Like standard HDDs, standard SSDs are available on all Azure VMs. Standard SSD only supports 512E sector size. ### Standard SSD size Standard SSDs offer disk bursting, which provides better tolerance for the unpre ## Standard HDDs -Azure standard HDDs deliver reliable, low-cost disk support for VMs running latency-tolerant workloads. With standard storage, your data is stored on HDDs, and performance may vary more widely than that of SSD-based disks. Standard HDDs are designed to deliver write latencies of less than 10 ms and read latencies of less than 20 ms for most IO operations. Actual performance may vary depending on IO size and workload pattern, however. When working with VMs, you can use standard HDD disks for dev/test scenarios and less critical workloads. 
Standard HDDs are available in all Azure regions and can be used with all Azure VMs. +Azure standard HDDs deliver reliable, low-cost disk support for VMs running latency-tolerant workloads. With standard storage, your data is stored on HDDs, and performance may vary more widely than that of SSD-based disks. Standard HDDs are designed to deliver write latencies of less than 10 ms and read latencies of less than 20 ms for most IO operations. Actual performance may vary depending on IO size and workload pattern, however. When working with VMs, you can use standard HDD disks for dev/test scenarios and less critical workloads. Standard HDDs are available in all Azure regions and can be used with all Azure VMs. Standard HDDs only supports 512E sector size. ### Standard HDD size [!INCLUDE [disk-storage-standard-hdd-sizes](../../includes/disk-storage-standard-hdd-sizes.md)] |
virtual-machines | Ecasccv5 Ecadsccv5 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ecasccv5-ecadsccv5-series.md | The ECas_cc_v5-series sizes offer a combination of vCPU and memory for most prod [Premium Storage](premium-storage-performance.md): Supported <br> [Premium Storage caching](premium-storage-performance.md): Supported <br>-[Live Migration](maintenance-and-updates.md): Supported <br> -[Memory Preserving Updates](maintenance-and-updates.md): Supported <br> +[Live Migration](maintenance-and-updates.md): Not Supported <br> +[Memory Preserving Updates](maintenance-and-updates.md): Not Supported <br> [VM Generation Support](generation-2.md): Generation 2 <br>-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> -[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> +[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported only for Marketplace Windows image <br> +[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br> |
virtual-machines | Key Vault Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-linux.md | The Key Vault VM extension supports these Linux distributions: > [!NOTE] > To get extended security features, prepare to upgrade Ubuntu 16.04 and Debian 9 systems as these versions are reaching their end of designated support period.-> > [!NOTE]-> The Key Vault VM Extension downloads the certificates in the default location or to the location provided by "certStoreLocation" property in the VM Extension settings. The KeyValut VM Extension updates the folder permission to 700 (drwx) allowing read, write and execute permission to the owner of the folder only +> The Key Vault VM Extension downloads the certificates in the default location or to the location provided by "certStoreLocation" property in the VM Extension settings. The Key Vault VM Extension updates the folder permission to 700 (drwx) allowing read, write and execute permission to the owner of the folder only ### Supported certificate content types - PKCS #12 - PEM - ## Prerequisites - Key Vault instance with certificate. See [Create a Key Vault](../../key-vault/general/quick-create-portal.md) - VM/VMSS must have assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) The Key Vault VM extension supports these Linux distributions: } ` ## Key Vault VM extension version-* Users can chose to upgrade their key vault vm extension version to `V2.0` to use full certificate chain download feature. Issuer certificates (intermediate and root) will be appended to the leaf certificate in the PEM file. ++* Users can chose to upgrade their Key Vault vm extension version to `V2.0` to use full certificate chain download feature. Issuer certificates (intermediate and root) will be appended to the leaf certificate in the PEM file. * If you prefer to upgrade to `v2.0`, you would need to delete `v1.0` first, then install `v2.0`. ```azurecli The Key Vault VM extension supports these Linux distributions: * If the VM has certificates downloaded by v1.0, deleting the v1.0 AKVVM extension will NOT delete the downloaded certificates. After installing v2.0, the existing certificates will NOT be modified. You would need to delete the certificate files or roll-over the certificate to get the PEM file with full-chain on the VM. --- ## Extension schema The following JSON shows the schema for the Key Vault VM extension. The extension does not require protected settings - all its settings are considered information without security impact. The extension requires a list of monitored secrets, polling frequency, and the destination certificate store. Specifically: The following JSON shows the schema for the Key Vault VM extension. The extensio > Also **required** for **Azure Arc-enabled VMs**. > Set msiEndpoint to `http://localhost:40342/metadata/identity`. -- ### Property values | Name | Value / Example | Data Type | The following JSON shows the schema for the Key Vault VM extension. The extensio | msiEndpoint | http://169.254.169.254/metadata/identity | string | | msiClientId | c7373ae5-91c2-4165-8ab6-7381d6e75619 | string | - ## Template deployment Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when deploying one or more virtual machines that require post deployment refresh of certificates. The extension can be deployed to individual VMs or virtual machine scale sets. 
The schema and configuration are common to both template types. The Azure PowerShell can be used to deploy the Key Vault VM extension to an exis # Start the deployment Update-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName> -VirtualMachineScaleSet $vmss - ``` ## Azure CLI deployment |
virtual-machines | Expand Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md | +An OS disk has a maximum capacity of 4,095 GiB. However, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, create and attach data disks and use them for data storage. If you need to store data on the OS disk and require the additional space, convert it to GUID Partition Table (GPT). + > [!WARNING] > Always make sure that your filesystem is in a healthy state, your disk partition table type (GPT or MBR) will support the new size, and ensure your data is backed up before you perform disk expansion operations. For more information, see the [Azure Backup quickstart](../../backup/quick-backup-vm-portal.md). |
virtual-machines | Image Builder Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md | myBigFile.zip 826000 B / 826000 B 100.00% `File` customizer is suitable only for small (less than 20 MB) file downloads. For larger file downloads, use a script or inline command. For example, in Linux you can use `wget` or `curl`. In Windows, you can use `Invoke-WebRequest`. ++### The builder continually fails to run Windows-Restart with the error code 1190 ++#### Error ++```output +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 [INFO] (telemetry) Starting provisioner windows-restart +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-plugin-azure plugin: 2023/06/13 08:28:58 [INFO] starting remote command: shutdown /r /f /t 10 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-plugin-azure plugin: 2023/06/13 08:28:58 [INFO] command 'shutdown /r /f /t 10' exited with code: 0 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER OUT ==> azure-arm: A system shutdown has already been scheduled.(1190) +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-plugin-azure plugin: 2023/06/13 08:28:58 [INFO] RPC endpoint: Communicator ended with: 0 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 [INFO] 0 bytes written for 'stdout' +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 [INFO] 0 bytes written for 'stderr' +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 [INFO] RPC client: Communicator ended with: 0 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 [INFO] RPC endpoint: Communicator ended with: 0 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-provisioner-windows-restart plugin: [INFO] 0 bytes written for 'stdout' +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-provisioner-windows-restart plugin: [INFO] 0 bytes written for 'stderr' +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-provisioner-windows-restart plugin: [INFO] RPC client: Communicator ended with: 0 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-provisioner-windows-restart plugin: Check if machine is rebooting... 
+[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-plugin-azure plugin: 2023/06/13 08:28:58 [INFO] starting remote command: shutdown /r /f /t 60 /c "packer restart test" +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-plugin-azure plugin: 2023/06/13 08:28:58 [INFO] command 'shutdown /r /f /t 60 /c "packer restart test"' exited with code: 1190 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-plugin-azure plugin: 2023/06/13 08:28:58 [INFO] RPC endpoint: Communicator ended with: 1190 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 [INFO] 52 bytes written for 'stderr' +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 [INFO] 0 bytes written for 'stdout' +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 [INFO] RPC client: Communicator ended with: 1190 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 [INFO] RPC endpoint: Communicator ended with: 1190 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-provisioner-windows-restart plugin: [INFO] 52 bytes written for 'stderr' +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-provisioner-windows-restart plugin: [INFO] 0 bytes written for 'stdout' +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-provisioner-windows-restart plugin: [INFO] RPC client: Communicator ended with: 1190 +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:28:58 packer-provisioner-windows-restart plugin: Reboot already in progress, waiting... +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:29:08 packer-provisioner-windows-restart plugin: Check if machine is rebooting... +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:29:09 [INFO] 0 bytes written for 'stderr' +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:29:09 packer-provisioner-windows-restart plugin: [INFO] 0 bytes written for 'stderr' +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:29:09 packer-provisioner-windows-restart plugin: Waiting for machine to reboot with timeout: 15m0s +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:29:09 packer-provisioner-windows-restart plugin: Waiting for machine to become available... +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER OUT ==> Some builds didn't complete successfully and had errors: +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:46:26 machine readable: azure-arm,error []string{"Timeout waiting for machine to restart."} +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER OUT --> azure-arm: Timeout waiting for machine to restart. +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR ==> Builds finished but no artifacts were created. +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER OUT +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:46:26 [INFO] (telemetry) Finalizing. +[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER OUT ==> Builds finished but no artifacts were created. +``` ++#### Cause ++The windows update step declares prematurely in images based on Windows Server 2016. ++#### Solution ++Increase `restartTimeout` from 15 minutes to 30 minutes. + ### Error waiting on Azure Compute Gallery #### Error |
virtual-machines | Maintenance Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md | This scope is integrated with [update management center](../update-center/overvi - A minimum of 1 hour and 10 minutes is required for the maintenance window. :::image type="content" source="./media/maintenance-configurations/add-schedule-maintenance-window.png" alt-text="Screenshot of the upper maintenance window minimum time specification.":::-- The upper maintenance window is 3.55 hours.++- The upper maintenance window is 3 hours 55 mins. - A minimum of 1 hour and 30 minutes is required for the maintenance window. - There is no limit to the recurrence of your schedule. |
virtual-machines | Managed Disks Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/managed-disks-overview.md | A data disk is a managed disk that's attached to a virtual machine to store appl Every virtual machine has one attached operating system disk. That OS disk has a pre-installed OS, which was selected when the VM was created. This disk contains the boot volume. -This disk has a maximum capacity of 4,095 GiB, however, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, create and attach [data disks](#data-disk) and use them for data storage. If you need to store data on the OS disk and require the additional space, [convert it to GUID Partition Table](/windows-server/storage/disk-management/change-an-mbr-disk-into-a-gpt-disk) (GPT). To learn about the differences between MBR and GPT on Windows deployments, see [Windows and GPT FAQ](/windows-hardware/manufacture/desktop/windows-and-gpt-faq). +This disk has a maximum capacity of 4,095 GiB. However, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, create and attach [data disks](#data-disk) and use them for data storage. If you need to store data on the OS disk and require the additional space, [convert it to GUID Partition Table](/windows-server/storage/disk-management/change-an-mbr-disk-into-a-gpt-disk) (GPT). To learn about the differences between MBR and GPT on Windows deployments, see [Windows and GPT FAQ](/windows-hardware/manufacture/desktop/windows-and-gpt-faq). ### Temporary disk |
virtual-machines | Expand Os Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/expand-os-disk.md | +An OS disk has a maximum capacity of 4,095 GiB. However, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, create and attach data disks and use them for data storage. If you need to store data on the OS disk and require the additional space, [convert it to GUID Partition Table](/windows-server/storage/disk-management/change-an-mbr-disk-into-a-gpt-disk) (GPT). To learn about the differences between MBR and GPT on Windows deployments, see [Windows and GPT FAQ](/windows-hardware/manufacture/desktop/windows-and-gpt-faq). ++ > [!IMPORTANT] > Unless you use [Expand without downtime](#expand-without-downtime), expanding a data disk requires the VM to be deallocated. > |
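When you take the with-downtime path, the resize reduces to deallocate, grow the managed disk, and restart. A minimal Azure CLI sketch, assuming placeholder names and a managed OS disk:

```azurecli
# Sketch: expand an OS disk with downtime; names are placeholders and the new
# size must be larger than the current size.
az vm deallocate --resource-group myResourceGroup --name myVM

az disk update --resource-group myResourceGroup --name myOSDisk --size-gb 256

az vm start --resource-group myResourceGroup --name myVM
```

After the VM starts, extend the partition and file system inside the guest (for example, with Disk Management or the `Resize-Partition` cmdlet) so Windows can use the added space.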
virtual-machines | Mainframe White Papers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/mainframe-white-papers.md | Modernize your infrastructure at cloud scale. TmaxSoft OpenFrame makes it easy t This white paper reflects Astadia's more than 25 years of mainframe platform modernization expertise. They explain the benefits and challenges of modernization efforts. This guide gives an overview of the IBM mainframe and an IBM mainframe-to-Azure reference architecture. It also provides a look at the Astadia success methodology. -### [Deploying mainframe applications to Microsoft Azure](http://content.microfocus.com/deploying-mainframe-azure) +### Deploying mainframe applications to Microsoft Azure Solutions from Micro Focus free you from the constraints of proprietary mainframe hardware and software. In this guide, Micro Focus explains how to deploy your COBOL and PL/I applications running in IBM mainframes to the cloud instead. |
virtual-machines | Partner Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/partner-workloads.md | For more help with mainframe emulation and services, refer to the [Azure Mainfra - Asysco AMT COBOL development environment (Unisys, IBM mainframes, and other COBOL dialects such as Micro Focus COBOL). - Asysco AMT GO cloud-based deployment architecture for high-end workloads. - Asysco AMT Transform for converting data, code, scripting, security, interfaces and other mainframe artifacts.-- [Fujitsu NetCOBOL](https://www.fujitsu.com/global/products/software/developer-tool/netcobol/) development and integration tools.+- [Fujitsu NetCOBOL](https://www.adaptigent.com/products/cobol-compiler/) development and integration tools. - [Micro Focus Visual COBOL](https://www.microfocus.com/products/visual-cobol/) development and integration tools. - [Micro Focus PL/I](https://www.microfocus.com/documentation/enterprise-developer/ed30/Eclipse/BKPUPUUSNGS040.html) legacy compiler for the .NET platform, supporting mainframe PL/I syntax, data types, and behavior. - [Micro Focus Enterprise Server](https://www.microfocus.com/products/enterprise-suite/enterprise-server/) mainframe integration platform.-- [Modern Systems CTU (COBOL-To-Universal)](http://test.modernsystems.com/automated-cobol-conversion-with-cobol-to-universal/) development and integration tools.+- Modern Systems CTU (COBOL-To-Universal) development and integration tools. - [NTT Data Enterprise COBOL](https://us.nttdata.com/en/digital/application-development-and-modernization) development and integration tools. - [NTT Open PL/I](https://us.nttdata.com/en/digital/application-development-and-modernization) legacy compiler for the .NET platform, supporting mainframe PL/I syntax, data types, and behavior. - [Raincode COBOL compiler](https://www.raincode.com/products/cobol/) development and integration tools. |