Updates from: 09/20/2022 01:10:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
The following example demonstrates the use of a self-asserted technical profile
### Output claims sign-up or sign-in page
-In a combined sign-up and sign-in page, note the following when using a content definition [DataUri](contentdefinitions.md#datauri) element the specifies a `unifiedssp` or `unifiedssd` page type:
+In a combined sign-up and sign-in page, note the following when using a content definition [DataUri](contentdefinitions.md#datauri) element that specifies a `unifiedssp` or `unifiedssd` page type:
- Only the username and password claims are rendered.
- The first two output claims must be the username and the password (in this order).
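A minimal sketch of a self-asserted technical profile that satisfies this ordering; the profile ID, claim names, and content definition ID shown here are illustrative and should match whatever your own policy defines:

```xml
<!-- Hypothetical self-asserted technical profile for a combined sign-up/sign-in page.
     The referenced content definition must use a DataUri with the unifiedssp page type. -->
<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
  <DisplayName>Local Account Signin</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <Item Key="ContentDefinitionReferenceId">api.signuporsignin</Item>
  </Metadata>
  <OutputClaims>
    <!-- Username first, password second: only these two claims are rendered on the page -->
    <OutputClaim ClaimTypeReferenceId="signInName" Required="true" />
    <OutputClaim ClaimTypeReferenceId="password" Required="true" />
  </OutputClaims>
</TechnicalProfile>
```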
active-directory Product Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-integrations.md
- Title: View integration information about an authorization system in Permissions Management
-description: View integration information about an authorization system in Permissions Management.
- Previously updated : 02/23/2022
-# View integration information about an authorization system
-
-The **Integrations** dashboard in Permissions Management allows you to view all your authorization systems in one place, and to ensure all applications are functioning as one. This information helps improve quality and performance as a whole.
-
-## Display integration information about an authorization system
-
-Refer to the **Integration** subpages in Permissions Management for information about available authorization systems for integration.
-
-1. To display the **Integrations** dashboard, select **User** (your initials) in the upper right of the screen, and then select **Integrations.**
-
- The **Integrations** dashboard displays a tile for each available authorization system.
-
-1. Select an authorization system tile to view its integration information.
-
-## Available integrated authorization systems
-
-The following authorization systems may be listed in the **Integrations** dashboard, depending on which systems are integrated into the Permissions Management application.
-- **ServiceNow**: Manages digital workflows for enterprise operations, and the Permissions Management integration allows you to request and approve permissions through the ServiceNow ticketing workflow.
-- **Splunk**: Searches, monitors, and analyzes machine-generated data, and the Permissions Management integration enables exporting usage analytics data, alerts, and logs.
-- **HashiCorp Terraform**: Permissions Management enables the generation of least-privilege policies through the HashiCorp Terraform provider.
-- **Permissions Management API**: The Permissions Management application programming interface (API) provides access to Permissions Management features.
-- **Saviynt**: Enables you to view Identity entitlements and usage inside the Saviynt console.
-- **Securonix**: Enables exporting usage analytics data, alerts, and logs.
-<!## Next steps>
-
-<![Installation overview](installation.md)>
-<![Configure integration with the Permissions Management API](integration-api.md)>
-<![Sign up and deploy FortSentry in your organization](fortsentry-registration.md)>
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
const msalInstance = new PublicClientApplication(msalConfig);
You can test your application by signing in a user and then using the Azure portal to revoke the user's session. The next time the app calls the CAE-enabled API, the user will be asked to reauthenticate.
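For the reauthentication to happen, the client has to forward the claims challenge returned by the CAE-enabled API back to Azure AD on its next token request. A minimal sketch with MSAL.js, assuming a hypothetical `extractClaimsChallenge` helper that parses the `WWW-Authenticate` header and placeholder configuration values:

```javascript
import { PublicClientApplication } from "@azure/msal-browser";

const msalConfig = {
  auth: {
    clientId: "<your-client-id>",   // placeholder
    clientCapabilities: ["CP1"],    // declare that the client can handle claims challenges
  },
};
const msalInstance = new PublicClientApplication(msalConfig);

async function callCaeProtectedApi(account, scopes, apiUrl) {
  // Try the cached or silently renewed token first.
  let { accessToken } = await msalInstance.acquireTokenSilent({ account, scopes });

  let response = await fetch(apiUrl, { headers: { Authorization: `Bearer ${accessToken}` } });
  if (response.status === 401 && response.headers.get("WWW-Authenticate")) {
    // The API rejected the revoked token and returned a claims challenge.
    const claims = extractClaimsChallenge(response.headers.get("WWW-Authenticate"));

    // Passing the claims challenge to the token request prompts the user to reauthenticate.
    ({ accessToken } = await msalInstance.acquireTokenPopup({ account, scopes, claims }));
    response = await fetch(apiUrl, { headers: { Authorization: `Bearer ${accessToken}` } });
  }
  return response;
}
```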
+## Code samples
+
+- [React single-page application using MSAL React to sign-in users against Azure Active Directory](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph)
+- [Enable your ASP.NET Core web app to sign in users and call Microsoft Graph with the Microsoft identity platform](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-1-Call-MSGraph)
## Next steps
- [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md) conceptual overview
- [Claims challenges, claims requests, and client capabilities](claims-challenge.md)
-- [React single-page application using MSAL React to sign-in users against Azure Active Directory](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph)
-- [Enable your ASP.NET Core web app to sign in users and call Microsoft Graph with the Microsoft identity platform](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-1-Call-MSGraph)
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Previously updated : 06/28/2022 Last updated : 09/16/2022
The ID element identifies which property on the source provides the value for th
> [!NOTE]
> Names and URIs of claims in the restricted claim set cannot be used for the claim type elements. For more information, see the "Exceptions and restrictions" section later in this article.
-### Group Filter (Preview)
+### Group Filter
**String:** GroupFilter
Based on the method chosen, a set of inputs and outputs is expected. Define the
|Join|string1, string2, separator|outputClaim|Joins input strings by using a separator in between. For example: string1:"foo@bar.com" , string2:"sandbox" , separator:"." results in outputClaim:"foo@bar.com.sandbox"|
|ExtractMailPrefix|Email or UPN|extracted string|ExtensionAttributes 1-15 or any other directory extensions that store a UPN or email address value for the user, for example, johndoe@contoso.com. Extracts the local part of an email address. For example: mail:"foo@bar.com" results in outputClaim:"foo". If no \@ sign is present, then the original input string is returned as is.|
-**InputClaims:** Use an InputClaims element to pass the data from a claim schema entry to a transformation. It has three attributes: **ClaimTypeReferenceId**, **TransformationClaimType** and **TreatAsMultiValue** (Preview)
+**InputClaims:** Use an InputClaims element to pass the data from a claim schema entry to a transformation. It has three attributes: **ClaimTypeReferenceId**, **TransformationClaimType** and **TreatAsMultiValue**
- **ClaimTypeReferenceId** is joined with the ID element of the claim schema entry to find the appropriate input claim.
- **TransformationClaimType** is used to give a unique name to this input. This name must match one of the expected inputs for the transformation method.
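As an illustration of how these pieces fit together, here is a sketch of a claims-mapping policy definition that wires the `Join` transformation from the table above into an emitted JWT claim. The claim names and transformation ID are illustrative; check the exact shape against the examples in the full article:

```json
{
  "ClaimsMappingPolicy": {
    "Version": 1,
    "IncludeBasicClaimSet": "true",
    "ClaimsSchema": [
      { "Source": "user", "ID": "mail" },
      { "Source": "transformation", "ID": "joinedMail", "TransformationId": "JoinTheData", "JwtClaimType": "sandboxMail" }
    ],
    "ClaimsTransformation": [
      {
        "ID": "JoinTheData",
        "TransformationMethod": "Join",
        "InputClaims": [
          { "ClaimTypeReferenceId": "mail", "TransformationClaimType": "string1" }
        ],
        "InputParameters": [
          { "ID": "string2", "Value": "sandbox" },
          { "ID": "separator", "Value": "." }
        ],
        "OutputClaims": [
          { "ClaimTypeReferenceId": "joinedMail", "TransformationClaimType": "outputClaim" }
        ]
      }
    ]
  }
}
```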
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
device.objectId -ne null
## Extension properties and custom extension properties
-Extension attributes and custom extension properties are supported as string properties in dynamic membership rules. [Extension attributes](/graph/api/resources/onpremisesextensionattributes) can be synced from on-premises Window Server Active Directory or updated using Microsoft Graph and take the format of "ExtensionAttributeX", where X equals 1 - 15. Here's an example of a rule that uses an extension attribute as a property:
+Extension attributes and custom extension properties are supported as string properties in dynamic membership rules. [Extension attributes](/graph/api/resources/onpremisesextensionattributes) can be synced from on-premises Windows Server Active Directory or updated using Microsoft Graph and take the format of "ExtensionAttributeX", where X equals 1 - 15. Multi-value extension properties are not supported in dynamic membership rules. Here's an example of a rule that uses an extension attribute as a property:
```
(user.extensionAttribute15 -eq "Marketing")
```
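Custom extension properties follow the `user.extension_[GUID]_[Attribute]` naming convention, where the GUID is the ID (without dashes) of the application the property was created through. The application ID and attribute name below are illustrative:

```
(user.extension_c272a57b722d4eb29bfe327874ae79cb_OfficeNumber -eq "123")
```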
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 09/12/2022 Last updated : 09/19/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on September 12th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on September 19th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 Apps for Faculty | OFFICESUBSCRIPTION_FACULTY | 12b8c807-2e20-48fc-b453-542b6ee9d171 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OneDrive for Business (Plan 1) (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91) | | Microsoft 365 Apps for Students | OFFICESUBSCRIPTION_STUDENT | c32f9321-a627-406d-a114-1f9c81aaafac | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OneDrive for Business (Plan 1) (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | | Microsoft 365 Audio Conferencing for GCC | MCOMEETADV_GOC | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) |
+| Microsoft 365 Audio Conferencing Pay-Per-Minute - EA | MCOMEETACPEA | df9561a4-4969-4e6a-8e73-c601b68ec077 | MCOMEETACPEA (bb038288-76ab-49d6-afc1-eaa6c222c65a) | Microsoft 365 Audio Conferencing Pay-Per-Minute (bb038288-76ab-49d6-afc1-eaa6c222c65a) |
| Microsoft 365 Business Basic | O365_BUSINESS_ESSENTIALS | 3b555118-da6a-4418-894f-7df1e2096870 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft 365 Business Basic | SMB_BUSINESS_ESSENTIALS | dab7782a-93b1-4074-8bb1-0e61318bea0b | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | | Microsoft 365 Business Standard | O365_BUSINESS_PREMIUM | f245ecc8-75af-4f8e-b61f-27d8114de5f3 | CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_STANDARD 
(9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>STREAM_O365_SMB (3c53ea51-d578-46fa-a4c0-fd0a92809a60)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Business (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Kaizala Pro (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 1) (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Stream for Office 365 (3c53ea51-d578-46fa-a4c0-fd0a92809a60)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 F3 GCC | M365_F1_GOV | 2a914830-d700-444a-b73c-e3f31980d833 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_F1_GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>CDS_O365_F1_GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>EXCHANGE_S_DESKLESS_GOV (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>FORMS_GOV_F1 (bfd4133a-bbf3-4212-972b-60412137c428)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K_GOV (d65648f1-9504-46e4-8611-2658763f28b8)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708- 6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_S1_GOV (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>FLOW_O365_S1_GOV (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SHAREPOINTDESKLESS_GOV (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>MCOIMP_GOV (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service - O365 F1 GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>Common Data Service for Teams_F1 GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>Exchange Online (Kiosk) for Government (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>Forms for Government (Plan F1) (bfd4133a-bbf3-4212-972b-60412137c428)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (F1) (d65648f1-9504-46e4-8611-2658763f28b8)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 F3 for Government (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>Power Automate for Office 365 F3 for Government (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SharePoint KioskG (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>Skype for Business Online (Plan 1) for Government (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3) | | MICROSOFT 365 G3 GCC | M365_G3_GOV | e823ca47-49c4-46b3-b38d-ca11d5abe3d2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV 
(8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>CONTENT_EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>CONTENTEXPLORER_STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>AZURE RIGHTS MANAGEMENT PREMIUM FOR GOVERNMENT (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS ΓÇô PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS ΓÇô STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 ΓÇô STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) | | Microsoft 365 GCC G5 | M365_G5_GCC | e2be619b-b125-455f-8660-fb503e431a5d | CDS_O365_P3_GCC (bce5e5ca-c2fd-4d53-8ee2-58dfffed4c10)<br/>LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING 
(2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6)<br/>FORMS_GOV_E5 (843da3a8-d2cc-4e7a-9e90-dc46019f964c)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS_GOV (208120d1-9adb-4daf-8c22-816bd5d237e7)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS_GOV (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>STREAM_O365_E5_GOV (92c2089d-9a53-49fe-b1a6-9e6bdf959547)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_PREMIUM2_GOV (5400a66d-eaa5-427d-80f2-0f26d59d8fce)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P3_GCC (a7d3fb37-b6df-4085-b509-50810d991a39)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>POWERAPPS_O365_P3_GOV (0eacfc38-458a-40d3-9eab-9671258f1a3e)<br/>FLOW_O365_P3_GOV (8055d84a-c172-42eb-b997-6c2ae4628246) | Common Data Service for Teams (bce5e5ca-c2fd-4d53-8ee2-58dfffed4c10)<br/>Customer Lockbox for Government (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>Exchange Online (Plan 2) for Government (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics ΓÇô Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for enterprise G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>Microsoft 365 Audio Conferencing for Government 
(f544b08d-1645-4287-82de-8d91f37c02a1)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System for Government (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>Microsoft Defender for Office 365 (Plan 2) for Government (900018f1-0cdb-4ecb-94d4-90281760fdc6)<br/>Microsoft Forms for Government (Plan E5) (843da3a8-d2cc-4e7a-9e90-dc46019f964c)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics for Government (Full) (208120d1-9adb-4daf-8c22-816bd5d237e7)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery for Government (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>Skype for Business Online (Plan 2) for Government (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>Stream for Office 365 for Government (E5) (92c2089d-9a53-49fe-b1a6-9e6bdf959547)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Information Protection Premium P2 for GCC (5400a66d-eaa5-427d-80f2-0f26d59d8fce)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service (a7d3fb37-b6df-4085-b509-50810d991a39)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Apps for Office 365 for Government (0eacfc38-458a-40d3-9eab-9671258f1a3e)<br/>Power Automate for Office 365 for Government (8055d84a-c172-42eb-b997-6c2ae4628246) |
-| Microsoft 365 Phone System | MCOEV | e43b5b99-8dfb-405f-9987-dc307f34bcbd | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
-| Microsoft 365 Phone System for DOD | MCOEV_DOD | d01d9287-694b-44f3-bcc5-ada78c8d953e | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
-| Microsoft 365 Phone System for Faculty | MCOEV_FACULTY | d979703c-028d-4de5-acbf-7955566b69b9 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM(4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
-| Microsoft 365 Phone System for GCC | MCOEV_GOV | a460366a-ade7-4791-b581-9fbff1bdaa85 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 PHONE SYSTEM FOR GOVERNMENT (db23fce2-a974-42ef-9002-d78dd42a0f22) |
-| Microsoft 365 Phone System for GCCHIGH | MCOEV_GCCHIGH | 7035277a-5e49-4abc-a24f-0ec49c501bb5 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
-| Microsoft 365 Phone System for Small and Medium Business | MCOEVSMB_1 | aa6791d3-bb09-4bc2-afed-c30c3fe26032 | MCOEVSMB (ed777b71-af04-42ca-9798-84344c66f7c6) | SKYPE FOR BUSINESS CLOUD PBX FOR SMALL AND MEDIUM BUSINESS (ed777b71-af04-42ca-9798-84344c66f7c6) |
-| Microsoft 365 Phone System for Students | MCOEV_STUDENT | 1f338bbc-767e-4a1e-a2d4-b73207cc5b93 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
-| Microsoft 365 Phone System for TELSTRA | MCOEV_TELSTRA | ffaf2d68-1c95-4eb3-9ddd-59b81fba0f61 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
-| Microsoft 365 Phone System_USGOV_DOD | MCOEV_USGOV_DOD | b0e7de67-e503-4934-b729-53d595ba5cd1 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
-| Microsoft 365 Phone System_USGOV_GCCHIGH | MCOEV_USGOV_GCCHIGH | 985fcb26-7b94-475b-b512-89356697be71 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
-| Microsoft 365 Phone System - Virtual User | PHONESYSTEM_VIRTUALUSER | 440eaaa8-b3e0-484b-a8be-62870b9ba70a | MCOEV_VIRTUALUSER (f47330e9-c134-43b3-9993-e7f004506889) | MICROSOFT 365 PHONE SYSTEM VIRTUAL USER (f47330e9-c134-43b3-9993-e7f004506889)|
-| Microsoft 365 Phone System - Virtual User for GCC | PHONESYSTEM_VIRTUALUSER_GOV | 2cf22bcb-0c9e-4bc6-8daf-7e7654c0f285 | MCOEV_VIRTUALUSER_GOV (0628a73f-3b4a-4989-bd7b-0f8823144313) | Microsoft 365 Phone System Virtual User for Government (0628a73f-3b4a-4989-bd7b-0f8823144313) |
| Microsoft 365 Security and Compliance for Firstline Workers | M365_SECURITY_COMPLIANCE_FOR_FLW | 2347355b-4e81-41a4-9c22-55057a399791 | AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Online Archiving for Exchange Online (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics ΓÇô Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics ΓÇô Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 ΓÇô Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft ML-based classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 
(617b097b-4b93-4ede-83de-5f075bb5fb2f) | | Microsoft Business Center | MICROSOFT_BUSINESS_CENTER | 726a0894-2c77-4d65-99da-9775ef05aad1 | MICROSOFT_BUSINESS_CENTER (cca845f9-fd51-4df6-b563-976a37c56ce0) | MICROSOFT BUSINESS CENTER (cca845f9-fd51-4df6-b563-976a37c56ce0) | | Microsoft Cloud App Security | ADALLOM_STANDALONE | df845ce7-05f9-4894-b5f2-11bbfbcfd2b6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Stream Plan 2 | STREAM_P2 | ec156933-b85b-4c50-84ec-c9e5603709ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_P2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Plan 2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | | Microsoft Stream Storage Add-On (500 GB) | STREAM_STORAGE | 9bd7c846-9556-4453-a542-191d527209e8 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_STORAGE (83bced11-77ce-4071-95bd-240133796768) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Storage Add-On (83bced11-77ce-4071-95bd-240133796768) | | Microsoft Teams Audio Conferencing select dial-out | Microsoft_Teams_Audio_Conferencing_select_dial_out | 1c27243e-fb4d-42b1-ae8c-fe25c9616588 | MCOMEETBASIC (9974d6cf-cd24-4ba2-921c-e2aa687da846) | Microsoft Teams Audio Conferencing with dial-out to select geographies (9974d6cf-cd24-4ba2-921c-e2aa687da846) |
-| MICROSOFT TEAMS (FREE) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
-| MICROSOFT TEAMS EXPLORATORY | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653 |
+| Microsoft Teams (Free) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
+| Microsoft Teams Exploratory | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653 |
+| Microsoft Teams Phone Standard | MCOEV | e43b5b99-8dfb-405f-9987-dc307f34bcbd | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
+| Microsoft Teams Phone Standard for DOD | MCOEV_DOD | d01d9287-694b-44f3-bcc5-ada78c8d953e | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
+| Microsoft Teams Phone Standard for Faculty | MCOEV_FACULTY | d979703c-028d-4de5-acbf-7955566b69b9 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
+| Microsoft Teams Phone Standard for GCC | MCOEV_GOV | a460366a-ade7-4791-b581-9fbff1bdaa85 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 PHONE SYSTEM FOR GOVERNMENT (db23fce2-a974-42ef-9002-d78dd42a0f22) |
+| Microsoft Teams Phone Standard for GCCHIGH | MCOEV_GCCHIGH | 7035277a-5e49-4abc-a24f-0ec49c501bb5 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
+| Microsoft Teams Phone Standard for Small and Medium Business | MCOEVSMB_1 | aa6791d3-bb09-4bc2-afed-c30c3fe26032 | MCOEVSMB (ed777b71-af04-42ca-9798-84344c66f7c6) | SKYPE FOR BUSINESS CLOUD PBX FOR SMALL AND MEDIUM BUSINESS (ed777b71-af04-42ca-9798-84344c66f7c6) |
+| Microsoft Teams Phone Standard for Student | MCOEV_STUDENT | 1f338bbc-767e-4a1e-a2d4-b73207cc5b93 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
+| Microsoft Teams Phone Standard for TELSTRA | MCOEV_TELSTRA | ffaf2d68-1c95-4eb3-9ddd-59b81fba0f61 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
+| Microsoft Teams Phone Standard_USGOV_DOD | MCOEV_USGOV_DOD | b0e7de67-e503-4934-b729-53d595ba5cd1 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
+| Microsoft Teams Phone Standard_USGOV_GCCHIGH | MCOEV_USGOV_GCCHIGH | 985fcb26-7b94-475b-b512-89356697be71 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
+| Microsoft Teams Phone Resource Account | PHONESYSTEM_VIRTUALUSER | 440eaaa8-b3e0-484b-a8be-62870b9ba70a | MCOEV_VIRTUALUSER (f47330e9-c134-43b3-9993-e7f004506889) | Microsoft 365 Phone Standard Resource Account (f47330e9-c134-43b3-9993-e7f004506889)|
+| Microsoft Teams Phone Resource Account for GCC | PHONESYSTEM_VIRTUALUSER_GOV | 2cf22bcb-0c9e-4bc6-8daf-7e7654c0f285 | MCOEV_VIRTUALUSER_GOV (0628a73f-3b4a-4989-bd7b-0f8823144313) | Microsoft 365 Phone Standard Resource Account for Government (0628a73f-3b4a-4989-bd7b-0f8823144313) |
| Microsoft Teams Rooms Basic | Microsoft_Teams_Rooms_Basic | 6af4b3d6-14bb-4a2a-960c-6c902aad34f3 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Teams Rooms Basic without Audio Conferencing | Microsoft_Teams_Rooms_Basic_without_Audio_Conferencing | 50509a35-f0bd-4c5e-89ac-22f0e16a00f8 | TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Teams Rooms Pro | Microsoft_Teams_Rooms_Pro | 4cde982a-ede4-4409-9ae6-b003453c8ea6 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) | | Multi-Geo Capabilities in Office 365 | OFFICE365_MULTIGEO | 84951599-62b7-46f3-9c9d-30551b2ad607 | EXCHANGEONLINE_MULTIGEO (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SHAREPOINTONLINE_MULTIGEO (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>TEAMSMULTIGEO (41eda15d-6b52-453b-906f-bc4a5b25a26b) | Exchange Online Multi-Geo (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SharePoint Multi-Geo (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>Teams Multi-Geo (41eda15d-6b52-453b-906f-bc4a5b25a26b) | | Nonprofit Portal | NONPROFIT_PORTAL | aa2695c9-8d59-4800-9dc8-12e01f1735af | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>NONPROFIT_PORTAL (7dbc2d88-20e2-4eb6-b065-4510b38d6eb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Nonprofit Portal (7dbc2d88-20e2-4eb6-b065-4510b38d6eb2)|
-| Office 365 A1 for Faculty | STANDARDWOFFPACK_FACULTY | 94763226-9b3c-4e75-a931-5c89701abe66 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD 9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard 
(Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Office 365 A1 for Faculty | STANDARDWOFFPACK_FACULTY | 94763226-9b3c-4e75-a931-5c89701abe66 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard 
(Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| Office 365 A1 Plus for Faculty | STANDARDWOFFPACK_IW_FACULTY | 78e66a63-337a-4a9a-8959-41c6654dfb56 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic 
(2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 A1 for Students | STANDARDWOFFPACK_STUDENT | 314c4481-f395-4525-be8b-2ec4bb1e9d91 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/> Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 A1 Plus for Students | STANDARDWOFFPACK_IW_STUDENT | 
e82ae690-a2d5-4d76-8d30-7c6e01e6022e | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/> DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
The device sign-in flow prompts users who sign in with a Gmail account in an emb
Alternatively, you can have your existing and new Gmail users sign in with email one-time passcode. To have your Gmail users use email one-time passcode:
-1. [Enable email one-time passcode](one-time-passcode.md#enable-email-one-time-passcode)
-2. [Remove Google Federation](google-federation.md#how-do-i-remove-google-federation)
+1. [Enable email one-time passcode](one-time-passcode.md#enable-or-disable-email-one-time-passcodes).
+2. [Remove Google Federation](google-federation.md#how-do-i-remove-google-federation).
3. [Reset redemption status](reset-redemption-status.md) of your Gmail users so they can use email one-time passcode going forward. If you want to request an extension, impacted customers with affected OAuth client ID(s) should have received an email from Google Developers with the following information regarding a one-time policy enforcement extension, which must be completed by Jan 31, 2022:
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/one-time-passcode.md
Previously updated : 08/31/2022 Last updated : 09/16/2022
The email one-time passcode feature is a way to authenticate B2B collaboration u
> [!IMPORTANT] >
-> - The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven’t explicitly turned it off. This feature provides a seamless fallback authentication method for your guest users. If you don’t want to use this feature, you can [disable it](#disable-email-one-time-passcode), in which case users will be prompted to create a Microsoft account instead.
+> - The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven’t explicitly turned it off. This feature provides a seamless fallback authentication method for your guest users. If you don’t want to use this feature, you can [disable it](#enable-or-disable-email-one-time-passcodes), in which case users will be prompted to create a Microsoft account instead.
## Sign-in endpoints
At the time of invitation, there's no indication that the user you're inviting w
Guest user teri@gmail.com is invited to Fabrikam, which doesn't have Google federation set up. Teri doesn't have a Microsoft account. They'll receive a one-time passcode for authentication.
-## Enable email one-time passcode
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD global administrator.
-
-1. In the navigation pane, select **Azure Active Directory**.
-
-1. Select **External Identities** > **All identity providers**.
-
-1. Select **Email one-time passcode** to open the configuration pane.
-
-1. Under **Email one-time passcode for guests**, select one of the following:
-
- - **Automatically enable email one-time passcode for guests starting October 2021** if you don't want to enable the feature immediately and want to wait for the automatic enablement date.
- - **Enable email one-time passcode for guests effective now** to enable the feature now.
- - **Yes** to enable the feature now if you see a Yes/No toggle (this toggle appears if the feature was previously disabled).
-
- ![Screenshots showing Email one-time passcode toggle enabled.](media/one-time-passcode/enable-email-otp-options.png)
-
-1. Select **Save**.
-
-> [!NOTE]
-> Email one-time passcode settings can also be configured with the [emailAuthenticationMethodConfiguration](/graph/api/resources/emailauthenticationmethodconfiguration) resource type in the Microsoft Graph API.
-
-## Disable email one-time passcode
+## Enable or disable email one-time passcodes
The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. This feature provides a seamless fallback authentication method for your guest users. If you don't want to use this feature, you can disable it, in which case users will be prompted to create a Microsoft account. > [!NOTE] >
-> If the email one-time passcode feature has been enabled in your tenant and you turn it off, any guest users who have redeemed a one-time passcode will not be able to sign in. You can [reset their redemption status](reset-redemption-status.md) so they can sign in again using another authentication method.
+> - Email one-time passcode settings can also be configured with the [emailAuthenticationMethodConfiguration](/graph/api/resources/emailauthenticationmethodconfiguration) resource type in the Microsoft Graph API.
+> - If the email one-time passcode feature has been enabled in your tenant and you turn it off, any guest users who have redeemed a one-time passcode will not be able to sign in. You can [reset their redemption status](reset-redemption-status.md) so they can sign in again using another authentication method.
-### To disable the email one-time passcode feature
+### To enable or disable email one-time passcodes
1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD global administrator.
The email one-time passcode feature is now turned on by default for all new tena
1. Select **External Identities** > **All identity providers**.
-1. Select **Email one-time passcode**, and then under **Email one-time passcode for guests**, select **Disable email one-time passcode for guests** (or **No** if the feature was previously enabled, disabled, or opted into during preview).
-
- ![Screenshots showing the Email one-time passcode toggle disabled.](media/one-time-passcode/disable-email-otp-options.png)
+1. Select **Email one-time passcode**.
- > [!NOTE]
- > Email one-time passcode settings have moved in the Azure portal from **External collaboration settings** to **All identity providers**.
- > If you see a toggle instead of the email one-time passcode options, this means you've previously enabled, disabled, or opted into the preview of the feature. Select **No** to disable the feature.
+1. Under **Email one-time passcode for guests**, select one of the following:
+ - **Yes**: The toggle is set to **Yes** by default unless the feature has been explicitly turned off. To enable the feature, make sure **Yes** is selected.
+ - **No**: If you want to disable the email one-time passcode feature, select **No**.
+
+ ![Screenshots showing the Email one-time passcode toggle.](media/one-time-passcode/email-one-time-passcode-toggle.png)
1. Select **Save**.
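If you prefer to automate this setting instead of using the portal, the note above points to the Microsoft Graph `emailAuthenticationMethodConfiguration` resource. The following is a minimal sketch only; the endpoint path and the `allowExternalIdToUseEmailOtp` property are assumptions taken from that Graph reference, so verify them there before use.

```bash
# Hypothetical sketch: enable or disable email one-time passcode for guests via Microsoft Graph.
# The URL and the allowExternalIdToUseEmailOtp property are assumptions; check the
# emailAuthenticationMethodConfiguration reference linked in the note above.
az rest --method PATCH \
  --url "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethods/email" \
  --headers "Content-Type=application/json" \
  --body '{"@odata.type": "#microsoft.graph.emailAuthenticationMethodConfiguration", "allowExternalIdToUseEmailOtp": "enabled"}'
```

Set the property to `disabled` to turn the feature off; the account running `az rest` needs sufficient Graph permissions to update authentication method policies.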
-## Note for public preview customers
-
-If you've previously opted in to the email one-time passcode public preview, automatic feature enablement doesn't apply to you, so your related business processes won't be affected. Additionally, in the Azure portal, under the **Email one-time passcode for guests** properties, you won't see the option to **Automatically enable email one-time passcode for guests starting October 2021**. Instead, you'll see the following **Yes** or **No** toggle:
-
-![Screenshot showing Email one-time passcode opted in.](media/one-time-passcode/enable-email-otp-opted-in.png)
-
-However, if you'd prefer to opt out of the feature and allow it to be automatically enabled, you can revert to the default settings by using the Microsoft Graph API [email authentication method configuration resource type](/graph/api/resources/emailauthenticationmethodconfiguration). After you revert to the default settings, the following options will be available under **Email one-time passcode for guests**:
-
-![Screenshot showing Enable Email one-time passcode opted in.](media/one-time-passcode/email-otp-options.png)
--- **Automatically enable email one-time passcode for guests starting October 2021**. (Default) If the email one-time passcode feature isn't already enabled for your tenant, it will be automatically turned on. No further action is necessary if you want the feature enabled at that time. If you've already enabled or disabled the feature, this option will be unavailable.--- **Enable email one-time passcode for guests effective now**. Turns on the email one-time passcode feature for your tenant.--- **Disable email one-time passcode for guests**. Turns off the email one-time passcode feature for your tenant, and prevents the feature from turning on at the automatic enablement date.-
-## Note for Azure US Government customers
-
-The email one-time passcode feature is disabled by default in the Azure US Government cloud. Your partners will be unable to sign in unless this feature is enabled. Unlike the Azure public cloud, the Azure US Government cloud doesn't support redeeming invitations with self-service Azure Active Directory accounts.
-
- ![Screenshot showing Email one-time passcode disabled.](media/one-time-passcode/enable-email-otp-disabled.png)
-
-To enable the email one-time passcode feature in Azure US Government cloud:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD global administrator.
-2. In the navigation pane, select **Azure Active Directory**.
-3. Select **Organizational relationships** > **All identity providers**.
-
- > [!NOTE]
- > - If you don't see **Organizational relationships**, search for "External Identities" in the search bar at the top.
-
-4. Select **Email one-time passcode**, and then select **Yes**.
-5. Select **Save**.
-
-For more information about current limitations, see [Azure AD B2B in government and national clouds](b2b-government-national-clouds.md).
- ## Frequently asked questions **What happens to my existing guest users if I enable email one-time passcode?**
For more information about the different redemption pathways, see [B2B collabora
**Will the “No account? Create one!” option for self-service sign-up go away?**
-No. It’s easy to get [self-service sign-up in the context of External Identities](self-service-sign-up-overview.md) confused with self-service sign-up for email-verified users, but they're two different features. The unmanaged ("viral") feature that's going away is [self-service sign-up with email-verified users](../enterprise-users/directory-self-service-signup.md), which results in your guests creating an unmanaged Azure AD account. However, self-service sign-up for External Identities will continue to be available, which results in your guests signing up to your organization with a [variety of identity providers](identity-providers.md).
+No. It’s easy to get [self-service sign-up in the context of External Identities](self-service-sign-up-overview.md) confused with self-service sign-up for email-verified users, but they're two different features. The unmanaged ("viral") feature that has been deprecated is [self-service sign-up with email-verified users](../enterprise-users/directory-self-service-signup.md), which resulted in guests creating an unmanaged Azure AD account. However, self-service sign-up for External Identities will continue to be available, which results in your guests signing up to your organization with a [variety of identity providers](identity-providers.md).
**What does Microsoft recommend we do with existing Microsoft accounts (MSA)?**
When we support the ability to disable Microsoft Account in the Identity provide
**Regarding the change to enable email one-time-passcode by default, does this include SharePoint and OneDrive integration with Azure AD B2B?**
-No, the global rollout of the change to enable email one-time passcode by default doesn't include enabling SharePoint and OneDrive integration with Azure AD B2B. To learn how to enable integration so that collaboration on SharePoint and OneDrive uses B2B capabilities, or how to disable this integration, see [SharePoint and OneDrive Integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration).
+No, the global rollout of the change to enable email one-time passcode by default doesn't include enabling SharePoint and OneDrive integration with Azure AD B2B by default. To learn how to enable integration so that collaboration on SharePoint and OneDrive uses B2B capabilities, or how to disable this integration, see [SharePoint and OneDrive Integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration).
## Next steps
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The capability of replica sets in Azure AD DS is now generally available. [Learn
**Service category:** B2B **Product capability:** B2B/B2C
-Organizations in the Microsoft Azure Government cloud can now enable their guests to redeem invitations with Email One-Time Passcode. This ensures that any guest users with no Azure AD, Microsoft, or Gmail accounts in the Azure Government cloud can still collaborate with their partners by requesting and entering a temporary code to sign in to shared resources. [Learn more](../external-identities/one-time-passcode.md#note-for-azure-us-government-customers).
+Organizations in the Microsoft Azure Government cloud can now enable their guests to redeem invitations with Email One-Time Passcode. This ensures that any guest users with no Azure AD, Microsoft, or Gmail accounts in the Azure Government cloud can still collaborate with their partners by requesting and entering a temporary code to sign in to shared resources. [Learn more](../external-identities/one-time-passcode.md).
active-directory Configure Logic App Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md
Making an Azure Logic app compatible to run with the **Custom Task Extension** r
- Enable system assigned managed identity. - Configure AuthZ policies.
-> [!NOTE]
-> For our public preview we will provide a UI and a deployment script that will automate the following steps.
- To configure those you'll follow these steps: 1. Open the Azure Logic App you want to use with Lifecycle Workflow. Logic Apps may greet you with an introduction screen, which you can close with the X in the upper right corner.
To configure those you'll follow these steps:
1. Create two authorization policies based on the tables below:
+ Policy name: AzureADLifecycleWorkflowsAuthPolicy
+
|Claim |Value | ||| |Issuer | https://sts.windows.net/(Tenant ID)/ | |Audience | Application ID of your Logic Apps Managed Identity |
- |appID | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 |
+ |appid | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 |
Policy name: AzureADLifecycleWorkflowsAuthPolicyV2App
To configure those you'll follow these steps:
||| |Issuer | https://login.microsoftonline.com/(Tenant ID)/v2.0 | |Audience | Application ID of your Logic Apps Managed Identity |
- |appID | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 |
+ |azp | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 |
1. Save the Authorization policy. > [!NOTE]
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
If you are using the Azure portal to create a workflow, you can customize existi
1. select **Workflows (Preview)** 1. On the workflows screen, select the workflow template that you want to use.
- :::image type="content" source="media/create-lifecycle-workflow/template-list.png" alt-text="Screenshot of a list of lifecycle workflows templates.":::
+ :::image type="content" source="media/create-lifecycle-workflow/template-list.png" alt-text="Screenshot of a list of lifecycle workflows templates." lightbox="media/create-lifecycle-workflow/template-list.png":::
1. Enter a unique display name and description for the workflow and select **Next**. :::image type="content" source="media/create-lifecycle-workflow/template-basics.png" alt-text="Screenshot of workflow template basic information.":::
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
There are no functional changes in this release.
This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability, see the CVE.
-To download the latest version of Azure AD Connect 1.6, see the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=103336).
- ### Release status 8/10/2021: Released for download only, not available for auto-upgrade
app-service Configure Network Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/configure-network-settings.md
az resource update --name $ASE_NAME/configurations/networking --set properties.f
az resource show --name $ASE_NAME/configurations/networking -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration" --query properties.ftpEnabled ```
+The setting is also available for configuration through the Azure portal in the App Service Environment configuration:
+ In addition to enabling access, you need to ensure that you have [configured DNS if you are using ILB App Service Environment](./networking.md#dns-configuration-for-ftp-access).
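The `az resource` commands in this section assume that the `$ASE_NAME` and `$RESOURCE_GROUP_NAME` variables are already populated. A minimal sketch with placeholder names, reusing the query shown above:

```bash
# Placeholder values; substitute your own App Service Environment and resource group names.
ASE_NAME="my-asev3"
RESOURCE_GROUP_NAME="my-ase-rg"

# Confirm the current FTP setting using the same query as above.
az resource show --name $ASE_NAME/configurations/networking -g $RESOURCE_GROUP_NAME \
  --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration" \
  --query properties.ftpEnabled
```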
az resource update --name $ASE_NAME/configurations/networking --set properties.R
az resource show --name $ASE_NAME/configurations/networking -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration" --query properties.remoteDebugEnabled ```
+The setting is also available for configuration through the Azure portal in the App Service Environment configuration:
++ ## Next steps > [!div class="nextstepaction"]
app-service How To Upgrade Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-upgrade-preference.md
Title: Configure upgrade preference for App Service Environment
-description: Configure the upgrade preference for the Azure App Service Environment.
+ Title: Configure upgrade preference for App Service Environment planned maintenance
+description: Configure the upgrade preference for the Azure App Service Environment planned maintenance.
Previously updated : 01/08/2022 Last updated : 09/19/2022 zone_pivot_groups: app-service-cli-portal
-# Upgrade preference for App Service Environments
+# Upgrade preference for App Service Environment planned maintenance
-Azure App Service is regularly updated to provide new features, new runtime versions, performance improvements, and bug fixes. The upgrade happens automatically. The upgrades are applied progressively through the regions following [Azure Safe Deployment Practices](https://azure.microsoft.com/blog/advancing-safe-deployment-practices/). An App Service Environment is an Azure App Service feature that provides a fully isolated and dedicated environment for running App Service apps securely at high scale. Because of the isolated nature of App Service Environment, you have an opportunity to influence the upgrade process.
+Azure App Service is regularly updated to provide new features, new runtime versions, performance improvements, and bug fixes. This is also known as planned maintenance. The upgrade happens automatically. The upgrades are applied progressively through the regions following [Azure Safe Deployment Practices](https://azure.microsoft.com/blog/advancing-safe-deployment-practices/). An App Service Environment is an Azure App Service feature that provides a fully isolated and dedicated environment for running App Service apps securely at high scale. Because of the isolated nature of App Service Environment, you have an opportunity to influence the upgrade process.
If you don't have an App Service Environment, see [How to Create an App Service Environment v3](./creation.md).
If you don't have an App Service Environment, see [How to Create an App Service
> This article covers the features, benefits, and use cases of App Service Environment v3, which is used with App Service Isolated v2 plans. >
-With App Service Environment v3, you can specify your preference for when and how the upgrade is applied. The upgrade can be applied automatically or manually. Even with your preference set to automatic, you have some options to influence the timing.
+With App Service Environment v3, you can specify your preference for when and how the planned maintenance is applied. The upgrade can be applied automatically or manually. Even with your preference set to automatic, you have some options to influence the timing.
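For readers who script their environments, a rough sketch of setting the preference with the same `az resource` pattern used in other App Service Environment articles is shown below. The `upgradePreference` property name and the `Manual` value are assumptions here, not taken from this article; confirm them against the CLI steps in the article before use.

```bash
# Hypothetical sketch: set the planned-maintenance upgrade preference on an ASE v3.
# The upgradePreference property and its Manual value are assumptions; verify against
# the article's CLI instructions.
ASE_NAME="my-asev3"
RESOURCE_GROUP_NAME="my-ase-rg"

az resource update --name $ASE_NAME -g $RESOURCE_GROUP_NAME \
  --resource-type "Microsoft.Web/hostingEnvironments" \
  --set properties.upgradePreference=Manual
```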
## Automatic upgrade preference
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
Title: Back up an app
description: Learn how to restore backups of your apps in Azure App Service or configure custom backups. Customize backups by including the linked database. ms.assetid: 6223b6bd-84ec-48df-943f-461d84605694 Previously updated : 09/09/2022 Last updated : 09/19/2022
There are two types of backups in App Service. Automatic backups made for your a
||Automatic backups | Custom backups | |-|-|-|
-| Pricing tiers | **Standard**, **Premium**. | **Standard**, **Premium**, **Isolated**. |
+| Pricing tiers | **Basic**, **Standard**, **Premium**. | **Basic**, **Standard**, **Premium**, **Isolated**. |
| Configuration required | No. | Yes. | | Backup size | 30 GB. | 10 GB, 4 GB of which can be the linked database. | | Linked database | Not backed up. | The following linked databases can be backed up: [SQL Database](/azure/azure-sql/database/), [Azure Database for MySQL](../mysql/index.yml), [Azure Database for PostgreSQL](../postgresql/index.yml), [MySQL in-app](https://azure.microsoft.com/blog/mysql-in-app-preview-app-service/). |
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
recommendations: false
With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
-* ```Custom form```and ```Custom document``` models can be composed together into a single composed model when they're trained with the same API version or an API version later than ```2022-08-31```. For more information on composing custom template and custom neural models, see [compose model limits](#compose-model-limits).
-* With the model compose operation, you can assign up to 100 trained custom models to a single composed model. To analyze a document with a composed model, Form Recognizer first classifies the submitted form, chooses the best-matching assigned model, and returns results.
+* ```Custom form``` and ```Custom template``` models can be composed together into a single composed model.
+* With the model compose operation, you can assign up to 200 trained custom models to a single composed model. To analyze a document with a composed model, Form Recognizer first classifies the submitted form, chooses the best-matching assigned model, and returns results.
* For **_custom template models_**, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms may belong to one of several templates. * The response will include a ```docType``` property to indicate which of the composed models was used to analyze the document.
+* For ```Custom neural``` models, the best practice is to add all the different variations of a single document type into a single training dataset and train a custom neural model on it. Model compose is best suited for scenarios when you have documents of different types being submitted for analysis. A hedged REST sketch of the compose operation follows this list.
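A hedged illustration of the compose operation described in this list: the sketch below assumes the `documentModels:compose` operation of the `2022-08-31` REST API; the endpoint, key, model IDs, and exact request body are placeholders or assumptions, so check them against the Form Recognizer REST reference before use.

```bash
# Hypothetical sketch: compose two existing custom models into a single composed model.
# Endpoint, key, and model IDs are placeholders; the request shape is an assumption
# based on the 2022-08-31 REST API and should be verified against the reference docs.
ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
KEY="<your-key>"

curl -X POST "$ENDPOINT/formrecognizer/documentModels:compose?api-version=2022-08-31" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "modelId": "purchase-orders-composed",
        "description": "Composed model for supply, equipment, and furniture purchase orders",
        "componentModels": [
          { "modelId": "supply-po-model" },
          { "modelId": "equipment-po-model" }
        ]
      }'
```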
## Compose model limits
With composed models, you can assign multiple custom models to a composed model
### Composed model compatibility
- |Custom model type | API Version |Custom form `2022-08-31` (v3.0)| Custom document `2022-08-31` (v3.0) | Custom form GA version (v2.1) or earlier|
+|Custom model type |Models trained with version 2.1 and v2.0 | Custom template models (3.0) preview | Custom neural models 3.0 Preview |Custom neural models 3.0 GA|
|--|--|--|--|--|
-|**Custom template** (updated custom form)| v3.0 | &#10033;| Γ£ô | X |
-|**Custom neural**| trained with current API version (`2022-08-31`) |Γ£ô |Γ£ô | X |
-|**Custom form**| Custom form GA version (v2.1) or earlier | X | X| Γ£ô|
+| Models trained with version 2.1 and v2.0 | Supported | Supported | Not Supported | Not Supported |
+| Custom template models (3.0) preview | Supported | Supported | Not Supported | Not Supported |
+| Custom template models 3.0 GA | Not Supported | Not Supported | Supported | Not Supported |
+| Custom neural models 3.0 Preview | Not Supported | Not Supported | Supported | Not Supported |
+| Custom neural models 3.0 GA | Not Supported | Not Supported | Not Supported | Supported |
-**Table symbols**: ✔—supported; **X—not supported; ✱—unsupported for this API version, but will be supported in a future API version.
* To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That addition will ensure that the v2.1 model can be composed with other models.
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Tabular fields are also useful when extracting repeating information within a do
## Supported regions
-As of August 01, 2022, Form Recognizer custom neural model training will only be available in the following Azure regions until further notice:
+As of September 16, 2022, Form Recognizer custom neural model training will only be available in the following Azure regions until further notice:
+* Australia East
* Brazil South * Canada Central * Central India
+* Central US
+* East Asia
+* France Central
* Japan East
-* West Europe
* South Central US * Southeast Asia
+* UK South
+* West Europe
+* West US2
> [!TIP] > You can [copy a model](disaster-recovery.md#copy-api-overview) trained in one of the select regions listed above to **any other region** and use it accordingly.
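As a rough companion to the tip above, the sketch below shows one way a cross-region copy might look, assuming the `:authorizeCopy` and `:copyTo` operations of the `2022-08-31` REST API; all endpoints, keys, and model IDs are placeholders, and the request shapes are assumptions to verify against the linked copy API overview.

```bash
# Hypothetical sketch: copy a custom neural model from a source region to a target region.
# Operation names and request bodies are assumptions; verify against the copy API overview.
TARGET_ENDPOINT="https://<target-resource>.cognitiveservices.azure.com"
TARGET_KEY="<target-key>"
SOURCE_ENDPOINT="https://<source-resource>.cognitiveservices.azure.com"
SOURCE_KEY="<source-key>"

# 1. Request a copy authorization from the target resource.
AUTH=$(curl -s -X POST "$TARGET_ENDPOINT/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31" \
  -H "Ocp-Apim-Subscription-Key: $TARGET_KEY" \
  -H "Content-Type: application/json" \
  -d '{"modelId": "my-neural-model-copy"}')

# 2. Ask the source resource to copy the model to the authorized target.
curl -X POST "$SOURCE_ENDPOINT/formrecognizer/documentModels/my-neural-model:copyTo?api-version=2022-08-31" \
  -H "Ocp-Apim-Subscription-Key: $SOURCE_KEY" \
  -H "Content-Type: application/json" \
  -d "$AUTH"
```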
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
## September 2022
+### Region expansion for training custom neural models
+
+Training custom neural models is now supported in six additional regions.
+* Australia East
+* Central US
+* East Asia
+* France Central
+* UK South
+* West US2
+
+For a complete list of regions where training is supported, see [custom neural models](concept-custom-neural.md).
+ #### Form Recognizer SDK version 4.0.0 GA release * **Form Recognizer SDKs version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!**
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
To understand client requirements for TLS 1.2, see [TLS 1.2 for Azure Automation
### Python requirement
-Change Tracking and Inventory only supports Python2. If your machine is using a distro that doesn't include Python 2 by default then you must install it. The following sample commands will install Python 2 on different distros.
+Change Tracking and Inventory now supports both Python 2 and Python 3. If your machine uses a distro that doesn't include either version by default, you must install one of them. The following sample commands install Python 2 and Python 3 on different distros.
+> [!NOTE]
+> To use the OMS agent that is compatible with Python 3, ensure that you first uninstall Python 2; otherwise, the OMS agent continues to run with Python 2 by default.
+
+#### [Python 2](#tab/python-2)
- Red Hat, CentOS, Oracle: `yum install -y python2` - Ubuntu, Debian: `apt-get install -y python2` - SUSE: `zypper install -y python2`
+> [!NOTE]
+> The Python 2 executable must be aliased to *python*.
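One common way to satisfy this requirement on Debian- and Ubuntu-style systems is `update-alternatives`; this is an illustrative approach rather than guidance from the article, and other distros may prefer `alternatives(8)` or a plain symlink.

```bash
# Illustrative only: register python2 as the "python" command on Debian/Ubuntu-style systems.
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
```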
+
+#### [Python 3](#tab/python-3)
+
+- Red Hat, CentOS, Oracle: `yum install -y python3`
+- Ubuntu, Debian: `apt-get install -y python3`
+- SUSE: `zypper install -y python3`
-The python2 executable must be aliased to *python*.
+
## Network requirements
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Title: Configure data persistence - Premium Azure Cache for Redis description: Learn how to configure and manage data persistence your Premium tier Azure Cache for Redis instances + Previously updated : 05/17/2022 Last updated : 09/19/2022 # Configure data persistence for a Premium Azure Cache for Redis instance
Last updated 05/17/2022
> [!IMPORTANT] >
-> Check to see if your storage account has soft delete enabled before using hte data persistence feature. Using data persistence with soft delete will cause very high storage costs. For more information, see For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete)
->
+> Check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete).
+>
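One quick way to check whether blob soft delete is currently enabled on the storage account you plan to use is the Azure CLI; the account and resource group names below are placeholders.

```bash
# Check whether blob soft delete is enabled on the storage account used for persistence.
az storage account blob-service-properties show \
  --account-name mystorageaccount \
  --resource-group my-resource-group \
  --query deleteRetentionPolicy.enabled
```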
Azure Cache for Redis offers Redis persistence using the Redis database (RDB) and Append only File (AOF):
azure-fluid-relay Container Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/container-deletion.md
In this scenario, we will be deleting an existing Fluid container. Once a contai
## Requirements to delete a Fluid container - To get started, you need to install [Azure CLI](/cli/azure/install-azure-cli). If you already have Azure CLI installed, please ensure your version is 2.0.67 or greater by running `az version`.-- In order to delete a Fluid container, you must ensure your application and its clients are no longer connected to the container.
+- In order to delete a Fluid container, you must ensure that your application and its clients have been disconnected from the container for more than 10 minutes.
## List the containers within a Fluid Relay resource To see all of the containers belonging to your Fluid Relay resource, you can run the following command:
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
The Log Analytics agent for Linux is composed of multiple packages. The release
**Package** | **Version** | **Description** -- | -- | --
-omsagent | 1.13.9 | The Log Analytics Agent for Linux
+omsagent | 1.14.19 | The Log Analytics Agent for Linux
omsconfig | 1.1.1 | Configuration agent for the Log Analytics agent
-omi | 1.6.4 | Open Management Infrastructure (OMI) -- a lightweight CIM Server. *Note that OMI requires root access to run a cron job necessary for the functioning of the service*
-scx | 1.6.4 | OMI CIM Providers for operating system performance metrics
+omi | 1.6.9 | Open Management Infrastructure (OMI) -- a lightweight CIM Server. *Note that OMI requires root access to run a cron job necessary for the functioning of the service*
+scx | 1.6.9 | OMI CIM Providers for operating system performance metrics
apache-cimprov | 1.0.1 | Apache HTTP Server performance monitoring provider for OMI. Only installed if Apache HTTP Server is detected. mysql-cimprov | 1.0.1 | MySQL Server performance monitoring provider for OMI. Only installed if MySQL/MariaDB server is detected. docker-cimprov | 1.0.0 | Docker provider for OMI. Only installed if Docker is detected.
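If you need to confirm which of these packages, and which versions, are installed on a given machine, the system package manager can report that; the package names below follow the table above.

```bash
# Debian/Ubuntu
dpkg -l omsagent omsconfig scx omi

# RHEL, CentOS, Oracle Linux, SUSE
rpm -q omsagent omsconfig scx omi
```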
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 8/22/2022 Last updated : 9/15/2022
We strongly recommended to update to the latest version at all times, or opt in
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
+| August 2022 | <ul><li>Improved resiliency: Default lookback (retry) time updated to last 3 days (72 hours) up from 60 minutes, for agent to collect data post interruption. This is subject to the default offline cache size of 10 GB</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. All events are now additionally stamped with agent (local) upload time</li><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> | 1.8.0.0 | Coming soon |
| July 2022 | Fix for mismatch event timestamps for Sentinel Windows Event Forwarding | 1.7.0.0 | None | | June 2022 | Bugfixes with user assigned identity support, and reliability improvements | 1.6.0.0 | None | | May 2022 | <ul><li>Fixed issue where agent stops functioning due to faulty XPath query. With this version, only query related Windows events will fail, other data types will continue to be collected</li><li>Collection of Windows network troubleshooting logs added to 'CollectAMAlogs.ps1' tool</li><li>Linux support for Debian 11 distro</li><li>Fixed issue to list mount paths instead of device names for Linux disk metrics</li></ul> | 1.5.0.0 | 1.21.0 |
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
# Application Insights for ASP.NET Core applications
-This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application.
+This article describes how to enable and configure Application Insights for an [ASP.NET Core](/aspnet/core) application.
Application Insights can collect the following telemetry from your ASP.NET Core application:
Application Insights can collect the following telemetry from your ASP.NET Core
> * Heartbeats > * Logs
-We'll use an [MVC application](/aspnet/core/tutorials/first-mvc-app) example that targets `netcoreapp3.0`. You can apply these instructions to all ASP.NET Core applications. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the instructions from [here](./worker-service.md).
+We'll use an [MVC application](/aspnet/core/tutorials/first-mvc-app) example. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the instructions from [here](./worker-service.md).
> [!NOTE] > A preview [OpenTelemetry-based .NET offering](opentelemetry-enable.md?tabs=net) is available. [Learn more](opentelemetry-overview.md).
We'll use an [MVC application](/aspnet/core/tutorials/first-mvc-app) example tha
## Supported scenarios
-The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. Support covers the following scenarios:
+The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported and covers the following scenarios:
+ * **Operating system**: Windows, Linux, or Mac * **Hosting method**: In process or out of process * **Deployment method**: Framework dependent or self-contained
The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Micro
* **IDE**: Visual Studio, Visual Studio Code, or command line > [!NOTE]
-> ASP.NET Core 3.1 requires [Application Insights 2.8.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.8.0) or later.
+> - ASP.NET Core 6.0 requires [Application Insights 2.19.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.18.0) or later
+> - ASP.NET Core 3.1 requires [Application Insights 2.8.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.8.0) or later
## Prerequisites
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
1. Open your project in Visual Studio.
- > [!TIP]
- > To track all the changes that Application Insights makes, you can set up source control for your project. To set it up, select **File** > **Add to Source Control**.
-
-2. Select **Project** > **Add Application Insights Telemetry**.
+2. Go to **Project** > **Add Application Insights Telemetry**.
-3. Select **Get Started**. Depending on your version of Visual Studio, the name of this button might vary. In some earlier versions, it's named the **Start Free** button.
+3. Choose **Azure Application Insights**, then select **Next**.
-4. Select your subscription, and then select **Resource** > **Register**.
+4. Choose your subscription and Application Insights instance (or create a new instance with **Create new**), then select **Next**.
-5. After you add Application Insights to your project, check to confirm that you're using the latest stable release of the SDK. Go to **Project** > **Manage NuGet Packages** > **Microsoft.ApplicationInsights.AspNetCore**. If you need to, select **Update**.
+5. Add or confirm your Application Insights connection string (this should be prepopulated based on your selection in the previous step), then select **Finish**.
- ![Screenshot showing where to select the Application Insights package for update](./media/asp-net-core/update-nuget-package.png)
+6. After you add Application Insights to your project, check to confirm that you're using the latest stable release of the SDK. Go to **Project** > **Manage NuGet Packages...** > **Microsoft.ApplicationInsights.AspNetCore**. If you need to, select **Update**.
-6. If you added your project to source control, go to **View** > **Team Explorer** > **Changes**. You can select each file to see a diff view of the changes made by Application Insights telemetry.
+ :::image type="content" source="./media/asp-net-core/update-nuget-package.png" alt-text="Screenshot showing where to select the Application Insights package for update.":::
## Enable Application Insights server-side telemetry (no Visual Studio)
-1. Install the [Application Insights SDK NuGet package for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore). We recommend that you always use the latest stable version. Find full release notes for the SDK on the [open-source GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/releases).
+1. Install the [Application Insights SDK NuGet package for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore)
+
+ We recommend that you always use the latest stable version. Find full release notes for the SDK on the [open-source GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/releases).
The following code sample shows the changes to be added to your project's `.csproj` file. ```xml
- <ItemGroup>
- <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.16.0" />
- </ItemGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.21.0" />
+ </ItemGroup>
```
-2. Add `services.AddApplicationInsightsTelemetry();` to the `ConfigureServices()` method in your `Startup` class, as in this example:
+2. Add `AddApplicationInsightsTelemetry()` to your `Startup.cs` or `Program.cs` file (depending on your .NET Core version)
+ ### [ASP.NET Core 6.0](#tab/netcore6)
+
+ Add `builder.Services.AddApplicationInsightsTelemetry();` after the `WebApplication.CreateBuilder()` method in your `Program` class, as in this example:
+
```csharp
- // This method gets called by the runtime. Use this method to add services to the container.
- public void ConfigureServices(IServiceCollection services)
- {
- // The following line enables Application Insights telemetry collection.
- services.AddApplicationInsightsTelemetry();
+ // Create the WebApplication builder.
+ var builder = WebApplication.CreateBuilder(args);
- // This code adds other services for your application.
- services.AddMvc();
- }
+ // The following line enables Application Insights telemetry collection.
+ builder.Services.AddApplicationInsightsTelemetry();
+
+ // This code adds other services for your application.
+ builder.Services.AddMvc();
+
+ var app = builder.Build();
```
+
+ ### [ASP.NET Core 3.1](#tab/netcore3)
+
+ Add `services.AddApplicationInsightsTelemetry();` to the `ConfigureServices()` method in your `Startup` class, as in this example:
+
+ ```csharp
+ // This method gets called by the runtime. Use this method to add services to the container.
+ public void ConfigureServices(IServiceCollection services)
+ {
+ // The following line enables Application Insights telemetry collection.
+ services.AddApplicationInsightsTelemetry();
-3. Set up the connection string.
+ // This code adds other services for your application.
+ services.AddMvc();
+ }
+ ```
+
+
+
+3. Set up the connection string
Although you can provide a connection string as part of the `ApplicationInsightsServiceOptions` argument to AddApplicationInsightsTelemetry, we recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing. ```json
- {
- "ApplicationInsights": {
- "ConnectionString" : "Copy connection string from Application Insights Resource Overview"
- },
- "Logging": {
- "LogLevel": {
- "Default": "Warning"
- }
- }
+ {
+ "Logging": {
+ "LogLevel": {
+ "Default": "Information",
+ "Microsoft.AspNetCore": "Warning"
}
+ },
+ "AllowedHosts": "*",
+ "ApplicationInsights": {
+ "ConnectionString": "Copy connection string from Application Insights Resource Overview"
+ }
+ }
``` Alternatively, specify the connection string in the "APPLICATIONINSIGHTS_CONNECTION_STRING" environment variable or "ApplicationInsights:ConnectionString" in the JSON configuration file.-
+
For example:-
+
* `SET ApplicationInsights:ConnectionString = <Copy connection string from Application Insights Resource Overview>`-
+
* `SET APPLICATIONINSIGHTS_CONNECTION_STRING = <Copy connection string from Application Insights Resource Overview>`-
+
* Typically, `APPLICATIONINSIGHTS_CONNECTION_STRING` is used in [Azure Web Apps](./azure-web-apps.md?tabs=net), but it can also be used in all places where this SDK is supported.-
+
> [!NOTE] > A connection string specified in code wins over the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, which wins over other options.
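For illustration, here's a minimal sketch (ASP.NET Core 6.0 style) of specifying the connection string in code through `ApplicationInsightsServiceOptions`; the value shown is only a placeholder:

```csharp
var builder = WebApplication.CreateBuilder(args);

var aiOptions = new Microsoft.ApplicationInsights.AspNetCore.Extensions.ApplicationInsightsServiceOptions
{
    // Placeholder value; copy the real connection string from the Application Insights resource overview.
    ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000"
};

builder.Services.AddApplicationInsightsTelemetry(aiOptions);

var app = builder.Build();
```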
The preceding steps are enough to help you start collecting server-side telemetr
1. In `_ViewImports.cshtml`, add injection:
- ```cshtml
- @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet
- ```
+ ```cshtml
+ @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet
+ ```
2. In `_Layout.cshtml`, insert `HtmlHelper` at the end of the `<head>` section but before any other script. If you want to report any custom JavaScript telemetry from the page, inject it after this snippet:
- ```cshtml
- @Html.Raw(JavaScriptSnippet.FullScript)
- </head>
- ```
+ ```cshtml
+ @Html.Raw(JavaScriptSnippet.FullScript)
+ </head>
+ ```
As an alternative to using the `FullScript`, the `ScriptBody` is available starting in Application Insights SDK for ASP.NET Core version 2.14. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy: ```cshtml
- <script> // apply custom changes to this script tag.
- @Html.Raw(JavaScriptSnippet.ScriptBody)
- </script>
+<script> // apply custom changes to this script tag.
+ @Html.Raw(JavaScriptSnippet.ScriptBody)
+</script>
``` The `.cshtml` file names referenced earlier are from a default MVC application template. Ultimately, if you want to properly enable client-side monitoring for your application, the JavaScript snippet must appear in the `<head>` section of each page of your application that you want to monitor. Add the JavaScript snippet to `_Layout.cshtml` in an application template to enable client-side monitoring.
You can customize the Application Insights SDK for ASP.NET Core to change the de
You can modify a few common settings by passing `ApplicationInsightsServiceOptions` to `AddApplicationInsightsTelemetry`, as in this example:
+### [ASP.NET Core 6.0](#tab/netcore6)
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+
+var aiOptions = new Microsoft.ApplicationInsights.AspNetCore.Extensions.ApplicationInsightsServiceOptions();
+
+// Disables adaptive sampling.
+aiOptions.EnableAdaptiveSampling = false;
+
+// Disables QuickPulse (Live Metrics stream).
+aiOptions.EnableQuickPulseMetricStream = false;
+
+builder.Services.AddApplicationInsightsTelemetry(aiOptions);
+var app = builder.Build();
+```
+
+### [ASP.NET Core 3.1](#tab/netcore3)
+ ```csharp public void ConfigureServices(IServiceCollection services) {
public void ConfigureServices(IServiceCollection services)
} ``` ++ This table has the full list of `ApplicationInsightsServiceOptions` settings: |Setting | Description | Default
In Microsoft.ApplicationInsights.AspNetCore SDK version [2.15.0](https://www.nug
} ```
-If `services.AddApplicationInsightsTelemetry(aiOptions)` is used, it overrides the settings from `Microsoft.Extensions.Configuration.IConfiguration`.
+If `builder.Services.AddApplicationInsightsTelemetry(aiOptions)` for ASP.NET Core 6.0 or `services.AddApplicationInsightsTelemetry(aiOptions)` for ASP.NET Core 3.1 and earlier is used, it overrides the settings from `Microsoft.Extensions.Configuration.IConfiguration`.
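For reference, here's a minimal sketch of the configuration-based alternative, assuming SDK version 2.15.0 or later, which reads `ApplicationInsightsServiceOptions` values from the `ApplicationInsights` section of `appsettings.json`:

```json
{
  "ApplicationInsights": {
    "EnableAdaptiveSampling": false,
    "EnableQuickPulseMetricStream": false
  }
}
```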
### Sampling
When you want to enrich telemetry with additional information, use [telemetry in
Add any new `TelemetryInitializer` to the `DependencyInjection` container as shown in the following code. The SDK automatically picks up any `TelemetryInitializer` that's added to the `DependencyInjection` container.
+### [ASP.NET Core 6.0](#tab/netcore6)
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();
+
+var app = builder.Build();
+```
+
+> [!NOTE]
+> `builder.Services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();` works for simple initializers. For others, the following is required: `builder.Services.AddSingleton(new MyCustomTelemetryInitializer() { fieldName = "myfieldName" });`
+
+### [ASP.NET Core 3.1](#tab/netcore3)
+ ```csharp public void ConfigureServices(IServiceCollection services) {
public void ConfigureServices(IServiceCollection services)
> [!NOTE] > `services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();` works for simple initializers. For others, the following is required: `services.AddSingleton(new MyCustomTelemetryInitializer() { fieldName = "myfieldName" });`++ ### Removing TelemetryInitializers By default, telemetry initializers are present. To remove all or specific telemetry initializers, use the following sample code *after* calling `AddApplicationInsightsTelemetry()`.
+### [ASP.NET Core 6.0](#tab/netcore6)
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.AddApplicationInsightsTelemetry();
+
+// Remove a specific built-in telemetry initializer
+var tiToRemove = builder.Services.FirstOrDefault<ServiceDescriptor>
+ (t => t.ImplementationType == typeof(AspNetCoreEnvironmentTelemetryInitializer));
+if (tiToRemove != null)
+{
+ builder.Services.Remove(tiToRemove);
+}
+
+// Remove all initializers
+// This requires a using directive for the Microsoft.Extensions.DependencyInjection.Extensions namespace.
+builder.Services.RemoveAll(typeof(ITelemetryInitializer));
+
+var app = builder.Build();
+```
+
+### [ASP.NET Core 3.1](#tab/netcore3)
+ ```csharp public void ConfigureServices(IServiceCollection services) {
public void ConfigureServices(IServiceCollection services)
} ``` ++ ### Adding telemetry processors You can add custom telemetry processors to `TelemetryConfiguration` by using the extension method `AddApplicationInsightsTelemetryProcessor` on `IServiceCollection`. You use telemetry processors in [advanced filtering scenarios](./api-filtering-sampling.md#itelemetryprocessor-and-itelemetryinitializer). Use the following example.
+### [ASP.NET Core 6.0](#tab/netcore6)
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+
+// ...
+builder.Services.AddApplicationInsightsTelemetry();
+builder.Services.AddApplicationInsightsTelemetryProcessor<MyFirstCustomTelemetryProcessor>();
+
+// If you have more processors:
+builder.Services.AddApplicationInsightsTelemetryProcessor<MySecondCustomTelemetryProcessor>();
+
+var app = builder.Build();
+```
+
+### [ASP.NET Core 3.1](#tab/netcore3)
+ ```csharp public void ConfigureServices(IServiceCollection services) {
public void ConfigureServices(IServiceCollection services)
} ``` ++ ### Configuring or removing default TelemetryModules Application Insights automatically collects telemetry about specific workloads without requiring manual tracking by the user.
By default, the following automatic-collection modules are enabled. These module
To configure any default `TelemetryModule`, use the extension method `ConfigureTelemetryModule<T>` on `IServiceCollection`, as shown in the following example.
+### [ASP.NET Core 6.0](#tab/netcore6)
+
+```csharp
+using Microsoft.ApplicationInsights.DependencyCollector;
+using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector;
+
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.AddApplicationInsightsTelemetry();
+
+// The following configures DependencyTrackingTelemetryModule.
+// Similarly, any other default modules can be configured.
+builder.Services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>((module, o) =>
+ {
+ module.EnableW3CHeadersInjection = true;
+ });
+
+// The following removes all default counters from EventCounterCollectionModule, and adds a single one.
+builder.Services.ConfigureTelemetryModule<EventCounterCollectionModule>((module, o) =>
+ {
+ module.Counters.Add(new EventCounterCollectionRequest("System.Runtime", "gen-0-size"));
+ });
+
+// The following removes PerformanceCollectorModule to disable perf-counter collection.
+// Similarly, any other default modules can be removed.
+var performanceCounterService = builder.Services.FirstOrDefault<ServiceDescriptor>(t => t.ImplementationType == typeof(PerformanceCollectorModule));
+if (performanceCounterService != null)
+{
+ builder.Services.Remove(performanceCounterService);
+}
+
+var app = builder.Build();
+```
+
+### [ASP.NET Core 3.1](#tab/netcore3)
+ ```csharp using Microsoft.ApplicationInsights.DependencyCollector; using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector;
public void ConfigureServices(IServiceCollection services)
} ``` ++ In versions 2.12.2 and later, [`ApplicationInsightsServiceOptions`](#using-applicationinsightsserviceoptions) includes an easy option to disable any of the default modules. ### Configuring a telemetry channel The default [telemetry channel](./telemetry-channels.md) is `ServerTelemetryChannel`. The following example shows how to override it.
+### [ASP.NET Core 6.0](#tab/netcore6)
+ ```csharp using Microsoft.ApplicationInsights.Channel;
- public void ConfigureServices(IServiceCollection services)
- {
- // Use the following to replace the default channel with InMemoryChannel.
- // This can also be applied to ServerTelemetryChannel.
- services.AddSingleton(typeof(ITelemetryChannel), new InMemoryChannel() {MaxTelemetryBufferCapacity = 19898 });
+var builder = WebApplication.CreateBuilder(args);
- services.AddApplicationInsightsTelemetry();
- }
+// Use the following to replace the default channel with InMemoryChannel.
+// This can also be applied to ServerTelemetryChannel.
+builder.Services.AddSingleton(typeof(ITelemetryChannel), new InMemoryChannel() {MaxTelemetryBufferCapacity = 19898 });
+
+builder.Services.AddApplicationInsightsTelemetry();
+
+var app = builder.Build();
+```
+
+### [ASP.NET Core 3.1](#tab/netcore3)
+
+```csharp
+using Microsoft.ApplicationInsights.Channel;
+
+public void ConfigureServices(IServiceCollection services)
+{
+ // Use the following to replace the default channel with InMemoryChannel.
+ // This can also be applied to ServerTelemetryChannel.
+ services.AddSingleton(typeof(ITelemetryChannel), new InMemoryChannel() {MaxTelemetryBufferCapacity = 19898 });
+
+ services.AddApplicationInsightsTelemetry();
+}
``` ++ > [!NOTE] > See [Flushing data](api-custom-events-metrics.md#flushing-data) if you want to flush the buffer--for example, if you are using the SDK in an application that shuts down.
using Microsoft.ApplicationInsights.Channel;
If you want to disable telemetry conditionally and dynamically, you can resolve the `TelemetryConfiguration` instance with an ASP.NET Core dependency injection container anywhere in your code and set the `DisableTelemetry` flag on it.
+### [ASP.NET Core 6.0](#tab/netcore6)
+ ```csharp
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddApplicationInsightsTelemetry();
- }
+var builder = WebApplication.CreateBuilder(args);
- public void Configure(IApplicationBuilder app, IHostingEnvironment env, TelemetryConfiguration configuration)
- {
- configuration.DisableTelemetry = true;
- ...
- }
+builder.Services.AddApplicationInsightsTelemetry();
+
+// any custom configuration can be done here:
+builder.Services.Configure<TelemetryConfiguration>(x => x.DisableTelemetry = true);
+
+var app = builder.Build();
+```
+
+### [ASP.NET Core 3.1](#tab/netcore3)
+
+```csharp
+public void ConfigureServices(IServiceCollection services)
+{
+ services.AddApplicationInsightsTelemetry();
+}
+
+public void Configure(IApplicationBuilder app, IHostingEnvironment env, TelemetryConfiguration configuration)
+{
+ configuration.DisableTelemetry = true;
+ ...
+}
``` ++ The preceding code sample prevents the sending of telemetry to Application Insights. It doesn't prevent any automatic collection modules from collecting telemetry. If you want to remove a particular auto collection module, see [Remove the telemetry module](#configuring-or-removing-default-telemetrymodules). ## Frequently asked questions
Yes. Feature support for the SDK is the same in all platforms, with the followin
* The SDK collects [event counters](./eventcounters.md) on Linux because [performance counters](./performance-counters.md) are only supported in Windows. Most metrics are the same. * Although `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel:
- ```csharp
- using Microsoft.ApplicationInsights.Channel;
- using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
-
- public void ConfigureServices(IServiceCollection services)
- {
- // The following will configure the channel to use the given folder to temporarily
- // store telemetry items during network or Application Insights server issues.
- // User should ensure that the given folder already exists
- // and that the application has read/write permissions.
- services.AddSingleton(typeof(ITelemetryChannel),
- new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"});
- services.AddApplicationInsightsTelemetry();
- }
- ```
+### [ASP.NET Core 6.0](#tab/netcore6)
+
+```csharp
+using Microsoft.ApplicationInsights.Channel;
+using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
+
+var builder = WebApplication.CreateBuilder(args);
+
+// The following will configure the channel to use the given folder to temporarily
+// store telemetry items during network or Application Insights server issues.
+// User should ensure that the given folder already exists
+// and that the application has read/write permissions.
+builder.Services.AddSingleton(typeof(ITelemetryChannel),
+ new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"});
+builder.Services.AddApplicationInsightsTelemetry();
+
+var app = builder.Build();
+```
+
+### [ASP.NET Core 3.1](#tab/netcore3)
+
+```csharp
+using Microsoft.ApplicationInsights.Channel;
+using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
+
+public void ConfigureServices(IServiceCollection services)
+{
+ // The following will configure the channel to use the given folder to temporarily
+ // store telemetry items during network or Application Insights server issues.
+ // User should ensure that the given folder already exists
+ // and that the application has read/write permissions.
+ services.AddSingleton(typeof(ITelemetryChannel),
+ new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"});
+ services.AddApplicationInsightsTelemetry();
+}
+```
++ This limitation isn't applicable from version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.3.1.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.3.1/applicationinsights-agent-3.3.1.jar) file.
+Download the [applicationinsights-agent-3.4.0.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.0/applicationinsights-agent-3.4.0.jar) file.
> [!WARNING]
->
-> If you're upgrading from 3.2.x:
->
-> - Starting from 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
+>
+> If you are upgrading from an earlier 3.x version, note the following changes:
+>
+> Starting from 3.4.0:
+>
+> - Rate-limited sampling is now the default (if you have not configured a fixed percentage previously). By default, it will capture at most around 5 requests per second (along with their dependencies, traces and custom events). See [fixed-percentage sampling](./java-standalone-config.md#fixed-percentage-sampling) if you wish to revert to the previous behavior of capturing 100% of requests.
+>
+> Starting from 3.3.0:
+>
+> - `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
> - Exception records are no longer recorded for failed dependencies; they are only recorded for failed requests. >
-> If you're upgrading from 3.1.x:
+> Starting from 3.2.0:
>
-> - Starting from 3.2.0, controller "InProc" dependencies are not captured by default. For details on how to enable this, please see the [config options](./java-standalone-config.md#autocollect-inproc-dependencies-preview).
+> - Controller "InProc" dependencies are no longer captured by default. For details on how to re-enable these, please see the [config options](./java-standalone-config.md#autocollect-inproc-dependencies-preview).
> - Database dependency names are now more concise with the full (sanitized) query still present in the `data` field. HTTP dependency names are now more descriptive. > This change can affect custom dashboards or alerts if they relied on the previous values. > For details, see the [3.2.0 release notes](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0). >
-> If you're upgrading from 3.0.x:
+> Starting from 3.1.0:
> > - The operation names and request telemetry names are now prefixed by the HTTP method, such as `GET` and `POST`. > This change can affect custom dashboards or alerts if they relied on the previous values.
Download the [applicationinsights-agent-3.3.1.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` to your application's JVM args.
+Add `-javaagent:"path/to/applicationinsights-agent-3.4.0.jar"` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Add `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` to your applicati
APPLICATIONINSIGHTS_CONNECTION_STRING=<Copy connection string from Application Insights Resource Overview> ```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.3.1.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.0.jar` with the following content:
```json {
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
You can enable the Azure Monitor Application Insights agent for Java by adding a
### Usual case
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.0.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.3.1.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.0.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.0.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.3.1.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.0.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.0.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.3.1.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.0.jar" -jar <myapp.jar>
``` ## Programmatic configuration
To use the programmatic configuration and attach the Application Insights agent
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.3.1</version>
+ <version>3.4.0</version>
</dependency> ```
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Read the Spring Boot documentation [here](../app/java-in-process-agent.md).
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.3.1.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.0.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.3.1.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.3.1.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.0.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.3.1.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.0.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.3.1.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.0.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.3.1.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.0.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.3.1.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.0.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec--javaagent:path/to/applicationinsights-agent-3.3.1.jar
+-javaagent:path/to/applicationinsights-agent-3.4.0.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.3.1.jar>
+ -javaagent:path/to/applicationinsights-agent-3.4.0.jar
</jvm-options> ... </java-config>
Java and Process Management > Process definition > Java Virtual Machine
``` In "Generic JVM arguments" add the following JVM argument: ```--javaagent:path/to/applicationinsights-agent-3.3.1.jar
+-javaagent:path/to/applicationinsights-agent-3.4.0.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.3.1.jar
+-javaagent:path/to/applicationinsights-agent-3.4.0.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.3.1.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.0.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.3.1.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.0.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
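For example, a minimal sketch of configuration content that could be passed through that variable (the values shown are placeholders; when set as an environment variable the JSON is usually supplied on a single line):

```json
{
  "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
  "sampling": {
    "requestsPerSecond": 5
  }
}
```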
You can also set the connection string using the environment variable `APPLICATI
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.3.1.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.0.jar` is located.
```json {
Furthermore, sampling is trace ID based, to help ensure consistent sampling deci
### Rate-Limited Sampling
-Starting from 3.4.0-BETA, rate-limited sampling is available, and is now the default.
+Starting from 3.4.0, rate-limited sampling is available, and is now the default.
If no sampling has been configured, the default is now rate-limited sampling configured to capture at most
-(approximately) 5 requests per second. This replaces the prior default which was to capture all requests.
+(approximately) 5 requests per second, along with all the dependencies and logs on those requests.
+
+This replaces the prior default which was to capture all requests.
If you still wish to capture all requests, use [fixed-percentage sampling](#fixed-percentage-sampling) and set the sampling percentage to 100.
Here is an example how to set the sampling to capture at most (approximately) 1
```json { "sampling": {
- "limitPerSecond": 1.0
+ "requestsPerSecond": 1.0
} } ```
-Note that `limitPerSecond` can be a decimal, so you can configure it to capture less than one request per second if you
-wish.
+Note that `requestsPerSecond` can be a decimal, so you can configure it to capture less than one request per second if you wish.
+For example, a value of `0.5` means capture at most 1 request every 2 seconds.
-You can also set the sampling percentage using the environment variable `APPLICATIONINSIGHTS_SAMPLING_LIMIT_PER_SECOND`
+You can also set the sampling percentage using the environment variable `APPLICATIONINSIGHTS_SAMPLING_REQUESTS_PER_SECOND`
(which will then take precedence over rate limit specified in the json configuration). ### Fixed-Percentage Sampling
Starting from version 3.2.0, if you want to set a custom dimension programmatica
## Connection string overrides (preview)
-This feature is in preview, starting from 3.4.0-BETA.
+This feature is in preview, starting from 3.4.0.
Connection string overrides allow you to override the [default connection string](#connection-string), for example: * Set one connection string for one http path prefix `/myapp1`.
These are the valid `level` values that you can specify in the `applicationinsig
> | project timestamp, message, itemType > ``` +
+### Code properties for Logback (preview)
+
+You can enable code properties (_FileName_, _ClassName_, _MethodName_, _LineNumber_) for Logback:
+
+```json
+{
+ "preview": {
+ "captureLogbackCodeAttributes": true
+ }
+}
+```
+
+> [!WARNING]
+>
+> This feature could add a performance overhead.
+
+This feature is in preview, starting from 3.4.0.
+ ### LoggingLevel Starting from version 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field.
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
Literal values in JDBC queries are masked by default in order to avoid accidentally capturing sensitive data.
-Starting from 3.4.0-BETA, this behavior can be disabled if desired, e.g.
+Starting from 3.4.0, this behavior can be disabled if desired, e.g.
```json {
Starting from 3.4.0-BETA, this behavior can be disabled if desired, e.g.
} ```
+## Mongo query masking
+
+Literal values in Mongo queries are masked by default in order to avoid accidentally capturing sensitive data.
+
+Starting from 3.4.0, this behavior can be disabled if desired, e.g.
+
+```json
+{
+ "instrumentation": {
+ "mongo": {
+ "masking": {
+ "enabled": false
+ }
+ }
+ }
+}
+```
+ ## HTTP headers Starting from version 3.3.0, you can capture request and response headers on your server (request) telemetry:
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.3.1.jar` is located.
+`applicationinsights-agent-3.4.0.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
+
+ Title: Java Profiler for Azure Monitor Application Insights
+description: How to configure the Azure Monitor Application Insights for Java Profiler
+ Last updated : 07/19/2022
+ms.devlang: java
+++
+# Java Profiler for Azure Monitor Application Insights
+
+> [!NOTE]
+> The Java Profiler feature is in preview, starting from 3.4.0.
+
+The Application Insights Java Profiler provides a system for:
+
+> [!div class="checklist"]
+> - Generating JDK Flight Recorder (JFR) profiles on demand from the Java Virtual Machine (JVM).
+> - Generating JFR profiles automatically when certain trigger conditions are met from JVM, such as CPU or memory breaching a configured threshold.
+
+## Overview
+
+The Application Insights Java profiler uses the JFR profiler provided by the JVM to record profiling data, allowing users to download the JFR recordings at a later time and analyze them to identify the cause of performance issues.
+
+This data is gathered on demand when trigger conditions are met. The available triggers are thresholds over CPU usage and Memory consumption.
+
+When a threshold is reached, a profile of the configured type and duration is gathered and uploaded. This profile is then visible within the performance blade of the associated Application Insights Portal UI.
+
+> [!WARNING]
+> By default, the JFR profiler executes the "profile-without-env-data" profile. A JFR file is a series of events emitted by the JVM. The "profile-without-env-data" configuration is similar to the "profile" configuration that ships with the JVM, but some events have been disabled because they have the potential to contain sensitive deployment information, such as environment variables, arguments provided to the JVM, and processes running on the system.
+
+The flags that have been disabled are:
+
+- jdk.JVMInformation
+- jdk.InitialSystemProperty
+- jdk.OSInformation
+- jdk.InitialEnvironmentVariable
+- jdk.SystemProcess
+
+However, you should review all enabled flags to ensure that profiles don't contain sensitive data.
+
+See [Configuring Profile Contents](#configuring-profile-contents) for information on setting a custom profiler configuration.
+
+## Prerequisites
+
+- JVM with Java Flight Recorder (JFR) capability
+ - Java 8 update 262+
+ - Java 11+
+
+> [!WARNING]
+> OpenJ9 JVM is not supported
+
+## Usage
+
+### Triggers
+
+For more detailed description of the various triggers available, see [profiler overview](../profiler/profiler-overview.md).
+
+The Application Insights Java agent monitors CPU and memory consumption and, if either breaches a configured threshold, triggers a profile. Both thresholds are expressed as a percentage.
+
+#### Profile now
+
+Within the profiler user interface (see [profiler settings](../profiler/profiler-settings.md)) there's a **Profile now** button. Selecting this button will immediately request a profile in all agents that are attached to the Application Insights instance.
+
+#### CPU
+
+CPU threshold is a percentage of the usage of all available cores on the system.
+
+As an example, if one core of an eight-core machine were saturated, the CPU percentage would be considered 12.5%.
+
+#### Memory
+
+Memory percentage is the current Tenured memory region (OldGen) occupancy against the maximum possible size of the region.
+
+Occupancy is evaluated after a tenured collection has been performed. The maximum size of the tenured region is the size it would be if the JVM's heap grew to its maximum size.
+
+For instance, take the following scenario:
+
+- The Java heap could grow to a maximum of 1024 MB.
+- The Tenured Generation could grow to 90% of the heap.
+- Therefore, the maximum possible size of the tenured region would be 922 MB.
+- Your threshold was set via the user interface to 75%, so your threshold would be 75% of 922 MB, that is, 691 MB.
+
+In this scenario, a profile will occur in the following circumstances:
+
+- Full garbage collection is executed
+- The Tenured region's occupancy is above 691 MB after collection
+
+### Installation
+
+The following steps will guide you through enabling the profiling component on the agent and configuring resource limits that will trigger a profile if breached.
+
+1. Configure the resource thresholds that will cause a profile to be collected:
+
+ 1. Browse to the Performance -> Profiler section of the Application Insights instance.
+ :::image type="content" source="./media/java-standalone-profiler/performance-blade.png" alt-text="Screenshot of the link to open performance blade." lightbox="media/java-standalone-profiler/performance-blade.png":::
+ :::image type="content" source="./media/java-standalone-profiler/profiler-button.png" alt-text="Screenshot of the Profiler button from the Performance blade." lightbox="media/java-standalone-profiler/profiler-button.png":::
+
+ 2. Select "Triggers"
+
+ 3. Configure the required CPU and Memory thresholds and select Apply.
+ :::image type="content" source="./media/java-standalone-profiler/cpu-memory-trigger-settings.png" alt-text="Screenshot of trigger settings pane for CPU and Memory triggers.":::
+
+1. Inside the `applicationinsights.json` configuration of your process, enable profiler with the `preview.profiler.enabled` setting:
+ ```json
+ {
+ "connectionString" : "...",
+ "preview" : {
+ "profiler" : {
+ "enabled" : true
+ }
+ }
+ }
+ ```
+ Alternatively, set the `APPLICATIONINSIGHTS_PROFILER_ENABLED` environment variable to true.
+
+1. Restart your process with the updated configuration.
+
+> [!WARNING]
+> The Java profiler does not support the "Sampling" trigger. Configuring this will have no effect.
+
+After these steps have been completed, the agent will monitor the resource usage of your process and trigger a profile when the threshold is exceeded. When a profile has been triggered and completed, it will be viewable from the
+Application Insights instance within the Performance -> Profiler section. From that screen the profile can be downloaded; once downloaded, the JFR recording file can be opened and analyzed within a tool of your choosing, for example JDK Mission Control (JMC).
++
+### Configuration
+
+Configuration of the profiler triggering settings, such as thresholds and profiling periods, is done within the Application Insights UI under Performance > Profiler > Triggers, as described in [Installation](#installation).
+
+Additionally, many parameters can be configured using environment variables and the `applicationinsights.json` configuration file.
+
+#### Configuring Profile Contents
+
+If you wish to provide a custom profile configuration, alter the `memoryTriggeredSettings` and `cpuTriggeredSettings` settings to provide the path to a `.jfc` file with your required configuration.
+
+Profiles can be generated and edited in the JDK Mission Control (JMC) user interface under the `Window->Flight Recording Template Manager` menu, and control over individual flags is found under `Edit->Advanced` in that user interface.
+
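+For example, a minimal sketch of pointing both the CPU- and memory-triggered profiles at a custom template, assuming the template has been saved at `/tmp/myconfig.jfc` (an illustrative path):
+
+```json
+{
+  "preview": {
+    "profiler": {
+      "enabled": true,
+      "cpuTriggeredSettings": "/tmp/myconfig.jfc",
+      "memoryTriggeredSettings": "/tmp/myconfig.jfc"
+    }
+  }
+}
+```
+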
+### Environment variables
+
+- `APPLICATIONINSIGHTS_PROFILER_ENABLED`: boolean (default: `false`)
+ Enables/disables the profiling feature.
+
+### Configuration file
+
+Example configuration:
+
+```json
+{
+ "preview": {
+ "profiler": {
+ "enabled": true,
+ "cpuTriggeredSettings": "profile-without-env-data",
+ "memoryTriggeredSettings": "profile-without-env-data",
+ "manualTriggeredSettings": "profile-without-env-data"
+ }
+ }
+}
+
+```
+
+`memoryTriggeredSettings` This configuration will be used if a memory profile is requested. This value can be one of:
+
+- `profile-without-env-data` (default value). A profile with certain sensitive events disabled, see Warning section above for details.
+- `profile`. Uses the `profile.jfc` configuration that ships with JFR.
+- A path to a custom jfc configuration file on the file system, for example `/tmp/myconfig.jfc`.
+
+`cpuTriggeredSettings` This configuration will be used if a cpu profile is requested.
+This value can be one of:
+
+- `profile-without-env-data` (default value). A profile with certain sensitive events disabled, see Warning section above for details.
+- `profile`. Uses the `profile.jfc` configuration that ships with JFR.
+- A path to a custom jfc configuration file on the file system, for example `/tmp/myconfig.jfc`.
+
+`manualTriggeredSettings` This configuration will be used if a manual profile is requested.
+This value can be one of:
+
+- `profile-without-env-data` (default value). A profile with certain sensitive events disabled, see
+ Warning section above for details.
+- `profile`. Uses the `profile.jfc` configuration that ships with JFR.
+- A path to a custom jfc configuration file on the file system, for example `/tmp/myconfig.jfc`.
+
+## Frequently asked questions
+
+### What is Azure Monitor Application Insights Java Profiling?
+Azure Monitor Application Insights Java profiler uses Java Flight Recorder (JFR) to profile your application using a customized configuration.
+
+### What is Java Flight Recorder (JFR)?
+Java Flight Recorder is a tool for collecting profiling data of a running Java application. It's integrated into the Java Virtual Machine (JVM) and is used for troubleshooting performance issues. Learn more about [Java SE JFR Runtime](https://docs.oracle.com/javacomponents/jmc-5-4/jfr-runtime-guide/about.htm#JFRUH170).
+
+### What is the price and/or licensing fee implications for enabling App Insights Java Profiling?
+Java Profiling enablement is a free feature with Application Insights. [Azure Monitor Application Insights pricing](https://azure.microsoft.com/pricing/details/monitor/) is based on ingestion cost.
+
+### Which Java profiling information is collected?
+Profiling data collected by the JFR includes: method and execution profiling data, garbage collection data, and lock profiles.
+
+### How can I use App Insights Java Profiling and visualize the data?
+JFR recording can be viewed and analyzed with your preferred tool, for example [Java Mission Control (JMC)](https://jdk.java.net/jmc/8/).
+
+### Are performance diagnosis and fix recommendations provided with App Insights Java Profiling?
+'Performance diagnostics and recommendations' is a new feature that will be available as Application Insights Java Diagnostics. You may [sign up](https://aka.ms/JavaO11y) to preview this feature. JFR recording can be viewed with Java Mission Control (JMC).
+
+### What's the difference between on-demand and automatic Java Profiling in App Insights?
+
+On-demand profiling is triggered by the user in real time, whereas automatic profiling uses preconfigured triggers.
+
+Use [Profile Now](../profiler/profiler-settings.md) for the on-demand profiling option. [Profile Now](../profiler/profiler-settings.md) will immediately profile all agents attached to the Application Insights instance.
+
+Automated profiling is triggered by a breach of a resource threshold.
+
+### Which Java profiling triggers can I configure?
+Application Insights Java Agent currently supports monitoring of CPU and memory consumption. CPU threshold is configured as a percentage of all available cores on the machine. Memory is the current Tenured memory region (OldGen) occupancy against the maximum possible size of the region.
+
+### What are the required prerequisites to enable Java Profiling?
+
+Review the [Prerequisites](#prerequisites) at the top of this article.
+
+### Can I use Java Profiling for microservices application?
+
+Yes, you can profile a JVM running microservices using the JFR.
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
To begin, create a configuration file named *applicationinsights.json*. Save it
"sampling": { "overrides": [ {
+ "telemetryKind": "request",
"attributes": [ ... ], "percentage": 0 }, {
+ "telemetryKind": "request",
"attributes": [ ... ],
To begin, create a configuration file named *applicationinsights.json*. Save it
} ```
-> [!NOTE]
-> Starting from 3.4.0-BETA, `telemetryKind` of `request`, `dependency`, `trace` (log), or `exception` is supported
-> (and should be set) on all sampling overrides, e.g.
-> ```json
-> {
-> "connectionString": "...",
-> "sampling": {
-> "percentage": 10
-> },
-> "preview": {
-> "sampling": {
-> "overrides": [
-> {
-> "telemetryKind": "request",
-> "attributes": [
-> ...
-> ],
-> "percentage": 0
-> },
-> {
-> "telemetryKind": "request",
-> "attributes": [
-> ...
-> ],
-> "percentage": 100
-> }
-> ]
-> }
-> }
-> }
-> ```
- ## How it works
-When a span is started, the attributes present on the span at that time are used to check if any of the sampling
+`telemetryKind` must be one of `request`, `dependency`, `trace` (log), or `exception`.
+
+When a span is started, the type of span and the attributes present on it at that time are used to check if any of the sampling
overrides match. Matches can be either `strict` or `regexp`. Regular expression matches are performed against the entire attribute value,
If no sampling overrides match:
[top-level sampling configuration](./java-standalone-config.md#sampling) is used. * If this is not the first span in the trace, then the parent sampling decision is used.
-> [!NOTE]
-> Starting from 3.4.0-BETA, sampling overrides do not apply to "standalone" telemetry by default. Standalone telemetry
-> is any telemetry that is not associated with a request, e.g. startup logs.
-> You can make a sampling override apply to standalone telemetry by including the attribute
-> `includingStandaloneTelemetry` in the sampling override, e.g.
-> ```json
-> {
-> "connectionString": "...",
-> "preview": {
-> "sampling": {
-> "overrides": [
-> {
-> "telemetryKind": "dependency",
-> "includingStandaloneTelemetry": true,
-> "attributes": [
-> ...
-> ],
-> "percentage": 0
-> }
-> ]
-> }
-> }
-> }
-> ```
- ## Example: Suppress collecting telemetry for health checks This will suppress collecting telemetry for all requests to `/health-checks`.
This will also suppress collecting any downstream spans (dependencies) that woul
"sampling": { "overrides": [ {
+ "telemetryKind": "request",
"attributes": [ { "key": "http.url",
This will suppress collecting telemetry for all `GET my-noisy-key` redis calls.
"sampling": { "overrides": [ {
+ "telemetryKind": "dependency",
"attributes": [ { "key": "db.system",
This will suppress collecting telemetry for all `GET my-noisy-key` redis calls.
} ```
-> [!NOTE]
-> Starting from 3.4.0-BETA, `telemetryKind` is supported (and recommended) on all sampling overrides, e.g.
- ## Example: Collect 100% of telemetry for an important request type This will collect 100% of telemetry for `/login`.
those will also be collected for all '/login' requests.
"sampling": { "overrides": [ {
+ "telemetryKind": "request",
"attributes": [ { "key": "http.url",
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
# Upgrading from Application Insights Java 2.x SDK
-If you're already using Application Insights Java 2.x SDK in your application, you can keep using it.
-The Application Insights Java 3.x agent will detect it,
-and capture and correlate any custom telemetry you're sending via the 2.x SDK,
-while suppressing any auto-collection performed by the 2.x SDK to prevent duplicate telemetry.
+There are typically no code changes when upgrading to 3.x. The 3.x SDK dependencies are just no-op API versions of the
+2.x SDK dependencies, but when used along with the 3.x Java agent, the 3.x Java agent provides the implementation
+for them, and your custom instrumentation will be correlated with all the new
+auto-instrumentation which is provided by the 3.x Java agent.
-If you were using Application Insights 2.x agent, you need to remove the `-javaagent:` JVM arg
-that was pointing to the 2.x agent.
+## Step 1: Update dependencies
-The rest of this document describes limitations and changes that you may encounter
-when upgrading from 2.x to 3.x, as well as some workarounds that you may find helpful.
+| 2.x dependency | Action | Remarks |
+|-|--||
+| `applicationinsights-core` | Update the version to `3.4.0` or later | |
+| `applicationinsights-web` | Update the version to `3.4.0` or later (see the Maven sketch after this table), and remove the Application Insights web filter from your `web.xml` file. | |
+| `applicationinsights-web-auto` | Replace with `3.4.0` or later of `applicationinsights-web` | |
+| `applicationinsights-logging-log4j1_2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 1.2 is auto-instrumented in the 3.x Java agent. |
+| `applicationinsights-logging-log4j2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 2 is auto-instrumented in the 3.x Java agent. |
+| `applicationinsights-logging-logback` | Remove the dependency and remove the Application Insights appender from your logback configuration. | No longer needed since Logback is auto-instrumented in the 3.x Java agent. |
+| `applicationinsights-spring-boot-starter` | Replace with `3.4.0` or later of `applicationinsights-web` | The cloud role name will no longer default to `spring.application.name`, see the [3.x configuration docs](./java-standalone-config.md#cloud-role-name) for configuring the cloud role name. |
+
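+As an illustration, assuming Maven (Gradle coordinates are analogous), the updated `applicationinsights-web` dependency referenced in the table would look like this:
+
+```xml
+<dependency>
+    <groupId>com.microsoft.azure</groupId>
+    <artifactId>applicationinsights-web</artifactId>
+    <version>3.4.0</version>
+</dependency>
+```
+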
+## Step 2: Add the 3.x Java agent
+
+Add the 3.x Java agent to your JVM command-line args, for example
+
+```
+-javaagent:path/to/applicationinsights-agent-3.4.0.jar
+```
+
+If you were using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the above.
+
+> [!Note]
+> If you were using the spring-boot-starter, there is an alternative to using the Java agent if you prefer. See [3.x Spring Boot](./java-spring-boot.md).
+## Step 3: Configure your Application Insights connection string
+See [configuring the connection string](./java-standalone-config.md#connection-string).
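+
+For example, a minimal `applicationinsights.json` sketch; the connection string shown is only a placeholder, so copy the real value from your Application Insights resource overview:
+
+```json
+{
+  "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000"
+}
+```
+
+Alternatively, set the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable.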
+
+## Additional notes
+
+The rest of this document describes limitations and changes that you may encounter
+when upgrading from 2.x to 3.x, as well as some workarounds that you may find helpful.
## TelemetryInitializers and TelemetryProcessors
This use case is supported in Application Insights Java 3.x using [Instrumentati
## Operation names
-In the Application Insights Java 2.x SDK, in some cases, the operation names contained the full path, e.g.
+In the Application Insights Java 2.x SDK, in some cases, the operation names contained the full path, for example
:::image type="content" source="media/java-ipa/upgrade-from-2x/operation-names-with-full-path.png" alt-text="Screenshot showing operation names with full path"::: Operation names in Application Insights Java 3.x have changed to generally provide a better aggregated view
-in the Application Insights Portal U/X, e.g.
+in the Application Insights Portal U/X, for example
:::image type="content" source="media/java-ipa/upgrade-from-2x/operation-names-parameterized.png" alt-text="Screenshot showing operation names parameterized":::
The snippet below configures 3 telemetry processors that combine to replicate th
The telemetry processors perform the following actions (in order): 1. The first telemetry processor is an attribute processor (has type `attribute`),
- which means it applies to all telemetry which has attributes
+ which means it applies to all telemetry that has attributes
(currently `requests` and `dependencies`, but soon also `traces`). It will match any telemetry that has attributes named `http.method` and `http.url`.
The telemetry processors perform the following actions (in order):
} } ```-
-## 2.x SDK logging appenders
-
-Application Insights Java 3.x [auto-collects logging](./java-standalone-config.md#auto-collected-logging)
-without the need for configuring any logging appenders.
-If you are using 2.x SDK logging appenders, those can be removed,
-as they will be suppressed by the Application Insights Java 3.x anyways.
-
-## 2.x SDK spring boot starter
-
-There is no Application Insights Java 3.x spring boot starter.
-3.x setup and configuration follows the same [simple steps](./java-in-process-agent.md#get-started)
-whether you are using spring boot or not.
-
-When upgrading from the Application Insights Java 2.x SDK spring boot starter,
-note that the cloud role name will no longer default to `spring.application.name`.
-See the [3.x configuration docs](./java-standalone-config.md#cloud-role-name)
-for setting the cloud role name in 3.x via json config or environment variable.
azure-resource-manager Bicep Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-object.md
The example returns:
} ```
-The items() function sorts the objects in the alphabetical order. For example, **item001** appears before **item002** in the outputs of the two preceding samples.
<a id="json"></a>
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/data-types.md
Title: Data types in Bicep description: Describes the data types that are available in Bicep Previously updated : 07/06/2022 Last updated : 09/16/2022 # Data types in Bicep
var environmentSettings = {
output accessorResult string = environmentSettings['dev'].name ``` + ## Strings In Bicep, strings are marked with singled quotes, and must be declared on a single line. All Unicode characters with code points between *0* and *10FFFF* are allowed.
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/outputs.md
Title: Outputs in Bicep description: Describes how to define output values in Bicep Previously updated : 02/20/2022 Last updated : 09/16/2022 # Outputs in Bicep
az deployment group show \
+## Object sorting in outputs
++ ## Next steps * To learn about the available properties for outputs, see [Understand the structure and syntax of Bicep](./file.md).
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/data-types.md
description: Describes the data types that are available in Azure Resource Manag
Previously updated : 06/27/2022 Last updated : 09/16/2022 # Data types in ARM templates
You can get a property from an object with dot notation.
} ``` + ## Strings Strings are marked with double quotes.
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/outputs.md
Title: Outputs in templates description: Describes how to define output values in an Azure Resource Manager template (ARM template). Previously updated : 01/19/2022 Last updated : 09/16/2022
az deployment group show \
+## Object sorting in outputs
++ ## Next steps * To learn about the available properties for outputs, see [Understand the structure and syntax of ARM templates](./syntax.md).
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-object.md
Title: Template functions - objects description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with objects. Previously updated : 05/09/2022 Last updated : 09/16/2022 # Object functions for ARM templates
The example returns:
} ```
-The items() function sorts the objects in the alphabetical order. For example, **item001** appears before **item002** in the outputs of the two preceding samples.
<a id="json"></a>
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
Last updated 09/14/2022
This article provides a comprehensive list of language support by service features in Azure Video Indexer. For the list and definitions of all the features, see [Overview](video-indexer-overview.md). > [!NOTE]
-> To make sure a language is supported by the Azure Video Indexer frontend (the website and widget), check [the frontend language support](#language-support-in-frontend-experiences) table below.
+> The list below contains the source languages for transcription that are supported by the Video Indexer API. Some languages are supported only through the
+> API and not through the Video Indexer website or widgets.
+>
+> To make sure a language is supported for search, transcription, or translation by the Azure Video Indexer website and widgets, see the [frontend language
+> support table](#language-support-in-frontend-experiences) further below.
## General language support
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
Azure Video Indexer (AVI) [REST API](https://api-portal.videoindexer.ai/api-deta
You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for the integration gives you better visibility on the health of your workflow and an easy way to debug it.
+> [!TIP]
+> If you are using a classic AVI account, see [Logic Apps connector with classic-based AVI accounts](logic-apps-connector-tutorial.md).
+
+## Get started with the Azure Video Indexer connectors
+ To help you get started quickly with the Azure Video Indexer connectors, the example in this article creates Logic App flows. The Logic App and Power Automate capabilities and their editors are almost identical; thus, the diagrams and explanations are applicable to both. The example in this article is based on the ARM AVI account. If you're working with a classic account, see [Logic App connectors with classic-based AVI accounts](logic-apps-connector-tutorial.md). The "upload and index your video automatically" scenario covered in this article is composed of two different flows that work together. The "two flow" approach is used to support async upload and indexing of larger files effectively.
The "upload and index your video automatically" scenario covered in this article
* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes. * The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
-> [!NOTE]
-> For details about the Azure Video Indexer REST ARM API and the request/response examples, see [API](https://aka.ms/avam-arm-api). For example, [Generate an Azure Video Indexer access token](/rest/api/videoindexer/generate/access-token?tabs=HTTP). Press **Try it** to get the correct values for your account.
->
-> If you are using a classic AVI account, see [Logic Apps connector with classic-based AVI accounts]( logic-apps-connector-tutorial.md).
+The logic apps that you create in this article contain one flow per app. The second section ("**Create a second flow - JSON extraction**") explains how to connect the two. The second flow stands alone and is triggered by the first one (the section with the callback URL).
## Prerequisites
The following image shows the first flow:
|-|-| |Location| Location of the associated Azure Video Indexer account.| | Account ID| Account ID of the associated Azure Video Indexer account. You can find the **Account ID** in the **Overview** page of your account in the Azure portal, or in the **Account settings** tab, on the left of the [Azure Video Indexer website](https://www.videoindexer.ai/).|
- |Access Token| Select **accessToken** from the **dynamic content** of the **Parse JSON** action.|
+ |Access Token| Use the `body('HTTP')['accessToken']` expression to extract the access token in the right format from the previous HTTP call.|
| Video Name| Select **List of Files Name** from the dynamic content of **When a blob is added or modified** action. | |Video URL|Select **Web Url** from the dynamic content of **Create SAS URI by path** action.| | Body| Can be left as default.| ![Screenshot of the upload and index action.](./media/logic-apps-connector-arm-accounts/upload-and-index.png)
+ Select **Save**.
+ When the upload and indexing in the first flow completes, it sends an HTTP request with the correct callback URL to trigger the second flow. The second flow then retrieves the insights generated by Azure Video Indexer. In this example, it stores the output of your indexing job in your Azure Storage. However, it's up to you what you do with the output. ## Create a second flow - JSON extraction
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Loc
With the ARM-based [paid (unlimited)](accounts-overview.md) account you are able to use: -- The [Azure role-based access control (RBAC)](../role-based-access-control/overview.md).
+- [Azure role-based access control (RBAC)](../role-based-access-control/overview.md).
- Managed Identity to better secure the communication between your Azure Media Services and Azure Video Indexer account, Network Service Tags, and native integration with Azure Monitor to monitor your account (audit and indexing logs). - Scale and automate your [deployment with ARM-template](deploy-with-arm-template.md), [bicep](deploy-with-bicep.md) or terraform.
Now supporting source languages for STT (speech-to-text), translation, and searc
For more information, see [supported languages](language-support.md).
+### Configure confidence level in a person model with an API
+
+Use the [Patch person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Patch-Person-Model) API to configure the confidence level for face recognition within a person model.
+ ## August 2022 ### Update topic inferencing model
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
# Examine the Azure Video Indexer output
-When a video is indexed, Azure Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, blocks, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
+When a video is indexed, Azure Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
To visually examine the video's insights, press the **Play** button on the video on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
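For a programmatic look at the same output, the following Python sketch walks the transcript insight of a previously downloaded index file and prints when each line appears. The file name and the exact JSON shape assumed here (a `videos` list whose `insights` contain `transcript` items with `instances` carrying `start`/`end`) reflect a typical output file, not a definitive schema.

```python
import json

# Load a previously downloaded Azure Video Indexer index result (file name is a placeholder).
with open("video-index.json", encoding="utf-8") as f:
    index = json.load(f)

# Assumed structure: videos -> insights -> transcript -> instances (start/end).
for video in index.get("videos", []):
    for line in video.get("insights", {}).get("transcript", []):
        for instance in line.get("instances", []):
            print(f"{instance['start']} - {instance['end']}: {line.get('text', '')}")
```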
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
Last updated 04/22/2022
>[!NOTE] >Run commands are executed one at a time in the order submitted.
-In this how-to, you learn how to:
+In this article, you learn how to:
> [!div class="checklist"]
-> * List all existing external identity sources integrated with vCenter Server SSO
-> * Add Active Directory over LDAP, with or without SSL
+> * Export the certificate for LDAPS authentication
+> * Upload the LDAPS certificate to blob storage and generate a SAS URL
+> * Configure NSX-T DNS for resolution to your Active Directory Domain
+> * Add Active Directory over (Secure) LDAPS (LDAP over SSL) or (unsecure) LDAP
> * Add existing AD group to cloudadmin group
+> * List all existing external identity sources integrated with vCenter Server SSO
+> * Assign additional vCenter Server Roles to Active Directory Identities
> * Remove AD group from the cloudadmin role > * Remove existing external identity sources
In this how-to, you learn how to:
## Prerequisites -- Establish connectivity from your on-premises network to your private cloud.
+- Connectivity from your Active Directory network to your Azure VMware Solution private cloud must be operational.
-- If you have AD with SSL, download the certificate for AD authentication and upload it to an Azure Storage account as blob storage. Then, you'll need to [grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
+- For AD authentication with LDAPS:
-- If you use FQDN, enable DNS resolution on your on-premises AD.
+ - You will need access to the Active Directory Domain Controller(s) with Administrator permissions.
+ - Your Active Directory Domain Controller(s) must have LDAPS enabled with a valid certificate. The certificate could be issued by an [Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or a [Third-party/Public CA](/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority).
+ >[!NOTE]
+ >Self-signed certificates are not recommended for production environments.
+ - [Export the certificate for LDAPS authentication](#export-the-certificate-for-ldaps-authentication) and upload it to an Azure Storage account as blob storage. Then, you'll need to [grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
+- Ensure Azure VMware Solution has DNS resolution configured to your on-premises AD. Enable DNS Forwarder from Azure portal. See [Configure DNS forwarder for Azure VMware Solution](configure-dns-azure-vmware-solution.md) for further information.
-## List external identity
+>[!NOTE]
+>For further information about LDAPS and certificate issuance, consult with your security or identity management team.
+## Export the certificate for LDAPS authentication
+First, verify that the certificate used for LDAPS is valid.
-You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identity sources already integrated with vCenter Server SSO.
+1. Sign in to a domain controller with administrator permissions where LDAPS is enabled.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Open the **Run command**, type **mmc** and select the **OK** button.
+1. Select the **File** menu option then **Add/Remove Snap-in**.
+1. Select **Certificates** in the list of snap-ins and select the **Add>** button.
+1. In the **Certificates snap-in** window, select **Computer account**, and then select **Next**.
+1. Keep the first option, **Local computer...**, selected, then select **Finish**, and then **OK**.
+1. Expand the **Personal** folder under the **Certificates (Local Computer)** management console and select the **Certificates** folder to list the installed certificates.
-1. Select **Run command** > **Packages** > **Get-ExternalIdentitySources**.
    :::image type="content" source="media/run-command/ldaps-certificate-personal-certficates.png" alt-text="Screenshot showing the list of certificates." lightbox="media/run-command/ldaps-certificate-personal-certficates.png":::
+
+1. Double-click the certificate used for LDAPS. The **Certificate** General properties are displayed. Ensure the certificate's **Valid from** and **Valid to** dates are current and that the certificate has a **private key** that corresponds to the certificate.
- :::image type="content" source="media/run-command/run-command-overview.png" alt-text="Screenshot showing how to access the run commands available." lightbox="media/run-command/run-command-overview.png":::
+ :::image type="content" source="media/run-command/ldaps-certificate-personal-general.png" alt-text="Screenshot showing the properties of the certificate." lightbox="media/run-command/ldaps-certificate-personal-general.png":::
+
+1. In the same window, select the **Certification Path** tab and verify that the **Certification path** is valid. It should include the certificate chain of the root CA and, optionally, intermediate certificates, and the **Certificate Status** should be OK.
-1. Provide the required values or change the default values, and then select **Run**.
+ :::image type="content" source="media/run-command/ldaps-certificate-cert-path.png" alt-text="Screenshot showing the certificate chain." lightbox="media/run-command/ldaps-certificate-cert-path.png":::
+
+1. Close the window.
- :::image type="content" source="media/run-command/run-command-get-external-identity-sources.png" alt-text="Screenshot showing how to list external identity source. ":::
-
- | **Field** | **Value** |
- | | |
- | **Retain up to** |Retention period of the cmdlet output. The default value is 60 days. |
- | **Specify name for execution** | Alphanumeric name, for example, **getExternalIdentity**. |
- | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+Now proceed to export the certificate:
-1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
-
- :::image type="content" source="media/run-command/run-packages-execution-command-status.png" alt-text="Screenshot showing how to check the run commands notification or status." lightbox="media/run-command/run-packages-execution-command-status.png":::
+1. Still in the Certificates console, right-click the LDAPS certificate and select **All Tasks** > **Export**. The Certificate Export Wizard is displayed; select the **Next** button.
-## Add Active Directory over LDAP with SSL
+1. In the **Export Private Key** section, select the second option, **No, do not export the private key**, and then select the **Next** button.
+1. In the **Export File Format** section, select the second option, **Base-64 encoded X.509 (.CER)**, and then select the **Next** button.
+1. In the **File to Export** section, select the **Browse...** button, select a folder location to export the certificate to, enter a name, and then select the **Save** button.
-You'll run the `New-LDAPSIdentitySource` cmdlet to add an AD over LDAP with SSL as an external identity source to use with SSO into vCenter Server.
+>[!NOTE]
+>If more than one domain controller is LDAPS enabled, repeat the export procedure on the additional domain controller(s) to also export the corresponding certificate(s). Be aware that you can only reference two LDAPS servers in the `New-LDAPSIdentitySource` Run Command. If the certificate is a wildcard certificate, for example ***.avsdemo.net**, you only need to export the certificate from one of the domain controllers.
-1. Download the certificate for AD authentication and upload it to an Azure Storage account as blob storage. If multiple certificates are required, upload each certificate individually.
+## Upload the LDAPS certificate to blob storage and generate a SAS URL
-1. For each certificate, [Grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md). These SAS strings are supplied to the cmdlet as a parameter.
+- Upload the certificate file (.cer format) you just exported to an Azure Storage account as blob storage. Then [grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
- >[!IMPORTANT]
- >Make sure to copy each SAS string, because they will no longer be available once you leave this page.
-
-1. Select **Run command** > **Packages** > **New-LDAPSIdentitySource**.
+- If multiple certificates are required, upload each certificate individually and generate a SAS URL for each one.
+
+> [!IMPORTANT]
+> Make sure to copy each SAS URL string, because the strings will no longer be available once you leave the page.
+
+> [!TIP]
+> An alternative method for consolidating certificates is to save the certificate chains in a single file, as mentioned in [this VMware KB article](https://kb.vmware.com/s/article/2041378), and generate a single SAS URL for the file that contains all the certificates.
+
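If you prefer to script the upload and SAS generation instead of using the portal, the following is a minimal Python sketch using the `azure-storage-blob` package. The connection string, account key, container, and file names are placeholders; it performs the same steps described above (upload the exported .cer file, then generate a read-only SAS URL for it).

```python
from datetime import datetime, timedelta
from azure.storage.blob import BlobServiceClient, BlobSasPermissions, generate_blob_sas

# Placeholders - replace with your own storage account values.
CONNECTION_STRING = "<storage-connection-string>"
ACCOUNT_KEY = "<storage-account-key>"
CONTAINER = "ldaps-certs"
BLOB_NAME = "dc01-ldaps.cer"

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client(CONTAINER)

# Upload the exported .cer file as a block blob.
with open(BLOB_NAME, "rb") as data:
    container.upload_blob(name=BLOB_NAME, data=data, overwrite=True)

# Generate a read-only SAS token valid for 24 hours and build the SAS URL.
sas_token = generate_blob_sas(
    account_name=service.account_name,
    container_name=CONTAINER,
    blob_name=BLOB_NAME,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=24),
)
print(f"https://{service.account_name}.blob.core.windows.net/{CONTAINER}/{BLOB_NAME}?{sas_token}")
```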
+## Configure NSX-T DNS for resolution to your Active Directory Domain
+
+A DNS zone needs to be created and added to the DNS service. Follow the instructions in [Configure a DNS forwarder in the Azure portal](./configure-dns-azure-vmware-solution.md) to complete these two steps.
+
+After completion, verify that your DNS Service has your DNS zone included.
+ :::image type="content" source="media/run-command/ldaps-dns-zone-service-configured.png" alt-text="Screenshot showing the DNS Service that includes the required DNS zone." lightbox="media/run-command/ldaps-dns-zone-service-configured.png":::
+
+Your Azure VMware Solution Private cloud should now be able to resolve your on-premises Active Directory domain name properly.
++
+## Add Active Directory over LDAP with SSL
+
+In your Azure VMware Solution private cloud you'll run the `New-LDAPSIdentitySource` cmdlet to add an AD over LDAP with SSL as an external identity source to use with SSO into vCenter Server.
+
+1. Browse to your Azure VMware Solution private cloud and then select **Run command** > **Packages** > **New-LDAPSIdentitySource**.
1. Provide the required values or change the default values, and then select **Run**. | **Field** | **Value** | | | |
- | **Name** | User-friendly name of the external identity source, for example, **avslab.local**. |
- | **DomainName** | The FQDN of the domain. |
- | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the AD domain as an alias of the identity source if you're using SSPI authentications. |
- | **PrimaryUrl** | Primary URL of the external identity source, for example, **ldaps://yourserver:636**. |
- | **SecondaryURL** | Secondary fall-back URL if there's primary failure. |
- | **BaseDNUsers** | Where to look for valid users, for example, **CN=users,DC=yourserver,DC=internal**. Base DN is needed to use LDAP Authentication. |
- | **BaseDNGroups** | Where to look for groups, for example, **CN=group1, DC=yourserver,DC= internal**. Base DN is needed to use LDAP Authentication. |
- | **Credential** | The username and password used for authentication with the AD source (not cloudadmin). The user must be in the **username@avsldap.local** format. |
+ | **GroupName** | The group in the external identity source that gives the cloudadmin access. For example, **avs-admins**. |
| **CertificateSAS** | Path to SAS strings with the certificates for authentication to the AD source. If you're using multiple certificates, separate each SAS string with a comma. For example, **pathtocert1,pathtocert2**. |
- | **GroupName** | Group in the external identity source that gives the cloudadmin access. For example, **avs-admins**. |
+ | **Credential** | The domain username and password used for authentication with the AD source (not cloudadmin). The user must be in the **username@avslab.local** format. |
+ | **BaseDNGroups** | Where to look for groups, for example, **CN=group1, DC=avsldap,DC=local**. Base DN is needed to use LDAP Authentication. |
+ | **BaseDNUsers** | Where to look for valid users, for example, **CN=users,DC=avsldap,DC=local**. Base DN is needed to use LDAP Authentication. |
+ | **PrimaryUrl** | Primary URL of the external identity source, for example, **ldaps://yourserver.avslab.local:636**. |
+ | **SecondaryURL** | Secondary fall-back URL if there's primary failure. For example, **ldaps://yourbackupldapserver.avslab.local:636**. |
+ | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the AD domain as an alias of the identity source. Typically the **avsldap\** format. |
+ | **DomainName** | The FQDN of the domain, for example **avslab.local**. |
+ | **Name** | User-friendly name of the external identity source, for example, **avslab.local**. This is how it will be displayed in vCenter. |
| **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. | | **Specify name for execution** | Alphanumeric name, for example, **addexternalIdentity**. | | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
-1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
-
+1. Check **Notifications** or the **Run Execution Status** pane to see the progress and successful completion.
## Add Active Directory over LDAP
You'll run the `New-LDAPIdentitySource` cmdlet to add AD over LDAP as an externa
| **Field** | **Value** | | | |
- | **Name** | User-friendly name of the external identity source, for example, **avslap.local**. |
- | **DomainName** | The FQDN of the domain. |
- | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the AD domain as an alias of the identity source if you're using SSPI authentications. |
- | **PrimaryUrl** | Primary URL of the external identity source, for example, **ldap://yourserver:389**. |
+ | **Name** | User-friendly name of the external identity source, for example, **avslab.local**. This is how it will be displayed in vCenter. |
+ | **DomainName** | The FQDN of the domain, for example **avslab.local**. |
+ | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the AD domain as an alias of the identity source. Typically the **avsldap\** format. |
+ | **PrimaryUrl** | Primary URL of the external identity source, for example, **ldap://yourserver.avslab.local:389**. |
| **SecondaryURL** | Secondary fall-back URL if there's primary failure. |
- | **BaseDNUsers** | Where to look for valid users, for example, **CN=users,DC=yourserver,DC=internal**. Base DN is needed to use LDAP Authentication. |
- | **BaseDNGroups** | Where to look for groups, for example, **CN=group1, DC=yourserver,DC= internal**. Base DN is needed to use LDAP Authentication. |
- | **Credential** | Username and password used for authentication with the AD source (not cloudadmin). |
- | **GroupName** | Group to give cloud admin access in your external identity source, for example, **avs-admins**. |
+ | **BaseDNUsers** | Where to look for valid users, for example, **CN=users,DC=avslab,DC=local**. Base DN is needed to use LDAP Authentication. |
+ | **BaseDNGroups** | Where to look for groups, for example, **CN=group1, DC=avslab,DC=local**. Base DN is needed to use LDAP Authentication. |
+ | **Credential** | The domain username and password used for authentication with the AD source (not cloudadmin). The user must be in the **username@avslab.local** format. |
+ | **GroupName** | The group to give cloud admin access in your external identity source, for example, **avs-admins**. |
| **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. | | **Specify name for execution** | Alphanumeric name, for example, **addexternalIdentity**. | | **Timeout** | The period after which a cmdlet exits if taking too long to finish. | 1. Check **Notifications** or the **Run Execution Status** pane to see the progress. - ## Add existing AD group to cloudadmin group
-You'll run the `Add-GroupToCloudAdmins` cmdlet to add an existing AD group to cloudadmin group. The users in this group have privileges equal to the cloudadmin (cloudadmin@vsphere.local) role defined in vCenter Server SSO.
+You'll run the `Add-GroupToCloudAdmins` cmdlet to add an existing AD group to a cloudadmin group. Users in the cloud admin group have privileges equal to the cloudadmin (cloudadmin@vsphere.local) role defined in vCenter Server SSO.
1. Select **Run command** > **Packages** > **Add-GroupToCloudAdmins**.
You'll run the `Add-GroupToCloudAdmins` cmdlet to add an existing AD group to cl
1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
+## List external identity
+
+You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identity sources already integrated with vCenter Server SSO.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Run command** > **Packages** > **Get-ExternalIdentitySources**.
+ :::image type="content" source="media/run-command/run-command-overview.png" alt-text="Screenshot showing how to access the run commands available." lightbox="media/run-command/run-command-overview.png":::
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ :::image type="content" source="media/run-command/run-command-get-external-identity-sources.png" alt-text="Screenshot showing how to list external identity source. ":::
+
+ | **Field** | **Value** |
+ | | |
+ | **Retain up to** |Retention period of the cmdlet output. The default value is 60 days. |
+ | **Specify name for execution** | Alphanumeric name, for example, **getExternalIdentity**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
+
+ :::image type="content" source="media/run-command/run-packages-execution-command-status.png" alt-text="Screenshot showing how to check the run commands notification or status." lightbox="media/run-command/run-packages-execution-command-status.png":::
++
+## Assign additional vCenter Server Roles to Active Directory Identities
+After you've added an external identity over LDAP or LDAPS, you can assign vCenter Server Roles to Active Directory security groups based on your organization's security controls.
+
+1. After you sign in to vCenter Server with cloud admin privileges, select an item from the inventory, select the **ACTIONS** menu, and then select **Add Permission**.
+
    :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-1.png" alt-text="Screenshot displaying how to add a permission assignment." lightbox="media/run-command/ldaps-vcenter-permission-assignment-1.png":::
+
+1. In the Add Permission prompt:
+ 1. *Domain*. Select the Active Directory that was added previously.
+    1. *User/Group*. Enter the name of the desired user or group, and select it once it's found.
+ 1. *Role*. Select the desired role to assign.
+    1. *Propagate to children*. Optionally, select the checkbox if permissions should be propagated down to child resources.
+    :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-2.png" alt-text="Screenshot displaying how to assign the permission." lightbox="media/run-command/ldaps-vcenter-permission-assignment-2.png":::
+
+1. Switch to the **Permissions** tab and verify the permission assignment was added.
+ :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-3.png" alt-text="Screenshot displaying the add completion of permission assignment." lightbox="media/run-command/ldaps-vcenter-permission-assignment-3.png":::
+1. Users should now be able to sign in to vCenter Server using their Active Directory credentials.
## Remove AD group from the cloudadmin role
You'll run the `Remove-GroupFromCloudAdmins` cmdlet to remove a specified AD gro
1. Check **Notifications** or the **Run Execution Status** pane to see the progress. ---- ## Remove existing external identity sources You'll run the `Remove-ExternalIdentitySources` cmdlet to remove all existing external identity sources in bulk.
azure-web-pubsub Reference Rest Api Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-rest-api-data-plane.md
Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/J
[Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md)
+The credential scope used should be `https://webpubsub.azure.com/.default`.
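For example, a minimal Python sketch that acquires an Azure AD token for this scope (the `azure-identity` package is used here for illustration; any Azure AD token library works) to attach as a bearer token on data-plane requests:

```python
from azure.identity import DefaultAzureCredential

# Acquire an Azure AD token for the Web PubSub data plane scope.
credential = DefaultAzureCredential()
access_token = credential.get_token("https://webpubsub.azure.com/.default")

# Attach the token to data-plane REST calls as:
#   Authorization: Bearer <access_token.token>
print(access_token.token[:40], "...")
```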
+ You could also use **Role Based Access Control (RBAC)** to authorize the request from your server to Azure Web PubSub Service. [Learn how to configure Role Based Access Control roles for your resource](./howto-authorize-from-application.md#add-role-assignments-on-azure-portal)
You could also use **Role Based Access Control (RBAC)** to authorize the request
| Operation Group | Description | |--|-| |[Service Status](/rest/api/webpubsub/dataplane/health-api)| Provides operations to check the service status |
-|[Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub)| Provides operations to manage the connections and send messages to them. |
+|[Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub)| Provides operations to manage the connections and send messages to them. |
baremetal-infrastructure About The Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/about-the-public-preview.md
+
+ Title: About NC2 on Azure Public Preview
+description: Learn about NC2 on Azure Public Preview and the benefits it offers.
++ Last updated : 03/31/2021++
+# About Nutanix Cloud Clusters on Azure Public Preview
+
+The articles in this section are intended for the professionals participating in the Public Preview of NC2 on Azure.
+
+ To provide input, email [NC2-on-Azure Docs](mailto:AzNutanixPM@microsoft.com).
++
+In particular, this article highlights Public Preview features.
+
+## Unlock the benefits of Azure
+
+* Establish a consistent hybrid deployment strategy
+* Operate seamlessly with on-premises Nutanix Clusters in Azure
+* Build and scale without constraints
+* Invent for today and be prepared for tomorrow with NC2 on Azure
+
+### Scale and flexibility that align with your needs
+
+Get scale, automation, and fast provisioning for your Nutanix workloads on global Azure infrastructure to invent with purpose.
+
+### Optimize your investment
+
+Keep using your existing Nutanix investments, skills, and tools to quickly increase business agility with Azure cloud services.
+
+### Gain cloud cost efficiencies
+
+Manage your cloud spending with license portability to significantly reduce the cost of running workloads in the cloud.
+
+### Modernize through the power of Azure
+
+Adapt quicker with unified data governance and gain immediate insights with transformative analytics to drive innovation.
+
+### SKUs
+
+We offer two SKUs: AN36 and AN36P. For specifications, see [SKUs](skus.md).
+
+### More benefits
+
+* Microsoft Azure Consumption Contract (MACC) credits
+
+> [!NOTE]
+> During the public preview, RI is not supported.
+An additional discount may be available.
+
+## Support
+
+Nutanix (for software-related issues) and Microsoft (for infrastructure-related issues) will provide end-user support.
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Use cases and supported scenarios](use-cases-and-supported-scenarios.md)
baremetal-infrastructure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/architecture.md
+
+ Title: Architecture of BareMetal Infrastructure for NC2
+description: Learn about the architecture of several configurations of BareMetal Infrastructure for NC2.
++ Last updated : 04/14/2021++
+# Architecture of BareMetal Infrastructure for Nutanix
+
+In this article, we look at the architectural options for BareMetal Infrastructure for Nutanix and the features each option supports.
+
+## Deployment example
+
+The image in this section shows one example of an NC2 on Azure deployment.
++
+### Cluster Management virtual network
+
+* Contains the Nutanix Ready Nodes
+* Nodes reside in a delegated subnet (special BareMetal construct)
+
+### Hub virtual network
+
+* Contains a gateway subnet and VPN Gateway
+* VPN Gateway is entry point from on-premises to cloud
+
+### PC virtual network
+
+* Contains Prism Central - Nutanix's software appliance that enables advanced functionality within the Prism portal.
+
+## Connect from cloud to on-premises
+
+Connecting from cloud to on-premises is supported by two traditional products: Express Route and VPN Gateway.
+One example deployment is to have a VPN gateway in the Hub virtual network.
+This virtual network is peered with both the PC virtual network and Cluster Management virtual network, providing connectivity across the network and to your on-premises site.
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Requirements](requirements.md)
baremetal-infrastructure Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/faq.md
+
+ Title: FAQ
+description: Questions frequently asked about NC2 on Azure
++ Last updated : 07/01/2022++
+# Frequently asked questions about NC2 on Azure
+
+This article addresses questions most frequently asked about NC2 on Azure.
+
+## What is Hyperconverged Infrastructure (HCI)?
+
+Hyper-converged infrastructure (HCI) uses locally attached storage resources to combine common data center hardware with intelligent software to create flexible building blocks that replace legacy infrastructure consisting of separate servers, storage networks, and storage arrays. [Video explanation](https://www.youtube.com/watch?v=OPYA5-V0yRo)
+
+## How can I create a VM on a node?
+
+After a customer provisions a cluster of Nutanix Ready Nodes, they can spin up a VM through the Nutanix Prism Portal.
+This operation should be exactly the same as on-premises in the prism portal.
+
+## Is NC2 on Azure a third party or first party offering?
+
+NC2 on Azure is a third-party offering on Azure Marketplace.
+However, we're working hand in hand with Nutanix to offer the best product experience.
+
+## How will I be billed?
+
+Customers will be billed on a pay-as-you-go basis. Additionally, customers are able to use their existing Microsoft Azure Consumption Contract (MACC).
+
+## What software advantages does Nutanix have over competitors?
+
+* Data locality
+* Shadow Clones (which lead to faster boot time)
+* Cluster-level microservices that lead to world-class performance
+
+## Will this solution integrate with the rest of the Azure cloud?
+
+Yes! You can use the products and services in Azure that you already have and love.
+
+## Who supports NC2 on Azure?
+
+Microsoft delivers support for BareMetal infrastructure of NC2 on Azure.
+You can submit a support request. For Cloud Solution Provider (CSP) managed subscriptions, the first level of support is provided by the Solution Provider, in the same fashion as CSP does for other Azure services.
+
+Nutanix delivers support for Nutanix software of NC2 on Azure.
+Nutanix offers a support tier called Production Support for NC2.
+For more information about Production Support tiers and SLAs, see Product Support Programs under Cloud Services Support.
+
+## Can I use my existing VPN or ER gateway for the DR scenario?
+
+Technically, yes. Raise a support ticket from Azure portal to get this functionality enabled.
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Getting started](get-started.md)
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/get-started.md
+
+ Title: Getting started
+description: Learn how to sign up, set up, and use NC2 on Azure Public Preview.
++ Last updated : 07/01/2021++
+# Getting started with NC2 on Azure
+
+Learn how to sign up for, set up, and use NC2 on Azure Public Preview.
+
+## Sign up for the Public Preview
+
+Once you've satisfied the [requirements](requirements.md), go to [Nutanix Cloud Clusters
+on Azure Deployment
+and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf) to sign up for the Preview.
+
+## Set up NC2 on Azure
+
+To set up NC2 on Azure, go to [Nutanix Cloud Clusters
+on Azure Deployment and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf).
+
+## Use NC2 on Azure
+
+For more information about using NC2 on Azure, see [Nutanix Cloud Clusters
+on Azure Deployment
+and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf).
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [About the Public Preview](about-the-public-preview.md)
baremetal-infrastructure Nc2 Baremetal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/nc2-baremetal-overview.md
+
+ Title: What is BareMetal Infrastructure for NC2 on Azure?
+description: Learn about the features BareMetal Infrastructure offers for NC2 workloads.
++ Last updated : 07/01/2022++
+# What is BareMetal Infrastructure for NC2 on Azure?
+
+In this article, we'll give an overview of the features BareMetal Infrastructure offers for Nutanix workloads.
+
+Nutanix Cloud Clusters (NC2) on Microsoft Azure provides a hybrid cloud solution that operates as a single cloud, allowing you to manage applications and infrastructure in your private cloud and Azure. With NC2 running on Azure, you can seamlessly move your applications between on-premises and Azure using a single management console. With NC2 on Azure, you can use your existing Azure accounts and networking setup (VPN, VNets, and Subnets), eliminating the need to manage any complex network overlays. With this hybrid offering, you use the same Nutanix software and licenses across your on-premises cluster and Azure to optimize your IT investment efficiently.
+
+You use the NC2 console to create a cluster, update the cluster capacity (the number of nodes), and delete a Nutanix cluster. After you create a Nutanix cluster in Azure using NC2, you can operate the cluster in the same manner as you operate your on-premises Nutanix cluster with minor changes in the Nutanix command-line interface (nCLI), Prism Element and Prism Central web consoles, and APIs.
+
+## Supported protocols
+
+The following protocols are used for different mount points within BareMetal servers for Nutanix workload.
+
+- OS mount – internet small computer systems interface (iSCSI)
+- Data/log – [Network File System version 3 (NFSv3)](/windows-server/storage/nfs/nfs-overview#nfs-version-3-continuous-availability)
+- Backup/archive – [Network File System version 4 (NFSv4)](/windows-server/storage/nfs/nfs-overview#nfs-version-41)
+
+## Licensing
+
+You can bring your own on-premises capacity-based Nutanix licenses (CBLs).
+Alternatively, you can purchase licenses from Nutanix or from Azure Marketplace.
+
+## Operating system and hypervisor
+
+NC2 runs Nutanix Acropolis Operating System (AOS) and Nutanix Acropolis Hypervisor (AHV).
+
+- Servers are pre-loaded with [AOS 6.1](https://www.nutanixbible.com/4-book-of-aos.html).
+- AHV 6.1 is built into this product as the default hypervisor at no extra cost.
+- AHV hypervisor is based on open source Kernel-based Virtual Machine (KVM).
+- AHV will determine the lowest processor generation in the cluster and constrain all Quick Emulator (QEMU) domains to that level.
+
+This functionality allows mixing of processor generations within an AHV cluster and ensures the ability to live-migrate between hosts.
+
+AOS abstracts kvm, virsh, qemu, libvirt, and iSCSI from the end-user and handles all backend configuration.
+Thus users can use Prism to manage everything they would want to manage, while not needing to be concerned with low-level management.
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Getting started with NC2 on Azure](get-started.md)
baremetal-infrastructure Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/requirements.md
+
+ Title: Requirements
+description: Learn what you need to run NC2 on Azure, including Azure, Nutanix, networking, and other requirements.
++ Last updated : 03/31/2021++
+# Requirements
+
+This article assumes prior knowledge of the Nutanix stack and Azure services to operate significant deployments on Azure.
+The following sections identify the requirements to use Nutanix Clusters on Azure:
+
+## Azure account requirements
+
+* An Azure account with a new subscription
+* An Azure Active Directory
+
+## My Nutanix account requirements
+
+For more information, see "NC2 on Azure Subscription and Billing" in [Nutanix Cloud Clusters on Azure Deployment and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf).
+
+## Networking requirements
+
+* Connectivity between your on-premises datacenter and Azure. Both ExpressRoute and VPN are supported.
+* After a cluster is created, you'll need Virtual IP addresses for both the on-premises cluster and the cluster running in Azure.
+* Outbound internet access on your Azure portal.
+* Azure Directory Service resolves the FQDN:
+gateway-external-api.console.nutanix.com.
+
+## Other requirements
+
+* Minimum of three (or more) Azure Nutanix Ready nodes per cluster
+* Only the Nutanix AHV hypervisor on Nutanix clusters running in Azure
+* Prism Central instance deployed on NC2 on Azure to manage the Nutanix clusters in Azure
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Supported instances and regions](supported-instances-and-regions.md)
baremetal-infrastructure Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/skus.md
+
+ Title: SKUs
+description: Learn about SKU options for NC2 on Azure Public Preview, including core, RAM, storage, and network.
++ Last updated : 07/01/2021++
+# SKUs
+
+This article identifies options associated with SKUs available for NC2 on Azure Public Preview, including core, RAM, storage, and network.
+
+## Options
+
+The following table presents component options for each available SKU.
+
+| Component |Ready Node for Nutanix AN36|Ready Node for Nutanix AN36P|
+| :- | -: |::|
+|Core|Intel 6140, 36 Core, 2.3 GHz|Intel 6240, 36 Core, 2.6 GHz|
+|vCPUs|72|72|
+|RAM|576 GB|768 GB|
+|Storage|18.56 TB (8 x 1.92 TB SATA SSD, 2 x 1.6 TB NVMe)|19.95 TB (2 x 375 GB Optane, 6 x 3.2 TB NVMe)|
+|Network|100 Gbps (four links * 25 Gbps)|100 Gbps (four links * 25 Gbps)|
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [FAQ](faq.md)
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/solution-design.md
+
+ Title: Solution design
+description: Learn about topologies and constraints for NC2 on Azure Public Preview.
++ Last updated : 07/01/2022++
+# Solution design
+
+This article identifies topologies and constraints for NC2 on Azure Public Preview.
+
+## Supported topologies
+
+The following table describes the network topologies supported by each network features configuration of NC2 on Azure.
+
+|Topology |Basic network features |
+| :- |::|
+|Connectivity to BareMetal (BM) in a local VNet| Yes |
+|Connectivity to BM in a peered VNet (Same region)|Yes |
+|Connectivity to BM in a peered VNet (Cross region or global peering)|No |
+|Connectivity to a BM over ExpressRoute gateway |Yes|
+|ExpressRoute (ER) FastPath |No |
+|Connectivity from on-premises to a BM in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit|Yes |
+|Connectivity from on-premises to a BM in a spoke VNet over VPN gateway| Yes |
+|Connectivity from on-premises to a BM in a spoke VNet over VPN gateway and VNet peering with gateway transit| Yes |
+|Connectivity over Active/Passive VPN gateways| Yes |
+|Connectivity over Active/Active VPN gateways| No |
+|Connectivity over Active/Active Zone Redundant gateways| No |
+|Connectivity over Virtual WAN (VWAN)| No |
+
+## Constraints
+
+The following table describes what's supported for each network features configuration:
+
+|Features |Basic network features |
+| :- | -: |
+|Delegated subnet per VNet |1|
+|[Network Security Groups](../../../virtual-network/network-security-groups-overview.md) on NC2 on Azure-delegated subnets|No|
+|[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets|No|
+|Connectivity to [private endpoints](../../../private-link/private-endpoint-overview.md)|No|
+|Load balancers for NC2 on Azure traffic|No|
+|Dual stack (IPv4 and IPv6) virtual network|IPv4 only supported|
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Architecture](architecture.md)
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/supported-instances-and-regions.md
+
+ Title: Supported instances and regions
+description: Learn about instances and regions supported for NC2 on Azure Public Preview.
+++ Last updated : 03/31/2021++
+# Supported instances and regions
+
+Learn about instances and regions supported for NC2 on Azure Public Preview.
+
+## Supported instances
+
+Nutanix Clusters on Azure supports:
+
+* Minimum of three bare metal nodes per cluster.
+* Maximum of 16 bare metal nodes for public preview.
+* Only the Nutanix AHV hypervisor on Nutanix clusters running in Azure.
+* Prism Central instance deployed on Nutanix Clusters on Azure to manage the Nutanix clusters in Azure.
+
+## Supported regions
+
+NC2 on Azure supports the following Azure regions, using AN36:
+
+* East US
+* West US 2
+
+NC2 on Azure supports the following Azure regions, using AN36P:
+
+* East US 2
+* North Central US
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [SKUs](skus.md)
baremetal-infrastructure Use Cases And Supported Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/use-cases-and-supported-scenarios.md
+
+ Title: Use cases and supported scenarios
+description: Learn about use cases and supported scenarios for NC2 on Azure, including cluster management, disaster recovery, on-demand elasticity, and lift-and-shift.
+++ Last updated : 07/01/2022++
+# Use cases and supported scenarios
+
+ Learn about use cases and supported scenarios for NC2 on Azure, including cluster management, disaster recovery, on-demand elasticity, and lift-and-shift.
+
+## Unified management experience - cluster management
+
+It's critical to customers that operations and cluster management be nearly identical to on-premises.
+Customers can update capacity, monitor alerts, replace hosts, monitor usage, and more by combining the respective strengths of Microsoft and Nutanix.
+
+## Disaster recovery
+
+Disaster recovery is critical to cloud functionality.
+A disaster can be any of the following:
+
+- Cyber attack
+- Data breach
+- Equipment failure
+- Natural disaster
+- Data loss
+- Human error
+- Malware and viruses
+- Network and internet blips
+- Hardware and/or software failure
+- Weather catastrophes
+- Flooding
+- Office vandalism
+
+ ...or anything else that puts your operations at risk.
+
+When a disaster strikes, the goal of any DR plan is to ensure operations run as normally as possible.
+While the business will be aware of the crisis, ideally, its customers and end-users shouldn't be affected.
+
+## On-demand elasticity
+
+Scale up and scale out as you like.
+We provide the flexibility that means you don't have to procure hardware yourself - with just a click of a button you can get additional nodes in the cloud nearly instantly.
+
+## Lift and shift
+
+Move applications to the cloud and modernize your infrastructure.
+Applications move with no changes, allowing for flexible operations and minimum downtime.
+
+> [!div class="nextstepaction"]
+> [Solution design](solution-design.md)
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md
# What is Azure Bastion?
-Azure Bastion is a service you deploy that lets you connect to a virtual machine using your browser and the Azure portal. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software.
+ Azure Bastion is a service you deploy that lets you connect to a virtual machine using your browser and the Azure portal, or via the native SSH or RDP client already installed on your local computer. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software.
Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH.
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Batch node agents are not automatically upgraded for pools that have non-zero co
Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you are experiencing issues with your Batch pool or compute nodes, as discussed in the [Nodes](#nodes) section.
-[!NOTE]
+> [!NOTE]
> For general guidance about security in Azure Batch, see [Batch security and compliance best practices](security-best-practices.md). ### Pool lifetime and billing
cognitive-services Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/developer-guide.md
Previously updated : 08/24/2022 Last updated : 09/15/2022
The Language service provides support through a REST API, and client libraries i
## Client libraries (Azure SDK)
-The Language service provides three namespaces for using the available features. Depending on which features and programming language you're using, you will need to download one or more of the following packages.
+The Language service provides three namespaces for using the available features. Depending on which features and programming language you're using, you will need to download one or more of the following packages, and have the following framework/language version support:
+
+|Framework/Language | Minimum supported version |
+|||
+|.NET | .NET Framework 4.6.1 or newer, or .NET (formerly .NET Core) 2.0 or newer. |
+|Java | v8 or later |
+|JavaScript | v14 LTS or later |
+|Python| v3.7 or later |
### Azure.AI.TextAnalytics
The `Azure.AI.TextAnalytics` namespace enables you to use the following Language
As you use these features in your application, use the following documentation and code samples for additional information.
-|Reference documentation |Samples |
-|||
-| [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
-| [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
-| [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
-[Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
+| Language → Latest GA version |Reference documentation |Samples |
+||||
+| [C#/.NET → v5.2.0](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
+| [Java → v5.2.0](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
+| [JavaScript → v5.1.0](https://www.npmjs.com/package/@azure/ai-text-analytics/v/5.1.0) | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
+| [Python → v5.2.0](https://pypi.org/project/azure-ai-textanalytics/5.2.0/) | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
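As a brief illustration, the following Python sketch uses the `azure-ai-textanalytics` package from the table above to run sentiment analysis; the endpoint and key values are placeholders for your own Language resource.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholders - use the endpoint and key from your Language resource.
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["The rooms were beautiful, but the front desk was unhelpful."]

# Run sentiment analysis and print the overall sentiment and confidence scores.
for result in client.analyze_sentiment(documents):
    if not result.is_error:
        print(result.sentiment, result.confidence_scores)
```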
### Azure.AI.Language.Conversations
The `Azure.AI.Language.Conversations` namespace enables you to use the following
As you use these features in your application, use the following documentation and code samples for additional information.
-| Reference documentation |Samples |
-|||
-| [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme-pre) | [C# samples](https://aka.ms/sdk-sample-conversation-dot-net) |
-| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme) | [Python samples](https://aka.ms/sdk-samples-conversation-python) |
+| Language → Latest GA version | Reference documentation |Samples |
+||||
+| [C#/.NET → v1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme-pre) | [C# samples](https://aka.ms/sdk-sample-conversation-dot-net) |
+| [Python → v1.0.0](https://pypi.org/project/azure-ai-language-conversations/) | [Python documentation](/python/api/overview/azure/ai-language-conversations-readme) | [Python samples](https://aka.ms/sdk-samples-conversation-python) |
### Azure.AI.Language.QuestionAnswering
The `Azure.AI.Language.QuestionAnswering` namespace enables you to use the follo
As you use these features in your application, use the following documentation and code samples for additional information.
-|Reference documentation |Samples |
-|||
-| [C# documentation](/dotnet/api/overview/azure/ai.language.questionanswering-readme-pre) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering) |
-| [Python documentation](/python/api/overview/azure/ai-language-questionanswering-readme) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-questionanswering) |
-
-## Version support
-
-The namespaces mentioned here have the following framework/language version support:
-
-|Framework/Language | Minimum supported version |
-|||
-|.NET | .NET Framework 4.6.1 or newer, or .NET (formerly .NET Core) 2.0 or newer. |
-|Java | v8 or later |
-|JavaScript | v14 LTS or later |
-|Python| v3.7 or later |
+| Language → Latest GA version |Reference documentation |Samples |
+||||
+| [C#/.NET → v1.0.0](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering/1.0.0#readme-body-tab) | [C# documentation](/dotnet/api/overview/azure/ai.language.questionanswering-readme-pre) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering) |
+| [Python → v1.0.0](https://pypi.org/project/azure-ai-language-questionanswering/1.0.0/) | [Python documentation](/python/api/overview/azure/ai-language-questionanswering-readme) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-questionanswering) |
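Similarly, a minimal Python sketch for the `azure-ai-language-questionanswering` package; the endpoint, key, project, and deployment names are placeholders for your own custom question answering project.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering import QuestionAnsweringClient

# Placeholders - use the endpoint and key from your Language resource.
client = QuestionAnsweringClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Query a deployed custom question answering project.
output = client.get_answers(
    question="How long does my warranty last?",
    project_name="<your-project>",
    deployment_name="production",
)

for answer in output.answers:
    print(answer.confidence, answer.answer)
```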
# [REST API](#tab/rest-api)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md
See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan
### Regional availability

Conversational language understanding is only available in some Azure regions. To use conversational language understanding, you must choose a Language resource in one of the following regions:
-* West US 2
+* Australia East
+* Central India
* East US
* East US 2
-* West US 3
+* North Europe
* South Central US
+* Switzerland North
+* UK South
* West Europe
-* North Europe
-* UK south
-* Australia East
+* West US 2
+* West US 3
+ ## API limits
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.

## September 2022
-Text Analytics for Health now [supports additional languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German Italian, Portuguese and Hebrew. These languages are available when using a docker container to deploy the API service.
+
+* Text Analytics for Health now [supports additional languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German, Italian, Portuguese, and Hebrew. These languages are available when using a Docker container to deploy the API service.
+
+* The Azure.AI.TextAnalytics client library v5.2.0 is generally available and ready for use in production applications. For more information on Language service client libraries, see the [**Developer overview**](./concepts/developer-guide.md).
+
+ This release includes the following updates:
+
+ ### [C#/.NET](#tab/csharp)
+
+ [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0)
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/CHANGELOG.md)
+
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/README.md)
+
+ [**Samples**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples)
+
+ ### [Java](#tab/java)
+
+ [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0)
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
+
+ [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples)
+
+ ### [Python](#tab/python)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-textanalytics/5.2.0/)
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/CHANGELOG.md)
+
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/README.md)
+
+ [**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples)
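As an illustration only (not part of the release notes), a minimal call against the GA v5.2.0 Python package could look like this; the endpoint and key are placeholders.

```python
# Minimal sketch (placeholder endpoint/key values).
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["The rooms were beautiful, but the check-in line was far too long."]

# Run sentiment analysis and print per-document results, skipping errors.
for doc in client.analyze_sentiment(documents):
    if not doc.is_error:
        print(doc.sentiment, doc.confidence_scores)
```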
+
+
## August 2022
Text Analytics for Health now [supports additional languages](./text-analytics-f
* A new version of the Language API (`2022-07-01-preview`) has been released. It provides:
    * [Automatic language detection](./concepts/use-asynchronously.md#automatic-language-detection) for asynchronous tasks.
- * For Text Analytics for health, confidence score are now returned in relations.
+ * Text Analytics for health confidence scores are now returned in relations.
To use this version in your REST API calls, use the following URL:
cosmos-db Monitoring Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitoring-solutions.md
Using the Mongo API, Dynatrace collects and delivers CosmosDB metrics, which inc
- [Try Dynatrace with 15 days free trial](https://www.dynatrace.com/trial)
- [Launch from Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dynatrace.dynatrace-managed)
-- [Documentation on how to Cosmos DB with Azure Monitor](https://www.dynatrace.com/support/help/technology-support/cloud-platforms/microsoft-azure-services/set-up-integration-with-azure-monitor/?_ga=2.184080354.559899881.1623174355-748416177.1603817475)
+- [Documentation on how to monitor Cosmos DB with Azure Monitor](https://www.dynatrace.com/support/help/setup-and-configuration/setup-on-cloud-platforms/microsoft-azure-services)
- [Cosmos DB - Dynatrace Integration details](https://www.dynatrace.com/news/blog/azure-services-explained-part-4-azure-cosmos-db/?_ga=2.185016301.559899881.1623174355-748416177.1603817475)
- [Dynatrace Monitoring for Azure databases](https://www.dynatrace.com/technologies/azure-monitoring/azure-database-performance/)
- [Dynatrace for Azure solution overview](https://www.dynatrace.com/technologies/azure-monitoring/)
cost-management-billing Track Consumption Commitment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/track-consumption-commitment.md
tags: billing
Previously updated : 07/28/2022 Last updated : 09/19/2022
MACC functionality in the Azure portal is only available for direct MCA and dire
In the scenario that a MACC commitment has been transacted prior to the expiration or completion of a prior MACC (on the same enrollment/billing account), actual decrement of a commitment will begin upon completion or expiration of the prior commitment. In other words, if you have a new MACC following the expiration or completion of an older MACC on the same enrollment or billing account, use of the new commitment starts when the old commitment expires or is completed.
+## Prerequisites
+
+- For an EA, the user needs to be an Enterprise administrator to view the MACC balance.
+- For an MCA, the user must have the owner, contributor, or reader role on the billing account to view the MACC balance.
+
## Track your MACC Commitment

### [Azure portal](#tab/portal)
cost-management-billing Understand Vm Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-vm-reservation-charges.md
Previously updated : 09/15/2021 Last updated : 09/19/2022
A reservation discount is "*use-it-or-lose-it*". So, if you don't have matching
When you shut down a resource or scale the number of VMs, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are *lost*.
+Stopped VMs are billed and continue to use reservation hours. Deallocate or delete VM resources, or scale in other VMs, to use your available reservation hours with other workloads.
+
## Reservation discount for non-Windows VMs

The Azure reservation discount is applied to running VM instances on an hourly basis. The reservations that you have purchased are matched to the usage emitted by the running VMs to apply the reservation discount. For VMs that may not run the full hour, the reservation will be filled from other VMs not using a reservation, including concurrently running VMs. At the end of the hour, the reservation application for VMs in the hour is locked. In the event a VM does not run for an hour or concurrent VMs within the hour do not fill the hour of the reservation, the reservation is underutilized for that hour. The following graph illustrates the application of a reservation to billable VM usage. The illustration is based on one reservation purchase and two matching VM instances.
cost-management-billing Reservation Discount Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-databricks.md
# How Azure Databricks pre-purchase discount is applied
-You can use pre-purchased Azure Databricks commit units (DBCU) at any time during the purchase term. Any Azure Databricks usage is deducts from the pre-purchased DBCUs automatically.
+You can use pre-purchased Azure Databricks commit units (DBCU) at any time during the purchase term. Any Azure Databricks usage is deducted from the pre-purchased DBCUs automatically.
Unlike VMs, pre-purchased units don't expire on an hourly basis. You can use them at any time during the term of the purchase. To get the pre-purchase discounts, you don't need to redeploy or assign a pre-purchased plan to your Azure Databricks workspaces for the usage.
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-guide.md
Previously updated : 08/04/2022 Last updated : 09/02/2022 # Troubleshoot mapping data flows in Azure Data Factory
This article explores common troubleshooting methods for mapping data flows in A
2. Check the status of your file and table connections in the data flow designer. In debug mode, select **Data Preview** on your source transformations to ensure that you can access your data.
3. If everything looks correct in data preview, go into the Pipeline designer and put your data flow in a Pipeline activity. Debug the pipeline for an end-to-end test.
-### Improvement on CSV/CDM format in Data Flow
-
-If you use the **Delimited Text or CDM formatting for mapping data flow in Azure Data Factory V2**, you may face the behavior changes to your existing pipelines because of the improvement for Delimited Text/CDM in data flow starting from **1 May 2021**.
+### Internal server errors
-You may encounter the following issues before the improvement, but after the improvement, the issues were fixed. Read the following content to determine whether this improvement affects you.
+Specific scenarios that can cause internal server errors are shown as follows.
-#### Scenario 1: Encounter the unexpected row delimiter issue
+#### Scenario 1: Not choosing the appropriate compute size/type and other factors
- You are affected if you are in the following conditions:
+ Successful execution of data flows depends on many factors, including the compute size/type, numbers of source/sinks to process, the partition specification, transformations involved, sizes of datasets, the data skewness and so on.<br/>
+
+ For more guidance, see [Integration Runtime performance](concepts-integration-runtime-performance.md).
- Before the improvement, the default row delimiter `\n` may be unexpectedly used to parse delimited text files, because when Multiline setting is set to True, it invalidates the row delimiter setting, and the row delimiter is automatically detected based on the first 128 characters. If you fail to detect the actual row delimiter, it would fall back to `\n`.
+#### Scenario 2: Using debug sessions with parallel activities
- After the improvement, any one of the three-row delimiters: `\r`, `\n`, `\r\n` should have worked.
-
- The following example shows you one pipeline behavior change after the improvement:
+ When triggering a run using the data flow debug session with constructs like ForEach in the pipeline, multiple parallel runs can be submitted to the same cluster. This situation can lead to cluster failure problems while running because of resource issues, such as being out of memory.<br/>
+
+ To submit a run with the appropriate integration runtime configuration defined in the pipeline activity after publishing the changes, select **Trigger Now** or **Debug** > **Use Activity Runtime**.
- **Example**:<br/>
- For the following column:<br/>
- `C1, C2, {long first row}, C128\r\n `<br/>
- `V1, V2, {values………………….}, V128\r\n `<br/>
-
- Before the improvement, `\r` is kept in the column value. The parsed column result is:<br/>
- `C1 C2 {long first row} C128`**`\r`**<br/>
- `V1 V2 {values………………….} V128`**`\r`**<br/> 
+#### Scenario 3: Transient issues
- After the improvement, the parsed column result should be:<br/>
- `C1 C2 {long first row} C128`<br/>
- `V1 V2 {values………………….} V128`<br/>
+ Transient issues with microservices involved in the execution can cause the run to fail.<br/>
-#### Scenario 2: Encounter an issue of incorrectly reading column values containing '\r\n'
+ Configuring retries in the pipeline activity can resolve the problems caused by transient issues. For more guidance, see [Activity Policy](concepts-pipelines-activities.md#activity-json).
- You are affected if you are in the following conditions:
- Before the improvement, when reading the column value, the `\r\n` in it may be incorrectly replaced by `\n`.
+## Common error codes and messages
- After the improvement, `\r\n` in the column value will not be replaced by `\n`.
+This section lists common error codes and messages reported by mapping data flows in Azure Data Factory, along with their associated causes and recommendations.
- The following example shows you one pipeline behavior change after the improvement:
-
- **Example**:<br/>
-
- For the following column:<br/>
- **`"A\r\n"`**`, B, C\r\n`<br/>
+### Error code: DF-AdobeIntegration-InvalidMapToFilter
- Before the improvement, the parsed column result is:<br/>
- **`A\n`**` B C`<br/>
+- **Message**: Custom resource can only have one Key/Id mapped to filter.
+- **Cause**: Invalid configurations are provided.
+- **Recommendation**: In your AdobeIntegration settings, make sure that the custom resource can only have one Key/Id mapped to filter.
- After the improvement, the parsed column result should be:<br/>
- **`A\r\n`**` B C`<br/>
+### Error code: DF-AdobeIntegration-InvalidPartitionConfiguration
-#### Scenario 3: Encounter an issue of incorrectly writing column values containing '\n'
+- **Message**: Only single partition is supported. Partition schema may be RoundRobin or Hash.
+- **Cause**: Invalid partition configurations are provided.
+- **Recommendation**: In AdobeIntegration settings, confirm that only the single partition is set and partition schemas may be RoundRobin or Hash.
- You are affected if you are in the following conditions:
-
- Before the improvement, when writing the column value, the `\n` in it may be incorrectly replaced by `\r\n`.
+### Error code: DF-AdobeIntegration-InvalidPartitionType
- After the improvement, `\n` in the column value will not be replaced by `\r\n`.
-
- The following example shows you one pipeline behavior change after the improvement:
+- **Message**: Partition type has to be roundRobin.
+- **Cause**: Invalid partition types are provided.
+- **Recommendation**: Please update AdobeIntegration settings to make sure your partition type is RoundRobin.
- **Example**:<br/>
+### Error code: DF-AdobeIntegration-InvalidPrivacyRegulation
- For the following column:<br/>
- **`A\n`**` B C`<br/>
+- **Message**: Only privacy regulation that's currently supported is 'GDPR'.
+- **Cause**: Invalid privacy configurations are provided.
+- **Recommendation**: Please update AdobeIntegration settings; only the 'GDPR' privacy regulation is currently supported.
- Before the improvement, the CSV sink is:<br/>
- **`"A\r\n"`**`, B, C\r\n` <br/>
+### Error code: DF-AdobeIntegration-KeyColumnMissed
- After the improvement, the CSV sink should be:<br/>
- **`"A\n"`**`, B, C\r\n`<br/>
+- **Message**: Key must be specified for non-insertable operations.
+- **Cause**: Key columns are missing.
+- **Recommendation**: Update AdobeIntegration settings to ensure key columns are specified for non-insertable operations.
-#### Scenario 4: Encounter an issue of incorrectly reading empty string as NULL
-
- You are affected if you are in the following conditions:
-
- Before the improvement, the column value of unquoted empty string is read as NULL.
+### Error code: DF-AzureDataExplorer-InvalidOperation
- After the improvement, empty string will not be parsed as NULL value.
-
- The following example shows you one pipeline behavior change after the improvement:
+- **Message**: Blob operation is not supported on older storage accounts. Creating a new storage account may fix the issue.
+- **Cause**: Operation is not supported.
+- **Recommendation**: Change **Update method** configuration as delete, update and upsert are not supported in Azure Data Explorer.
- **Example**:<br/>
+### Error code: DF-AzureDataExplorer-ReadTimeout
- For the following column:<br/>
- `A, ,B, `<br/>
+- **Message**: Operation timeout while reading data.
+- **Cause**: Operation times out while reading data.
+- **Recommendation**: Increase the value in **Timeout** option in source transformation settings.
- Before the improvement, the parsed column result is:<br/>
- `A null B null`<br/>
+### Error code: DF-AzureDataExplorer-WriteTimeout
- After the improvement, the parsed column result should be:<br/>
- `A "" (empty string) B "" (empty string)`<br/>
+- **Message**: Operation timeout while writing data.
+- **Cause**: Operation times out while writing data.
+- **Recommendation**: Increase the value in **Timeout** option in sink transformation settings.
-### Internal server errors
+### Error code: DF-Blob-FunctionNotSupport
-Specific scenarios that can cause internal server errors are shown as follows.
+- **Message**: This endpoint does not support BlobStorageEvents, SoftDelete or AutomaticSnapshot. Please disable these account features if you would like to use this endpoint.
+- **Cause**: Azure Blob Storage events, soft delete or automatic snapshot is not supported in data flows if the Azure Blob Storage linked service is created with service principal or managed identity authentication.
+- **Recommendation**: Disable Azure Blob Storage events, soft delete or automatic snapshot feature on the Azure Blob account, or use key authentication to create the linked service.
-#### Scenario 1: Not choosing the appropriate compute size/type and other factors
+### Error code: DF-Blob-InvalidAccountConfiguration
- Successful execution of data flows depends on many factors, including the compute size/type, numbers of source/sinks to process, the partition specification, transformations involved, sizes of datasets, the data skewness and so on.<br/>
-
- For more guidance, see [Integration Runtime performance](concepts-integration-runtime-performance.md).
+- **Message**: Either one of account key or sas token should be specified.
+- **Cause**: An invalid credential is provided in the Azure Blob linked service.
+- **Recommendation**: Use either account key or SAS token for the Azure Blob linked service.
-#### Scenario 2: Using debug sessions with parallel activities
+### Error code: DF-Blob-InvalidAuthConfiguration
- When triggering a run using the data flow debug session with constructs like ForEach in the pipeline, multiple parallel runs can be submitted to the same cluster. This situation can lead to cluster failure problems while running because of resource issues, such as being out of memory.<br/>
-
- To submit a run with the appropriate integration runtime configuration defined in the pipeline activity after publishing the changes, select **Trigger Now** or **Debug** > **Use Activity Runtime**.
+- **Message**: Only one of the two auth methods (Key, SAS) can be specified.
+- **Cause**: An invalid authentication method is provided in the linked service.
+- **Recommendation**: Use key or SAS authentication for the Azure Blob linked service.
-#### Scenario 3: Transient issues
+### Error code: DF-Blob-InvalidCloudType
- Transient issues with microservices involved in the execution can cause the run to fail.<br/>
-
- Configuring retries in the pipeline activity can resolve the problems caused by transient issues. For more guidance, see [Activity Policy](concepts-pipelines-activities.md#activity-json).
+- **Message**: Cloud type is invalid.
+- **Cause**: An invalid cloud type is provided.
+- **Recommendation**: Please check the cloud type in your related Azure Blob linked service.
+### Error code: DF-Cosmos-DeleteDataFailed
-## Common error codes and messages
+- **Message**: Failed to delete data from cosmos after 3 times retry.
+- **Cause**: The throughput of the Cosmos collection is too low and leads to throttling, or the row data doesn't exist in Cosmos.
+- **Recommendation**: Please take the following actions to solve this problem:
+ - If the error is 404, make sure that the related row data exists in the Cosmos collection.
+ - If the error is throttling, please increase the Cosmos collection throughput or set it to the automatic scale.
+ - If the error is a request timeout, please set 'Batch size' in the Cosmos sink to a smaller value, for example 1000.
-This section lists common error codes and messages reported by mapping data flows in Azure Data Factory, along with their associated causes and recommendations.
+### Error code: DF-Cosmos-FailToResetThroughput
-### Error code: DF-Executor-SourceInvalidPayload
+- **Message**: Cosmos DB throughput scale operation cannot be performed because another scale operation is in progress, please retry after sometime.
+- **Cause**: The throughput scale operation of the Azure Cosmos DB can't be performed because another scale operation is in progress.
+- **Recommendation**: Sign in to your Azure Cosmos DB account, and manually change the container throughput to autoscale, or add a custom activity after mapping data flows to reset the throughput (see the sketch below).
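For example, a small post-run step (such as a custom activity) could reset the container's provisioned throughput once the data flow completes. Below is a minimal sketch using the azure-cosmos Python package; the account URL, key, database, container, and the 400 RU/s target are placeholder assumptions, not values from the article.

```python
# Hedged sketch: scale a Cosmos DB container's manual throughput back down after a run.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    credential="<your-key>",
)
container = (
    client.get_database_client("<your-database>")
    .get_container_client("<your-container>")
)

# Reset provisioned throughput to a lower steady-state value (400 RU/s here).
container.replace_throughput(400)
print(container.get_throughput().offer_throughput)
```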
-- **Message**: Data preview, debug, and pipeline data flow execution failed because container does not exist-- **Cause**: A dataset contains a container that doesn't exist in storage.-- **Recommendation**: Make sure that the container referenced in your dataset exists and can be accessed.
+### Error code: DF-Cosmos-IdPropertyMissed
-### Error code: DF-Executor-SystemInvalidJson
+- **Message**: 'id' property should be mapped for delete and update operations.
+- **Cause**: The `id` property is missing for update and delete operations.
+- **Recommendation**: Make sure that the input data has an `id` column in Azure Cosmos DB sink transformation settings. If not, use a select or derived column transformation to generate this column before the sink transformation.
-- **Message**: JSON parsing error, unsupported encoding or multiline-- **Cause**: Possible problems with the JSON file: unsupported encoding, corrupt bytes, or using JSON source as a single document on many nested lines.-- **Recommendation**: Verify that the JSON file's encoding is supported. On the source transformation that's using a JSON dataset, expand **JSON Settings** and turn on **Single Document**.
-
-### Error code: DF-Executor-BroadcastTimeout
+### Error code: DF-Cosmos-InvalidAccountConfiguration
-- **Message**: Broadcast join timeout error, make sure broadcast stream produces data within 60 secs in debug runs and 300 secs in job runs-- **Cause**: Broadcast has a default timeout of 60 seconds on debug runs and 300 seconds on job runs. The stream chosen for broadcast is too large to produce data within this limit.-- **Recommendation**: Check the **Optimize** tab on your data flow transformations for join, exists, and lookup. The default option for broadcast is **Auto**. If **Auto** is set, or if you're manually setting the left or right side to broadcast under **Fixed**, you can either set a larger Azure integration runtime (IR) configuration or turn off broadcast. For the best performance in data flows, we recommend that you allow Spark to broadcast by using **Auto** and use a memory-optimized Azure IR.
-
- If you're running the data flow in a debug test execution from a debug pipeline run, you might run into this condition more frequently. That's because Azure Data Factory throttles the broadcast timeout to 60 seconds to maintain a faster debugging experience. You can extend the timeout to the 300-second timeout of a triggered run. To do so, you can use the **Debug** > **Use Activity Runtime** option to use the Azure IR defined in your Execute Data Flow pipeline activity.
+- **Message**: Either accountName or accountEndpoint should be specified.
+- **Cause**: Invalid account information is provided.
+- **Recommendation**: In the Cosmos DB linked service, specify the account name or account endpoint.
-- **Message**: Broadcast join timeout error, you can choose 'Off' of broadcast option in join/exists/lookup transformation to avoid this issue. If you intend to broadcast join option to improve performance, then make sure broadcast stream can produce data within 60 secs in debug runs and 300 secs in job runs.-- **Cause**: Broadcast has a default timeout of 60 seconds in debug runs and 300 seconds in job runs. On the broadcast join, the stream chosen for broadcast is too large to produce data within this limit. If a broadcast join isn't used, the default broadcast by dataflow can reach the same limit.-- **Recommendation**: Turn off the broadcast option or avoid broadcasting large data streams for which the processing can take more than 60 seconds. Choose a smaller stream to broadcast. Large Azure SQL Data Warehouse tables and source files aren't typically good choices. In the absence of a broadcast join, use a larger cluster if this error occurs.
+### Error code: DF-Cosmos-InvalidAccountKey
-### Error code: DF-Executor-Conversion
+- **Message**: The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used.
+- **Cause**: There isn't enough permission to read/write Azure Cosmos DB data.
+- **Recommendation**: Please use the read-write key to access Azure Cosmos DB.
-- **Message**: Converting to a date or time failed due to an invalid character-- **Cause**: Data isn't in the expected format.-- **Recommendation**: Use the correct data type.
+### Error code: DF-Cosmos-InvalidConnectionMode
-### Error code: DF-Executor-InvalidColumn
-- **Message**: Column name needs to be specified in the query, set an alias if using a SQL function-- **Cause**: No column name is specified.-- **Recommendation**: Set an alias if you're using a SQL function like min() or max().
+- **Message**: Invalid connection mode.
+- **Cause**: An invalid connection mode is provided.
+- **Recommendation**: Confirm that the supported mode is **Gateway** and **DirectHttps** in Cosmos DB settings.
-### Error code: DF-Executor-DriverError
-- **Message**: INT96 is legacy timestamp type, which is not supported by ADF Dataflow. Please consider upgrading the column type to the latest types.-- **Cause**: Driver error.-- **Recommendation**: INT96 is a legacy timestamp type that's not supported by Azure Data Factory data flow. Consider upgrading the column type to the latest type.
+### Error code: DF-Cosmos-InvalidPartitionKey
-### Error code: DF-Executor-BlockCountExceedsLimitError
-- **Message**: The uncommitted block count cannot exceed the maximum limit of 100,000 blocks. Check blob configuration.-- **Cause**: The maximum number of uncommitted blocks in a blob is 100,000.-- **Recommendation**: Contact the Microsoft product team for more details about this problem.
+- **Message**: Partition key path cannot be empty for update and delete operations.
+- **Cause**: The partition key path is empty for update and delete operations.
+- **Recommendation**: Provide the partition key in the Azure Cosmos DB sink settings.
+- **Message**: Partition key is not mapped in sink for delete and update operations.
+- **Cause**: An invalid partition key is provided.
+- **Recommendation**: In Cosmos DB sink settings, use the right partition key that is the same as your container's partition key.
-### Error code: DF-Executor-PartitionDirectoryError
-- **Message**: The specified source path has either multiple partitioned directories (for example, &lt;Source Path&gt;/<Partition Root Directory 1>/a=10/b=20, &lt;Source Path&gt;/&lt;Partition Root Directory 2&gt;/c=10/d=30) or partitioned directory with other file or non-partitioned directory (for example &lt;Source Path&gt;/&lt;Partition Root Directory 1&gt;/a=10/b=20, &lt;Source Path&gt;/Directory 2/file1), remove partition root directory from source path and read it through separate source transformation.-- **Cause**: The source path has either multiple partitioned directories or a partitioned directory that has another file or non-partitioned directory.-- **Recommendation**: Remove the partitioned root directory from the source path and read it through separate source transformation.
+### Error code: DF-Cosmos-InvalidPartitionKeyContent
-### Error code: DF-Executor-InvalidType
-- **Message**: Please make sure that the type of parameter matches with type of value passed in. Passing float parameters from pipelines isn't currently supported.-- **Cause**: Data types are incompatible between the declared type and the actual parameter value.-- **Recommendation**: Check that the parameter values passed into the data flow match the declared type.
+- **Message**: partition key should start with /.
+- **Cause**: An invalid partition key is provided.
+- **Recommendation**: Ensure that the partition key starts with `/` in Cosmos DB sink settings, for example: `/movieId`.
-### Error code: DF-Executor-ParseError
-- **Message**: Expression cannot be parsed.-- **Cause**: An expression generated parsing errors because of incorrect formatting.-- **Recommendation**: Check the formatting in the expression.
+### Error code: DF-Cosmos-PartitionKeyMissed
-### Error code: DF-Executor-SystemImplicitCartesian
-- **Message**: Implicit cartesian product for INNER join is not supported, use CROSS JOIN instead. Columns used in join should create a unique key for rows.-- **Cause**: Implicit cartesian products for INNER joins between logical plans aren't supported. If you're using columns in the join, create a unique key.-- **Recommendation**: For non-equality based joins, use CROSS JOIN.
+- **Message**: Partition key path should be specified for update and delete operations.
+- **Cause**: The partition key path is missing in the Azure Cosmos DB sink.
+- **Recommendation**: Provide the partition key in the Azure Cosmos DB sink settings.
-### Error code: GetCommand OutputAsync failed
-- **Message**: During Data Flow debug and data preview: GetCommand OutputAsync failed with ...
-- **Cause**: This error is a back-end service error.
-- **Recommendation**: Retry the operation and restart your debugging session. If retrying and restarting doesn't resolve the problem, contact customer support.
+### Error code: DF-Cosmos-ResourceNotFound
-### Error code: DF-Executor-OutOfMemoryError
-
-- **Message**: Cluster ran into out of memory issue during execution, please retry using an integration runtime with bigger core count and/or memory optimized compute type-- **Cause**: The cluster is running out of memory.-- **Recommendation**: Debug clusters are meant for development. Use data sampling and an appropriate compute type and size to run the payload. For performance tips, see [Mapping data flow performance guide](concepts-data-flow-performance.md).
+- **Message**: Resource not found.
+- **Cause**: Invalid configuration is provided (for example, the partition key with invalid characters) or the resource doesn't exist.
+- **Recommendation**: To solve this issue, refer to [Diagnose and troubleshoot Azure Cosmos DB not found exceptions](../cosmos-db/troubleshoot-not-found.md).
-### Error code: DF-Executor-illegalArgument
+### Error code: DF-Cosmos-ShortTypeNotSupport
-- **Message**: Please make sure that the access key in your Linked Service is correct-- **Cause**: The account name or access key is incorrect.-- **Recommendation**: Ensure that the account name or access key specified in your linked service is correct.
+- **Message**: Short data type is not supported in Cosmos DB.
+- **Cause**: The short data type is not supported in the Azure Cosmos DB.
+- **Recommendation**: Add a derived column transformation to convert related columns from short to integer before using them in the Azure Cosmos DB sink transformation.
+
+### Error code: DF-Delimited-ColumnDelimiterMissed
+
+- **Message**: Column delimiter is required for parse.
+- **Cause**: The column delimiter is missing.
+- **Recommendation**: In your CSV settings, confirm that you have specified the column delimiter, which is required for parsing.
+
+### Error code: DF-Delimited-InvalidConfiguration
+
+- **Message**: Either one of empty lines or custom header should be specified.
+- **Cause**: An invalid delimited configuration is provided.
+- **Recommendation**: Please update the CSV settings to specify either empty lines or a custom header.
+
+### Error code: DF-DELTA-InvalidConfiguration
+
+- **Message**: Timestamp and version can't be set at the same time.
+- **Cause**: The timestamp and version can't be set at the same time.
+- **Recommendation**: Set the timestamp or version in the delta settings.
+
+### Error code: DF-Delta-InvalidProtocolVersion
+
+- **Message**: Unsupported Delta table protocol version, Refer https://docs.delta.io/latest/versioning.html#-table-version for versioning information.
+- **Cause**: Data flows don't support this version of the Delta table protocol.
+- **Recommendation**: Use a lower version of the Delta table protocol.
+
+### Error code: DF-DELTA-InvalidTableOperationSettings
+
+- **Message**: Recreate and truncate options can't be both specified.
+- **Cause**: Recreate and truncate options can't be specified simultaneously.
+- **Recommendation**: Update delta settings to have either recreate or truncate operation.
+
+### Error code: DF-DELTA-KeyColumnMissed
+
+- **Message**: Key column(s) should be specified for non-insertable operations.
+- **Cause**: Key column(s) are missing for non-insertable operations.
+- **Recommendation**: Specify key column(s) on delta sink to have non-insertable operations.
+
+### Error code: DF-Excel-DifferentSchemaNotSupport
+
+- **Message**: Read excel files with different schema is not supported now.
+- **Cause**: Reading excel files with different schemas is not supported now.
+- **Recommendation**: Please apply one of following options to solve this problem:
+ - Use **ForEach** + **data flow** activity to read Excel worksheets one by one.
+ - Update each worksheet schema to have the same columns manually before reading data.
+
+### Error code: DF-Excel-InvalidDataType
+
+- **Message**: Data type is not supported.
+- **Cause**: The data type is not supported.
+- **Recommendation**: Please change the data type to **'string'** for related input data columns.
+
+### Error code: DF-Excel-InvalidFile
+
+- **Message**: Invalid excel file is provided while only .xlsx and .xls are supported.
+- **Cause**: Invalid Excel files are provided.
+- **Recommendation**: Use the wildcard to filter, and get `.xls` and `.xlsx` Excel files before reading data.
+
+### Error code: DF-Excel-InvalidRange
+
+- **Message**: Invalid range is provided.
+- **Cause**: An invalid range is provided.
+- **Recommendation**: Check the parameter value and specify the valid range by the following reference: [Excel format in Azure Data Factory-Dataset properties](./format-excel.md#dataset-properties).
+
+### Error code: DF-Excel-InvalidWorksheetConfiguration
+
+- **Message**: Excel sheet name and index cannot exist at the same time.
+- **Cause**: The Excel sheet name and index are provided at the same time.
+- **Recommendation**: Check the parameter value and specify the sheet name or index to read the Excel data.
+
+### Error code: DF-Excel-WorksheetConfigMissed
+
+- **Message**: Excel sheet name or index is required.
+- **Cause**: An invalid Excel worksheet configuration is provided.
+- **Recommendation**: Check the parameter value and specify the sheet name or index to read the Excel data.
+
+### Error code: DF-Excel-WorksheetNotExist
+
+- **Message**: Excel worksheet does not exist.
+- **Cause**: An invalid worksheet name or index is provided.
+- **Recommendation**: Check the parameter value and specify a valid sheet name or index to read the Excel data.
+
+### Error code: DF-Executor-AcquireStorageMemoryFailed
+
+- **Message**: Transferring unroll memory to storage memory failed. Cluster ran out of memory during execution. Please retry using an integration runtime with more cores and/or memory optimized compute type.
+- **Cause**: The cluster has insufficient memory.
+- **Recommendation**: Please use an integration runtime with more cores and/or the memory optimized compute type.
+
+### Error code: DF-Executor-BlockCountExceedsLimitError
+
+- **Message**: The uncommitted block count cannot exceed the maximum limit of 100,000 blocks. Check blob configuration.
+- **Cause**: The maximum number of uncommitted blocks in a blob is 100,000.
+- **Recommendation**: Contact the Microsoft product team for more details about this problem.
+
+### Error code: DF-Executor-BroadcastFailure
+
+- **Message**: Dataflow execution failed during broadcast exchange. Potential causes include misconfigured connections at sources or a broadcast join timeout error. To ensure the sources are configured correctly, please test the connection or run a source data preview in a Dataflow debug session. To avoid the broadcast join timeout, you can choose the 'Off' broadcast option in the Join/Exists/Lookup transformations. If you intend to use the broadcast option to improve performance then make sure broadcast streams can produce data within 60 secs for debug runs and within 300 secs for job runs. If problem persists, contact customer support.
+
+- **Cause**:
+ 1. The source connection/configuration error could lead to a broadcast failure in join/exists/lookup transformations.
+ 2. Broadcast has a default timeout of 60 seconds in debug runs and 300 seconds in job runs. On the broadcast join, the stream chosen for the broadcast seems too large to produce data within this limit. If a broadcast join is not used, the default broadcast done by a data flow can reach the same limit.
+
+- **Recommendation**:
+ - Do data preview at sources to confirm the sources are well configured.
+ - Turn off the broadcast option or avoid broadcasting large data streams where the processing can take more than 60 seconds. Instead, choose a smaller stream to broadcast.
+ - Large SQL/Data Warehouse tables and source files are typically bad candidates.
+ - In the absence of a broadcast join, use a larger cluster if the error occurs.
+ - If the problem persists, contact the customer support.
+
+### Error code: DF-Executor-BroadcastTimeout
+
+- **Message**: Broadcast join timeout error, make sure broadcast stream produces data within 60 secs in debug runs and 300 secs in job runs
+- **Cause**: Broadcast has a default timeout of 60 seconds on debug runs and 300 seconds on job runs. The stream chosen for broadcast is too large to produce data within this limit.
+- **Recommendation**: Check the **Optimize** tab on your data flow transformations for join, exists, and lookup. The default option for broadcast is **Auto**. If **Auto** is set, or if you're manually setting the left or right side to broadcast under **Fixed**, you can either set a larger Azure integration runtime (IR) configuration or turn off broadcast. For the best performance in data flows, we recommend that you allow Spark to broadcast by using **Auto** and use a memory-optimized Azure IR.
+
+ If you're running the data flow in a debug test execution from a debug pipeline run, you might run into this condition more frequently. That's because Azure Data Factory throttles the broadcast timeout to 60 seconds to maintain a faster debugging experience. You can extend the timeout to the 300-second timeout of a triggered run. To do so, you can use the **Debug** > **Use Activity Runtime** option to use the Azure IR defined in your Execute Data Flow pipeline activity.
+
+- **Message**: Broadcast join timeout error, you can choose 'Off' of broadcast option in join/exists/lookup transformation to avoid this issue. If you intend to broadcast join option to improve performance, then make sure broadcast stream can produce data within 60 secs in debug runs and 300 secs in job runs.
+- **Cause**: Broadcast has a default timeout of 60 seconds in debug runs and 300 seconds in job runs. On the broadcast join, the stream chosen for broadcast is too large to produce data within this limit. If a broadcast join isn't used, the default broadcast by dataflow can reach the same limit.
+- **Recommendation**: Turn off the broadcast option or avoid broadcasting large data streams for which the processing can take more than 60 seconds. Choose a smaller stream to broadcast. Large Azure SQL Data Warehouse tables and source files aren't typically good choices. In the absence of a broadcast join, use a larger cluster if this error occurs.
### Error code: DF-Executor-ColumnUnavailable
+
- **Message**: Column name used in expression is unavailable or invalid.
- **Cause**: An invalid or unavailable column name is used in an expression.
- **Recommendation**: Check the column names used in expressions.
- ### Error code: DF-Executor-OutOfDiskSpaceError
+### Error code: DF-Executor-Conversion
+
+- **Message**: Converting to a date or time failed due to an invalid character
+- **Cause**: Data isn't in the expected format.
+- **Recommendation**: Use the correct data type.
+
+### Error code: DF-Executor-DriverError
+
+- **Message**: INT96 is legacy timestamp type, which is not supported by ADF Dataflow. Please consider upgrading the column type to the latest types.
+- **Cause**: Driver error.
+- **Recommendation**: INT96 is a legacy timestamp type that's not supported by Azure Data Factory data flow. Consider upgrading the column type to the latest type.
+
+### Error code: DF-Executor-FieldNotExist
+
+- **Message**: Field in struct does not exist.
+- **Cause**: Invalid or unavailable field names are used in expressions.
+- **Recommendation**: Check field names used in expressions.
+
+### Error code: DF-Executor-illegalArgument
+
+- **Message**: Please make sure that the access key in your Linked Service is correct
+- **Cause**: The account name or access key is incorrect.
+- **Recommendation**: Ensure that the account name or access key specified in your linked service is correct.
+
+### Error code: DF-Executor-IncorrectLinkedServiceConfiguration
+
+- **Message**: Possible causes are,
+ - The linked service is incorrectly configured as type 'Azure Blob Storage' instead of 'Azure DataLake Storage Gen2' and it has 'Hierarchical namespace' enabled. Please create a new linked service of type 'Azure DataLake Storage Gen2' for the storage account in question.
+ - Certain scenarios with any combinations of 'Clear the folder', non-default 'File name option', 'Key' partitioning may fail with a Blob linked service on a 'Hierarchical namespace' enabled storage account. You can disable these dataflow settings (if enabled) and try again in case you do not want to create a new Gen2 linked service.
+- **Cause**: Delete operation on the Azure Data Lake Storage Gen2 account failed since its linked service is incorrectly configured as Azure Blob Storage.
+- **Recommendation**: Create a new Azure Data Lake Storage Gen2 linked service for the storage account. If that's not feasible, some known scenarios like **Clear the folder**, non-default **File name option**, **Key** partitioning in any combinations may fail with an Azure Blob Storage linked service on a hierarchical namespace enabled storage account. You can disable these data flow settings if you enabled them and try again.
+
+### Error code: DF-Executor-InternalServerError
+
+- **Message**: Failed to execute dataflow with internal server error, please retry later. If issue persists, please contact Microsoft support for further assistance
+- **Cause**: The data flow execution failed because of a system error.
+- **Recommendation**: To solve this issue, refer to [Internal server errors](#internal-server-errors).
+
+### Error code: DF-Executor-InvalidColumn
+
+- **Message**: Column name needs to be specified in the query, set an alias if using a SQL function.
+- **Cause**: No column name is specified.
+- **Recommendation**: Set an alias if you're using a SQL function like min() or max().
+
+### Error code: DF-Executor-InvalidInputColumns
+
+- **Message**: The column in source configuration cannot be found in source data's schema.
+- **Cause**: Invalid columns are provided on the source.
+- **Recommendation**: Check the columns in the source configuration and make sure that they're a subset of the source data's schema.
+
+### Error code: DF-Executor-InvalidOutputColumns
+
+- **Message**: The result has 0 output columns. Please ensure at least one column is mapped.
+- **Cause**: No column is mapped.
+- **Recommendation**: Please check the sink schema to ensure that at least one column is mapped.
+
+### Error code: DF-Executor-InvalidPartitionFileNames
+
+- **Message**: File names cannot have empty value(s) while file name option is set as per partition.
+- **Cause**: Invalid partition file names are provided.
+- **Recommendation**: Please check your sink settings to make sure the file name values aren't empty.
+
+### Error code: DF-Executor-InvalidPath
+
+- **Message**: Path does not resolve to any file(s). Please make sure the file/folder exists and is not hidden.
+- **Cause**: An invalid file/folder path is provided, which can't be found or accessed.
+- **Recommendation**: Please check the file/folder path, and make sure it exists and can be accessed in your storage.
+
+### Error code: DF-Executor-InvalidStageConfiguration
+
+- **Message**: Storage with user assigned managed identity authentication in staging is not supported.
+- **Cause**: An exception occurred because of an invalid staging configuration.
+- **Recommendation**: The user-assigned managed identity authentication is not supported in staging. Use a different authentication to create an Azure Data Lake Storage Gen2 or Azure Blob Storage linked service, then use it as staging in mapping data flows.
+
+### Error code: DF-Executor-InvalidType
+
+- **Message**: Please make sure that the type of parameter matches with type of value passed in. Passing float parameters from pipelines isn't currently supported.
+- **Cause**: Data types are incompatible between the declared type and the actual parameter value.
+- **Recommendation**: Check that the parameter values passed into the data flow match the declared type.
+
+### Error code: DF-Executor-OutOfDiskSpaceError
+
- **Message**: Internal server error
- **Cause**: The cluster is running out of disk space.
- **Recommendation**: Retry the pipeline. If doing so doesn't resolve the problem, contact customer support.
+### Error code: DF-Executor-OutOfMemoryError
+
+- **Message**: Cluster ran into out of memory issue during execution, please retry using an integration runtime with bigger core count and/or memory optimized compute type
+- **Cause**: The cluster is running out of memory.
+- **Recommendation**: Debug clusters are meant for development. Use data sampling and an appropriate compute type and size to run the payload. For performance tips, see [Mapping data flow performance guide](concepts-data-flow-performance.md).
+
+### Error code: DF-Executor-OutOfMemorySparkBroadcastError
+
+- **Message**: Explicitly broadcasted dataset using left/right option should be small enough to fit in node's memory. You can choose broadcast option 'Off' in join/exists/lookup transformation to avoid this issue or use an integration runtime with higher memory.
+- **Cause**: The size of the broadcasted table far exceeds the limits of the node memory.
+- **Recommendation**: The broadcast left/right option should be used only for a dataset small enough to fit into the node's memory, so make sure to configure the node size appropriately or turn off the broadcast option.
+
+### Error code: DF-Executor-OutOfMemorySparkError
+
+- **Message**: The data may be too large to fit in the memory.
+- **Cause**: The size of the data far exceeds the limit of the node memory.
+- **Recommendation**: Increase the core count and switch to the memory optimized compute type.
+
+### Error code: DF-Executor-ParseError
+
+- **Message**: Expression cannot be parsed.
+- **Cause**: An expression generated parsing errors because of incorrect formatting.
+- **Recommendation**: Check the formatting in the expression.
+
+### Error code: DF-Executor-PartitionDirectoryError
+
+- **Message**: The specified source path has either multiple partitioned directories (for example, &lt;Source Path&gt;/<Partition Root Directory 1>/a=10/b=20, &lt;Source Path&gt;/&lt;Partition Root Directory 2&gt;/c=10/d=30) or partitioned directory with other file or non-partitioned directory (for example &lt;Source Path&gt;/&lt;Partition Root Directory 1&gt;/a=10/b=20, &lt;Source Path&gt;/Directory 2/file1), remove partition root directory from source path and read it through separate source transformation.
+- **Cause**: The source path has either multiple partitioned directories or a partitioned directory that has another file or non-partitioned directory.
+- **Recommendation**: Remove the partitioned root directory from the source path and read it through separate source transformation.
+
+### Error code: DF-Executor-RemoteRPCClientDisassociated
+
+- **Message**: Job aborted due to stage failure. Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues.
+- **Cause**: The data flow activity run failed because of transient network issues or because one node in the Spark cluster ran out of memory.
+- **Recommendation**: Use the following options to solve this problem:
+ - Option-1: Use a powerful cluster (both driver and executor nodes have enough memory to handle big data) to run data flow pipelines with "Compute type" set to "Memory optimized". The settings are shown in the picture below.
+
+ :::image type="content" source="media/data-flow-troubleshoot-guide/configure-compute-type.png" alt-text="Screenshot that shows the configuration of Compute type.":::
+
+ - Option-2: Use larger cluster size (for example, 48 cores) to run your data flow pipelines. You can learn more about cluster size through this document: [Cluster size](./concepts-integration-runtime-performance.md#cluster-size).
+
+ - Option-3: Repartition your input data. For the task running on the data flow spark cluster, one partition is one task and runs on one node. If data in one partition is too large, the related task running on the node needs to consume more memory than the node itself, which causes failure. So you can use repartition to avoid data skew, and ensure that data size in each partition is average while the memory consumption isn't too heavy.
+
+ :::image type="content" source="media/data-flow-troubleshoot-guide/configure-partition.png" alt-text="Screenshot that shows the configuration of partitions.":::
+
+ > [!NOTE]
+ > You need to evaluate the data size or the partition number of input data, then set reasonable partition number under "Optimize". For example, the cluster that you use in the data flow pipeline execution is 8 cores and the memory of each core is 20GB, but the input data is 1000GB with 10 partitions. If you directly run the data flow, it will meet the OOM issue because 1000GB/10 > 20GB, so it is better to set repartition number to 100 (1000GB/100 < 20GB).
+
+ - Option-4: Tune and optimize source/sink/transformation settings. For example, try to copy all files in one container, and don't use the wildcard pattern. For more detailed information, reference [Mapping data flows performance and tuning guide](./concepts-data-flow-performance.md).
+
+### Error code: DF-Executor-SourceInvalidPayload
+
+- **Message**: Data preview, debug, and pipeline data flow execution failed because container does not exist.
+- **Cause**: A dataset contains a container that doesn't exist in storage.
+- **Recommendation**: Make sure that the container referenced in your dataset exists and can be accessed.
+
+### Error code: DF-Executor-StoreIsNotDefined
- ### Error code: DF-Executor-StoreIsNotDefined
- **Message**: The store configuration is not defined. This error is potentially caused by invalid parameter assignment in the pipeline.
- **Cause**: Invalid store configuration is provided.
- **Recommendation**: Check the parameter value assignment in the pipeline. A parameter expression may contain invalid characters.
-### Error code: InvalidTemplate
-- **Message**: The pipeline expression cannot be evaluated.
-- **Cause**: The pipeline expression passed in the Data Flow activity isn't being processed correctly because of a syntax error.
-- **Recommendation**: Check data flow activity name. Check expressions in activity monitoring to verify the expressions. For example, data flow activity name can't have a space or a hyphen.
-
-### Error code: 2011
-- **Message**: The activity was running on Azure Integration Runtime and failed to decrypt the credential of data store or compute connected via a Self-hosted Integration Runtime. Please check the configuration of linked services associated with this activity, and make sure to use the proper integration runtime type.
-- **Cause**: Data flow doesn't support linked services on self-hosted integration runtimes.
-- **Recommendation**: Configure data flow to run on a Managed Virtual Network integration runtime.
-
-### Error code: DF-Xml-InvalidValidationMode
-- **Message**: Invalid xml validation mode is provided.
-- **Cause**: An invalid XML validation mode is provided.
-- **Recommendation**: Check the parameter value and specify the right validation mode.
+### Error code: DF-Executor-SystemImplicitCartesian
-### Error code: DF-Xml-InvalidDataField
-- **Message**: The field for corrupt records must be string type and nullable.
-- **Cause**: An invalid data type of the column `\"_corrupt_record\"` is provided in the XML source.
-- **Recommendation**: Make sure that the column `\"_corrupt_record\"` in the XML source has a string data type and nullable.
+- **Message**: Implicit cartesian product for INNER join is not supported, use CROSS JOIN instead. Columns used in join should create a unique key for rows.
+- **Cause**: Implicit cartesian products for INNER joins between logical plans aren't supported. If you're using columns in the join, create a unique key.
+- **Recommendation**: For non-equality based joins, use CROSS JOIN.
-### Error code: DF-Xml-MalformedFile
-- **Message**: Malformed xml with path in FAILFAST mode.
-- **Cause**: Malformed XML with path exists in the FAILFAST mode.
-- **Recommendation**: Update the content of the XML file to the right format.
+### Error code: DF-Executor-SystemInvalidJson
-### Error code: DF-Xml-InvalidReferenceResource
-- **Message**: Reference resource in xml data file cannot be resolved.
-- **Cause**: The reference resource in the XML data file can't be resolved.
-- **Recommendation**: Check the reference resource in the XML data file.
+- **Message**: JSON parsing error, unsupported encoding or multiline
+- **Cause**: Possible problems with the JSON file: unsupported encoding, corrupt bytes, or using JSON source as a single document on many nested lines.
+- **Recommendation**: Verify that the JSON file's encoding is supported. On the source transformation that's using a JSON dataset, expand **JSON Settings** and turn on **Single Document**.
-### Error code: DF-Xml-InvalidSchema
-- **Message**: Schema validation failed.
-- **Cause**: The invalid schema is provided on the XML source.
-- **Recommendation**: Check the schema settings on the XML source to make sure that it's the subset schema of the source data.
+### Error code: DF-File-InvalidSparkFolder
-### Error code: DF-Xml-UnsupportedExternalReferenceResource
-- **Message**: External reference resource in xml data file is not supported.
-- **Cause**: The external reference resource in the XML data file is not supported.
-- **Recommendation**: Update the XML file content when the external reference resource is not supported now.
+- **Message**: Failed to read footer for file.
+- **Cause**: Folder *_spark_metadata* is created by the structured streaming job.
+- **Recommendation**: Delete *_spark_metadata* folder if it exists. For more information, refer to this [article](https://forums.databricks.com/questions/12447/javaioioexception-could-not-read-footer-for-file-f.html).
### Error code: DF-GEN2-InvalidAccountConfiguration
+
- **Message**: Either one of account key or tenant/spnId/spnCredential/spnCredentialType or miServiceUri/miServiceToken should be specified.
- **Cause**: An invalid credential is provided in the ADLS Gen2 linked service.
- **Recommendation**: Update the ADLS Gen2 linked service to have the right credential configuration.

### Error code: DF-GEN2-InvalidAuthConfiguration
+
- **Message**: Only one of the three auth methods (Key, ServicePrincipal and MI) can be specified.
- **Cause**: Invalid auth method is provided in ADLS gen2 linked service.
- **Recommendation**: Update the ADLS Gen2 linked service to have one of three authentication methods that are Key, ServicePrincipal and MI.
-### Error code: DF-GEN2-InvalidServicePrincipalCredentialType
-- **Message**: Service principal credential type is invalid.-- **Cause**: The service principal credential type is invalid.-- **Recommendation**: Please update the ADLS Gen2 linked service to set the right service principal credential type.-
-### Error code: DF-Blob-InvalidAccountConfiguration
-- **Message**: Either one of account key or sas token should be specified.-- **Cause**: An invalid credential is provided in the Azure Blob linked service.-- **Recommendation**: Use either account key or SAS token for the Azure Blob linked service.-
-### Error code: DF-Blob-InvalidAuthConfiguration
-- **Message**: Only one of the two auth methods (Key, SAS) can be specified.-- **Cause**: An invalid authentication method is provided in the linked service.-- **Recommendation**: Use key or SAS authentication for the Azure Blob linked service.-
-### Error code: DF-Cosmos-PartitionKeyMissed
-- **Message**: Partition key path should be specified for update and delete operations.-- **Cause**: The partition key path is missing in the Azure Cosmos DB sink.-- **Recommendation**: Provide the partition key in the Azure Cosmos DB sink settings.-
-### Error code: DF-Cosmos-InvalidPartitionKey
-- **Message**: Partition key path cannot be empty for update and delete operations.-- **Cause**: The partition key path is empty for update and delete operations.-- **Recommendation**: Use the providing partition key in the Azure Cosmos DB sink settings.-- **Message**: Partition key is not mapped in sink for delete and update operations.-- **Cause**: An invalid partition key is provided.-- **Recommendation**: In Cosmos DB sink settings, use the right partition key that is same as your container's partition key.
+### Error code: DF-GEN2-InvalidCloudType
-### Error code: DF-Cosmos-IdPropertyMissed
-- **Message**: 'id' property should be mapped for delete and update operations.-- **Cause**: The `id` property is missed for update and delete operations.-- **Recommendation**: Make sure that the input data has an `id` column in Azure Cosmos DB sink transformation settings. If not, use a select or derived column transformation to generate this column before the sink transformation.
+- **Message**: Cloud type is invalid.
+- **Cause**: An invalid cloud type is provided.
+- **Recommendation**: Check the cloud type in your related ADLS Gen2 linked service.
-### Error code: DF-Cosmos-InvalidPartitionKeyContent
-- **Message**: partition key should start with /.-- **Cause**: An invalid partition key is provided.-- **Recommendation**: Ensure that the partition key start with `/` in Cosmos DB sink settings, for example: `/movieId`.
+### Error code: DF-GEN2-InvalidServicePrincipalCredentialType
-### Error code: DF-Cosmos-InvalidConnectionMode
-- **Message**: Invalid connection mode.-- **Cause**: An invalid connection mode is provided.-- **Recommendation**: Confirm that the supported mode is **Gateway** and **DirectHttps** in Cosmos DB settings.
+- **Message**: Service principal credential type is invalid.
+- **Cause**: The service principal credential type is invalid.
+- **Recommendation**: Please update the ADLS Gen2 linked service to set the right service principal credential type.
-### Error code: DF-Cosmos-InvalidAccountConfiguration
-- **Message**: Either accountName or accountEndpoint should be specified.-- **Cause**: Invalid account information is provided.-- **Recommendation**: In the Cosmos DB linked service, specify the account name or account endpoint.
+### Error code: DF-GEN2-InvalidStorageAccountConfiguration
+
+- **Message**: Blob operation is not supported on older storage accounts. Creating a new storage account may fix the issue.
+- **Cause**: The storage account is too old.
+- **Recommendation**: Create a new storage account.
### Error code: DF-Github-WriteNotSupported
+
- **Message**: GitHub store does not allow writes.
- **Cause**: The GitHub store is read only.
- **Recommendation**: The store entity definition is in some other place.
-### Error code: DF-PGSQL-InvalidCredential
-- **Message**: User/password should be specified.-- **Cause**: The User/password is missed.-- **Recommendation**: Make sure that you have right credential settings in the related PostgreSQL linked service.-
-### Error code: DF-Snowflake-InvalidStageConfiguration
-- **Message**: Only blob storage type can be used as stage in snowflake read/write operation.-- **Cause**: An invalid staging configuration is provided in the Snowflake.-- **Recommendation**: Update Snowflake staging settings to ensure that only Azure Blob linked service is used.--- **Message**: Snowflake stage properties should be specified with Azure Blob + SAS authentication.-- **Cause**: An invalid staging configuration is provided in the Snowflake.-- **Recommendation**: Ensure that only the Azure Blob + SAS authentication is specified in the Snowflake staging settings.-
-### Error code: DF-Snowflake-InvalidDataType
-- **Message**: The spark type is not supported in snowflake.
-- **Cause**: An invalid data type is provided in the Snowflake.
-- **Recommendation**: Please use the derive transformation before applying the Snowflake sink to update the related column of the input data into the string type.

### Error code: DF-Hive-InvalidBlobStagingConfiguration
+
- **Message**: Blob storage staging properties should be specified.
- **Cause**: An invalid staging configuration is provided in the Hive.
- **Recommendation**: Please check if the account key, account name and container are set properly in the related Blob linked service, which is used as staging.
+### Error code: DF-Hive-InvalidDataType
+
+- **Message**: Unsupported Column(s).
+- **Cause**: Unsupported Column(s) are provided.
+- **Recommendation**: Update the column of input data to match the data type supported by the Hive.
+
### Error code: DF-Hive-InvalidGen2StagingConfiguration
+
- **Message**: ADLS Gen2 storage staging only support service principal key credential.
- **Cause**: An invalid staging configuration is provided in the Hive.
- **Recommendation**: Please update the related ADLS Gen2 linked service that is used as staging. Currently, only the service principal key credential is supported.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: An invalid staging configuration is provided in the Hive.
- **Recommendation**: Update the related ADLS Gen2 linked service with right credentials that are used as staging in the Hive.
-### Error code: DF-Hive-InvalidDataType
-- **Message**: Unsupported Column(s).
-- **Cause**: Unsupported Column(s) are provided.
-- **Recommendation**: Update the column of input data to match the data type supported by the Hive.

### Error code: DF-Hive-InvalidStorageType
+
- **Message**: Storage type can either be blob or gen2.
- **Cause**: Only Azure Blob or ADLS Gen2 storage type is supported.
- **Recommendation**: Choose the right storage type from Azure Blob or ADLS Gen2.
-### Error code: DF-Delimited-InvalidConfiguration
-- **Message**: Either one of empty lines or custom header should be specified.-- **Cause**: An invalid delimited configuration is provided.-- **Recommendation**: Please update the CSV settings to specify one of empty lines or the custom header.
+### Error code: DF-JSON-WrongDocumentForm
-### Error code: DF-Delimited-ColumnDelimiterMissed
-- **Message**: Column delimiter is required for parse.-- **Cause**: The column delimiter is missed.-- **Recommendation**: In your CSV settings, confirm that you have the column delimiter which is required for parse.
+- **Message**: Malformed records are detected in schema inference. Parse Mode: FAILFAST.
+- **Cause**: The wrong document form is selected to parse the JSON file(s).
+- **Recommendation**: Try different **Document form** (**Single document**/**Document per line**/**Array of documents**) in JSON settings. Most cases of parsing errors are caused by wrong configuration.
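For reference, the document forms correspond to differently shaped files. The illustrative records below parse with **Document per line**; the same records wrapped in a top-level `[ ... ]` array need **Array of documents** instead, and one object whose properties span multiple lines needs **Single document**:

```json
{"id": 1, "name": "alpha"}
{"id": 2, "name": "beta"}
```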
-### Error code: DF-MSSQL-InvalidCredential
-- **Message**: Either one of user/pwd or tenant/spnId/spnKey or miServiceUri/miServiceToken should be specified.-- **Cause**: An invalid credential is provided in the MSSQL linked service.-- **Recommendation**: Please update the related MSSQL linked service with right credentials, and one of **user/pwd** or **tenant/spnId/spnKey** or **miServiceUri/miServiceToken** should be specified.
+### Error code: DF-MSSQL-ErrorRowsFound
-### Error code: DF-MSSQL-InvalidDataType
-- **Message**: Unsupported field(s).-- **Cause**: Unsupported field(s) are provided.-- **Recommendation**: Modify the input data column to match the data type supported by MSSQL.
+- **Cause**: Error/Invalid rows were found while writing to Azure SQL Database sink.
+- **Recommendation**: Please find the error rows in the rejected data storage location if configured.
+
+### Error code: DF-MSSQL-ExportErrorRowFailed
+
+- **Message**: Exception is happened while writing error rows to storage.
+- **Cause**: An exception happened while writing error rows to the storage.
+- **Recommendation**: Check your rejected data linked service configuration.
### Error code: DF-MSSQL-InvalidAuthConfiguration
+
- **Message**: Only one of the three auth methods (Key, ServicePrincipal and MI) can be specified.
- **Cause**: An invalid authentication method is provided in the MSSQL linked service.
- **Recommendation**: You can only specify one of the three authentication methods (Key, ServicePrincipal and MI) in the related MSSQL linked service.

### Error code: DF-MSSQL-InvalidCloudType
+
- **Message**: Cloud type is invalid.
- **Cause**: An invalid cloud type is provided.
- **Recommendation**: Check your cloud type in the related MSSQL linked service.
-### Error code: DF-SQLDW-InvalidBlobStagingConfiguration
-- **Message**: Blob storage staging properties should be specified.-- **Cause**: Invalid blob storage staging settings are provided-- **Recommendation**: Please check if the Blob linked service used for staging has correct properties.
+### Error code: DF-MSSQL-InvalidCredential
-### Error code: DF-SQLDW-InvalidStorageType
-- **Message**: Storage type can either be blob or gen2.-- **Cause**: An invalid storage type is provided for staging.-- **Recommendation**: Check the storage type of the linked service used for staging and make sure that it's Blob or Gen2.
+- **Message**: Either one of user/pwd or tenant/spnId/spnKey or miServiceUri/miServiceToken should be specified.
+- **Cause**: An invalid credential is provided in the MSSQL linked service.
+- **Recommendation**: Please update the related MSSQL linked service with the right credentials; one of **user/pwd**, **tenant/spnId/spnKey**, or **miServiceUri/miServiceToken** should be specified.
-### Error code: DF-SQLDW-InvalidGen2StagingConfiguration
-- **Message**: ADLS Gen2 storage staging only support service principal key credential.-- **Cause**: An invalid credential is provided for the ADLS gen2 storage staging.-- **Recommendation**: Use the service principal key credential of the Gen2 linked service used for staging.
-
+### Error code: DF-MSSQL-InvalidDataType
-### Error code: DF-SQLDW-InvalidConfiguration
-- **Message**: ADLS Gen2 storage staging properties should be specified. Either one of key or tenant/spnId/spnCredential/spnCredentialType or miServiceUri/miServiceToken is required.-- **Cause**: Invalid ADLS Gen2 staging properties are provided.-- **Recommendation**: Please update ADLS Gen2 storage staging settings to have one of **key** or **tenant/spnId/spnCredential/spnCredentialType** or **miServiceUri/miServiceToken**.
+- **Message**: Unsupported field(s).
+- **Cause**: Unsupported field(s) are provided.
+- **Recommendation**: Modify the input data column to match the data type supported by MSSQL.
-### Error code: DF-DELTA-InvalidConfiguration
-- **Message**: Timestamp and version can't be set at the same time.-- **Cause**: The timestamp and version can't be set at the same time.-- **Recommendation**: Set the timestamp or version in the delta settings.
+### Error code: DF-MSSQL-InvalidFirewallSetting
-### Error code: DF-DELTA-KeyColumnMissed
-- **Message**: Key column(s) should be specified for non-insertable operations.-- **Cause**: Key column(s) are missed for non-insertable operations.-- **Recommendation**: Specify key column(s) on delta sink to have non-insertable operations.
+- **Message**: The TCP/IP connection to the host has failed. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.
+- **Cause**: The SQL database's firewall setting blocks the data flow from accessing it.
+- **Recommendation**: Please check the firewall setting for your SQL database, and allow Azure services and resources to access this server.
-### Error code: DF-DELTA-InvalidTableOperationSettings
-- **Message**: Recreate and truncate options can't be both specified.-- **Cause**: Recreate and truncate options can't be specified simultaneously.-- **Recommendation**: Update delta settings to have either recreate or truncate operation.
+### Error code: DF-PGSQL-InvalidCredential
-### Error code: DF-Excel-WorksheetConfigMissed
-- **Message**: Excel sheet name or index is required.-- **Cause**: An invalid Excel worksheet configuration is provided.-- **Recommendation**: Check the parameter value and specify the sheet name or index to read the Excel data.
+- **Message**: User/password should be specified.
+- **Cause**: The user/password is missing.
+- **Recommendation**: Make sure that you have the right credential settings in the related PostgreSQL linked service.
-### Error code: DF-Excel-InvalidWorksheetConfiguration
-- **Message**: Excel sheet name and index cannot exist at the same time.-- **Cause**: The Excel sheet name and index are provided at the same time.-- **Recommendation**: Check the parameter value and specify the sheet name or index to read the Excel data.
+### Error code: DF-SAPODP-AuthInvalid
-### Error code: DF-Excel-InvalidRange
-- **Message**: Invalid range is provided.-- **Cause**: An invalid range is provided.-- **Recommendation**: Check the parameter value and specify the valid range by the following reference: [Excel format in Azure Data Factory-Dataset properties](./format-excel.md#dataset-properties).
+- **Message**: SapOdp Name or Password incorrect
+- **Cause**: The name or password you entered is incorrect.
+- **Recommendation**: Confirm that the name and password you entered are correct.
+
+### Error code: DF-SAPODP-ContextInvalid
+
+- **Cause**: The context value doesn't exist in SAP ODP.
+- **Recommendation**: Check the context value and make sure it's valid.
+
+### Error code: DF-SAPODP-ContextMissed
+
+- **Message**: Context is required
+- **Causes and recommendations**: Different causes may lead to this error. Check the list below for possible causes and related recommendations.
+
+ | Cause analysis | Recommendation |
+ | :-- | :-- |
+ | Your context value can't be empty when reading data. | Specify the context. |
+ | Your context value can't be empty when browsing object names. | Specify the context. |
+
+### Error code: DF-SAPODP-ObjectInvalid
+
+- **Cause**: The object name is not found or not released.
+- **Recommendation**: Check the object name and make sure it is valid and already released.
+
+### Error code: DF-SAPODP-ObjectNameMissed
+
+- **Message**: 'objectName' (SAP object name) is required
+- **Cause**: Object names must be defined when reading data from SAP ODP.
+- **Recommendation**: Specify the SAP ODP object name.
+
+### Error code: DF-SAPODP-SAPSystemError
+
+- **Cause**: This is an SAP system error: `user id locked`.
+- **Recommendation**: Contact SAP admin for assistance.
+
+### Error code: DF-SAPODP-SessionTerminate
+
+- **Message**: Internal session terminated with a runtime error RAISE_EXCEPTION (see ST22)
+- **Cause**: Transient issues for SLT objects.
+- **Recommendation**: Rerun the data flow activity.
+
+### Error code: DF-SAPODP-SHIROFFLINE
+
+- **Cause**: Your self-hosted integration runtime is offline.
+- **Recommendation**: Check your self-hosted integration runtime status and confirm it's online.
+
+### Error code: DF-SAPODP-SLT-LIMITATION
+
+- **Message**: Preview is not supported in SLT system
+- **Cause**: Your context or object is in an SLT system, which doesn't support preview. This is an SAP ODP SLT system limitation.
+- **Recommendation**: Directly run the data flow activity.
+
+### Error code: DF-SAPODP-StageAuthInvalid
+
+- **Message**: Invalid client secret provided
+- **Cause**: The service principal certificate credential of the staging storage is not correct.
+- **Recommendation**: Check whether the test connection is successful in your staging storage linked service, and confirm the authentication setting of your staging storage is correct.
+- **Message**: Failed to authenticate the request to storage
+- **Cause**: The key of your staging storage is not correct.
+- **Recommendation**: Check whether the test connection is successful in your staging storage linked service, and confirm the key of your staging Azure Blob Storage is correct.
+
+### Error code: DF-SAPODP-StageBlobPropertyInvalid
+
+- **Message**: Read from staging storage failed: Staging blob storage auth properties not valid.
+- **Cause**: Staging Blob storage properties aren't valid.
+- **Recommendation**: Check the authentication setting in your staging linked service.
+
+### Error code: DF-SAPODP-StageContainerInvalid
+
+- **Message**: Unable to create Azure Blob container
+- **Cause**: The input container doesn't exist in your staging storage.
+- **Recommendation**: Enter a valid container name for the staging storage. Select a different existing container, or manually create a new container with the name you entered.
+
+### Error code: DF-SAPODP-StageContainerMissed
+
+- **Message**: Container or file system is required for staging storage.
+- **Cause**: Your container or file system is not specified for staging storage.
+- **Recommendation**: Specify the container or file system for the staging storage.
+
+### Error code: DF-SAPODP-StageFolderPathMissed
+
+- **Message**: Folder path is required for staging storage
+- **Cause**: Your staging storage folder path is not specified.
+- **Recommendation**: Specify the staging storage folder.
+
+### Error code: DF-SAPODP-StageGen2PropertyInvalid
+
+- **Message**: Read from staging storage failed: Staging Gen2 storage auth properties not valid.
+- **Cause**: Authentication properties of your staging Azure Data Lake Storage Gen2 aren't valid.
+- **Recommendation**: Check the authentication setting in your staging linked service.
+
+### Error code: DF-SAPODP-StageStorageServicePrincipalCertNotSupport
+
+- **Message**: Read from staging storage failed: Staging storage auth not support service principal cert.
+- **Cause**: The service principal certificate credential is not supported for the staging storage.
+- **Recommendation**: Change your authentication to not use the service principal certificate credential.
+
+### Error code: DF-SAPODP-StageStorageTypeInvalid
+
+- **Message**: Your staging storage type of SapOdp is invalid
+- **Cause**: Only Azure Blob Storage and Azure Data Lake Storage Gen2 are supported for SAP ODP staging.
+- **Recommendation**: Select Azure Blob Storage or Azure Data Lake Storage Gen2 as your staging storage.
+
+### Error code: DF-SAPODP-SubscriberNameMissed
+
+- **Message**: 'subscriberName' is required while option 'enable change data capture' is selected
+- **Cause**: The SAP linked service property `subscriberName` is required while option 'enable change data capture' is selected.
+- **Recommendation**: Specify the `subscriberName` in SAP ODP linked service.
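A minimal sketch of that setting in the linked service JSON follows. Apart from `subscriberName`, the property names and placeholder values are illustrative only; check the SAP CDC (ODP) connector reference for the exact properties your scenario requires:

```json
{
    "name": "SapOdpLinkedService",
    "properties": {
        "type": "SapOdp",
        "typeProperties": {
            "server": "<server name>",
            "systemNumber": "<system number>",
            "clientId": "<client ID>",
            "userName": "<SAP user name>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            },
            "subscriberName": "<unique subscriber name>"
        }
    }
}
```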
-### Error code: DF-Excel-WorksheetNotExist
-- **Message**: Excel worksheet does not exist.-- **Cause**: An invalid worksheet name or index is provided.-- **Recommendation**: Check the parameter value and specify a valid sheet name or index to read the Excel data.
+### Error code: DF-SAPODP-SystemError
-### Error code: DF-Excel-DifferentSchemaNotSupport
-- **Message**: Read excel files with different schema is not supported now.-- **Cause**: Reading excel files with different schemas is not supported now.-- **Recommendation**: Please apply one of following options to solve this problem:
- - Use **ForEach** + **data flow** activity to read Excel worksheets one by one.
- - Update each worksheet schema to have the same columns manually before reading data.
+- **Cause**: This error is a data flow system error or SAP server system error.
+- **Recommendation**: Check the error message. If it contains an SAP server-related error stack trace, contact your SAP admin for assistance. Otherwise, contact Microsoft support for further assistance.
-### Error code: DF-Excel-InvalidDataType
-- **Message**: Data type is not supported.-- **Cause**: The data type is not supported.-- **Recommendation**: Please change the data type to **'string'** for related input data columns.
+### Error code: DF-Snowflake-IncompatibleDataType
-### Error code: DF-Excel-InvalidFile
-- **Message**: Invalid excel file is provided while only .xlsx and .xls are supported.-- **Cause**: Invalid Excel files are provided.-- **Recommendation**: Use the wildcard to filter, and get `.xls` and `.xlsx` Excel files before reading data.
+- **Message**: Expression type does not match column data type, expecting VARIANT but got VARCHAR.
+- **Cause**: The type of the input data column(s) is string, but the related column(s) in the Snowflake sink transformation are of VARIANT type.
+- **Recommendation**: The Snowflake VARIANT type only accepts data flow values of struct, map, or array type. If the value of your input data column(s) is a JSON, XML, or other string, use a parse transformation before the Snowflake sink transformation to convert the value into a struct, map, or array type.
-### Error code: DF-Executor-OutOfMemorySparkBroadcastError
-- **Message**: Explicitly broadcasted dataset using left/right option should be small enough to fit in node's memory. You can choose broadcast option 'Off' in join/exists/lookup transformation to avoid this issue or use an integration runtime with higher memory.-- **Cause**: The size of the broadcasted table far exceeds the limits of the node memory.-- **Recommendation**: The broadcast left/right option should be used only for smaller dataset size which can fit into node's memory, so make sure to configure the node size appropriately or turn off the broadcast option.
+### Error code: DF-Snowflake-InvalidDataType
-### Error code: DF-MSSQL-InvalidFirewallSetting
-- **Message**: The TCP/IP connection to the host has failed. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.-- **Cause**: The SQL database's firewall setting blocks the data flow to access.-- **Recommendation**: Please check the firewall setting for your SQL database, and allow Azure services and resources to access this server.
+- **Message**: The spark type is not supported in snowflake.
+- **Cause**: An invalid data type is provided in Snowflake.
+- **Recommendation**: Use a derived column transformation before the Snowflake sink to convert the related input data column to the string type.
-### Error code: DF-Executor-AcquireStorageMemoryFailed
-- **Message**: Transferring unroll memory to storage memory failed. Cluster ran out of memory during execution. Please retry using an integration runtime with more cores and/or memory optimized compute type.-- **Cause**: The cluster has insufficient memory.-- **Recommendation**: Please use an integration runtime with more cores and/or the memory optimized compute type.
+### Error code: DF-Snowflake-InvalidStageConfiguration
-### Error code: DF-Cosmos-DeleteDataFailed
-- **Message**: Failed to delete data from cosmos after 3 times retry.-- **Cause**: The throughput on the Cosmos collection is small and leads to meeting throttling or row data not existing in Cosmos.-- **Recommendation**: Please take the following actions to solve this problem:
- - If the error is 404, make sure that the related row data exists in the Cosmos collection.
- - If the error is throttling, please increase the Cosmos collection throughput or set it to the automatic scale.
- - If the error is request timed out, please set 'Batch size' in the Cosmos sink to smaller value, for example 1000.
+- **Message**: Only blob storage type can be used as stage in snowflake read/write operation.
+- **Cause**: An invalid staging configuration is provided in Snowflake.
+- **Recommendation**: Update the Snowflake staging settings to ensure that only an Azure Blob linked service is used.
+
+- **Message**: Snowflake stage properties should be specified with Azure Blob + SAS authentication.
+- **Cause**: An invalid staging configuration is provided in Snowflake.
+- **Recommendation**: Ensure that only Azure Blob + SAS authentication is specified in the Snowflake staging settings.
### Error code: DF-SQLDW-ErrorRowsFound
+
- **Cause**: Error/invalid rows are found when writing to the Azure Synapse Analytics sink.
- **Recommendation**: Please find the error rows in the rejected data storage location if it is configured.

### Error code: DF-SQLDW-ExportErrorRowFailed
+
- **Message**: Exception is happened while writing error rows to storage.
- **Cause**: An exception happened while writing error rows to the storage.
- **Recommendation**: Please check your rejected data linked service configuration.
-### Error code: DF-Executor-FieldNotExist
-- **Message**: Field in struct does not exist.-- **Cause**: Invalid or unavailable field names are used in expressions.-- **Recommendation**: Check field names used in expressions.-
-### Error code: DF-Xml-InvalidElement
-- **Message**: XML Element has sub elements or attributes which can't be converted.-- **Cause**: The XML element has sub elements or attributes which can't be converted.-- **Recommendation**: Update the XML file to make the XML element has right sub elements or attributes.-
-### Error code: DF-GEN2-InvalidCloudType
-- **Message**: Cloud type is invalid.-- **Cause**: An invalid cloud type is provided.-- **Recommendation**: Check the cloud type in your related ADLS Gen2 linked service.-
-### Error code: DF-Blob-InvalidCloudType
-- **Message**: Cloud type is invalid.-- **Cause**: An invalid cloud type is provided.-- **Recommendation**: Please check the cloud type in your related Azure Blob linked service.-
-### Error code: DF-Cosmos-FailToResetThroughput
-- **Message**: Cosmos DB throughput scale operation cannot be performed because another scale operation is in progress, please retry after sometime.-- **Cause**: The throughput scale operation of the Azure Cosmos DB can't be performed because another scale operation is in progress.-- **Recommendation**: Login to Azure Cosmos DB account, and manually change container throughput to be auto scale or add a custom activity after mapping data flows to reset the throughput.-
-### Error code: DF-Executor-InvalidPath
-- **Message**: Path does not resolve to any file(s). Please make sure the file/folder exists and is not hidden.-- **Cause**: An invalid file/folder path is provided, which can't be found or accessed.-- **Recommendation**: Please check the file/folder path, and make sure it is existed and can be accessed in your storage.-
-### Error code: DF-Executor-InvalidPartitionFileNames
-- **Message**: File names cannot have empty value(s) while file name option is set as per partition.-- **Cause**: Invalid partition file names are provided.-- **Recommendation**: Please check your sink settings to have the right value of file names.-
-### Error code: DF-Executor-InvalidOutputColumns
-- **Message**: The result has 0 output columns. Please ensure at least one column is mapped.-- **Cause**: No column is mapped.-- **Recommendation**: Please check the sink schema to ensure that at least one column is mapped.
+### Error code: DF-SQLDW-InternalErrorUsingMSI
-### Error code: DF-Executor-InvalidInputColumns
-- **Message**: The column in source configuration cannot be found in source data's schema.-- **Cause**: Invalid columns are provided on the source.-- **Recommendation**: Check columns in the source configuration and make sure that it's the subset of the source data's schemas.
+- **Message**: An internal error occurred while authenticating against Managed Service Identity in Azure Synapse Analytics instance. Please restart the Azure Synapse Analytics instance or contact Azure Synapse Analytics Dedicated SQL Pool support if this problem persists.
+- **Cause**: An internal error occurred in Azure Synapse Analytics.
+- **Recommendation**: Restart the Azure Synapse Analytics instance or contact Azure Synapse Analytics Dedicated SQL Pool support if this problem persists.
-### Error code: DF-AdobeIntegration-InvalidMapToFilter
-- **Message**: Custom resource can only have one Key/Id mapped to filter.-- **Cause**: Invalid configurations are provided.-- **Recommendation**: In your AdobeIntegration settings, make sure that the custom resource can only have one Key/Id mapped to filter.
+### Error code: DF-SQLDW-InvalidBlobStagingConfiguration
-### Error code: DF-AdobeIntegration-InvalidPartitionConfiguration
-- **Message**: Only single partition is supported. Partition schema may be RoundRobin or Hash.-- **Cause**: Invalid partition configurations are provided.-- **Recommendation**: In AdobeIntegration settings, confirm that only the single partition is set and partition schemas may be RoundRobin or Hash.
+- **Message**: Blob storage staging properties should be specified.
+- **Cause**: Invalid blob storage staging settings are provided.
+- **Recommendation**: Please check if the Blob linked service used for staging has correct properties.
-### Error code: DF-AdobeIntegration-KeyColumnMissed
-- **Message**: Key must be specified for non-insertable operations.-- **Cause**: Key columns are missed.-- **Recommendation**: Update AdobeIntegration settings to ensure key columns are specified for non-insertable operations.
+### Error code: DF-SQLDW-InvalidConfiguration
-### Error code: DF-AdobeIntegration-InvalidPartitionType
-- **Message**: Partition type has to be roundRobin.-- **Cause**: Invalid partition types are provided.-- **Recommendation**: Please update AdobeIntegration settings to make your partition type is RoundRobin.
+- **Message**: ADLS Gen2 storage staging properties should be specified. Either one of key or tenant/spnId/spnCredential/spnCredentialType or miServiceUri/miServiceToken is required.
+- **Cause**: Invalid ADLS Gen2 staging properties are provided.
+- **Recommendation**: Please update ADLS Gen2 storage staging settings to have one of **key** or **tenant/spnId/spnCredential/spnCredentialType** or **miServiceUri/miServiceToken**.
-### Error code: DF-AdobeIntegration-InvalidPrivacyRegulation
-- **Message**: Only privacy regulation that's currently supported is 'GDPR'.-- **Cause**: Invalid privacy configurations are provided.-- **Recommendation**: Please update AdobeIntegration settings while only privacy 'GDPR' is supported.
+### Error code: DF-SQLDW-InvalidGen2StagingConfiguration
-### Error code: DF-Executor-RemoteRPCClientDisassociated
-- **Message**: Job aborted due to stage failure. Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues.-- **Cause**: Data flow activity run failed because of transient network issues or one node in spark cluster ran out of memory.-- **Recommendation**: Use the following options to solve this problem:
- - Option-1: Use a powerful cluster (both drive and executor nodes have enough memory to handle big data) to run data flow pipelines with setting "Compute type" to "Memory optimized". The settings are shown in the picture below.
-
- :::image type="content" source="media/data-flow-troubleshoot-guide/configure-compute-type.png" alt-text="Screenshot that shows the configuration of Compute type.":::
+- **Message**: ADLS Gen2 storage staging only support service principal key credential.
+- **Cause**: An invalid credential is provided for the ADLS gen2 storage staging.
+- **Recommendation**: Use the service principal key credential of the Gen2 linked service used for staging.
- - Option-2: Use larger cluster size (for example, 48 cores) to run your data flow pipelines. You can learn more about cluster size through this document: [Cluster size](./concepts-integration-runtime-performance.md#cluster-size).
-
- - Option-3: Repartition your input data. For the task running on the data flow spark cluster, one partition is one task and runs on one node. If data in one partition is too large, the related task running on the node needs to consume more memory than the node itself, which causes failure. So you can use repartition to avoid data skew, and ensure that data size in each partition is average while the memory consumption isn't too heavy.
-
- :::image type="content" source="media/data-flow-troubleshoot-guide/configure-partition.png" alt-text="Screenshot that shows the configuration of partitions.":::
+### Error code: DF-SQLDW-InvalidStorageType
- > [!NOTE]
- > You need to evaluate the data size or the partition number of input data, then set reasonable partition number under "Optimize". For example, the cluster that you use in the data flow pipeline execution is 8 cores and the memory of each core is 20GB, but the input data is 1000GB with 10 partitions. If you directly run the data flow, it will meet the OOM issue because 1000GB/10 > 20GB, so it is better to set repartition number to 100 (1000GB/100 < 20GB).
-
- - Option-4: Tune and optimize source/sink/transformation settings. For example, try to copy all files in one container, and don't use the wildcard pattern. For more detailed information, reference [Mapping data flows performance and tuning guide](./concepts-data-flow-performance.md).
+- **Message**: Storage type can either be blob or gen2.
+- **Cause**: An invalid storage type is provided for staging.
+- **Recommendation**: Check the storage type of the linked service used for staging and make sure that it's Blob or Gen2.
-### Error code: DF-MSSQL-ErrorRowsFound
-- **Cause**: Error/Invalid rows were found while writing to Azure SQL Database sink.-- **Recommendation**: Please find the error rows in the rejected data storage location if configured.
+### Error code: DF-Synapse-DBNotExist
-### Error code: DF-MSSQL-ExportErrorRowFailed
-- **Message**: Exception is happened while writing error rows to storage.-- **Cause**: An exception happened while writing error rows to the storage.-- **Recommendation**: Check your rejected data linked service configuration.
+- **Cause**: The database does not exist.
+- **Recommendation**: Check if the database exists.
### Error code: DF-Synapse-InvalidDatabaseType
+
- **Message**: Database type is not supported.
- **Cause**: The database type is not supported.
- **Recommendation**: Check the database type and change it to the proper one.

### Error code: DF-Synapse-InvalidFormat
+
- **Message**: Format is not supported.
-- **Cause**: The format is not supported.
+- **Cause**: The format is not supported.
- **Recommendation**: Check the format and change it to the proper one.
-### Error code: DF-Synapse-InvalidTableDBName
-- **Message**: The table/database name is not a valid name for tables/databases. Valid names only contain alphabet characters, numbers and _.
-- **Cause**: The table/database name is not valid.
-- **Recommendation**: Change a valid name for the table/database. Valid names only contain alphabet characters, numbers and `_`.

### Error code: DF-Synapse-InvalidOperation
+
- **Cause**: The operation is not supported.
- **Recommendation**: Change the **Update method** configuration, because delete, update, and upsert aren't supported in Workspace DB.
-### Error code: DF-Synapse-DBNotExist
-- **Cause**: The database does not exist.-- **Recommendation**: Check if the database exists.
+### Error code: DF-Synapse-InvalidTableDBName
+
+- **Message**: The table/database name is not a valid name for tables/databases. Valid names only contain alphabet characters, numbers and _.
+- **Cause**: The table/database name is not valid.
+- **Recommendation**: Use a valid name for the table/database. Valid names contain only alphabetic characters, numbers, and `_`.
### Error code: DF-Synapse-StoredProcedureNotSupported
+
- **Message**: Use 'Stored procedure' as Source is not supported for serverless (on-demand) pool.
- **Cause**: The serverless pool has limitations.
- **Recommendation**: Retry using 'query' as the source, or save the stored procedure as a view and then use 'table' as the source to read from the view directly.
-### Error code: DF-Executor-BroadcastFailure
-- **Message**: Dataflow execution failed during broadcast exchange. Potential causes include misconfigured connections at sources or a broadcast join timeout error. To ensure the sources are configured correctly, please test the connection or run a source data preview in a Dataflow debug session. To avoid the broadcast join timeout, you can choose the 'Off' broadcast option in the Join/Exists/Lookup transformations. If you intend to use the broadcast option to improve performance then make sure broadcast streams can produce data within 60 secs for debug runs and within 300 secs for job runs. If problem persists, contact customer support.
+### Error code: DF-Xml-InvalidDataField
-- **Cause**:
- 1. The source connection/configuration error could lead to a broadcast failure in join/exists/lookup transformations.
- 2. Broadcast has a default timeout of 60 seconds in debug runs and 300 seconds in job runs. On the broadcast join, the stream chosen for the broadcast seems too large to produce data within this limit. If a broadcast join is not used, the default broadcast done by a data flow can reach the same limit.
+- **Message**: The field for corrupt records must be string type and nullable.
+- **Cause**: An invalid data type of the column `\"_corrupt_record\"` is provided in the XML source.
+- **Recommendation**: Make sure that the column `\"_corrupt_record\"` in the XML source has a string data type and nullable.
-- **Recommendation**:
- - Do data preview at sources to confirm the sources are well configured.
- - Turn off the broadcast option or avoid broadcasting large data streams where the processing can take more than 60 seconds. Instead, choose a smaller stream to broadcast.
- - Large SQL/Data Warehouse tables and source files are typically bad candidates.
- - In the absence of a broadcast join, use a larger cluster if the error occurs.
- - If the problem persists, contact the customer support.
+### Error code: DF-Xml-InvalidElement
-### Error code: DF-Cosmos-ShortTypeNotSupport
-- **Message**: Short data type is not supported in Cosmos DB.-- **Cause**: The short data type is not supported in the Azure Cosmos DB.-- **Recommendation**: Add a derived column transformation to convert related columns from short to integer before using them in the Azure Cosmos DB sink transformation.
+- **Message**: XML Element has sub elements or attributes which can't be converted.
+- **Cause**: The XML element has sub elements or attributes which can't be converted.
+- **Recommendation**: Update the XML file to make the XML element has right sub elements or attributes.
-### Error code: DF-Blob-FunctionNotSupport
-- **Message**: This endpoint does not support BlobStorageEvents, SoftDelete or AutomaticSnapshot. Please disable these account features if you would like to use this endpoint.-- **Cause**: Azure Blob Storage events, soft delete or automatic snapshot is not supported in data flows if the Azure Blob Storage linked service is created with service principal or managed identity authentication.-- **Recommendation**: Disable Azure Blob Storage events, soft delete or automatic snapshot feature on the Azure Blob account, or use key authentication to create the linked service.
+### Error code: DF-Xml-InvalidReferenceResource
-### Error code: DF-Cosmos-InvalidAccountKey
-- **Message**: The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used.-- **Cause**: There's no enough permission to read/write Azure Cosmos DB data.-- **Recommendation**: Please use the read-write key to access Azure Cosmos DB.
+- **Message**: Reference resource in xml data file cannot be resolved.
+- **Cause**: The reference resource in the XML data file can't be resolved.
+- **Recommendation**: Check the reference resource in the XML data file.
-### Error code: DF-Cosmos-ResourceNotFound
-- **Message**: Resource not found.-- **Cause**: Invalid configuration is provided (for example, the partition key with invalid characters) or the resource doesn't exist.-- **Recommendation**: To solve this issue, refer to [Diagnose and troubleshoot Azure Cosmos DB not found exceptions](../cosmos-db/troubleshoot-not-found.md).
+### Error code: DF-Xml-InvalidSchema
-### Error code: DF-Snowflake-IncompatibleDataType
-- **Message**: Expression type does not match column data type, expecting VARIANT but got VARCHAR.-- **Cause**: The column(s) type of input data which is string is different from the related column(s) type in the Snowflake sink transformation which is VARIANT.-- **Recommendation**: For the snowflake VARIANT, it can only accept data flow value which is struct, map or array type. If the value of your input data column(s) is JSON or XML or other string, use a parse transformation before the Snowflake sink transformation to covert value into struct, map or array type.
+- **Message**: Schema validation failed.
+- **Cause**: The invalid schema is provided on the XML source.
+- **Recommendation**: Check the schema settings on the XML source to make sure that it's the subset schema of the source data.
-### Error code: DF-JSON-WrongDocumentForm
-- **Message**: Malformed records are detected in schema inference. Parse Mode: FAILFAST.-- **Cause**: Wrong document form is selected to parse JSON file(s).-- **Recommendation**: Try different **Document form** (**Single document**/**Document per line**/**Array of documents**) in JSON settings. Most cases of parsing errors are caused by wrong configuration.
+### Error code: DF-Xml-InvalidValidationMode
-### Error code: DF-File-InvalidSparkFolder
-- **Message**: Failed to read footer for file -- **Cause**: Folder *_spark_metadata* is created by the structured streaming job.-- **Recommendation**: Delete *_spark_metadata* folder if it exists. For more information, refer to this [article](https://forums.databricks.com/questions/12447/javaioioexception-could-not-read-footer-for-file-f.html).
+- **Message**: Invalid xml validation mode is provided.
+- **Cause**: An invalid XML validation mode is provided.
+- **Recommendation**: Check the parameter value and specify the right validation mode.
-### Error code: DF-Executor-InternalServerError
-- **Message**: Failed to execute dataflow with internal server error, please retry later. If issue persists, please contact Microsoft support for further assistance-- **Cause**: The data flow execution is failed because of the system error.-- **Recommendation**: To solve this issue, refer to [Internal server errors](#internal-server-errors).
+### Error code: DF-Xml-MalformedFile
-### Error code: DF-Executor-InvalidStageConfiguration
-- **Message**: Storage with user assigned managed identity authentication in staging is not supported -- **Cause**: An exception is happened because of invalid staging configuration.-- **Recommendation**: The user-assigned managed identity authentication is not supported in staging. Use a different authentication to create an Azure Data Lake Storage Gen2 or Azure Blob Storage linked service, then use it as staging in mapping data flows.
+- **Message**: Malformed xml with path in FAILFAST mode.
+- **Cause**: Malformed XML with path exists in the FAILFAST mode.
+- **Recommendation**: Update the content of the XML file to the right format.
+
+### Error code: DF-Xml-UnsupportedExternalReferenceResource
+
+- **Message**: External reference resource in xml data file is not supported.
+- **Cause**: The external reference resource in the XML data file is not supported.
+- **Recommendation**: Update the XML file content when the external reference resource is not supported now.
+
+### Error code: GetCommand OutputAsync failed
+
+- **Message**: During Data Flow debug and data preview: GetCommand OutputAsync failed with ...
+- **Cause**: This error is a back-end service error.
+- **Recommendation**: Retry the operation and restart your debugging session. If retrying and restarting doesn't resolve the problem, contact customer support.
+
+### Error code: InvalidTemplate
-### Error code: DF-GEN2-InvalidStorageAccountConfiguration
-- **Message**: Blob operation is not supported on older storage accounts. Creating a new storage account may fix the issue.-- **Cause**: The storage account is too old.-- **Recommendation**: Create a new storage account.
+- **Message**: The pipeline expression cannot be evaluated.
+- **Cause**: The pipeline expression passed in the Data Flow activity isn't being processed correctly because of a syntax error.
+- **Recommendation**: Check the data flow activity name and the expressions in activity monitoring. For example, a data flow activity name can't contain a space or a hyphen. See the sketch below.
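For example, a value passed to a data flow parameter should be a well-formed expression object along these lines; the parameter name and expression are illustrative:

```json
{
    "folderPath": {
        "value": "@concat('output/', pipeline().RunId)",
        "type": "Expression"
    }
}
```

If an expression still fails to evaluate, check it for unbalanced quotes or parentheses and for references to parameters or system variables that don't exist.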
-### Error code: DF-AzureDataExplorer-InvalidOperation
-- **Message**: Blob operation is not supported on older storage accounts. Creating a new storage account may fix the issue.-- **Cause**: Operation is not supported.-- **Recommendation**: Change **Update method** configuration as delete, update and upsert are not supported in Azure Data Explorer.
+### Error code: 2011
-### Error code: DF-AzureDataExplorer-WriteTimeout
-- **Message**: Operation timeout while writing data.-- **Cause**: Operation times out while writing data.-- **Recommendation**: Increase the value in **Timeout** option in sink transformation settings.
-
-### Error code: DF-AzureDataExplorer-ReadTimeout
-- **Message**: Operation timeout while reading data.-- **Cause**: Operation times out while reading data.-- **Recommendation**: Increase the value in **Timeout** option in source transformation settings.
+- **Message**: The activity was running on Azure Integration Runtime and failed to decrypt the credential of data store or compute connected via a Self-hosted Integration Runtime. Please check the configuration of linked services associated with this activity, and make sure to use the proper integration runtime type.
+- **Cause**: Data flow doesn't support linked services on self-hosted integration runtimes.
+- **Recommendation**: Configure data flow to run on a Managed Virtual Network integration runtime.
### Error code: 4502
+
- **Message**: There are substantial concurrent MappingDataflow executions that are causing failures due to throttling under Integration Runtime.
- **Cause**: A large number of Data Flow activity runs are occurring concurrently on the integration runtime. For more information, see [Azure Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits).
- **Recommendation**: If you want to run more Data Flow activities in parallel, distribute them across multiple integration runtimes.

### Error code: 4503
+
- **Message**: There are substantial concurrent MappingDataflow executions which is causing failures due to throttling under subscription '%subscriptionId;', ActivityId: '%activityId;'.
- **Cause**: Throttling threshold was reached.
- **Recommendation**: Retry the request after a wait period.

### Error code: 4506
+
- **Message**: Failed to provision cluster for '%activityId;' because the request computer exceeds the maximum concurrent count of 200. Integration Runtime '%IRName;'
- **Cause**: Transient error
- **Recommendation**: Retry the request after a wait period.

### Error code: 4507
+
- **Message**: Unsupported compute type and/or core count value.
- **Cause**: Unsupported compute type and/or core count value was provided.
- **Recommendation**: Use one of the supported compute type and/or core count values given on this [document](control-flow-execute-data-flow-activity.md#type-properties).

### Error code: 4508
+
- **Message**: Spark cluster not found.
- **Recommendation**: Restart the debug session.

### Error code: 4509
+
- **Message**: Hit unexpected failure while allocating compute resources, please retry. If the problem persists, please contact Azure Support
- **Cause**: Transient error
- **Recommendation**: Retry the request after a wait period.

### Error code: 4510
-- **Message**: Unexpected failure during execution.
+
+- **Message**: Unexpected failure during execution.
- **Cause**: Since debug clusters work differently from job clusters, excessive debug runs could wear the cluster over time, which could cause memory issues and abrupt restarts.
- **Recommendation**: Restart the debug cluster. If you're running multiple data flows during a debug session, use activity runs instead, because an activity-level run creates a separate session without taxing the main debug cluster.

### Error code: 4511
+
- **Message**: java.sql.SQLTransactionRollbackException. Deadlock found when trying to get lock; try restarting transaction. If the problem persists, please contact Azure Support
- **Cause**: Transient error
- **Recommendation**: Retry the request after a wait period.
-### Error code: DF-Executor-OutOfMemorySparkError
--- **Message**: The data may be too large to fit in the memory.-- **Cause**: The size of the data far exceeds the limit of the node memory.-- **Recommendation**: Increase the core count and switch to the memory optimized compute type.-
-### Error code: DF-SQLDW-InternalErrorUsingMSI
--- **Message**: An internal error occurred while authenticating against Managed Service Identity in Azure Synapse Analytics instance. Please restart the Azure Synapse Analytics instance or contact Azure Synapse Analytics Dedicated SQL Pool support if this problem persists.-- **Cause**: An internal error occurred in Azure Synapse Analytics.-- **Recommendation**: Restart the Azure Synapse Analytics instance or contact Azure Synapse Analytics Dedicated SQL Pool support if this problem persists.-
-### Error code: DF-Executor-IncorrectLinkedServiceConfiguration
--- **Message**: Possible causes are,
- - The linked service is incorrectly configured as type 'Azure Blob Storage' instead of 'Azure DataLake Storage Gen2' and it has 'Hierarchical namespace' enabled. Please create a new linked service of type 'Azure DataLake Storage Gen2' for the storage account in question.
- - Certain scenarios with any combinations of 'Clear the folder', non-default 'File name option', 'Key' partitioning may fail with a Blob linked service on a 'Hierarchical namespace' enabled storage account. You can disable these dataflow settings (if enabled) and try again in case you do not want to create a new Gen2 linked service.
-- **Cause**: Delete operation on the Azure Data Lake Storage Gen2 account failed since its linked service is incorrectly configured as Azure Blob Storage.-- **Recommendation**: Create a new Azure Data Lake Storage Gen2 linked service for the storage account. If that's not feasible, some known scenarios like **Clear the folder**, non-default **File name option**, **Key** partitioning in any combinations may fail with an Azure Blob Storage linked service on a hierarchical namespace enabled storage account. You can disable these data flow settings if you enabled them and try again.-
-### Error code: DF-Delta-InvalidProtocolVersion
--- **Message**: Unsupported Delta table protocol version, Refer https://docs.delta.io/latest/versioning.html#-table-version for versioning information.-- **Cause**: Data flows don't support this version of the Delta table protocol.-- **Recommendation**: Use a lower version of the Delta table protocol.-
-### Error code: DF-SAPODP-SubscriberNameMissed
--- **Message**: 'subscriberName' is required while option 'enable change data capture' is selected-- **Cause**: The SAP linked service property `subscriberName` is required while option 'enable change data capture' is selected.-- **Recommendation**: Specify the `subscriberName` in SAP ODP linked service.-
-### Error code: DF-SAPODP-StageContainerMissed
--- **Message**: Container or file system is required for staging storage.-- **Cause**: Your container or file system is not specified for staging storage.-- **Recommendation**: Specify the container or file system for the staging storage.-
-### Error code: DF-SAPODP-StageFolderPathMissed
--- **Message**: Folder path is required for staging storage-- **Cause**: Your staging storage folder path is not specified.-- **Recommendation**: Specify the staging storage folder.-
-### Error code: DF-SAPODP-ContextMissed
--- **Message**: Context is required-- **Causes and recommendations**: Different causes may lead to this error. Check below list for possible cause analysis and related recommendation.-
- | Cause analysis | Recommendation |
- | :-- | :-- |
- | Your context value can't be empty when reading data. | Specify the context. |
- | Your context value can't be empty when browsing object names. | Specify the context. |
-
-### Error code: DF-SAPODP-StageContainerInvalid
--- **Message**: Unable to create Azure Blob container-- **Cause**: The input container is not existed in your staging storage.-- **Recommendation**: Input a valid container name for the staging storage. Reselect another existed container name or create a new container manually with your input name.-
-### Error code: DF-SAPODP-SessionTerminate
--- **Message**: Internal session terminated with a runtime error RAISE_EXCEPTION (see ST22)-- **Cause**: Transient issues for SLT objects.-- **Recommendation**: Rerun the data flow activity.
+## Miscellaneous troubleshooting tips
+- **Issue**: Unexpected exception occurred and execution failed.
+ - **Message**: During Data Flow activity execution: Hit unexpected exception and execution failed.
+  - **Cause**: This error is a back-end service error.
+  - **Recommendation**: Retry the operation and restart your debugging session. If that doesn't resolve the problem, contact customer support.
-### Error code: DF-SAPODP-StageAuthInvalid
+- **Issue**: No output data on join during debug data preview.
+ - **Message**: There are a high number of null values or missing values which may be caused by having too few rows sampled. Try updating the debug row limit and refreshing the data.
+ - **Cause**: The join condition either didn't match any rows or resulted in a large number of null values during the data preview.
+ - **Recommendation**: In **Debug Settings**, increase the number of rows in the source row limit. Be sure to select an Azure IR that has a data flow cluster that's large enough to handle more data.
+
+- **Issue**: Validation error at source with multiline CSV files.
+ - **Message**: You might see one of these error messages:
+ - The last column is null or missing.
+ - Schema validation at source fails.
+ - Schema import fails to show correctly in the UX and the last column has a new line character in the name.
+  - **Cause**: In mapping data flows, multiline CSV source files don't currently work when \r\n is used as the row delimiter. Sometimes extra lines at carriage returns can cause errors.
+  - **Recommendation**: Generate the file at the source by using \n as the row delimiter rather than \r\n, or use the Copy activity to convert the CSV file to use \n as a row delimiter. A dataset sketch follows this list.
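As a rough sketch, a DelimitedText dataset that writes the converted file with `\n` as the row delimiter could look like the following; the dataset name, linked service reference, and file location are placeholders:

```json
{
    "name": "CsvWithLineFeedDelimiter",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "<storage linked service>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "output",
                "fileName": "data.csv"
            },
            "columnDelimiter": ",",
            "rowDelimiter": "\n",
            "firstRowAsHeader": true
        }
    }
}
```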
-- **Message**: Invalid client secret provided-- **Cause**: The service principal certificate credential of the staging storage is not correct.-- **Recommendation**: Check whether the test connection is successful in your staging storage linked service, and confirm the authentication setting of your staging storage is correct.-- **Message**: Failed to authenticate the request to storage-- **Cause**: The key of your staging storage is not correct.-- **Recommendation**: Check whether the test connection is successful in your staging storage linked service, and confirm the key of your staging Azure Blob Storage is correct.
+### Improvement on CSV/CDM format in Data Flow
-### Error code: DF-SAPODP-ObjectNameMissed
+If you use **Delimited Text or CDM formatting for mapping data flows in Azure Data Factory V2**, you may see behavior changes in your existing pipelines because of an improvement to Delimited Text/CDM handling in data flows starting on **1 May 2021**.
-- **Message**: 'objectName' (SAP object name) is required-- **Cause**: Object names must be defined when reading data from SAP ODP.-- **Recommendation**: Specify the SAP ODP object name.
+You may have encountered the following issues before the improvement; after the improvement, they're fixed. Read the following content to determine whether this improvement affects you.
-### Error code: DF-SAPODP-ContextInvalid
+#### Scenario 1: An unexpected row delimiter is used to parse the file
-- **Cause**: The context value doesn't exist in SAP OPD.-- **Recommendation**: Check the context value and make sure it's valid.
+ You're affected if all of the following conditions apply:
+ - You use delimited text with the **Multiline** setting set to **True**, or CDM, as the source.
+ - The first row has more than 128 characters.
+ - The row delimiter in the data files isn't `\n`.
-### Error code: DF-SAPODP-ObjectInvalid
+ Before the improvement, the default row delimiter `\n` may be unexpectedly used to parse delimited text files, because when the Multiline setting is set to True, it invalidates the row delimiter setting and the row delimiter is automatically detected based on the first 128 characters. If the actual row delimiter can't be detected, parsing falls back to `\n`.
-- **Cause**: The object name is not found or not released.-- **Recommendation**: Check the object name and make sure it is valid and already released.
+ After the improvement, any of the three row delimiters `\r`, `\n`, and `\r\n` works correctly.
+
+ The following example shows you one pipeline behavior change after the improvement:
-### Error code: DF-SAPODP-SLT-LIMITATION
+ **Example**:<br/>
+ For the following column:<br/>
+ `C1, C2, {long first row}, C128\r\n `<br/>
+ `V1, V2, {values………………….}, V128\r\n `<br/>
+
+ Before the improvement, `\r` is kept in the column value. The parsed column result is:<br/>
+ `C1 C2 {long first row} C128`**`\r`**<br/>
+ `V1 V2 {values………………….} V128`**`\r`**<br/> 
-- **Message**: Preview is not supported in SLT system-- **Cause**: Your context or object is in SLT system that doesn't support preview. This is an SAP ODP SLT system limitation.-- **Recommendation**: Directly run the data flow activity.
+ After the improvement, the parsed column result should be:<br/>
+ `C1 C2 {long first row} C128`<br/>
+ `V1 V2 {values………………….} V128`<br/>
+
+#### Scenario 2: Column values containing '\r\n' are read incorrectly
-### Error code: DF-SAPODP-AuthInvalid
+ You're affected if all of the following conditions apply:
+ - You use delimited text with the **Multiline** setting set to **True**, or CDM, as a source.
+ - The row delimiter is `\r\n`.
-- **Message**: SapOdp Name or Password incorrect-- **Cause**: Your input name or password is incorrect.-- **Recommendation**: Confirm your input name or password is correct.
+ Before the improvement, when reading the column value, the `\r\n` in it may be incorrectly replaced by `\n`.
-### Error code: DF-SAPODP-SHIROFFLINE
+ After the improvement, `\r\n` in the column value will not be replaced by `\n`.
-- **Cause**: Your self-hosted integration runtime is offline.-- **Recommendation**: Check your self-hosted integration runtime status and confirm it's online.
+ The following example shows you one pipeline behavior change after the improvement:
+
+ **Example**:<br/>
+
+ For the following column:<br/>
+ **`"A\r\n"`**`, B, C\r\n`<br/>
-### Error code: DF-SAPODP-SAPSystemError
+ Before the improvement, the parsed column result is:<br/>
+ **`A\n`**` B C`<br/>
-- **Cause**: This is an SAP system error: `user id locked`.-- **Recommendation**: Contact SAP admin for assistance.
+ After the improvement, the parsed column result should be:<br/>
+ **`A\r\n`**` B C`<br/>
-### Error code: DF-SAPODP-SystemError
+#### Scenario 3: Column values containing '\n' are written incorrectly
-- **Cause**: This error is a data flow system error or SAP server system error.-- **Recommendation**: Check the error message. If it contains SAP server related error stacktrace, contact SAP admin for assistance. Otherwise, contact Microsoft support for further assistance.
+ You're affected if all of the following conditions apply:
+ - You use delimited text as a sink.
+ - The column value contains `\n`.
+ - The row delimiter is set to `\r\n`.
+
+ Before the improvement, when writing the column value, the `\n` in it may be incorrectly replaced by `\r\n`.
-### Error code: DF-SAPODP-StageStorageTypeInvalid
+ After the improvement, `\n` in the column value will not be replaced by `\r\n`.
+
+ The following example shows you one pipeline behavior change after the improvement:
-- **Message**: Your staging storage type of SapOdp is invalid-- **Cause**: Only Azure Blob Storage and Azure Data Lake Storage Gen2 are supported for SAP ODP staging.-- **Recommendation**: Select Azure Blob Storage or Azure Data Lake Storage Gen2 as your staging storage.
+ **Example**:<br/>
-### Error code: DF-SAPODP-StageBlobPropertyInvalid
+ For the following column:<br/>
+ **`A\n`**` B C`<br/>
-- **Message**: Read from staging storage failed: Staging blob storage auth properties not valid.-- **Cause**: Staging Blob storage properties aren't valid.-- **Recommendation**: Check the authentication setting in your staging linked service.
+ Before the improvement, the CSV sink is:<br/>
+ **`"A\r\n"`**`, B, C\r\n` <br/>
-### Error code: DF-SAPODP-StageStorageServicePrincipalCertNotSupport
+ After the improvement, the CSV sink should be:<br/>
+ **`"A\n"`**`, B, C\r\n`<br/>
-- **Message**: Read from staging storage failed: Staging storage auth not support service principal cert.-- **Cause**: The service principal certificate credential is not supported for the staging storage.-- **Recommendation**: Change your authentication to not use the service principal certificate credential.
+#### Scenario 4: Empty strings are incorrectly read as NULL
+
+ You're affected if all of the following conditions apply:
+ - You use delimited text as a source.
+ - The **NULL value** setting is set to a non-empty value.
+ - The column value is an empty string and is unquoted.
+
+ Before the improvement, an unquoted empty string in a column value is read as NULL.
-### Error code: DF-SAPODP-StageGen2PropertyInvalid
+ After the improvement, an empty string is no longer parsed as a NULL value.
+
+ The following example shows you one pipeline behavior change after the improvement:
-- **Message**: Read from staging storage failed: Staging Gen2 storage auth properties not valid.-- **Cause**: Authentication properties of your staging Azure Data Lake Storage Gen2 aren't valid.-- **Recommendation**: Check the authentication setting in your staging linked service.
+ **Example**:<br/>
+ For the following column:<br/>
+ `A, ,B, `<br/>
-## Miscellaneous troubleshooting tips
-- **Issue**: Unexpected exception occurred and execution failed.
- - **Message**: During Data Flow activity execution: Hit unexpected exception and execution failed.
- - **Cause**: This error is a back-end service error. Retry the operation and restart your debugging session.
- - **Recommendation**: If retrying and restarting doesn't resolve the problem, contact customer support.
+ Before the improvement, the parsed column result is:<br/>
+ `A null B null`<br/>
-- **Issue**: No output data on join during debug data preview.
- - **Message**: There are a high number of null values or missing values which may be caused by having too few rows sampled. Try updating the debug row limit and refreshing the data.
- - **Cause**: The join condition either didn't match any rows or resulted in a large number of null values during the data preview.
- - **Recommendation**: In **Debug Settings**, increase the number of rows in the source row limit. Be sure to select an Azure IR that has a data flow cluster that's large enough to handle more data.
-
-- **Issue**: Validation error at source with multiline CSV files.
- - **Message**: You might see one of these error messages:
- - The last column is null or missing.
- - Schema validation at source fails.
- - Schema import fails to show correctly in the UX and the last column has a new line character in the name.
- - **Cause**: In the Mapping data flow, multiline CSV source files don't currently work when \r\n is used as the row delimiter. Sometimes extra lines at carriage returns can cause errors.
- - **Recommendation**: Generate the file at the source by using \n as the row delimiter rather than \r\n. Or use the Copy activity to convert the CSV file to use \n as a row delimiter.
+ After the improvement, the parsed column result should be:<br/>
+ `A "" (empty string) B "" (empty string)`<br/>
## Next steps
databox Data Box File Acls Preservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-file-acls-preservation.md
Previously updated : 07/13/2022 Last updated : 09/12/2022
The following file attributes aren't transferred:
Read-only attributes on directories aren't transferred.
+## Alternate data streams and extended attributes
+
+[Alternate data streams](/openspecs/windows_protocols/ms-fscc/e2b19412-a925-4360-b009-86e3b8a020c8) and extended attributes are not supported in Azure Files, page blob, or block blob storage, so they are not transferred when copying data.
+ ## ACLs <!--ACLs DEFINITION
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
Last updated 06/19/2022
# Defender for Containers architecture
-Defender for Containers is designed differently for each container environment whether they're running in:
+Defender for Containers is designed differently for each Kubernetes environment whether they're running in:
- **Azure Kubernetes Service (AKS)** - Microsoft's managed service for developing, deploying, and managing containerized applications.
defender-for-cloud Episode Seventeen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-seventeen.md
+
+ Title: Defender for Cloud integration with Microsoft Entra | Defender for Cloud in the Field
+
+description: Learn about Defender for Cloud integration with Microsoft Entra.
+ Last updated : 09/19/2022++
+# Defender for Cloud integration with Microsoft Entra | Defender for Cloud in the Field
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Bar Brownshtein joins Yuri Diogenes to talk about the new Defender for Cloud integration with Microsoft Entra. Bar explains the rationale behind this integration, the importance of having everything in a single dashboard, and how this integration works. Bar also covers the recommendations that are generated by this integration and demonstrates the experience in the dashboard.
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=96a0ecdb-b1c3-423f-9ff1-47fcc5d6ab1b" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [00:00](https://learn.microsoft.com/shows/mdc-in-the-field/integrate-entra#time=00m0s) - Defender for Cloud integration with Microsoft Entra
+
+- [00:55](https://learn.microsoft.com/shows/mdc-in-the-field/integrate-entra#time=00m55s) - What is Cloud Infrastructure Entitlement Management?
+
+- [02:20](https://learn.microsoft.com/shows/mdc-in-the-field/integrate-entra#time=02m20s) - How does the integration with MDC work?
+
+- [03:58](https://learn.microsoft.com/shows/mdc-in-the-field/integrate-entra#time=03m58s) - Demonstration
+
+## Recommended resources
+
+Learn more about [Entra Permission Management](other-threat-protections.md#entra-permission-management-formerly-cloudknox)
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Sixteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-sixteen.md
Last updated 08/04/2022
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Defender for Cloud integration with Microsoft Entra | Defender for Cloud in the Field](episode-seventeen.md)
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md
# Scale SNAT ports with Azure Virtual Network NAT
-Azure Firewall provides 2,496 SNAT ports per public IP address configured per backend virtual machine scale set instance (Minimum of 2 instances), and you can associate up to [250 public IP addresses](./deploy-multi-public-ip-powershell.md). Depending on your architecture and traffic patterns, you might need more than the 512,000 available SNAT ports with this configuration. For example, when you use it to protect large [Azure Virtual Desktop deployments](./protect-azure-virtual-desktop.md) that integrate with Microsoft 365 Apps.
+Azure Firewall provides 2,496 SNAT ports per public IP address configured per backend virtual machine scale set instance (Minimum of 2 instances), and you can associate up to [250 public IP addresses](./deploy-multi-public-ip-powershell.md). Depending on your architecture and traffic patterns, you might need more than the 1,248,000 available SNAT ports with this configuration. For example, when you use it to protect large [Azure Virtual Desktop deployments](./protect-azure-virtual-desktop.md) that integrate with Microsoft 365 Apps.
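The 1,248,000 figure is straightforward arithmetic, assuming the minimum of two backend instances; the quick check below reproduces it.

```python
# Back-of-the-envelope SNAT capacity for Azure Firewall, using the figures above.
snat_ports_per_ip_per_instance = 2496
instances = 2        # minimum number of backend virtual machine scale set instances
public_ips = 250     # maximum public IP addresses you can associate

total_snat_ports = snat_ports_per_ip_per_instance * instances * public_ips
print(f"{total_snat_ports:,}")  # 1,248,000
```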
Another challenge with using a large number of public IP addresses is when there are downstream IP address filtering requirements. Azure Firewall randomly selects the source public IP address to use for a connection, so you need to allow all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/ip-services/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes.
hdinsight Hdinsight Administer Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-powershell.md
description: Learn how to perform administrative tasks for the Apache Hadoop clu
Previously updated : 02/13/2020 Last updated : 09/19/2022 # Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell
hdinsight Hdinsight Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upgrade-cluster.md
description: Learn guidelines to migrate your Azure HDInsight cluster to a newer
Previously updated : 01/31/2020 Last updated : 09/19/2022 # Migrate HDInsight cluster to a newer version
-To take advantage of the latest HDInsight features, we recommend that HDInsight clusters be regularly migrated to latest version. HDInsight does not support in-place upgrades where an existing cluster is upgraded to a newer component version. You must create a new cluster with the desired component and platform version and then migrate your applications to use the new cluster. Follow the below guidelines to migrate your HDInsight cluster versions.
+To take advantage of the latest HDInsight features, we recommend that HDInsight clusters be regularly migrated to latest version. HDInsight doesn't support in-place upgrades where an existing cluster is upgraded to a newer component version. You must create a new cluster with the desired component and platform version and then migrate your applications to use the new cluster. Follow the below guidelines to migrate your HDInsight cluster versions.
> [!NOTE] > For information on supported versions of HDInsight, see [HDInsight component versions](hdinsight-component-versioning.md#supported-hdinsight-versions).
The workflow to upgrade HDInsight Cluster is as follows.
3. Copy existing jobs, data sources, and sinks to the new environment. 4. Perform validation testing to make sure that your jobs work as expected on the new cluster.
-Once you have verified that everything works as expected, schedule downtime for the migration. During this downtime, do the following actions:
+Once you've verified that everything works as expected, schedule downtime for the migration. During this downtime, do the following actions:
-1. Back up any transient data stored locally on the cluster nodes. For example, if you have data stored directly on a head node.
+1. Back up any transient data stored locally on the cluster nodes. For example, if you have data stored directly on a head node.
1. [Delete the existing cluster](./hdinsight-delete-cluster.md). 1. Create a cluster in the same VNET subnet with latest (or supported) HDI version using the same default data store that the previous cluster used. This allows the new cluster to continue working against your existing production data. 1. Import any transient data you backed up.
For more information about database backup and restore, see [Recover a database
## Upgrade scenarios
-As mentioned above, Microsoft recommends that HDInsight clusters be regularly migrated to the latest version in order to take advantage of new features and fixes. Please see the following list of reasons we would request that a cluster be deleted and redeployed:
+As mentioned above, Microsoft recommends that HDInsight clusters be regularly migrated to the latest version in order to take advantage of new features and fixes. See the following list of reasons we would request that a cluster be deleted and redeployed:
-* The cluster version is [Retired](hdinsight-retired-versions.md) or in [Basic support](hdinsight-36-component-versioning.md) and you are having a cluster issue that would be resolved with a newer version.
-* The root cause of a cluster issue is determined to be related to an undersized VM. [View Microsoft's recommended node configuration](hdinsight-supported-node-configuration.md).
+* The cluster version is [Retired](hdinsight-retired-versions.md) or in [Basic support](hdinsight-36-component-versioning.md) and you're having a cluster issue that would be resolved with a newer version.
+* The root cause of a cluster issue is determined to be related to an undersized VM. [View Microsoft's recommended node configuration](hdinsight-supported-node-configuration.md).
* A customer opens a support case and the Microsoft engineering team determines the issue has already been fixed in a newer cluster version.
-* A default metastore database (Ambari, Hive, Oozie, Ranger) has reached it's utilization limit. Microsoft will ask you to recreate the cluster using a [custom metastore](hdinsight-use-external-metadata-stores.md#custom-metastore) database.
+* A default metastore database (Ambari, Hive, Oozie, Ranger) has reached its utilization limit. Microsoft will ask you to recreate the cluster using a [custom metastore](hdinsight-use-external-metadata-stores.md#custom-metastore) database.
* The root cause of a cluster issue is due to an **Unsupported Operation**. Here are some of the common unsupported operations:
- * **Moving or Adding a service in Ambari**. When viewing information on the cluster services in Ambari, one of the actions available from the Service Actions menu is **Move [Service Name]**. Another action is **Add [Service Name]**. Both of these options are unsupported.
+ * **Moving or Adding a service in Ambari**. When viewing information on the cluster services in Ambari, one of the actions available from the Service Actions menu is **Move [Service Name]**. Another action is **Add [Service Name]**. Both of these options are unsupported.
* **Python package corruption**. HDInsight clusters depend on the built-in Python environments, Python 2.7 and Python 3.5. Directly installing custom packages in those default built-in environments may cause unexpected library version changes and break the cluster. Learn how to [safely install custom external Python packages](./spark/apache-spark-python-package-installation.md#safely-install-external-python-packages) for your Spark applications.
- * **Third-party software**. Customers have the ability to install third-party software on their HDInsight clusters; however, we will recommend recreating the cluster if it breaks the existing functionality.
- * **Multiple workloads on the same cluster**. In HDInsight 4.0, the Hive Warehouse Connector needs separate clusters for Spark and Interactive Query workloads. [Follow these steps to set up both clusters in Azure HDInsight](interactive-query/apache-hive-warehouse-connector.md). Similarly, integrating [Spark with HBASE](hdinsight-using-spark-query-hbase.md) requires 2 different clusters.
- * **Custom Ambari DB password changed**. The Ambari DB password is set during cluster creation and there is no current mechanism to update it. If a customer deploys the cluster with a [custom Ambari DB](hdinsight-custom-ambari-db.md), they will have the ability to change the DB password on the SQL DB; however, there is no way to update this password for a running HDInsight cluster.
+ * **Third-party software**. Customers have the ability to install third-party software on their HDInsight clusters; however, we recommend recreating the cluster if it breaks the existing functionality.
+ * **Multiple workloads on the same cluster**. In HDInsight 4.0, the Hive Warehouse Connector needs separate clusters for Spark and Interactive Query workloads. [Follow these steps to set up both clusters in Azure HDInsight](interactive-query/apache-hive-warehouse-connector.md). Similarly, integrating [Spark with HBASE](hdinsight-using-spark-query-hbase.md) requires two different clusters.
+ * **Custom Ambari DB password changed**. The Ambari DB password is set during cluster creation and there's no current mechanism to update it. If a customer deploys the cluster with a [custom Ambari DB](hdinsight-custom-ambari-db.md), they'll have the ability to change the DB password on the SQL DB; however, there's no way to update this password for a running HDInsight cluster.
## Next steps
hdinsight Optimize Hbase Ambari https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/optimize-hbase-ambari.md
Title: Optimize Apache HBase with Apache Ambari in Azure HDInsight
description: Use the Apache Ambari web UI to configure and optimize Apache HBase. Previously updated : 02/01/2021 Last updated : 09/19/2022 # Optimize Apache HBase with Apache Ambari in Azure HDInsight
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
The FHIR service supports `$export` at the following levels:
* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointall-patients): `GET {{fhirurl}}/Patient/$export` * [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointgroup-of-patients) – *The FHIR service exports all referenced resources but doesn't export the characteristics of the group resource itself: `GET {{fhirurl}}/Group/[ID]/$export`
-When data is exported, a separate file is created for each resource type. The FHIR service will create a new file when the size of a single exported file exceeds 64 MB. The result is that you may get multiple files for a resource type, which will be enumerated (e.g., `Patient-1.ndjson`, `Patient-2.ndjson`).
-
+When data is exported, a separate file is created for each resource type. No individual file will exceed one million resource records. The result is that you may get multiple files for a resource type, which will be enumerated (for example, `Patient-1.ndjson`, `Patient-2.ndjson`). Not every file will necessarily contain exactly one million resource records.
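A minimal sketch of starting a system-level export with the standard bulk-data headers might look like the following. The FHIR service URL and access token are placeholders, and the call assumes `$export` has already been configured for your FHIR service.

```python
import requests

# Placeholders: substitute your FHIR service URL and a valid bearer token.
fhir_url = "https://<your-fhir-service>.azurehealthcareapis.com"
token = "<access-token>"

# Kick off a system-level export; the bulk-data specification requires an async response.
response = requests.get(
    f"{fhir_url}/$export",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/fhir+json",
        "Prefer": "respond-async",
    },
)

# A 202 response returns a Content-Location header you can poll for job status.
print(response.status_code, response.headers.get("Content-Location"))
```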
> [!Note] > `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if a resource is in multiple groups or in a compartment of more than one resource.
healthcare-apis How To Use Device Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-device-mappings.md
Previously updated : 07/07/2022 Last updated : 09/12/2022
-# How to use Device mappings
+# How to use device mappings
-> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
+This article describes how to configure the MedTech service device mapping.
+
+The MedTech service requires two types of JSON-based mappings. The first type, **device mapping**, is responsible for mapping the device payloads sent to the MedTech service device message event hub endpoint. The device mapping extracts types, device identifiers, measurement date time, and the measurement value(s).
-This article describes how to configure the MedTech service using Device mappings.
+The second type, **Fast Healthcare Interoperability Resources (FHIR&#174;) destination mapping**, controls the mapping to the FHIR resource. The FHIR destination mapping allows configuration of the length of the observation period, the FHIR data type used to store the values, and the terminology code(s).
-MedTech service requires two types of JSON-based mappings. The first type, **Device mapping**, is responsible for mapping the device payloads sent to the `devicedata` Azure Event Hubs end point. It extracts types, device identifiers, measurement date time, and the measurement value(s).
+> [!NOTE]
+> Device and FHIR destination mappings are stored in an underlying blob storage and loaded from blob per compute execution. Once updated they should take effect immediately.
-The second type, **Fast Healthcare Interoperability Resources (FHIR&#174;) destination mapping**, controls the mapping for FHIR resource. It allows configuration of the length of the observation period, FHIR data type used to store the values, and terminology code(s).
+The two types of mappings are composed into a JSON document based on their type. These JSON documents are then added to your MedTech service through the Azure portal. The device mapping is added through the **Device mapping** page and the FHIR destination mapping through the **Destination** page.
-The two types of mappings are composed into a JSON document based on their type. These JSON documents are then added to your MedTech service through the Azure portal. The Device mapping document is added through the **Device mapping** page and the FHIR destination mapping document through the **Destination** page.
+> [!TIP]
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service device and FHIR destination mappings. You can also export mappings for uploading to the MedTech service in the Azure portal, or for use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
> [!NOTE]
-> Mappings are stored in an underlying blob storage and loaded from blob per compute execution. Once updated they should take effect immediately.
+> Links to OSS projects on the GitHub website are for informational purposes only and do not constitute an endorsement or guarantee of any kind. You should review the information and licensing terms on the OSS projects on GitHub before using it.
## Device mappings overview
The content payload itself is an Azure Event Hubs message, which is composed of
The five device content-mapping types supported today rely on JSONPath to both match the required mapping and extracted values. More information on JSONPath can be found [here](https://goessner.net/articles/JsonPath/). All five template types use the [JSON .NET implementation](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm) for resolving JSONPath expressions.
-You can define one or more templates within the Device mapping template. Each Event Hubs device message received is evaluated against all device mapping templates.
+You can define one or more templates within the MedTech service device mapping. Each event hub device message received is evaluated against all device mapping templates.
A single inbound device message can be separated into multiple outbound messages that are later mapped to different observations in the FHIR service.
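As an illustration of the shape of a device mapping, the sketch below builds a single `JsonPathContent` template as a Python dictionary and prints it as JSON. The message fields referenced by the JSONPath expressions (`deviceId`, `endDate`, `heartRate`) are hypothetical examples of an incoming payload, and the exact schema should be validated against the current MedTech service documentation.

```python
import json

# Illustrative device mapping with one JsonPathContent template.
# Assumed message shape: {"deviceId": "...", "endDate": "...", "heartRate": 78}
device_mapping = {
    "templateType": "CollectionContent",
    "template": [
        {
            "templateType": "JsonPathContent",
            "template": {
                "typeName": "heartrate",
                "typeMatchExpression": "$..[?(@heartRate)]",
                "deviceIdExpression": "$.deviceId",
                "timestampExpression": "$.endDate",
                "values": [
                    {
                        "required": "true",
                        "valueExpression": "$.heartRate",
                        "valueName": "hr",
                    }
                ],
            },
        }
    ],
}

# Print the JSON document that would be pasted into the Device mapping page.
print(json.dumps(device_mapping, indent=2))
```

Each additional template is appended to the `template` array, and every incoming event hub message is evaluated against all of them.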
-Various template types exist and may be used when building the Device mapping file.
+Various template types exist and may be used when building the MedTech service device mapping.
|Name | Description | |-|-|
Various template types exist and may be used when building the Device mapping fi
In this article, you learned how to use Device mappings. To learn how to use FHIR destination mappings, see >[!div class="nextstepaction"]
->[How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
+>[How to use the FHIR destination mapping](how-to-use-fhir-mappings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
load-balancer Move Across Regions External Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-external-load-balancer-portal.md
The following procedures show how to prepare the external load balancer for the
"name": "[parameters('publicIPAddresses_myPubIP_name')]", "location": "<target-region>", "sku": {
- "name": "Basic",
+ "name": "Standard",
"tier": "Regional" }, "properties": {
The following procedures show how to prepare the external load balancer for the
"resourceGuid": "7549a8f1-80c2-481a-a073-018f5b0b69be", "ipAddress": "52.177.6.204", "publicIPAddressVersion": "IPv4",
- "publicIPAllocationMethod": "Dynamic",
+ "publicIPAllocationMethod": "Static",
"idleTimeoutInMinutes": 4, "ipTags": [] }
The following procedures show how to prepare the external load balancer for the
"name": "[parameters('publicIPAddresses_myPubIP_name')]", "location": "<target-region>", "sku": {
- "name": "Basic",
+ "name": "Standard",
"tier": "Regional" }, ```
+ * **Availability zone**. You can change the zone(s) of the public IP by changing the **zones** property. If the **zones** property isn't specified, the public IP will be created as no-zone. You can specify a single zone to create a zonal public IP or all three zones for a zone-redundant public IP.
- For information on the differences between basic and standard SKU public IPs, see [Create, change, or delete a public IP address](../virtual-network/ip-services/virtual-network-public-ip-address.md).
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Network/publicIPAddresses",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('publicIPAddresses_myPubIP_name')]",
+ "location": "<target-region>",
+ "sku": {
+ "name": "Standard",
+ "tier": "Regional"
+ },
+ "zones": [
+ "1",
+ "2",
+ "3"
+ ],
+ ```
- * **Public IP allocation method** and **Idle timeout**. You can change the public IP allocation method by changing the **publicIPAllocationMethod** property from **Dynamic** to **Static** or from **Static** to **Dynamic**. You can change the idle timeout by changing the **idleTimeoutInMinutes** property to the desired value. The default is **4**.
+ * **Public IP allocation method** and **Idle timeout**. You can change the public IP allocation method by changing the **publicIPAllocationMethod** property from **Static** to **Dynamic** or from **Dynamic** to **Static**. You can change the idle timeout by changing the **idleTimeoutInMinutes** property to the desired value. The default is **4**.
```json "resources": [
The following procedures show how to prepare the external load balancer for the
"name": "[parameters('publicIPAddresses_myPubIP_name')]", "location": "<target-region>", "sku": {
- "name": "Basic",
+ "name": "Standard",
"tier": "Regional" },
+ "zones": [
+ "1",
+ "2",
+ "3"
+ ],
"properties": { "provisioningState": "Succeeded", "resourceGuid": "7549a8f1-80c2-481a-a073-018f5b0b69be", "ipAddress": "52.177.6.204", "publicIPAddressVersion": "IPv4",
- "publicIPAllocationMethod": "Dynamic",
+ "publicIPAllocationMethod": "Static",
"idleTimeoutInMinutes": 4, "ipTags": []
The following procedures show how to prepare the external load balancer for the
11. You can also change other parameters in the template if you want to or need to, depending on your requirements:
- * **SKU**. You can change the SKU of the external load balancer in the configuration from standard to basic or from basic to standard by changing the **name** property under **sku** in the template.json file:
+ * **SKU**. You can change the SKU of the external load balancer in the configuration from Standard to Basic or from Basic to Standard by changing the **name** property under **sku** in the template.json file:
```json "resources": [
In this tutorial, you moved an Azure external load balancer from one region to a
- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
These resources are ephemeral and exist only for the duration of the load test r
- An existing virtual network and a subnet to use with Azure Load Testing. - The virtual network must be in the same subscription and the same region as the Azure Load Testing resource.
+- You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions.
- The subnet you use for Azure Load Testing must have enough unassigned IP addresses to accommodate the number of load test engines for your test. Learn more about [configuring your test for high-scale load](./how-to-high-scale-load.md). - The subnet shouldn't be delegated to any other Azure service. For example, it shouldn't be delegated to Azure Container Instances (ACI). Learn more about [subnet delegation](/azure/virtual-network/subnet-delegation-overview). - Azure CLI version 2.2.0 or later (if you're using CI/CD). Run `az --version` to find the version that's installed on your computer. If you need to install or upgrade the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## September 19, 2022
+[Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+
+Version `22.09.19`
+
+Main changes:
+
+- `.Net Framework` to version `3.1.423`
+- `Azure Cli` to version `2.40.0`
+- `Intelijidea` to version `2022.2.2`
+- Microsoft Edge Browser to version `107.0.1379.1`
+- `Nodejs` to version `v16.17.0`
+- `Pycharm` to version `2022.2.1`
+
+Environment Specific Updates:
+
+`azureml_py38`:
+- `azureml-core` to version `1.45.0`
+
+`py38_default`:
+- `Jupyter Lab` to version `3.4.7`
+- `azure-core` to version `1.25.1`
+- `keras` to version `2.10.0`
+- `tensorflow-gpu` to version `2.10.0`
+ ## September 12, 2022 [Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
When you deploy an Azure Machine Learning workspace, various other services are
To create a new workspace where the __services are automatically created__, use the following command: ```azurecli-interactive
-az ml workspace create -w <workspace-name> -g <resource-group-name>
+az ml workspace create -n <workspace-name> -g <resource-group-name>
``` # [Bring existing resources](#tab/bringexistingresources)
To check for problems with your workspace, see [How to use workspace diagnostics
To learn how to move a workspace to a new Azure subscription, see [How to move a workspace](how-to-move-workspace.md).
-For information on how to keep your Azure ML up to date with the latest security updates, see [Vulnerability management](concept-vulnerability-management.md).
+For information on how to keep your Azure ML up to date with the latest security updates, see [Vulnerability management](concept-vulnerability-management.md).
managed-grafana Troubleshoot Managed Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md
Previously updated : 07/06/2022 Last updated : 09/13/2022 # Troubleshoot issues for Azure Managed Grafana
To check if your Managed Grafana instance already has a dashboard with the same
1. Rename the old or the new dashboard. 1. You can also edit the UID of a JSON dashboard before importing it by editing the field named **uid** in the JSON file.
+## Nothing changes after updating the managed identity role assignment
+
+After disabling System-Assigned Managed Identity, the data source that has been configured with Managed Identity can still access the data from Azure services.
+
+### Solution: wait for the change to take effect
+
+Data sources configured with a managed identity may still be able to access data from Azure services for up to 24 hours. When a role assignment is updated in a managed identity for Azure Managed Grafana, this change can take up to 24 hours to be effective, due to limitations of managed identities.
+ ## Next steps > [!div class="nextstepaction"]
marketplace Pc Saas Fulfillment Subscription Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-subscription-api.md
Response body example:
"subscriptionName": "Contoso Cloud Solution", // SaaS subscription name "offerId": "offer1", // purchased offer ID "planId": "silver", // purchased offer's plan ID
- "quantity": "20", // number of purchased seats, might be empty if the plan is not per seat
+ "quantity": 20, // number of purchased seats, might be empty if the plan is not per seat
"subscription": { // full SaaS subscription details, see Get Subscription APIs response body for full description "id": "<guid>", "publisherId": "contoso",
mysql How To Configure Server Parameters Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-server-parameters-cli.md
az mysql flexible-server parameter set --name init_connect --resource-group myre
## Working with the time zone parameter
-### Populating the time zone tables
-
-The time zone tables on your server can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench.
-
-> [!NOTE]
-> If you are running the `mysql.az_load_timezone` command from MySQL Workbench, you may need to turn off safe update mode first using `SET SQL_SAFE_UPDATES=0;`.
-
-```sql
-CALL mysql.az_load_timezone();
-```
-
-> [!IMPORTANT]
->You should restart the server to ensure the time zone tables are properly populated.<!-- fIX me To restart the server, use the [Azure portal](howto-restart-server-portal.md) or [CLI](howto-restart-server-cli.md). -->
-
-To view available time zone values, run the following command:
-
-```sql
-SELECT name FROM mysql.time_zone_name;
-```
- ### Setting the global level time zone The global level time zone can be set using the [az mysql flexible-server parameter set](/cli/azure/mysql/flexible-server/parameter) command.
SET time_zone = 'US/Pacific';
Refer to the MySQL documentation for [Date and Time Functions](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_convert-tz).
+>[!Note]
+> To change the time zone at the session level, the server parameter `time_zone` has to be updated globally to the required time zone at least once, in order to populate the [mysql.time_zone_name](https://dev.mysql.com/doc/refman/8.0/en/time-zone-support.html) table.
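Assuming the `time_zone` server parameter has already been set globally at least once (which populates the time zone tables), a session-level change can then be made from any client. The sketch below uses the `mysql-connector-python` package; the server name and credentials are placeholders.

```python
import mysql.connector

# Placeholders: substitute your flexible server host name and credentials.
connection = mysql.connector.connect(
    host="<server-name>.mysql.database.azure.com",
    user="<admin-user>",
    password="<password>",
)

cursor = connection.cursor()

# Works once the time_zone server parameter has been updated globally at least once,
# so that mysql.time_zone_name contains the named time zones.
cursor.execute("SET time_zone = 'US/Pacific'")
cursor.execute("SELECT @@session.time_zone")
print(cursor.fetchone())

connection.close()
```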
## Next steps
mysql How To Configure Server Parameters Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-server-parameters-portal.md
If the server parameter you want to update is non-modifiable, you can optionally
## Working with the time zone parameter
-### Populating the time zone tables
-
-The time zone tables on your server can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench.
-
-> [!NOTE]
-> If you are running the `mysql.az_load_timezone` command from MySQL Workbench, you may need to turn off safe update mode first using `SET SQL_SAFE_UPDATES=0;`.
-
-```sql
-CALL mysql.az_load_timezone();
-```
-
-> [!IMPORTANT]
->You should restart the server to ensure the time zone tables are properly populated.<!-- FIX ME To restart the server, use the [Azure portal](how-to-restart-server-portal.md) or [CLI](how-to-restart-server-cli.md).-->
-
-To view available time zone values, run the following command:
-
-```sql
-SELECT name FROM mysql.time_zone_name;
-```
- ### Setting the global level time zone The global level time zone can be set from the **Server parameters** page in the Azure portal. The below sets the global time zone to the value "US/Pacific".
SET time_zone = 'US/Pacific';
Refer to the MySQL documentation for [Date and Time Functions](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_convert-tz).
+>[!Note]
+> To change the time zone at the session level, the server parameter `time_zone` has to be updated globally to the required time zone at least once, in order to populate the [mysql.time_zone_name](https://dev.mysql.com/doc/refman/8.0/en/time-zone-support.html) table.
++ ## Next steps - How to configure [server parameters in Azure CLI](./how-to-configure-server-parameters-cli.md)
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
With data encryption with customer-managed keys (CMKs) for Azure Database for MySQL - Flexible Server Preview, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. Data encryption with CMKs is set at the server level. For a given server, a CMK, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. With customer managed keys (CMKs), the customer is responsible for and in a full control of key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys. [Learn More](concepts-customer-managed-key.md)
+- **Change Timezone of your Azure Database for MySQL - Flexible Server in a single step**
+
+ Previously, changing the time_zone of your Azure Database for MySQL - Flexible Server required two steps to take effect. Now you no longer need to call the procedure mysql.az_load_timezone() to populate the mysql.time_zone_name table. The Flexible Server time zone can be changed directly by changing the server parameter time_zone from the [portal](./how-to-configure-server-parameters-portal.md#working-with-the-time-zone-parameter) or the [CLI](./how-to-configure-server-parameters-cli.md#working-with-the-time-zone-parameter).
## August 2022 - **Server logs for Azure Database for MySQL - Flexible Server**
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md
Here are some limitations for working with virtual networks:
* A flexible server doesn't support Azure Private Link. Instead, it uses virtual network injection to make the flexible server available within a virtual network. > [!IMPORTANT]
-> Azure Resource Manager supports ability to lock resources, as a security control. Resource locks are applied to the resource, and are effective across all users and roles. There are two types of resource lock: CanNotDelete and ReadOnly. These lock types can be applied either to a Private DNS zone, or to an individual record set. Applying a lock of either type against Private DNS Zone or individual record set may interfere with ability of Azure Database for PostgreSQL - Flexible Server service to update DNS records and cause issues during important operations on DNS, such as High Availability failover from primary to secondary. Please make sure you are not utilizing DNS private zone or record locks when utilizing High Availability features with Azure Database for PostgreSQL - Flexible Server.
+> Azure Resource Manager supports ability to lock resources, as a security control. Resource locks are applied to the resource, and are effective across all users and roles. There are two types of resource lock: CanNotDelete and ReadOnly. These lock types can be applied either to a Private DNS zone, or to an individual record set. Applying a lock of either type against Private DNS Zone or individual record set may interfere with ability of Azure Database for PostgreSQL - Flexible Server service to update DNS records and cause issues during important operations on DNS, such as High Availability failover from primary to secondary. For these reasons, please make sure you are not utilizing DNS private zone or record locks when utilizing High Availability features with Azure Database for PostgreSQL - Flexible Server.
## Public access (allowed IP addresses)
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/asset-insights.md
For more information to create and complete a scan, see [the manage data sources
In Microsoft Purview Data Estate Insights, you can get an overview of the assets that have been scanned into the Data Map and view key gaps that can be closed by governance stakeholders, for better governance of the data estate.
-> [!NOTE]
-> After you have scanned your source types, give asset insights 3-8 hours to reflect the new assets. The delay may be due to high traffic in deployment region or size of your workload. For further information, please contact support.
- 1. Navigate to your Microsoft Purview account in the Azure portal. 1. On the **Overview** page, in the **Get Started** section, select the **Open Microsoft Purview governance portal** tile.
purview Data Stewardship https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/data-stewardship.md
Before getting started with Microsoft Purview Data Estate Insights, make sure th
For more information to create and complete a scan, see [the manage data sources in Microsoft Purview article](manage-data-sources.md).
-## Understand your data estate and catalog health in Data Estate Insights
+## Understand your data estate and catalog health in Data Estate Insights
In Microsoft Purview Data Estate Insights, you can get an overview of all assets inventoried in the Data Map, and any key gaps that can be closed by governance stakeholders, for better governance of the data estate.
-> [!NOTE]
-> After you have scanned your source types, give asset insights 3-8 hours to reflect the new assets. The delay may be due to high traffic in deployment region or size of your workload. For further information, please contact support.
- 1. Navigate to your Microsoft Purview account in the Azure portal. 1. On the **Overview** page, in the **Get Started** section, select the **Open Microsoft Purview governance portal** tile.
purview Enable Disable Data Estate Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/enable-disable-data-estate-insights.md
Last updated 06/27/2022
# Disable or enable Data Estate Insights
-> [!IMPORTANT]
-> The option to disable the Data Estate Insights application will only be available July 1st after 9am PST.
- Microsoft Purview Data Estate Insights automatically aggregates metrics and creates reports about your Microsoft Purview account and your data estate. When you scan registered sources and populate your Microsoft Purview Data Map, the Data Estate Insights application automatically extracts valuable governance gaps and highlights them in its top metrics. It also provides drill-down experience that enables all stakeholders, such as data owners and data stewards, to take appropriate action to close the gaps. These features are optional and can be enabled or disabled at any time. This article provides the specific steps required to enable or disable Microsoft Purview Data Estate Insights features.
search Cognitive Search Skill Text Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-text-translation.md
Previously updated : 09/16/2022 Last updated : 09/19/2022 # Text Translation cognitive skill The **Text Translation** skill evaluates text and, for each record, returns the text translated to the specified target language. This skill uses the [Translator Text API v3.0](../cognitive-services/translator/reference/v3-0-translate.md) available in Cognitive Services.
-This capability is useful if you expect that your documents may not all be in one language, in which case you can normalize the text to a single language before indexing for search by translating it. It is also useful for localization use cases, where you may want to have copies of the same text available in multiple languages.
+This capability is useful if you expect that your documents may not all be in one language, in which case you can normalize the text to a single language before indexing for search by translating it. It's also useful for localization use cases, where you may want to have copies of the same text available in multiple languages.
-The [Translator Text API v3.0](../cognitive-services/translator/reference/v3-0-reference.md) is a non-regional Cognitive Service, meaning that your data is not guaranteed to stay in the same region as your Azure Cognitive Search or attached Cognitive Services resource.
+The [Translator Text API v3.0](../cognitive-services/translator/reference/v3-0-reference.md) is a non-regional Cognitive Service, meaning that your data isn't guaranteed to stay in the same region as your Azure Cognitive Search or attached Cognitive Services resource.
> [!NOTE] > This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
->
-> When using this skill, take into consideration that all documents in the source will be processed for translation, even if the source document language is the same as the required target language. This is useful for multi-language support within the same document. However, keep this in mind when planning for your source location data, to avoid unexpected billing charges from documents that didn't need to be processed for translation.
>
+> When using this skill, all documents in the source are processed and billed for translation, even if the source and target languages are the same. This behavior is useful for multi-language support within the same document, but it can result in unnecessary processing. To avoid unexpected billing charges from documents that don't need processing, move them out of the data source container prior to running the skill.
+>
+
+## @odata.type
-## @odata.type
Microsoft.Skills.Text.TranslationSkill ## Data limits+ The maximum size of a record should be 50,000 characters as measured by [`String.Length`](/dotnet/api/system.string.length). If you need to break up your data before sending it to the text translation skill, consider using the [Text Split skill](cognitive-search-skill-textsplit.md). ## Skill parameters
Parameters are case-sensitive.
| Inputs | Description | ||-|
-| defaultToLanguageCode | (Required) The language code to translate documents into for documents that don't specify the to language explicitly. <br/> See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
-| defaultFromLanguageCode | (Optional) The language code to translate documents from for documents that don't specify the from language explicitly. If the defaultFromLanguageCode is not specified, the automatic language detection provided by the Translator Text API will be used to determine the from language. <br/> See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
-| suggestedFrom | (Optional) The language code to translate documents from when neither the fromLanguageCode input nor the defaultFromLanguageCode parameter are provided, and the automatic language detection is unsuccessful. If the suggestedFrom language is not specified, English (en) will be used as the suggestedFrom language. <br/> See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
+| defaultToLanguageCode | (Required) The language code to translate documents into for documents that don't specify the "to" language explicitly. <br/> See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
+| defaultFromLanguageCode | (Optional) The language code to translate documents from for documents that don't specify the "from" language explicitly. If the defaultFromLanguageCode isn't specified, the automatic language detection provided by the Translator Text API will be used to determine the "from" language. <br/> See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
+| suggestedFrom | (Optional) The language code to translate documents from if `fromLanguageCode` or `defaultFromLanguageCode` are unspecified, and the automatic language detection is unsuccessful. If the suggestedFrom language isn't specified, English (en) will be used as the suggestedFrom language. <br/> See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
## Skill inputs | Input name | Description | |--|-| | text | The text to be translated.|
-| toLanguageCode | A string indicating the language the text should be translated to. If this input is not specified, the defaultToLanguageCode will be used to translate the text. <br/>See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
-| fromLanguageCode | A string indicating the current language of the text. If this parameter is not specified, the defaultFromLanguageCode (or automatic language detection if the defaultFromLanguageCode is not provided) will be used to translate the text. <br/>See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
+| toLanguageCode | A string indicating the language the text should be translated to. If this input isn't specified, the defaultToLanguageCode will be used to translate the text. <br/>See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
+| fromLanguageCode | A string indicating the current language of the text. If this parameter isn't specified, the defaultFromLanguageCode (or automatic language detection if the defaultFromLanguageCode isn't provided) will be used to translate the text. <br/>See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
## Skill outputs

| Output name | Description |
|--|-|
| translatedText | The string result of the text translation from the translatedFromLanguageCode to the translatedToLanguageCode.|
-| translatedToLanguageCode | A string indicating the language code the text was translated to. Useful if you are translating to multiple languages and want to be able to keep track of which text is which language.|
+| translatedToLanguageCode | A string indicating the language code the text was translated to. Useful if you're translating to multiple languages and want to be able to keep track of which text is which language.|
| translatedFromLanguageCode | A string indicating the language code the text was translated from. Useful if you opted for the automatic language detection option as this output will give you the result of that detection.|

## Sample definition
Parameters are case-sensitive.
}
```

-
## Errors and warnings
-If you provide an unsupported language code for either the from or the to language, an error is generated and text is not translated.
+
+If you provide an unsupported language code for either the "to" or "from" language, an error is generated, and text isn't translated.
If your text is empty, a warning will be produced.
-If your text is larger than 50,000 characters, only the first 50,000 characters will be translated and a warning will be issued.
+If your text is larger than 50,000 characters, only the first 50,000 characters will be translated, and a warning will be issued.
## See also
search Search Security Trimming For Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-trimming-for-azure-search.md
Last updated 12/16/2020
# Security filters for trimming results in Azure Cognitive Search
-You can apply security filters to trim search results in Azure Cognitive Search based on user identity. This search experience generally requires comparing the identity of whoever requests the search against a field containing the principles who have permissions to the document. When a match is found, the user or principal (such as a group or role) has access to that document.
+You can apply security filters to trim search results in Azure Cognitive Search based on user identity. This search experience generally requires comparing the identity of whoever requests the search against a field containing the principals who have permissions to the document. When a match is found, the user or principal (such as a group or role) has access to that document.
One way to achieve security filtering is through a complicated disjunction of equality expressions: for example, `Id eq 'id1' or Id eq 'id2'`, and so forth. This approach is error-prone, difficult to maintain, and in cases where the list contains hundreds or thousands of values, slows down query response time by many seconds.
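
The pattern this article builds on avoids that disjunction by storing the principal identifiers in a collection field on each document and filtering with the `search.in` function. As a hedged sketch (the `group_ids` field name is a placeholder, not defined in this excerpt):

```
group_ids/any(g: search.in(g, 'group_id1, group_id2, group_id3'))
```

Because `search.in` accepts a single delimited string of values, the filter stays compact and fast even when a principal belongs to many groups.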
This article described a pattern for filtering results based on user identity an
For an alternative pattern based on Active Directory, or to revisit other security features, see the following links.

* [Security filters for trimming results using Active Directory identities](search-security-trimming-for-azure-search-with-aad.md)
-* [Security in Azure Cognitive Search](search-security-overview.md)
+* [Security in Azure Cognitive Search](search-security-overview.md)
sentinel Add Advanced Conditions To Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/add-advanced-conditions-to-automation-rules.md
+
+ Title: Add advanced conditions to Microsoft Sentinel automation rules
+description: This article explains how to add complex, advanced "Or" conditions to automation rules in Microsoft Sentinel, for more effective triage of incidents.
++ Last updated : 09/13/2022+++
+# Add advanced conditions to Microsoft Sentinel automation rules
+
+> [!IMPORTANT]
+>
+> The advanced conditions capability for automation rules is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article explains how to add advanced "Or" conditions to automation rules in Microsoft Sentinel, for more effective triage of incidents.
+
+Add "Or" conditions in the form of *condition groups* in the Conditions section of your automation rule.
+
+Condition groups can contain two levels of conditions:
+
+- [**Simple**](#example-1-simple-conditions): At least two conditions, each separated by an `OR` operator:
+
+ - **A `OR` B**
+ - **A `OR` B `OR` C** ([See Example 1B below](#example-1b-add-more-or-conditions).)
+ - and so on.
+
+- [**Compound**](#example-2-compound-conditions): More than two conditions, with at least two conditions on at least one side of an `OR` operator:
+
+ - **(A `and` B) `OR` C**
+ - **(A `and` B) `OR` (C `and` D)**
+ - **(A `and` B) `OR` (C `and` D `and` E)**
+ - **(A `and` B) `OR` (C `and` D) `OR` (E `and` F)**
+ - and so on.
+
+You can see that this capability affords you great power and flexibility in determining when rules will run. It can also greatly increase your efficiency by enabling you to combine many old automation rules into one new rule.
+
+## Add a condition group
+
+Since condition groups offer a lot more power and flexibility in creating automation rules, the best way to explain how to do this is by presenting some examples.
+
+Let's create a rule that will change the severity of an incoming incident from whatever it is to High, assuming it meets the conditions we'll set.
+
+1. From the **Automation** page, select **Create > Automation rule** from the button bar at the top.
+
+ See the [general instructions for creating an automation rule](create-manage-use-automation-rules.md) for details.
+
+1. Give the rule a name: "Triage: Change Severity to High"
+
+1. Select the trigger **When incident is created**.
+
+1. Under **Conditions**, leave the **Analytics rule name** condition as is. We'll add more conditions below.
+
+1. Under **Actions**, select **Change severity** from the drop-down list.
+
+1. Select **High** from the drop-down list that appears below **Change severity**.
++
+## Example 1: simple conditions
+
+In this first example, we'll create a simple condition group: If either condition A **or** condition B is true, the rule will run and the incident's severity will be set to *High*.
+
+1. Select the **+ Add** expander and choose **Condition group (Or) (Preview)** from the drop-down list.
+
+ :::image type="content" source="media/add-advanced-conditions-to-automation-rules/add-condition-group.png" alt-text="Screenshot of adding a condition group to an automation rule's condition set.":::
+
+1. See that two sets of condition fields are displayed, separated by an `OR` operator. These are the "A" and "B" conditions we mentioned above: If A or B is true, the rule will run.
+ (Don't be confused by all the different layers of "Add" links - these will all be explained.)
+
+ :::image type="content" source="media/add-advanced-conditions-to-automation-rules/empty-condition-group.png" alt-text="Screenshot of empty condition group fields.":::
+
+1. Let's decide what these conditions will be. That is, what two *different* conditions will cause the incident severity to be changed to *High*? Let's suggest the following:
+
+ - If the incident's associated MITRE ATT&CK **Tactics** include any of the four we've selected from the drop-down (see the image below), the severity should be raised to High.
+
+ - If the incident contains a **Host name** entity named "SUPER_SECURE_STATION", the severity should be raised to High.
+
+ :::image type="content" source="media/add-advanced-conditions-to-automation-rules/add-simple-or-condition.png" alt-text="Screenshot of adding simple OR conditions to an automation rule.":::
+
+ As long as at least ONE of these conditions is true, the actions we define in the rule will run, changing the severity of the incident to High.
+
+### Example 1A: Add an OR value within a single condition
+
+Let's say we have not one, but two super-sensitive workstations whose incidents we want to make high-severity.
+We can add another value to an existing condition (for any conditions based on entity properties) by selecting the dice icon to the right of the existing value and adding the new value below.
++
+### Example 1B: Add more OR conditions
+
+Let's say we want to have this rule run if one of THREE (or more) conditions is true. If A *or* B *or* C is true, the rule will run.
+
+1. Remember all those "Add" links? To add another OR condition, select the **+ Add** connected by a line to the `OR` operator.
+
+ :::image type="content" source="media/add-advanced-conditions-to-automation-rules/add-another-or-condition.png" alt-text="Screenshot of adding another OR condition to an automation rule.":::
+
+1. Now, fill in the parameters and values of this condition the same way you did the first two.
+
+ :::image type="content" source="media/add-advanced-conditions-to-automation-rules/added-another-or-condition.png" alt-text="Screenshot of another OR condition added to an automation rule.":::
+
+## Example 2: compound conditions
+
+Now we decide we're going to be a little more picky. We want to add more conditions to each side of our original OR condition. That is, we want the rule to run if A *and* B are true, *OR* if C *and* D are true.
+
+1. To add a condition to one side of an OR condition group, select the **+ Add** link immediately below the existing condition, on the same side of the `OR` operator (in the same blue-shaded area) to which you want to add the new condition.
+
+ :::image type="content" source="media/add-advanced-conditions-to-automation-rules/add-a-compound-condition.png" alt-text="Screenshot of adding a compound condition to an automation rule.":::
+
+ You'll see a new row added where the **+ Add** link was, separated by an `AND` operator.
+
+ :::image type="content" source="media/add-advanced-conditions-to-automation-rules/empty-new-condition.png" alt-text="Screenshot of empty new condition row in automation rules.":::
+
+1. Fill in the parameters and values of this condition the same way you did the others.
+
+ :::image type="content" source="media/add-advanced-conditions-to-automation-rules/fill-in-new-condition.png" alt-text="Screenshot of new condition fields to fill in to add to automation rules.":::
+
+1. Repeat the previous two steps to add an AND condition to the other side of the OR condition group.
+
+ :::image type="content" source="media/add-advanced-conditions-to-automation-rules/add-compound-conditions.png" alt-text="Screenshot of adding multiple compound conditions to an automation rule.":::
+
+That's it! You can use what you've learned here to add more conditions and condition groups, using different combinations of `AND` and `OR` operators, to create powerful, flexible, and efficient automation rules to really help your SOC run smoothly and lower your response and resolution times.
+
+## Next steps
+
+In this document, you learned how to add condition groups using `OR` operators to automation rules.
+
+- For instructions on creating basic automation rules, see [Create and use Microsoft Sentinel automation rules to manage response](create-manage-use-automation-rules.md).
+- To learn more about automation rules, see [Automate incident handling in Microsoft Sentinel with automation rules](automate-incident-handling-with-automation-rules.md)
+- To learn more about advanced automation options, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
+- For help with implementing automation rules and playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
sentinel Create Manage Use Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-manage-use-automation-rules.md
Title: Create and use Microsoft Sentinel automation rules to manage response | Microsoft Docs
+ Title: Create and use Microsoft Sentinel automation rules to manage response
description: This article explains how to create and use automation rules in Microsoft Sentinel to manage and handle incidents, in order to maximize your SOC's efficiency and effectiveness in response to security threats. Previously updated : 05/23/2022 Last updated : 09/13/2022 # Create and use Microsoft Sentinel automation rules to manage response
+> [!IMPORTANT]
+>
+> Some features of automation rules are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> Features in preview will be so indicated when they are mentioned throughout this article.
This article explains how to create and use automation rules in Microsoft Sentinel to manage and orchestrate threat response, in order to maximize your SOC's efficiency and effectiveness.
From the **Trigger** drop-down, select **When incident is created**, **When inci
### Add conditions (incidents only)
-Add any other conditions you want this automation rule's activation to depend on. Select **+ Add condition** and choose conditions from the drop-down list. The list of conditions is populated by incident property and [entity property](entities-reference.md) fields.
+Add any other conditions you want this automation rule's activation to depend on. You now have two ways to add conditions:
+
+- **AND conditions**: individual conditions that will be evaluated as a group. The rule will execute if *all* the conditions of this type are met. This type of condition will be explained below.
+
+- **OR conditions** (also known as *condition groups*, **now in Preview**): groups of conditions, each of which will be evaluated independently. The rule will execute if one or more groups of conditions are true. To learn how to work with these complex types of conditions, see [Add advanced conditions to automation rules](add-advanced-conditions-to-automation-rules.md).
+
+Select the **+ Add** expander and choose **Condition (And)** from the drop-down list. The list of conditions is populated by incident property and [entity property](entities-reference.md) fields.
+ 1. Select a property from the first drop-down box on the left. You can begin typing any part of a property name in the search box to dynamically filter the list, so you can find what you're looking for quickly. :::image type="content" source="media/create-manage-use-automation-rules/filter-list.png" alt-text="Screenshot of typing in a search box to filter the list of choices.":::
Add any other conditions you want this automation rule's activation to depend on
| Property | Operator set |
| -- | -- |
| - Title<br>- Description<br>- Tag<br>- All listed entity properties | - Equals/Does not equal<br>- Contains/Does not contain<br>- Starts with/Does not start with<br>- Ends with/Does not end with |
- | - Severity<br>- Status<br>- Incident provider | - Equals/Does not equal |
- | - Tactics<br>- Alert product names | - Contains/Does not contain |
+ | - Severity<br>- Status<br>- Incident provider<br>- Custom details key (Preview) | - Equals/Does not equal |
+ | - Tactics<br>- Alert product names<br>- Custom details value (Preview) | - Contains/Does not contain |
#### Conditions available with the update trigger
Add any other conditions you want this automation rule's activation to depend on
| - Tag (in addition to above)<br>- Alerts<br>- Comments | - Added |
| - Severity<br>- Status | - Equals/Does not equal<br>- Changed<br>- Changed from<br>- Changed to |
| - Owner | - Changed |
- | - Incident provider<br>- Updated by | - Equals/Does not equal |
+ | - Incident provider<br>- Updated by<br>- Custom details key (Preview) | - Equals/Does not equal |
| - Tactics | - Contains/Does not contain<br>- Added |
- | - Alert product names | - Contains/Does not contain |
+ | - Alert product names<br>- Custom details value (Preview) | - Contains/Does not contain |
1. Enter a value in the text box on the right. Depending on the property you chose, this might be a drop-down list from which you would select the values you choose. You might also be able to add several values by selecting the icon to the right of the text box (highlighted by the red arrow below). :::image type="content" source="media/create-manage-use-automation-rules/add-values-to-condition.png" alt-text="Screenshot of adding values to your condition in automation rules.":::
+Again, for setting complex **Or** conditions with different fields, see [Add advanced conditions to automation rules](add-advanced-conditions-to-automation-rules.md).
+
+#### Conditions based on custom details (Preview)
+
+You can set the value of a [custom detail surfaced in an incident](surface-custom-details-in-alerts.md) as a condition of an automation rule. Recall that custom details are data points in raw event log records that can be surfaced and displayed in alerts and the incidents generated from them. Through custom details you can get to the actual relevant content in your alerts without having to dig through query results.
+
+To add a condition based on a custom detail, take the following steps:
+
+1. Create a new automation rule as described above.
+
+1. Add a condition or a condition group.
+
+1. Select **Custom details key (Preview)** from the properties drop-down list. Select **Equals** or **Does not equal** from the operators drop-down list.
+
+ For the custom details condition, the values in the last drop-down list come from the custom details that were surfaced in all the analytics rules listed in the first condition. Select the custom detail you want to use as a condition.
+
+ :::image type="content" source="media/create-manage-use-automation-rules/custom-detail-key-condition.png" alt-text="Screenshot of adding a custom detail key as a condition.":::
+
+1. You've now chosen the field you want to evaluate for this condition. Now you have to specify the value appearing in that field that will make this condition evaluate to *true*.
+Select **+ Add item condition**.
+
+ :::image type="content" source="media/create-manage-use-automation-rules/add-item-condition.png" alt-text="Screenshot of selecting add item condition for automation rules.":::
+
+ The value condition line appears below.
+
+ :::image type="content" source="media/create-manage-use-automation-rules/custom-details-value.png" alt-text="Screenshot of the custom detail value field appearing.":::
+
+1. Select **Contains** or **Does not contain** from the operators drop-down list. In the text box to the right, enter the value for which you want the condition to evaluate to *true*.
+
+ :::image type="content" source="media/create-manage-use-automation-rules/custom-details-value-filled.png" alt-text="Screenshot of the custom detail value field filled in.":::
+
+In this example, if the incident has the custom detail *DestinationEmail*, and if the value of that detail is `pwned@bad-botnet.com`, the actions defined in the automation rule will run.
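
For context, a custom detail key maps to a column in the analytics rule's query results. The following is a hedged KQL sketch (the table, filter, and column names are illustrative assumptions, not taken from this article) of a query whose `RecipientEmailAddress` column could be surfaced as a *DestinationEmail* custom detail:

```kusto
// Hypothetical analytics rule query. Mapping RecipientEmailAddress to a custom
// detail key named "DestinationEmail" (in the rule's Custom details settings)
// would make that value available to automation rule conditions like the one above.
EmailEvents
| where ThreatTypes has "Phish"
| project TimeGenerated, NetworkMessageId, SenderFromAddress, RecipientEmailAddress
```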
+
### Add actions

Choose the actions you want this automation rule to take. Available actions include **Assign owner**, **Change status**, **Change severity**, **Add tags**, and **Run playbook**. You can add as many actions as you like.
Playbook actions within an automation rule may be treated differently under some
| More than two minutes | Two minutes after playbook began running,<br>regardless of whether or not it was completed |

## Next steps
+
In this document, you learned how to use automation rules to centrally manage response automation for Microsoft Sentinel incidents and alerts.
+- To learn how to add advanced conditions with `OR` operators to automation rules, see [Add advanced conditions to Microsoft Sentinel automation rules](add-advanced-conditions-to-automation-rules.md).
- To learn more about automation rules, see [Automate incident handling in Microsoft Sentinel with automation rules](automate-incident-handling-with-automation-rules.md)
- To learn more about advanced automation options, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
- To migrate alert-trigger playbooks to be invoked by automation rules, see [Migrate your Microsoft Sentinel alert-trigger playbooks to automation rules](migrate-playbooks-to-automation-rules.md)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## September 2022
+- [Create automation rule conditions based on custom details (Preview)](#create-automation-rule-conditions-based-on-custom-details-preview)
+- [Add advanced "Or" conditions to automation rules (Preview)](#add-advanced-or-conditions-to-automation-rules-preview)
- [Heads up: Name fields being removed from UEBA UserPeerAnalytics table](#heads-up-name-fields-being-removed-from-ueba-userpeeranalytics-table)
- [Windows DNS Events via AMA connector (Preview)](#windows-dns-events-via-ama-connector-preview)
- [Create and delete incidents manually (Preview)](#create-and-delete-incidents-manually-preview)
- [Add entities to threat intelligence (Preview)](#add-entities-to-threat-intelligence-preview)
+### Create automation rule conditions based on custom details (Preview)
+
+You can set the value of a [custom detail surfaced in an incident](surface-custom-details-in-alerts.md) as a condition of an automation rule. Recall that custom details are data points in raw event log records that can be surfaced and displayed in alerts and the incidents generated from them. Through custom details you can get to the actual relevant content in your alerts without having to dig through query results.
+
+Learn how to [add a condition based on a custom detail](create-manage-use-automation-rules.md#conditions-based-on-custom-details-preview).
+
+### Add advanced "Or" conditions to automation rules (Preview)
+
+You can now add OR conditions to automation rules. Also known as condition groups, these allow you to combine several rules with identical actions into a single rule, greatly increasing your SOC's efficiency.
+
+For more information, see [Add advanced conditions to Microsoft Sentinel automation rules](add-advanced-conditions-to-automation-rules.md).
### Heads up: Name fields being removed from UEBA UserPeerAnalytics table

As of **September 30, 2022**, the UEBA engine will no longer perform automatic lookups of user IDs and resolve them into names. This change will result in the removal of four name fields from the *UserPeerAnalytics* table:
Learn how to [add an entity to your threat intelligence](add-entity-to-threat-in
### Heads up: Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)
-[Microsoft 365 Defender](/microsoft-365/security/defender/) now includes the integration of [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents.
+[Microsoft 365 Defender](/microsoft-365/security/defender/) is gradually rolling out the integration of [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents.
Microsoft Sentinel customers with the [Microsoft 365 Defender connector](microsoft-365-defender-sentinel-integration.md) enabled will automatically start receiving AADIP alerts and incidents in their Microsoft Sentinel incidents queue. Depending on your configuration, this may affect you as follows:
service-connector Quickstart Cli App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-app-service-connection.md
Title: Quickstart - Create a service connection in App Service with the Azure CLI description: Quickstart showing how to create a service connection in App Service with the Azure CLI--++ Previously updated : 05/03/2022 Last updated : 09/15/2022 ms.devlang: azurecli
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure
## View supported target service types
-Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command create and manage service connections to App Service.
+Use the Azure CLI [az webapp connection list](/cli/azure/webapp/connection#az-webapp-connection-list) command to get a list of supported target services for App Service.
```azurecli-interactive
az provider register -n Microsoft.ServiceLinker
az webapp connection list-support-types --output table
```
#### [Using Access Key](#tab/Using-access-key)
-Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command to create a service connection to an Azure Blob Storage with an access key, providing the following information:
+Use the Azure CLI [az webapp connection create](/cli/azure/webapp/connection/create) command to create a service connection to an Azure Blob Storage with an access key, providing the following information:
- **Source compute service resource group name:** the resource group name of the App Service.
- **App Service name:** the name of your App Service that connects to the target service.
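
As a hedged sketch only (resource and account names are placeholders; check the `az webapp connection create storage-blob` reference for the authoritative parameter list), the command might look like this:

```azurecli
az webapp connection create storage-blob \
    --resource-group <source-resource-group> \
    --name <app-service-name> \
    --target-resource-group <storage-account-resource-group> \
    --account <storage-account-name> \
    --secret
```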
service-fabric Service Fabric Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started.md
You can download the latest runtime and SDK from the links below:
| Package |Version|
| | |
-|[Install Service fabric runtime for Windows](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.9.0.1048.9590.exe) | 9.0.1048 |
-|[Install Service Fabric SDK](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.6.0.1048.msi) | 6.0.1048 |
+|[Install Service fabric runtime for Windows](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.9.0.1107.9590.exe) | 9.0.1107 |
+|[Install Service Fabric SDK](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.6.0.1107.msi) | 6.0.1107 |
You can find direct links to the installers for previous releases on [Service Fabric Releases](https://github.com/microsoft/service-fabric/tree/master/release_notes)
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
The tables in this article outline the Service Fabric and platform versions that
### Current versions

| Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support |
| | | | | | | |
+| 9.0 CU3<br>9.0.1107.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
| 9.0 CU2<br>9.0.1048.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | | 9.0 CU1<br>9.0.1028.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | | 9.0 RTO<br>9.0.1017.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
+| 8.2CU6<br>8.2.1686.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
| Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support |
Support for Service Fabric on a specific OS ends when support for the OS version
| OS version | Service Fabric support end date | OS Lifecycle link |
||||
+|Windows Server 2022|10/14/2031|<a href="/lifecycle/products/windows-server-2022">Windows Server 2022 - Microsoft Lifecycle</a>|
|Windows Server 2019|1/9/2029|<a href="/lifecycle/products/windows-server-2019">Windows Server 2019 - Microsoft Lifecycle</a>|
|Windows Server 2016 |1/12/2027|<a href="/lifecycle/products/windows-server-2016">Windows Server 2016 - Microsoft Lifecycle</a>|
|Windows Server 2012 R2 |10/10/2023|<a href="/lifecycle/products/windows-server-2012-r2">Windows Server 2012 R2 - Microsoft Lifecycle</a>|
Support for Service Fabric on a specific OS ends when support for the OS version
### Current versions

| Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support |
| | | | | | | |
+| 9.0 CU3<br>9.0.1103.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
| 9.0 CU2.1<br>9.0.1086.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
+| 8.2 CU6<br>8.2.1485.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | December 1, 2022 |
| 8.2 CU5.1<br>8.2.1483.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | December 1, 2022 |

| Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support |
The following table lists the version names of Service Fabric and their correspo
| Version name | Windows version number | Linux version number |
| | | |
+| 9.0 CU3 | 9.0.1107.9590 | 9.0.1103.1 |
| 9.0 CU2.1 | Not applicable | 9.0.1086.1 |
+| 8.2 CU6 | 8.2.1686.9590 | 8.2.1485.1 |
| 8.2 CU5.1 | Not applicable | 8.2.1483.1 |
| 9.0 CU2 | 9.0.1048.9590 | 9.0.1056.1 |
| 9.0 CU1 | 9.0.1028.9590 | 9.0.1035.1 |
spring-apps How To Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-staging-environment.md
Title: Set up a staging environment in Azure Spring Apps | Microsoft Docs
+ Title: Set up a staging environment in Azure Spring Apps
description: Learn how to use blue-green deployment with Azure Spring Apps
This article explains how to set up a staging deployment by using the blue-green
## Prerequisites
-* Azure Spring Apps instance on a Standard pricing tier
-* [Azure Spring Apps extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI
+- Azure Spring Apps instance on a Standard pricing tier
+- [Azure Spring Apps extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI
-This article uses an application built from Spring Initializr. If you want to use a different application for this example, you'll need to make a simple change in a public-facing portion of the application to differentiate your staging deployment from production.
+This article uses an application built from Spring Initializr. If you want to use a different application for this example, make a change in a public-facing portion of the application to differentiate your staging deployment from the production deployment.
> [!TIP]
> [Azure Cloud Shell](https://shell.azure.com) is a free interactive shell that you can use to run the instructions in this article. It has common, preinstalled Azure tools, including the latest versions of Git, JDK, Maven, and the Azure CLI. If you're signed in to your Azure subscription, start your Cloud Shell instance. To learn more, see [Overview of Azure Cloud Shell](../cloud-shell/overview.md).
To build the application, follow these steps:
1. Generate the code for the sample app by using Spring Initializr with [this configuration](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.3.4.RELEASE&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-starter-sleuth,cloud-starter-zipkin,cloud-config-client).
-2. Download the code.
-3. Add the following *HelloController.java* source file to the folder *\src\main\java\com\example\hellospring\*:
+1. Download the code.
+1. Add the following *HelloController.java* source file to the folder *\src\main\java\com\example\hellospring\*:
```java
package com.example.hellospring;
To build the application, follow these steps:
}
```
-4. Build the *.jar* file:
+1. Build the *.jar* file:
```azurecli
mvn clean package -DskipTests
```
-5. Create the app in your Azure Spring Apps instance:
+1. Create the app in your Azure Spring Apps instance:
```azurecli
az spring app create -n demo -g <resourceGroup> -s <Azure Spring Apps instance> --assign-endpoint
```
-6. Deploy the app to Azure Spring Apps:
+1. Deploy the app to Azure Spring Apps:
```azurecli
az spring app deploy -n demo -g <resourceGroup> -s <Azure Spring Apps instance> --jar-path target\hellospring-0.0.1-SNAPSHOT.jar
```
-7. Modify the code for your staging deployment:
+1. Modify the code for your staging deployment:
```java
package com.example.hellospring;
To build the application, follow these steps:
}
```
-8. Rebuild the *.jar* file:
+1. Rebuild the *.jar* file:
```azurecli
mvn clean package -DskipTests
```
-9. Create the green deployment:
+1. Create the green deployment:
```azurecli
az spring app deployment create -n green --app demo -g <resourceGroup> -s <Azure Spring Apps instance> --jar-path target\hellospring-0.0.1-SNAPSHOT.jar
```
To build the application, follow these steps:
## View apps and deployments
-View deployed apps by using the following procedure:
+Use the following steps to view deployed apps.
1. Go to your Azure Spring Apps instance in the Azure portal.

1. From the left pane, open the **Apps** pane to view apps for your service instance.
- ![Screenshot of the open Apps pane.](media/spring-cloud-blue-green-staging/app-dashboard.png)
+ :::image type="content" source="media/how-to-staging-environment/app-dashboard.png" lightbox="media/how-to-staging-environment/app-dashboard.png" alt-text="Screenshot of the Apps pane showing apps for your service instance.":::
-1. You can select an app and view details.
+1. Select an app to view details.
- ![Screenshot of details for an app.](media/spring-cloud-blue-green-staging/app-overview.png)
+ :::image type="content" source="media/how-to-staging-environment/app-overview.png" lightbox="media/how-to-staging-environment/app-overview.png" alt-text="Screenshot of details for an app.":::
1. Open **Deployments** to see all deployments of the app. The grid shows both production and staging deployments.
- ![Screenshot that shows listed app deployments.](media/spring-cloud-blue-green-staging/deployments-dashboard.png)
+ :::image type="content" source="media/how-to-staging-environment/deployments-dashboard.png" lightbox="media/how-to-staging-environment/deployments-dashboard.png" alt-text="Screenshot that shows listed app deployments.":::
1. Select the URL to open the currently deployed application.
- ![Screenshot that shows the U R L for the deployed application.](media/spring-cloud-blue-green-staging/running-blue-app.png)
+ :::image type="content" source="media/how-to-staging-environment/running-blue-app.png" lightbox="media/how-to-staging-environment/running-blue-app.png" alt-text="Screenshot that shows the URL of the deployed application.":::
1. Select **Production** in the **State** column to see the default app.
- ![Screenshot that shows the U R L for the default app.](media/spring-cloud-blue-green-staging/running-default-app.png)
+ :::image type="content" source="media/how-to-staging-environment/running-default-app.png" lightbox="media/how-to-staging-environment/running-default-app.png" alt-text="Screenshot that shows the URL of the default app.":::
1. Select **Staging** in the **State** column to see the staging app.
- ![Screenshot that shows the U R L for the staging app.](media/spring-cloud-blue-green-staging/running-staging-app.png)
+ :::image type="content" source="media/how-to-staging-environment/running-staging-app.png" lightbox="media/how-to-staging-environment/running-staging-app.png" alt-text="Screenshot that shows the URL of the staging app.":::
>[!TIP]
-> * Confirm that your test endpoint ends with a slash (/) to ensure that the CSS file is loaded correctly.
-> * If your browser requires you to enter login credentials to view the page, use [URL decode](https://www.urldecoder.org/) to decode your test endpoint. URL decode returns a URL in the format *https://\<username>:\<password>@\<cluster-name>.test.azureapps.io/gateway/green*. Use this format to access your endpoint.
+> Confirm that your test endpoint ends with a slash (/) to ensure that the CSS file is loaded correctly. If your browser requires you to enter login credentials to view the page, use [URL decode](https://www.urldecoder.org/) to decode your test endpoint. URL decode returns a URL in the format `https://\<username>:\<password>@\<cluster-name>.test.azureapps.io/gateway/green`. Use this format to access your endpoint.
>[!NOTE]
-> Configuration server settings apply to both your staging environment and your production environment. For example, if you set the context path (*server.servlet.context-path*) for your app gateway in the configuration server as *somepath*, the path to your green deployment changes to *https://\<username>:\<password>@\<cluster-name>.test.azureapps.io/gateway/green/somepath/...*.
+> Configuration server settings apply to both your staging environment and your production environment. For example, if you set the context path (*server.servlet.context-path*) for your app gateway in the configuration server as *somepath*, the path to your green deployment changes to `https://\<username>:\<password>@\<cluster-name>.test.azureapps.io/gateway/green/somepath/...`.
If you visit your public-facing app gateway at this point, you should see the old page without your new change.
If you visit your public-facing app gateway at this point, you should see the ol
1. Select the ellipsis after **Registration status** of the green deployment, and then select **Set as production**.
- ![Screenshot that shows selections for setting the staging build to production.](media/spring-cloud-blue-green-staging/set-staging-deployment.png)
+ :::image type="content" source="media/how-to-staging-environment/set-staging-deployment.png" lightbox="media/how-to-staging-environment/set-staging-deployment.png" alt-text="Screenshot that shows selections for setting the staging build to production.":::
1. Confirm that the URL of the app displays your changes.
- ![Screenshot that shows the U R L of the app now in production.](media/spring-cloud-blue-green-staging/new-production-deployment.png)
+ :::image type="content" source="media/how-to-staging-environment/new-production-deployment.png" lightbox="media/how-to-staging-environment/new-production-deployment.png" alt-text="Screenshot that shows the URL of the app now in production.":::
>[!NOTE]
> After you've set the green deployment as the production environment, the previous deployment becomes the staging deployment.
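
If you script this step instead of using the portal, a hedged sketch using the app and deployment names from this article (assuming the `az spring app set-deployment` command in the Azure Spring Apps CLI extension) might look like this:

```azurecli
az spring app set-deployment --deployment green -n demo -g <resourceGroup> -s <Azure Spring Apps instance>
```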
az spring app deployment delete -n <staging-deployment-name> -g <resource-group-
## Next steps
-* [CI/CD for Azure Spring Apps](./how-to-cicd.md?pivots=programming-language-java)
+- [CI/CD for Azure Spring Apps](./how-to-cicd.md?pivots=programming-language-java)
spring-apps How To Start Stop Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-start-stop-service.md
This article shows you how to start or stop your Azure Spring Apps service insta
> [!NOTE]
> Stop and start is currently under preview and we do not recommend this feature for production.
-Your applications running in Azure Spring Apps may not need to run continuously - for example, if you have a service instance that's used only during business hours. At these times, Azure Spring Apps may be idle, and running only the system components.
+Your applications running in Azure Spring Apps may not need to run continuously. For example, an application may not need to run continuously if you have a service instance that's used only during business hours. There may be times when Azure Spring Apps is idle and running only the system components.
You can reduce the active footprint of Azure Spring Apps by reducing the running instances and ensuring costs for compute resources are reduced. To reduce your costs further, you can completely stop your Azure Spring Apps service instance. All user apps and system components will be stopped. However, all your objects and network settings will be saved so you can restart your service instance and pick up right where you left off.

> [!NOTE]
-> The state of a stopped Azure Spring Apps service instance is preserved for up to 90 days during preview. If your cluster is stopped for more than 90 days, the cluster state cannot be recovered.
-> The maximum stop time may change after preview.
+> The state of a stopped Azure Spring Apps service instance is preserved for up to 90 days during preview. If your cluster is stopped for more than 90 days, the cluster state cannot be recovered. The maximum stop time may change after preview.
You can only start, view, or delete a stopped Azure Spring Apps service instance. You must start your service instance before performing any update operation, such as creating or scaling an app.
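
As a hedged sketch of the equivalent CLI operations (assuming the `az spring stop` and `az spring start` commands in the Azure Spring Apps CLI extension; names are placeholders):

```azurecli
# Stop a running Azure Spring Apps service instance
az spring stop --name <service-instance-name> --resource-group <resource-group-name>

# Start the stopped instance again later
az spring start --name <service-instance-name> --resource-group <resource-group-name>
```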
You can only start, view, or delete a stopped Azure Spring Apps service instance
In the Azure portal, use the following steps to stop a running Azure Spring Apps instance:

1. Go to the Azure Spring Apps service overview page.
-2. Select **Stop** to stop a running instance.
- :::image type="content" source="media/stop-start-service/spring-cloud-stop-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Apps Overview page with the Stop button and Status value highlighted.":::
+1. Select **Stop** to stop a running instance.
-3. After the instance stops, the status will show **Succeeded (Stopped)**.
+ :::image type="content" source="media/how-to-start-stop-service/spring-cloud-stop-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Apps Overview page with the Stop button and Status value highlighted.":::
+
+1. After the instance stops, the status will show **Succeeded (Stopped)**.
## Start a stopped instance

In the Azure portal, use the following steps to start a stopped Azure Spring Apps instance:

1. Go to Azure Spring Apps service overview page.
-2. Select **Start** to start a stopped instance.
- :::image type="content" source="media/stop-start-service/spring-cloud-start-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Apps Overview page with the Start button and Status value highlighted.":::
+1. Select **Start** to start a stopped instance.
+
+ :::image type="content" source="media/how-to-start-stop-service/spring-cloud-start-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Apps Overview page with the Start button and Status value highlighted.":::
-3. After the instance starts, the status will show **Succeeded (Running)**.
+1. After the instance starts, the status will show **Succeeded (Running)**.
## [Azure CLI](#tab/azure-cli)
az spring show \
## Next steps

- [Monitor app lifecycle events using Azure Activity log and Azure Service Health](./monitor-app-lifecycle-events.md)
-- [Monitor usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md)
+- [Azure Monitor cost and usage](../azure-monitor/usage-estimated-costs.md)
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
When a new file or directory is created under an existing directory, the default
### umask
-When creating a file or directory, umask is used to modify how the default ACLs are set on the child item. umask is a 9-bit value on parent directories that contains an RWX value for **owning user**, **owning group**, and **other**.
+When creating a default ACL, the umask is applied to the access ACL to determine the initial permissions of a default ACL. If a default ACL is defined on the parent directory, the umask is effectively ignored and the default ACL of the parent directory is used to define these initial values instead.
+
+The umask is a 9-bit value on parent directories that contains an RWX value for **owning user**, **owning group**, and **other**.
The umask for Azure Data Lake Storage Gen2 is a constant value that is set to 007. This value translates to:

| umask component | Numeric form | Short form | Meaning |
||--|||
-| umask.owning_user | 0 | `` | For owning user, copy the parent's default ACL to the child's access ACL |
-| umask.owning_group | 0 | `` | For owning group, copy the parent's default ACL to the child's access ACL |
+| umask.owning_user | 0 | `` | For owning user, copy the parent's access ACL to the child's default ACL |
+| umask.owning_group | 0 | `` | For owning group, copy the parent's access ACL to the child's default ACL |
| umask.other | 7 | `RWX` | For other, remove all permissions on the child's access ACL |
-The umask value used by Azure Data Lake Storage Gen2 effectively means that the value for **other** is never transmitted by default on new children, unless a default ACL is defined on the parent directory. In that case, the umask is effectively ignored and the permissions defined by the default ACL are applied to the child item.
-
-The following pseudocode shows how the umask is applied when creating the ACLs for a child item.
-
-```console
-def set_default_acls_for_new_child(parent, child):
- child.acls = []
- for entry in parent.acls :
- new_entry = None
- if (entry.type == OWNING_USER) :
- new_entry = entry.clone(perms = entry.perms & (~umask.owning_user))
- elif (entry.type == OWNING_GROUP) :
- new_entry = entry.clone(perms = entry.perms & (~umask.owning_group))
- elif (entry.type == OTHER) :
- new_entry = entry.clone(perms = entry.perms & (~umask.other))
- else :
- new_entry = entry.clone(perms = entry.perms )
- child_acls.add( new_entry )
-```
- ## FAQ ### Do I have to enable support for ACLs?
A GUID is shown if the entry represents a user and that user doesn't exist in Az
### How do I set ACLs correctly for a service principal?

When you define ACLs for service principals, it's important to use the Object ID (OID) of the *service principal* for the app registration that you created. It's important to note that registered apps have a separate service principal in the specific Azure AD tenant. Registered apps have an OID that's visible in the Azure portal, but the *service principal* has another (different) OID.
-
+
To get the OID for the service principal that corresponds to an app registration, you can use the `az ad sp show` command. Specify the Application ID as the parameter. Here's an example on obtaining the OID for the service principal that corresponds to an app registration with App ID = 18218b12-1895-43e9-ad80-6e8fc1ea88ce. Run the following command in the Azure CLI: ```azurecli
storage Data Lake Storage Acl Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-dotnet.md
Title: Use .NET to manage ACLs in Azure Data Lake Storage Gen2 description: Use .NET to manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled.- Last updated 02/17/2021-++
storage Data Lake Storage Acl Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-java.md
Title: Use Java to manage ACLs in Azure Data Lake Storage Gen2 description: Use Azure Storage libraries for Java to manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled.-++ Last updated 02/17/2021 ms.devlang: java -
storage Data Lake Storage Acl Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-javascript.md
Title: Use JavaScript (Node.js) to manage ACLs in Azure Data Lake Storage Gen2 description: Use Azure Storage Data Lake client library for JavaScript to manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled.-++ Last updated 03/19/2021-
storage Data Lake Storage Acl Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-python.md
Title: Use Python to manage ACLs in Azure Data Lake Storage Gen2 description: Use Python manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled.-++ Last updated 02/17/2021-
storage Data Lake Storage Directory File Acl Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-dotnet.md
Title: Use .NET to manage data in Azure Data Lake Storage Gen2 description: Use the Azure Storage client library for .NET to manage directories and files in storage accounts that has hierarchical namespace enabled.-++ Last updated 02/17/2021-
storage Data Lake Storage Directory File Acl Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-java.md
Title: Use Java to manage data in Azure Data Lake Storage Gen2 description: Use Azure Storage libraries for Java to manage directories and files in storage accounts that has hierarchical namespace enabled.-++ Last updated 02/17/2021 ms.devlang: java -
storage Data Lake Storage Directory File Acl Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-javascript.md
Title: Use JavaScript (Node.js) to manage data in Azure Data Lake Storage Gen2 description: Use Azure Storage Data Lake client library for JavaScript to manage directories and files in storage accounts that has hierarchical namespace enabled.-++ Last updated 03/19/2021-
storage Data Lake Storage Directory File Acl Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-python.md
Title: Use Python to manage data in Azure Data Lake Storage Gen2 description: Use Python to manage directories and files in storage accounts that has hierarchical namespace enabled.-++ Last updated 02/17/2021-
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
Title: Create and manage a blob snapshot in .NET
description: Learn how to use the .NET client library to create a read-only snapshot of a blob to back up blob data at a given moment in time. -++ Last updated 08/27/2020-+ ms.devlang: csharp
storage Storage Blob Account Delegation Sas Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-account-delegation-sas-create-javascript.md
Title: Create account SAS tokens - JavaScript
description: Create and use account SAS tokens in a JavaScript application that works with Azure Blob Storage. This article helps you set up a project and authorizes access to an Azure Blob Storage endpoint. -++ Last updated 07/05/2022-+
storage Storage Blob Append https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md
Title: Append data to a blob with .NET - Azure Storage description: Learn how to append data to a blob in Azure Storage by using the.NET client library. ---++ Last updated 03/28/2022
storage Storage Blob Container Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md
Title: Delete and restore a blob container with JavaScript - Azure Storage description: Learn how to delete and restore a blob container in your Azure Storage account using the JavaScript client library. -++ Last updated 03/28/2022- ms.devlang: javascript
storage Storage Blob Container Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md
Title: Delete and restore a blob container with .NET - Azure Storage description: Learn how to delete and restore a blob container in your Azure Storage account using the .NET client library. -++ Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Container Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md
Title: Create and manage blob or container leases with .NET - Azure Storage description: Learn how to manage a lock on a blob or container in your Azure Storage account using the .NET client library. -++ Last updated 03/28/2022- ms.devlang: csharp
storage Storage Blob Container Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md
Title: Use JavaScript to manage properties and metadata for a blob container
description: Learn how to set and retrieve system properties and store custom metadata on blob containers in your Azure Storage account using the JavaScript client library. -++ Last updated 03/28/2022-+ ms.devlang: javascript
storage Storage Blob Container Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md
Title: Use .NET to manage properties and metadata for a blob container
description: Learn how to set and retrieve system properties and store custom metadata on blob containers in your Azure Storage account using the .NET client library. -++ Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Containers List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md
Title: List blob containers with JavaScript - Azure Storage description: Learn how to list blob containers in your Azure Storage account using the JavaScript client library. -++ Last updated 03/28/2022-+ ms.devlang: javascript
storage Storage Blob Containers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md
Title: List blob containers with .NET - Azure Storage description: Learn how to list blob containers in your Azure Storage account using the .NET client library. -++ Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Copy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md
Title: Copy a blob with JavaScript - Azure Storage description: Learn how to copy a blob in Azure Storage by using the JavaScript client library. ---++ Last updated 03/28/2022
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
Title: Copy a blob with .NET - Azure Storage description: Learn how to copy a blob in Azure Storage by using the .NET client library. ---++ Last updated 03/28/2022
storage Storage Blob Create User Delegation Sas Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-create-user-delegation-sas-javascript.md
Title: Create user delegation SAS tokens - JavaScript
description: Create and use user delegation SAS tokens in a JavaScript application that works with Azure Blob Storage. This article helps you set up a project and authorizes access to an Azure Blob Storage endpoint. -++ Last updated 07/15/2022-+
storage Storage Blob Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md
Title: Delete and restore a blob with JavaScript - Azure Storage description: Learn how to delete and restore a blob in your Azure Storage account using the JavaScript client library ---++ Last updated 03/28/2022
storage Storage Blob Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md
Title: Delete and restore a blob with .NET - Azure Storage description: Learn how to delete and restore a blob in your Azure Storage account using the .NET client library ---++ Last updated 03/28/2022
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
Title: Get started with Azure Blob Storage and .NET
description: Get started developing a .NET application that works with Azure Blob Storage. This article helps you set up a project and authorize access to an Azure Blob Storage endpoint. -++ Last updated 03/28/2022-+
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
Title: Download a blob with JavaScript - Azure Storage description: Learn how to download a blob in Azure Storage by using the JavaScript client library. ---++ Last updated 03/28/2022
storage Storage Blob Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md
Title: Download a blob with .NET - Azure Storage description: Learn how to download a blob in Azure Storage by using the .NET client library. ---++ Last updated 03/28/2022
storage Storage Blob Get Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md
Title: Get container and blob URL with JavaScript - Azure Storage description: Learn how to get a container or blob URL in Azure Storage by using the JavaScript client library. ---++ Last updated 09/13/2022
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
Title: Get started with Azure Blob Storage and JavaScript
description: Get started developing a JavaScript application that works with Azure Blob Storage. This article helps you set up a project and authorize access to an Azure Blob Storage endpoint. -++ Last updated 07/06/2022-+
storage Storage Blob Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md
Title: Manage properties and metadata for a blob with JavaScript - Azure Storage description: Learn how to set and retrieve system properties and store custom metadata on blobs in your Azure Storage account using the JavaScript client library. ---++ Last updated 03/28/2022
storage Storage Blob Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md
Title: Manage properties and metadata for a blob with .NET - Azure Storage description: Learn how to set and retrieve system properties and store custom metadata on blobs in your Azure Storage account using the .NET client library. ---++ Last updated 03/28/2022
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
Title: Use blob index tags to find data in Azure Blob Storage (JavaScript) description: Learn how to categorize, manage, and query for blob objects by using the JavaScript client library. ---++ Last updated 03/28/2022
storage Storage Blob Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md
Title: Use blob index tags to find data in Azure Blob Storage (.NET) description: Learn how to categorize, manage, and query for blob objects by using the .NET client library. ---++ Last updated 03/28/2022
storage Storage Blob Upload Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md
Title: Upload a blob using JavaScript - Azure Storage description: Learn how to upload a blob to your Azure Storage account using the JavaScript client library. ---++ Last updated 07/18/2022
storage Storage Blob Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md
Title: Upload a blob using .NET - Azure Storage description: Learn how to upload a blob to your Azure Storage account using the .NET client library. ---++ Last updated 03/28/2022
storage Storage Blobs List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md
Title: List blobs with JavaScript - Azure Storage description: Learn how to list blobs in your storage account using the Azure Storage client library for JavaScript. Code examples show how to list blobs in a flat listing, or how to list blobs hierarchically, as though they were organized into directories or folders. -++ Last updated 03/28/2022-+ ms.devlang: javascript
storage Storage Blobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md
Title: List blobs with .NET - Azure Storage description: Learn how to list blobs in your storage account using the Azure Storage client library for .NET. Code examples show how to list blobs in a flat listing, or how to list blobs hierarchically, as though they were organized into directories or folders. -++ Last updated 03/28/2022-+ ms.devlang: csharp, python
storage Storage Quickstart Blobs Dotnet Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet-legacy.md
Title: "Quickstart: Azure Blob Storage client library for .NET" description: In this quickstart, you learn how to use the Azure Blob Storage client library for .NET to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.--++ Last updated 07/24/2020
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
Title: "Quickstart: Azure Blob Storage library v12 - .NET" description: In this quickstart, you will learn how to use the Azure Blob Storage client library version 12 for .NET to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.--++ Last updated 10/06/2021
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
Title: Azure Quickstart - Create a blob in object storage using Go | Microsoft Docs description: In this quickstart, you create a storage account and a container in object (Blob) storage. Then you use the storage client library for Go to upload a blob to Azure Storage, download a blob, and list the blobs in a container.--++ Last updated 12/10/2021
storage Storage Quickstart Blobs Java Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java-legacy.md
Title: "Quickstart: Azure Blob storage client library v8 for Java" description: Create a storage account and a container in object (Blob) storage. Then use the Azure Storage client library v8 for Java to upload a blob to Azure Storage, download a blob, and list the blobs in a container.- -++ Last updated 01/19/2021
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
Title: "Quickstart: Azure Blob Storage library v12 - Java" description: In this quickstart, you learn how to use the Azure Blob Storage client library version 12 for Java to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.-++ - Last updated 12/01/2020
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
Title: "Quickstart: Azure Blob storage library v12 - JavaScript" description: In this quickstart, you learn how to use the Azure Blob storage blob npm package version 12 for JavaScript to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.--++ Last updated 09/13/2022
storage Storage Quickstart Blobs Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-php.md
Title: Azure Quickstart - Create a blob in object storage using PHP | Microsoft Docs description: Quickly learn to transfer objects to/from Azure Blob storage using PHP. Upload, download, and list block blobs in a container in Azure Blob storage.--++ Last updated 11/14/2018
storage Storage Quickstart Blobs Python Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python-legacy.md
Title: 'Quickstart: Azure Blob storage client library v2.1 for Python' description: In this quickstart, you create a storage account and a container in object (Blob) storage. Then you use the storage client library v2.1 for Python to upload a blob to Azure Storage, download a blob, and list the blobs in a container.--++ Last updated 07/24/2020
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Title: 'Quickstart: Azure Blob Storage library v12 - Python' description: In this quickstart, you learn how to use the Azure Blob Storage client library version 12 for Python to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.--++ Last updated 01/28/2021
storage Storage Quickstart Blobs Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-ruby.md
Title: "Quickstart: Azure Blob Storage client library - Ruby" description: Create a storage account and a container in Azure Blob Storage. Use the storage client library for Ruby to create a blob, download a blob, and list the blobs in a container.--++ Last updated 12/04/2020
storage Storage C Plus Plus Enumeration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-c-plus-plus-enumeration.md
Title: List Azure Storage resources with C++ client library description: Learn how to use the listing APIs in Microsoft Azure Storage Client Library for C++ to enumerate containers, blobs, queues, tables, and entities.---++ Last updated 01/23/2017
storage Storage Samples C Plus Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-c-plus-plus.md
Title: Azure Storage samples using C++ | Microsoft Docs description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the C++ storage client libraries.---++ Last updated 10/01/2020
storage Storage Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-dotnet.md
Title: Azure Storage samples using .NET | Microsoft Docs description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the .NET storage client libraries.---++ Last updated 10/01/2020
storage Storage Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-java.md
Title: Azure Storage samples using Java | Microsoft Docs description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the Java storage client libraries.- -++ Last updated 10/01/2020
storage Storage Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-javascript.md
Title: Azure Storage samples using JavaScript | Microsoft Docs description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the JavaScript/Node.js storage client libraries.---++ Last updated 10/01/2020
storage Storage Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-python.md
Title: Azure Storage samples using Python | Microsoft Docs description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the Python storage client libraries.---++ Last updated 10/01/2020
storage Storage Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples.md
Title: Azure Storage code samples | Microsoft Docs description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the .NET, Java, Python, Node.js, Azure CLI, and C++ storage client libraries.---++ Last updated 10/01/2020
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md
Title: Use Azurite emulator for local Azure Storage development description: The Azurite open-source emulator provides a free local environment for testing your Azure storage applications.---++ Last updated 08/04/2022
storage Storage Use Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-emulator.md
Title: Use the Azure Storage Emulator for development and testing (deprecated) description: The Azure Storage Emulator (deprecated) provides a free local development environment for developing and testing your Azure Storage applications.---++ Last updated 07/14/2021
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Previously updated : 08/25/2022 Last updated : 09/19/2022
To enable AD DS authentication over SMB for Azure file shares, you need to regis
## Option one (recommended): Use AzFilesHybrid PowerShell module
-The cmdlets in the AzFilesHybrid PowerShell module make the necessary modifications and enables the feature for you. Since some parts of the cmdlets interact with your on-premises AD DS, we explain what the cmdlets do, so you can determine if the changes align with your compliance and security policies, and ensure you have the proper permissions to execute the cmdlets. Though we recommend using AzFilesHybrid module, if you are unable to do so, we provide the steps so that you may perform them manually.
+The cmdlets in the AzFilesHybrid PowerShell module make the necessary modifications and enable the feature for you. Because some parts of the cmdlets interact with your on-premises AD DS, we explain what the cmdlets do, so you can determine if the changes align with your compliance and security policies, and ensure you have the proper permissions to execute the cmdlets. Although we recommend using the AzFilesHybrid module, if you're unable to do so, we provide [manual steps](#option-two-manually-perform-the-enablement-actions).
### Download AzFilesHybrid module -- If you don't have [.NET Framework 4.7.2](https://dotnet.microsoft.com/download/dotnet-framework/net472) installed, install it now. It is required for the module to import successfully.-- [Download and unzip the AzFilesHybrid module (GA module: v0.2.0+)](https://github.com/Azure-Samples/azure-files-samples/releases) Note that AES 256 kerberos encryption is supported on v0.2.2 or above. If you have enabled the feature with a AzFilesHybrid version below v0.2.2 and want to update to support AES 256 Kerberos encryption, please refer to [this article](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption).-- Install and execute the module in a device that is domain joined to on-premises AD DS with AD DS credentials that have permissions to create a service logon account or a computer account in the target AD.-- Run the script using an on-premises AD DS credential that is synced to your Azure AD. The on-premises AD DS credential must have either **Owner** or **Contributor** Azure role on the storage account.
+- If you don't have [.NET Framework 4.7.2](https://dotnet.microsoft.com/download/dotnet-framework/net472) installed, install it now. It's required for the module to import successfully.
+- [Download and unzip the AzFilesHybrid module (GA module: v0.2.0+)](https://github.com/Azure-Samples/azure-files-samples/releases). Note that AES-256 Kerberos encryption is supported on v0.2.2 or above. If you've enabled the feature with an AzFilesHybrid version below v0.2.2 and want to update to support AES-256 Kerberos encryption, see [this article](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption).
+- Install and execute the module on a device that is domain joined to on-premises AD DS with AD DS credentials that have permissions to create a service logon account or a computer account in the target AD.
### Run Join-AzStorageAccount
-The `Join-AzStorageAccount` cmdlet performs the equivalent of an offline domain join on behalf of the specified storage account. The script uses the cmdlet to create a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) in your AD domain. If for whatever reason you can't use a computer account, you can alter the script to create a [service logon account](/windows/win32/ad/about-service-logon-accounts) instead. Note that service logon accounts don't support AES256 encryption. If you choose to run the command manually, you should select the account best suited for your environment.
+The `Join-AzStorageAccount` cmdlet performs the equivalent of an offline domain join on behalf of the specified storage account. The script uses the cmdlet to create a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) in your AD domain. If for whatever reason you can't use a computer account, you can alter the script to create a [service logon account](/windows/win32/ad/about-service-logon-accounts) instead. Note that service logon accounts don't support AES-256 encryption. If you choose to run the command manually, you should select the account best suited for your environment. You must run the script using an on-premises AD DS credential that is synced to your Azure AD. The on-premises AD DS credential must have either **Owner** or **Contributor** Azure role on the storage account.
The AD DS account created by the cmdlet represents the storage account. If the AD DS account is created under an organizational unit (OU) that enforces password expiration, you must update the password before the maximum password age. Failing to update the account password before that date results in authentication failures when accessing Azure file shares. To learn how to update the password, see [Update AD DS account password](storage-files-identity-ad-ds-update-password.md).
The AD DS account created by the cmdlet represents the storage account. If the A
> The `Join-AzStorageAccount` cmdlet will create an AD account to represent the storage account (file share) in AD. You can choose to register as a computer account or service logon account, see [FAQ](./storage-files-faq.md#security-authentication-and-access-control) for details. Service logon account passwords can expire in AD if they have a default password expiration age set on the AD domain or OU. Because computer account password changes are driven by the client machine and not AD, they don't expire in AD, although client computers change their passwords by default every 30 days. > For both account types, we recommend you check the password expiration age configured and plan to [update the password of your storage account identity](storage-files-identity-ad-ds-update-password.md) of the AD account before the maximum password age. You can consider [creating a new AD Organizational Unit in AD](/powershell/module/activedirectory/new-adorganizationalunit) and disabling password expiration policy on [computer accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852252(v=ws.11)) or service logon accounts accordingly.
-Replace the placeholder values with your own in the parameters below before executing it in PowerShell.
+Replace the placeholder values with your own in the parameters below before executing the script in PowerShell.
```PowerShell
# Change the execution policy to unblock importing AzFilesHybrid.psm1 module
Set-AzStorageAccount `
To enable AES-256 encryption, follow the steps in this section. If you plan to use RC4, skip this section.

> [!IMPORTANT]
-> The domain object that represents your storage account must be created as a computer object in the on-premises AD domain. If your domain object doesn't meet this requirement, delete it and create a new domain object that does. Note that Service Logon Accounts do not support AES256 encryption.
+> The domain object that represents your storage account must be created as a computer object in the on-premises AD domain. If your domain object doesn't meet this requirement, delete it and create a new domain object that does. Note that Service Logon Accounts do not support AES-256 encryption.
Replace `<domain-object-identity>` and `<domain-name>` with your values, then run the following cmdlet to configure AES-256 support. You must have AD PowerShell cmdlets installed and execute the cmdlet in PowerShell 5.1 with elevated privileges.
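The cmdlet itself isn't included in this excerpt. As a minimal sketch only, assuming the domain object is a computer account and that the `Set-ADComputer` cmdlet from the ActiveDirectory module is used, the call could look like this:

```PowerShell
# Sketch only: enable AES-256 Kerberos ticket encryption on the AD computer
# object that represents the storage account. The angle-bracket values are
# the placeholders named in the article.
Set-ADComputer -Identity "<domain-object-identity>" -Server "<domain-name>" -KerberosEncryptionType "AES256"
```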
synapse-analytics How To Create A Workspace With Data Exfiltration Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-create-a-workspace-with-data-exfiltration-protection.md
Previously updated : 12/01/2020 Last updated : 09/19/2022

# Create a workspace with data exfiltration protection enabled

This article describes how to create a workspace with data exfiltration protection enabled and how to manage the approved Azure AD tenants for this workspace.
->[!Note]
->You cannot change the workspace configuration for managed virtual network and data exfiltration protection after the workspace is created.
+> [!Note]
+> You cannot change the workspace configuration for managed virtual network and data exfiltration protection after the workspace is created.
## Prerequisites

- Permissions to create a workspace resource in Azure.
Follow the steps listed in [Quickstart: Create a Synapse workspace](../quickstar
You can create managed private endpoints to connect to Azure resources that reside in Azure AD tenants, which are approved for a workspace. Follow the steps listed in the guide for [creating managed private endpoints](./how-to-create-managed-private-endpoints.md).
->[!IMPORTANT]
->Resources in tenants other than the workspace's tenant must not have blocking firewall rules in place for the SQL pools to connect to them. Resources within the workspaceΓÇÖs managed virtual network, such as Spark clusters, can connect over managed private links to firewall-protected resources.
+> [!IMPORTANT]
+> Resources in tenants other than the workspace's tenant must not have blocking firewall rules in place for the SQL pools to connect to them. Resources within the workspace's managed virtual network, such as Spark clusters, can connect over managed private links to firewall-protected resources.
## Known limitations
-Users can provide an environment configuration file to install Python packages from public repositories like PyPI. In data exfiltration protected workspaces, connections to outbound repositories are blocked. As a result, Python library installed from public repositories like PyPI are not supported.
+Users can provide an environment configuration file to install Python packages from public repositories like PyPI. In data exfiltration protected workspaces, connections to outbound repositories are blocked. As a result, Python libraries installed from public repositories like PyPI are not supported.
As an alternative, users can upload workspace packages or create a private channel within their primary Azure Data Lake Storage account. For more information, visit [Package management in Azure Synapse Analytics](./spark/../../spark/apache-spark-azure-portal-add-libraries.md) +
+Ingesting data [from an Event Hub into Data Explorer pools](../data-explorer/ingest-dat) will not work if your Synapse workspace uses a managed virtual network with data exfiltration protection enabled.
## Next steps
-Learn more about [data exfiltration protection in Synapse workspaces](./workspace-data-exfiltration-protection.md)
-
-Learn more about [Managed workspace Virtual Network](./synapse-workspace-managed-vnet.md)
-
-Learn more about [Managed private endpoints](./synapse-workspace-managed-private-endpoints.md)
-
-[Create Managed private endpoints to your data sources](./how-to-create-managed-private-endpoints.md)
+ - Learn more about [data exfiltration protection in Synapse workspaces](./workspace-data-exfiltration-protection.md)
+ - Learn more about [Managed workspace Virtual Network](./synapse-workspace-managed-vnet.md)
+ - Learn more about [Managed private endpoints](./synapse-workspace-managed-private-endpoints.md)
+ - [Create Managed private endpoints to your data sources](./how-to-create-managed-private-endpoints.md)
synapse-analytics Synapse Workspace Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-ip-firewall.md
Previously updated : 08/31/2022- Last updated : 09/16/2022+
You can connect to your Synapse workspace using Synapse Studio. You can also use
Make sure that the firewall on your network and local computer allows outgoing communication on TCP ports 80, 443 and 1443. These ports are used by Synapse Studio.
-Also, you need to allow outgoing communication on UDP port 53 for Synapse Studio. To connect using tools such as SSMS and Power BI, you must allow outgoing communication on TCP port 1433. The 1433 port used by SSMS (Desktop Application).
+To connect using tools such as SSMS and Power BI, you must allow outgoing communication on TCP port 1433. Port 1433 is used by SSMS (desktop application).
## Manage the Azure Synapse workspace firewall
traffic-manager Traffic Manager Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-FAQs.md
na Previously updated : 01/31/2022 Last updated : 09/19/2022
Further investigation should therefore focus on the application.
The HTTP host header sent from the client's browser is the most common source of problems. Make sure that the application is configured to accept the correct host header for the domain name you're using. For endpoints using the Azure App Service, see [configuring a custom domain name for a web app in Azure App Service using Traffic Manager](../app-service/configure-domain-traffic-manager.md).
+### How can I resolve a 500 (Internal Server Error) problem when using Traffic Manager?
+
+If your client or application receives an HTTP 500 error while using Traffic Manager, this can be caused by a stale DNS query. To resolve the issue, clear the DNS cache and allow the client to issue a new DNS query.
+
+When a service endpoint is unresponsive, clients and applications that are using that endpoint do not reset until the DNS cache is refreshed. The duration of the cache is determined by the time-to-live (TTL) of the DNS record. For more information, see [Traffic Manager and the DNS cache](traffic-manager-how-it-works.md#traffic-manager-and-the-dns-cache).
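As an illustrative sketch (not part of the original article), flushing the local resolver cache on a Windows client could look like the following; `ipconfig /flushdns` is the equivalent classic command.

```PowerShell
# Sketch: inspect and then clear the local DNS client cache so the next
# lookup issues a fresh query and picks up a healthy Traffic Manager endpoint.
Get-DnsClientCache | Where-Object Entry -like "*trafficmanager.net*"
Clear-DnsClientCache
```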
+ ### What is the performance impact of using Traffic Manager?

As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), Traffic Manager works at the DNS level. Since clients connect to your service endpoints directly, there's no performance impact incurred when using Traffic Manager once the connection is established.
traffic-manager Traffic Manager How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-how-it-works.md
description: This article will help you understand how Traffic Manager routes tr
documentationcenter: '' -+ na Previously updated : 03/05/2019 Last updated : 09/19/2022
Contoso Corp have developed a new partner portal. The URL for this portal is `ht
To achieve this configuration, they complete the following steps:

1. Deploy three instances of their service. The DNS names of these deployments are 'contoso-us.cloudapp.net', 'contoso-eu.cloudapp.net', and 'contoso-asia.cloudapp.net'.
-1. Create a Traffic Manager profile, named 'contoso.trafficmanager.net', and configure it to use the 'Performance' traffic-routing method across the three endpoints.
-1. Configure their vanity domain name, 'partners.contoso.com', to point to 'contoso.trafficmanager.net', using a DNS CNAME record.
+2. Create a Traffic Manager profile, named 'contoso.trafficmanager.net', and configure it to use the 'Performance' traffic-routing method across the three endpoints.
+3. Configure their vanity domain name, 'partners.contoso.com', to point to 'contoso.trafficmanager.net', using a DNS CNAME record (see the sketch after this list).
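For illustration only (this snippet isn't part of the original article), creating that CNAME record in an Azure DNS zone with the Az PowerShell module could look like the following sketch; the resource group name and TTL are assumed values.

```PowerShell
# Sketch: point partners.contoso.com at the Traffic Manager profile.
# "contoso-dns-rg" and the 3600-second TTL are hypothetical.
New-AzDnsRecordSet -ResourceGroupName "contoso-dns-rg" -ZoneName "contoso.com" `
    -Name "partners" -RecordType CNAME -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Cname "contoso.trafficmanager.net")
```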
![Traffic Manager DNS configuration][1]
Continuing from the previous example, when a client requests the page `https://p
7. The recursive DNS service consolidates the results and returns a single DNS response to the client.
8. The client receives the DNS results and connects to the given IP address. The client connects to the application service endpoint directly, not through Traffic Manager. Since it is an HTTPS endpoint, the client performs the necessary SSL/TLS handshake, and then makes an HTTP GET request for the '/login.aspx' page.
+#### Traffic Manager and the DNS cache
The recursive DNS service caches the DNS responses it receives. The DNS resolver on the client device also caches the result. Caching enables subsequent DNS queries to be answered more quickly by using data from the cache rather than querying other name servers. The duration of the cache is determined by the 'time-to-live' (TTL) property of each DNS record. Shorter values result in faster cache expiry and thus more round-trips to the Traffic Manager name servers. Longer values mean that it can take longer to direct traffic away from a failed endpoint. Traffic Manager allows you to configure the TTL used in Traffic Manager DNS responses to be as low as 0 seconds and as high as 2,147,483,647 seconds (the maximum range compliant with [RFC-1035](https://www.ietf.org/rfc/rfc1035.txt)), enabling you to choose the value that best balances the needs of your application.

## FAQs
The recursive DNS service caches the DNS responses it receives. The DNS resolver
* [Why am I seeing an HTTP error when using Traffic Manager?](./traffic-manager-faqs.md#why-am-i-seeing-an-http-error-when-using-traffic-manager)
+* [How can I resolve a 500 (Internal Server Error) problem when using Traffic Manager?](./traffic-manager-faqs.md#how-can-i-resolve-a-500-internal-server-error-problem-when-using-traffic-manager)
+ * [What is the performance impact of using Traffic Manager?](./traffic-manager-faqs.md#what-is-the-performance-impact-of-using-traffic-manager)

* [What application protocols can I use with Traffic Manager?](./traffic-manager-faqs.md#what-application-protocols-can-i-use-with-traffic-manager)
virtual-desktop Host Pool Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/host-pool-load-balancing.md
Title: Azure Virtual Desktop host pool load-balancing - Azure
description: Learn about host pool load-balancing algorithms for an Azure Virtual Desktop environment. Previously updated : 09/14/2021 Last updated : 09/19/2022
>[!IMPORTANT]
>This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/host-pool-load-balancing-2019.md).
-Azure Virtual Desktop supports two load-balancing algorithms. Each algorithm determines which session host will host a user's session when they connect to a resource in a host pool.
+Azure Virtual Desktop supports two load-balancing algorithms. Each algorithm determines which session host will host a user's session when they connect to a resource in a pooled host pool. The information in this article only applies to pooled host pools.
The following load-balancing algorithms are available in Azure Virtual Desktop:
virtual-desktop Start Virtual Machine Connect Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect-faq.md
Title: Azure Virtual Desktop Start VM Connect FAQ - Azure
description: Frequently asked questions and best practices for using the Start VM on Connect feature. Previously updated : 07/29/2021 Last updated : 09/19/2022
Signing users out won't deallocate their VMs. To learn how to deallocate VMs, se
Yes. Users can shut down the VM by using the Start menu within their session, just like they would with a physical machine. However, shutting down the VM won't deallocate the VM. To learn how to deallocate VMs, see [Start or stop VMs during off hours](../automation/automation-solution-vm-management.md) for personal host pools and [Autoscale](autoscale-scaling-plan.md) for pooled host pools.
+## How does load balancing affect Start VM on Connect?
+
+For pooled host pools, Start VM on Connect will wait until all virtual machines hit their maximum session limit before turning on additional VMs.
+
+For example, let's say your host pool has three VMs and a maximum session limit of five users per machine. If you turn on two VMs, Start VM on Connect won't turn on the third machine until both VMs reach their maximum session limit of five users.
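As a hedged illustration (not from the article), the per-host maximum session limit that drives this behavior is a host pool property. Setting it with the Az.DesktopVirtualization module might look like the following sketch; the resource group and host pool names are placeholders.

```PowerShell
# Sketch: configure the maximum session limit that Start VM on Connect uses
# when deciding whether to turn on another session host. Names are hypothetical.
Update-AzWvdHostPool -ResourceGroupName "avd-rg" -Name "pooled-hp-01" `
    -MaxSessionLimit 5 -LoadBalancerType 'DepthFirst'
```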
+ ## Next steps

To learn how to configure Start VM on Connect, see [Start virtual machine on connect](start-virtual-machine-connect.md).
virtual-machines Co Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/co-location.md
When specifying `intent`, you can also add the optional `zone` parameter to spec
Proximity Placement Group creation or update will succeed only when at least one data center supports all the VM Sizes specified in the intent. Otherwise, the creation or update will fail with "OverconstrainedAllocationRequest", indicating that the combination of VM Sizes can't be supported within a proximity placement group. The **intent does not provide any capacity reservation or guarantee**. The VM Sizes and zone given in `intent` are used to select an appropriate data center, reducing the chances of failure if the desired VM size isn't available in a data center. Allocation failures can still occur if there is no more capacity for a VM size at the time of deployment.
+> [!NOTE]
+> To use intent for your proximity placement groups, ensure that the API version is 2021-11-01 or higher.
+ ### Best Practices while using intent - Provide an availability zone for your proximity placement group only when you provide an intent. Providing an availability zone without an intent will result in an error when creating the proximity placement group.
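As an illustrative sketch only (the article excerpt doesn't include a snippet), creating a proximity placement group with an intent and a zone via Azure PowerShell could look like the following, assuming a recent Az.Compute version that exposes the `-IntentVMSizeList` and `-Zone` parameters; the names and sizes are placeholders.

```PowerShell
# Sketch: create a proximity placement group whose intent lists the VM sizes
# you plan to deploy, pinned to availability zone 1. Names are hypothetical.
New-AzProximityPlacementGroup -ResourceGroupName "my-ppg-rg" -Name "my-ppg" `
    -Location "eastus" -IntentVMSizeList "Standard_E64s_v5", "Standard_D16s_v5" -Zone "1"
```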
virtual-machines Easv5 Eadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/easv5-eadsv5-series.md
Easv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
| Standard_E48as_v5 | 48 | 384 | Remote Storage Only | 32 | 76800/1152 | 80000/2000 | 8 | 24000 | | Standard_E64as_v5<sup>2</sup> | 64 | 512 | Remote Storage Only | 32 | 80000/1200 | 80000/2000 | 8 | 32000 | | Standard_E96as_v5<sup>2</sup> | 96 | 672 | Remote Storage Only | 32 | 80000/1600 | 80000/2000 | 8 | 40000 |
-| Standard_E112ias_v5 | 112 | 672 | Remote Storage Only | 64 | 1200000/2000 | 120000/2000 | 8 | 50000 |
+| Standard_E112ias_v5<sup>3</sup> | 112 | 672 | Remote Storage Only | 64 | 120000/2000 | 120000/2000 | 8 | 50000 |
<sup>1</sup> Easv5-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.<br>
-<sup>2</sup> [Constrained core sizes available](constrained-vcpu.md)
-
+<sup>2</sup> [Constrained core sizes available](constrained-vcpu.md)<br>
+<sup>3</sup> Attaching Ultra Disk or Premium v2 SSDs to **Standard_E112ias_v5** results in higher IOPS and MBps than standard premium disks:
+- Max uncached Ultra Disk and Premium v2 SSD throughput (IOPS/MBps): 160000/2000
+- Max burst uncached Ultra Disk and Premium v2 SSD disk throughput (IOPS/MBps): 160000/2000
## Eadsv5-series
Eadsv5-series virtual machines support Standard SSD, Standard HDD, and Premium S
| Standard_E48ads_v5 | 48 | 384 | 1800 | 32 | 225000 / 3000 | 76800/1152 | 80000/2000 | 8 | 24000 | | Standard_E64ads_v5<sup>2</sup> | 64 | 512 | 2400 | 32 | 300000 / 4000 | 80000/1200 | 80000/2000 | 8 | 32000 | | Standard_E96ads_v5<sup>2</sup> | 96 | 672 | 3600 | 32 | 450000 / 4000 | 80000/1600 | 80000/2000 | 8 | 40000 |
-| Standard_E112iads_v5 | 112 | 672 | 3800 | 64 | 450000 / 4000 | 120000/2000 | 120000/2000 | 8 | 50000 |
+| Standard_E112iads_v5<sup>3</sup> | 112 | 672 | 3800 | 64 | 450000 / 4000 | 120000/2000 | 120000/2000 | 8 | 50000 |
+
+<sup>1</sup> Eadsv5-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
-* These IOPs values can be achieved by using Gen2 VMs.<br>
-<sup>1</sup> Eadsv5-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.<br>
<sup>2</sup> [Constrained core sizes available](constrained-vcpu.md).
+<sup>3</sup> Attaching Ultra Disk or Premium v2 SSDs to **Standard_E112iads_v5** results in higher IOPS and MBps than standard premium disks:
+- Max uncached Ultra Disk and Premium v2 SSD throughput (IOPS/MBps): 160000/2000
+- Max burst uncached Ultra Disk and Premium v2 SSD disk throughput (IOPS/MBps): 160000/2000
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8370C (Ice Lake)
- [Intel® Advanced Vector Extensions 512 (Intel® AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html) - Support for [Intel® Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html)
+> [!IMPORTANT]
+> - Accelerated networking is required and turned on by default on all Ebsv5 and Ebdsv5 VMs.
+> - Accelerated networking can be applied to two NICs.
+> - Ebsv5 and Ebdsv5-series VMs can [burst their disk performance](disk-bursting.md) and get up to their bursting max for up to 30 minutes at a time.
+ ## Ebdsv5 series

Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processors. The Ebdsv5 VM sizes feature up to 512 GiB of RAM, in addition to fast and large local SSD storage (up to 2400 GiB). These VMs are ideal for memory-intensive enterprise applications and applications that benefit from high remote storage performance, low latency, high-speed local storage. Remote Data disk storage is billed separately from VMs.
Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processo
- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
- Nested virtualization: Supported
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBp | Max NICs | Network bandwidth |
-| | | | | | | | | | |
-| Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 2 | 10000 |
-| Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 2 | 10000 |
-| Standard_E8bds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 22000/625 | 40000/1200 | 4 | 10000 |
-| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 | 8 | 12500 |
-| Standard_E32bds_v5 | 32 | 256 | 1200 | 32 | 150000/1250 | 88000/2500 | 120000/4000 | 8 | 16000 |
-| Standard_E48bds_v5 | 48 | 384 | 1800 | 32 | 225000/2000 | 120000/4000 | 120000/4000 | 8 | 16000 |
-| Standard_E64bds_v5 | 64 | 512 | 2400 | 32 | 300000/4000 | 120000/4000 | 120000/4000 | 8 | 20000 |
-
-> [!NOTE]
-> Accelerated networking is required and turned on by default on all Ebdsv5 VMs.
->
-> Accelerated networking can be applied to two NICs.
-
-> [!NOTE]
-> Ebdsv5-series VMs can [burst their disk performance](disk-bursting.md) and get up to their bursting max for up to 30 minutes at a time.
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium v1 SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium v1 SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium v2 SSD disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium v2 SSD disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+|||||||||||||
+| Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 |
+| Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
+| Standard_E8bds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 22000/625 | 40000/1200 |29480/625 |60000/1200 | 4 | 12500 |
+| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 8 | 12500 |
+| Standard_E32bds_v5 | 32 | 256 | 1200 | 32 | 150000/1250 | 88000/2500 | 120000/4000 | 117920/2500|160000/4000| 8 | 16000 |
+| Standard_E48bds_v5 | 48 | 384 | 1800 | 32 | 225000/2000 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 16000 |
+| Standard_E64bds_v5 | 64 | 512 | 2400 | 32 | 300000/4000 | 120000/4000 | 120000/4000 |160000/4000 | 160000/4000| 8 | 20000 |
## Ebsv5 series
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
- [Ephemeral OS Disks](ephemeral-os-disks.md): Not supported
- Nested virtualization: Supported
-| Size | vCPU | Memory: GiB | Max data disks | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBp | Max NICs | Network bandwidth |
-| | | | | | | | |
-| Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 2 | 10000 |
-| Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 2 | 10000 |
-| Standard_E8bs_v5 | 8 | 64 | 16 | 22000/625 | 40000/1200 | 4 | 10000 |
-| Standard_E16bs_v5 | 16 | 128 | 32 | 44000/1250 | 64000/2000 | 8 | 12500
-| Standard_E32bs_v5 | 32 | 256 | 32 | 88000/2500 | 120000/4000 | 8 | 16000 |
-| Standard_E48bs_v5 | 48 | 384 | 32 | 120000/4000 | 120000/4000 | 8 | 16000 |
-| Standard_E64bs_v5 | 64 | 512 | 32 | 120000/4000 | 120000/4000 | 8 | 20000 |
-
-> [!NOTE]
-> Accelerated networking is required and turned on by default on all Ebsv5 VMs.
->
-> Accelerated networking can be applied to two NICs.
-
-> [!NOTE]
-> Ebsv5-series VMs can [burst their disk performance](disk-bursting.md) and get up to their bursting max for up to 30 minutes at a time.
+| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium v1 SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium v1 SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium v2 SSD disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium v2 SSD disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| | | | | | | | | | |
+| Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156|15000/1200 | 2 | 12500 |
+| Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
+| Standard_E8bs_v5 | 8 | 64 | 16 | 22000/625 | 40000/1200 |29480/625 |60000/1200 | 4 | 12500 |
+| Standard_E16bs_v5 | 16 | 128 | 32 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 8 | 12500 |
+| Standard_E32bs_v5 | 32 | 256 | 32 | 88000/2500 | 120000/4000 |117920/2500 |160000/4000 | 8 | 16000 |
+| Standard_E48bs_v5 | 48 | 384 | 32 | 120000/4000 | 120000/4000 | 160000/4000| 160000/4000| 8 | 16000 |
+| Standard_E64bs_v5 | 64 | 512 | 32 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 20000 |
+ [!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Edv4 Edsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv4-edsv4-series.md
Edsv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice
| Standard_E32ds_v4 | 32 | 256 | 1200 | 32 | 150000/2000 | 51200/768 | 64000/1600 | 8 | 16000 | | Standard_E48ds_v4 | 48 | 384 | 1800 | 32 | 225000/3000 | 76800/1152 | 80000/2000 | 8 | 24000 | | Standard_E64ds_v4 <sup>2</sup> | 64 | 504 | 2400 | 32 | 300000/4000 | 80000/1200 | 80000/2000 | 8 | 30000 |
-| Standard_E80ids_v4 <sup>3</sup> | 80 | 504 | 2400 | 64 | 375000/4000 | 80000/1200 | 80000/2000 | 8 | 30000 |
+| Standard_E80ids_v4 <sup>3,5</sup> | 80 | 504 | 2400 | 64 | 375000/4000 | 80000/1200 | 80000/2000 | 8 | 30000 |
+
+<sup>*</sup> These IOPS values can be guaranteed by using [Gen2 VMs](generation-2.md).
+
+<sup>1</sup> Edsv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+
+<sup>2</sup> [Constrained core sizes available](./constrained-vcpu.md).
+
+<sup>3</sup> Instance is isolated to hardware dedicated to a single customer.
-<sup>*</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)<br>
-<sup>1</sup> Edsv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.<br>
-<sup>2</sup> [Constrained core sizes available)](./constrained-vcpu.md).<br>
-<sup>3</sup> Instance is isolated to hardware dedicated to a single customer.<br>
<sup>4</sup> Accelerated networking can only be applied to a single NIC.
+<sup>5</sup> Attaching Ultra Disk or Premium v2 SSDs to **Standard_E80ids_v4** results in higher IOPS and MBps than standard premium disks:
+- Max uncached Ultra Disk and Premium v2 SSD throughput (IOPS/MBps): 120000/1800
+- Max burst uncached Ultra Disk and Premium v2 SSD disk throughput (IOPS/MBps): 120000/2000
+ [!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Edv5 Edsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv5-edsv5-series.md
The Edv5 and Edsv5-series Virtual Machines run on the 3rd Generation Intel&reg;
## Edv5-series
-Edv5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 104 vCPU and 672 GiB of RAM as well as fast, local SSD storage up to 3800 GiB. Edv5-series virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage.
+Edv5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 104 vCPU and 672 GiB of RAM and fast, local SSD storage up to 3800 GiB. Edv5-series virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage.
Edv5-series virtual machines support Standard SSD and Standard HDD disk types. To use Premium SSD or Ultra Disk storage, select Edsv5-series virtual machines. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
Edv5-series virtual machines support Standard SSD and Standard HDD disk types. T
## Edsv5-series
-Edsv5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 104 vCPU and 672 GiB of RAM as well as fast, local SSD storage up to 3800 GiB. Edsv5-series virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage.
+Edsv5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 104 vCPU and 672 GiB of RAM and fast, local SSD storage up to 3800 GiB. Edsv5-series virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage.
Edsv5-series virtual machines support Standard SSD and Standard HDD disk types. You can attach Standard SSDs, Standard HDDs, and Premium SSDs disk storage to these VMs. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
Edsv5-series virtual machines support Standard SSD and Standard HDD disk types.
| Standard_E48ds_v5 | 48 | 384 | 1800 | 32 | 225000/3000 | 76800/1315 | 80000/3000 | 8 | 24000 | | Standard_E64ds_v5 | 64 | 512 | 2400 | 32 | 375000/4000 | 80000/1735 | 80000/3000 | 8 | 30000 | | Standard_E96ds_v5<sup>3</sup> | 96 | 672 | 3600 | 32 | 450000/4000 | 80000/2600 | 80000/4000 | 8 | 35000 |
-| Standard_E104ids_v5<sup>4</sup> | 104 | 672 | 3800 | 64 | 450000/4000 | 120000/4000 | 120000/4000 | 8 | 100000 |
+| Standard_E104ids_v5<sup>4,6</sup> | 104 | 672 | 3800 | 64 | 450000/4000 | 120000/4000 | 120000/4000 | 8 | 100000 |
+
+<sup>*</sup> These IOPS values can be guaranteed by using [Gen2 VMs](generation-2.md).
+
+<sup>1</sup> Accelerated networking is required and turned on by default on all Edsv5 virtual machines.
+
+<sup>2</sup> Accelerated networking can be applied to two NICs.
+
+<sup>3</sup> [Constrained Core](constrained-vcpu.md) sizes available.
+
+<sup>4</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.
-<sup>*</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)<br>
-<sup>1</sup> Accelerated networking is required and turned on by default on all Edsv5 virtual machines.<br>
-<sup>2</sup> Accelerated networking can be applied to two NICs.<br>
-<sup>3</sup> [Constrained Core](constrained-vcpu.md) sizes available.<br>
-<sup>4</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.<br>
<sup>5</sup> Edsv5-series virtual machines can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+<sup>6</sup> Attaching Ultra Disk or Premium v2 SSDs to **Standard_E104ids_v5** results in higher IOPS and MBps than standard premium disks:
+- Max uncached Ultra Disk and Premium v2 SSD throughput (IOPS/MBps): 160000/4000
+- Max burst uncached Ultra Disk and Premium v2 SSD disk throughput (IOPS/MBps): 160000/4000
+++++ [!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]

## Other sizes and information
Edsv5-series virtual machines support Standard SSD and Standard HDD disk types.
Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
-More information on Disks Types : [Disk Types](./disks-types.md#ultra-disks)
+More information on disk types: [Disk Types](./disks-types.md#ultra-disks)
virtual-machines Ev4 Esv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev4-esv4-series.md
Esv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice L
| Standard_E32s_v4 | 32 | 256 | Remote Storage Only | 32 | 51200/768 | 64000/1600 | 8|16000 | | Standard_E48s_v4 | 48 | 384 | Remote Storage Only | 32 | 76800/1152 | 80000/2000 | 8|24000 | | Standard_E64s_v4 <sup>2</sup> | 64 | 504| Remote Storage Only | 32 | 80000/1200 | 80000/2000 | 8|30000 |
-| Standard_E80is_v4 <sup>3</sup> | 80 | 504 | Remote Storage Only | 64 | 80000/1200 | 80000/2000 | 8|30000 |
+| Standard_E80is_v4 <sup>3,5</sup> | 80 | 504 | Remote Storage Only | 64 | 80000/1200 | 80000/2000 | 8|30000 |
+
+<sup>1</sup> Esv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+
+<sup>2</sup> [Constrained core sizes available](./constrained-vcpu.md).
+
+<sup>3</sup> Instance is isolated to hardware dedicated to a single customer.
-<sup>1</sup> Esv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.<br>
-<sup>2</sup> [Constrained core sizes available)](./constrained-vcpu.md).<br>
-<sup>3</sup> Instance is isolated to hardware dedicated to a single customer.<br>
<sup>4</sup> Accelerated networking can only be applied to a single NIC.
+<sup>5</sup> Attaching Ultra Disk or Premium v2 SSDs to **Standard_E80is_v4** results in higher IOPS and MBps than standard premium disks:
+- Max uncached Ultra Disk and Premium v2 SSD throughput (IOPS/MBps): 120000/1800
+- Max burst uncached Ultra Disk and Premium v2 SSD disk throughput (IOPS/MBps): 120000/2000
+++++ [!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Ev5 Esv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev5-esv5-series.md
The Ev5 and Esv5-series virtual machines run on the 3rd Generation Intel&reg; Xe
## Ev5-series
-Ev5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 104 vCPU and 672 GiB of RAM. Ev5-series virtual machines do not have any temporary storage thus lowering the price of entry.
+Ev5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 104 vCPU and 672 GiB of RAM. Ev5-series virtual machines don't have temporary storage, thus lowering the price of entry.
Ev5-series supports Standard SSD and Standard HDD disk types. To use Premium SSD or Ultra Disk storage, select Esv5-series virtual machines. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
Ev5-series supports Standard SSD and Standard HDD disk types. To use Premium SSD
## Esv5-series
-Esv5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 104 vCPU and 672 GiB of RAM. Esv5-series virtual machines do not have any temporary storage thus lowering the price of entry.
+Esv5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 104 vCPU and 672 GiB of RAM. Esv5-series virtual machines don't have temporary storage, thus lowering the price of entry.
-Esv5-series supports Standard SSDs, Standard HDDs, and Premium SSDs disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+Esv5-series supports Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
[Premium Storage](premium-storage-performance.md): Supported<br> [Premium Storage caching](premium-storage-performance.md): Supported<br>
Esv5-series supports Standard SSDs, Standard HDDs, and Premium SSDs disk types.
| Standard_E48s_v5 | 48 | 384 | Remote Storage Only | 32 | 76800/1315 | 80000/3000 | 8 | 24000 |
| Standard_E64s_v5 | 64 | 512 | Remote Storage Only | 32 | 80000/1735 | 80000/3000 | 8 | 30000 |
| Standard_E96s_v5<sup>3</sup> | 96 | 672 | Remote Storage Only | 32 | 80000/2600 | 80000/4000 | 8 | 35000 |
-| Standard_E104is_v5<sup>4</sup> | 104 | 672 | Remote Storage Only | 64 | 120000/4000 | 120000/4000 | 8 | 100000 |
+| Standard_E104is_v5<sup>4,6</sup> | 104 | 672 | Remote Storage Only | 64 | 120000/4000 | 120000/4000 | 8 | 100000 |
+
+<sup>1</sup> Accelerated networking is required and turned on by default on all Esv5 virtual machines.
+
+<sup>2</sup> Accelerated networking can be applied to two NICs.
+
+<sup>3</sup> [Constrained core](constrained-vcpu.md) sizes available.
+
+<sup>4</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.
-<sup>1</sup> Accelerated networking is required and turned on by default on all Esv5 virtual machines.<br>
-<sup>2</sup> Accelerated networking can be applied to two NICs.<br>
-<sup>3</sup> [Constrained core](constrained-vcpu.md) sizes available.<br>
-<sup>4</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.<br>
<sup>5</sup> Esv5-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+<sup>6</sup> Attaching Ultra Disk or Premium v2 SSDs to **Standard_E104is_v5** results in higher IOPS and MBps than standard premium disks:
+- Max uncached Ultra Disk and Premium v2 SSD throughput (IOPS/MBps): 160000/4000
+- Max burst uncached Ultra Disk and Premium v2 SSD disk throughput (IOPS/MBps): 160000/4000
+
+[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]

## Other sizes and information
Esv5-series supports Standard SSDs, Standard HDDs, and Premium SSDs disk types.
Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
-More information on Disks Types : [Disk Types](./disks-types.md#ultra-disks)
+More information on disk types: [Disk Types](./disks-types.md#ultra-disks)
virtual-machines Msv2 Mdsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/msv2-mdsv2-series.md
The Msv2 and Mdsv2 Medium Memory VM Series features Intel® Xeon® Platinum 8280
| Standard_M64ms_v2 | 64 | 1792 | 0 | 64 | 40000/1000 | 80000/2000 | 8 | 16000 |
| Standard_M128s_v2 | 128 | 2048 | 0 | 64 | 80000/2000 | 80000/4000 | 8 | 30000 |
| Standard_M128ms_v2 | 128 | 3892 | 0 | 64 | 80000/2000 | 80000/4000 | 8 | 30000 |
-| Standard_M192is_v2 | 192 | 2048 | 0 | 64 | 80000/2000 | 80000/4000 | 8 | 30000 |
+| Standard_M192is_v2<sup>2</sup> | 192 | 2048 | 0 | 64 | 80000/2000 | 80000/4000 | 8 | 30000 |
| Standard_M192ims_v2 | 192 | 4096 | 0 | 64 | 80000/2000 | 80000/4000 | 8 | 30000 |
+<sup>1</sup> Msv2 and Mdsv2 medium memory VMs can [burst](./disk-bursting.md) their disk performance for up to 30 minutes at a time.
+
+<sup>2</sup> Attaching Ultra Disk or Premium v2 SSDs to **Standard_M192is_v2** results in higher IOPS and MBps than standard premium disks:
+- Max uncached Ultra Disk and Premium v2 SSD throughput (IOPS/MBps): 120000/2000
+- Max burst uncached Ultra Disk and Premium v2 SSD disk throughput (IOPS/MBps): 120000/4000
+
## Mdsv2 Medium Memory with Disk

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disk | Max cached and temp storage throughput: IOPS / MBps | Burst cached and temp storage throughput: IOPS/MBps<sup>1</sup> | Max uncached disk throughput: IOPS/MBps | Burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) |
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
# Use SAP Deployment Automation Framework from Azure DevOps Services

Using Azure DevOps will streamline the deployment process by providing pipelines that can be executed to perform both the infrastructure deployment and the configuration and SAP installation activities.
-You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application.
+You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application.
## Sign up for Azure DevOps Services
-To use Azure DevOps Services, you'll need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory (Azure AD). To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either _sign-in_ or create a new account.
+To use Azure DevOps Services, you'll need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory (Azure AD). To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either _sign in_ or create a new account.
## Configure Azure DevOps Services for the SAP Deployment Automation Framework

You can use the following script to do a basic installation of Azure DevOps Services for the SAP Deployment Automation Framework.
-Login to Azure Cloud Shell
+Log in to Azure Cloud Shell
```bash
- export ADO_ORGANIZATION=<yourOrganization>
- export ADO_PROJECT=SAP Deployment Automation
- wget https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/scripts/create_devops_artifacts.sh -O devops.sh
- chmod +x ./devops.sh
- ./devops.sh
- rm ./devops.sh
+ export ADO_ORGANIZATION=https://dev.azure.com/<yourorganization>
+ export ADO_PROJECT="SAP Deployment Automation"
+ wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/create_devops_artifacts.sh -O devops.sh
+ chmod +x ./devops.sh
+ ./devops.sh
+ rm ./devops.sh
```
Validate that the project has been created by navigating to the Azure DevOps por
You can finalize the Azure DevOps configuration by running the following scripts on your local workstation. Open a PowerShell Console and define the environment variables. Replace the bracketed values with the actual values.
+> [!IMPORTANT]
+> Run the following steps on your local workstation. Also make sure that you have logged on to Azure using `az login` first.
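
For instance, a minimal sketch of that sign-in step with the Azure CLI (the same commands work from a PowerShell console; the subscription placeholder matches the variables defined below):

```bash
# Sign in to Azure and select the control plane subscription before running the configuration script
az login
az account set --subscription "<YourControlPlaneSubscriptionID>"
```
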
```powershell
-$Env:ADO_ORGANIZATION="https://dev.azure.com/<yourorganization>"
+ $Env:ADO_ORGANIZATION="https://dev.azure.com/<yourorganization>"
-$Env:ADO_PROJECT="<yourProject>"
-$Env:YourPrefix="<yourPrefix>"
+ $Env:ADO_PROJECT="<yourProject>"
+ $Env:YourPrefix="<yourPrefix>"
-$Env:ControlPlaneSubscriptionID="<YourControlPlaneSubscriptionID>"
-$Env:DevSubscriptionID="<YourDevSubscriptionID>"
+ $Env:ControlPlaneSubscriptionID="<YourControlPlaneSubscriptionID>"
+ $Env:DevSubscriptionID="<YourDevSubscriptionID>"
```

> [!NOTE]
Once the variables are defined run the following script to create the service pr
```powershell
-Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/scripts/update_devops_credentials.ps1 -OutFile .\configureDevOps.ps1 ; .\configureDevOps.ps1
+Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/update_devops_credentials.ps1 -OutFile .\configureDevOps.ps1 ; .\configureDevOps.ps1
```
-## Manual Configuration
+## Manual configuration of Azure DevOps Services for the SAP Deployment Automation Framework
### Create a new project
Open (https://dev.azure.com) and create a new project by clicking on the _New Pr
Record the URL of the project.

### Import the repository
-Start by importing the SAP Deployment Automation Framework GitHub repository into Azure Repos.
+Start by importing the SAP Deployment Automation Framework GitHub repository into Azure Repos.
Navigate to the Repositories section, choose _Import a repository_, and import the 'https://github.com/Azure/sap-automation.git' repository into Azure DevOps. For more info, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true)
If you're unable to import a repository, you can create the 'sap-automation' rep
### Create the repository for manual import

> [!NOTE]
-> Only do this step if you are unable to import the repository directly.
+> Only do this step if you are unable to import the repository directly.
-Create the 'sap-automation' repository by navigating to the 'Repositories' section in 'Project Settings' and clicking the _Create_ button.
+Create the 'sap-automation' repository by navigating to the 'Repositories' section in 'Project Settings' and clicking the _Create_ button.
Choose the repository type 'Git' and provide a name for the repository, for example 'sap-automation'.

### Cloning the repository
-In order to provide a more comprehensive editing capability of the content, you can clone the repository to a local folder and edit the contents locally.
+In order to provide a more comprehensive editing capability of the content, you can clone the repository to a local folder and edit the contents locally.
Clone the repository to a local folder by clicking the _Clone_ button in the Files view in the Repos section of the portal. For more info, see [Cloning a repository](/azure/devops/repos/git/clone?view=azure-devops#clone-an-azure-repos-git-repo&preserve-view=true)

:::image type="content" source="./media/automation-devops/automation-repo-clone.png" alt-text="Picture showing how to clone the repository":::
Clone the repository to a local folder by clicking the _Clone_ button in the Fi
You can also download the content from the SAP Deployment Automation Framework repository manually and add it to your local clone of the Azure DevOps repository.
-Navigate to 'https://github.com/Azure/SAP-automation' repository and download the repository content as a ZIP file by clicking the _Code_ button and choosing _Download ZIP_.
+Navigate to 'https://github.com/Azure/SAP-automation' repository and download the repository content as a ZIP file by clicking the _Code_ button and choosing _Download ZIP_.
Copy the content from the zip file to the root folder of your local clone.
Select the source control icon and provide a message about the change, for examp
> In order to ensure that your configuration files are not overwritten by changes in the SAP Deployment Automation Framework, store them in a separate folder hierarchy.
-Create a top level folder called 'WORKSPACES', this folder will be the root folder for all the SAP deployment configuration files. Create the following folders in the 'WORKSPACES' folder: 'DEPLOYER', 'LIBRARY', 'LANDSCAPE' and 'SYSTEM'. These will contain the configuration files for the different components of the SAP Deployment Automation Framework.
+Create a top-level folder called 'WORKSPACES'. This folder will be the root folder for all the SAP deployment configuration files. Create the following folders in the 'WORKSPACES' folder: 'DEPLOYER', 'LIBRARY', 'LANDSCAPE' and 'SYSTEM'. These folders will contain the configuration files for the different components of the SAP Deployment Automation Framework.
Optionally, you may copy the sample configuration files from the 'samples/WORKSPACES' folders to the WORKSPACES folder you created; this allows you to experiment with sample deployments.
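
As a rough sketch of those folder steps (assuming a bash shell at the root of your local clone; the paths simply mirror the names above):

```bash
# Create the configuration folder hierarchy in the root of the local clone
mkdir -p WORKSPACES/DEPLOYER WORKSPACES/LIBRARY WORKSPACES/LANDSCAPE WORKSPACES/SYSTEM

# Optionally seed the hierarchy with the sample configurations to experiment with
cp -r samples/WORKSPACES/. WORKSPACES/
```
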
The automation framework optionally provisions a web app as a part of the contro
# [Linux](#tab/linux)

Replace MGMT with your environment as necessary.

```bash
-echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json
+echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json
TF_VAR_app_registration_app_id=$(az ad app create --display-name MGMT-webapp-registration --enable-id-token-issuance true --sign-in-audience AzureADMyOrg --required-resource-access @manifest.json --query "appId" | tr -d '"')

echo $TF_VAR_app_registration_app_id
-az ad app credential reset --id $TF_VAR_app_registration_app_id --append --query "password"
+az ad app credential reset --id $TF_VAR_app_registration_app_id --append --query "password"
rm manifest.json
```
$TF_VAR_app_registration_app_id=(az ad app create --display-name MGMT-webapp-reg
echo $TF_VAR_app_registration_app_id
-az ad app credential reset --id $TF_VAR_app_registration_app_id --append --query "password"
+az ad app credential reset --id $TF_VAR_app_registration_app_id --append --query "password"
del manifest.json
```
Save the app registration ID and password values for later use.
## Create Azure Pipelines
-Azure Pipelines are implemented as YAML files and they're stored in the 'deploy/pipelines' folder in the repository.
+Azure Pipelines are implemented as YAML files and they're stored in the 'deploy/pipelines' folder in the repository.
## Control plane deployment pipeline

Create the control plane deployment pipeline by choosing _New Pipeline_ from the Pipelines section, and select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
The pipelines use a custom task to run Ansible. The custom task can be installed
The pipelines use a custom task to perform cleanup activities post deployment. The custom task can be installed from [Post Build Cleanup](https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup). Install it to your Azure DevOps organization before running the pipelines.
-## Preparations for self-hosted agent
+## Preparations for self-hosted agent
1. Create an Agent Pool by navigating to the Organizational Settings and selecting _Agent Pools_ from the Pipelines section. Click the _Add Pool_ button and choose Self-hosted as the pool type. Name the pool to align with the workload zone environment, for example `DEV-WEEU-POOL`. Ensure _Grant access permission to all pipelines_ is selected and create the pool using the _Create_ button.
az pipelines variable-group create --name SDAF-General --variables ANSIBLE_HOST_
### Environment specific variables
-As each environment may have different deployment credentials you'll need to create a variable group per environment, for example 'SDAF-MGMT','SDAF-DEV', 'SDAF-QA'.
+As each environment may have different deployment credentials, you'll need to create a variable group per environment, for example 'SDAF-MGMT', 'SDAF-DEV', 'SDAF-QA'.
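
If you prefer scripting over the Library page, a hedged sketch using the Azure DevOps CLI is shown below; the group name and variable names here are illustrative placeholders, not the framework's required set:

```bash
# Create an environment-specific variable group (example values only)
az pipelines variable-group create \
  --organization "$ADO_ORGANIZATION" \
  --project "$ADO_PROJECT" \
  --name SDAF-DEV \
  --variables Agent='Azure Pipelines' ARM_SUBSCRIPTION_ID='<YourDevSubscriptionID>'
```
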
Create a new variable group 'SDAF-MGMT' for the control plane environment using the Library page in the Pipelines section. Add the following variables:
Enter a Service connection name, for instance 'Connection to MGMT subscription'
## Permissions

> [!NOTE]
-> Most of the pipelines will add files to the Azure Repos and therefore require pull permissions. Assign "Contribute" permissions to the 'Build Service' using the Security tab of the source code repository in the Repositories section in Project settings.
+> Most of the pipelines will add files to the Azure Repos and therefore require pull permissions. Assign "Contribute" permissions to the 'Build Service' using the Security tab of the source code repository in the Repositories section in Project settings.
:::image type="content" source="./media/automation-devops/automation-repo-permissions.png" alt-text="Picture showing repository permissions":::
Select the _Control plane deployment_ pipeline, provide the configuration names
### Configure the Azure DevOps Services self-hosted agent manually
-> [!NOTE]
+> [!NOTE]
> This is only needed if the Azure DevOps Services agent is not automatically configured. Please check that the agent pool is empty before proceeding.
Connect to the deployer by following these steps:
1. The default username is *azureadm*
-1. Choose *SSH Private Key from Azure Key Vault*
+1. Choose *SSH Private Key from Azure Key Vault*
1. Select the subscription containing the control plane.
The agent will now be configured and started.
## Deploy the Control Plane Web Application
-Checking the "deploy the web app infrastructure" parameter when running the Control plane deployment pipeline will provision the infrastructure necessary for hosting the web app. The "Deploy web app" pipeline will publish the application's software to that infrastructure.
+Checking the "deploy the web app infrastructure" parameter when running the Control plane deployment pipeline will provision the infrastructure necessary for hosting the web app. The "Deploy web app" pipeline will publish the application's software to that infrastructure.
Wait for the deployment to finish. Once the deployment is complete, navigate to the Extensions tab and follow the instructions to finalize the configuration and update the 'reply-url' values for the app registration.
virtual-network-manager Concept Connectivity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-connectivity-configuration.md
Previously updated : 11/02/2021 Last updated : 05/09/2022
When you create a mesh topology, a new connectivity construct is created called
A hub-and-spoke is a network topology in which you have a virtual network selected as the hub virtual network. This virtual network gets bi-directionally peered with every spoke virtual network in the configuration. This topology is useful for when you want to isolate a virtual network but still want it to have connectivity to common resources in the hub virtual network.

In this configuration, you have settings you can enable such as *direct connectivity* between spoke virtual networks. By default, this connectivity is only for virtual networks in the same region. To allow connectivity across different Azure regions, you'll need to enable *Global mesh*. You can also enable *Gateway* transit to allow spoke virtual networks to use the VPN or ExpressRoute gateway deployed in the hub.

### Direct connectivity

Enabling *Direct connectivity* creates an overlay of a [*connected group*](#connectedgroup) on top of your hub and spoke topology, which contains spoke virtual networks of a given group. Direct connectivity allows a spoke VNet to talk directly to other VNets in its spoke group, but not to VNets in other spokes.

For example, you create two network groups. You enable direct connectivity for the *Production* network group but not for the *Test* network group. This setup only allows virtual networks in the *Production* network group to communicate with one another but not the ones in the *Test* network group. See the example diagram below.

When you look at effective routes on a VM, the route between the hub and the spoke virtual networks will have the next hop type of *VNetPeering* or *GlobalVNetPeering*. Routes between spoke virtual networks will show up with the next hop type of *ConnectedGroup*. With the example above, only the *Production* network group would have a *ConnectedGroup* because it has *Direct connectivity* enabled.
virtual-network-manager How To Configure Cross Tenant Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-portal.md
+
+ Title: Configure cross-tenant connection in Azure Virtual Network Manager (Preview) - Portal
+description: Learn how to create cross-tenant connections in Azure Virtual Network Manager to support virtual networks across subscriptions and management groups in different tenants.
++++ Last updated : 09/19/2022+
+#customerintent: As a cloud admin, I need to manage multiple tenants from a single network manager instance. Cross-tenant functionality will give me this so I can easily manage all network resources governed by Azure Virtual Network Manager.
+++
+# Configure cross-tenant connection in Azure Virtual Network Manager (Preview) - portal
+
+In this article, you'll learn to create [cross-tenant connections](concept-cross-tenant.md) in the Azure portal with Azure Virtual Network Manager. First, you'll create the scope connection on the central network manager. Then you'll create the network manager connection on the connecting tenant and verify the connection. Last, you'll add virtual networks from different tenants to your network group and verify the membership. Once completed, you can centrally manage the resources of other tenants from a single network manager instance.
+
+> [!IMPORTANT]
+> Azure Virtual Network Manager is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- Two Azure tenants with virtual networks needing to be managed by an Azure Virtual Network Manager instance. During the how-to, the tenants will be referred to as follows:
+ - **Central management tenant** - The tenant where an Azure Virtual Network Manager instance is installed, and you'll centrally manage network groups from cross-tenant connections.
+ - **Target managed tenant** - The tenant containing virtual networks to be managed. This tenant will be connected to the central management tenant.
+- Azure Virtual Network Manager deployed in the central management tenant.
+- Required permissions include:
+ - Administrator of central management tenant has guest account in target managed tenant.
+  - Administrator guest account has *Network Contributor* permissions applied at the appropriate scope level (management group, subscription, or virtual network).
+
+Need help with setting up permissions? Check out how to [add guest users in the Azure portal](../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md), and how to [assign user roles to resources in the Azure portal](../role-based-access-control/role-assignments-portal.md).
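
For example, a hedged sketch of granting that permission at subscription scope with the Azure CLI (the account name and subscription ID are placeholders):

```bash
# Assign Network Contributor to the guest admin account on the target managed subscription
az role assignment create \
  --assignee "<guest-admin@targetmanagedtenant.com>" \
  --role "Network Contributor" \
  --scope "/subscriptions/<target-subscription-id>"
```
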
+
+## Create scope connection within network manager
+Creation of the scope connection begins on the central management tenant with a network manager deployed. This is the network manager where you plan to manage all of your resources across tenants. In this task, you'll set up a scope connection to add a subscription from a target tenant.
+1. Go to your Azure Virtual Network Manager instance.
+1. Under **Settings**, select **Cross-tenant connections** and select **Create cross-tenant connection**.
+1. On the **Create a connection** page, enter the connection name and target tenant information, and select **Create** when completed.
+1. Verify the scope connection is listed under **Cross-tenant connections** and the status is **Pending**.
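
If you'd rather script this step, a rough sketch with the Azure CLI virtual-network-manager preview extension follows; the command group, parameter names, and placeholder values are assumptions and may differ in your extension version:

```bash
# Create a scope connection on the central network manager (preview extension; names are assumptions)
az network manager scope-connection create \
  --resource-group "<central-rg>" \
  --network-manager-name "<central-network-manager>" \
  --name "<connection-name>" \
  --tenant-id "<target-tenant-id>" \
  --resource-id "/subscriptions/<target-subscription-id>"
```
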
+
+## Create network manager connection on subscription in other tenant
+Once the scope connection is created, you'll switch to the target managed tenant and complete the connection by creating another cross-tenant connection in the **Virtual Network Manager** hub.
+1. In the target tenant, search for **virtual network manager** and select **Virtual Network Manager**.
+1. Under **Virtual network manager**, select **Cross-tenant connections**.
+1. Select **Create a connection**.
+1. On the **Create a connection** page, enter the information for your central network manager tenant, and select **Create** when complete.
+
+## Verify the connection state
+Once both connections are created, it's time to verify the connection on the central management tenant.
+1. On your central management tenant, select your network manager.
+1. Select **Cross-tenant connections** under **Settings**, and verify your cross-tenant connection is listed as **Connected**.
+
+## Add static members to your network group
+Now, you'll add virtual networks from both tenants into a static member network group.
+
+> [!NOTE]
+> Currently, cross-tenant connections only support static memberships within a network group. Dynamic membership with Azure Policy is not supported.
+
+1. From your network manager, add a network group if needed.
+1. Select your network group and select **Add virtual networks** under **Manually add members**.
+1. On the **Manually add members** page, select **Tenant:...** next to the search box, select the linked tenant from the list, and select **Apply**.
+1. To view the available virtual networks from the target managed tenant, select **authenticate** and proceed through the authentication process. If you have multiple Azure accounts, select the one you're currently signed in with that has permissions to the target managed tenant.
+1. Select the VNets to include in the network group and select **Add**.
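
A comparable CLI sketch for the static-member step is below, again using the preview virtual-network-manager extension; the parameter names and placeholder values are assumptions:

```bash
# Add a virtual network from the target managed tenant as a static member (names are assumptions)
az network manager group static-member create \
  --resource-group "<central-rg>" \
  --network-manager-name "<central-network-manager>" \
  --network-group-name "<network-group-name>" \
  --static-member-name "<member-name>" \
  --resource-id "/subscriptions/<target-subscription-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"
```
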
+
+## Verify group members
+
+In the final step, you'll verify the virtual networks that are now members of the network group.
+1. On the **Overview** page of the network group, select **View group members** and verify the VNets you added manually are listed.
+## Next steps
+In this article, you deployed a cross-tenant connection between two Azure subscriptions. To learn more about using Azure Virtual Network Manager, see:
+- [Common use cases for Azure Virtual Network Manager](concept-use-cases.md)
+- [Learn to build a secure hub-and-spoke network](tutorial-create-secured-hub-and-spoke.md)
+- [FAQ](faq.md)
+
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Make sure the virtual network gateway has been successfully deployed before depl
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/vm-network-settings.png" alt-text="Screenshot of test VM's network settings.":::
-1. Then select **Effective routes** under *Support + troubleshooting* to see the routes for the virtual network peerings. The `10.3.0.0/16` route with the next hop of `VNetGlobalPeering` is the route to the hub virtual network. The `10.5.0.0/16` route with the next hop of `ConnectedGroup` is route to the other spoke virtual network. All spokes virtual network will be in a *ConnectedGroup* when **Transitivity** is enabled.
+1. Then select **Effective routes** under *Help* to see the routes for the virtual network peerings. The `10.3.0.0/16` route with the next hop of `VNetGlobalPeering` is the route to the hub virtual network. The `10.5.0.0/16` route with the next hop of `ConnectedGroup` is the route to the other spoke virtual network. All spoke virtual networks will be in a *ConnectedGroup* when **Transitivity** is enabled.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/effective-routes.png" alt-text="Screenshot of effective routes from test VM network interface." lightbox="./media/tutorial-create-secured-hub-and-spoke/effective-routes-expanded.png" :::
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP
### Availability zones
-* A NAT gateway can be created in a specific availability zone or placed in 'no zone'. NAT gateway is placed in no zone by default. A non-zonal NAT gateway is placed in a zone for you by Azure and does not give a guarantee of redundancy.
+* A NAT gateway can be created in a specific availability zone or placed in 'no zone'.
-* NAT gateway can be isolated in a specific zone when you create [availability zones](../../availability-zones/az-overview.md) scenarios. This deployment is called a zonal deployment. After NAT gateway is deployed, the zone selection cannot be changed.
+* NAT gateway can be isolated in a specific zone when you create [zone isolation scenarios](/azure/virtual-network/nat-gateway/nat-availability-zones). This deployment is called a zonal deployment. After NAT gateway is deployed, the zone selection cannot be changed.
+
+* NAT gateway is placed in no zone by default. A [non-zonal NAT gateway](/azure/virtual-network/nat-gateway/nat-availability-zones#non-zonal) is placed in a zone for you by Azure.
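
To make the zonal option concrete, here's a minimal CLI sketch of a zonal deployment (resource names are placeholders, and the `--zone` parameters are assumed to pin both resources to the same zone):

```bash
# Create a standard public IP and a NAT gateway pinned to availability zone 1 (zonal deployment)
az network public-ip create --resource-group myRG --name myNatIp --sku Standard --zone 1
az network nat gateway create \
  --resource-group myRG \
  --name myNatGateway \
  --location eastus2 \
  --public-ip-addresses myNatIp \
  --zone 1
```
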
### NAT gateway and basic SKU resources
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
* Learn about the [NAT gateway resource](./nat-gateway-resource.md).
-* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
+* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
web-application-firewall Custom Waf Rules Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/custom-waf-rules-overview.md
Must be one of the following operators:
- IPMatch - only used when Match Variable is *RemoteAddr*
- Equal - input is the same as the MatchValue
+- Any - It should not have a MatchValue. It is recommended for Match Variable with a valid Selector.
- Contains
- LessThan
- GreaterThan