Updates from: 04/14/2022 07:24:14
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Find Help Open Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/find-help-open-support-ticket.md
+
+ Title: Find help and open a support ticket for Azure Active Directory B2C
+
+description: Learn how to find technical, pre-sales, billing, and subscription help and open a support ticket for Azure Active Directory B2C
+ Last updated: 03/30/2022
+# Find help and open a support ticket for Azure Active Directory B2C
+
+Microsoft provides global technical, pre-sales, billing, and subscription support for Azure Active Directory B2C (Azure AD B2C). Support is available both online and by phone for Microsoft Azure paid and trial subscriptions. Phone support and online billing support are available in additional languages.
+
+## Find help without opening a support ticket
+
+Before creating a support ticket, check out the following resources for answers and information.
+
+* For content such as how-to information or code samples for IT professionals and developers, see the [technical documentation for Azure AD B2C at docs.microsoft.com](../active-directory-b2c/index.yml).
+
+* The [Microsoft Technical Community](https://techcommunity.microsoft.com/) is the place for our IT pro partners and customers to collaborate, share, and learn. The [Microsoft Technical Community Info Center](https://techcommunity.microsoft.com/t5/Community-Info-Center/ct-p/Community-Info-Center) is used for announcements, blog posts, ask-me-anything (AMA) interactions with experts, and more. You can also [join the community to submit your ideas](https://techcommunity.microsoft.com/t5/Communities/ct-p/communities).
+
+## Open a support ticket
+
+If you're unable to find answers by using self-help resources, you can open an online support ticket. You should open each support ticket for only a single problem to enable us to connect you to the support engineers who are subject matter experts for your problem. Also, Azure AD B2C engineering teams prioritize their work based on incidents that are generated, so you're often contributing to service improvements.
+
+### How to open a support ticket for Azure AD B2C in the Azure portal
+
+> [!NOTE]
+> For billing or subscription issues, use the [Microsoft 365 admin center](https://admin.microsoft.com).
+
+1. Sign in to [the Azure portal](https://portal.azure.com).
+
+1. Make sure you're using the Azure Active Directory (Azure AD) tenant that contains your Azure subscription:
+
+ 1. In the Azure portal toolbar, select the **Directories + subscriptions** (:::image type="icon" source="./../active-directory/develop/media/common/portal-directory-subscription-filter.png" border="false":::) icon.
+
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select the **Switch** button next to it.
+
+1. In the Azure portal, search for and select **Azure Active Directory**.
+
+1. In the left menu, under **Troubleshooting + Support**, select **New support request**.
+
+1. On the **1. Problem description** tab:
+
+ 1. For **Issue type**, select **Technical**.
+
+ 1. For **Subscription**, select your Azure subscription.
+
+ 1. For **Service type**, select **Azure Active Directory Business to Consumer (B2C)**. This action shows **Summary** and **Problem type** fields.
+
+ 1. For **Summary**, write a descriptive summary for your request. The summary needs to be under 140 characters.
+
+ 1. For **Problem type**, select a problem type, and then select a category for that type.
+
+1. Select **Next**.
+
+1. On the **2. Recommended solution** tab, you're offered self-help solutions and documentation. If none of the recommended solutions resolves your problem, select **Next**.
+
+1. On the **3. Additional details** tab, fill out the required details accurately. For example:
+
+ 1. Your tenant ID or domain name. See how to find your [tenant ID](tenant-management.md#get-your-tenant-id) or [tenant name](tenant-management.md#get-your-tenant-name).
+
+ 1. The time and date when the problem occurred.
+
+ 1. Additional details to describe the problem.
+
+ 1. Under **Advanced diagnostic information**, select **Yes (Recommended)** to allow Microsoft support to access your Azure resources for faster problem resolution.
+
+ 1. Select a **[Severity](https://azure.microsoft.com/support/plans/response)** and your preferred contact method.
+
+
+ :::image type="content" source="media/find-help-and-submit-support-ticket/find-help-and-submit-support-ticket-1.png" alt-text="Screenshot of how to find help and submit support ticket part 1.":::
+
+ :::image type="content" source="media/find-help-and-submit-support-ticket/find-help-and-submit-support-ticket-2.png" alt-text="Screenshot of how to find help and submit support ticket part 2.":::
+
+1. Select **Next**. On the **4. Review + create** tab, you'll see a summary of your support ticket.
+
+1. If the details of your support ticket are accurate, select **Create** to submit the support ticket. Otherwise, select **Previous** to make corrections.
+
+ :::image type="content" source="media/find-help-and-submit-support-ticket/review-and-create.png" alt-text="Screenshot of how to find help and submit support ticket Review and create tab.":::
+
+## Next steps
+
+* [Microsoft Tech Community](https://techcommunity.microsoft.com/)
+
+* [Technical documentation for Azure AD B2C at docs.microsoft.com](../active-directory-b2c/index.yml)
+
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
The recommendation is:
A shared cache is faster because it's not serialized. However, the memory will grow as tokens are cached. The number of tokens is equal to the number of tenants times the number of downstream APIs. An app token is about 2 KB in size, whereas tokens for a user are about 7 KB in size. It's great for development, or if you have few users. - If you want to use an in-memory token cache and control its size and eviction policies, use the [Microsoft.Identity.Web in-memory cache option](msal-net-token-cache-serialization.md?tabs=aspnet#in-memory-token-cache-1).-- If you build an SDK and want to write your own token cache serializer for confidential client applications, inherit from [Microsoft.Identity.Web.MsalAsbtractTokenCacheProvider](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web.TokenCache/MsalAbstractTokenCacheProvider.cs) and override the `WriteCacheBytesAsync` and `ReadCacheBytesAsync` methods.
+- If you build an SDK and want to write your own token cache serializer for confidential client applications, inherit from [Microsoft.Identity.Web.MsalAbstractTokenCacheProvider](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web.TokenCache/MsalAbstractTokenCacheProvider.cs) and override the `WriteCacheBytesAsync` and `ReadCacheBytesAsync` methods.
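+A minimal sketch of such a serializer, assuming the abstract members that `MsalAbstractTokenCacheProvider` exposes (`WriteCacheBytesAsync`, `ReadCacheBytesAsync`, and `RemoveKeyAsync`; exact signatures vary by library version). The dictionary-backed store is only a stand-in for a real store such as Redis or SQL:
+
+```csharp
+using System.Collections.Concurrent;
+using System.Threading.Tasks;
+using Microsoft.Identity.Web.TokenCacheProviders;
+
+// Sketch only: a concurrent dictionary stands in for a durable store.
+public class DictionaryTokenCacheProvider : MsalAbstractTokenCacheProvider
+{
+    private readonly ConcurrentDictionary<string, byte[]> _store = new();
+
+    // Persist the serialized token cache blob under its cache key.
+    protected override Task WriteCacheBytesAsync(string cacheKey, byte[] bytes)
+    {
+        _store[cacheKey] = bytes;
+        return Task.CompletedTask;
+    }
+
+    // Return the blob for the key, or null when nothing is cached yet.
+    protected override Task<byte[]?> ReadCacheBytesAsync(string cacheKey)
+    {
+        _store.TryGetValue(cacheKey, out byte[]? bytes);
+        return Task.FromResult(bytes);
+    }
+
+    // Evict the entry, for example on sign-out.
+    protected override Task RemoveKeyAsync(string cacheKey)
+    {
+        _store.TryRemove(cacheKey, out _);
+        return Task.CompletedTask;
+    }
+}
+```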
## [ASP.NET Core web apps and web APIs](#tab/aspnetcore)
The strategies are different depending on whether you're writing a token cache s
### Custom token cache for a web app or web API (confidential client application)
-If you want to write your own token cache serializer for confidential client applications, we recommend that you inherit from [Microsoft.Identity.Web.MsalAsbtractTokenCacheProvider](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web.TokenCache/MsalAbstractTokenCacheProvider.cs) and override the `WriteCacheBytesAsync` and `ReadCacheBytesAsync` methods.
+If you want to write your own token cache serializer for confidential client applications, we recommend that you inherit from [Microsoft.Identity.Web.MsalAbstractTokenCacheProvider](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web.TokenCache/MsalAbstractTokenCacheProvider.cs) and override the `WriteCacheBytesAsync` and `ReadCacheBytesAsync` methods.
Examples of token cache serializers are provided in [Microsoft.Identity.Web/TokenCacheProviders](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web.TokenCache).
active-directory Groups Dynamic Rule More Efficient https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-more-efficient.md
+
+ Title: Create simpler and faster rules for dynamic groups - Azure AD | Microsoft Docs
+description: How to optimize your membership rules to automatically populate groups.
+
+documentationcenter: ''
+ Last updated: 03/29/2022
+# Create simpler, more efficient rules for dynamic groups in Azure Active Directory
+
+The Azure Active Directory (Azure AD) team sees numerous incidents related to dynamic groups and the processing time of their membership rules. This article covers the methods our engineering team uses to help customers simplify their membership rules. Simpler, more efficient rules result in better dynamic group processing times. When you write membership rules for dynamic groups, take the following steps to ensure that the rules are as efficient as possible.
++
+## Minimize use of MATCH
+
+Minimize the use of the `-match` operator in rules as much as possible. Instead, explore whether it's possible to use the `-contains`, `-startswith`, or `-eq` operators. Consider using other properties that allow you to write rules that select the users you want in the group without using the `-match` operator. For example, if you want a rule for a group containing all users whose city is Lagos, then instead of using rules like:
+
+- `user.city -match "ago"`
+- `user.city -match ".*?ago.*"`
+
+It's better to use rules like:
+
+- `user.city -contains "ago"`
+- `user.city -startswith "Lag"`
+
+Or, best of all:
+
+- `user.city -eq "Lagos"`
+
+## Use fewer OR operators
+
+In your rule, identify criteria that compare the same property against various values and are linked together with `-or` operators. Instead, use the `-in` operator to group them into a single criterion to make the rule easier to evaluate. For example, instead of having a rule like this:
+
+```
+(user.department -eq "Accounts" -and user.city -eq "Lagos") -or
+(user.department -eq "Accounts" -and user.city -eq "Ibadan") -or
+(user.department -eq "Accounts" -and user.city -eq "Kaduna") -or
+(user.department -eq "Accounts" -and user.city -eq "Abuja") -or
+(user.department -eq "Accounts" -and user.city -eq "Port Harcourt")
+```
+
+It's better to have a rule like this:
+
+- `user.department -eq "Accounts" -and user.city -in ["Lagos", "Ibadan", "Kaduna", "Abuja", "Port Harcourt"]`
++
+Conversely, identify similar sub-criteria in which the same property is tested as not equal to various values and linked with `-and` operators. Then use the `-notin` operator to group them into a single criterion to make the rule easier to understand and evaluate. For example, instead of using a rule like this:
+
+- `(user.city -ne "Lagos") -and (user.city -ne "Ibadan") -and (user.city -ne "Kaduna") -and (user.city -ne "Abuja") -and (user.city -ne "Port Harcourt")`
+
+It's better to use a rule like this:
+
+- `user.city -notin ["Lagos", "Ibadan", "Kaduna", "Abuja", "Port Harcourt"]`
+
+## Avoid redundant criteria
+
+Ensure that you aren't using redundant criteria in your rule. For example, instead of using a rule like this:
+
+- `user.city -eq "Lagos" -or user.city -startswith "Lag"`
+
+It's better to use a rule like this:
+
+- `user.city -startswith "Lag"`
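+Once a rule is simplified, it's applied through the group's membership rule properties. The following is a hedged sketch of creating such a group with the Microsoft Graph .NET SDK (v4 request style); the credential values and group names are placeholders:
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Azure.Identity;
+using Microsoft.Graph;
+
+class CreateDynamicGroup
+{
+    static async Task Main()
+    {
+        // Placeholders: supply your own tenant, app registration, and secret.
+        var credential = new ClientSecretCredential(
+            "<tenant-id>", "<client-id>", "<client-secret>");
+        var graph = new GraphServiceClient(
+            credential, new[] { "https://graph.microsoft.com/.default" });
+
+        var group = new Group
+        {
+            DisplayName = "Accounts - Nigeria offices",
+            MailEnabled = false,
+            MailNickname = "accounts-ng",
+            SecurityEnabled = true,
+            GroupTypes = new List<string> { "DynamicMembership" },
+            // One -in criterion instead of five -eq criteria linked with -or.
+            MembershipRule = "user.department -eq \"Accounts\" -and user.city -in " +
+                             "[\"Lagos\", \"Ibadan\", \"Kaduna\", \"Abuja\", \"Port Harcourt\"]",
+            MembershipRuleProcessingState = "On",
+        };
+
+        await graph.Groups.Request().AddAsync(group);
+        Console.WriteLine("Dynamic group created.");
+    }
+}
+```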
++
+## Next steps
+
+- [Create a dynamic group](groups-dynamic-membership.md)
+
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 03/29/2022 Last updated : 04/13/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on March 29th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on April 13th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT BUSINESS CENTER | MICROSOFT_BUSINESS_CENTER | 726a0894-2c77-4d65-99da-9775ef05aad1 | MICROSOFT_BUSINESS_CENTER (cca845f9-fd51-4df6-b563-976a37c56ce0) | MICROSOFT BUSINESS CENTER (cca845f9-fd51-4df6-b563-976a37c56ce0) | | Microsoft Cloud App Security | ADALLOM_STANDALONE | df845ce7-05f9-4894-b5f2-11bbfbcfd2b6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | | MICROSOFT DEFENDER FOR ENDPOINT | WIN_DEF_ATP | 111046dd-295b-4d6d-9724-d52ac90bd1f2 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef) |
+| Microsoft Defender for Endpoint P1 | DEFENDER_ENDPOINT_P1 | 16a55f2f-ff35-4cd5-9146-fb784e3761a5 | Intune_Defender (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4) | MDE_SecurityManagement (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4) |
| Microsoft Defender for Endpoint Server | MDATP_Server | 509e8ab6-0274-4cda-bcbd-bd164fd562c4 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | MICROSOFT DYNAMICS CRM ONLINE BASIC | CRMPLAN2 | 906af65a-2970-46d5-9b58-4e9aa50f0657 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>CRMPLAN2 (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS CRM ONLINE BASIC (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | | Microsoft Defender for Identity | ATA | 98defdf7-f6c1-44f5-a1f6-943b6764e7a5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_FOR_AATP (61d18b02-6889-479f-8f36-56e6e0fe5792) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>SecOps Investigation for MDI (61d18b02-6889-479f-8f36-56e6e0fe5792) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT TEAMS (FREE) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) | | MICROSOFT TEAMS EXPLORATORY | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653 | | 
| Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
+| Microsoft Teams Rooms Standard without Audio Conferencing | MEETING_ROOM_NOAUDIOCONF | 61bec411-e46a-4dab-8f46-8b58ec845ffe | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| Microsoft Teams Trial | MS_TEAMS_IW | 74fbf1bb-47c6-4796-9623-77dc7371723b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Teams (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | | Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) |
active-directory Active Directory Data Storage Japan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-japan.md
+
+ Title: Customer data storage for Japan customers - Azure AD
+description: Learn about where Azure Active Directory stores customer-related data for its Japan customers.
+ Last updated: 04/12/2022
+# Customer data storage for Japan customers in Azure Active Directory
+
+Azure Active Directory (Azure AD) stores its Customer Data in a geographical location based on the country you provided when you signed up for a Microsoft Online service. Microsoft Online services include Microsoft 365 and Azure.
+
+For information about where Azure AD and other Microsoft services' data is located, see the [Where your data is located](https://www.microsoft.com/trust-center/privacy/data-location) section of the Microsoft Trust Center.
+
+From April 15, 2022, Microsoft began storing Azure AD's Customer Data for new tenants with a Japan billing address within the Japanese datacenters. From April 15, 2022, to June 30, 2022, a backup copy of the Azure AD Customer Data for these new tenants will be stored in Asia to ensure a smooth transition to the Japanese datacenters. This copy will be destroyed on June 30, 2022.
+
+Additionally, certain Azure AD features don't yet support storage of Customer Data in Japan. Go to the [Azure AD data map](https://msit.powerbi.com/view?r=eyJrIjoiYzEyZTc5OTgtNTdlZS00ZTVkLWExN2ItOTM0OWU4NjljOGVjIiwidCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0NyIsImMiOjV9) for specific feature information. For example, Microsoft Azure AD Multi-Factor Authentication stores Customer Data in the US and processes it globally. See [Data residency and customer data for Azure AD Multi-Factor Authentication](../authentication/concept-mfa-data-residency.md).
+
+> [!NOTE]
+> Microsoft products, services, and third-party applications that integrate with Azure AD have access to Customer Data. Evaluate each product, service, and application you use to determine how Customer Data is processed by that specific product, service, and application, and whether they meet your company's data storage requirements. For more information about Microsoft services' data residency, see the [Where your data is located](https://www.microsoft.com/trust-center/privacy/data-location) section of the Microsoft Trust Center.
+
+## Azure role-based access control (Azure RBAC)
+
+Role definitions, role assignments, and deny assignments are stored globally to ensure that you have access to your resources regardless of the region you created the resource. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md#where-is-azure-rbac-data-stored).
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
Last updated 1/18/2022
-# Publish your application in the Azure Active Directory application gallery
+# Request to publish your application in the Azure Active Directory application gallery
You can publish your application in the Azure Active Directory (Azure AD) application gallery. When your application is published, it's made available as an option for users when they add applications to their tenant. For more information, see [Overview of the Azure Active Directory application gallery](overview-application-gallery.md).
To publish your application in the gallery, you need to complete the following t
- Join the Microsoft partner network. ## Prerequisites- - To publish your application in the gallery, you must first read and agree to specific [terms and conditions](https://azure.microsoft.com/support/legal/active-directory-app-gallery-terms/).-- Every application in the gallery must implement one of the supported single sign-on (SSO) options. To learn more about the supported options, see [Plan a single sign-on deployment](plan-sso-deployment.md). To learn more about authentication, see [Authentication vs. authorization](../develop/authentication-vs-authorization.md) and [Azure active Directory code samples](../develop/sample-v2-code.md). For password SSO, make sure that your application supports form authentication so that password vaulting can be used. For a quick introduction about single sign-on configuration in the portal, see [Enable single sign-on for an enterprise application](add-application-portal-setup-sso.md).-- For federated applications (OpenID and SAML/WS-Fed), the application must support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/) to be listed in the gallery. The enterprise gallery applications must support multiple user configurations and not any specific user.-- For Open ID Connect, the application must be multitenanted and the [Azure AD consent framework](../develop/consent-framework.md) must be properly implemented for the application. The user can send the sign-in request to a common endpoint so that any user can provide consent to the application. You can control user access based on the tenant ID and the user's UPN received in the token.-- Supporting provisioning is optional, but highly recommended. Provisioning must be done using the System for Cross-domain Identity Management (SCIM) protocol, which is easy to implement. Using SCIM allows users to automatically create and update accounts in your application without relying on manual processes such as uploading CSV files. To learn more about the Azure AD SCIM implementation, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md).
+- Support for single sign-on (SSO). To learn more about the supported options, see [Plan a single sign-on deployment](plan-sso-deployment.md).
+ - For password SSO, make sure that your application supports form authentication so that password vaulting can be used.
+ - For federated applications (OpenID and SAML/WS-Fed), the application must support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/) to be listed in the gallery. The enterprise gallery applications must support multiple user configurations and not any specific user.
+ - For Open ID Connect, the application must be multitenanted, and the [Azure AD consent framework](../develop/consent-framework.md) must be properly implemented for the application (see the sketch after this list).
+- Supporting provisioning is optional, but highly recommended. To learn more about the Azure AD SCIM implementation, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md).
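+For the Open ID Connect prerequisite above, the following is a minimal MSAL.NET sketch of the multitenant sign-in pattern: the `common` authority lets users from any Azure AD tenant sign in and consent. The client ID and redirect URI are placeholders:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Identity.Client;
+
+class MultitenantSignIn
+{
+    static async Task Main()
+    {
+        // The /common authority accepts accounts from any Azure AD tenant,
+        // which is what makes the app multitenanted.
+        IPublicClientApplication app = PublicClientApplicationBuilder
+            .Create("<client-id>") // placeholder
+            .WithAuthority("https://login.microsoftonline.com/common")
+            .WithRedirectUri("http://localhost")
+            .Build();
+
+        // The first sign-in from a tenant triggers the consent framework.
+        AuthenticationResult result = await app
+            .AcquireTokenInteractive(new[] { "User.Read" })
+            .ExecuteAsync();
+
+        Console.WriteLine(result.Account?.Username);
+    }
+}
+```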
You can get a free test account with all the premium Azure AD features - 90 days free and can get extended as long as you do dev work with it: [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
active-directory How To Use Vm Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-token.md
A client application can request a managed identity [app-only access token](../d
| Link | Description |
| -- | -- |
| [Get a token using HTTP](#get-a-token-using-http) | Protocol details for managed identities for Azure resources token endpoint |
+| [Get a token using Azure.Identity](#get-a-token-using-the-azure-identity-client-library) | Get a token using the Azure.Identity client library (sketched after this table) |
| [Get a token using the Microsoft.Azure.Services.AppAuthentication library for .NET](#get-a-token-using-the-microsoftazureservicesappauthentication-library-for-net) | Example of using the Microsoft.Azure.Services.AppAuthentication library from a .NET client |
| [Get a token using C#](#get-a-token-using-c) | Example of using managed identities for Azure resources REST endpoint from a C# client |
| [Get a token using Java](#get-a-token-using-java) | Example of using managed identities for Azure resources REST endpoint from a Java client |
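A minimal sketch of the Azure.Identity path added above, assuming the code runs on an Azure VM whose system-assigned managed identity is enabled and that the `Azure.Identity` package is installed:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

class GetTokenWithAzureIdentity
{
    static async Task Main()
    {
        // Uses the VM's managed identity; no secrets appear in code. Under the
        // hood this calls the IMDS endpoint described in the HTTP section
        // (http://169.254.169.254/metadata/identity/oauth2/token).
        var credential = new ManagedIdentityCredential();

        // Request an app-only token for Azure Resource Manager.
        AccessToken token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

        Console.WriteLine($"Token expires on: {token.ExpiresOn}");
    }
}
```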
active-directory Memo 22 09 Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-authorization.md
Title: Memo 22-09 authorization requirements
-description: Guidance on meeting authorization requirements outlined in US government OMB memorandum 22-09
+description: Get guidance on meeting authorization requirements outlined in US government OMB memorandum 22-09.
-# Meet authorization requirements for Memorandum 22-09
+# Meet authorization requirements of memorandum 22-09
-This series of articles offer guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal Government's Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document. We refer to it as “The memo.”
+This series of articles offers guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles, as described in the US federal government's Office of Management and Budget (OMB) [memorandum 22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf).
-[Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf) requires specific types of enforcement within your MFA policies. Specifically, you must account for device-based, role-based, attribute-based controls, and privileged access management.
+The memo requires specific types of enforcement within your multifactor authentication (MFA) policies. Specifically, you must account for device-based controls, role-based controls, attribute-based controls, and privileged access management.
-
## Device-based controls
-M-22-09 specifically requires the use of at least one device-based signal when making an authorization decision to access a system or application. This requirement can be enforced using Conditional Access and there are several device signals that can be applied during the authorization. The following table describes the signal and the requirements to retrieve the signal:
+Memorandum 22-09 specifically requires the use of at least one device-based signal when you're making an authorization decision to access a system or application. You can enforce this requirement by using conditional access. Several device signals can be applied during the authorization. The following table describes the signal and the requirements to retrieve the signal (a sketch of an enforcing policy follows the table):
| Signal| Signal retrieval |
| - | - |
-| Device must be managed| Integration with Intune or another MDM that supports this integration are required.
-Hybrid Azure AD joined since the device is managed by active directory also qualifies |
-| Device must be compliant| Integration with Intune or other MDM's that support this integration are required. For more information, see [Use device compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started) |
-| Threat signals| Microsoft Defender for Endpoint and other EDR tools have integrations with Azure AD and Intune to send threat signals that can be used to deny access. Threat signals are part of the compliant status signal |
-| Cross tenant access policies| permits an organization to trust device signals from devices belonging to other organizations. (public preview) |
+| Device must be managed| Integration with Intune or another mobile device management (MDM) solution that supports this integration is required. |
+| Hybrid Azure AD joined| The device is managed by Active Directory and qualifies. |
+| Device must be compliant| Integration with Intune or another MDM solution that supports this integration is required. For more information, see [Use device compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started). |
+| Threat signals| Microsoft Defender for Endpoint and other endpoint detection and response (EDR) tools have integrations with Azure AD and Intune to send threat signals that can be used to deny access. Threat signals are part of the compliant status signal. |
+| Cross-tenant access policies (public preview)| These policies permit an organization to trust device signals from devices that belong to other organizations. |
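+As noted above the table, a hedged sketch of enforcing device-based signals through a conditional access policy, using the Microsoft Graph .NET SDK (v4 request style). The policy name and assignments are illustrative, and the call assumes an already-authenticated client with the `Policy.ReadWrite.ConditionalAccess` permission:
+
+```csharp
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Microsoft.Graph;
+
+class RequireManagedDevice
+{
+    static async Task CreatePolicyAsync(GraphServiceClient graph)
+    {
+        var policy = new ConditionalAccessPolicy
+        {
+            DisplayName = "Require compliant or hybrid-joined device",
+            State = ConditionalAccessPolicyState.Enabled,
+            Conditions = new ConditionalAccessConditionSet
+            {
+                Users = new ConditionalAccessUsers { IncludeUsers = new[] { "All" } },
+                Applications = new ConditionalAccessApplications
+                {
+                    IncludeApplications = new[] { "All" },
+                },
+            },
+            // Grant access only when a device-based signal is satisfied.
+            GrantControls = new ConditionalAccessGrantControls
+            {
+                Operator = "OR",
+                BuiltInControls = new List<ConditionalAccessGrantControl>
+                {
+                    ConditionalAccessGrantControl.CompliantDevice,
+                    ConditionalAccessGrantControl.DomainJoinedDevice,
+                },
+            },
+        };
+
+        await graph.Identity.ConditionalAccess.Policies.Request().AddAsync(policy);
+    }
+}
+```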
-## Role-based access controls
+## Role-based controls
-Role based access control (RBAC role) remains an important way to enforce basic authorizations through assignments of users to a role in a particular scope. Azure AD has tools that make RBAC assignment and lifecycle management easier. This includes assigning access using [entitlement management](../governance/entitlement-management-overview.md) features, include [Access Packages](../governance/entitlement-management-access-package-create.md) and [Access Reviews](../governance/access-reviews-overview.md). These ease the burden of managing authorizations by providing self-service requests and automated functions to managed the lifecycle, for example by automatically ending access based of specific criteria.
+Role-based access control (RBAC) is an important way to enforce basic authorizations through assignments of users to a role in a particular scope. Azure AD has tools that make RBAC assignment and lifecycle management easier. For example, you can assign access by using [entitlement management](../governance/entitlement-management-overview.md) features, including [access packages](../governance/entitlement-management-access-package-create.md) and [access reviews](../governance/access-reviews-overview.md).
+
+These features ease the burden of managing authorizations by providing self-service requests and automated functions to manage the lifecycle. For example, you can automatically end access based on specific criteria.
## Attribute-based controls
-Attribute based access controls rely on metadata assigned to a user or resource as a mechanism to permit or deny access during authentication. There are several ways you can create authorizations using ABAC enforcements for data and resources through authentication.
+Attribute-based access control (ABAC) relies on metadata assigned to a user or resource as a mechanism to permit or deny access during authentication. There are several ways to create authorizations by using ABAC enforcements for data and resources through authentication.
### Attributes assigned to users
-Attributes assigned to users and stored in Azure AD can be leveraged to create authorizations for users. This is achieved through the automatic assignment of users to [Dynamic Groups](../enterprise-users/groups-create-rule.md) based on a particular ruleset defined during group creation. Rules are configured to add or remove a user from the group based on the evaluation of the rule against the user and one or more of their attributes. This feature has greater value when your attributes are maintained and not statically set on users from the day of creation.
+You can use attributes assigned to users and stored in Azure AD to create authorizations for users. Users can be automatically assigned to [dynamic groups](../enterprise-users/groups-create-rule.md) based on a particular ruleset that you define during group creation. Rules are configured to add or remove a user from the group based on the evaluation of the rule against the user and one or more of their attributes. This feature has greater value when your attributes are maintained and not statically set on users from the day of creation.
### Attributes assigned to data
-Azure AD allows integration of an authorization directly to the data. You can create integrate authorization in multiple ways.
+Azure AD allows integration of an authorization directly to the data. You can integrate authorization in multiple ways.
+
+You can configure [authentication context](../conditional-access/concept-conditional-access-cloud-apps.md) within conditional access policies. This allows you to, for example, restrict which actions a user can take within an application or on specific data. These authentication contexts are then mapped within the data source itself.
+
+Data sources can be Microsoft Office files like Word and Excel, or SharePoint sites that are mapped to your authentication context. For an example of this integration, see [Manage site access based on sensitivity label](/sharepoint/authentication-context-example).
-You can configure [authentication context](../conditional-access/concept-conditional-access-cloud-apps.md) within Conditional Access Policies. This allows you to, for example, restrict which actions a user can take within an application or on specific data. These authentication contexts are then mapped within the data source itself. Data sources can be office files like word and excel or SharePoint sites that use mapped to your authentication context. An example of this integration is shown [here](/sharepoint/authentication-context-example).
+You can also use authentication context assigned to data directly in your applications. This approach requires integration with the application code and [developers](../develop/developer-guide-conditional-access-authentication-context.md) to adopt this capability. You can use authentication context integration with Microsoft Defender for Cloud Apps to control [actions taken on data through session controls](/defender-cloud-apps/session-policy-aad).
-You can also leverage authentication context assigned to data directly in your applications. This requires integration with the application code and [developers](../develop/developer-guide-conditional-access-authentication-context.md) to adopt this capability. Authentication context integration with Microsoft Defender for Cloud Apps can be used to control [actions taken on data using session controls](/defender-cloud-apps/session-policy-aad). Dynamic groups mentioned previously when combined with Authentication context allow you to control user access mappings between the data and the user attributes.
+If you combine dynamic groups with authentication context, you can control user access mappings between the data and the user attributes.
### Attributes assigned to resources
-Azure includes [ABAC for Storage](../../role-based-access-control/conditions-overview.md) which allows the assignment of metadata tags on data stored in an Azure blob storage account. This metadata can then be assigned to users using role assignments to grant access.
+Azure includes [ABAC for Storage](../../role-based-access-control/conditions-overview.md), which allows the assignment of metadata tags on data stored in an Azure Blob Storage account. You can then assign this metadata to users by using role assignments to grant access.
-## Privileged Access Management
+## Privileged access management
-The memo specifically calls out the use of privileged access management tools that leverage single factor ephemeral credentials for accessing systems as insufficient. These technologies often include password vault products that accept MFA logon for an admin and produce a generated password for an alternate account used to access the system. The system being accessed is still accessed with a single factor. Microsoft has tools for implementing [Privileged identity management](../privileged-identity-management/pim-configure.md) (PIM) for privileged systems with the central identity management system of Azure AD. Using the methods described in the MFA section you can enforce MFA for most privileged systems directly, whether these are applications, infrastructure, or devices. Azure also features PIM capabilities to step up into a specific privileged role. This requires implementation of PIM with Azure AD identities and identifying those systems that are privileged and require additional protections to prevent lateral movement. Configuration guidance is located [here](../privileged-identity-management/pim-deployment-plan.md).
+The memo specifically calls out the use of privileged access management tools that use single-factor ephemeral credentials for accessing systems as insufficient. These technologies often include password vault products that accept MFA sign-in for an admin and produce a generated password for an alternate account that's used to access the system. The system is still accessed with a single factor.
+
+Microsoft has tools for implementing [Privileged Identity Management](../privileged-identity-management/pim-configure.md) (PIM) for privileged systems with the central identity management system of Azure AD. You can enforce MFA for most privileged systems directly, whether these systems are applications, infrastructure elements, or devices.
+
+Azure also features PIM capabilities to step up into a specific privileged role. This requires implementation of PIM with Azure AD identities, along with identifying systems that are privileged and require additional protections to prevent lateral movement. For configuration guidance, see [Plan a Privileged Identity Management deployment](../privileged-identity-management/pim-deployment-plan.md).
## Next steps
-The following articles are a part of this documentation set:
+The following articles are part of this documentation set:
-[Meet identity requirements of Memorandum 22-09](memo-22-09-meet-identity-requirements.md)
+[Meet identity requirements of memorandum 22-09](memo-22-09-meet-identity-requirements.md)
[Enterprise-wide identity management system](memo-22-09-enterprise-wide-identity-management-system.md)
-[Multi-factor authentication](memo-22-09-multi-factor-authentication.md)
-
-[Authorization](memo-22-09-authorization.md)
+[Multifactor authentication](memo-22-09-multi-factor-authentication.md)
[Other areas of Zero Trust](memo-22-09-other-areas-zero-trust.md)
-Additional Zero Trust Documentation
+For more information about Zero Trust, see:
[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
active-directory Memo 22 09 Enterprise Wide Identity Management System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-enterprise-wide-identity-management-system.md
Title: Memo 22-09 enterprise-wide identity management systems
-description: Guidance on meeting enterprise-wide identity management system requirements outlined in US government OMB memorandum 22-09
+ Title: Memo 22-09 enterprise-wide identity management system
+description: Get guidance on meeting enterprise-wide identity management system requirements outlined in US government OMB memorandum 22-09.
# Enterprise-wide identity management system
-M-22-09 requires agencies to develop a plan to consolidate their identity platforms to “as few agency managed identity systems as possible” within 60 days of publication, March 28, 2022. There are several advantages to consolidating your identity platform:
+Memorandum 22-09 requires agencies to develop a plan to consolidate their identity platforms to as few agency-managed identity systems as possible within 60 days of the publication date (March 28, 2022). There are several advantages to consolidating your identity platform:
-* Centralized management of identity lifecycle, policy enforcement, and auditable controls.
+* Centralized management of identity lifecycle, policy enforcement, and auditable controls
-* Uniform capability and parity of enforcement.
+* Uniform capability and parity of enforcement
-* Reduced need to train resources across multiple systems.
+* Reduced need to train resources across multiple systems
-* Enable users to sign in once and then directly access applications and services in the IT environment.
+* Enabling users to sign in once and then directly access applications and services in the IT environment
-* Integrate with as many agency applications as possible.
+* Integration with as many agency applications as possible
-* Facilitate integration among agencies using shared authentication services and trust relationships
-
-
+* Use of shared authentication services and trust relationships to facilitate integration among agencies
## Why Azure Active Directory?
-Azure Active Directory provides the capabilities necessary to implement the recommendations from M-22-09 as well as other broad identity controls that support Zero Trust initiatives. Additionally, if your agency uses Microsoft Office 365, you already have an Azure AD back end to which you can consolidate.
+Azure Active Directory (Azure AD) provides the capabilities necessary to implement the recommendations from memorandum 22-09. It also provides broad identity controls that support Zero Trust initiatives. If your agency uses Microsoft Office 365, you already have an Azure AD back end to which you can consolidate.
## Single sign-on requirements
-The memo requires that users sign in once and then directly access applications. Microsoft's robust single-sign-on capabilities enable the ability for users to sign-in once and then access cloud and other applications. For more information, see [Azure Active Directory Single sign-on](../hybrid/how-to-connect-sso.md).
+The memo requires that users sign in once and then directly access applications. Microsoft's robust single sign-on (SSO) capabilities enable users to sign in once and then access cloud and other applications. For more information, see [Azure Active Directory single sign-on](../hybrid/how-to-connect-sso.md).
+
+## Integration across agencies
-### Integration across agencies
+[Azure AD B2B collaboration](../external-identities/what-is-b2b.md) helps you meet the requirement to facilitate integration among agencies. It does this by:
-[Azure AD B2B collaboration](../external-identities/what-is-b2b.md) (B2B) helps you to meet the requirement to facilitate integration among agencies. It does this by both limiting what other Microsoft tenants your users can access, and by enabling you to allow access to users that you do not have to manage in your own tenant, but whom you can subject to your MFA and other access requirements.
+- Limiting what other Microsoft tenants your users can access.
+- Enabling you to allow access to users whom you don't have to manage in your own tenant, but whom you can subject to your multifactor authentication (MFA) and other access requirements. (A sketch of inviting such a guest user follows this list.)
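+A hedged sketch of that invitation flow with the Microsoft Graph .NET SDK (v4 request style); the email address and redirect URL are placeholders, and the call assumes an already-authenticated client with the `User.Invite.All` permission:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Graph;
+
+class InviteGuestUser
+{
+    static async Task InviteAsync(GraphServiceClient graph)
+    {
+        var invitation = new Invitation
+        {
+            InvitedUserEmailAddress = "analyst@partner-agency.gov", // placeholder
+            InviteRedirectUrl = "https://myapps.microsoft.com",     // placeholder
+            SendInvitationMessage = true,
+        };
+
+        // Creates a guest user object in your tenant; you don't manage the
+        // guest's credentials, but your MFA and access policies still apply.
+        Invitation result = await graph.Invitations.Request().AddAsync(invitation);
+        Console.WriteLine(result.InvitedUser?.Id);
+    }
+}
+```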
-## Connect applications
+## Connecting applications
-To consolidate your enterprise to using Azure AD as the enterprise-wide identity system, you must first understand the relevant assets that will be in scope.
+To consolidate your enterprise to using Azure AD as the enterprise-wide identity system, you must first understand the assets that will be in scope.
### Document applications and services
-To do so, you must inventory the applications and services that users will be accessing: An identity management system can protect only what it knows. Assets must be classified in terms of the sensitivity of data they contain, as well as laws and regulations that establish specific requirements for confidentiality, integrity, or availability of data/information in each major system and that apply to the system's particular information protection requirements.
+You must inventory the applications and services that users will access. An identity management system can protect only what it knows.
-### Classifying applications and services
+Classify assets in terms of:
-As a part of your application inventory, you will need to determine if your current applications use “cloud ready” or legacy authentication protocols
+- The sensitivity of data that they contain.
+- Laws and regulations that establish specific requirements for confidentiality, integrity, or availability of data/information in each major system and that apply to the system's information protection requirements.
-* Cloud ready applications support modern protocols for authentication such as SAML, WS-Federation/Trust, OpenID Connect (OIDC), and OAuth 2.0.
+As a part of your application inventory, you need to determine if your current applications use cloud-ready protocols or [legacy authentication protocols](../fundamentals/auth-sync-overview.md):
-* Legacy authentication applications rely on legacy or proprietary methods of authentication to include but not limited to Kerberos/NTLM (windows authentication), header based, LDAP, and basic authentication.
+* Cloud-ready applications support modern protocols for authentication, such as SAML, WS-Federation/Trust, OpenID Connect (OIDC), and OAuth 2.0.
+
+* Legacy authentication applications rely on older or proprietary methods of authentication, such as Kerberos/NTLM (Windows authentication), header-based authentication, LDAP, and Basic authentication.
#### Tools for application and service discovery
-Microsoft makes available several tools to help with your discovery of applications.
+Microsoft offers the following tools to help with your discovery of applications:
| Tool| Usage |
| - | - |
-| [Usage Analytics for AD FS](../hybrid/how-to-connect-health-adfs.md)| Analyzes the authentication traffic of your federated servers. |
-| [Microsoft Defender for Cloud Apps](/defender-cloud-apps/what-is-defender-for-cloud-apps) (MDCA)| Previously known as Microsoft Cloud App Security (MCAS), Defender for Cloud Apps scans firewall logs to detect cloud apps, IaaS and PaaS services used by your organization. Integrating MDCA with Defender for Endpoint allows discovery to happen from data analyzed from window client devices. |
-| [Application Documentation worksheet](https://download.microsoft.com/download/2/8/3/283F995C-5169-43A0-B81D-B0ED539FB3DD/Application%20Discovery%20worksheet.xlsx)| Helps you document the current states of your applications |
+| [Usage Analytics for Active Directory Federation Services (AD FS)](../hybrid/how-to-connect-health-adfs.md)| Analyzes the authentication traffic of your federated servers. |
+| [Microsoft Defender for Cloud Apps](/defender-cloud-apps/what-is-defender-for-cloud-apps)| Scans firewall logs to detect cloud apps, infrastructure as a service (IaaS) services, and platform as a service (PaaS) services that your organization uses. It was previously called Microsoft Cloud App Security. Integrating Defender for Cloud Apps with Defender for Endpoint allows discovery to happen from data analyzed from Windows client devices. |
+| [Application Discovery worksheet](https://download.microsoft.com/download/2/8/3/283F995C-5169-43A0-B81D-B0ED539FB3DD/Application%20Discovery%20worksheet.xlsx)| Helps you document the current states of your applications. |
-We recognize that your apps may be in systems other than Microsoft, and that our tools may not discover those apps. Ensure you do a complete inventory. All providers should have mechanisms for discovering applications using their services.
+We recognize that your apps might be in systems other than Microsoft's, and that Microsoft tools might not discover those apps. Ensure that you do a complete inventory. All providers should have mechanisms for discovering applications that use their services.
-#### Prioritize applications for connection
+#### Prioritizing applications for connection
-Once you discover all applications in your environment, you will need to prioritize them for migration. You should consider business criticality, user profiles, usage, and lifespan.
+After you discover all applications in your environment, you need to prioritize them for migration. Consider business criticality, user profiles, usage, and lifespan.
For more information on prioritizing applications for migration, see [Migrating your applications to Azure Active Directory](https://aka.ms/migrateapps/whitepaper).
-First, connect your cloud-ready apps in priority order. Then look at applications using legacy authentication protocols.
+First, connect your cloud-ready apps in priority order. Then look at apps that use [legacy authentication protocols](../fundamentals/auth-sync-overview.md).
-For apps using [legacy authentication protocols](../fundamentals/auth-sync-overview.md), consider the following:
+For apps that use legacy authentication protocols, consider the following:
-* For apps with modern authentication not yet using Azure AD, reconfigure them to use Azure AD.
+* For apps with modern authentication that aren't yet using Azure AD, reconfigure them to use Azure AD.
* For apps without modern authentication, there are two choices:
- * Modernize the application code to use modern protocols by integrating the [Microsoft Authentication Library (MSAL).](../develop/v2-overview.md)
+ * Modernize the application code to use modern protocols by integrating the [Microsoft Authentication Library (MSAL)](../develop/v2-overview.md).
- * [Use Azure AD Application Proxy or Secure hybrid partner access](../manage-apps/secure-hybrid-access.md) to provide secure access.
+ * [Use Azure AD Application Proxy or secure hybrid partner access](../manage-apps/secure-hybrid-access.md) to provide secure access.
-* Decommission access to apps that are no longer needed, or are not supported (for example, apps added by shadow IT processes).
+* Decommission access to apps that are no longer needed or that aren't supported (for example, apps added through shadow IT processes).
-## Connect devices
+## Connecting devices
-Part of centralizing your identity management system will include enabling sign-in to devices. This enables users to sign in to physical and virtual devices.
+Part of centralizing your identity management system will include enabling users to sign in to physical and virtual devices.
-You can connect Windows and Linux devices in your centralized Azure AD system, eliminating the need to have multiple, separate identity systems.
+You can connect Windows and Linux devices in your centralized Azure AD system. That eliminates the need to have multiple, separate identity systems.
-During your inventory and scope phase you should consider identifying your devices and infrastructure so they may be integrated with Azure AD to centralize your authentication and management and take advantage of conditional access policies and MFA that can be enforced through Azure AD.
+During your inventory and scope phase, consider identifying your devices and infrastructure so they can be integrated with Azure AD. This integration will centralize your authentication and management. It will also take advantage of conditional access policies and MFA that can be enforced through Azure AD.
### Tools for discovering devices
-You can leverage [Azure automation accounts](../../automation/change-tracking/manage-inventory-vms.md) to identify devices through inventory collection connected to Azure monitor.
+You can use [Azure Automation accounts](../../automation/change-tracking/manage-inventory-vms.md) to identify devices through inventory collection connected to Azure Monitor.
-[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/machines-view-overview?view=o365-worldwide) (MDE) also features device inventory capabilities and discovery. This feature looks at devices that have MDE configured as well as network discovery of devices not configured with MDE. Device inventory may also come from on-premises systems such as [Configuration manager](/mem/configmgr/core/clients/manage/inventory/introduction-to-hardware-inventory) to do device inventory or other 3rd party systems used to manage devices and clients.
+[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/machines-view-overview) also features device inventory capabilities and discovery. This feature discovers which devices have Defender for Endpoint configured and which devices don't. Device inventory can also come from on-premises systems such as [System Center Configuration Manager](/mem/configmgr/core/clients/manage/inventory/introduction-to-hardware-inventory) or other systems that manage devices and clients.
-### Integrate devices to Azure AD
+### Integration of devices with Azure AD
-Devices integrated with Azure AD can either be [hybrid joined devices](../devices/concept-azure-ad-join-hybrid.md) or [Azure Active directory joined devices](../devices/concept-azure-ad-join-hybrid.md). Agencies should separate device onboarding by client and end user devices and physical and virtual machines that operate as infrastructure. For more information about choosing and implementing your end-user device deployment strategy, see [Plan your Azure AD device deployment](../devices/plan-device-deployment.md). For servers and infrastructure consider the following examples for connecting:
+Devices integrated with Azure AD can be either [hybrid joined devices](../devices/concept-azure-ad-join-hybrid.md) or [Azure AD joined devices](../devices/concept-azure-ad-join.md). Agencies should separate device onboarding by client and user devices, and by physical and virtual machines that operate as infrastructure. For more information about choosing and implementing your deployment strategy for user devices, see [Plan your Azure AD device deployment](../devices/plan-device-deployment.md). For servers and infrastructure, consider the following examples for connecting:
-* [Azure windows VM's](../devices/howto-vm-sign-in-azure-ad-windows.md)
+* [Azure Windows virtual machines](../devices/howto-vm-sign-in-azure-ad-windows.md)
-* [Azure Linux VM's](../devices/howto-vm-sign-in-azure-ad-linux.md)
+* [Azure Linux virtual machines](../devices/howto-vm-sign-in-azure-ad-linux.md)
-* [VDI infrastructure](../devices/howto-device-identity-virtual-desktop-infrastructure.md)
+* [Virtual desktop infrastructure](../devices/howto-device-identity-virtual-desktop-infrastructure.md)
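As a hedged illustration of the first two bullets, Azure AD sign-in for an existing VM can be enabled from the Azure CLI by installing the login extension documented in the linked how-to articles. The resource group and VM names below are placeholders:

```azurecli-interactive
# Enable Azure AD sign-in on an existing Linux VM by installing the
# AADSSHLoginForLinux extension. Names are placeholders.
az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory \
    --name AADSSHLoginForLinux \
    --resource-group myResourceGroup \
    --vm-name myLinuxVM
```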
## Next steps
-The following articles are a part of this documentation set:
-
-[Meet identity requirements of Memorandum 22-09](memo-22-09-meet-identity-requirements.md)
+The following articles are part of this documentation set:
-[Enterprise-wide identity management system](memo-22-09-enterprise-wide-identity-management-system.md)
+[Meet identity requirements of memorandum 22-09](memo-22-09-meet-identity-requirements.md)
-[Multi-factor authentication](memo-22-09-multi-factor-authentication.md)
+[Multifactor authentication](memo-22-09-multi-factor-authentication.md)
[Authorization](memo-22-09-authorization.md)
[Other areas of Zero Trust](memo-22-09-other-areas-zero-trust.md)
-Additional Zero Trust Documentation
+For more information about Zero Trust, see:
[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
active-directory Memo 22 09 Meet Identity Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-meet-identity-requirements.md
Title: Memo 22-09 identity requirements overview
-description: Guidance on meeting requirements outlined in US government OMB memorandum 22-09
+description: Get guidance on meeting requirements outlined in US government OMB memorandum 22-09.
-# Meeting identity requirements of Memorandum 22-09 with Azure Active Directory
+# Meet identity requirements of memorandum 22-09 with Azure Active Directory
-Executive order [14028, Improving the Nation's Cyber Security](https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity), directs federal agencies on advancing security measures that dramatically reduce the risk of successful cyber attacks against the federal government's digital infrastructure. On January 26, 2022, the [Office of Management and Budget (OMB)](https://www.whitehouse.gov/omb/) released the Federal Zero Trust Strategy [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf) in support of EO 14028.
+US executive order [14028, Improving the Nation's Cyber Security](https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity), directs federal agencies on advancing security measures that dramatically reduce the risk of successful cyberattacks against the federal government's digital infrastructure. On January 26, 2022, the [Office of Management and Budget (OMB)](https://www.whitehouse.gov/omb/) released the federal Zero Trust strategy in [memorandum 22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf), in support of EO 14028.
-This series of articles offer guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal Government's Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document we refer to it as "The memo."
+This series of articles offers guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles, as described in memorandum 22-09.
-The release of Memorandum 22-09 is designed to support Zero trust initiatives within federal agencies; it also provides regulatory guidance in supporting Federal Cybersecurity and Data Privacy Laws. The Memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://dodcio.defense.gov/Portals/0/Documents/Library/(U)ZT_RA_v1.1(U)_Mar21.pdf),
+The release of memorandum 22-09 is designed to support Zero Trust initiatives within federal agencies. It also provides regulatory guidance in supporting federal cybersecurity and data privacy laws. The memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://dodcio.defense.gov/Portals/0/Documents/Library/(U)ZT_RA_v1.1(U)_Mar21.pdf):
-"The foundational tenet of the Zero Trust Model is that no actor, system, network, or service operating outside or within the security perimeter is trusted. Instead, we must verify anything and everything attempting to establish access. It is a dramatic paradigm shift in philosophy of how we secure our infrastructure, networks, and data, from verify once at the perimeter to continual verification of each user, device, application, and transaction."
+>"The foundational tenet of the Zero Trust Model is that no actor, system, network, or service operating outside or within the security perimeter is trusted. Instead, we must verify anything and everything attempting to establish access. It is a dramatic paradigm shift in philosophy of how we secure our infrastructure, networks, and data, from verify once at the perimeter to continual verification of each user, device, application, and transaction."
-The Memo identifies five core goals that must be reached by federal agencies. These goals are organized using the Cybersecurity Information Systems Architecture (CISA) Maturity Model. CISA's zero trust model describes five complementary areas of effort - or pillars: Identity, Devices, Networks, Applications and Workloads, and Data; with three themes that cut across these areas (Visibility and Analytics, Automation and Orchestration, and Governance).
+The memo identifies five core goals that federal agencies must reach. These goals are organized through the Cybersecurity and Infrastructure Security Agency (CISA) Maturity Model. CISA's Zero Trust model describes five complementary areas of effort, or pillars: identity, devices, networks, applications and workloads, and data. Three themes cut across these areas: visibility and analytics, automation and orchestration, and governance.
## Scope of guidance
-This series of articles provides practical guidance for administrators and decision makers to adapt a plan to meet memo requirements. It assumes that you are using Microsoft 365 products, and therefore have an Azure Active Directory tenant available. If this is inaccurate, see [Access & create new tenant](../fundamentals/active-directory-access-create-new-tenant.md).
+This series of articles provides practical guidance for administrators and decision makers to adapt a plan to meet memo requirements. It assumes that you're using Microsoft 365 products and therefore have an Azure AD tenant available. If you don't have a tenant, see [Create a new tenant in Azure Active Directory](../fundamentals/active-directory-access-create-new-tenant.md).
-It features guidance encompassing existing agency investments in Microsoft technologies that align with the identity-related actions outlined in the memo:
+The article series features guidance that encompasses existing agency investments in Microsoft technologies that align with the identity-related actions outlined in the memo:
-* Agencies must employ centralized identity management systems for agency users that
-can be integrated into applications and common platforms.
+* Agencies must employ centralized identity management systems for agency users that can be integrated into applications and common platforms.
+* Agencies must use strong multifactor authentication (MFA) throughout their enterprise:
-* Agencies must use strong multi-factor authentication (MFA) throughout their enterprise.
+ * MFA must be enforced at the application layer instead of the network layer.
- * MFA must be enforced at the application layer, instead of the network layer.
-
- * For agency staff, contractors, and partners, phishing-resistant MFA is required.
-
- * For public users, phishing-resistant MFA must be an option.
+ * For agency staff, contractors, and partners, phishing-resistant MFA is required. For public users, phishing-resistant MFA must be an option.
* Password policies must not require the use of special characters or regular rotation.
-* When authorizing users to access resources, agencies must consider at least one device-level signal alongside identity information about the authenticated user.
+* When agencies are authorizing users to access resources, they must consider at least one device-level signal alongside identity information about the authenticated user.
## Next steps
-The following articles are a part of this documentation set:
-
-[Meet identity requirements of Memorandum 22-09](memo-22-09-meet-identity-requirements.md)
+The following articles are part of this documentation set:
[Enterprise-wide identity management system](memo-22-09-enterprise-wide-identity-management-system.md)
-[Multi-factor authentication](memo-22-09-multi-factor-authentication.md)
+[Multifactor authentication](memo-22-09-multi-factor-authentication.md)
[Authorization](memo-22-09-authorization.md)
[Other areas of Zero Trust](memo-22-09-other-areas-zero-trust.md)
-Additional Zero Trust Documentation
+For more information about Zero Trust, see:
[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
active-directory Memo 22 09 Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-multi-factor-authentication.md
Title: Memo 22-09 multi-factor authentication requirements overview
-description: Guidance on meeting multi-factor authentication requirements outlined in US government OMB memorandum 22-09
+ Title: Memo 22-09 multifactor authentication requirements overview
+description: Get guidance on meeting multifactor authentication requirements outlined in US government OMB memorandum 22-09.
-# Multi-factor authentication
+# Meet multifactor authentication requirements of memorandum 22-09
-This series of articles offer guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal Government's Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document we refer to it as "The Memo."
+This series of articles offers guidance for using Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles, as described in the US federal government's Office of Management and Budget (OMB) [memorandum 22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf).
-The Memo requires that all employees use enterprise-managed identities to access applications, and that phishing-resistant multi-factor authentication (MFA) protect those personnel from sophisticated online attacks. *Phishing* is the attempt to obtain and compromise credentials, for example through sending a spoofed email that leads to an inauthentic site.
+The memo requires that all employees use enterprise-managed identities to access applications, and that phishing-resistant multifactor authentication (MFA) protect those personnel from sophisticated online attacks. Phishing is the attempt to obtain and compromise credentials, such as by sending a spoofed email that leads to an inauthentic site.
-Adoption of MFA is critical to preventing unauthorized access to accounts and data. The Memo requires MFA usage with phishing resistant methods, defined as "authentication processes designed to detect and prevent disclosure of authentication secrets and outputs to a website or application masquerading as a legitimate system." The first step is to establish what MFA methods qualify as phishing resistant.
+Adoption of MFA is critical for preventing unauthorized access to accounts and data. The memo requires MFA usage with phishing-resistant methods, defined as "authentication processes designed to detect and prevent disclosure of authentication secrets and outputs to a website or application masquerading as a legitimate system." The first step is to establish what MFA methods qualify as phishing resistant.
-## Phishing resistant methods
+## Phishing-resistant methods
-* AD FS as a federated identity provider configured with Certificate Based Authentication
+* Active Directory Federation Services (AD FS) as a federated identity provider that's configured with certificate-based authentication.
-* Azure AD Certificate Based Authentication
+* Azure AD certificate-based authentication.
-* FIDO2 security keys
+* FIDO2 security keys.
-* Windows Hello for Business
+* Windows Hello for Business.
-* Microsoft Authenticator + Conditional access policies that enforce managed or compliant devices to access the application or service
+* Microsoft Authenticator and conditional access policies that require managed or compliant devices for access to the application or service. Microsoft Authenticator native phishing resistance is in development.
- * Microsoft Authenticator native phishing resistance is in development.
+Your current device capabilities, user personas, and other requirements might dictate specific multifactor methods. For example, if you're adopting FIDO2 security keys that have only USB-C support, they can be used only from devices with USB-C ports.
-### MFA requirements by method
+Consider the following approaches to evaluating phishing-resistant MFA methods:
-Your current device capabilities, user personas, and other requirements may dictate specific multi-factor methods. For example, if you are adopting FIDO2 security keys that have only USB-C support, they can only be leveraged from devices with USB-C ports.
+* Device types and capabilities that you want to support. Examples include kiosks, laptops, mobile phones, biometric readers, USB, Bluetooth, and near-field communication devices.
-Consider an approach to evaluating phishing resistant MFA methods that encompasses the following aspects:
+* User personas within your organization. Examples include front-line workers, remote workers with and without company-owned hardware, administrators with privileged access workstations, and business-to-business guest users.
-* Device types and capabilities you wish to support
+* Logistics of distributing, configuring, and registering MFA methods such as FIDO2 security keys, smart cards, government-furnished equipment, or Windows devices with TPM chips.
-* Examples: Kiosks, laptops, mobile phones, biometric readers, USB, Bluetooth, NFC, etc.
+* Need for FIPS 140 validation at a specific [authenticator assurance level](nist-about-authenticator-assurance-levels.md). For example, some FIDO security keys are FIPS 140 validated at levels required for [AAL3](nist-authenticator-assurance-level-3.md), as set by [NIST SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html).
-* The user personas within your organization
+## Implementation considerations for phishing-resistant MFA
- * Examples: Front line workers, remote workers with and without company owned hardware, Administrators with privileged access workstation, B2B guest users, etc.
+The following sections describe support for implementing phishing-resistant methods for both application and virtual device sign-in scenarios.
-* Logistics of distributing, configuring, and registering MFA methods such as FIDO 2.0 security keys, smart cards, government furnished equipment, or Windows devices with TPM chips.
+### Application sign-in scenarios from various clients
-* Need for FIPS 140 validation at a specific [authenticator assurance level](nist-about-authenticator-assurance-levels.md) (AAL).
+The following table details the availability of phishing-resistant MFA scenarios, based on the device type that's used to sign in to the applications:
- * For example, some FIDO security keys are FIPS 140-validated at levels required for [AAL3](nist-authenticator-assurance-level-3.md) as set by [NIST SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html).
-
-## Implementation considerations for phishing resistant MFA
-
-The following describes support for implementing phishing resistant methods mentioned previously for both application and virtual device sign-in scenarios.
-
-### Application sign-in scenarios from different clients
-
-The following table details the availability of phishing-resistant MFA scenarios based on the device type being used to sign-in to the applications.
--
-| Devices | AD FS as a federated IDP configured with certificate-based authentication| Azure AD certificate-based authentication| FIDO 2.0 security keys| Windows hello for Business| Microsoft authenticator + CA for managed devices |
+| Device | AD FS as a federated identity provider configured with certificate-based authentication| Azure AD certificate-based authentication| FIDO2 security keys| Windows Hello for Business| Microsoft Authenticator + conditional access for managed devices |
| - | - | - | - | - | - |
| Windows device| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg) |
-| iOS mobile device| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| N/A| N/A| ![Checkmark with solid fill](media/memo-22-09/check.jpg) |
-| Android mobile device| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| N/A| N/A| ![Checkmark with solid fill](media/memo-22-09/check.jpg) |
-| MacOS device| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| Edge/Chrome (Safari coming later)| N/A| ![Checkmark with solid fill](media/memo-22-09/check.jpg) |
+| iOS mobile device| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| Not applicable| Not applicable| ![Checkmark with solid fill](media/memo-22-09/check.jpg) |
+| Android mobile device| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| Not applicable| Not applicable| ![Checkmark with solid fill](media/memo-22-09/check.jpg) |
+| macOS device| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| Edge/Chrome | Not applicable| ![Checkmark with solid fill](media/memo-22-09/check.jpg) |
-To learn more, see [Browser support for Fido 2.0 passwordless authentication](../authentication/fido2-compatibility.md).
+To learn more, see [Browser support for FIDO2 passwordless authentication](../authentication/fido2-compatibility.md).
-### Virtual device sign-in scenarios requiring integration
+### Virtual device sign-in scenarios that require integration
-To enforce the use of phishing resistant MFA methods, integration may be necessary based on your requirements. MFA should be enforced both when users access applications and devices.
+To enforce the use of phishing-resistant MFA methods, integration might be necessary based on your requirements. MFA should be enforced when users access applications and devices.
For each of the five phishing-resistant MFA types previously mentioned, you use the same capabilities to access the following device types:
-| Target System| Integration Actions |
+| Target system| Integration actions |
| - | - |
-| Azure Linux VM| Enable the [Linux VM for Azure AD sign-in](../devices/howto-vm-sign-in-azure-ad-linux.md) |
-| Azure Windows VM| Enable the [Windows VM for Azure AD sign-in](../devices/howto-vm-sign-in-azure-ad-windows.md) |
-| Azure Virtual Desktop| Enable [Azure virtual desktop for Azure AD sign-in](/azure/architecture/example-scenario/wvd/azure-virtual-desktop-azure-active-directory-join) |
-| VMs hosted on-prem or in other clouds| Enable [Azure Arc](../../azure-arc/overview.md) on the VM then enable Azure AD sign-in. (Currently in private preview for Linux. Support for Windows VMs hosted in these environments is on our roadmap.) |
-| Non-Microsoft virtual desktop solutions| Integrate 3rd party virtual desktop solution as an app in Azure AD |
+| Azure Linux virtual machine (VM)| Enable the [Linux VM for Azure AD sign-in](../devices/howto-vm-sign-in-azure-ad-linux.md). |
+| Azure Windows VM| Enable the [Windows VM for Azure AD sign-in](../devices/howto-vm-sign-in-azure-ad-windows.md). |
+| Azure Virtual Desktop| Enable [Azure Virtual Desktop for Azure AD sign-in](/azure/architecture/example-scenario/wvd/azure-virtual-desktop-azure-active-directory-join). |
+| VMs hosted on-premises or in other clouds| Enable [Azure Arc](../../azure-arc/overview.md) on the VM and then enable Azure AD sign-in. (Currently in private preview for Linux. Support for Windows VMs hosted in these environments is on our roadmap.) |
+| Non-Microsoft virtual desktop solution| Integrate the virtual desktop solution as an app in Azure AD. |
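As a hedged sketch of the Azure Windows VM row above, the Azure AD login extension documented in the linked article can be installed from the CLI; resource names here are placeholders:

```azurecli-interactive
# Enable Azure AD sign-in on an existing Azure Windows VM by installing
# the AADLoginForWindows extension. Names are placeholders.
az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory \
    --name AADLoginForWindows \
    --resource-group myResourceGroup \
    --vm-name myWindowsVM
```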
### Enforcing phishing-resistant MFA
-Conditional Access enables you to enforce MFA for users in your tenant. With the addition of Cross Tenant Access Policies, you can enforce it on external users.
+Conditional access enables you to enforce MFA for users in your tenant. With the addition of [cross-tenant access policies](../external-identities/cross-tenant-access-overview.md), you can enforce it on external users.
#### Enforcement across agencies
-[Azure AD B2B collaboration](../external-identities/what-is-b2b.md) (B2B) helps you to meet the requirement to facilitate integration among agencies. It does this by both limiting what other Microsoft tenants your users can access, and by enabling you to allow access to users that you do not have to manage in your own tenant, but whom you can subject to your MFA and other access requirements.
+[Azure AD B2B collaboration](../external-identities/what-is-b2b.md) helps you meet the requirement to facilitate integration among agencies. It does this by:
+
+- Limiting what other Microsoft tenants your users can access.
+- Enabling you to allow access to users whom you don't have to manage in your own tenant, but whom you can subject to your MFA and other access requirements.
+
+You must enforce MFA for partners and external users who access your organization's resources. This is common in many inter-agency collaboration scenarios. Azure AD provides cross-tenant access policies to help you configure MFA for external users who access your applications and resources.
-You must enforce MFA for partners and external users who access your organization's resources. This is common in many inter-agency collaboration scenarios. Azure AD provides [Cross Tenant Access Policies (XTAP)](../external-identities/cross-tenant-access-overview.md) to help you configure MFA for external users accessing your applications and resources. XTAP uses trust settings that allow you to trust the MFA method used by the guest user's tenant instead of having them register an MFA method directly with your tenant. These policies can be configured on a per organization basis. This requires you to understand the available MFA methods in the user's home tenant and determine if they meet the requirement for phishing resistance.
+By using trust settings in cross-tenant access policies, you can trust the MFA method that the guest user's tenant is using instead of having them register an MFA method directly with your tenant. These policies can be configured on a per-organization basis. This ability requires you to understand the available MFA methods in the user's home tenant and determine if they meet the requirement for phishing resistance.
## Password policies
-The memo requires organizations to change password policies that have proven to be ineffective, such as complex passwords that are rotated often. This includes the removal of the requirement for special characters and numbers as well as time-based password rotation policies. Instead, consider doing the following:
+The memo requires organizations to change password policies that are proven ineffective, such as complex passwords that are rotated often. This includes the removal of the requirement for special characters and numbers, along with time-based password rotation policies. Instead, consider doing the following:
-* Use [password protection](..//authentication/concept-password-ban-bad.md) to enforce a common list Microsoft maintains of weak passwords. You can also add custom banned passwords.
+* Use [password protection](..//authentication/concept-password-ban-bad.md) to enforce a common list of weak passwords that Microsoft maintains. You can also add custom banned passwords.
-* Use [self-service password protection](..//authentication/tutorial-enable-sspr.md) (SSPR) to enable users to reset passwords as needed, for example after an account recovery.
+* Use [self-service password protection](..//authentication/tutorial-enable-sspr.md) to enable users to reset passwords as needed, such as after an account recovery.
* Use [Azure AD Identity Protection](..//identity-protection/concept-identity-protection-risks.md) to be alerted about compromised credentials so you can take immediate action.
-While the memo isn't specific on which policies to use with passwords consider the standard from [NIST 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html).
+Although the memo isn't specific on which policies to use with passwords, consider the standard from [NIST 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html).
## Next steps
-The following articles are a part of this documentation set:
-[Meet identity requirements of Memorandum 22-09](memo-22-09-meet-identity-requirements.md)
+The following articles are part of this documentation set:
-[Enterprise-wide identity management system](memo-22-09-enterprise-wide-identity-management-system.md)
+[Meet identity requirements of memorandum 22-09](memo-22-09-meet-identity-requirements.md)
-[Multi-factor authentication](memo-22-09-multi-factor-authentication.md)
+[Enterprise-wide identity management system](memo-22-09-enterprise-wide-identity-management-system.md)
[Authorization](memo-22-09-authorization.md)
[Other areas of Zero Trust](memo-22-09-other-areas-zero-trust.md)
-Additional Zero Trust Documentation
+For more information about Zero Trust, see:
[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
active-directory Memo 22 09 Other Areas Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-other-areas-zero-trust.md
Title: Memo 22-09 other areas of Zero Trust
-description: Guidance on understanding other Zero Trust requirements outlined in US government OMB memorandum 22-09
+description: Get guidance on understanding other Zero Trust requirements outlined in US government OMB memorandum 22-09.
-# Other areas of zero trust addressed in Memo 22-09
+# Other areas of Zero Trust addressed in memorandum 22-09
-This other articles in this guidance set address the identity pillar of Zero Trust principles as described by the US Federal Government's Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). There are areas of the zero trust maturity model that cover topics beyond the identity pillar.
+The other articles in this guidance set address the identity pillar of Zero Trust principles, as described in the US federal government's Office of Management and Budget (OMB) [memorandum 22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). This article covers areas of the Zero Trust maturity model that are beyond the identity pillar.
This article addresses the following cross-cutting themes:
* Governance

## Visibility
-It's important to monitor your Azure AD tenant. You must adopt an "assume breach" mindset and meet compliance standards set forth in [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf) and [Memorandum M-21-31](https://www.whitehouse.gov/wp-content/uploads/2021/08/M-21-31-Improving-the-Federal-Governments-Investigative-and-Remediation-Capabilities-Related-to-Cybersecurity-Incidents.pdf). There are three primary log types used for security analysis and ingestion:
-* [Azure Audit Log.](../reports-monitoring/concept-audit-logs.md) Used to monitor operational activities of the directory itself such as creating, deleting, updating objects like users or groups, as well as making changes to configurations of Azure AD like modifications to a conditional access policy.
+It's important to monitor your Azure Active Directory (Azure AD) tenant. You must adopt an "assume breach" mindset and meet compliance standards in memorandum 22-09 and [memorandum 21-31](https://www.whitehouse.gov/wp-content/uploads/2021/08/M-21-31-Improving-the-Federal-Governments-Investigative-and-Remediation-Capabilities-Related-to-Cybersecurity-Incidents.pdf). Three primary log types are used for security analysis and ingestion:
-* [Azure AD Sign-In Logs.](../reports-monitoring/concept-all-sign-ins.md) Used to monitor all sign-in activities associated with users, applications, and service principals. The sign-in logs contain specific categories of sign-ins for easy differentiation:
+* [Azure audit logs](../reports-monitoring/concept-audit-logs.md). Used for monitoring operational activities of the directory itself, such as creating, deleting, and updating objects like users or groups, and for tracking changes to Azure AD configurations, like modifications to a conditional access policy.
- * Interactive sign-ins: Shows user successful and failed sign-ins for failures, the policies that may have been applied, and other relevant metadata.
+* [Azure AD sign-in logs](../reports-monitoring/concept-all-sign-ins.md). Used for monitoring all sign-in activities associated with users, applications, and service principals. The sign-in logs contain specific categories of sign-ins for easy differentiation:
- * Non-interactive user sign-ins: Shows sign-ins where a user did not perform an interaction during sign-in. These sign-ins are typically clients signing in on behalf of the user, such as mobile applications or email clients.
+ * Interactive sign-ins: Shows successful and failed user sign-ins, the failure reasons, the policies that might have been applied, and other relevant metadata.
- * Service principal sign-ins: Shows sign-ins by service principals or applications.Ttypically these are headless, and done by services or applications accessing other services, applications, or Azure AD directory itself through REST API.
+ * Non-interactive user sign-ins: Shows sign-ins where a user did not perform an interaction during sign-in. These sign-ins are typically clients signing in on behalf of the user, such as mobile applications or email clients.
- * Managed identities for azure resource sign-ins: Shows sign-ins from resources with Azure Managed Identities. Typically these are Azure resources or applications accessing other Azure resources, such as a web application service authenticating to an Azure SQL backend.
+ * Service principal sign-ins: Shows sign-ins by service principals or applications. Typically, these are headless and done by services or applications that are accessing other services, applications, or the Azure AD directory itself through the REST API.
-* [Provisioning Logs.](../reports-monitoring/concept-provisioning-logs.md) Shows information about objects synchronized from Azure AD to applications like Service Now by using SCIM.
+ * Managed identities for Azure resource sign-ins: Shows sign-ins from resources with Azure managed identities. Typically, these are Azure resources or applications that are accessing other Azure resources, such as a web application service authenticating to an Azure SQL back end.
-Log entries are stored for 7 days in Azure AD free tenants. Tenants with an Azure AD premium license retain log entries for 30 days. It's important to ensure your logs are ingested by a SIEM tool. Using a SIEM allows sign-in and audit events to be correlated with application, infrastructure, data, device, and network logs for a holistic view of your systems. Microsoft recommends integrating your Azure AD logs with [Microsoft Sentinel](../../sentinel/overview.md) by configuring a connector to ingest your Azure AD tenant Logs.
-For more information, see [Connect Azure Active Directory to Sentinel](../../sentinel/connect-azure-active-directory.md).
-You can also configure the [diagnostic settings](../reports-monitoring/overview-monitoring.md) on your Azure AD tenant to send the data to either a Storage account, EventHub, or Log analytics workspace. These storage options allow you to integrate other SIEM tools to collect the data. For more information, see [Plan reports & monitoring deployment](../reports-monitoring/plan-monitoring-and-reporting.md).
+* [Provisioning logs](../reports-monitoring/concept-provisioning-logs.md). Shows information about objects synchronized from Azure AD to applications like ServiceNow by using SCIM.
+
+Log entries are stored for 7 days in Azure AD Free tenants. Tenants with an Azure AD Premium license retain log entries for 30 days.
+
+It's important to ensure that your logs are ingested by a security information and event management (SIEM) tool. Using a SIEM tool allows sign-in and audit events to be correlated with application, infrastructure, data, device, and network logs for a holistic view of your systems.
+
+We recommend that you integrate your Azure AD logs with [Microsoft Sentinel](../../sentinel/overview.md) by configuring a connector to ingest your Azure AD tenant logs. For more information, see [Connect Azure Active Directory to Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md).
+
+You can also configure the [diagnostic settings](../reports-monitoring/overview-monitoring.md) on your Azure AD tenant to send the data to an Azure Storage account, Azure Event Hubs, or a Log Analytics workspace. These storage options allow you to integrate other SIEM tools to collect the data. For more information, see [Plan an Azure Active Directory reporting and monitoring deployment](../reports-monitoring/plan-monitoring-and-reporting.md).
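As a hedged sketch of the tenant-level diagnostic settings described above, the `microsoft.aadiam` ARM endpoint can be called through `az rest`. The setting name, workspace ID, and API version below are assumptions based on the public ARM surface, not taken from this article:

```azurecli-interactive
# Route Azure AD audit and sign-in logs to a Log Analytics workspace.
# The workspace resource ID is a placeholder.
az rest --method put \
    --url "https://management.azure.com/providers/microsoft.aadiam/diagnosticSettings/send-to-workspace?api-version=2017-04-01-preview" \
    --body '{
        "properties": {
            "workspaceId": "<log-analytics-workspace-resource-id>",
            "logs": [
                { "category": "AuditLogs", "enabled": true },
                { "category": "SignInLogs", "enabled": true }
            ]
        }
    }'
```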
## Analytics
-Analytics can be used to aggregate information from Azure AD to show trends in your security posture in comparison to your baseline. Analytics can also be used as the way to assess and look for patterns or threats across Azure AD.
+You can use analytics in the following tools to aggregate information from Azure AD and show trends in your security posture in comparison to your baseline. You can also use analytics to assess and look for patterns or threats across Azure AD.
-* [Azure AD Identity Protection.](../identity-protection/overview-identity-protection.md) Identity protection actively analyses sign-ins and other telemetry sources for risky behavior. Identity protection assigns a risk score to a sign-in event. You can prevent sign-ins, or force a step-up authentication, to access a resource or application based on risk score.
+* [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) actively analyzes sign-ins and other telemetry sources for risky behavior. Identity Protection assigns a risk score to a sign-in event. You can prevent sign-ins, or force a step-up authentication, to access a resource or application based on risk score.
-* [Microsoft Sentinel.](../../sentinel/get-visibility.md) Sentinel has many ways in which information from Azure AD can be analyzed.
+* [Microsoft Sentinel](../../sentinel/get-visibility.md) offers the following ways to analyze information from Azure AD:
- * Microsoft Sentinel has [User and Entity Behavioral Analytics (UEBA)](../../sentinel/identify-threats-with-entity-behavior-analytics.md). UEBA delivers high-fidelity, actionable intelligence on potential threats involving user, hosts, IP addresses, and application entities. This enhances events across the enterprise to help detect anomalous behavior in users and systems.
+ * Microsoft Sentinel has [User and Entity Behavior Analytics (UEBA)](../../sentinel/identify-threats-with-entity-behavior-analytics.md). UEBA delivers high-fidelity, actionable intelligence on potential threats that involve user, host, IP address, and application entities. This intelligence enhances events across the enterprise to help detect anomalous behavior in users and systems.
- * Specific analytics rule templates that hunt for threats and alerts found in information in your Azure AD logs. Your security or operation analyst can then triage and remediate threats.
+ * You can use specific analytics rule templates that hunt for threats and alerts found in your Azure AD logs. Your security or operation analyst can then triage and remediate threats.
- * Microsoft Sentinel has [workbooks](../../sentinel/top-workbooks.md) available that help visualize multiple Azure AD data sources. These include workbooks that show aggregate sign-ins by country, or applications with the most sign-ins. You can also create or modify existing workbooks to view information or threats in a dashboard to gain insights.
+ * Microsoft Sentinel has [workbooks](../../sentinel/top-workbooks.md) that help you visualize multiple Azure AD data sources. These workbooks can show aggregate sign-ins by country, or applications that have the most sign-ins. You can also create or modify existing workbooks to view information or threats in a dashboard to gain insights.
-* [Azure AD usage and insights report.](../reports-monitoring/concept-usage-insights-report.md) These reports show information similar to sentinel workbooks, including which applications have the highest usage or logon trends over a given time period. These are useful for understanding aggregate trends in your enterprise which may indicate an attack or other events.
+* [Azure AD usage and insights reports](../reports-monitoring/concept-usage-insights-report.md) show information similar to Microsoft Sentinel workbooks, including which applications have the highest usage or sign-in trends over a time period. The reports are useful for understanding aggregate trends in your enterprise that might indicate an attack or other events.
## Automation and orchestration
-Automation is an important aspect of Zero Trust, particularly in remediation of alerts that may occur due to threats or security changes in your environment. In Azure AD, automation integrations are possible to help remediate alerts or perform actions that can improve your security posture. Automations are based on information received from monitoring and analytics.
-[Microsoft Graph API](../develop/microsoft-graph-intro.md) REST calls are the most common way to programmatically access Azure AD. This API-based access requires an Azure AD identity with the necessary authorizations and scope. With the Graph API, you can integrate Microsoft's and other tools. Microsoft recommends you set up an Azure function or Azure Logic App to use a [System Assigned Managed Identity](../managed-identities-azure-resources/overview.md). Your logic app or function contains the steps or code necessary to automate the desired actions. You assign permissions to the managed identity to grant the service principal the necessary directory permissions to perform the required actions. Grant managed identities only the minimum rights necessary. With the Graph API, you can integrate third party tools. Follow the principles outlined in this article when performing your integration.
-Another automation integration point is [Azure AD PowerShell](/powershell/azure/active-directory/overview?view=azureadps-2.0) modules. PowerShell is a useful automation tool for administrators and IT integrators performing common tasks or configurations in Azure AD. PowerShell can also be incorporated into Azure functions or Azure automation runbooks.
+Automation is an important aspect of Zero Trust, particularly in remediation of alerts that occur because of threats or security changes in your environment. In Azure AD, automation integrations are possible to help remediate alerts or perform actions that can improve your security posture. Automations are based on information received from monitoring and analytics.
+
+[Microsoft Graph API](../develop/microsoft-graph-intro.md) REST calls are the most common way to programmatically access Azure AD. This API-based access requires an Azure AD identity with the necessary authorizations and scope. With the Graph API, you can integrate Microsoft's and other tools. Follow the principles outlined in this article when you're performing the integration.
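For example, here is a minimal sketch of API-based access through the Azure CLI; the Graph endpoint shown is illustrative, and the caller needs the corresponding permission (here, `IdentityRiskyUser.Read.All`):

```azurecli-interactive
# Query users that Identity Protection has flagged as risky, via Microsoft Graph.
az rest --method get --url "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers"
```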
+
+We recommend that you set up an Azure function or an Azure logic app to use a [system-assigned managed identity](../managed-identities-azure-resources/overview.md). Your logic app or function contains the steps or code necessary to automate the desired actions. You assign permissions to the managed identity to grant the service principal the necessary directory permissions to perform the required actions. Grant managed identities only the minimum rights necessary.
+
+Another automation integration point is [Azure AD PowerShell](/powershell/azure/active-directory/overview) modules. PowerShell is a useful automation tool for administrators and IT integrators who are performing common tasks or configurations in Azure AD. PowerShell can also be incorporated into Azure functions or Azure Automation runbooks.
## Governance
-It is important that you understand and document clear processes for how you intend to operate your Azure AD environment. Azure AD has several features that allow for governance-like functionality to be applied to scopes within Azure AD. Consider the following guidance to help with governance with Azure AD.
+It's important that you understand and document clear processes for how you intend to operate your Azure AD environment. Azure AD has features that allow for governance-like functionality to be applied to scopes within Azure AD. Consider the following guidance to help with governance via Azure AD:
* [Azure Active Directory governance operations reference guide](../fundamentals/active-directory-ops-guide-govern.md).
-* [Azure Active Directory security operations guide](../fundamentals/security-operations-introduction.md) can help you secure your operations and understand how security and governance overlap.
-* Once you understand operational governance, you can use [governance features](../governance/identity-governance-overview.md) to implement portions of your governance controls. These include features mentioned in [Authorization for Memo 22-09](memo-22-09-authorization.md).
+* [Azure Active Directory security operations guide](../fundamentals/security-operations-introduction.md). It can help you secure your operations and understand how security and governance overlap.
+
+After you understand operational governance, you can use [governance features](../governance/identity-governance-overview.md) to implement portions of your governance controls. These include features mentioned in [Meet authorization requirements of memorandum 22-09](memo-22-09-authorization.md).
## Next steps
-The following articles are a part of this documentation set:
+The following articles are part of this documentation set:
-[Meet identity requirements of Memorandum 22-09](memo-22-09-meet-identity-requirements.md)
+[Meet identity requirements of memorandum 22-09](memo-22-09-meet-identity-requirements.md)
[Enterprise-wide identity management system](memo-22-09-enterprise-wide-identity-management-system.md)
-[Multi-factor authentication](memo-22-09-multi-factor-authentication.md)
+[Multifactor authentication](memo-22-09-multi-factor-authentication.md)
[Authorization](memo-22-09-authorization.md)
-[Other areas of Zero Trust](memo-22-09-other-areas-zero-trust.md)
-
-Additional Zero Trust Documentation
+For more information about Zero Trust, see:
[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
aks Out Of Tree https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/out-of-tree.md
Title: Enable Cloud Controller Manager (Preview)
+ Title: Enable Cloud Controller Manager
description: Learn how to enable the Out of Tree cloud provider Previously updated : 8/25/2021 Last updated : 04/08/2022
-# Enable Cloud Controller Manager (Preview)
+# Enable Cloud Controller Manager
As a Cloud Provider, Microsoft Azure works closely with the Kubernetes community to support our infrastructure on behalf of users.
-Currently, Cloud provider integration within Kubernetes is "in-tree", where any changes to Cloud specific features must follow the standard Kubernetes release cycle. When we find, fix issues, or need to roll out enhancements, we must do this within the Kubernetes community's release cycle.
+Previously, cloud provider integration with Kubernetes was "in-tree": any change to a cloud-specific feature had to follow the standard Kubernetes release cycle, so fixes and enhancements could ship only with a Kubernetes community release.
-The Kubernetes community is now adopting an "out-of-tree" model where the Cloud providers will control their releases independently of the core Kubernetes release schedule through the [cloud-provider-azure][cloud-provider-azure] component. We have already rolled out the Cloud Storage Interface (CSI) drivers to be the default in Kubernetes version 1.21 and above.
+The Kubernetes community is now adopting an "out-of-tree" model where the cloud providers control their releases independently of the core Kubernetes release schedule through the [cloud-provider-azure][cloud-provider-azure] component. As part of cloud-provider-azure, we're also introducing cloud-node-manager, a part of the Kubernetes node lifecycle controller. It's deployed by a DaemonSet in the *kube-system* namespace. To view it, run:
-> [!Note]
-> When enabling Cloud Controller Manager on your AKS cluster, this will also enable the out of tree CSI drivers.
-
-The Cloud Controller Manager will be the default controller from Kubernetes 1.22, supported by AKS.
+```azurecli-interactive
+kubectl get po -n kube-system | grep cloud-node-manager
+```
+We recently rolled out the Container Storage Interface (CSI) drivers to be the default in Kubernetes version 1.21 and above.
+> [!Note]
+> When enabling Cloud Controller Manager on your AKS cluster, this will also enable the out of tree CSI drivers.
-## Before you begin
+The Cloud Controller Manager is the default controller from Kubernetes 1.22, supported by AKS. If you're running a version earlier than 1.22, follow the instructions below.
+## Prerequisites
You must have the following resources installed:

* The Azure CLI
-* The `aks-preview` extension version 0.5.5 or later
* Kubernetes version 1.20.x or above
+* The `aks-preview` extension version 0.5.5 or later
### Register the `EnableCloudControllerManager` feature flag
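The registration commands are elided in this excerpt; the following is the standard Azure CLI feature-flag flow for the flag named in this heading, offered as a sketch rather than the article's verbatim steps:

```azurecli-interactive
# Register the feature flag, check registration state, then refresh the provider.
az feature register --namespace Microsoft.ContainerService --name EnableCloudControllerManager
az feature show --namespace Microsoft.ContainerService --name EnableCloudControllerManager
az provider register --namespace Microsoft.ContainerService
```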
az extension add --name aks-preview
az extension update --name aks-preview
```
-## Create an AKS cluster with Cloud Controller Manager
+## Create a new AKS cluster with Cloud Controller Manager (versions earlier than 1.22)
To create a cluster using the Cloud Controller Manager, pass `EnableCloudControllerManager=True` as a customer header to the Azure API using the Azure CLI.
az group create --name myResourceGroup --location eastus
az aks create -n aks -g myResourceGroup --aks-custom-headers EnableCloudControllerManager=True
```
-## Upgrade an AKS cluster to Cloud Controller Manager
+## Upgrade an existing AKS cluster to Cloud Controller Manager (versions earlier than 1.22)
To upgrade a cluster to use the Cloud Controller Manager, pass `EnableCloudControllerManager=True` as a customer header to the Azure API using the Azure CLI.
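The upgrade command itself is elided in this excerpt; here is a sketch mirroring the create example, with the cluster name, resource group, and target version as placeholders:

```azurecli-interactive
# Upgrade the cluster while passing the custom header.
az aks upgrade -n aks -g myResourceGroup \
    --kubernetes-version <target-version> \
    --aks-custom-headers EnableCloudControllerManager=True
```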
api-management Api Management Api Import Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-api-import-restrictions.md
For each operation found in the OpenAPI document, a new operation will be created:
* Length is limited to 300 characters.
* If `summary` isn't specified (not present, `null`, or empty), the display name value will be set to `operationId`.
+**Normalization rules for `operationId`**
+- Convert to lower case.
+- Replace each sequence of non-alphanumeric characters with a single dash.
+ - For example, `GET-/foo/{bar}?buzz={quix}` will be transformed into `get-foo-bar-buzz-quix-`.
+- Trim dashes on both sides.
+ - For example, `get-foo-bar-buzz-quix-` will become `get-foo-bar-buzz-quix`.
+- Truncate to fit 76 characters, four characters less than maximum limit for a resource name.
+- Use remaining four characters for a de-duplication suffix, if necessary, in the form of `-1, -2, ..., -999`.
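A hypothetical shell sketch of these rules follows (the de-duplication suffix step is omitted); the function name is invented for illustration:

```bash
normalize_operation_id() {
  # Lowercase, collapse non-alphanumeric runs to a dash, trim dashes, truncate to 76.
  printf '%s\n' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//' \
    | cut -c1-76
}

normalize_operation_id 'GET-/foo/{bar}?buzz={quix}'   # prints: get-foo-bar-buzz-quix
```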
+
### Update an existing API via OpenAPI import

During import, the existing API operation:
To make import more predictable, follow these guidelines:
- Refrain from changing `operationId` after initial import.
- Never change `operationId` and HTTP method or path template at the same time.
-### Export API as OpenAPI
-
-For each operation, its:
-* Azure resource name will be exported as an `operationId`.
-* Display name will be exported as a `summary`.
-**Normalization rules for `operationId`**
-- Convert to lower case.
-- Replace each sequence of non-alphanumeric characters with a single dash.
For each operation, its:
-- Truncate to fit 76 characters, four characters less than maximum limit for a resource name.
-- Use remaining four characters for a de-duplication suffix, if necessary, in the form of `-1, -2, ..., -999`.
+### Export API as OpenAPI
+
+For each operation, its:
+* Azure resource name will be exported as an `operationId`.
+* Display name will be exported as a `summary`.
+
+Note that normalization of the `operationId` is done on import, not on export.
+
## <a name="wsdl"> </a>WSDL

You can create [SOAP pass-through](import-soap-api.md) and [SOAP-to-REST](restify-soap-api.md) APIs with WSDL files.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
App Service can also host web apps natively on Linux for supported application stacks.
### Built-in languages and frameworks
-App Service on Linux supports a number of language specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (JRE 8 & JRE 11), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --linux`](/cli/azure/webapp#az-webapp-list-runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container.
+App Service on Linux supports a number of language specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (JRE 8 & JRE 11), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az-webapp-list-runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container.
Outdated runtimes are periodically removed from the Web Apps Create and Configuration blades in the Portal. These runtimes are hidden from the Portal when they are deprecated by the maintaining organization or found to have significant vulnerabilities. These options are hidden to guide customers to the latest runtimes where they will be the most successful.
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
For more information on custom containers, see [Run a custom container in Azure]
| `DOCKER_REGISTRY_SERVER_URL` | URL of the registry server, when running a custom container in App Service. For security, this variable is not passed on to the container. | `https://<server-name>.azurecr.io` |
| `DOCKER_REGISTRY_SERVER_USERNAME` | Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. ||
| `DOCKER_REGISTRY_SERVER_PASSWORD` | Password to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. ||
+| `DOCKER_ENABLE_CI` | Set to `true` to enable continuous deployment for custom containers. The default is `false` for custom containers. See the example after this table. ||
| `WEBSITE_PULL_IMAGE_OVER_VNET` | Connect and pull from a registry inside a Virtual Network or on-premises. Your app will need to be connected to a Virtual Network using VNet integration feature. This setting is also needed for Azure Container Registry with Private Endpoint. ||
| `WEBSITES_WEB_CONTAINER_NAME` | In a Docker Compose app, only one of the containers can be internet accessible. Set to the name of the container defined in the configuration file to override the default container selection. By default, the internet accessible container is the first container to define port 80 or 8080, or, when no such container is found, the first container defined in the configuration file. | |
| `WEBSITES_PORT` | For a custom container, the custom port number on the container for App Service to route requests to. By default, App Service attempts automatic port detection of ports 80 and 8080. This setting is *not* injected into the container as an environment variable. ||
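A hedged example of the `DOCKER_ENABLE_CI` setting above; the app and resource group names are placeholders:

```azurecli-interactive
# Turn on continuous deployment for a custom-container app.
az webapp config appsettings set \
    --name <app-name> \
    --resource-group <group-name> \
    --settings DOCKER_ENABLE_CI=true
```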
app-service Resources Kudu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/resources-kudu.md
It also provides other features, such as:
- Generates [custom deployment scripts](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
- Allows access with [REST API](https://github.com/projectkudu/kudu/wiki/REST-API).
+## RBAC permissions required to access Kudu
+To access Kudu in the browser with Azure Active Directory authentication, you need to be a member of a built-in or custom role.
+
+- If using a built-in role, you must be a member of Website Contributor, Contributor, or Owner.
+- If using a custom role, you need the resource provider operation: `Microsoft.Web/sites/publish/Action`.
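For example, the built-in role can be granted with the Azure CLI; the assignee and scope below are hypothetical placeholders.

```azurecli
# Hypothetical assignee and scope; grants browser access to Kudu for this app.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Website Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Web/sites/my-app"
```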
+ ## More Resources

Kudu is an [open source project](https://github.com/projectkudu/kudu), and has its documentation at [Kudu Wiki](https://github.com/projectkudu/kudu/wiki).
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
The streamed logs look like this:
::: zone pivot="container-linux"
-Azure App Service uses the Docker container technology to host both built-in images and custom images. To see a list of built-in images, run the Azure CLI command, ['az webapp list-runtimes--linux'](/cli/azure/webapp#az-webapp-list-runtimes). If those images don't satisfy your needs, you can build and deploy a custom image.
+Azure App Service uses the Docker container technology to host both built-in images and custom images. To see a list of built-in images, run the Azure CLI command [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az-webapp-list-runtimes). If those images don't satisfy your needs, you can build and deploy a custom image.
In this tutorial, you learn how to:
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
az appservice plan create \
Finally, create the App Service web app using the [az webapp create](/cli/azure/webapp#az-webapp-create) command.

* The App Service name is used as both the name of the resource in Azure and to form the fully qualified domain name for your app in the form of `https://<app service name>.azurewebsites.net`.
-* The runtime specifies what version of .NET your app is running. This example uses .NET 6.0 LTS. To list all available runtimes, use the command `az webapp list-runtimes --linux --output table` for Linux and `az webapp list-runtimes --output table` for Windows.
+* The runtime specifies what version of .NET your app is running. This example uses .NET 6.0 LTS. To list all available runtimes, use the command `az webapp list-runtimes --os linux --output table` for Linux and `az webapp list-runtimes --os windows --output table` for Windows.
```azurecli-interactive
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/multiple-site-overview.md
Similarly, you can host multiple subdomains of the same parent domain on the same application gateway deployment.
## Request Routing rules evaluation order
-While using multi-site listeners, to ensure that the client traffic is routed to the accurate backend, it is important to have the request routing rules be present in the correct order.
+While using multi-site listeners, to ensure that the client traffic is routed to the accurate backend, it's important to have the request routing rules be present in the correct order.
For example, if you have two listeners with associated host names `*.contoso.com` and `shop.contoso.com` respectively, the listener with the `shop.contoso.com` host name must be processed before the listener with `*.contoso.com`. If the listener with `*.contoso.com` is processed first, no client traffic would be received by the more specific `shop.contoso.com` listener. This ordering can be established by providing a 'Priority' field value to the request routing rules associated with the listeners. You can specify an integer value from 1 to 20000, with 1 being the highest priority and 20000 being the lowest. If the incoming client traffic matches multiple listeners, the request routing rule with the highest priority is used to serve the request. Each request routing rule needs to have a unique priority value.
-The priority field only impacts the order of evaluation of a request routing rule, this will not change the order of evaluation of path based rules within a `PathBasedRouting` request routing rule.
+The priority field only impacts the order of evaluation of a request routing rule; it won't change the order of evaluation of path-based rules within a `PathBasedRouting` request routing rule.
>[!NOTE]
>This feature is currently available only through [Azure PowerShell](tutorial-multiple-sites-powershell.md#add-priority-to-routing-rules) and [Azure CLI](tutorial-multiple-sites-cli.md#add-priority-to-routing-rules). Portal support is coming soon.
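As a rough CLI sketch (gateway, listener, and pool names are hypothetical), the priority is supplied when creating the request routing rule:

```azurecli
# Hypothetical names; 1 is the highest priority and 20000 the lowest.
az network application-gateway rule create \
    --resource-group MyResourceGroup \
    --gateway-name MyAppGateway \
    --name shopRule \
    --http-listener shopListener \
    --address-pool shopPool \
    --rule-type Basic \
    --priority 100
```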
In the Azure portal, under the multi-site listener, you must choose the **Multiple/Wildcard** host type to specify multiple host names.
### Considerations and limitations of using wildcard or multiple host names in a listener
-* [SSL termination and End-to-End SSL](ssl-overview.md) requires you to configure the protocol as HTTPS and upload a certificate to be used in the listener configuration. If it is a multi-site listener, you can input the host name as well, usually this is the CN of the SSL certificate. When you are specifying multiple host names in the listener or use wildcard characters, you must consider the following:
- * If it is a wildcard hostname like *.contoso.com, you must upload a wildcard certificate with CN like *.contoso.com
+* [SSL termination and End-to-End SSL](ssl-overview.md) requires you to configure the protocol as HTTPS and upload a certificate to be used in the listener configuration. If it's a multi-site listener, you can input the host name as well; usually this is the CN of the SSL certificate. When you're specifying multiple host names in the listener or using wildcard characters, you must consider the following:
+ * If it's a wildcard hostname like *.contoso.com, you must upload a wildcard certificate with CN like *.contoso.com
 * If multiple host names are mentioned in the same listener, you must upload a SAN certificate (Subject Alternative Names) with the CNs matching the host names mentioned.
 * You cannot use a regular expression to mention the host name. You can only use wildcard characters like asterisk (*) and question mark (?) to form the host name pattern.
 * For backend health check, you cannot associate multiple [custom probes](application-gateway-probe-overview.md) per HTTP settings. Instead, you can probe one of the websites at the backend or use "127.0.0.1" to probe the localhost of the backend server. However, when you're using wildcard or multiple host names in a listener, the requests for all the specified domain patterns will be routed to the backend pool depending on the rule type (basic or path-based).
 * The property "hostname" takes one string as input, where you can mention only one non-wildcard domain name, and "hostnames" takes an array of strings as input, where you can mention up to 5 wildcard domain names. But both properties cannot be used at once.
-* You cannot create a [redirection](redirect-overview.md) rule with a target listener, which uses wildcard or multiple host names.
See [create multi-site using Azure PowerShell](tutorial-multiple-sites-powershell.md) or [using Azure CLI](tutorial-multiple-sites-cli.md) for the step-by-step guide on how to configure wildcard host names in a multi-site listener.
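For a sense of the CLI shape, here's a hedged sketch (hypothetical names) of a multi-site listener combining a wildcard and a specific host name:

```azurecli
# Hypothetical names; --host-names accepts up to 5 patterns using * and ? wildcards.
az network application-gateway http-listener create \
    --resource-group MyResourceGroup \
    --gateway-name MyAppGateway \
    --name contosoListener \
    --frontend-port appGatewayFrontendPort \
    --host-names "*.contoso.com" "shop.contoso.com"
```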
application-gateway Redirect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-overview.md
Application Gateway redirection support offers the following capabilities:
- **Global redirection** Redirects from one listener to another listener on the gateway. This enables HTTP to HTTPS redirection on a site.
+ When configuring redirects with a multi-site target listener, all the host names (with or without wildcard characters) defined as part of the source listener must also be part of the destination listener. This ensures that no traffic is dropped due to missing host names on the destination listener when setting up HTTP to HTTPS redirection.
++ - **Path-based redirection** This type of redirection enables HTTP to HTTPS redirection only on a specific site area, for example a shopping cart area denoted by /cart/*.
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
# Form Recognizer read model
-The prebuilt-read model extracts printed and handwritten textual elements including lines, words, locations, and detected languages from documents (PDF and TIFF) and images (JPG, PNG, and BMP). The read model is the foundation for all Form Recognizer models. Layout, general document, custom, and prebuilt models use the prebuilt-read model as a basis for extracting text from documents.
+The Form Recognizer v3.0 preview includes the new Read API. Read extracts printed and handwritten text from documents. The read model can detect lines, words, locations, and languages, and is the core of all the other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the read model as a foundation for extracting text from documents.
## Development options
The following resources are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |
|-|-|-|
-|**Read model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-read**|
+|**Read model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul>|**prebuilt-read**|
## Data extraction
Form Recognizer preview version supports several languages for the read model. *
### Text lines and words
-Read API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted with information provided on lines, words, bounding boxes, confidence scores, and style (handwritten or other).
+Read API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted as lines and words, together with bounding boxes, confidence scores, and style information.
### Language detection (v3.0 preview)
For large multi-page documents, use the `pages` query parameter to indicate specific pages or page ranges for text extraction.
## Next steps
-* Complete a Form Recognizer quickstart:
+Complete a Form Recognizer quickstart:
- > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+> [!div class="checklist"]
+>
+> * [**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)
+> * [**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)
+> * [**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)
+> * [**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)
+> * [**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)
-* Explore our REST API:
+Explore our REST API:
- > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
+> [!div class="nextstepaction"]
+> [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
applied-ai-services Build Custom Model V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-custom-model-v3.md
Previously updated : 02/16/2022 Last updated : 04/13/2022
Follow these tips to further optimize your data set for training:
## Upload your training data
-When you've put together the set of forms or documents that you'll use for training, you'll need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, following the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+Once you've put together the set of forms or documents for training, you'll need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
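If you've already created the container, one way to upload the dataset is with the Azure CLI; the storage account and container names below are hypothetical placeholders.

```azurecli
# Hypothetical names; uploads every file in ./training-data to the container root.
az storage blob upload-batch \
    --account-name mystorageaccount \
    --destination training-data \
    --source ./training-data
```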
## Create a project in the Form Recognizer Studio
-The Form Recognizer Studio provides and orchestrates all the API calls required to create the files required to complete your dataset and train your model.
+The Form Recognizer Studio provides and orchestrates all the API calls required to complete your dataset and train your model.
-1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). If this is your first time using the Studio, you'll need to [initialize it for use](../quickstarts/try-v3-form-recognizer-studio.md). Follow the [additional prerequisite for custom projects](../quickstarts/try-v3-form-recognizer-studio.md#additional-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
+1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you'll need to [initialize your subscription, resource group, and resource](../quickstarts/try-v3-form-recognizer-studio.md). Follow the [additional prerequisite for custom projects](../quickstarts/try-v3-form-recognizer-studio.md#additional-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
-1. In the Studio select the **Custom models** tile, on the custom models page and select the **Create a project** button.
+1. In the Studio, select the **Custom models** tile on the custom models page, and then select the **Create a project** button.
:::image type="content" source="../media/how-to/studio-create-project.png" alt-text="Screenshot: Create a project in the Form Recognizer Studio.":::
The Form Recognizer Studio provides and orchestrates all the API calls required
:::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot: Select the Form Recognizer resource.":::
-1. Next select the storage account where you uploaded the dataset you wish to use to train your custom model. The **Folder path** should be empty if your training documents are in the root of the container. If your documents are in a sub-folder, enter the relative path from the container root in the **Folder path** field. Once your storage account is configured, select continue.
+1. Next, select the storage account where you uploaded your custom model training dataset. The **Folder path** should be empty if your training documents are in the root of the container. If your documents are in a subfolder, enter the relative path from the container root in the **Folder path** field. Once your storage account is configured, select **Continue**.
:::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot: Select the storage account.":::
You'll see the files you uploaded to storage on the left of your screen, with th
1. Enter a name for the field.
-1. To assign a value to the field, simply choose a word or words in the document and select the field in either the dropdown or the field list on the right navigation bar. You'll see the labeled value below the field name in the list of fields.
+1. To assign a value to the field, choose a word or words in the document and select the field in either the dropdown or the field list on the right navigation bar. You'll see the labeled value below the field name in the list of fields.
-1. Repeat this process for all the fields you wish to label for your dataset
+1. Repeat the process for all the fields you wish to label for your dataset.
-1. Label the remaining documents in your dataset by selecting each document in the document list and selecting the text to be labeled
+1. Label the remaining documents in your dataset by selecting each document and selecting the text to be labeled.
-You now have all the documents in your dataset labeled. If you look at the storage account, you'll find a *.labels.json* and *.ocr.json* files that correspond to each document in your training dataset and an additional fields.json file. This is the training dataset that will be submitted to train the model.
+You now have all the documents in your dataset labeled. If you look at the storage account, you'll find *.labels.json* and *.ocr.json* files that correspond to each document in your training dataset, and a new fields.json file. This training dataset will be submitted to train the model.
## Train your model
Congratulations, you've trained a custom model in the Form Recognizer Studio! You
> [Learn about custom model types](../concept-custom.md)

> [!div class="nextstepaction"]
-> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
+> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
applied-ai-services Use Prebuilt Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/use-prebuilt-read.md
+
+ Title: "Use SDKs and REST API for the Read model"
+
+description: Learn how to use the read model with SDKs and the REST API.
+++++
+zone_pivot_groups: programming-languages-set-formre
Last updated : 04/12/2022+
+recommendations: false
++
+# Use the Read Model
+
+ In this how-to guide, you'll learn to use Azure Form Recognizer's [read model](../concept-read.md) to extract printed and handwritten text from documents. The read model can detect lines, words, locations, and languages. You can use a programming language of your choice or the REST API. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+ The read model is the core of all the other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the read model as a foundation for extracting texts from documents.
+
+>[!NOTE]
+> Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
+> The current API version is ```2022-01-30-preview```.
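As a rough sketch of the underlying REST call (the endpoint, key, and document URL are placeholders; `az rest` is used here only as a convenient HTTP client):

```azurecli
# Starts an analysis with the prebuilt-read model; poll the URL returned in the
# Operation-Location response header to retrieve the results.
az rest --method post \
    --url "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-read:analyze?api-version=2022-01-30-preview" \
    --headers "Ocp-Apim-Subscription-Key=<your-key>" "Content-Type=application/json" \
    --body '{"urlSource": "https://example.com/sample.pdf"}' \
    --skip-authorization-header
```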
+++++++++++++++
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
The following features and development options are supported by the Form Recogn
| Feature | Description | Development options |
|-|--|-|
-|[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_AnalyzePrebuiltRead.md)</li><li>[**Python SDK**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-bet#general-document-model)</li><li>[**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/118feb81eb57dbf6b4f851ef2a387ed1b1a86bde/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/readDocument.js)</li></ul> |
+|[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
|[🆕 **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul> |
|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs, and named entities.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
|[**Layout model**](concept-layout.md) | Extract text, selection marks, and table structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
Title: Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Azure Automation (Preview)
-description: This article tells how to deploy an extension-based Windows or Linux Hybrid Runbook Worker that you can use to run runbooks on Windows-based machines in your local datacenter or cloud environment.
+description: This article provides information about deploying the extension-based User Hybrid Runbook Worker to run runbooks on Windows or Linux machines in your on-premises datacenter or other cloud environment.
Previously updated : 03/17/2021 Last updated : 04/13/2022 #Customer intent: As a developer, I want to learn about extension so that I can efficiently deploy Hybrid Runbook Workers.
-# Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Automation (Preview)
+# Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Azure Automation (Preview)
-The extension-based onboarding is only for **User** Hybrid Runbook Workers. For **System** Hybrid Runbook Worker onboarding, see [Deploy an agent-based Windows Hybrid Runbook Worker in Automation](./automation-windows-hrw-install.md) or [Deploy an agent-based Linux Hybrid Runbook Worker in Automation](./automation-linux-hrw-install.md).
+The extension-based onboarding is only for **User** Hybrid Runbook Workers. This article describes how to deploy a user Hybrid Runbook Worker on a Windows or Linux machine, remove the worker, and remove a Hybrid Runbook Worker group.
-You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources.
+For **System** Hybrid Runbook Worker onboarding, see [Deploy an agent-based Windows Hybrid Runbook Worker in Automation](./automation-windows-hrw-install.md) or [Deploy an agent-based Linux Hybrid Runbook Worker in Automation](./automation-linux-hrw-install.md).
-Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. This article describes how to: deploy a user Hybrid Runbook Worker on a Windows or Linux machine, remove the worker, and remove a Hybrid Runbook Worker group.
+You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources.
-After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md) to learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment.
+Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md) to learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment.
> [!NOTE]
azure-app-configuration Manage Feature Flags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/manage-feature-flags.md
description: In this tutorial, you learn how to manage feature flags separately from your application by using Azure App Configuration. documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: csharp Previously updated : 04/19/2019- Last updated : 04/05/2022+ #Customer intent: I want to control feature availability in my app by using App Configuration.
The Feature Manager in the Azure portal for App Configuration provides a UI for creating and managing the feature flags that you use in your applications.
To add a new feature flag:
-1. Select **Feature Manager** > **+Add** to add a feature flag.
+1. Open an Azure App Configuration store and from the **Operations** menu, select **Feature Manager** > **+Add**.
- ![Feature flag list](./media/azure-app-configuration-feature-flags.png)
+1. Check the box **Enable feature flag** to make the new feature flag active as soon as the flag has been created.
-1. Enter a unique key name for the feature flag. You need this name to reference the flag in your code.
+1. Enter a **Feature flag name**. The feature flag name is the unique ID of the flag, and the name that should be used when referencing the flag in code.
-1. If you want, give the feature flag a description.
+1. You can edit the key for your feature flag. The default value for this key is the name of your feature flag. You can change the key to add a prefix, which can be used to find specific feature flags when loading them in your application. For example, use the application's name as a prefix, such as **appname:featureflagname**.
-1. Set the initial state of the feature flag. This state is usually *Off* or *On*. The *On* state changes to *Conditional* if you add a filter to the feature flag.
+1. Optionally select an existing label or create a new one, and enter a description for the new feature flag.
- ![Feature flag creation](./media/azure-app-configuration-feature-flag-create.png)
+1. Leave the **Use feature filter** box unchecked and select **Apply** to create the feature flag. To learn more about feature filters, visit [Use feature filters to enable conditional feature flags](howto-feature-filters-aspnet-core.md) and [Enable staged rollout of features for targeted audiences](howto-targetingfilter-aspnet-core.md).
-1. When the state is *On*, select **+Add filter** to specify any additional conditions to qualify the state. Enter a built-in or custom filter key, and then select **+Add parameter** to associate one or more parameters with the filter. Built-in filters include:
+## Update feature flags
- | Key | JSON parameters |
- |||
- | Microsoft.Percentage | {"Value": 0-100 percent} |
- | Microsoft.TimeWindow | {"Start": UTC time, "End": UTC time} |
- | Microsoft.Targeting | { "Audience": JSON blob defining users, groups, and rollout percentages. See an example under the `EnabledFor` element of [this settings file](https://github.com/microsoft/FeatureManagement-Dotnet/blob/master/examples/FeatureFlagDemo/appsettings.json) }
+To update a feature flag:
- ![Feature flag filter](./media/azure-app-configuration-feature-flag-filter.png)
+1. From the **Operations** menu, select **Feature Manager**.
-## Update feature flag states
+1. Move to the right end of the feature flag you want to modify and select the **More actions** ellipsis (**...**). From this menu, you can edit the flag, create a label, and lock or delete the feature flag.
-To change a feature flag's state value:
+1. Select **Edit** and update the feature flag.
-1. Select **Feature Manager**.
+In the **Feature manager**, you can also change the state of a feature flag by checking or unchecking the **Enable feature flag** checkbox.
-1. To the right of a feature flag you want to modify, select the ellipsis (**...**), and then select **Edit**.
+## Access feature flags
-1. Set a new state for the feature flag.
+In the **Operations** menu, select **Feature manager**. You can select **Edit Columns** to add or remove columns, and change the column order.
-## Access feature flags
+Feature flags created with the Feature Manager are stored and retrieved as regular key-values. They're kept under a special namespace prefix `.appconfig.featureflag`.
-Feature flags created by the Feature Manager are stored and retrieved as regular key values. They're kept under a special namespace prefix `.appconfig.featureflag`. To view the underlying key values, use the Configuration Explorer. Your application can retrieve these values by using the App Configuration configuration providers, SDKs, command-line extensions, and REST APIs.
+To view the underlying key-values:
-## Next steps
+1. In the **Operations** menu, open the **Configuration explorer**.
+
+1. Select **Manage view** > **Settings**.
-In this tutorial, you learned how to manage feature flags and their states by using App Configuration. For more information about feature-management support in App Configuration and ASP.NET Core, see the following article:
+1. Select **Include feature flags in the configuration explorer** and **Apply**.
+
+Your application can retrieve these values by using the App Configuration configuration providers, SDKs, command-line extensions, and REST APIs.
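For example, with the Azure CLI (the store name is a hypothetical placeholder):

```azurecli
# Lists feature flags, then shows the same data as raw key-values under the special prefix.
az appconfig feature list --name my-appconfig-store
az appconfig kv list --name my-appconfig-store --key ".appconfig.featureflag/*"
```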
+
+## Next steps
-* [Use feature flags in an ASP.NET Core app](./use-feature-flags-dotnet-core.md)
+> [!div class="nextstepaction"]
+> [Enable staged rollout of features for targeted audiences](./howto-targetingfilter-aspnet-core.md)
azure-arc Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/maintenance-window.md
Example:
az arcdata dc update --maintenance-enabled true --k8s-namespace arc --use-k8s
```
-## Change maintenance window start time
+## Change maintenance window options
-The update command can be used to change the maintenance start time.
+The update command can be used to change any of the options. This example updates the start time.
```cli
az arcdata dc update --maintenance-start <date and time> --k8s-namespace arc --use-k8s
az arcdata dc update --maintenance-start "2022-04-15T23:00" --k8s-namespace arc
## Next steps
-[Enable automatic upgrades of a SQL Managed Instance](upgrade-sql-managed-instance-auto.md)
+[Enable automatic upgrades of a SQL Managed Instance](upgrade-sql-managed-instance-auto.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
Previously updated : 11/23/2021 Last updated : 04/13/2022 description: "This article provides an overview of Azure Arc-enabled Kubernetes." keywords: "Kubernetes, Arc, Azure, containers"
# What is Azure Arc-enabled Kubernetes?
-With Azure Arc-enabled Kubernetes, you can attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers (GCP, AWS) or clusters running on your on-premise data center (on VMware vSphere, Azure Stack HCI) to Azure Arc. When you connect a Kubernetes cluster to Azure Arc, it will:
-* Get an Azure Resource Manager representation with a unique ID.
-* Be placed in an Azure subscription and resource group.
-* Receive tags just like any other Azure resource.
+Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers (such as GCP or AWS) or clusters running in your on-premises data center (such as VMware vSphere or Azure Stack HCI) to Azure Arc.
-Azure Arc-enabled Kubernetes supports industry-standard SSL to secure data in transit. For the connected clusters, data at rest is stored encrypted in an Azure Cosmos DB database to ensure data confidentiality.
+When you connect a Kubernetes cluster to Azure Arc, it will:
+* Be represented in Azure Resource Manager by a unique ID
+* Be placed in an Azure subscription and resource group
+* Receive tags just like any other Azure resource
-Azure Arc-enabled Kubernetes supports the following scenarios for the connected clusters:
+Azure Arc-enabled Kubernetes supports industry-standard SSL to secure data in transit. For the connected clusters, data at rest is stored encrypted in an Azure Cosmos DB database to ensure confidentiality.
+
+Azure Arc-enabled Kubernetes supports the following scenarios for connected clusters:
* [Connect Kubernetes](quickstart-connect-cluster.md) running outside of Azure for inventory, grouping, and tagging.
Azure Arc-enabled Kubernetes supports the following scenarios for the connected
* Deploy machine learning workloads using [Azure Machine Learning for Kubernetes clusters](../../machine-learning/how-to-attach-arc-kubernetes.md?toc=/azure/azure-arc/kubernetes/toc.json).
-* Create [custom locations](./custom-locations.md) as target locations for deploying Azure Arc-enabled Data Services (SQL Managed Instances, PostgreSQL Hyperscale.), [App Services on Azure Arc](../../app-service/overview-arc-integration.md) (including web, function, and logic apps) and [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md).
+* Create [custom locations](./custom-locations.md) as target locations for deploying Azure Arc-enabled Data Services (SQL Managed Instances, PostgreSQL Hyperscale), [App Services on Azure Arc](../../app-service/overview-arc-integration.md) (including web, function, and logic apps), and [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md).
[!INCLUDE [azure-lighthouse-supported-service](../../../includes/azure-lighthouse-supported-service.md)]
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis from Redis Labs.
- If you use a private Marketplace, it must contain the Redis Labs Enterprise offer.

> [!IMPORTANT]
-> Azure Cache for Redis Enterprise requires standard network Load Balancers that are charged
-> separately from cache instances themselves. For more information, see [Load Balancer pricing](https://azure.microsoft.com/pricing/details/load-balancer/).
-> If an Enterprise cache is configured for multiple Availability Zones, data
-> transfer will be billed at the [standard network bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/)
-> starting from July 1, 2022.
->
-> In addition, data persistence adds Managed Disks. The use of these resources will be free during
-> the public preview of Enterprise data persistence. This may change when the feature becomes
-> generally available.
+> Azure Cache for Redis Enterprise requires standard network Load Balancers that are charged separately from cache instances themselves. For more information, see [Load Balancer pricing](https://azure.microsoft.com/pricing/details/load-balancer/).
>
+> If an Enterprise cache is configured for multiple Availability Zones, data transfer is billed at the [standard network bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/)
+> starting from July 1, 2022.
>
+> In addition, data persistence adds Managed Disks. The use of these resources is free during the public preview of Enterprise data persistence. This might change when the feature becomes generally available.
+
+### Availability by region
+
+Azure Cache for Redis is continually expanding into new regions. To check the availability by region, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=redis-cache&regions=all).
## Next steps
azure-cache-for-redis Quickstart Create Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
Previously updated : 02/08/2021
-#Customer intent: As a developer new to Azure Cache for Redis, I want to create an instance of Azure Cache for Redis Enterprise tier.
Last updated : 04/12/2022+

# Quickstart: Create a Redis Enterprise cache
Both Enterprise and Enterprise Flash support open-source Redis 6 and some new features not yet available in other tiers.
You'll need an Azure subscription before you begin. If you don't have one, create an [account](https://azure.microsoft.com/). For more information, see [special considerations for Enterprise tiers](cache-overview.md#special-considerations-for-enterprise-tiers).
+### Availability by region
+
+Azure Cache for Redis is continually expanding into new regions. To check the availability by region for all tiers, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=redis-cache&regions=all).
+ ## Create a cache

1. To create a cache, sign in to the Azure portal and select **Create a resource**.
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
The following are not supported:
- CIS
- SELinux (custom hardening like MLS)
-CIS and SELinux hardening support is planned for [Azure Monitoring Agent](./azure-monitor-agent-overview.md). Further hardening and customization methods are not supported nor planned for OMS Agent. For instance, OS images like Github Enterprise Server which include customizations such as limitations to user account privileges are not supported.
+CIS and SELinux hardening support is planned for [Azure Monitoring Agent](./azure-monitor-agent-overview.md). Further hardening and customization methods are not supported nor planned for OMS Agent. For instance, OS images like GitHub Enterprise Server which include customizations such as limitations to user account privileges are not supported.
## Agent prerequisites
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
The following tables show gap analyses for the **log types** that are currently
| **Performance counters** | Yes | Yes |
| **Windows Event Logs** | Yes | Yes |
| **Filtering by event ID** | Yes | No |
-| **Custom logs** | No | Yes |
-| **IIS logs** | No | Yes |
+| **Text logs** | Yes | Yes |
+| **IIS logs** | Yes | Yes |
| **Application and service logs** | Yes | Yes |
| **Multi-homing** | Yes | Yes |
The following tables show gap analyses for the **log types** that are currently
||||
| **Syslog** | Yes | Yes |
| **Performance counters** | Yes | Yes |
-| **Custom logs** | No | Yes |
+| **Text logs** | Yes | Yes |
| **Multi-homing** | Yes | No |
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
# Azure Monitor agent overview

The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
-Here's an **introductory video** explaining all about this new agent, including a quick demo of how to set things up using the Azure Portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
+Here's an **introductory video** explaining all about this new agent, including a quick demo of how to set things up using the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
## Relationship to other agents

Eventually, the Azure Monitor agent will replace the following legacy monitoring agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](../faq.yml)):
In addition to consolidating this functionality into a single agent, the Azure M
- **Improved extension management:** The Azure Monitor agent uses a new method of handling extensibility that's more transparent and controllable than management packs and Linux plug-ins in the current Log Analytics agents.

### Current limitations
-When compared with the legacy agents, this new agent doesn't yet have full parity.
-- **Comparison with Log Analytics agents (MMA/OMS):**
- - Not all Log Analytics solutions are supported yet. [View supported features and services](#supported-services-and-features).
- - The support for collecting file based logs or IIS logs is in [private preview](https://aka.ms/amadcr-privatepreviews).
+ Not all Log Analytics solutions are supported yet. [View supported features and services](#supported-services-and-features).
### Changes in data collection

The methods for defining data collection for the existing agents are distinctly different from each other. Each method has challenges that are addressed with the Azure Monitor agent.
The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log A
| Performance | Azure Monitor Metrics (preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
| Windows event logs | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system |
| Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
+| Text logs | Log Analytics workspace - custom table | Events sent to a log file on the agent machine. |
<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.

<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including **Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee and CEF (Common Event Format)**.
The following table shows the current support for the Azure Monitor agent with A
| Azure Monitor feature | Current support | More information |
|:|:|:|
-| File based logs and Windows IIS logs | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
+| Text logs and Windows IIS logs | Public preview | [Collect text logs with Azure Monitor agent (preview)](data-collection-text-log.md) |
| Windows Client OS installer | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
| [VM insights](../vm/vminsights-overview.md) | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
The following table shows the current support for the Azure Monitor agent with A
| Solution | Current support | More information |
|:|:|:|
-| [Change Tracking](../../automation/change-tracking/overview.md) | Supported as File Integrity Monitoring in the Microsoft Defender for Cloud private preview. | [Sign-up link](https://aka.ms/AMAgent) |
-| [Update Management](../../automation/update-management/overview.md) | Use Update Management v2 (private preview) that doesn't require an agent. | [Sign-up link](https://www.yammer.com/azureadvisors/threads/1064001355087872) |
+| [Change Tracking](../../automation/change-tracking/overview.md) | Supported as File Integrity Monitoring in the Microsoft Defender for Cloud Private Preview. | [Sign-up link](https://aka.ms/AMAgent) |
+| [Update Management](../../automation/update-management/overview.md) | Use Update Management v2 (Private Preview) that doesn't require an agent. | [Sign-up link](https://www.yammer.com/azureadvisors/threads/1064001355087872) |
## Costs

There's no cost for the Azure Monitor agent, but you might incur charges for the data ingested. For details on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
There's no cost for the Azure Monitor agent, but you might incur charges for the
The Azure Monitor agent doesn't require any keys but instead requires a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). You must have a system-assigned managed identity enabled on each virtual machine before you deploy the agent.

## Networking
-The Azure Monitor agent supports Azure service tags (both AzureMonitor and AzureResourceManager tags are required). It supports connecting via **direct proxies, Log Analytics gateway, and private links** as described below.
+The Azure Monitor agent supports Azure service tags (both *AzureMonitor* and *AzureResourceManager* tags are required). It supports connecting via **direct proxies, Log Analytics gateway, and private links** as described below.
### Firewall requirements | Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection|
The Azure Monitor agent supports Azure service tags (both AzureMonitor and Azure
| Azure China |`<log-analytics-workspace-id>`.ods.opinsights.azure.cn |Ingest logs data |Port 443 |Outbound|Yes |
-If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)
+If using private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
### Proxy configuration

If the machine connects through a proxy server to communicate over the internet, review requirements below to understand the network configuration required.
The Azure Monitor agent extensions for Windows and Linux can communicate either
![Flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
-2. After the values for the *settings* and *protectedSettings* parameters are determined, **provide these additional parameters** when you deploy the Azure Monitor agent by using PowerShell commands. Refer the following examples.
+2. After the values for the *settings* and *protectedSettings* parameters are determined, **provide these additional parameters** when you deploy the Azure Monitor agent by using PowerShell commands. Refer to the following examples.
# [Windows VM](#tab/PowerShellWindows)
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType Azur
### Log Analytics gateway configuration

1. Follow the instructions above to configure proxy settings on the agent and provide the IP address and port number corresponding to the gateway server. If you have deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
-2. Add the **configuration endpoint URL** to fetch data collection rules to the allow list for the gateway
+2. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
   `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
   `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`
   (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
-3. Add the **data ingestion endpoint URL** to the allow list for the gateway
+3. Add the **data ingestion endpoint URL** to the allowlist for the gateway
   `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`
3. Restart the **OMS Gateway** service to apply the changes
   `Stop-Service -Name <gateway-name>`
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
+
+ Title: Set up the Azure Monitor agent on Windows client devices (Preview)
+description: This article describes the instructions to install the agent on Windows 10, 11 client OS devices, configure data collection, manage and troubleshoot the agent.
+++ Last updated : 4/13/2022++++
+# Azure Monitor agent on Windows client devices (Preview)
+This article provides instructions and guidance for using the client installer for Azure Monitor Agent. It also explains how to use Data Collection Rules on Windows client devices.
+
+With the new client installer available in this preview, you can now collect telemetry data from your Windows client devices in addition to servers and virtual machines.
+Both the [generally available extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) and this installer use Data Collection rules to configure the **same underlying agent**.
+
+## Supported device types
+
+| Device type | Supported? | Installation method | Additional information |
+|:|:|:|:|
+| Windows 10, 11 desktops, workstations | Yes | Client installer (preview) | Installs the agent using a Windows MSI installer |
+| Windows 10, 11 laptops | Yes | Client installer (preview) | Installs the agent using a Windows MSI installer. The installer works on laptops, but the agent is **not optimized yet** for battery or network consumption |
+| Virtual machines, scale sets | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using Azure extension framework |
+| On-premises servers | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (with Azure Arc agent) | Installs the agent using the Azure extension framework, provided for on-premises machines by installing the Arc agent |
++
+## Prerequisites
+1. The machine must be running Windows client OS version 10 RS4 or higher.
+2. To download the installer, the machine should have [C++ Redistributable version 2015](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) or higher.
+3. The machine must be domain joined to an Azure AD tenant (AADj or Hybrid AADj machines), which enables the agent to fetch Azure AD device tokens used to authenticate and fetch data collection rules from Azure.
+4. You may need tenant admin permissions on the Azure AD tenant.
+5. The device must have access to the following HTTPS endpoints:
+ - global.handler.control.monitor.azure.com
+ - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)
+ - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
+ (If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
+6. Existing data collection rule(s) you wish to associate with the devices. If they don't exist already, [follow the guidance here to create data collection rule(s)](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi); a CLI sketch also follows this list. **Do not associate the rules with any resources yet**.
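As a hedged sketch, a data collection rule can also be created with the Azure CLI's `monitor-control-service` extension; the rule file and names below are hypothetical, and the linked guidance remains the authoritative path.

```azurecli
# Requires the monitor-control-service extension; rule.json is a hypothetical DCR definition file.
az extension add --name monitor-control-service
az monitor data-collection rule create \
    --resource-group MyResourceGroup \
    --location eastus \
    --name myClientDCR \
    --rule-file rule.json
```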
+
+## Install the agent
+1. Download the Windows MSI installer for the agent using [this link](https://go.microsoft.com/fwlink/?linkid=2192409). You can also download it from **Monitor** > **Data Collection Rules** > **Create** experience on Azure portal (shown below):
+ [![Diagram shows download agent link on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal-focus.png#lightbox)
+2. Open an elevated admin command prompt window and change to the directory where you downloaded the installer.
+3. To install with **default settings**, run the following command:
+ ```cli
+ msiexec /i AzureMonitorAgentClientSetup.msi /qn
+ ```
+4. To install with custom file paths or [network proxy settings](./azure-monitor-agent-overview.md#proxy-configuration), use the command below with the values from the following table:
+ ```cli
+ msiexec /i AzureMonitorAgentClientSetup.msi /qn DATASTOREDIR="C:\example\folder"
+ ```
+
+ | Parameter | Description |
+ |:|:|
+ | INSTALLDIR | Directory path where the agent binaries are installed |
+ | DATASTOREDIR | Directory path where the agent stores its operational logs and data |
+ | PROXYUSE | Must be set to "true" to use proxy |
+ | PROXYADDRESS | Set to Proxy Address. PROXYUSE must be set to "true" to be correctly applied |
+ | PROXYUSEAUTH | Set to "true" if proxy requires authentication |
+ | PROXYUSERNAME | Set to Proxy username. PROXYUSE and PROXYUSEAUTH must be set to "true" |
+ | PROXYPASSWORD | Set to Proxy password. PROXYUSE and PROXYUSEAUTH must be set to "true" |
+
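For example, a proxy-enabled install might look like the following; the proxy address and credentials are hypothetical placeholders built from the parameters in the table above:

```cli
msiexec /i AzureMonitorAgentClientSetup.msi /qn PROXYUSE="true" PROXYADDRESS="http://proxy.contoso.com:8080" PROXYUSEAUTH="true" PROXYUSERNAME="<username>" PROXYPASSWORD="<password>"
```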
+5. Verify successful installation:
+ - Open **Control Panel** -> **Programs and Features** OR **Settings** -> **Apps** -> **Apps & Features** and ensure you see 'Azure Monitor Agent' listed
+ - Open **Services** and confirm 'Azure Monitor Agent' is listed and shows as **Running**.
+6. Proceed to create the monitored object that you'll associate data collection rules with, so that the agent can start operating.
+
+> [!NOTE]
+> The agent installed with the client installer currently doesn't support updating configuration once it is installed. Uninstall and reinstall AMA to update its configuration.
++
+## Create and associate a 'Monitored Object'
+You need to create a 'Monitored Object' (MO) that represents the Azure AD tenant within Azure Resource Manager (ARM). This ARM entity is what Data Collection Rules are then associated with.
+Currently, this association is **limited** to the Azure AD tenant scope, which means configuration applied to the tenant will be applied to all devices that are part of the tenant and running the agent.
+The image below demonstrates how this works:
+
+![Diagram shows monitored object purpose and association.](media/azure-monitor-agent-windows-client/azure-monitor-agent-monitored-object.png)
+
+Then, proceed with the instructions below to create the Monitored Object and associate your data collection rules with it, using REST APIs or PowerShell commands.
+
+### Using REST APIs
+
+#### 1. Assign 'Monitored Object Contributor' role to the operator
+
+This step grants a user the ability to create and link a monitored object.
+**Permissions required:** Since the MO is a tenant-level resource, the scope of the permission is higher than a subscription scope, so an Azure tenant admin may be needed to perform this step. [Follow these steps to elevate an Azure AD tenant admin to Azure tenant admin](/azure/role-based-access-control/elevate-access-global-admin); this gives the Azure AD admin 'Owner' permissions at the root scope.
+
+**Request URI**
+```HTTP
+PUT https://management.azure.com/providers/microsoft.insights/providers/microsoft.authorization/roleassignments/{roleAssignmentGUID}?api-version=2021-04-01-preview
+```
+**URI Parameters**
+
+| Name | In | Type | Description |
+|:|:|:|:|
+| `roleAssignmentGUID` | path | string | Provide any valid guid (you can generate one using https://guidgenerator.com/) |
+
+**Headers**
+- Authorization: ARM Bearer Token (using 'Get-AzAccessToken' or another method)
+- Content-Type: Application/json
+
+**Request Body**
+```JSON
+{
+ "properties":
+ {
+ "roleDefinitionId":"/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b",
+ "principalId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
+ }
+}
+```
+
+**Body parameters**
+
+| Name | Description |
+|:|:|
+| roleDefinitionId | Fixed value: Role definition ID of the 'Monitored Objects Contributor' role: `/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b` |
+| principalId | Provide the `Object Id` of the identity of the user to which the role needs to be assigned. It may be the user who elevated their access at the beginning of step 1, or another user who will perform later steps. |
+
+After this step is complete, **reauthenticate** your session and **reacquire** your ARM bearer token.
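+
+For example, with Azure PowerShell (a minimal sketch; any method that yields a fresh ARM bearer token works):
+```PowerShell
+# Sign in again so the new role assignment is reflected in the token
+Connect-AzAccount
+# Reacquire an ARM bearer token for the Authorization header
+$token = (Get-AzAccessToken).Token
+```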
+
+#### 2. Create Monitored Object
+This step creates the Monitored Object for the Azure AD tenant scope. It will be used to represent client devices that are signed in with that Azure AD tenant identity.
+
+**Permissions required**: Anyone who has 'Monitored Object Contributor' at an appropriate scope can perform this operation, as assigned in step 1.
+
+**Request URI**
+```HTTP
+PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{AADTenantId}?api-version=2021-09-01-preview
+```
+**URI Parameters**
+
+| Name | In | Type | Description |
+|:|:|:|:|
+| `AADTenantId` | path | string | ID of the Azure AD tenant that the device(s) belong to. The MO will be created with the same ID |
+
+**Headers**
+- Authorization: ARM Bearer Token
+- Content-Type: Application/json
+
+**Request Body**
+```JSON
+{
+ "properties":
+ {
+ "location":"eastus"
+ }
+}
+```
+**Body parameters**
+
+| Name | Description |
+|:|:|
+| `location` | The Azure region where the MO object is stored. It should be the **same region** where you created the Data Collection Rule, since this is the region from which agent communications happen. |
++
+#### 3. Associate DCR to Monitored Object
+Now we associate the Data Collection Rules (DCR) to the Monitored Object by creating a Data Collection Rule Association. If you haven't already, [follow instructions here](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi) to create data collection rule(s) first.
+**Permissions required**: Anyone who has 'Monitored Object Contributor' at an appropriate scope can perform this operation, as assigned in step 1.
+
+**Request URI**
+```HTTP
+PUT https://management.azure.com/{MOResourceId}/providers/microsoft.insights/datacollectionruleassociations/assoc?api-version=2021-04-01
+```
+**Sample Request URI**
+```HTTP
+PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{AADTenantId}/providers/microsoft.insights/datacollectionruleassociations/assoc?api-version=2021-04-01
+```
+
+**URI Parameters**
+
+| Name | In | Type | Description |
+|:|:|:|:|
+| `MOResourceId` | path | string | Full resource ID of the MO created in step 2. Example: 'providers/Microsoft.Insights/monitoredObjects/{AADTenantId}' |
+
+**Headers**
+- Authorization: ARM Bearer Token
+- Content-Type: Application/json
+
+**Request Body**
+```JSON
+{
+ "properties":
+ {
+ "dataCollectionRuleId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{DCRName}"
+ }
+}
+```
+**Body parameters**
+
+| Name | Description |
+|:|:|
+| `dataCollectionRuleId` | The resource ID of an existing Data Collection Rule that you created in the **same region** as the Monitored Object. |
++
+### Using PowerShell
+```PowerShell
+$TenantID = "xxxxxxxxx-xxxx-xxx" #Your Tenant ID
+$SubscriptionID = "xxxxxx-xxxx-xxxxx" #Your Subscription ID
+$ResourceGroup = "rg-yourResourceGroup" #Your resource group
+$DCRName = "CollectWindowsOSlogs" #Your Data collection rule name
+
+Connect-AzAccount -Tenant $TenantID
+
+#Select the subscription
+Select-AzSubscription -SubscriptionId $SubscriptionID
+
+#Grant Access to User at root scope "/"
+$user = Get-AzADUser -UserPrincipalName (Get-AzContext).Account
+
+New-AzRoleAssignment -Scope '/' -RoleDefinitionName 'Owner' -ObjectId $user.Id
+
+#Create Auth Token
+$auth = Get-AzAccessToken
+
+$AuthenticationHeader = @{
+ "Content-Type" = "application/json"
+ "Authorization" = "Bearer " + $auth.Token
+ }
++
+#1. Assign 'Monitored Object Contributor' role to the operator
+$newguid = (New-Guid).Guid
+$UserObjectID = $user.Id
+
+$body = @"
+{
+ "properties": {
+ "roleDefinitionId":"/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b",
+ "principalId": `"$UserObjectID`"
+ }
+}
+"@
+
+$request = "https://management.azure.com/providers/microsoft.insights/providers/microsoft.authorization/roleassignments/$newguid`?api-version=2021-04-01-preview"
++
+Invoke-RestMethod -Uri $request -Headers $AuthenticationHeader -Method PUT -Body $body
++
+##########################
+
+#2. Create Monitored Object
+
+$request = "https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/$TenantID`?api-version=2021-09-01-preview"
+$body = @'
+{
+ "properties":{
+ "location":"eastus"
+ }
+}
+'@
+
+$Respond = Invoke-RestMethod -Uri $request -Headers $AuthenticationHeader -Method PUT -Body $body -Verbose
+$RespondID = $Respond.id
+
+#########
+
+#3. Associate DCR to Monitored Object
+
+$request = "https://management.azure.com$RespondID/providers/microsoft.insights/datacollectionruleassociations/assoc?api-version=2021-04-01"
+$body = @"
+ {
+ "properties": {
+ "dataCollectionRuleId": "/subscriptions/$SubscriptionID/resourceGroups/$ResourceGroup/providers/Microsoft.Insights/dataCollectionRules/$DCRName"
+ }
+ }
+
+"@
+
+Invoke-RestMethod -Uri $request -Headers $AuthenticationHeader -Method PUT -Body $body
+```
+++
+## Verify successful setup
+Check the 'Heartbeat' table (and any other tables you configured in the rules) in the Log Analytics workspace that you specified as a destination in the data collection rule(s).
+The `SourceComputerId`, `Computer`, and `ComputerIP` columns should all reflect the client device information, and the `Category` column should say 'Azure Monitor Agent'. See the example below:
+
+[![Diagram shows agent heartbeat logs on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-heartbeat-logs.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-heartbeat-logs.png)
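+
+As an alternative to checking in the portal, here's a sketch that runs the same check from PowerShell (assumes the Az.OperationalInsights module; the workspace ID is a placeholder):
+```PowerShell
+# Query the latest agent heartbeats from the destination workspace
+$query = "Heartbeat | where Category == 'Azure Monitor Agent' | project TimeGenerated, Computer, ComputerIP | take 10"
+$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<log-analytics-workspace-id>" -Query $query
+$result.Results
+```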
++
+## Manage the agent
+
+### Check the agent version
+You can use any of the following options to check the installed version of the agent:
+- Open **Control Panel** > **Programs and Features** > **Azure Monitor Agent** and see the 'Version' listed
+- Open **Settings** > **Apps** > **Apps and Features** > **Azure Monitor Agent** and see the 'Version' listed
+
+### Uninstall the agent
+You can use any of the following options to uninstall the agent:
+- Open **Control Panel** > **Programs and Features** > **Azure Monitor Agent** and click 'Uninstall'
+- Open **Settings** > **Apps** > **Apps and Features** > **Azure Monitor Agent** and click 'Uninstall'
+
+If you face issues during uninstallation, refer to the [troubleshooting guidance](#troubleshoot) below.
+
+### Update the agent
+To update the agent, install the new version you wish to update to.
++
+## Troubleshoot
+### View agent diagnostic logs
+1. Rerun the installation with logging turned on and specify the log file name:
+   `msiexec /i AzureMonitorAgentClientSetup.msi /L*V <log file name>`
+2. Runtime logs are collected automatically either at the default location `C:\Resources\Azure Monitor Agent\` or at the file path provided during installation.
+   - If you can't locate the path, the exact location can be found in the registry as `AMADataRootDirPath` under `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMonitorAgent` (see the snippet after this list).
+3. The 'ServiceLogs' folder contains logs from the AMA Windows service, which launches and manages AMA processes.
+4. 'AzureMonitorAgent.MonitoringDataStore' contains data/logs from AMA processes.
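+
+To read the configured data directory from the registry, a quick sketch using built-in cmdlets:
+```PowerShell
+# Returns the agent's data root directory recorded by the installer
+(Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\AzureMonitorAgent' -Name 'AMADataRootDirPath').AMADataRootDirPath
+```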
+
+### Common issues
+
+#### Missing DLL
+- Error message: "There's a problem with this Windows Installer package. A DLL required for this installer to complete could not be run. …"
+- Ensure you have installed [C++ Redistributable (>2015)](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) before installing AMA.
+
+#### Silent install from command prompt fails
+Make sure to run the installer from an elevated (administrator) command prompt. Silent install can only be initiated from an administrator command prompt.
+
+#### Uninstallation fails due to the uninstaller being unable to stop the service
+- If there's an option to try again, try it again
+- If retrying from the uninstaller doesn't work, cancel the uninstall and stop the Azure Monitor Agent service from Services (desktop application)
+- Retry the uninstall
+
+#### Force uninstall manually when uninstaller doesn't work
+- Stop the Azure Monitor Agent service, then try uninstalling again. If that fails, proceed with the following steps (the first two steps are scripted in the snippet after this list)
+- Delete the AMA service with `sc delete AzureMonitorAgent` from an admin command prompt
+- Download [this tool](https://support.microsoft.com/topic/fix-problems-that-block-programs-from-being-installed-or-removed-cca7d1b6-65a9-3d98-426b-e9f927e1eb4d) and uninstall AMA
+- Delete the AMA binaries. They're stored in `Program Files\Azure Monitor Agent` by default
+- Delete the AMA data/logs. They're stored in `C:\Resources\Azure Monitor Agent` by default
+- Open the Registry. Check `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure Monitor Agent`. If it exists, delete the key.
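+
+For example, the first two steps from an elevated PowerShell prompt:
+```PowerShell
+# Stop the agent service, then remove its service registration
+Stop-Service -Name 'AzureMonitorAgent' -Force
+sc.exe delete AzureMonitorAgent
+```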
++
+## Questions and feedback
+Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the preview on the [Azure Monitor Agent User Community](https://teams.microsoft.com/l/team/19%3af3f168b782f64561b52abe75e59e83bc%40thread.tacv2/conversations?groupId=770d6aa5-c2f7-4794-98a0-84fd6ae7f193&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47).
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Title: Configure data collection for the Azure Monitor agent
-description: Describes how to create a data collection rule to collect data from virtual machines using the Azure Monitor agent.
+description: Describes how to create a data collection rule to collect events and performance data from virtual machines using the Azure Monitor agent.
Last updated 03/16/2022 # Configure data collection for the Azure Monitor agent-
-Data Collection Rules (DCR) define data coming into Azure Monitor and specify where it should be sent. This article describes how to create a data collection rule to collect data from virtual machines using the Azure Monitor agent.
-
-For a complete description of data collection rules, see [Data collection rules in Azure Monitor](../essentials/data-collection-rule-overview.md).
+This article describes how to create a [data collection rule](../essentials/data-collection-rule-overview.md) to collect events and performance counters from virtual machines using the Azure Monitor agent. The data collection rule defines data coming into Azure Monitor and specifies where it should be sent.
> [!NOTE] > This article describes how to configure data for virtual machines with the Azure Monitor agent only. ## Data collection rule associations
-To apply a DCR to a virtual machine, you create an association for the virtual machine. A virtual machine may have an association to multiple DCRs, and a DCR may have multiple virtual machines associated to it. This allows you to define a set of DCRs, each matching a particular requirement, and apply them to only the virtual machines where they apply.
+To apply a DCR to a virtual machine, you create an association for the virtual machine. A virtual machine may have an association to multiple DCRs, and a DCR may have multiple virtual machines associated to it. This allows you to define a set of DCRs, each matching a particular requirement, and apply them to only the virtual machines where they apply.
For example, consider an environment with a set of virtual machines running a line of business application and others running SQL Server. You might have one default data collection rule that applies to all virtual machines and separate data collection rules that collect data specifically for the line of business application and for SQL Server. The associations for the virtual machines to the data collection rules would look similar to the following diagram. ![Diagram shows virtual machines hosting line of business application and SQL Server associated with data collection rules named central-i t-default and lob-app for line of business application and central-i t-default and s q l for SQL Server.](media/data-collection-rule-azure-monitor-agent/associations.png) - ## Create rule and association in Azure portal You can use the Azure portal to create a data collection rule and associate virtual machines in your subscription to that rule. The Azure Monitor agent will be automatically installed and a managed identity created for any virtual machines that don't already have it installed.
Since you're charged for any data collected in a Log Analytics workspace, you sh
To specify additional filters, you must use Custom configuration and specify an XPath that filters out the events you don't. XPath entries are written in the form `LogName!XPathQuery`. For example, you may want to return only events from the Application event log with an event ID of 1035. The XPathQuery for these events would be `*[System[EventID=1035]]`. Since you want to retrieve the events from the Application event log, the XPath would be `Application!*[System[EventID=1035]]` ### Extracting XPath queries from Windows Event Viewer
-One of the ways to create XPath quries is to use Windows Event Viewer to extract XPath queries as shown below.
+One of the ways to create XPath queries is to use Windows Event Viewer to extract XPath queries as shown below.
*In step 5 when pasting over the 'Select Path' parameter value, you must append the log type category followed by '!' and then paste the copied value. [![Extract XPath](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
This is enabled as part of Azure CLI **monitor-control-service** Extension. [Vie
## Next steps
+- [Collect text logs using Azure Monitor agent.](data-collection-text-log.md)
- Learn more about the [Azure Monitor Agent](azure-monitor-agent-overview.md). - Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
+
+ Title: Collect text logs with Azure Monitor agent (preview)
+description: Configure collection of file-based text logs using a data collection rule on virtual machines with the Azure Monitor agent.
+ Last updated : 04/08/2022+++
+# Collect text logs with Azure Monitor agent (preview)
+This tutorial shows you how to configure the collection of file-based text logs with the [Azure Monitor agent](azure-monitor-agent-overview.md) and send the collected data to a custom table in a Log Analytics workspace. This feature uses a [data collection rule](../essentials/data-collection-rule-overview.md) that you can use to define the structure of the log file and its target table.
+
+> [!NOTE]
+> This feature is currently in public preview and isn't completely implemented in the Azure portal. This tutorial uses Azure Resource Manager templates for steps that can't yet be performed with the portal.
+
+In this tutorial, you learn to:
+
+> [!div class="checklist"]
+> * Create a custom table in a Log Analytics workspace.
+> * Create a data collection endpoint to receive data from an agent.
+> * Create a data collection rule that collects data from a custom text log file.
+> * Create an association to apply the data collection rule to agents.
+## Prerequisites
+To complete this tutorial, you need the following:
+
+- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#manage-access-using-azure-permissions).
+- [Permissions to create Data Collection Rule objects](/azure/azure-monitor/essentials/data-collection-rule-overview#permissions) in the workspace.
+- An agent with a supported log file, as described in the next section.
+
+## Log files supported
+The log file must meet the following criteria to be collected by this feature:
+
+- The log file must be stored on a local drive of a virtual machine, virtual machine scale set, or Arc-enabled server with the Azure Monitor agent installed.
+- Each entry in the log file must be delineated with an [ISO 8601 formatted](https://www.iso.org/standard/40874.html) time stamp or an end of line (the snippet after this list writes a compliant entry).
+- The log file must not use circular logging or log rotation, where the file is overwritten with new entries or renamed and the same file name is reused for continued logging.
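+
+For example, this sketch appends a test entry whose line starts with an ISO 8601 time stamp; the file path and message are illustrative, matching the pattern used later in this tutorial:
+```PowerShell
+# Get-Date -Format o produces an ISO 8601 round-trip time stamp
+Add-Content -Path 'C:\JavaLogs\app.log' -Value "$(Get-Date -Format o) INFO Sample application event"
+```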
++
+## Steps to collect text logs
+The steps to configure log collection are as follows. The detailed steps for each are provided in the sections below:
+
+1. Create a new table in your workspace to receive the collected data.
+2. Create a data collection endpoint for the Azure Monitor agent to connect to.
+3. Create a data collection rule to define the structure of the log file and the destination of the collected data.
+4. Create an association between the data collection rule and the agent collecting the log file.
+
+## Create new table in Log Analytics workspace
+The custom table must be created before you can send data to it. When you create the table, you provide its name and a definition for each of its columns.
+
+Use the **Tables - Update** API to create the table with the PowerShell code below. This code creates a table called *MyTable_CL* with two columns. You can modify this schema for your own table.
+
+> [!IMPORTANT]
+> Custom tables must use a suffix of *_CL*.
+
+1. Click the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
+
+ :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening Cloud Shell in the Azure portal.":::
+
+2. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it.
+
+ ```PowerShell
+ $tableParams = @'
+ {
+ "properties": {
+ "schema": {
+ "name": "MyTable_CL",
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "DateTime"
+ },
+ {
+ "name": "RawData",
+ "type": "String"
+ }
+ ]
+ }
+ }
+ }
+ '@
+
+ Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+ ```
++
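+Optionally, to confirm the table was created, you can issue a GET against the same path (same placeholder values as above):
+
+```PowerShell
+Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method GET
+```
+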
+## Create data collection endpoint
+A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md) is required for the agent to connect to and send data to Azure Monitor. The DCE must be located in the same region as the Log Analytics workspace where the data will be sent. If you already have a data collection endpoint for the agent, you can use the existing one.
+
+1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+
+ :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows portal blade to deploy custom template.":::
+
+2. Click **Build your own template in the editor**.
+
+ :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal blade to build template in the editor.":::
+
+3. Paste the Resource Manager template below into the editor and then click **Save**. You don't need to modify this template since you will provide values for its parameters.
+
+ :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot that shows portal blade to edit Resource Manager template.":::
++
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionEndpointName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the name of the Data Collection Endpoint to create."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "westus2",
+ "allowedValues": [
+ "westus2",
+ "eastus2",
+ "eastus2euap"
+ ],
+ "metadata": {
+ "description": "Specifies the location in which to create the Data Collection Endpoint."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionEndpoints",
+ "name": "[parameters('dataCollectionEndpointName')]",
+ "location": "[parameters('location')]",
+ "apiVersion": "2021-04-01",
+ "properties": {
+ "networkAcls": {
+ "publicNetworkAccess": "Enabled"
+ }
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionEndpointId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionEndpoints', parameters('dataCollectionEndpointName'))]"
+ }
+ }
+ }
+ ```
+
+4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide a **Name** for the data collection endpoint. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection endpoint.
+
+ :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/custom-deployment-values.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/custom-deployment-values.png" alt-text="Screenshot that shows portal blade to edit custom deployment values for data collection endpoint.":::
+
+5. Click **Review + create** and then **Create** when you review the details.
+
+6. Once the DCE is created, select it so you can view its properties. Note the **Logs ingestion URI** since you'll need this in a later step.
+
+ :::image type="content" source="../logs/media/tutorial-custom-logs-api/data-collection-endpoint-overview.png" lightbox="../logs/media/tutorial-custom-logs-api/data-collection-endpoint-overview.png" alt-text="Screenshot that shows portal blade with details of data collection endpoint uri.":::
+
+7. Click **JSON View** to view other details for the DCE. Copy the **Resource ID** since you'll need this in a later step.
+
+ :::image type="content" source="../logs/media/tutorial-custom-logs-api/data-collection-endpoint-json.png" lightbox="../logs/media/tutorial-custom-logs-api/data-collection-endpoint-json.png" alt-text="Screenshot that shows JSON view for data collection endpoint with the resource ID.":::
++
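+As an alternative to the portal deployment above, the same template (saved locally; the file name is illustrative) can be deployed with Azure PowerShell:
+
+```PowerShell
+# Deploy the DCE template; the parameter names match the template above
+New-AzResourceGroupDeployment -ResourceGroupName "my-resource-group" -TemplateFile ".\dce-template.json" `
+  -TemplateParameterObject @{ dataCollectionEndpointName = "my-dce"; location = "westus2" }
+```
+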
+## Create data collection rule
+The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) defines the schema of the data being collected from the log file, the transformation that will be applied to it, and the destination workspace and table the transformed data will be sent to.
+
+1. The data collection rule requires the resource ID of your workspace. Navigate to your workspace in the **Log Analytics workspaces** menu in the Azure portal. From the **Properties** page, copy the **Resource ID** and save it for later use.
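+
+   Alternatively, here's a sketch to retrieve it with Azure PowerShell (the workspace and resource group names are placeholders):
+   ```PowerShell
+   (Get-AzOperationalInsightsWorkspace -ResourceGroupName "my-resource-group" -Name "my-workspace").ResourceId
+   ```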
+
+ :::image type="content" source="../logs/media/tutorial-custom-logs-api/workspace-resource-id.png" lightbox="../logs/media/tutorial-custom-logs-api/workspace-resource-id.png" alt-text="Screenshot showing workspace resource ID.":::
+
+1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+
+ :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows portal blade to deploy custom template.":::
+
+2. Click **Build your own template in the editor**.
+
+ :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal blade to build template in the editor.":::
+
+3. Paste the Resource Manager template below into the editor. You may choose to modify the following details in the DCR defined in this template:
+
+
+ - `streamDeclarations`: Defines the columns of the incoming data. This must match the structure of the log file.
+ - `filePatterns`: Specifies the location and file pattern of the log files to collect. This defines a separate pattern for Windows and Linux agents.
+ - `transformKql`: Specifies a [transformation](../logs/../essentials/data-collection-rule-transformations.md) to apply to the incoming data before it's sent to the workspace. Since data collection rules for Azure Monitor agent don't yet support transformations, this value will always be `source`.
++
+4. Click **Save**.
+
+ :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot that shows portal blade to edit Resource Manager template.":::
++
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionRuleName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the name of the Data Collection Rule to create."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "westus2",
+ "allowedValues": [
+ "westus2",
+ "eastus2",
+ "eastus2euap"
+ ],
+ "metadata": {
+ "description": "Specifies the location in which to create the Data Collection Rule."
+ }
+ },
+ "workspaceName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the Log Analytics workspace to use."
+ }
+ },
+ "workspaceResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
+ }
+ },
+ "endpointResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Data Collection Endpoint to use."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "name": "[parameters('dataCollectionRuleName')]",
+ "location": "[parameters('location')]",
+ "apiVersion": "2021-09-01-preview",
+ "properties": {
+ "dataCollectionEndpointId": "[parameters('endpointResourceId')]",
+ "streamDeclarations": {
+ "Custom-MyLogFileFormat": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "RawData",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "dataSources": {
+ "logFiles ": [
+ {
+ "streams": [
+ "Custom-MyLogFileFormat "
+ ],
+ "filePatterns": [
+ "C:\\JavaLogs\\*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myLogFileFormat-Windows"
+ },
+ {
+ "streams": [
+ "Custom-MyLogFileFormat"
+ ],
+ "filePatterns": [
+ "/var/*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myLogFileFormat-Linux"
+ }
+
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "[parameters('workspaceResourceId')]",
+ "name": "[parameters('workspaceName')]"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-MyLogFileFormat"
+ ],
+ "destinations": [
+ "[parameters('workspaceName')]"
+ ],
+ "transformKql": "source",
+ "outputStream": "Custom-MyTable_CL"
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionRuleId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
+ }
+ }
+ }
+ ```
+
+5. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide values defined in the template. This includes a **Name** for the data collection rule and the **Workspace Resource ID** and **Endpoint Resource ID**. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection rule.
+
+ :::image type="content" source="media/data-collection-text-log/custom-deployment-values.png" lightbox="media/data-collection-text-log/custom-deployment-values.png" alt-text="Screenshot that shows portal blade to edit custom deployment values for data collection rule.":::
+
+6. Click **Review + create** and then **Create** when you review the details.
+
+7. When the deployment is complete, expand the **Deployment details** box and click on your data collection rule to view its details. Click **JSON View**.
+
+ :::image type="content" source="media/data-collection-text-log/data-collection-rule-details.png" lightbox="media/data-collection-text-log/data-collection-rule-details.png" alt-text="Screenshot that shows portal blade with data collection rule details.":::
+
+8. Change the API version to **2021-09-01-preview**.
+
+ :::image type="content" source="media/data-collection-text-log/data-collection-rule-json-view.png" lightbox="media/data-collection-text-log/data-collection-rule-json-view.png" alt-text="Screenshot that shows JSON view for data collection rule.":::
+
+9. Copy the **Resource ID** for the data collection rule. You'll use this in the next step.
+
+## Create association with agent
+The final step is to create a data collection rule association that associates the data collection rule with the agents that have the log file to be collected. A single data collection rule can be used with multiple agents.
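+
+Besides the portal steps below, the association can also be created programmatically. Here's a sketch with Azure PowerShell (cmdlet availability depends on your Az.Monitor version; the resource IDs are placeholders):
+
+```PowerShell
+# Associate the data collection rule with a virtual machine
+New-AzDataCollectionRuleAssociation -AssociationName "assoc" `
+  -TargetResourceId "/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Compute/virtualMachines/{vm-name}" `
+  -RuleId "/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionRules/{dcr-name}"
+```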
+
+1. From the **Monitor** menu in the Azure portal, select **Data Collection Rules** and select the rule that you just created.
+
+ :::image type="content" source="media/data-collection-text-log/data-collection-rules.png" lightbox="media/data-collection-text-log/data-collection-rules.png" alt-text="Screenshot that shows portal blade with data collection rules menu item.":::
+
+2. Select **Resources** and then click **Add** to view the available resources.
+
+ :::image type="content" source="media/data-collection-text-log/data-collection-rules.png" lightbox="media/data-collection-text-log/data-collection-rules.png" alt-text="Screenshot that shows portal blade with resources for the data collection rule.":::
+
+3. Select either individual agents to associate the data collection rule, or select a resource group to create an association for all agents in that resource group. Click **Apply**.
+
+ :::image type="content" source="media/data-collection-text-log/select-resources.png" lightbox="media/data-collection-text-log/select-resources.png" alt-text="Screenshot that shows portal blade to add resources to the data collection rule.":::
++
+## Next steps
+
+- Learn more about the [Azure Monitor agent](azure-monitor-agent-overview.md).
+- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
+- Learn more about [data collection endpoints](../essentials/data-collection-endpoint-overview.md).
azure-monitor Data Sources Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-custom-logs.md
Title: Collect custom logs with Log Analytics agent in Azure Monitor
+ Title: Collect text logs with Log Analytics agent in Azure Monitor
description: Azure Monitor can collect events from text files on both Windows and Linux computers. This article describes how to define a new custom log and details of the records they create in Azure Monitor.
Last updated 02/07/2022
-# Collect custom logs with Log Analytics agent in Azure Monitor
+# Collect text logs with Log Analytics agent in Azure Monitor
> [!IMPORTANT] > This article describes collecting file based text logs using the Log Analytics agent. It should not be confused with the [custom logs API](../logs/custom-logs-overview.md) which allows you to send data to Azure Monitor Logs using a REST API.
Last updated 02/07/2022
The Custom Logs data source for the Log Analytics agent in Azure Monitor allows you to collect events from text files on both Windows and Linux computers. Many applications log information to text files instead of standard logging services such as Windows Event log or Syslog. Once collected, you can either parse the data into individual fields in your queries or extract the data during collection to individual fields. > [!IMPORTANT]
-> This article covers collecting custom logs with the [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a list of the available agents and the data they can collect.
+> This article covers collecting text logs with the [Log Analytics agent](./log-analytics-agent.md). See [Collect text logs with Azure Monitor agent (preview)](../agents/data-collection-text-log.md) for details on collecting text logs with [Azure Monitor agent](azure-monitor-agent-overview.md).
![Custom log collection](media/data-sources-custom-logs/overview.png)
The log files to be collected must match the following criteria.
- The log file must not allow circular logging, log rotation where the file is overwritten with new entries, or the file is renamed and the same file name is reused for continued logging. - The log file must use ASCII or UTF-8 encoding. Other formats such as UTF-16 are not supported. - For Linux, time zone conversion is not supported for time stamps in the logs.-- As a best practice, the log file should include the date time that it was created to prevent log rotation overwiting or renaming.
+- As a best practice, the log file should include the date time that it was created to prevent log rotation overwriting or renaming.
>[!NOTE] > If there are duplicate entries in the log file, Azure Monitor will collect them. However, the query results will be inconsistent where the filter results show more events than the result count. It will be important that you validate the log to determine if the application that creates it is causing this behavior and address it if possible before creating the custom log collection definition.
azure-monitor Data Sources Iis Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-iis-logs.md
Last updated 03/31/2022
Internet Information Services (IIS) stores user activity in log files that can be collected by the Log Analytics agent and stored in [Azure Monitor Logs](../data-platform.md). > [!IMPORTANT]
-> This article covers collecting IIS logs with the [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a list of the available agents and the data they can collect.
+> This article covers collecting IIS logs with the [Log Analytics agent](./log-analytics-agent.md). See [Collect text logs with Azure Monitor agent (preview)](../agents/data-collection-text-log.md) for details on collecting IIS logs with [Azure Monitor agent](azure-monitor-agent-overview.md).
![IIS logs](media/data-sources-iis-logs/overview.png)
azure-monitor Activity Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/activity-log-alerts.md
Last updated 04/04/2022
## Overview
-Activity log alerts allow you to be notified on events and operations that are logged in [Azure Activity Log](../essentials/activity-log.md). An alert is fired when a new [activity log event](../essentials/activity-log-schema.md) occurs that matches the conditions specified in the alert rule. Activity log alert rules are Azure resources, so they can be created by using an Azure Resource Manager template. They also can be created, updated, or deleted in the Azure portal. This article introduces the concepts behind activity log alerts. For more information on creating or usage of activity log alert rules, see [Create and manage activity log alerts](./alerts-activity-log.md).
+Activity log alerts allow you to be notified on events and operations that are logged in [Azure Activity Log](../essentials/activity-log.md). An alert is fired when a new [activity log event](../essentials/activity-log-schema.md) occurs that matches the conditions specified in the alert rule.
+
+Activity log alert rules are Azure resources, so they can be created by using an Azure Resource Manager template. They also can be created, updated, or deleted in the Azure portal. This article introduces the concepts behind activity log alerts. For more information on creating or usage of activity log alert rules, see [Create and manage activity log alerts](./alerts-activity-log.md).
## Alerting on activity log event categories
-You can create activity log alert rules to receive notifications on one of the following activity log event categories :
+You can create activity log alert rules to receive notifications on one of the following activity log event categories:
-* **Administrative events** - get notified when a create, update, delete, or action operation occur on resources in your Azure subscription, resource group, or on a specific resource. For example, you might want to be notified when any virtual machine in myProductionResourceGroup is deleted. Or, you might want to be notified if any new roles are assigned to a user in your subscription.
-* **Service Health events** - get notified on Azure incidents, such as an outage or a maintenance event, occurred in a specific Azure region and may impact services in your subscription.
-* **Resource health events** - get notified when the health of a specific Azure resource you are using is degraded, or if the resource becomes unavailable.
-* **Autoscale events** - get notified when events related to the operation of the configured [autoscale operations](../autoscale/autoscale-overview.md) in your subscription. An example of an Autoscale event is Autoscale scale up action failed.
-* **Recommendation** - get notified when a new [Azure Advisor recommendation](../../advisor/advisor-overview.md) is available for your subscription.
-* **Security** - get notified on events generated by Microsoft Defender for Cloud. An example of a Security event is Suspicious double extension file executed.
-* **Policy** - get notified on effect action operations performed by Azure Policy. Examples of Policy events include Audit and Deny.
+| Event Category | Category Description | Example |
+|:|:|:|
+| Administrative | ARM operation (e.g. create, update, delete, or action) was performed on resources in your subscription, resource group, or on a specific Azure resource.| A virtual machine in your resource group is deleted |
+| Service health | Service incidents (e.g. an outage or a maintenance event) occurred that may impact services in your subscription on a specific region.| An outage impacting VMs in your subscription in East US. |
+| Resource health | The health of a specific resource is degraded, or the resource becomes unavailable. | A VM in your subscription transitions to a degraded or unavailable state. |
+| Autoscale | An Azure Autoscale operation has occurred, resulting in success or failure | An autoscale action on a virtual machine scale set in your subscription failed. |
+| Recommendation | A new Azure Advisor recommendation is available for your subscription | A high-impact recommendation for your subscription was received. |
+| Security | Events detected by Microsoft Defender for Cloud | Execution of a suspicious file with a double extension was detected in your subscription |
+| Policy | Operations performed by Azure Policy | Policy Deny event occurred in your subscription. |
> [!NOTE]
-> Alerts **cannot** be created for events in Alert category of activity log.
+> Alert rules **cannot** be created for events in Alert category of activity log.
## Configuring activity log alert rules
-You can configure an activity log alert based on any top-level property in the JSON object for an activity log event. For more information, see [Categories in the Activity Log](../essentials/activity-log.md#view-the-activity-log).
+You can configure an activity log alert rule based on any top-level property in the JSON object for an activity log event. For more information, see [Categories in the Activity Log](../essentials/activity-log.md#view-the-activity-log).
-An alternative simple way for creating conditions for activity log alerts is to explore or filter events via [Activity log in Azure portal](../essentials/activity-log.md#view-the-activity-log). In Azure Monitor - Activity log, one can filter and locate a required event and then create an alert to notify on similar by using the **New alert rule** button.
+An alternative simple way for creating conditions for activity log alert rules is to explore or filter events via [Activity log in Azure portal](../essentials/activity-log.md#view-the-activity-log). In Azure Monitor - Activity log, one can filter and locate a required event and then create an alert rule to notify on similar events by using the **New alert rule** button.
> [!NOTE] > An activity log alert rule monitors only for events in the subscription in which the alert rule is created.
-Activity log events have a few common properties which can be used to define a the activity log alert rule condition:
+Activity log events have a few common properties which can be used to define an activity log alert rule condition:
- **Category**: Administrative, Service Health, Resource Health, Autoscale, Security, Policy, or Recommendation. - **Scope**: The individual resource or set of resource(s) for which the alert on activity log is defined. Scope for an activity log alert can be defined at various levels:
Activity log events have a few common properties which can be used to define a t
- Resource Group Level: For example, all virtual machines in a specific resource group - Subscription Level: For example, all virtual machines in a subscription (or) all resources in a subscription - **Resource group**: By default, the alert rule is saved in the same resource group as that of the target defined in Scope. The user can also define the Resource Group where the alert rule should be stored.-- **Resource type**: Resource Manager defined namespace for the target of the alert.-- **Operation name**: The [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md) name utilized for Azure role-based access control . Operations not registered with Azure Resource Manager can not be used in an activity log alert rule.
+- **Resource type**: Resource Manager defined namespace for the target of the alert rule.
+- **Operation name**: The [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md) name utilized for Azure role-based access control. Operations not registered with Azure Resource Manager cannot be used in an activity log alert rule.
- **Level**: The severity level of the event (Informational, Warning, Error, or Critical). - **Status**: The status of the event, typically Started, Failed, or Succeeded. - **Event initiated by**: Also known as the "caller." The email address or Azure Active Directory identifier of the user (or application) who performed the operation.
-In addition to these comment properties, different activity log events categories have categpry-specific properties that can be used to define an alert rule for events of this category. For example, when creating a service health alert rule you can configure a condition on the impacted region name or service name that appear in the event.
+In addition to these common properties, different activity log events have category-specific properties that can be used to configure an alert rule for events of each category. For example, when creating a service health alert rule you can configure a condition on the impacted region or service that appears in the event.
## Using action groups
-When an activity log alert is fired, it uses an action group to generate actions or notifications. An action group is a reusable set of notification receivers, such as email addresses, webhook URLs, or SMS phone numbers. The receivers can be referenced from multiple alerts to centralize and group your notification channels. When you define your activity log alert rule, you have two options. You can:
+When an activity log alert is fired, it uses an action group to trigger actions or send notifications. An action group is a reusable set of notification receivers, such as email addresses, webhook URLs, or SMS phone numbers. The receivers can be referenced from multiple alert rules to centralize and group your notification channels. When you define your activity log alert rule, you have two options. You can:
* Use an existing action group in your activity log alert rule. * Create a new action group.
When an activity log alert is fired, it uses an action group to generate actions
To learn more about action groups, see [Create and manage action groups in the Azure portal](./action-groups.md). ## Activity log alert rules limit
-You can create up to 100 active activity log alert rules per subscription (including alert rules all activity log categories, such as resource health or service health ). This limit can't be increased.
-If you are reaching near this limit, there are several guidelines you can follow to optimize the use of activity log alerts rules so that you can cover more resources and events with the same number of rules:
+You can create up to 100 active activity log alert rules per subscription (including rules for all activity log event categories, such as resource health or service health). This limit can't be increased.
+If you are approaching this limit, there are several guidelines you can follow to optimize the use of activity log alert rules, so that you can cover more resources and events with the same number of rules:
* A single activity log alert rule can be configured to cover the scope of a single resource, a resource group, or an entire subscription. To reduce the number of rules you're using, consider to replace multiple rules covering a narrow scope with a single rule covering a broad scope. For example, if you have multiple VMs in a subscription, and you want an alert to be triggered whenever one of them is restarted, you can use a single activity log alert rule to cover all the VMs in your subscription. The alert will be triggered whenever any VM in the subscription is restarted. * A single service health alert rule can cover all the services and Azure regions used by your subscription. If you're using multiple service health alert rules per subscription, you can replace them with a single rule (or with a small number of rules, if you prefer).
-* A single resource health alert rule can cover multiple resource types and resources in your subscription. If you're using multiple resource health alert rules per subscription, you can replace them with a smaller number of rules (or even a single rule) that covers multiple resource types.
+* A single resource health alert rule can cover multiple resource types and resources in your subscription. If you're using multiple resource health alert rules per subscription, you can replace them with a smaller number of rules (or even a single rule) that covers multiple resource types.
## Next steps
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-unified-log.md
Log alerts allow you to set a location for alert rules. You can select any of th
Location affects which region the alert rule is evaluated in. Queries are executed on the log data in the selected region, that said, the alert service end to end is global. Meaning alert rule definition, fired alerts, notifications, and actions aren't bound to the location in the alert rule. Data is transfer from the set region since the Azure Monitor alerts service is a [non-regional service](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=non-regional).
-## Pricing and billing of log alerts
+## Pricing model
-Pricing information is located in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). Log Alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with:
+Each Log Alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for Log Alerts configured for [at scale monitoring](#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query.
+
+Prices for Log Alert rules are available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+
+## View log alerts usage on your Azure bill
+
+Log Alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with:
- Log Alerts on Application Insights shown with exact resource name along with resource group and alert properties. - Log Alerts on Log Analytics shown with exact resource name along with resource group and alert properties; when created using [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules).
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Title: Connection strings in Azure Application Insights | Microsoft Docs description: How to use connection strings. Previously updated : 01/17/2020 Last updated : 04/13/2022 - # Connection strings ## Overview
-Connection strings provide Application Insight users with a single configuration setting, eliminating the need for multiple proxy settings. Highly useful for intranet web servers, sovereign or hybrid cloud environments looking to send in data to the monitoring service.
+Connection strings define where to send telemetry data.
The key-value pairs provide an easy way for users to define a prefix/suffix combination for each Application Insights (AI) service or product.
-> [!IMPORTANT]
-> We don't recommend setting both Connection String and Instrumentation key. In the event that a user does set both, whichever was set last will take precedence.
-
-> [!TIP]
-> We recommend the use of connection strings over instrumentation keys.
- [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+> [!IMPORTANT]
+> Do not use a connection string and instrumentation key simultaneously. Whichever was set last will take precedence.
+ ## Scenario overview Scenarios most affected by this change:
Scenarios most affected by this change:
Your connection string is displayed on the Overview section of your Application Insights resource.
-![connection string on overview blade](media/overview-dashboard/overview-connection-string.png)
### Schema
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
You can also ingestion-time transformations to lower the storage requirements fo
The following table for methods to apply transformations to different workflows.
-> {!NOTE]
+> [!NOTE]
> Azure tables here refer to tables that are created and maintained by Microsoft and documented in the [Azure Monitor Reference](/azure/azure-monitor-reference). Custom tables are created by custom applications and have a suffix of *_CL* in their name.
See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for differ
- See [Azure Monitor cost and usage](usage-estimated-costs.md)) for a description of Azure Monitor and how to view and analyze your monthly bill. - See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges. - See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine to source of any higher than expected usage and opportunities to reduce your amount of data collected.-- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that may be ingested in a workspace.
+- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that may be ingested in a workspace.
azure-monitor Activity Log Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log-schema.md
Each entry in the activity log has a severity level. Severity level can have one
| Warning | Events that provide forewarning of potential problems, although not an actual error. Indicate that a resource is not in an ideal state and may degrade later into showing errors or critical events. | Informational | Events that pass noncritical information to the administrator. Similar to a note that says: "For your information".
-The devlopers of each resource provider choose the severity levels of their resource entries. As a result, the actual severity to you can vary depending on how your application is built. For example, items that are "critical" to a particular resource taken in isolation may not be as important as "errors" in a resource type that is central to your Azure application. Be sure to consider this fact when deciding what events to alert on.
+The developers of each resource provider choose the severity levels of their resource entries. As a result, the actual severity to you can vary depending on how your application is built. For example, items that are "critical" to a particular resource taken in isolation may not be as important as "errors" in a resource type that is central to your Azure application. Be sure to consider this fact when deciding what events to alert on.
## Categories
-Each event in the Activity Log has a particular category that are described in the following table. See the sections below for more detail on each category and its schema when you access the Activity log from the portal, PowerShell, CLI, and REST API. The schema is different when you [stream the Activity log to storage or Event Hubs](./resource-logs.md#send-to-azure-event-hubs). A mapping of the properties to the [resource logs schema](./resource-logs-schema.md) is provided in the last section of the article.
+Each event in the Activity Log has a particular category that is described in the following table. See the sections below for more detail on each category and its schema when you access the Activity log from the portal, PowerShell, CLI, and REST API. The schema is different when you [stream the Activity log to storage or Event Hubs](./resource-logs.md#send-to-azure-event-hubs). A mapping of the properties to the [resource logs schema](./resource-logs-schema.md) is provided in the last section of the article.
| Category | Description |
|:|:|
When streaming the Azure Activity log to a storage account or event hub, the dat
| properties.operationId | operationId | |
| properties.eventProperties | properties | |
-Following is an example of an event using this schema..
+Following is an example of an event using this schema:
```json {
azure-monitor Platform Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md
You can send platform logs to one or more of the destinations in the following t
- [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
- [Tutorial: Archive Azure AD logs to an Azure storage account](../../active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md)
+## Pricing model
+Processing data to stream logs is charged for [certain services](resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a Log Analytics charge for ingesting the data into a workspace.
+
+The charge is based on the number of bytes in the exported JSON formatted log data, measured in GB (10^9 bytes).
+
+Pricing is available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
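To make the billing unit concrete, the sketch below converts a raw exported-byte count into billable GB; the byte volume and per-GB rate are hypothetical placeholders, not published prices.

```powershell
# Billable GB uses the decimal definition (10^9 bytes); note that
# PowerShell's built-in 1GB constant is 2^30 bytes, so divide by 1e9.
$exportedBytes = 250e9     # hypothetical month of exported JSON log data
$ratePerGB     = 0.25      # hypothetical rate; see the pricing page
$billableGB    = $exportedBytes / 1e9
'{0} GB billable, roughly {1} USD' -f $billableGB, ($billableGB * $ratePerGB)
```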
## Next steps
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
Within the PT1H.json file, each event is stored with the following format. This
{"time": "2016-07-01T00:00:37.2040000Z","systemId": "46cdbb41-cb9c-4f3d-a5b4-1d458d827ff1","category": "NetworkSecurityGroupRuleCounter","resourceId": "/SUBSCRIPTIONS/s1id1234-5679-0123-4567-890123456789/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/TESTNSG","operationName": "NetworkSecurityGroupCounters","properties": {"vnetResourceGuid": "{12345678-9012-3456-7890-123456789012}","subnetPrefix": "10.3.0.0/24","macAddress": "000123456789","ruleName": "/subscriptions/ s1id1234-5679-0123-4567-890123456789/resourceGroups/testresourcegroup/providers/Microsoft.Network/networkSecurityGroups/testnsg/securityRules/default-allow-rdp","direction": "In","type": "allow","matchedConnections": 1988}} ```
-> [!NOTE]
-> Platform logs are written to blob storage using [JSON lines](http://jsonlines.org/), where each event is a line and the newline character indicates a new event. This format was implemented in November 2018. Prior to this date, logs were written to blob storage as a json array of records as described in [Prepare for format change to Azure Monitor platform logs archived to a storage account](resource-logs-blob-format.md).
## Next steps

* [Read more about resource logs](../essentials/platform-logs-overview.md).
-* [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
+* [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
azure-monitor Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-insights-overview.md
Here are some links to troubleshooting articles for frequently used services. Fo
* [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-troubleshoot.md)
* [Azure ExpressRoute](../../expressroute/expressroute-troubleshooting-expressroute-overview.md)
* [Azure Load Balancer](../../load-balancer/load-balancer-troubleshoot.md)
+* [Azure NAT Gateway](/azure/virtual-network/nat-gateway/troubleshoot-nat)
### Why don't I see the resources for all the subscriptions I've selected?
azure-monitor Data Ingestion Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-ingestion-time.md
Agents and management solutions use different strategies to collect data from a
|:--|:-|:|
| Windows events, syslog events, and performance metrics | collected immediately | |
| Linux performance counters | polled at 30-second intervals | |
-| IIS logs and custom logs | collected once their timestamp changes | For IIS logs, this is influenced by the [rollover schedule configured on IIS](../agents/data-sources-iis-logs.md). |
+| IIS logs and text logs | collected once their timestamp changes | For IIS logs, this is influenced by the [rollover schedule configured on IIS](../agents/data-sources-iis-logs.md). |
| Active Directory Replication solution | Assessment every five days | The agent collects these logs only when assessment is complete. |
| Active Directory Assessment solution | Weekly assessment of your Active Directory infrastructure | The agent collects these logs only when assessment is complete. |
Ingestion time may vary for different resources under different circumstances. Y
|:|:|:|
| Record created at data source | [TimeGenerated](./log-standard-columns.md#timegenerated) <br>If the data source doesn't set this value, then it will be set to the same time as _TimeReceived. | |
| Record received by Azure Monitor ingestion endpoint | [_TimeReceived](./log-standard-columns.md#_timereceived) | This field is not optimized for mass processing and should not be used to filter large datasets. |
-| Record stored in workspace and available for queries | [ingestion_time()](/azure/kusto/query/ingestiontimefunction) | It is recommended to use ingestion_time() if there is a need to filter only records that where ingested in a certain time window. In such case, it is recommended to add also TimeGenerated filter with a larger range. |
+| Record stored in workspace and available for queries | [ingestion_time()](/azure/kusto/query/ingestiontimefunction) | Use ingestion_time() if you need to filter only records that were ingested in a certain time window. In that case, also add a TimeGenerated filter with a larger range. |
### Ingestion latency delays

You can measure the latency of a specific record by comparing the result of the [ingestion_time()](/azure/kusto/query/ingestiontimefunction) function to the _TimeGenerated_ property. This data can be used with various aggregations to find how ingestion latency behaves. Examine percentiles of the ingestion time to get insights for large amounts of data.
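One way to run that comparison is from PowerShell; the sketch below summarizes latency percentiles for a single table. The workspace GUID is a placeholder, and Heartbeat is only an example table to measure.

```powershell
# Sketch: end-to-end ingestion latency percentiles for one table.
# Replace the workspace GUID; Heartbeat is an example table only.
$query = @'
Heartbeat
| extend E2ELatency = ingestion_time() - TimeGenerated
| summarize percentiles(E2ELatency, 50, 95)
'@
(Invoke-AzOperationalInsightsQuery -WorkspaceId '00000000-0000-0000-0000-000000000000' -Query $query).Results
```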
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Data is exported without a filter. For example, when you configure a data export
Log Analytics workspace data export continuously exports data that is sent to your Log Analytics workspace. There are other options to export data for particular scenarios:
- Configure Diagnostic Settings in Azure resources. Logs are sent to the destination directly and have lower latency compared to data export in Log Analytics.
-- Scheduled export from a log query using a Logic App. This is similar to the data export feature but allows you to send filtered or aggregated data to Azure Storage Account. This method though is subject to [log query limits](../service-limits.md#log-analytics-workspaces), see [Archive data from Log Analytics workspace to Azure Storage Account using Logic App](logs-export-logic-app.md).
+- Scheduled export from a log query using a Logic App. This is similar to the data export feature, but allows you to export historical data from your workspace, using filters and aggregation. This method is subject to [log query limits](../service-limits.md#log-analytics-workspaces) and not intended for scale. See [Archive data from Log Analytics workspace to Azure Storage Account using Logic App](logs-export-logic-app.md).
- One-time export to a local machine using a PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).

## Limitations
azure-monitor Tutorial Custom Logs Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs-api.md
Title: Tutorial - Send custom logs to Azure Monitor Logs using resource manager templates
-description: Tutorial on how to send custom logs to a Log Analytics workspace in Azure Monitor using resource manager templates.
+ Title: Tutorial - Send custom logs to Azure Monitor Logs using Resource Manager templates
+description: Tutorial on how to send custom logs to a Log Analytics workspace in Azure Monitor using Resource Manager templates.
Last updated 01/19/2022
-# Tutorial: Send custom logs to Azure Monitor Logs using resource manager templates (preview)
-[Custom logs](custom-logs-overview.md) in Azure Monitor allow you to send custom data to tables in a Log Analytics workspace with a REST API. This tutorial walks through configuration of a new table and a sample application to send custom logs to Azure Monitor using resource manager templates.
+# Tutorial: Send custom logs to Azure Monitor Logs using Resource Manager templates (preview)
+[Custom logs](custom-logs-overview.md) in Azure Monitor allow you to send custom data to tables in a Log Analytics workspace with a REST API. This tutorial walks through configuration of a new table and a sample application to send custom logs to Azure Monitor using Resource Manager templates.
> [!NOTE]
-> This tutorial uses resource manager templates and REST API to configure custom logs. See [Tutorial: Send custom logs to Azure Monitor Logs using the Azure portal (preview)](tutorial-custom-logs.md) for a similar tutorial using the Azure portal.
+> This tutorial uses Resource Manager templates and REST API to configure custom logs. See [Tutorial: Send custom logs to Azure Monitor Logs using the Azure portal (preview)](tutorial-custom-logs.md) for a similar tutorial using the Azure portal.
In this tutorial, you learn to:
> [!NOTE]
-> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls using the Azure Monitor **Tables** API and the Azure portal to install resource manager templates. You can use any other method to make these calls.
+> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls using the Azure Monitor **Tables** API and the Azure portal to install Resource Manager templates. You can use any other method to make these calls.
## Prerequisites
To complete this tutorial, you need the following:
## Collect workspace details

Start by gathering information that you'll need from your workspace.
-1. Navigate to your workspace in the **Log Analytics workspaces** menu in the Azure Portal. From the **Properties** page, copy the **Resource ID** and save it for later use.
+1. Navigate to your workspace in the **Log Analytics workspaces** menu in the Azure portal. From the **Properties** page, copy the **Resource ID** and save it for later use.
:::image type="content" source="media/tutorial-custom-logs-api/workspace-resource-id.png" lightbox="media/tutorial-custom-logs-api/workspace-resource-id.png" alt-text="Screenshot showing workspace resource ID.":::
Start by registering an Azure Active Directory application to authenticate again
3. Once registered, you can view the details of the application. Note the **Application (client) ID** and the **Directory (tenant) ID**. You'll need these values later in the process.
- :::image type="content" source="media/tutorial-custom-logs/new-app-id.png" lightbox="media/tutorial-custom-logs/new-app-id.png" alt-text="Screenshot showing app id.":::
+ :::image type="content" source="media/tutorial-custom-logs/new-app-id.png" lightbox="media/tutorial-custom-logs/new-app-id.png" alt-text="Screenshot showing app ID.":::
4. You now need to generate an application client secret, which is similar to creating a password to use with a username. Select **Certificates & secrets** and then **New client secret**. Give the secret a name to identify its purpose and select an **Expires** duration. *1 year* is selected here, although for a production implementation, you would follow best practices for a secret rotation procedure or use a more secure authentication mode such as a certificate.
Use the **Tables - Update** API to create the table with the PowerShell code bel
1. Click the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
- :::image type="content" source="media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening cloud shell":::
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening Cloud Shell":::
-2. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the cloud shell prompt to run it.
+2. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it.
```PowerShell $tableParams = @'
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
:::image type="content" source="media/tutorial-ingestion-time-transformations-api/build-custom-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/build-custom-template.png" alt-text="Screenshot to build template in the editor.":::
-3. Paste the resource manager template below into the editor and then click **Save**. You don't need to modify this template since you will provide values for its parameters.
+3. Paste the Resource Manager template below into the editor and then click **Save**. You don't need to modify this template since you will provide values for its parameters.
- :::image type="content" source="media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot to edit resource manager template.":::
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot to edit Resource Manager template.":::
```json
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
5. Click **Review + create** and then **Create** when you review the details.
-6. Once the DCE is created, select it so you can view its properties. Note the **Logs ingestion** URI since you'll need this in a later step.
+6. Once the DCE is created, select it so you can view its properties. Note the **Logs ingestion URI** since you'll need this in a later step.
:::image type="content" source="media/tutorial-custom-logs-api/data-collection-endpoint-overview.png" lightbox="media/tutorial-custom-logs-api/data-collection-endpoint-overview.png" alt-text="Screenshot for data collection endpoint uri.":::
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
:::image type="content" source="media/tutorial-ingestion-time-transformations-api/build-custom-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/build-custom-template.png" alt-text="Screenshot to build template in the editor.":::
-3. Paste the resource manager template below into the editor and then click **Save**.
+3. Paste the Resource Manager template below into the editor and then click **Save**.
- :::image type="content" source="media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot to edit resource manager template.":::
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot to edit Resource Manager template.":::
Notice the following details in the DCR defined in this template:
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
## Assign permissions to DCR

Once the data collection rule has been created, the application needs to be given permission to it. This will allow any application using the correct application ID and application key to send data to the new DCE and DCR.
-1. From the DCR in the Azure portal, select **Access Control (IAM)** amd then **Add role assignment**.
+1. From the DCR in the Azure portal, select **Access Control (IAM)** and then **Add role assignment**.
:::image type="content" source="media/tutorial-custom-logs/add-role-assignment.png" lightbox="media/tutorial-custom-logs/custom-log-create.png" alt-text="Screenshot for adding custom role assignment to DCR.":::
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
Several other features don't have a direct cost, but you instead pay for the ing
| Type | Description |
|:|:|
| Logs | Ingestion, retention, and export of data in Log Analytics workspaces and legacy Application Insights resources. This will typically be the bulk of Azure Monitor charges for most customers. There is no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly based on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. |
-| Resource Logs | [Diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. |
-| Custom metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
-| Alerts | Charged based on the type and number of [signals](alerts/alerts-overview.md#what-you-can-alert-on) used by the alert rule, its frequency, and the type of notification used in response. |
-| Multi-step web tests | There is a cost for [multi-step web tests](app/availability-multistep.md) in Application Insights, but this feature has been deprecated.
+| Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. |
+| Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
+| Alerts | Charged based on the type and number of [signals](alerts/alerts-overview.md#what-you-can-alert-on) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log alerts](alerts/alerts-unified-log.md) configured for [at scale monitoring](alerts/alerts-unified-log.md#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
+| Web tests | There is a cost for [multi-step web tests](app/availability-multistep.md) in Application Insights, but this feature has been deprecated. |
## Data transfer charges

Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, this charge is typically very small compared to the costs for data ingestion and retention. Controlling costs for Log Analytics should focus on your ingested data volume.
If you're new to Azure Monitor, you can use the [Azure Monitor pricing calculato
The bulk of your costs will typically be from data ingestion and retention for your Log Analytics workspaces and Application Insights resources. It's difficult to give accurate estimates for data volumes that you can expect since they'll vary significantly based on your configuration. A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
-Following is basic guidance that you can use for common resources:
+Following is basic guidance that you can use for common resources.
- **Virtual machines.** With typical monitoring enabled, a virtual machine will generate between 1 GB and 3 GB of data per month. This is highly dependent on the configuration of your agents.
- **Application Insights.** See the following section for different methods to estimate data from your applications.
-- **Container insights.** See [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster) for guidance on estimating data for your ASK cluster.
+- **Container insights.** See [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster) for guidance on estimating data for your AKS cluster.
+
+The [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) includes data volume estimation calculators for these three cases.
## Estimate application usage

There are two methods that you can use to estimate the amount of data from an application monitored with Application Insights.
Also, if you move a subscription to the new Azure monitoring pricing model in Ap
- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher than expected usage and opportunities to reduce your amount of data collected.
- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that may be ingested in a workspace.
-- See [Azure Monitor best practices - Cost management](best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
+- See [Azure Monitor best practices - Cost management](best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-netapp-files Configure Unix Permissions Change Ownership Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-unix-permissions-change-ownership-mode.md
na Previously updated : 08/06/2021 Last updated : 04/13/2022 # Configure Unix permissions and change ownership mode for NFS and dual-protocol volumes
The change ownership mode (**`Chown Mode`**) functionality enables you to set th
## Considerations

* The Unix permissions you specify apply only for the volume mount point (root directory).
-* You cannot modify the Unix permissions on source or destination volumes that are in a cross-region replication configuration.
+* You can modify the Unix permissions on the source volume *but not on the destination volume* that is in a cross-region replication configuration.
## Steps
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 03/15/2022 Last updated : 04/12/2022 # Create and manage Active Directory connections for Azure NetApp Files
This setting is configured in the **Active Directory Connections** under **NetAp
![Active Directory backup policy users](../media/azure-netapp-files/active-directory-backup-policy-users.png)
- * **Administrators privilege users**
+ * <a name="administrators-privilege-users"></a>**Administrators privilege users**
You can grant additional security privileges to AD users or groups that require even more elevated privileges to access the Azure NetApp Files volumes. The specified accounts will have further elevated permissions at the file or folder level.
This setting is configured in the **Active Directory Connections** under **NetAp
| `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege are not required to have traverse (`x`) permissions to traverse folders or symlinks. |

![Screenshot that shows the Administrators box of Active Directory connections window.](../media/azure-netapp-files/active-directory-administrators.png)
-
- The **Administrators** feature is currently in preview. If this is your first time using this feature, register the feature before using it:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAdAdministrators
- ```
-
- Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to`Registered`. Wait until the status is `Registered` before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAdAdministrators
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
* Credentials, including your **username** and **password**
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 03/17/2022 Last updated : 04/12/2022
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## April 2022
+
+* The [Administrators privilege users](create-active-directory-connections.md#administrators-privilege-users) feature is now generally available (GA).
+
+ You no longer need to register this feature before using it.
+
## March 2022

* Features that are now generally available (GA)
azure-resource-manager Bicep Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-array.md
description: Describes the functions to use in a Bicep file for working with arr
Previously updated : 04/06/2022 Last updated : 04/12/2022 # Array functions for Bicep
The output from the preceding example with the default values is:
| arrayOutput | String | one |
| stringOutput | String | O |
+## indexOf
+
+`indexOf(arrayToSearch, itemToFind)`
+
+Returns an integer for the index of the first occurrence of an item in an array. The comparison is **case-sensitive** for strings.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+| | | | |
+| arrayToSearch | Yes | array | The array to use for finding the index of the searched item. |
+| itemToFind | Yes | int, string, array, or object | The item to find in the array. |
+
+### Return value
+
+An integer representing the first index of the item in the array. The index is zero-based. If the item isn't found, -1 is returned.
+
+### Examples
+
+The following example shows how to use the indexOf and lastIndexOf functions:
+
+```bicep
+var names = [
+ 'one'
+ 'two'
+ 'three'
+]
+
+var numbers = [
+ 4
+ 5
+ 6
+]
+
+var collection = [
+ names
+ numbers
+]
+
+var duplicates = [
+ 1
+ 2
+ 3
+ 1
+]
+
+output index1 int = lastIndexOf(names, 'two')
+output index2 int = indexOf(names, 'one')
+output notFoundIndex1 int = lastIndexOf(names, 'Three')
+
+output index3 int = lastIndexOf(numbers, 4)
+output index4 int = indexOf(numbers, 6)
+output notFoundIndex2 int = lastIndexOf(numbers, '5')
+
+output index5 int = indexOf(collection, numbers)
+
+output index6 int = indexOf(duplicates, 1)
+output index7 int = lastIndexOf(duplicates, 1)
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| index1 | int | 1 |
+| index2 | int | 0 |
+| index3 | int | 0 |
+| index4 | int | 2 |
+| index5 | int | 1 |
+| index6 | int | 0 |
+| index7 | int | 3 |
+| notFoundIndex1 | int | -1 |
+| notFoundIndex2 | int | -1 |
+
## intersection

`intersection(arg1, arg2, arg3, ...)`
The output from the preceding example with the default values is:
| arrayOutput | String | three |
| stringOutput | String | e |
+## lastIndexOf
+
+`lastIndexOf(arrayToSearch, itemToFind)`
+
+Returns an integer for the index of the last occurrence of an item in an array. The comparison is **case-sensitive** for strings.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+| | | | |
+| arrayToSearch | Yes | array | The array to use for finding the index of the searched item. |
+| itemToFind | Yes | int, string, array, or object | The item to find in the array. |
+
+### Return value
+
+An integer representing the last index of the item in the array. The index is zero-based. If the item isn't found, -1 is returned.
+
+### Examples
+
+The following example shows how to use the indexOf and lastIndexOf functions:
+
+```bicep
+var names = [
+ 'one'
+ 'two'
+ 'three'
+]
+
+var numbers = [
+ 4
+ 5
+ 6
+]
+
+var collection = [
+ names
+ numbers
+]
+
+var duplicates = [
+ 1
+ 2
+ 3
+ 1
+]
+
+output index1 int = lastIndexOf(names, 'two')
+output index2 int = indexOf(names, 'one')
+output notFoundIndex1 int = lastIndexOf(names, 'Three')
+
+output index3 int = lastIndexOf(numbers, 4)
+output index4 int = indexOf(numbers, 6)
+output notFoundIndex2 int = lastIndexOf(numbers, '5')
+
+output index5 int = indexOf(collection, numbers)
+
+output index6 int = indexOf(duplicates, 1)
+output index7 int = lastIndexOf(duplicates, 1)
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| index1 | int | 1 |
+| index2 | int | 0 |
+| index3 | int | 0 |
+| index4 | int | 2 |
+| index5 | int | 1 |
+| index6 | int | 0 |
+| index7 | int | 3 |
+| notFoundIndex1 | int | -1 |
+| notFoundIndex2 | int | -1 |
+
## length

`length(arg1)`
An array or object.
The union function uses the sequence of the parameters to determine the order and values of the result.
-For arrays, the function iterates through each element in the first parameter and adds it to the result if it isn't already present. Then, it repeats the process for the second parameter and any additional parameters. If a value is already present, it's earlier placement in the array is preserved.
+For arrays, the function iterates through each element in the first parameter and adds it to the result if it isn't already present. Then, it repeats the process for the second parameter and any more parameters. If a value is already present, its earlier placement in the array is preserved.
For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
azure-resource-manager Bicep Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions.md
Title: Bicep functions description: Describes the functions to use in a Bicep file to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 04/06/2022 Last updated : 04/12/2022 # Bicep functions
The following functions are available for working with arrays. All of these func
* [concat](./bicep-functions-array.md#concat)
* [contains](./bicep-functions-array.md#contains)
* [empty](./bicep-functions-array.md#empty)
+* [indexOf](./bicep-functions-array.md#indexof)
* [first](./bicep-functions-array.md#first)
* [intersection](./bicep-functions-array.md#intersection)
* [last](./bicep-functions-array.md#last)
+* [lastIndexOf](./bicep-functions-array.md#lastindexof)
* [length](./bicep-functions-array.md#length)
* [min](./bicep-functions-array.md#min)
* [max](./bicep-functions-array.md#max)
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Lock resources to prevent changes description: Prevent users from updating or deleting Azure resources by applying a lock for all users and roles. Previously updated : 07/01/2021 Last updated : 04/13/2022
Unlike role-based access control, you use management locks to apply a restrictio
When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the lock from the parent. The most restrictive lock in the inheritance takes precedence.
+If you have a **Delete** lock on a resource and attempt to delete its resource group, the whole delete operation is blocked. Even if the resource group or other resources in the resource group aren't locked, the deletion doesn't happen. You never have a partial deletion.
+
+When you [cancel an Azure subscription](../../cost-management-billing/manage/cancel-azure-subscription.md#what-happens-after-subscription-cancellation), the resources are initially deactivated but not deleted. A resource lock doesn't block canceling the subscription. After a waiting period, the resources are permanently deleted. The resource lock doesn't prevent the permanent deletion of the resources.
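To apply a **Delete** lock from a script, a minimal Az PowerShell sketch is shown below; the lock name, notes, and resource group are placeholders.

```powershell
# Sketch: apply a CanNotDelete lock at the resource group scope.
# Child resources inherit the lock, so deletes in the group are blocked.
New-AzResourceLock -LockName 'no-delete' `
    -LockLevel CanNotDelete `
    -ResourceGroupName 'my-rg' `
    -LockNotes 'Blocks deletion of the group and everything in it' `
    -Force
```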
+
## Understand scope of locks

> [!NOTE]
Applying locks can lead to unexpected results because some operations that don't
- A read-only lock on an **Application Gateway** prevents you from getting the backend health of the application gateway. That [operation uses POST](/rest/api/application-gateway/application-gateways/backend-health), which is blocked by the read-only lock.

-- A read-only lock on a **AKS cluster** prevents all users from accessing any cluster resources from the **Kubernetes Resources** section of AKS cluster left-side blade on the Azure portal. These operations require a POST request for authentication.
+- A read-only lock on an **AKS cluster** prevents all users from accessing any cluster resources from the **Kubernetes Resources** section on the left of the AKS cluster page in the Azure portal. These operations require a POST request for authentication.
## Who can create or delete locks
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 04/05/2022 Last updated : 04/13/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters |
> | | | | |
> | clusters | resource group | 4-63 | Alphanumerics and hyphens.<br><br>Start and end with alphanumeric. |
-> | workspaces | global | 4-63 | Alphanumerics and hyphens.<br><br>Start and end with alphanumeric. |
+> | workspaces | resource group | 4-63 | Alphanumerics and hyphens.<br><br>Start and end with alphanumeric. |
## Microsoft.OperationsManagement
azure-resource-manager Template Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-array.md
Title: Template functions - arrays description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with arrays. Previously updated : 03/10/2022 Last updated : 04/12/2022 # Array functions for ARM templates
-Resource Manager provides several functions for working with arrays in your Azure Resource Manager template (ARM template):
-
-* [array](#array)
-* [concat](#concat)
-* [contains](#contains)
-* [createArray](#createarray)
-* [empty](#empty)
-* [first](#first)
-* [intersection](#intersection)
-* [last](#last)
-* [length](#length)
-* [max](#max)
-* [min](#min)
-* [range](#range)
-* [skip](#skip)
-* [take](#take)
-* [union](#union)
+This article describes the template functions for working with arrays.
To get an array of string values delimited by a value, see [split](template-functions-string.md#split).
The output from the preceding example with the default values is:
| arrayOutput | String | one |
| stringOutput | String | O |
+## indexOf
+
+`indexOf(arrayToSearch, itemToFind)`
+
+Returns an integer for the index of the first occurrence of an item in an array. The comparison is **case-sensitive** for strings.
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+| | | | |
+| arrayToSearch | Yes | array | The array to use for finding the index of the searched item. |
+| itemToFind | Yes | int, string, array, or object | The item to find in the array. |
+
+### Return value
+
+An integer representing the first index of the item in the array. The index is zero-based. If the item isn't found, -1 is returned.
+
+### Examples
+
+The following example shows how to use the indexOf and lastIndexOf functions:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "variables": {
+ "names": [
+ "one",
+ "two",
+ "three"
+ ],
+ "numbers": [
+ 4,
+ 5,
+ 6
+ ],
+ "collection": [
+ "[variables('names')]",
+ "[variables('numbers')]"
+ ],
+ "duplicates": [
+ 1,
+ 2,
+ 3,
+ 1
+ ]
+ },
+ "resources": [],
+ "outputs": {
+ "index1": {
+ "type": "int",
+ "value": "[lastIndexOf(variables('names'), 'two')]"
+ },
+ "index2": {
+ "type": "int",
+ "value": "[indexOf(variables('names'), 'one')]"
+ },
+ "notFoundIndex1": {
+ "type": "int",
+ "value": "[lastIndexOf(variables('names'), 'Three')]"
+ },
+ "index3": {
+ "type": "int",
+ "value": "[lastIndexOf(variables('numbers'), 4)]"
+ },
+ "index4": {
+ "type": "int",
+ "value": "[indexOf(variables('numbers'), 6)]"
+ },
+ "notFoundIndex2": {
+ "type": "int",
+ "value": "[lastIndexOf(variables('numbers'), '5')]"
+ },
+ "index5": {
+ "type": "int",
+ "value": "[indexOf(variables('collection'), variables('numbers'))]"
+ },
+ "index6": {
+ "type": "int",
+ "value": "[indexOf(variables('duplicates'), 1)]"
+ },
+ "index7": {
+ "type": "int",
+ "value": "[lastIndexOf(variables('duplicates'), 1)]"
+ }
+ }
+}
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| index1 | int | 1 |
+| index2 | int | 0 |
+| index3 | int | 0 |
+| index4 | int | 2 |
+| index5 | int | 1 |
+| index6 | int | 0 |
+| index7 | int | 3 |
+| notFoundIndex1 | int | -1 |
+| notFoundIndex2 | int | -1 |
+
## intersection

`intersection(arg1, arg2, arg3, ...)`
The output from the preceding example with the default values is:
| arrayOutput | String | three |
| stringOutput | String | e |
+## lastIndexOf
+
+`lastIndexOf(arrayToSearch, itemToFind)`
+
+Returns an integer for the index of the last occurrence of an item in an array. The comparison is **case-sensitive** for strings.
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+| | | | |
+| arrayToSearch | Yes | array | The array to use for finding the index of the searched item. |
+| itemToFind | Yes | int, string, array, or object | The item to find in the array. |
+
+### Return value
+
+An integer representing the last index of the item in the array. The index is zero-based. If the item isn't found, -1 is returned.
+
+### Examples
+
+The following example shows how to use the indexOf and lastIndexOf functions:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "variables": {
+ "names": [
+ "one",
+ "two",
+ "three"
+ ],
+ "numbers": [
+ 4,
+ 5,
+ 6
+ ],
+ "collection": [
+ "[variables('names')]",
+ "[variables('numbers')]"
+ ],
+ "duplicates": [
+ 1,
+ 2,
+ 3,
+ 1
+ ]
+ },
+ "resources": [],
+ "outputs": {
+ "index1": {
+ "type": "int",
+ "value": "[lastIndexOf(variables('names'), 'two')]"
+ },
+ "index2": {
+ "type": "int",
+ "value": "[indexOf(variables('names'), 'one')]"
+ },
+ "notFoundIndex1": {
+ "type": "int",
+ "value": "[lastIndexOf(variables('names'), 'Three')]"
+ },
+ "index3": {
+ "type": "int",
+ "value": "[lastIndexOf(variables('numbers'), 4)]"
+ },
+ "index4": {
+ "type": "int",
+ "value": "[indexOf(variables('numbers'), 6)]"
+ },
+ "notFoundIndex2": {
+ "type": "int",
+ "value": "[lastIndexOf(variables('numbers'), '5')]"
+ },
+ "index5": {
+ "type": "int",
+ "value": "[indexOf(variables('collection'), variables('numbers'))]"
+ },
+ "index6": {
+ "type": "int",
+ "value": "[indexOf(variables('duplicates'), 1)]"
+ },
+ "index7": {
+ "type": "int",
+ "value": "[lastIndexOf(variables('duplicates'), 1)]"
+ }
+ }
+}
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| index1 | int | 1 |
+| index2 | int | 0 |
+| index3 | int | 0 |
+| index4 | int | 2 |
+| index5 | int | 1 |
+| index6 | int | 0 |
+| index7 | int | 3 |
+| notFoundIndex1 | int | -1 |
+| notFoundIndex2 | int | -1 |
+
## length

`length(arg1)`
An array or object.
The union function uses the sequence of the parameters to determine the order and values of the result.
-For arrays, the function iterates through each element in the first parameter and adds it to the result if it isn't already present. Then, it repeats the process for the second parameter and any additional parameters. If a value is already present, it's earlier placement in the array is preserved.
+For arrays, the function iterates through each element in the first parameter and adds it to the result if it isn't already present. Then, it repeats the process for the second parameter and any more parameters. If a value is already present, its earlier placement in the array is preserved.
For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
azure-resource-manager Template Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions.md
Title: Template functions description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 02/11/2022 Last updated : 04/12/2022 # ARM template functions
Resource Manager provides several functions for working with arrays.
* [createArray](template-functions-array.md#createarray)
* [empty](template-functions-array.md#empty)
* [first](template-functions-array.md#first)
+* [indexOf](template-functions-array.md#indexof)
* [intersection](template-functions-array.md#intersection)
* [last](template-functions-array.md#last)
+* [lastIndexOf](template-functions-array.md#lastindexof)
* [length](template-functions-array.md#length)
* [min](template-functions-array.md#min)
* [max](template-functions-array.md#max)
azure-signalr Signalr Cli Create With App Service Github Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-with-app-service-github-oauth.md
This sample script creates a new Azure SignalR Service resource, which is used t
:::code language="azurecli" source="~/azure_cli_scripts/azure-signalr/create-signalr-with-app-service/create-signalr-with-app-service.sh" id="FullScript":::
-### Enable Github authentication and Git deployment for web app
+### Enable GitHub authentication and Git deployment for web app
1. Update the values in the following script for the desired deployment username and its passwor
azure-sql Active Geo Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-geo-replication-overview.md
Previously updated : 1/19/2022 Last updated : 4/13/2022 # Active geo-replication
To achieve full business continuity, adding database regional redundancy is only
An application can access a geo-secondary replica to execute read-only queries using the same or different security principals used for accessing the primary database. For more information, see [Use read-only replicas to offload read-only query workloads](read-scale-out.md).

> [!IMPORTANT]
- > You can use geo-replication to create secondary replicas in the same region as the primary. You can use these secondaries to satisfy read scale-out scenarios in the same region. However, a secondary replica in the same region does not provide additional resilience to catastrophic failures or large scale outages, and therefore is not a suitable failover target for disaster recovery purposes. It also does not guarantee availability zone isolation. Use Business Critical or Premium service tiers [zone redundant configuration](high-availability-sla.md#premium-and-business-critical-service-tier-zone-redundant-availability) or General Purpose service tier [zone redundant configuration](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) to achieve availability zone isolation.
+ > You can use geo-replication to create secondary replicas in the same region as the primary. You can use these secondaries to satisfy read scale-out scenarios in the same region. However, a secondary replica in the same region does not provide additional resilience to catastrophic failures or large scale outages, and therefore is not a suitable failover target for disaster recovery purposes. It also does not guarantee availability zone isolation. Use Business Critical or Premium service tiers [zone redundant configuration](high-availability-sla.md#premium-and-business-critical-service-tier-zone-redundant-availability) or General Purpose service tier [zone redundant configuration](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability) to achieve availability zone isolation.
> - **Planned geo-failover**
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
The following table lists the features of Azure SQL Database that are currently
| [Reverse migrate from Hyperscale](manage-hyperscale-database.md#reverse-migrate-from-hyperscale) | Reverse migration to the General Purpose service tier allows customers who have recently migrated an existing database in Azure SQL Database to the Hyperscale service tier to move back in an emergency, should Hyperscale not meet their needs. While reverse migration is initiated by a service tier change, it's essentially a size-of-data move between different architectures. |
| [SQL Analytics](../../azure-monitor/insights/azure-sql.md) | Azure SQL Analytics is an advanced cloud monitoring solution for monitoring performance of all of your Azure SQL databases at scale and across multiple subscriptions in a single view. Azure SQL Analytics collects and visualizes key performance metrics with built-in intelligence for performance troubleshooting. |
| [SQL insights](../../azure-monitor/insights/sql-insights-overview.md) | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. |
-| [Zone redundant configuration](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) | The zone redundant configuration feature utilizes [Azure Availability Zones](../../availability-zones/az-overview.md#availability-zones) to replicate databases across multiple physical locations within an Azure region. By selecting [zone redundancy](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview), you can make your databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. **The feature is currently in preview for the General Purpose and Hyperscale service tiers.** |
-
+| [Zone redundant configuration for Hyperscale databases](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview) | The zone redundant configuration feature utilizes [Azure Availability Zones](../../availability-zones/az-overview.md#availability-zones) to replicate databases across multiple physical locations within an Azure region. By selecting [zone redundancy](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview), you can make your Hyperscale databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic.|
+|||
## General availability (GA)
The following table lists the features of Azure SQL Database that have transitio
| Feature | GA Month | Details |
| | | |
+| [Zone redundant configuration for General Purpose tier](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability) | April 2022 | The zone redundant configuration feature utilizes [Azure Availability Zones](../../availability-zones/az-overview.md#availability-zones) to replicate databases across multiple physical locations within an Azure region. By selecting [zone redundancy](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability), you can make your provisioned and serverless General Purpose databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic.|
| [Maintenance window](../database/maintenance-window.md) | March 2022 | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Database. [Maintenance window advance notifications](../database/advance-notifications.md), however, are in preview. |
| [Storage redundancy for Hyperscale databases](automated-backups-overview.md#configure-backup-storage-redundancy) | March 2022 | When creating a Hyperscale database, you can choose your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or locally redundant storage (LRS) Azure standard storage. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy. |
| [Azure Active Directory-only authentication](authentication-azure-ad-only-authentication.md) | November 2021 | It's possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory. |
The following table lists the features of Azure SQL Database that have transitio
Learn about significant changes to the Azure SQL Database documentation.
+### April 2022
+
+| Changes | Details |
+| | |
+| **General Purpose tier Zone redundancy GA** | Enabling zone redundancy for your provisioned and serverless General Purpose databases and elastic pools is now generally available in select regions. To learn more, including region availability, see [General Purpose zone redundancy](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability). |
+ ### March 2022 | Changes | Details |
azure-sql Gateway Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/gateway-migration.md
Previously updated : 04/06/2022 Last updated : 04/13/2022+ # Azure SQL Database traffic migration to newer Gateways [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
You may be impacted if you:
- Hard coded the IP address for any particular gateway in your on-premises firewall
- Have any subnets using Microsoft.SQL as a Service Endpoint but cannot communicate with the gateway IP addresses
-- Use the [zone redundant configuration for general purpose tier](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)
+- Use the [zone redundant configuration for general purpose tier](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability)
- Use the [zone redundant configuration for premium & business critical tiers](high-availability-sla.md#premium-and-business-critical-service-tier-zone-redundant-availability)

You will not be impacted if you have:
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/high-availability-sla.md
Previously updated : 03/02/2022 Last updated : 04/13/2022 # High availability for Azure SQL Database and SQL Managed Instance
The standard availability model includes two layers:
Whenever the database engine or the operating system is upgraded, or a failure is detected, Azure Service Fabric will move the stateless `sqlservr.exe` process to another stateless compute node with sufficient free capacity. Data in Azure Blob storage is not affected by the move, and the data/log files are attached to the newly initialized `sqlservr.exe` process. This process guarantees 99.99% availability, but a heavy workload may experience some performance degradation during the transition since the new `sqlservr.exe` process starts with cold cache.
-## General Purpose service tier zone redundant availability (Preview)
+## General Purpose service tier zone redundant availability
Zone-redundant configuration for the general purpose service tier is offered for both serverless and provisioned compute. This configuration utilizes [Azure Availability Zones](../../availability-zones/az-overview.md) to replicate databases across multiple physical locations within an Azure region. By selecting zone-redundancy, you can make your new and existing serverless and provisioned general purpose single databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic.
The zone-redundant version of the high availability architecture for the general
![Zone redundant configuration for general purpose](./media/high-availability-sla/zone-redundant-for-general-purpose.png)

> [!IMPORTANT]
-> Zone-redundant configuration is not available in SQL Managed Instance. In SQL Database this feature is only available when the Gen5 hardware is selected. Additionally, for serverless and provisioned general purpose tier, the zone-redundant configuration is only available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia East, Japan East, UK South, and France Central.
+> For the general purpose tier, the zone-redundant configuration is generally available in the following regions: West Europe, North Europe, West US 2, and France Central. It is in preview in the following regions: East US, East US 2, Southeast Asia, Australia East, Japan East, and UK South.
> [!NOTE]
-> General Purpose databases with a size of 80 vcore may experience performance degradation with zone-redundant configuration. Additionally, operations such as backup, restore, database copy, setting up Geo-DR relationships, and downgrading a zone-redundant database from Business Critical to General Purpose may experience slower performance for any single databases larger than 1 TB. Please see our [latency documentation on scaling a database](single-database-scale.md) for more information.
->
-> [!NOTE]
-> The preview is not covered under Reserved Instance
+> Zone-redundant configuration is not available in SQL Managed Instance. In SQL Database this feature is only available when the Gen5 hardware is selected.
+ ## Premium and Business Critical service tier locally redundant availability
azure-sql Resource Limits Dtu Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-dtu-elastic-pools.md
Previously updated : 01/18/2022 Last updated : 04/13/2022 # Resources limits for elastic pools using the DTU purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
azure-sql Resource Limits Vcore Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-vcore-elastic-pools.md
Previously updated : 01/18/2022 Last updated : 04/13/2022 # Resource limits for elastic pools using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
vCore resource limits are listed in the following articles, please be sure to up
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000|
|Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1, 2|0, 0.25, 0.5, 1...4|0, 0.25, 0.5, 1...6|0, 0.25, 0.5, 1...8|0, 0.25, 0.5, 1...10|0, 0.25, 0.5, 1...12|0, 0.25, 0.5, 1...14|
|Number of replicas|1|1|1|1|1|1|1|
-|Multi-AZ|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|
+|Multi-AZ|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
vCore resource limits are listed in the following articles, please be sure to up
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000|
|Min/max elastic pool vCore choices per database|0, 0.25, 0.5, 1...16|0, 0.25, 0.5, 1...18|0, 0.25, 0.5, 1...20|0, 0.25, 0.5, 1...20, 24|0, 0.25, 0.5, 1...20, 24, 32|0, 0.25, 0.5, 1...16, 24, 32, 40|0, 0.25, 0.5, 1...16, 24, 32, 40, 80|
|Number of replicas|1|1|1|1|1|1|1|
-|Multi-AZ|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|
+|Multi-AZ|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-vcore-single-databases.md
Previously updated : 03/02/2022 Last updated : 04/13/2022 # Resource limits for single databases using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max concurrent workers|75|150|300|450|600|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|
|Number of replicas|1|1|1|1|1|
-|Multi-AZ|N/A|N/A|N/A|N/A|N/A|
+|Multi-AZ|Yes|Yes|Yes|Yes|Yes|
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|
|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max concurrent workers|750|900|1050|1200|
|Max concurrent sessions|30,000|30,000|30,000|30,000|
|Number of replicas|1|1|1|1|
-|Multi-AZ|N/A|N/A|N/A|N/A|
+|Multi-AZ|Yes|Yes|Yes|Yes|
|Read Scale-out|N/A|N/A|N/A|N/A|
|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max concurrent workers|1350|1500|1800|2400|3000|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|
|Number of replicas|1|1|1|1|1|
-|Multi-AZ|N/A|N/A|N/A|N/A|N/A|
+|Multi-AZ|Yes|Yes|Yes|Yes|Yes|
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|
|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max concurrent workers|200|400|600|800|1000|1200|1400|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000|
|Number of replicas|1|1|1|1|1|1|1|
-|Multi-AZ|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|
+|Multi-AZ|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max concurrent workers|1600|1800|2000|2400|3200|4000|8000|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000|
|Number of replicas|1|1|1|1|1|1|1|
-|Multi-AZ|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)|
+|Multi-AZ|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
azure-sql Security Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/security-best-practice.md
Previously updated : 11/10/2021 Last updated : 04/13/2022
Most security standards address data availability in terms of operational contin
- Additional business continuity features such as the zone redundant configuration and auto-failover groups across different Azure geos can be configured:
  - [High-availability - Zone redundant configuration for Premium & Business Critical service tiers](high-availability-sla.md#premium-and-business-critical-service-tier-zone-redundant-availability)
- - [High-availability - Zone redundant configuration for General Purpose service tier](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview)
+ - [High-availability - Zone redundant configuration for General Purpose service tier](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability)
  - [Overview of business continuity](business-continuity-high-availability-disaster-recover-hadr-overview.md)

## Next steps
azure-sql Service Tier Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-business-critical.md
Previously updated : 03/09/2022 Last updated : 04/13/2022 # Business Critical tier - Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
The following table shows resource limits for both Azure SQL Database and Azure
| **Storage size** | 1 GB – 4 TB |32 GB – 16 TB |
| **Tempdb size** | [32 GB per vCore](resource-limits-vcore-single-databases.md) |Up to 4 TB - [limited by storage size](../managed-instance/resource-limits.md#service-tier-characteristics) |
| **Log write throughput** | Single databases: [12 MB/s per vCore (max 96 MB/s)](resource-limits-vcore-single-databases.md) <br> Elastic pools: [15 MB/s per vCore (max 120 MB/s)](resource-limits-vcore-elastic-pools.md) | [4 MB/s per vCore (max 48 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) |
-| **Availability** | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/)|
+| **Availability** | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) <br/> See [zone-redundant availability](high-availability-sla.md#premium-and-business-critical-service-tier-zone-redundant-availability) | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/)|
| **Backups** | RA-GRS, 1-35 days (7 days by default) | RA-GRS, 1-35 days (7 days by default)|
| [**Read-only replicas**](read-scale-out.md) |1 built-in high availability replica is readable <br> 0 - 4 [geo-replicas](active-geo-replication-overview.md) |1 built-in high availability replica is readable <br> 0 - 1 geo-replicas using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) |
| **Pricing/Billing** |[vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/> High availability replicas aren't charged. <br/>IOPS isn't charged. |[vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/> High availability replicas aren't charged. <br/>IOPS isn't charged. |
azure-sql Service Tier General Purpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-general-purpose.md
Previously updated : 02/02/2022 Last updated : 04/13/2022 # General Purpose service tier - Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
In the architectural model for the General Purpose service tier, there are two l
- A stateless compute layer that is running the `sqlservr.exe` process and contains only transient and cached data (for example – plan cache, buffer pool, column store pool). This stateless node is operated by Azure Service Fabric that initializes the process, controls health of the node, and performs failover to another place if necessary.
- A stateful data layer with database files (.mdf/.ldf) that are stored in Azure Blob storage. Azure Blob storage guarantees that there will be no data loss of any record that is placed in any database file. Azure Storage has built-in data availability/redundancy that ensures that every record in the log file or page in the data file will be preserved even if the process crashes.
-Whenever the database engine or operating system is upgraded, some part of underlying infrastructure fails, or if some critical issue is detected in the `sqlservr.exe` process, Azure Service Fabric will move the stateless process to another stateless compute node. There is a set of spare nodes that is waiting to run new compute service if a failover of the primary node happens in order to minimize failover time. Data in Azure storage layer is not affected, and data/log files are attached to newly initialized process. This process guarantees 99.99% availability, but it might have some performance impacts on heavy workloads that are running due to transition time and the fact the new node starts with cold cache.
+Whenever the database engine or operating system is upgraded, some part of underlying infrastructure fails, or if some critical issue is detected in the `sqlservr.exe` process, Azure Service Fabric will move the stateless process to another stateless compute node. There is a set of spare nodes waiting to run the new compute service if a failover of the primary node happens, in order to minimize failover time. Data in the Azure storage layer is not affected, and data/log files are attached to the newly initialized process. This process guarantees 99.99% availability, with additional resilience when [zone redundancy](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability) is enabled. There may be some performance impacts on heavy workloads that are running due to transition time and the fact the new node starts with a cold cache.
## When to choose this service tier
The following table shows resource limits for both Azure SQL Database and Azure
| **Storage size** | 1 GB - 4 TB | 2 GB - 16 TB|
| **Tempdb size** | [32 GB per vCore](resource-limits-vcore-single-databases.md) | [24 GB per vCore](../managed-instance/resource-limits.md#service-tier-characteristics) |
| **Log write throughput** | Single databases: [4.5 MB/s per vCore (max 50 MB/s)](resource-limits-vcore-single-databases.md) <br> Elastic pools: [6 MB/s per vCore (max 62.5 MB/s)](resource-limits-vcore-elastic-pools.md) | [3 MB/s per vCore (max 22 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics)|
-| **Availability** | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/)|
+| **Availability** | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) <br/> See [zone-redundant availability](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability) | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/)|
| **Backups** | 1-35 days (7 days by default) | 1-35 days (7 days by default)|
| [**Read-only replicas**](read-scale-out.md) | 0 built-in </br> 0 - 4 [geo-replicas](active-geo-replication-overview.md) | 0 built-in </br> 0 - 1 geo-replicas using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) |
| **Pricing/Billing** | [vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged.| [vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/>IOPS is not charged. |
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-sql-database-vcore.md
Previously updated : 04/06/2022 Last updated : 04/13/2022 # vCore purchasing model - Azure SQL Database
For greater details, review resource limits for [logical server](resource-limits
|**Use case**|**General Purpose**|**Business Critical**|**Hyperscale**|
|---|---|---|---|
|**Best for**|Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. |Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance per database replica.|Most business workloads with highly scalable storage and read-scale requirements. Offers higher resilience to failures by allowing configuration of more than one isolated database replica. |
-|**Availability**|1 replica, no read-scale replicas, <br/>zone-redundant high availability (HA) (preview)|3 replicas, 1 [read-scale replica](read-scale-out.md),<br/>zone-redundant high availability (HA)|zone-redundant high availability (HA) (preview)|
+|**Availability**|1 replica, no read-scale replicas, <br/>zone-redundant high availability (HA)|3 replicas, 1 [read-scale replica](read-scale-out.md),<br/>zone-redundant high availability (HA)|zone-redundant high availability (HA) (preview)|
|**Pricing/billing** | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. |[vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. | [vCore for each replica and used storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS not yet charged. |
|**Discount models**| [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions | [Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
azure-sql Auto Failover Group Configure Sql Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
Consider the following prerequisites:
- The subnet range for the secondary virtual network must not overlap the subnet range of the primary virtual network.
- The collation and time zone of the secondary managed instance must match that of the primary managed instance.
- When connecting the two gateways, the **Shared Key** should be the same for both connections.
-- You will need to either configure [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md) or create a gateway for the virtual network of each SQL Managed Instance, connect the two gateways, and then create the failover group.
+- You'll need to either configure [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md) or create a gateway for the virtual network of each SQL Managed Instance, connect the two gateways, and then create the failover group.
- Deploy both managed instances to [paired regions](../../availability-zones/cross-region-replication-azure.md) for performance reasons. Managed instances residing in geo-paired regions have much better performance compared to unpaired regions.

## Create primary virtual network gateway
-If you have not configured [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md), you can create the primary virtual network gateway with the Azure portal, or PowerShell.
+If you haven't configured [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md), you can create the primary virtual network gateway with the Azure portal, or PowerShell.
> [!NOTE]
> The SKU of the gateway affects throughput performance. This article deploys a gateway with the most basic SKU (`HwGw1`). Deploy a higher SKU (for example, `VpnGw3`) to achieve higher throughput. For all available options, see [Gateway SKUs](../../vpn-gateway/vpn-gateway-about-vpngateways.md#benchmark).
Create the failover group for your managed instances by using the Azure portal o
Create the failover group for your SQL Managed Instances by using the Azure portal.
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** isn't in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
1. Select the primary managed instance you want to add to the failover group.
1. Under **Settings**, navigate to **Instance Failover Groups** and then choose **Add group** to open the **Instance Failover Group** page.
Create the failover group for your SQL Managed Instances by using the Azure port
![Create failover group](./media/auto-failover-group-configure-sql-mi/create-failover-group.png)
-1. Once failover group deployment is complete, you will be taken back to the **Failover group** page.
+1. Once failover group deployment is complete, you'll be taken back to the **Failover group** page.
# [PowerShell](#tab/azure-powershell)
The listener endpoint is in the form of `fog-name.database.windows.net`, and is
You can create a failover group between SQL Managed Instances in two different subscriptions, as long as the subscriptions are associated with the same [Azure Active Directory Tenant](../../active-directory/fundamentals/active-directory-whatis.md#terminology). When using the PowerShell API, you can do this by specifying the `PartnerSubscriptionId` parameter for the secondary SQL Managed Instance. When using the REST API, each instance ID included in the `properties.managedInstancePairs` parameter can have its own Subscription ID.

> [!IMPORTANT]
-> Azure portal does not support creation of failover groups across different subscriptions. Also, for the existing failover groups across different subscriptions and/or resource groups, failover cannot be initiated manually via portal from the primary SQL Managed Instance. Initiate it from the geo-secondary instance instead.
+> Azure portal does not support creation of failover groups across different subscriptions. Also, for the existing failover groups across different subscriptions and/or resource groups, failover can't be initiated manually via portal from the primary SQL Managed Instance. Initiate it from the geo-secondary instance instead.
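To make the `properties.managedInstancePairs` shape concrete, here is a hedged C# sketch using the .NET management SDK; all resource IDs and names are illustrative assumptions, and the cross-subscription pairing is expressed simply by using full resource IDs that carry different subscription IDs:

```csharp
using Microsoft.Azure.Management.Sql;
using Microsoft.Azure.Management.Sql.Models;
using Microsoft.Rest;

// Hedged sketch: create an instance failover group whose managed instance
// pair spans two subscriptions (each resource ID embeds its own subscription).
static InstanceFailoverGroup CreateCrossSubscriptionFailoverGroup(
    ServiceClientCredentials credentials)
{
    var sqlClient = new SqlManagementClient(credentials)
    {
        SubscriptionId = "<primary-subscription-id>"
    };

    var parameters = new InstanceFailoverGroup
    {
        ReadWriteEndpoint = new InstanceFailoverGroupReadWriteEndpoint(
            "Automatic", failoverWithDataLossGracePeriodMinutes: 60),
        PartnerRegions = new[] { new PartnerRegionInfo { Location = "northeurope" } },
        ManagedInstancePairs = new[]
        {
            new ManagedInstancePairInfo
            {
                PrimaryManagedInstanceId =
                    "/subscriptions/<primary-subscription-id>/resourceGroups/rg-primary/providers/Microsoft.Sql/managedInstances/mi-primary",
                PartnerManagedInstanceId =
                    "/subscriptions/<partner-subscription-id>/resourceGroups/rg-secondary/providers/Microsoft.Sql/managedInstances/mi-secondary"
            }
        }
    };

    return sqlClient.InstanceFailoverGroups.CreateOrUpdate(
        "rg-primary", "westeurope", "my-failover-group", parameters);
}
```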
## Change the secondary region

Let's assume that instance A is the primary instance, instance B is the existing secondary instance, and instance C is the new secondary instance in the third region. To make the transition, follow these steps:

1. Create instance C with the same size as A and in the same DNS zone.
-2. Delete the failover group between instances A and B. At this point the logins will be failing because the SQL aliases for the failover group listeners have been deleted and the gateway will not recognize the failover group name. The secondary databases will be disconnected from the primaries and will become read-write databases.
+2. Delete the failover group between instances A and B. At this point the logins will be failing because the SQL aliases for the failover group listeners have been deleted and the gateway won't recognize the failover group name. The secondary databases will be disconnected from the primaries and will become read-write databases.
3. Create a failover group with the same name between instances A and C. Follow the instructions in the [failover group with SQL Managed Instance tutorial](failover-group-add-instance-tutorial.md). This is a size-of-data operation and will complete when all databases from instance A are seeded and synchronized.
4. Delete instance B if it's no longer needed, to avoid unnecessary charges.
Let's assume instance A is the primary instance, instance B is the existing seco
> After step 3 and until step 4 is completed, the databases in instance A will remain unprotected from a catastrophic failure of instance A.

> [!IMPORTANT]
-> When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a non-zero probability of somebody else creating a failover group with the same name. Because failover group names must be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover group names.
+> When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there's a non-zero probability of somebody else creating a failover group with the same name. Because failover group names must be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover group names.
## <a name="enabling-geo-replication-between-managed-instances-and-their-vnets"></a> Enabling geo-replication between MI virtual networks
When you set up a failover group between primary and secondary SQL Managed Insta
- The two instances of SQL Managed Instance need to be in different Azure regions.
- The two instances of SQL Managed Instance need to be the same service tier, and have the same storage size.
- Your secondary instance of SQL Managed Instance must be empty (no user databases).
-- The virtual networks used by the instances of SQL Managed Instance need to be connected through a [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [Express Route](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md). When two virtual networks connect through an on-premises network, ensure there is no firewall rule blocking ports 5022, and 11000-11999. Global VNet Peering is supported with the limitation described in the note below.
+- The virtual networks used by the instances of SQL Managed Instance need to be connected through a [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [Express Route](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md). When two virtual networks connect through an on-premises network, ensure there's no firewall rule blocking ports 5022, and 11000-11999. Global VNet Peering is supported with the limitation described in the note below.
> [!IMPORTANT]
> [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). It means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well as for all the subsequent managed instances created in those subnets. For all other SQL managed instances, peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring a non-default [maintenance window](../database/maintenance-window.md) on the instances, as it will move the instances into new virtual clusters that support global virtual network peering.

-- The two SQL Managed Instance VNets cannot have overlapping IP addresses.
+- The two SQL Managed Instance VNets can't have overlapping IP addresses.
- You need to set up your Network Security Groups (NSG) such that ports 5022 and the range 11000~12000 are open inbound and outbound for connections from the subnet of the other managed instance. This is to allow replication traffic between the instances.

> [!IMPORTANT]
> Misconfigured NSG security rules lead to stuck database seeding operations.

-- The secondary SQL Managed Instance is configured with the correct DNS zone ID. DNS zone is a property of a SQL Managed Instance and underlying virtual cluster, and its ID is included in the host name address. The zone ID is generated as a random string when the first SQL Managed Instance is created in each VNet and the same ID is assigned to all other instances in the same subnet. Once assigned, the DNS zone cannot be modified. SQL Managed Instances included in the same failover group must share the DNS zone. You accomplish this by passing the primary instance's zone ID as the value of DnsZonePartner parameter when creating the secondary instance.
+- The secondary SQL Managed Instance is configured with the correct DNS zone ID. DNS zone is a property of a SQL Managed Instance and underlying virtual cluster, and its ID is included in the host name address. The zone ID is generated as a random string when the first SQL Managed Instance is created in each VNet and the same ID is assigned to all other instances in the same subnet. Once assigned, the DNS zone can't be modified. SQL Managed Instances included in the same failover group must share the DNS zone. You accomplish this by passing the primary instance's zone ID as the value of the `DnsZonePartner` parameter when creating the secondary instance (a sketch follows the note below).
> [!NOTE]
> For a detailed tutorial on configuring failover groups with SQL Managed Instance, see [add a SQL Managed Instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md).
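For illustration, here is a hedged C# sketch of the `DnsZonePartner` handshake with the .NET management SDK; names are placeholders, and the other settings a real managed instance deployment requires (SKU, subnet, administrator credentials) are deliberately omitted:

```csharp
using Microsoft.Azure.Management.Sql;
using Microsoft.Azure.Management.Sql.Models;
using Microsoft.Rest;

// Hedged sketch: the secondary instance inherits the primary's DNS zone by
// passing the primary's resource ID as DnsZonePartner at creation time.
static ManagedInstance CreateSecondaryWithSharedDnsZone(
    ServiceClientCredentials credentials, string primaryInstanceResourceId)
{
    var sqlClient = new SqlManagementClient(credentials)
    {
        SubscriptionId = "<subscription-id>"
    };

    var secondary = new ManagedInstance(location: "northeurope")
    {
        // Once assigned, the DNS zone cannot be changed, so this must be
        // set when the secondary instance is created.
        DnsZonePartner = primaryInstanceResourceId
        // Sku, SubnetId, administrator login/password, and other required
        // properties are omitted from this sketch.
    };

    return sqlClient.ManagedInstances.CreateOrUpdate(
        "myResourceGroup", "mi-secondary", secondary);
}
```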
When you set up a failover group between primary and secondary SQL Managed Insta
<!--
-There is some overlap of content in the following articles, be sure to make changes to all if necessary:
+There's some overlap of content in the following articles; be sure to make changes to all if necessary:
/azure-sql/auto-failover-group-overview.md
/azure-sql/database/auto-failover-group-sql-db.md
/azure-sql/database/auto-failover-group-configure-sql-db.md
There is some overlap of content in the following articles, be sure to make chan
Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-Azure RBAC write access is necessary to create and manage failover groups. The [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role has all the necessary permissions to manage failover groups.
+Azure RBAC write access is necessary to create and manage failover groups. The [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role has all the necessary permissions to manage failover groups.
The following table lists specific permission scopes for Azure SQL Managed Instance:

| **Action** | **Permission** | **Scope**|
| :- | :- | :- |
|**Create failover group**| Azure RBAC write access | Primary managed instance </br> Secondary managed instance|
-| **Update failover group** Azure RBAC write access | Failover group </br> All databases within the managed instance|
+| **Update failover group**| Azure RBAC write access | Failover group </br> All databases within the managed instance|
| **Fail over failover group** | Azure RBAC write access | Failover group on new primary managed instance |
batch Batch Apis Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-apis-tools.md
Your applications and services can issue direct REST API calls or use one or mor
| --- | --- | --- | --- | --- | --- |
| **Batch REST** |[Azure REST API - Docs](/rest/api/batchservice/) |N/A |- |- | [Supported versions](/rest/api/batchservice/batch-service-rest-api-versioning) |
| **Batch .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Batch/) |[Tutorial](tutorial-parallel-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) | [Release notes](https://aka.ms/batch-net-dataplane-changelog) |
-| **Batch Python** |[Azure SDK for Python - Docs](/python/api/overview/azure/batch/client) |[PyPI](https://pypi.org/project/azure-batch/) |[Tutorial](tutorial-parallel-python.md)|[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Python/Batch) | [Readme](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/batch/azure-batch/README.md) |
+| **Batch Python** |[Azure SDK for Python - Docs](/python/api/overview/azure/batch) |[PyPI](https://pypi.org/project/azure-batch/) |[Tutorial](tutorial-parallel-python.md)|[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Python/Batch) | [Readme](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/batch/azure-batch/README.md) |
| **Batch JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/batch) |[npm](https://www.npmjs.com/package/@azure/batch) |[Tutorial](batch-js-get-started.md) |- | [Readme](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/batch/batch) |
| **Batch Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Java) | [Readme](https://github.com/Azure/azure-batch-sdk-for-java)|
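As a minimal illustration of how the data-plane client libraries in this table are used, here is a hedged Batch .NET sketch that opens a client and lists the account's pools; the account URL, name, and key are placeholders:

```csharp
using System;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

// Hedged sketch: authenticate with shared-key credentials (placeholders)
// and enumerate the pools in the Batch account.
var credentials = new BatchSharedKeyCredentials(
    "https://<account>.<region>.batch.azure.com", // assumed account URL
    "<account-name>",
    "<account-key>");

using (BatchClient batchClient = BatchClient.Open(credentials))
{
    foreach (CloudPool pool in batchClient.PoolOperations.ListPools())
    {
        Console.WriteLine($"{pool.Id}: {pool.State}");
    }
}
```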
The Azure Resource Manager APIs for Batch provide programmatic access to Batch a
| --- | --- | --- | --- | --- |
| **Batch Management REST** |[Azure REST API - Docs](/rest/api/batchmanagement/) |- |- |[GitHub](https://github.com/Azure-Samples/batch-dotnet-manage-batch-accounts) |
| **Batch Management .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch/management) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.Batch/) | [Tutorial](batch-management-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) |
-| **Batch Management Python** |[Azure SDK for Python - Docs](/python/api/overview/azure/batch/management) |[PyPI](https://pypi.org/project/azure-mgmt-batch/) |- |- |
+| **Batch Management Python** |[Azure SDK for Python - Docs](/samples/azure-samples/azure-samples-python-management/batch/) |[PyPI](https://pypi.org/project/azure-mgmt-batch/) |- |- |
| **Batch Management JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/arm-batch-readme) |[npm](https://www.npmjs.com/package/@azure/arm-batch) |- |- |
| **Batch Management Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch/management) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |- |
cognitive-services Csharptutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Tutorials/CSharpTutorial.md
- Title: "Sample: Explore an image processing app in C#"-
-description: Explore a basic Windows app that uses the Computer Vision API in Azure Cognitive Services. Perform OCR, create thumbnails, and work with visual features in an image.
------ Previously updated : 10/27/2021----
-# Sample: Explore an image processing app with C#
-
-Explore a basic Windows application that uses Computer Vision to perform optical character recognition (OCR), create smart-cropped thumbnails, and detect, categorize, tag, and describe visual features, including faces, in an image. The example below lets you submit an image URL or a locally stored file. You can use this open source example as a template for building your own app for Windows using the Computer Vision API and Windows Presentation Foundation (WPF), a part of the .NET Framework.
-
-> [!div class="checklist"]
-> * Get the sample app from GitHub
-> * Open and build the sample app in Visual Studio
-> * Run the sample app and interact with it to perform various scenarios
-> * Explore the various scenarios included with the sample app
-
-## Prerequisites
-
-Before exploring the sample app, ensure that you've met the following prerequisites:
-
-* You must have [Visual Studio 2015](https://visualstudio.microsoft.com/downloads/) or later.
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
- * You will need the key and endpoint from the resource you create to connect your application to the Computer Vision service. You'll paste your key and endpoint into the code below later in the quickstart.
- * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-
-## Get the sample app
-
-The Computer Vision sample app is available on GitHub from the [Microsoft/Cognitive-Vision-Windows repository](https://github.com/microsoft/Cognitive-Vision-Windows). This repository also includes the `Microsoft/Cognitive-Common-Windows` repository as a Git submodule. You can recursively clone this repository, including the submodule, either by using the `git clone --recurse-submodules` command from the command line, or by using GitHub Desktop.
-
-For example, to recursively clone the repository for the Computer Vision sample app from a command prompt, run the following command:
-
-```Console
-git clone --recurse-submodules https://github.com/Microsoft/Cognitive-Vision-Windows.git
-```
-
-> [!IMPORTANT]
-> Do not download this repository as a _.zip_ file. Git doesn't include submodules when downloading a repository as a _.zip_.
-
-### Get optional sample images
-
-You can optionally use the sample images included with the [Face](../../Face/Overview.md) sample app, available on GitHub from the `Microsoft/Cognitive-Face-Windows` repository. That sample app includes a folder, `/Data`, which contains multiple images of people. You can recursively clone this repository, as well, by the methods described for the Computer Vision sample app.
-
-For example, to recursively clone the repository for the Face sample app from a command prompt, run the following command:
-
-```Console
-git clone --recurse-submodules https://github.com/Microsoft/Cognitive-Face-Windows.git
-```
-
-## Open and build the sample app in Visual Studio
-
-You must build the sample app first, so that Visual Studio can resolve dependencies, before you can run or explore the sample app. To open and build the sample app, do the following steps:
-
-1. Open the Visual Studio solution file, `/Sample-WPF/VisionAPI-WPF-Samples.sln`, in Visual Studio.
-1. Ensure that the Visual Studio solution contains two projects:
-
- * SampleUserControlLibrary
- * VisionAPI-WPF-Samples
-
- If the SampleUserControlLibrary project is unavailable, confirm that you've recursively cloned the `Microsoft/Cognitive-Vision-Windows` repository.
-1. In Visual Studio, either press Ctrl+Shift+B or choose **Build** from the ribbon menu and then choose **Build Solution** to build the solution.
-
-## Run and interact with the sample app
-
-You can run the sample app, to see how it interacts with you and with the Computer Vision client library when performing various tasks, such as generating thumbnails or tagging images. To run and interact with the sample app, do the following steps:
-
-1. After the build is complete, either press **F5** or choose **Debug** from the ribbon menu and then choose **Start debugging** to run the sample app.
-1. When the sample app is displayed, choose **Subscription Key Management** from the navigation pane to display the Subscription Key Management page.
- ![Subscription Key Management page](../Images/Vision_UI_Subscription.PNG)
-1. Enter your subscription key in **Subscription Key**.
-1. Enter the endpoint URL in **Endpoint**.
- [!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]
-1. If you don't want to enter your subscription key and endpoint URL the next time you run the sample app, choose **Save Setting** to save the subscription key and endpoint URL to your computer. If you want to delete your previously-saved subscription key and endpoint URL, choose **Delete Setting**.
-
- > [!NOTE]
- > The sample app uses isolated storage, and `System.IO.IsolatedStorage`, to store your subscription key and endpoint URL.
-
-1. Under **Select a scenario** in the navigation pane, select one of the scenarios currently included with the sample app:
-
- | Scenario | Description |
- |-|-|
- |Analyze Image | Uses the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) operation to analyze a local or remote image. You can choose the visual features and language for the analysis, and see both the image and the results. |
- |Analyze Image with Domain Model | Uses the [List Domain Specific Models](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fd) operation to list the domain models from which you can select, and the [Recognize Domain Specific Content](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e200) operation to analyze a local or remote image using the selected domain model. You can also choose the language for the analysis. |
- |Describe Image | Uses the [Describe Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fe) operation to create a human-readable description of a local or remote image. You can also choose the language for the description. |
- |Generate Tags | Uses the [Tag Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1ff) operation to tag the visual features of a local or remote image. You can also choose the language used for the tags. |
- |Recognize Text (OCR) | Uses the [OCR](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fc) operation to recognize and extract printed text from an image. You can either choose the language to use, or let Computer Vision auto-detect the language. |
- |Recognize Text V2 (English) | Uses the [Recognize Text](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/587f2c6a154055056008f200) and [Get Recognize Text Operation Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/587f2cf1154055056008f201) operations to asynchronously recognize and extract printed or handwritten text from an image. |
- |Get Thumbnail | Uses the [Get Thumbnail](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fb) operation to generate a thumbnail for a local or remote image. |
-
- The following screenshot illustrates the page provided for the Analyze Image scenario, after analyzing a sample image.
- ![Screenshot of the Analyze image page](../Images/Analyze_Image_Example.PNG)
-
-## Explore the sample app
-
-The Visual Studio solution for the Computer Vision sample app contains two projects:
-
-* SampleUserControlLibrary
- The SampleUserControlLibrary project provides functionality shared by multiple Cognitive Services samples. The project contains the following:
- * SampleScenarios
- A UserControl that provides a standardized presentation, such as the title bar, navigation pane, and content pane, for samples. The Computer Vision sample app uses this control in the MainWindow.xaml window to display scenario pages and access information shared across scenarios, such as the subscription key and endpoint URL.
- * SubscriptionKeyPage
- A Page that provides a standardized layout for entering a subscription key and endpoint URL for the sample app. The Computer Vision sample app uses this page to manage the subscription key and endpoint URL used by the scenario pages.
- * VideoResultControl
- A UserControl that provides a standardized presentation for video information. The Computer Vision sample app doesn't use this control.
-* VisionAPI-WPF-Samples
- The main project for the Computer Vision sample app, this project contains all of the interesting functionality for Computer Vision. The project contains the following:
- * AnalyzeInDomainPage.xaml
- The scenario page for the Analyze Image with Domain Model scenario.
- * AnalyzeImage.xaml
- The scenario page for the Analyze Image scenario.
- * DescribePage.xaml
- The scenario page for the Describe Image scenario.
- * ImageScenarioPage.cs
- The ImageScenarioPage class, from which all of the scenario pages in the sample app are derived. This class manages functionality, such as providing credentials and formatting output, shared by all of the scenario pages.
- * MainWindow.xaml
- The main window for the sample app, it uses the SampleScenarios control to present the SubscriptionKeyPage and scenario pages.
- * OCRPage.xaml
- The scenario page for the Recognize Text (OCR) scenario.
- * RecognizeLanguage.cs
- The RecognizeLanguage class, which provides information about the languages supported by the various methods in the sample app.
- * TagsPage.xaml
- The scenario page for the Generate Tags scenario.
- * TextRecognitionPage.xaml
- The scenario page for the Recognize Text V2 (English) scenario.
- * ThumbnailPage.xaml
- The scenario page for the Get Thumbnail scenario.
-
-### Explore the sample code
-
-Key portions of sample code are framed with comment blocks that start with `KEY SAMPLE CODE STARTS HERE` and end with `KEY SAMPLE CODE ENDS HERE`, to make it easier for you to explore the sample app. These key portions of sample code contain the code most relevant to learning how to use the Computer Vision API client library to do various tasks. You can search for `KEY SAMPLE CODE STARTS HERE` in Visual Studio to move between the most relevant sections of code in the Computer Vision sample app.
-
-For example, the `UploadAndAnalyzeImageAsync` method, shown below and included in AnalyzePage.xaml, demonstrates how to use the client library to analyze a local image by invoking the `ComputerVisionClient.AnalyzeImageInStreamAsync` method.
-
-```csharp
-private async Task<ImageAnalysis> UploadAndAnalyzeImageAsync(string imageFilePath)
-{
- // --
- // KEY SAMPLE CODE STARTS HERE
- // --
-
- //
- // Create Cognitive Services Vision API Service client.
- //
- using (var client = new ComputerVisionClient(Credentials) { Endpoint = Endpoint })
- {
- Log("ComputerVisionClient is created");
-
- using (Stream imageFileStream = File.OpenRead(imageFilePath))
- {
- //
- // Analyze the image for all visual features.
- //
- Log("Calling ComputerVisionClient.AnalyzeImageInStreamAsync()...");
- VisualFeatureTypes[] visualFeatures = GetSelectedVisualFeatures();
- string language = (_language.SelectedItem as RecognizeLanguage).ShortCode;
- ImageAnalysis analysisResult = await client.AnalyzeImageInStreamAsync(imageFileStream, visualFeatures, null, language);
- return analysisResult;
- }
- }
-
- // --
- // KEY SAMPLE CODE ENDS HERE
- // --
-}
-```
-
-### Explore the client library
-
-This sample app uses the Computer Vision API client library, a thin C# client wrapper for the Computer Vision API in Azure Cognitive Services. The client library is available from NuGet in the [Microsoft.Azure.CognitiveServices.Vision.ComputerVision](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.ComputerVision/) package. When you built the Visual Studio application, you retrieved the client library from its corresponding NuGet package. You can also view the source code for the client library in the `/ClientLibrary` folder of the `Microsoft/Cognitive-Vision-Windows` repository.
-
-The client library's functionality centers around the `ComputerVisionClient` class, in the `Microsoft.Azure.CognitiveServices.Vision.ComputerVision` namespace, while the models used by the `ComputerVisionClient` class when interacting with Computer Vision are found in the `Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models` namespace. In the various XAML scenario pages included with the sample app, you'll find the following `using` directives for those namespaces:
-
-```csharp
-// --
-// KEY SAMPLE CODE STARTS HERE
-// Use the following namespace for ComputerVisionClient.
-// --
-using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
-using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
-// --
-// KEY SAMPLE CODE ENDS HERE
-// --
-```
-
-You'll learn more about the various methods included with the `ComputerVisionClient` class as you explore the scenarios included with the Computer Vision sample app.
-
-## Explore the Analyze Image scenario
-
-This scenario is managed by the AnalyzePage.xaml page. You can choose the visual features and language for the analysis, and see both the image and the results. The scenario page does this by using one of the following methods, depending on the source of the image:
-
-* UploadAndAnalyzeImageAsync
- This method is used for local images, in which the image must be encoded as a `Stream` and sent to Computer Vision by calling the `ComputerVisionClient.AnalyzeImageInStreamAsync` method.
-* AnalyzeUrlAsync
- This method is used for remote images, in which the URL for the image is sent to Computer Vision by calling the `ComputerVisionClient.AnalyzeImageAsync` method.
-
-The `UploadAndAnalyzeImageAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. Because the sample app is analyzing a local image, it has to send the contents of that image to Computer Vision. It opens the local file specified in `imageFilePath` for reading as a `Stream`, then gets the visual features and language selected in the scenario page. It calls the `ComputerVisionClient.AnalyzeImageInStreamAsync` method, passing the `Stream` for the file, the visual features, and the language, then returns the result as an `ImageAnalysis` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-The `AnalyzeUrlAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. It gets the visual features and language selected in the scenario page. It calls the `ComputerVisionClient.AnalyzeImageAsync` method, passing the image URL, the visual features, and the language, then returns the result as an `ImageAnalysis` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
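A condensed sketch of that remote-image path, following the pattern of the `UploadAndAnalyzeImageAsync` listing shown earlier (it reuses the sample app's `Credentials`, `Endpoint`, and helper members as described above, so treat it as illustrative rather than a verbatim excerpt):

```csharp
private async Task<ImageAnalysis> AnalyzeUrlAsync(string imageUrl)
{
    // Create the client the same way as in UploadAndAnalyzeImageAsync.
    using (var client = new ComputerVisionClient(Credentials) { Endpoint = Endpoint })
    {
        // For a remote image, only the URL is sent; no Stream is required.
        VisualFeatureTypes[] visualFeatures = GetSelectedVisualFeatures();
        string language = (_language.SelectedItem as RecognizeLanguage).ShortCode;
        ImageAnalysis analysisResult =
            await client.AnalyzeImageAsync(imageUrl, visualFeatures, null, language);
        return analysisResult;
    }
}
```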
-
-## Explore the Analyze Image with Domain Model scenario
-
-This scenario is managed by the AnalyzeInDomainPage.xaml page. You can choose a domain model, such as `celebrities` or `landmarks`, and language to perform a domain-specific analysis of the image, and see both the image and the results. The scenario page uses the following methods, depending on the source of the image:
-
-* GetAvailableDomainModelsAsync
- This method gets the list of available domain models from Computer Vision and populates the `_domainModelComboBox` ComboBox control on the page, using the `ComputerVisionClient.ListModelsAsync` method.
-* UploadAndAnalyzeInDomainImageAsync
- This method is used for local images, in which the image must be encoded as a `Stream` and sent to Computer Vision by calling the `ComputerVisionClient.AnalyzeImageByDomainInStreamAsync` method.
-* AnalyzeInDomainUrlAsync
- This method is used for remote images, in which the URL for the image is sent to Computer Vision by calling the `ComputerVisionClient.AnalyzeImageByDomainAsync` method.
-
-The `UploadAndAnalyzeInDomainImageAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. Because the sample app is analyzing a local image, it has to send the contents of that image to Computer Vision. It opens the local file specified in `imageFilePath` for reading as a `Stream`, then gets the language selected in the scenario page. It calls the `ComputerVisionClient.AnalyzeImageByDomainInStreamAsync` method, passing the `Stream` for the file, the name of the domain model, and the language, then returns the result as a `DomainModelResults` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-The `AnalyzeInDomainUrlAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. It gets the language selected in the scenario page. It calls the `ComputerVisionClient.AnalyzeImageByDomainAsync` method, passing the name of the domain model, the image URL, and the language, then returns the result as a `DomainModelResults` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
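In the same style, a condensed sketch of the remote-image domain analysis; the `"landmarks"` model name is an illustrative choice (the sample app uses whichever model is selected in `_domainModelComboBox`):

```csharp
private async Task<DomainModelResults> AnalyzeInDomainUrlAsync(string imageUrl)
{
    using (var client = new ComputerVisionClient(Credentials) { Endpoint = Endpoint })
    {
        // The domain model name (for example, "celebrities" or "landmarks")
        // is passed first, followed by the image URL and the language.
        string language = (_language.SelectedItem as RecognizeLanguage).ShortCode;
        DomainModelResults results =
            await client.AnalyzeImageByDomainAsync("landmarks", imageUrl, language);
        return results;
    }
}
```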
-
-## Explore the Describe Image scenario
-
-This scenario is managed by the DescribePage.xaml page. You can choose a language to create a human-readable description of the image, and see both the image and the results. The scenario page uses the following methods, depending on the source of the image:
-
-* UploadAndDescribeImageAsync
- This method is used for local images, in which the image must be encoded as a `Stream` and sent to Computer Vision by calling the `ComputerVisionClient.DescribeImageInStreamAsync` method.
-* DescribeUrlAsync
- This method is used for remote images, in which the URL for the image is sent to Computer Vision by calling the `ComputerVisionClient.DescribeImageAsync` method.
-
-The `UploadAndDescribeImageAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. Because the sample app is analyzing a local image, it has to send the contents of that image to Computer Vision. It opens the local file specified in `imageFilePath` for reading as a `Stream`, then gets the language selected in the scenario page. It calls the `ComputerVisionClient.DescribeImageInStreamAsync` method, passing the `Stream` for the file, the maximum number of candidates (in this case, 3), and the language, then returns the result as an `ImageDescription` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-The `DescribeUrlAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. It gets the language selected in the scenario page. It calls the `ComputerVisionClient.DescribeImageAsync` method, passing the image URL, the maximum number of candidates (in this case, 3), and the language, then returns the result as an `ImageDescription` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-## Explore the Generate Tags scenario
-
-This scenario is managed by the TagsPage.xaml page. You can choose a language to tag the visual features of an image, and see both the image and the results. The scenario page uses the following methods, depending on the source of the image:
-
-* UploadAndGetTagsForImageAsync
- This method is used for local images, in which the image must be encoded as a `Stream` and sent to Computer Vision by calling the `ComputerVisionClient.TagImageInStreamAsync` method.
-* GenerateTagsForUrlAsync
- This method is used for remote images, in which the URL for the image is sent to Computer Vision by calling the `ComputerVisionClient.TagImageAsync` method.
-
-The `UploadAndGetTagsForImageAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. Because the sample app is analyzing a local image, it has to send the contents of that image to Computer Vision. It opens the local file specified in `imageFilePath` for reading as a `Stream`, then gets the language selected in the scenario page. It calls the `ComputerVisionClient.TagImageInStreamAsync` method, passing the `Stream` for the file and the language, then returns the result as a `TagResult` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-The `GenerateTagsForUrlAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. It gets the language selected in the scenario page. It calls the `ComputerVisionClient.TagImageAsync` method, passing the image URL and the language, then returns the result as a `TagResult` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-## Explore the Recognize Text (OCR) scenario
-
-This scenario is managed by the OCRPage.xaml page. You can choose a language to recognize and extract printed text from an image, and see both the image and the results. The scenario page uses the following methods, depending on the source of the image:
-
-* UploadAndRecognizeImageAsync
- This method is used for local images, in which the image must be encoded as a `Stream` and sent to Computer Vision by calling the `ComputerVisionClient.RecognizePrintedTextInStreamAsync` method.
-* RecognizeUrlAsync
- This method is used for remote images, in which the URL for the image is sent to Computer Vision by calling the `ComputerVisionClient.RecognizePrintedTextAsync` method.
-
-The `UploadAndRecognizeImageAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. Because the sample app is analyzing a local image, it has to send the contents of that image to Computer Vision. It opens the local file specified in `imageFilePath` for reading as a `Stream`, then gets the language selected in the scenario page. It calls the `ComputerVisionClient.RecognizePrintedTextInStreamAsync` method, indicating that orientation is not detected and passing the `Stream` for the file and the language, then returns the result as an `OcrResult` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-The `RecognizeUrlAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. It gets the language selected in the scenario page. It calls the `ComputerVisionClient.RecognizePrintedTextAsync` method, indicating that orientation is not detected and passing the image URL and the language, then returns the result as an `OcrResult` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-## Explore the Recognize Text V2 (English) scenario
-
-This scenario is managed by the TextRecognitionPage.xaml page. You can choose the recognition mode and a language to asynchronously recognize and extract either printed or handwritten text from an image, and see both the image and the results. The scenario page uses the following methods, depending on the source of the image:
-
-* UploadAndRecognizeImageAsync
- This method is used for local images, in which the image must be encoded as a `Stream` and sent to Computer Vision by calling the `RecognizeAsync` method and passing a parameterized delegate for the `ComputerVisionClient.RecognizeTextInStreamAsync` method.
-* RecognizeUrlAsync
- This method is used for remote images, in which the URL for the image is sent to Computer Vision by calling the `RecognizeAsync` method and passing a parameterized delegate for the `ComputerVisionClient.RecognizeTextAsync` method.
-* RecognizeAsync
- This method handles the asynchronous calling for both the `UploadAndRecognizeImageAsync` and `RecognizeUrlAsync` methods, as well as polling for results by calling the `ComputerVisionClient.GetTextOperationResultAsync` method.
-
-Unlike the other scenarios included in the Computer Vision sample app, this scenario is asynchronous, in that one method is called to start the process, but a different method is called to check on the status and return the results of that process. The logical flow in this scenario is somewhat different from that in the other scenarios.
-
-The `UploadAndRecognizeImageAsync` method opens the local file specified in `imageFilePath` for reading as a `Stream`, then calls the `RecognizeAsync` method, passing:
-
-* A lambda expression for a parameterized asynchronous delegate of the `ComputerVisionClient.RecognizeTextInStreamAsync` method, with the `Stream` for the file and the recognition mode as parameters, in `GetHeadersAsyncFunc`.
-* A lambda expression for a delegate to get the `Operation-Location` response header value, in `GetOperationUrlFunc`.
-
-The `RecognizeUrlAsync` method calls the `RecognizeAsync` method, passing:
-
-* A lambda expression for a parameterized asynchronous delegate of the `ComputerVisionClient.RecognizeTextAsync` method, with the URL of the remote image and the recognition mode as parameters, in `GetHeadersAsyncFunc`.
-* A lambda expression for a delegate to get the `Operation-Location` response header value, in `GetOperationUrlFunc`.
-
-When the `RecognizeAsync` method is completed, both `UploadAndRecognizeImageAsync` and `RecognizeUrlAsync` methods return the result as a `TextOperationResult` instance. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-The `RecognizeAsync` method calls the parameterized delegate for either the `ComputerVisionClient.RecognizeTextInStreamAsync` or `ComputerVisionClient.RecognizeTextAsync` method passed in `GetHeadersAsyncFunc` and waits for the response. The method then calls the delegate passed in `GetOperationUrlFunc` to get the `Operation-Location` response header value from the response. This value is the URL used to retrieve the results of the method passed in `GetHeadersAsyncFunc` from Computer Vision.
-
-The `RecognizeAsync` method then calls the `ComputerVisionClient.GetTextOperationResultAsync` method, passing the URL retrieved from the `Operation-Location` response header, to get the status and result of the method passed in `GetHeadersAsyncFunc`. If the status doesn't indicate that the method completed, successfully or unsuccessfully, the `RecognizeAsync` method calls `ComputerVisionClient.GetTextOperationResultAsync` 3 more times, waiting 3 seconds between calls. The `RecognizeAsync` method returns the results to the method that called it.
-
-## Explore the Get Thumbnail scenario
-
-This scenario is managed by the ThumbnailPage.xaml page. You can indicate whether to use smart cropping, and specify desired height and width, to generate a thumbnail from an image, and see both the image and the results. The scenario page uses the following methods, depending on the source of the image:
-
-* UploadAndThumbnailImageAsync
- This method is used for local images, in which the image must be encoded as a `Stream` and sent to Computer Vision by calling the `ComputerVisionClient.GenerateThumbnailInStreamAsync` method.
-* ThumbnailUrlAsync
- This method is used for remote images, in which the URL for the image is sent to Computer Vision by calling the `ComputerVisionClient.GenerateThumbnailAsync` method.
-
-The `UploadAndThumbnailImageAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. Because the sample app is analyzing a local image, it has to send the contents of that image to Computer Vision. It opens the local file specified in `imageFilePath` for reading as a `Stream`. It calls the `ComputerVisionClient.GenerateThumbnailInStreamAsync` method, passing the width, height, the `Stream` for the file, and whether to use smart cropping, then returns the result as a `Stream`. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-The `ThumbnailUrlAsync` method creates a new `ComputerVisionClient` instance, using the specified subscription key and endpoint URL. It calls the `ComputerVisionClient.GenerateThumbnailAsync` method, passing the width, height, the URL for the image, and whether to use smart cropping, then returns the result as a `Stream`. The methods inherited from the `ImageScenarioPage` class present the returned results in the scenario page.
-
-## Clean up resources
-
-When no longer needed, delete the folder into which you cloned the `Microsoft/Cognitive-Vision-Windows` repository. If you opted to use the sample images, also delete the folder into which you cloned the `Microsoft/Cognitive-Face-Windows` repository.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Get started with Face service](../../face/quickstarts/client-libraries.md?pivots=programming-language-csharp)
cognitive-services Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/concepts/face-detection.md
# Face detection and attributes
-This article explains the concepts of face detection and face attribute data. Face detection is the action of locating human faces in an image and optionally returning different kinds of face-related data.
+This article explains the concepts of face detection and face attribute data. Face detection is the process of locating human faces in an image and optionally returning different kinds of face-related data.
-You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) operation to detect faces in an image. At a minimum, each detected face corresponds to a faceRectangle field in the response. This set of pixel coordinates for the left, top, width, and height mark the located face. Using these coordinates, you can get the location of the face and its size. In the API response, faces are listed in size order from largest to smallest.
+You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API to detect faces in an image. To get started using the REST API or a client SDK, follow a [quickstart](../Quickstarts/client-libraries.md). Or, for a more in-depth guide, see [Call the detect API](../Face-API-How-to-Topics/HowtoDetectFacesinImage.md).
+
+## Face rectangle
+
+Each detected face corresponds to a `faceRectangle` field in the response. This is a set of pixel coordinates for the left, top, width, and height of the detected face. Using these coordinates, you can get the location and size of the face. In the API response, faces are listed in size order from largest to smallest.
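+
+For illustration, here's a minimal sketch of the `faceRectangle` field as it appears in a detection response (the pixel values here are made up):
+
+```json
+{
+  "faceRectangle": {
+    "top": 131,
+    "left": 177,
+    "width": 162,
+    "height": 162
+  }
+}
+```
+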
## Face ID
Attributes are a set of features that can optionally be detected by the [Face -
* **Head pose**. The face's orientation in 3D space. This attribute is described by the roll, yaw, and pitch angles in degrees, which are defined according to the [right-hand rule](https://en.wikipedia.org/wiki/Right-hand_rule). The order of the three angles is roll-yaw-pitch, and each angle's value ranges from -180 degrees to 180 degrees. See the following diagram for angle mappings: ![A head with the pitch, roll, and yaw axes labeled](../Images/headpose.1.jpg)
+ For more details on how to use these values, see the [Head pose how-to guide](../Face-API-How-to-Topics/how-to-use-headpose.md).
* **Makeup**. Whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
* **Mask**. Whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
* **Noise**. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
+
+ Title: Captioning with speech to text - Speech service
+
+description: An overview of key concepts for captioning with speech to text.
++++++ Last updated : 04/12/2022+
+zone_pivot_groups: programming-languages-speech-sdk
++
+# Captioning with speech to text
+
+In this guide, you learn how to create captions with speech to text. This guide covers captioning for speech, but doesn't include speaker ID or sound effects such as bells ringing. Concepts include how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios.
+
+Here are some common captioning scenarios:
+- Online courses and instructional videos
+- Sporting events
+- Voice and video calls
+
+The following are aspects to consider when using captioning:
+* Let your audience know that captions are generated by an automated service.
+* Center captions horizontally on the screen, in a large and prominent font.
+* Consider whether to use partial results, when to start displaying captions, and how many words to show at a time.
+* Learn about captioning protocols such as [SMPTE-TT](https://ieeexplore.ieee.org/document/7291854).
+* Consider output formats such as SRT (SubRip Subtitle) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players, such as VLC, automatically adding the captions to your video (a sample SRT cue is shown after this list).
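+
+For example, an SRT cue pairs a sequence number and a time range with the caption text. A minimal illustrative cue (the timings here are made up) looks like this:
+
+```console
+1
+00:00:00,500 --> 00:00:03,200
+Welcome to applied Mathematics course 201.
+```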
+
+> [!TIP]
+> Try the [Azure Video Analyzer for Media](/azure/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview) as a demonstration of how you can get captions for videos that you upload.
+
+Captioning can accompany real time or pre-recorded speech. Whether you're showing captions in real time or with a recording, you can use the [Speech SDK](speech-sdk.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
+
+## Input audio to the Speech service
+
+For real time captioning, use a microphone or audio input stream instead of file input. For examples of how to recognize speech from a microphone, see the [Speech to text quickstart](get-started-speech-to-text.md) and [How to recognize speech](how-to-recognize-speech.md) documentation. For more information about streaming, see [How to use the audio input stream](how-to-use-audio-input-streams.md).
+
+For captioning of a prerecording, send file input to the Speech service. For more information, see [How to use compressed audio files](how-to-use-codec-compressed-audio-input-streams.md).
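+
+As a minimal C# sketch of both input options (the key, region, and file name are placeholders, not values from this article):
+
+```csharp
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+var speechConfig = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
+
+// Real time captioning: capture audio from the default microphone.
+using var micInput = AudioConfig.FromDefaultMicrophoneInput();
+
+// Prerecorded captioning: read audio from a file instead.
+using var fileInput = AudioConfig.FromWavFileInput("prerecorded.wav");
+
+// Pass either AudioConfig to the recognizer.
+using var recognizer = new SpeechRecognizer(speechConfig, fileInput);
+```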
+
+## Caption and speech synchronization
+
+You'll want to synchronize captions with the audio track, whether it's done in real time or with a prerecording.
+
+The Speech service returns the offset and duration of the recognized speech.
++
+For more information, see [Get speech recognition results](get-speech-recognition-results.md).
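+
+As an illustrative C# sketch (assuming a configured `SpeechRecognizer` named `recognizer`), each result exposes its position in the audio through `OffsetInTicks` and `Duration`, where one tick is 100 nanoseconds:
+
+```csharp
+recognizer.Recognized += (s, e) =>
+{
+    // Offset and duration are reported in 100-nanosecond ticks.
+    var start = TimeSpan.FromTicks(e.Result.OffsetInTicks);
+    var end = start + e.Result.Duration;
+    Console.WriteLine($"Caption window: {start} --> {end}: {e.Result.Text}");
+};
+```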
+
+## Get partial results
+
+Consider when to start displaying captions, and how many words to show at a time. Speech recognition results are subject to change while an utterance is still being recognized. Partial results are returned with each `Recognizing` event. As each word is processed, the Speech service re-evaluates the utterance in the new context and again returns the best result. The new result isn't guaranteed to be the same as the previous result. The complete and final transcription of an utterance is returned with the `Recognized` event.
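+
+As a minimal C# sketch (assuming a configured `SpeechRecognizer` named `recognizer`), you can observe both event types like this:
+
+```csharp
+// Fires repeatedly with partial, still-changing text.
+recognizer.Recognizing += (s, e) =>
+    Console.WriteLine($"RECOGNIZING: Text={e.Result.Text}");
+
+// Fires once per utterance with the final transcription.
+recognizer.Recognized += (s, e) =>
+{
+    if (e.Result.Reason == ResultReason.RecognizedSpeech)
+    {
+        Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
+    }
+};
+```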
+
+> [!NOTE]
+> Punctuation of partial results is not available.
+
+For captioning of prerecorded speech or wherever latency isn't a concern, you could wait for the complete transcription of each utterance before displaying any words. Given the final offset and duration of each word in an utterance, you know when to show subsequent words at pace with the soundtrack.
+
+Real time captioning presents tradeoffs with respect to latency versus accuracy. You could show the text from each `Recognizing` event as soon as possible. However, if you can accept some latency, you can improve the accuracy of the caption by displaying the text from the `Recognized` event. There's also some middle ground, which is referred to as "stable partial results".
+
+You can request that the Speech service return fewer `Recognizing` events that are more accurate. This is done by setting the `SpeechServiceResponse_StablePartialResultThreshold` property to a value between `0` and `2147483647`. The value that you set is the number of times a word has to be recognized before the Speech service returns a `Recognizing` event. For example, if you set the `SpeechServiceResponse_StablePartialResultThreshold` value to `5`, the Speech service will affirm recognition of a word at least five times before returning the partial results to you with a `Recognizing` event.
+
+```csharp
+speechConfig.SetProperty(PropertyId.SpeechServiceResponse_StablePartialResultThreshold, "5");
+```
+```cpp
+speechConfig->SetProperty(PropertyId::SpeechServiceResponse_StablePartialResultThreshold, "5");
+```
+```go
+speechConfig.SetProperty(common.SpeechServiceResponseStablePartialResultThreshold, "5")
+```
+```java
+speechConfig.setProperty(PropertyId.SpeechServiceResponse_StablePartialResultThreshold, "5");
+```
+```javascript
+speechConfig.setProperty(sdk.PropertyId.SpeechServiceResponse_StablePartialResultThreshold, "5");
+```
+```objective-c
+[self.speechConfig setPropertyTo:@"5" byId:SPXSpeechServiceResponseStablePartialResultThreshold];
+```
+```swift
+self.speechConfig!.setPropertyTo("5", by: SPXPropertyId.speechServiceResponseStablePartialResultThreshold)
+```
+```python
+speech_config.set_property(property_id = speechsdk.PropertyId.SpeechServiceResponse_StablePartialResultThreshold, value = "5")
+```
+
+Requesting more stable partial results will reduce the "flickering" or changing text, but it can increase latency as you wait for higher confidence results.
+
+### Stable partial threshold example
+In the following recognition sequence without setting a stable partial threshold, "math" is recognized as a word, but the final text is "mathematics". At another point, "course 2" is recognized, but the final text is "course 201".
+
+```console
+RECOGNIZING: Text=welcome to
+RECOGNIZING: Text=welcome to applied math
+RECOGNIZING: Text=welcome to applied mathematics
+RECOGNIZING: Text=welcome to applied mathematics course 2
+RECOGNIZING: Text=welcome to applied mathematics course 201
+RECOGNIZED: Text=Welcome to applied Mathematics course 201.
+```
+
+In the previous example, the transcriptions were additive and no text was retracted. But at other times you might find that the partial results were inaccurate. In either case, the unstable partial results can be perceived as "flickering" when displayed.
+
+For this example, if the stable partial result threshold is set to `5`, no words are altered or backtracked.
+
+```console
+RECOGNIZING: Text=welcome to
+RECOGNIZING: Text=welcome to applied
+RECOGNIZING: Text=welcome to applied mathematics
+RECOGNIZED: Text=Welcome to applied Mathematics course 201.
+```
+
+## Profanity filter
+
+You can specify whether to mask, remove, or show profanity in recognition results.
+
+> [!NOTE]
+> Microsoft also reserves the right to mask or remove any word that is deemed inappropriate. Such words will not be returned by the Speech service, whether or not you enabled profanity filtering.
+
+The profanity filter options are:
+- `Masked`: Replaces letters in profane words with asterisk (*) characters. This is the default option.
+- `Raw`: Includes the profane words verbatim.
+- `Removed`: Removes profane words.
+
+For example, to remove profane words from the speech recognition result, set the profanity filter to `Removed` as shown here:
+
+```csharp
+speechConfig.SetProfanity(ProfanityOption.Removed);
+```
+```cpp
+speechConfig->SetProfanity(ProfanityOption::Removed);
+```
+```go
+speechConfig.SetProfanity(common.Removed)
+```
+```java
+speechConfig.setProfanity(ProfanityOption.Removed);
+```
+```javascript
+speechConfig.setProfanity(sdk.ProfanityOption.Removed);
+```
+```objective-c
+[self.speechConfig setProfanityOptionTo:SPXSpeechConfigProfanityOption_ProfanityRemoved];
+```
+```swift
+self.speechConfig!.setProfanityOptionTo(SPXSpeechConfigProfanityOption_ProfanityRemoved)
+```
+```python
+speech_config.set_profanity(speechsdk.ProfanityOption.Removed)
+```
+
+The profanity filter is applied to the result `Text` and `MaskedNormalizedForm` properties. It isn't applied to the result `LexicalForm` and `NormalizedForm` properties, nor to the word-level results.
+
+## Language identification
+
+If the language in the audio could change, use continuous [language identification](language-identification.md). Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md#language-identification). You provide up to 10 candidate languages, at least one of which is expected to be in the audio. The Speech service returns the most likely language in the audio.
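+
+As a C# sketch (the language list here is an arbitrary example; `speechConfig` and `audioConfig` are assumed to be configured already), you supply the candidates through `AutoDetectSourceLanguageConfig` when constructing the recognizer:
+
+```csharp
+var autoDetectConfig = AutoDetectSourceLanguageConfig.FromLanguages(
+    new string[] { "en-US", "de-DE", "fr-FR" });
+
+using var recognizer = new SpeechRecognizer(speechConfig, autoDetectConfig, audioConfig);
+```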
+
+## Customizations to improve accuracy
+
+A [phrase list](improve-accuracy-phrase-list.md) is a list of words or phrases that you provide right before starting speech recognition. Adding a phrase to a phrase list increases its importance, thus making it more likely to be recognized.
+
+Examples of phrases include:
+* Names
+* Geographical locations
+* Homonyms
+* Words or acronyms unique to your industry or organization
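+
+For example, here's a hedged C# sketch (assuming an existing `SpeechRecognizer` named `recognizer`; the phrases are placeholders):
+
+```csharp
+var phraseList = PhraseListGrammar.FromRecognizer(recognizer);
+phraseList.AddPhrase("Contoso");
+phraseList.AddPhrase("Jessie");
+phraseList.AddPhrase("Rehaan");
+```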
+
+There are some situations where [training a custom model](custom-speech-overview.md) is likely the best option to improve accuracy. For example, if you're captioning orthodontics lectures, you might want to train a custom model with the corresponding domain data.
+
+## Next steps
+
+* [Get started with speech to text](get-started-speech-to-text.md)
+* [Get speech recognition results](get-speech-recognition-results.md)
cognitive-services Get Speech Recognition Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-speech-recognition-results.md
+
+ Title: "Get speech recognition results - Speech service"
+
+description: Learn how to get speech recognition results.
++++++ Last updated : 03/31/2022+
+ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
+zone_pivot_groups: programming-languages-speech-sdk
+keywords: speech to text, speech to text software
++
+# Get speech recognition results
+++++++++
+## Next steps
+
+* [Try the speech to text quickstart](get-started-speech-to-text.md)
+* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
+* [Transcribe audio in batches](batch-transcription.md)
cognitive-services How To Use Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-audio-input-streams.md
ms.devlang: csharp
-# About the Speech SDK audio input stream API
+# How to use the audio input stream
-The Speech SDK audio input stream API provides a way to stream audio into the recognizers instead of using either the microphone or the input file APIs.
+The Speech SDK provides a way to stream audio into the recognizer as an alternative to microphone or file input.
The following steps are required when you use audio input streams:
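As a minimal sketch in C# (assuming the SDK's push stream APIs; the key, region, and buffer contents are placeholders, not values from this article):

```csharp
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// Create a push stream and wrap it in an AudioConfig.
using var pushStream = AudioInputStream.CreatePushStream();
using var audioConfig = AudioConfig.FromStreamInput(pushStream);

var speechConfig = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

// Write audio data into the stream as it arrives, then close the stream
// to signal the end of the audio. The buffer here is silence for brevity.
byte[] buffer = new byte[3200]; // about 100 ms of 16 kHz, 16-bit mono PCM
pushStream.Write(buffer);
pushStream.Close();
```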
cognitive-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams.md
Title: How to use compressed audio files with the Speech SDK - Speech service
-description: Learn how to stream compressed audio to the Speech service with the Speech SDK.
+description: Learn how to use compressed audio files with the Speech service and the Speech SDK.
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
The Pre-Call API enables developers to programmatically validate a client's re
## Accessing Pre-Call APIs
-To Access the Pre-Call API, you will need to initialize a `callClient` and provision an Azure Communication Services access token. There you can access the `Diganostics` feature and the `preCallTest` method.
+To access the Pre-Call API, you will need to initialize a `callClient` and provision an Azure Communication Services access token. There you can access the `PreCallDiagnostics` feature and the `startTest` method.
```javascript
import { CallClient, Features} from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from '@azure/communication-common';
const tokenCredential = new AzureCommunicationTokenCredential();
-const preCallTest = await callClient.feature(Features.Diganostics).preCallTest(tokenCredential);
+const preCallDiagnosticsResult = await callClient.feature(Features.PreCallDiagnostics).startTest(tokenCredential);
```
Once it finishes running, developers can access the result object.
## Diagnostic results
-The Pre-Call API returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality stats and in-call diagnostics. The results are returned as a `CallDiagnosticsResult` object.
+The Pre-Call API returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality stats and in-call diagnostics. The results are returned as a `PreCallDiagnosticsResult` object.
```javascript
-export declare type CallDiagnosticsResult = {
+export declare type PreCallDiagnosticsResult = {
  deviceAccess: Promise<DeviceAccess>;
  deviceEnumeration: Promise<DeviceEnumeration>;
  inCallDiagnostics: Promise<InCallDiagnostics>;
  browserSupport?: Promise<DeviceCompatibility>;
+ id: string;
  callMediaStatistics?: Promise<MediaStatsCallFeature>;
};
```
-Individual result objects can be accessed as such using the `preCallTest` constant above.
+Individual result objects can be accessed using the `preCallDiagnosticsResult` constant above. Results for individual tests are returned as they complete, and many of the test results are available immediately. In the case of the `inCallDiagnostics` test, the results might take up to 1 minute as the test validates the quality of the video and audio.
### Browser support

Browser compatibility check. Checks for `Browser` and `OS` compatibility and returns a `Supported` or `NotSupported` value.

```javascript
-const browserSupport = await preCallTest.browserSupport;
+const browserSupport = await preCallDiagnosticsResult.browserSupport;
if(browserSupport) {
  console.log(browserSupport.browser)
  console.log(browserSupport.os)
const browserSupport = await preCallTest.browserSupport;
In the case that the test fails and the browser being used by the user is `NotSupported`, the easiest way to fix that is by asking the user to switch to a supported browser. Refer to the supported browsers in our [documentation](./calling-sdk-features.md#javascript-calling-sdk-support-by-os-and-browser).
+>[!NOTE]
+>Known issue: The `browser support` test can return `Unknown` in cases where it should return a correct value.
### Device access

Permission check. Checks whether video and audio devices are available from a permissions perspective. Provides a `boolean` value for `audio` and `video` devices.

```javascript
- const deviceAccess = await preCallTest.deviceAccess;
+ const deviceAccess = await preCallDiagnosticsResult.deviceAccess;
  if(deviceAccess) {
    console.log(deviceAccess.audio)
    console.log(deviceAccess.video)
Device availability. Checks whether microphone, camera and speaker devices are d
```javascript
- const deviceEnumeration = await preCallTest.deviceEnumeration;
+ const deviceEnumeration = await preCallDiagnosticsResult.deviceEnumeration;
  if(deviceEnumeration) {
    console.log(deviceEnumeration.microphone)
    console.log(deviceEnumeration.camera)
Performs a quick call to check in-call metrics for audio and video and provides
```javascript
- const inCallDiagnostics = await preCallTest.inCallDiagnostics;
+ const inCallDiagnostics = await preCallDiagnosticsResult.inCallDiagnostics;
  if(inCallDiagnostics) {
    console.log(inCallDiagnostics.connected)
    console.log(inCallDiagnostics.bandWidth)
At this step, there are multiple failure points to watch out for:
- If bandwidth is `Bad`, the user should be prompted to try out a different network or verify the bandwidth availability on their current one. Ensure no other high bandwidth activities might be taking place.

### Media stats
-For granular stats on quality metrics like jitter, packet loss, rtt, etc. `callMediaStatistics` are provided as part of the `PreCallTest` feature. You can subscribe to the call media stats to get full collection of them.
+For granular stats on quality metrics like jitter, packet loss, and round-trip time (RTT), `callMediaStatistics` is provided as part of the `preCallDiagnosticsResult`. You can subscribe to the call media stats to get the full collection of them.
## Pricing
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md
Learn more about the Azure Communication Services SDKs with the resources below.
|**[Calling SDK overview](./concepts/voice-video-calling/calling-sdk-features.md)**|Review the Communication Services Calling SDK overview.|
|**[Chat SDK overview](./concepts/chat/sdk-features.md)**|Review the Communication Services Chat SDK overview.|
|**[SMS SDK overview](./concepts/sms/sdk-features.md)**|Review the Communication Services SMS SDK overview.|
-|**[UI Library overview](https://aka.ms/acsstorybook))**| Review the UI Library for the Communication Services |
+|**[UI Library overview](https://aka.ms/acsstorybook)**| Review the UI Library for Azure Communication Services. |
+
+## Design resources
+
+Find comprehensive components, composites, and UX guidance in the [UI Library Design Kit for Figma](https://www.figma.com/community/file/1095841357293210472). This design resource is purpose-built to help design your video calling and chat experiences faster and with less effort.
## Other Microsoft Communication Services
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
Configure the Azure Bot we created with its Web App endpoint where the bot logic
The final step would be to deploy the bot logic to the Web App we created. As we mentioned for this tutorial, we'll be using the Echo Bot. This bot only demonstrates a limited set of capabilities, such as echoing the user input. Here's how we deploy it to Azure Web App.
- 1. To use the samples, clone this Github repository using Git.
+ 1. To use the samples, clone this GitHub repository using Git.
   ```
   git clone https://github.com/Microsoft/BotBuilder-Samples.git
   cd BotBuilder-Samples
   ```
communication-services Get Started Ui Kit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/ui-library/get-started-ui-kit.md
+
+ Title: Quickstart - Get started with UI Library Design Kit
+
+description: In this quickstart, you will learn how to use the UI Library Design Kit for Azure Communication Services to quickly design communication experiences using Figma.
++ Last updated : 03/24/2022+++++
+# Get started with UI Library Design Kit (Figma)
+
+This article describes how to get started with the UI Library Design Kit (Figma).
+
+Start by getting the [UI Library Design Kit](https://www.figma.com/community/file/1095841357293210472) from Figma.
+
+## Design faster
+
+A resource to help design user interfaces built on Azure Communication Services, the UI Library Design Kit includes components, composites, and UX guidance purpose-built to help bring your video calling and chat experiences to life faster.
+
+## UI Library components and composites
+
+The same components and composites offered in the UI Library are available in Figma so you can quickly begin designing and prototyping your calling and chat experiences.
+
+## Built on Fluent
+
+The UI Library Design Kit's components are based on Microsoft's Fluent UI, so they're built with usability, accessibility, and localization in mind.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+>[Get the ACS UI Kit (Figma)](https://www.figma.com/community/file/1095841357293210472)
confidential-computing Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-portal.md
If you don't have an Azure subscription, [create an account](https://azure.micro
![image](https://user-images.githubusercontent.com/63871188/137009767-421ee49a-ded8-4cfd-ac53-a3d6750880b9.png)
-1. Choose a virtual machine with Intel SGX capabilities in the size selector by choosing **change size**. In the virtual machine size selector, click **Clear all filters**. Choose **Add filter**, select **Family** for the filter type, and then select only **Confidential compute**.
+1. Choose a virtual machine with Intel SGX capabilities: select **+ Add filter** to create a filter, select **Type** for the filter type, and then check only **Confidential compute** in the next dropdown list.
![DCsv2-Series VMs](media/quick-create-portal/dcsv2-virtual-machines.png)
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
The following example ARM template deploys a container app.
"type": "Microsoft.App/containerApps", "name": "[parameters('containerappName')]", "location": "[parameters('location')]",
+ "identity": {
+ "type": "None"
+ },
"properties": { "managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments', parameters('environment_name'))]", "configuration": {
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
+
+ Title: 'Deploy to Azure Container Apps using Visual Studio Code'
+description: Deploy containerized .NET applications to Azure Container Apps using Visual Studio Code
+++++ Last updated : 4/05/2022+++
+# Tutorial: Deploy to Azure Container Apps using Visual Studio Code
+
+Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
+
+In this tutorial, you'll deploy a containerized application to Azure Container Apps using Visual Studio Code.
+
+## Prerequisites
+
+- An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Visual Studio Code, available as a [free download](https://code.visualstudio.com/).
+- The following Visual Studio Code extensions installed:
+ - The [Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account)
+ - The [Azure Container Apps extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecontainerapps)
+ - The [Docker extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker)
+
+## Clone the project
+
+To follow along with this tutorial, [download the sample project](https://github.com/azure-samples/containerapps-albumapi-javascript/archive/refs/heads/master.zip) from [the repository](https://github.com/azure-samples/containerapps-albumapi-javascript) or clone it using the Git command below:
+
+```bash
+git clone https://github.com/Azure-Samples/containerapps-albumapi-javascript.git
+cd containerapps-albumapi-javascript
+```
+
+This tutorial uses a JavaScript project, but the steps are language agnostic. To open the project after cloning on Windows, navigate to the project's folder, right-click it, and choose **Open in VS Code**. For Mac or Linux, you can use the Visual Studio Code user interface instead: select **File -> Open Folder** and then navigate to the folder of the cloned project.
+
+## Sign in to Azure
+
+To work with Container Apps and complete this tutorial, you'll need to be signed in to Azure. Once you have the Azure Account extension installed, you can sign in using the command palette by typing **Ctrl + Shift + P** on Windows and searching for the Azure Sign In command.
++
+Select **Azure: Sign in**, and Visual Studio Code will launch a browser for you to sign in to Azure. Sign in with the account you'd like to use to work with Container Apps, and then switch back to Visual Studio Code.
+
+## Create the container registry and Docker image
+
+The sample project includes a Dockerfile that is used to build a container image for the application. Docker images contain all of the source code and dependencies necessary to run an application. You can build and publish the image for your app directly in Azure; a local Docker installation is not required. An image is required to create and run a container app.
+
+Container images are stored inside of container registries. You can easily create a container registry and upload an image of your app to it in a single workflow using Visual Studio Code.
+
+1) First, right click on the Dockerfile in the explorer, and select **Build Image in Azure**. You can also begin this workflow from the command palette by entering **Ctrl + Shift + P** on Windows or **Cmd + Shift + P** on a Mac. When the command palette opens, search for *Build Image in Azure* and select **Enter** on the matching suggestion.
+
+ :::image type="content" source="media/visual-studio-code/visual-studio-code-build-in-azure-small.png" lightbox="media/visual-studio-code/visual-studio-code-build-in-azure.png" alt-text="A screenshot showing how to build the image in Azure.":::
+
+2) As the command palette opens, you are prompted to enter a tag for the container. Accept the default, which uses the project name with the `{{.Run.ID}}` replacement token as a suffix. Select **Enter** to continue.
+
+ :::image type="content" source="media/visual-studio-code/visual-studio-code-container-tag.png" alt-text="A screenshot showing Container Apps tagging.":::
+
+3) Choose the subscription you would like to use to create your container registry and build your image, and then press enter to continue.
+
+4) Select **+ Create new registry**, or if you already have a registry you'd like to use, select that item and skip to step 7.
+
+5) Enter a unique name for the new registry such as *msdocscapps123*, where 123 are unique numbers of your own choosing, and then press enter. Container registry names must be globally unique across all of Azure.
+
+6) Select **Basic** as the SKU.
+
+7) Choose **+ Create new resource group**, or select an existing resource group you'd like to use. For a new resource group, enter a name such as `msdocscontainerapps`, and press enter.
+
+8) Finally, select the location that is nearest to you. Select **Enter** to finalize the workflow, and Azure begins creating the container registry and building the image. This may take a few moments to complete.
+
+Once the registry is created and the image is built successfully, you are ready to create the container app to host the published image.
+
+## Create and deploy to the container app
+
+The Azure Container Apps extension for Visual Studio Code enables you to choose existing Container Apps resources, or create new ones to deploy your applications to. In this scenario you create a new Container App environment and container app to host your application. After installing the Container Apps extension, you can access its features under the Azure control panel in Visual Studio Code.
+
+### Create the Container Apps environment
+
+Every container app must be part of a Container Apps environment. An environment provides an isolated network for one or more container apps, making it possible for them to easily invoke each other. You will need to create an environment before you can create the container app itself.
+
+1) In the Container Apps extension panel, right click on the subscription you would like to use and select **Create Container App Environment**.
+
+ :::image type="content" source="media/visual-studio-code/visual-studio-code-create-app-environment.png" alt-text="A screenshot showing how to create a Container Apps environment.":::
+
+2) A command palette workflow will open at the top of the screen. Enter a name for the new Container Apps environment, such as `msdocsappenvironment`, and select **Enter**.
+
+ :::image type="content" source="media/visual-studio-code/visual-studio-code-container-app-environment.png" alt-text="A screenshot showing the container app environment.":::
+
+3) Select the desired location for the container app from the list of options.
+
+ :::image type="content" source="media/visual-studio-code/visual-studio-code-container-env-location.png" alt-text="A screenshot showing the app environment location.":::
+
+Visual Studio Code and Azure will create the environment for you. This process may take a few moments to complete. Creating a container app environment also creates a log analytics workspace for you in Azure.
+
+### Create the container app and deploy the Docker image
+
+Now that you have a container app environment in Azure you can create a container app inside of it. You can also publish the Docker image you created earlier as part of this workflow.
+
+1) In the Container Apps extension panel, right click on the container environment you created previously and select **Create Container App**.
+
+ :::image type="content" source="media/visual-studio-code/visual-studio-code-create-container-app.png" alt-text="A screenshot showing how to create the container app.":::
+
+2) A new command palette workflow will open at the top of the screen. Enter a name for the new container app, such as `msdocscontainerapp`, and then select **Enter**.
+
+ :::image type="content" source="media/visual-studio-code/visual-studio-code-container-name.png" alt-text="A screenshot showing the container app name.":::
+
+3) Next, you're prompted to choose a container registry hosting solution to pull a Docker image from. For this scenario, select **Azure Container Registries**, though Docker Hub is also supported.
+
+4) Select the container registry you created previously when publishing the Docker image.
+
+5) Select the container registry repository you published the Docker image to. Repositories allow you to store and organize your containers in logical groupings.
+
+6) Select the tag of the image you published earlier.
+
+7) When prompted for environment variables, choose **Skip for now**. This application does not require any environment variables.
+
+8) Select **Enable** on the ingress settings prompt to enable an HTTP endpoint for your application.
+
+9) Choose **External** to configure the HTTP traffic that the endpoint will accept.
+
+10) Leave the default value of 80 for the port, and then select **Enter** to complete the workflow.
+
+During this process, Visual Studio Code and Azure create the container app for you. The published Docker image you created earlier is also deployed to the app. Once this process finishes, Visual Studio Code displays a notification with a link to browse to the site. Select this link to view your app in the browser.
++
+You can also append the `/albums` path at the end of the app URL to view data from a sample API request.
+
+Congratulations! You successfully created and deployed your first container app using Visual Studio Code.
+
+## Clean up resources
+
+If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services at once by removing the resource group.
+
+Follow these steps in the Azure portal to remove the resources you created:
+
+1. Select the **msdocscontainerapps** resource group from the *Overview* section.
+1. Select the **Delete resource group** button at the top of the resource group *Overview*.
1. Enter the resource group name **msdocscontainerapps** in the *Are you sure you want to delete "msdocscontainerapps"* confirmation dialog.
+1. Select **Delete**.
+ The process to delete the resource group may take a few minutes to complete.
+
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Environments in Azure Container Apps](environment.md)
container-apps Deploy Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio.md
The Visual Studio publish dialogs will help you choose existing Azure resources,
- **Container App name**: Enter a name of `msdocscontainerapp`.
- **Subscription name**: Choose the subscription where you would like to host your app.
- **Resource group**: A resource group acts as a logical container to organize related resources in Azure. You can either select an existing resource group, or select **New** to create one with a name of your choosing, such as `msdocscontainerapps`.
- - **Container Apps Environment**: Container Apps Environment: Every container app must be part of a container app environment. An environment provides an isolated network for one or more container apps, making it possible for them to easily invoke each other, Click **New** to open the Create new dialog for your container app environment. Leave the default values and select **OK** to close the environment dialog.
+ - **Container Apps Environment**: Every container app must be part of a container app environment. An environment provides an isolated network for one or more container apps, making it possible for them to easily invoke each other. Click **New** to open the Create new dialog for your container app environment. Leave the default values and select **OK** to close the environment dialog.
- **Container Name**: This is the friendly name of the container that will run for this container app. Use the name `msdocscontainer1` for this quickstart. A container app typically runs a single container, but there are times when having more than one container is needed. One such example is when a sidecar container is required to perform an activity such as specialized logging or communications.

:::image type="content" source="media/visual-studio/container-apps-create-new.png" alt-text="A screenshot showing how to create new Container Apps.":::
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
+
+ Title: Managed identities in Azure Container Apps
+description: Using managed identities in Container Apps
++++ Last updated : 04/11/2022+++
+# Managed identities in Azure Container Apps Preview
+
+A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+Your container app can be granted two types of identities:
+
+- A **system-assigned identity** is tied to your container app and is deleted when your container app is deleted. An app can only have one system-assigned identity.
+- A **user-assigned identity** is a standalone Azure resource that can be assigned to your container app and other resources. A container app can have multiple user-assigned identities. These identities exist until you delete them.
+
+## Why use a managed identity?
+
+You can use a managed identity in a running container app to authenticate to any [service that supports Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
+
+With managed identities:
+
+- Your app connects to resources with the managed identity. You don't need to manage credentials in your container app.
+- You can use role-based access control to grant specific permissions to a managed identity.
+- System-assigned identities are automatically created and managed. They're deleted when your container app is deleted.
+- You can add and delete user-assigned identities and assign them to multiple resources. They're independent of your container app's life cycle.
+
+### Common use cases
+
+System-assigned identities are best for workloads that:
+
+- are contained within a single resource
+- need independent identities
+
+User-assigned identities are ideal for workloads that:
+
+- run on multiple resources and can share a single identity
+- need pre-authorization to a secure resource
+
+## Limitations
+
+The identity is only available within a running container, which means you can't use a managed identity to:
+
+- Pull an image from Azure Container Registry
+- Define scaling rules or Dapr configuration
+ - To access resources that require a connection string or key, such as storage resources, you'll still need to include the connection string or key in the `secretRef` of the scaling rule.
+
+## How to configure managed identities
+
+You can configure your managed identities through:
+
+- the Azure CLI
+- your Azure Resource Manager (ARM) template
+
+When a managed identity is added, deleted, or modified on a running container app, the app doesn't automatically restart and a new revision isn't created.
+
+> [!NOTE]
+> When adding a managed identity to a container app deployed before April 11, 2022, you must create a new revision.
+
+### Add a system-assigned identity
+
+# [Azure CLI](#tab/cli)
+
+Run the `az containerapp identity assign` command to create a system-assigned identity:
+
+```azurecli
+az containerapp identity assign --name myApp --resource-group myResourceGroup --system-assigned
+```
+
+# [ARM template](#tab/arm)
+
+An ARM template can be used to automate deployment of your container app and resources. To add a system-assigned identity, add an `identity` section to your ARM template.
+
+```json
+"identity": {
+ "type": "SystemAssigned"
+}
+```
+
+Adding the system-assigned type tells Azure to create and manage the identity for your application. For a complete ARM template example, see [ARM API Specification](azure-resource-manager-api-spec.md?tabs=arm-template#container-app-examples).
+
+--
+
+### Add a user-assigned identity
+
+Configuring a container app with a user-assigned identity requires that you first create the identity and then add its resource identifier to your container app's configuration. You can create user-assigned identities via the Azure portal or the Azure CLI. For information on creating and managing user-assigned identities, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+
+# [Azure CLI](#tab/cli)
+
+1. Create a user-assigned identity.
+
+ ```azurecli
+ az identity create --resource-group <GROUP_NAME> --name <IDENTITY_NAME> --output json
+ ```
+
+ Note the `id` property of the new identity.
+
+1. Run the `az containerapp identity assign` command to assign the identity to the app. The identities parameter is a space-separated list.
+
+ ```azurecli
+ az containerapp identity assign --resource-group <GROUP_NAME> --name <APP_NAME> \
+ --user-assigned <IDENTITY_RESOURCE_ID>
+ ```
+
+ Replace `<IDENTITY_RESOURCE_ID>` with the `id` property of the identity. To assign more than one user-assigned identity, supply a space-separated list of identity IDs to the `--user-assigned` parameter.
+
+# [ARM template](#tab/arm)
+
+To add one or more user-assigned identities, add an `identity` section to your ARM template. Replace `<IDENTITY1_RESOURCE_ID>` and `<IDENTITY2_RESOURCE_ID>` with the resource identifiers of the identities you want to add.
+
+Specify each user-assigned identity by adding an item to the `userAssignedIdentities` object with the identity's resource identifier as the key. Use an empty object as the value.
+
+```json
+"identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<IDENTITY1_RESOURCE_ID>": {},
+ "<IDENTITY2_RESOURCE_ID>": {}
+ }
+}
+```
+
+For a complete ARM template example, see [ARM API Specification](azure-resource-manager-api-spec.md?tabs=arm-template#container-app-examples).
+
+> [!NOTE]
+> An application can have both system-assigned and user-assigned identities at the same time. In this case, the type property would be `SystemAssigned,UserAssigned`.
+
+--
+
+## Configure a target resource
+
+For some resources, you'll need to configure role assignments for your app's managed identity to grant access. Otherwise, calls from your app to services, such as Azure Key Vault and Azure SQL Database, will be rejected even if you use a valid token for that identity. To learn more about Azure role-based access control (Azure RBAC), see [What is RBAC?](../role-based-access-control/overview.md). To learn more about which resources support Azure Active Directory tokens, see [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
+
+> [!IMPORTANT]
+> The back-end services for managed identities maintain a cache per resource URI for around 24 hours. If you update the access policy of a particular target resource and immediately retrieve a token for that resource, you may continue to get a cached token with outdated permissions until that token expires. There's currently no way to force a token refresh.
+
+## Connect to Azure services in app code
+
+With managed identities, an app can obtain tokens to access Azure resources that use Azure Active Directory, such as Azure SQL Database, Azure Key Vault, and Azure Storage. These tokens represent the application accessing the resource, and not any specific user of the application.
+
+Container Apps provides an internally accessible [REST endpoint](managed-identity.md?tabs=cli%2Chttp#rest-endpoint-reference) to retrieve tokens. The REST endpoint can be accessed from within the app with a standard HTTP GET, which can be implemented with a generic HTTP client in every language. For .NET, JavaScript, Java, and Python, the Azure Identity client library provides an abstraction over this REST endpoint. Connecting to other Azure services is as simple as adding a credential object to the service-specific client.
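+
+As a hedged sketch of that HTTP GET in C# (the `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` environment variables and the `2019-08-01` API version follow the pattern used by other Azure hosting services; they are assumptions here, not values taken from this article):
+
+```csharp
+// Assumed: the platform injects the token endpoint and a secret header value
+// into the running container as environment variables.
+var endpoint = Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT");
+var header = Environment.GetEnvironmentVariable("IDENTITY_HEADER");
+var resource = "https://vault.azure.net"; // the Azure AD resource to request a token for
+
+using var http = new HttpClient();
+http.DefaultRequestHeaders.Add("X-IDENTITY-HEADER", header);
+var json = await http.GetStringAsync(
+    $"{endpoint}?resource={Uri.EscapeDataString(resource)}&api-version=2019-08-01");
+Console.WriteLine(json); // the response body includes an access_token field
+```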
+
+# [.NET](#tab/dotnet)
+
+> [!NOTE]
+> When connecting to Azure SQL data sources with [Entity Framework Core](/ef/core/), consider [using Microsoft.Data.SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication), which provides special connection strings for managed identity connectivity.
+
+For .NET apps, the simplest way to work with a managed identity is through the [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme). See the respective documentation headings of the client library for information:
+
+- [Add Azure Identity client library to your project](/dotnet/api/overview/azure/identity-readme#getting-started)
+- [Access Azure service with a system-assigned identity](/dotnet/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
+- [Access Azure service with a user-assigned identity](/dotnet/api/overview/azure/identity-readme#specifying-a-user-assigned-managed-identity-with-the-defaultazurecredential)
+
+The linked examples use [`DefaultAzureCredential`](/dotnet/api/overview/azure/identity-readme#defaultazurecredential). It's useful for most scenarios because the same pattern works in Azure (with managed identities) and on your local machine (without managed identities).
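+
+For instance, here's a minimal sketch with Key Vault's `SecretClient` (the vault URL and secret name are placeholders):
+
+```csharp
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+
+// In Azure, DefaultAzureCredential picks up the container app's managed identity;
+// on a local machine, it falls back to your developer credentials.
+var credential = new DefaultAzureCredential();
+var client = new SecretClient(new Uri("https://<your-vault>.vault.azure.net/"), credential);
+KeyVaultSecret secret = await client.GetSecretAsync("<secret-name>");
+```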
+
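+For example, a minimal sketch of reading a Key Vault secret with `DefaultAzureCredential`; the vault and secret names are placeholders, and the `Azure.Security.KeyVault.Secrets` package is assumed:
+
+```csharp
+using System;
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+
+// In Azure, DefaultAzureCredential resolves to the app's managed identity;
+// locally, it falls back to developer credentials.
+var client = new SecretClient(
+    new Uri("https://<vault-name>.vault.azure.net/"),
+    new DefaultAzureCredential());
+
+KeyVaultSecret secret = client.GetSecret("<secret-name>");
+```
+
+To target a user-assigned identity instead, pass `new DefaultAzureCredentialOptions { ManagedIdentityClientId = "<client-id>" }` to the credential constructor.
+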
+# [JavaScript](#tab/javascript)
+
+For Node.js apps, the simplest way to work with a managed identity is through the [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme). See the respective documentation headings of the client library for information:
+
+- [Add Azure Identity client library to your project](/javascript/api/overview/azure/identity-readme#install-the-package)
+- [Access Azure service with a system-assigned identity](/javascript/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
+- [Access Azure service with a user-assigned identity](/javascript/api/overview/azure/identity-readme#authenticating-a-user-assigned-managed-identity-with-defaultazurecredential)
+
+The linked examples use [`DefaultAzureCredential`](/javascript/api/overview/azure/identity-readme#defaultazurecredential). It's useful for most scenarios because the same pattern works in Azure (with managed identities) and on your local machine (without managed identities).
+
+For more code examples of the Azure Identity client library for JavaScript, see [Azure Identity examples](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/identity_2.0.1/sdk/identity/identity/samples/AzureIdentityExamples.md).
+
+# [Python](#tab/python)
+
+For Python apps, the simplest way to work with a managed identity is through the [Azure Identity client library for Python](/python/api/overview/azure/identity-readme). See the respective documentation headings of the client library for information:
+
+- [Add Azure Identity client library to your project](/python/api/overview/azure/identity-readme#getting-started)
+- [Access Azure service with a system-assigned identity](/python/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
+- [Access Azure service with a user-assigned identity](/python/api/overview/azure/identity-readme#authenticating-a-user-assigned-managed-identity-with-defaultazurecredential)
+
+The linked examples use [`DefaultAzureCredential`](/python/api/overview/azure/identity-readme#defaultazurecredential). It's useful for most scenarios because the same pattern works in Azure (with managed identities) and on your local machine (without managed identities).
+
+# [Java](#tab/java)
+
+For Java apps and functions, the simplest way to work with a managed identity is through the [Azure Identity client library for Java](/java/api/overview/azure/identity-readme). See the respective documentation headings of the client library for information:
+
+- [Add Azure Identity client library to your project](/java/api/overview/azure/identity-readme#include-the-package)
+- [Access Azure service with a system-assigned identity](/java/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
+- [Access Azure service with a user-assigned identity](/java/api/overview/azure/identity-readme#authenticating-a-user-assigned-managed-identity-with-defaultazurecredential)
+
+The linked examples use [`DefaultAzureCredential`](/azure/developer/java/sdk/identity-azure-hosted-auth#default-azure-credential). It's useful for most scenarios because the same pattern works in Azure (with managed identities) and on your local machine (without managed identities).
+
+For more code examples of the Azure Identity client library for Java, see [Azure Identity Examples](https://github.com/Azure/azure-sdk-for-java/wiki/Azure-Identity-Examples).
+
+# [PowerShell](#tab/powershell)
+
+Use the following script to retrieve a token from the local endpoint by specifying a resource URI of an Azure service. Replace the placeholder with the resource URI of the service you want a token for.
+
+```powershell
+$resourceURI = "https://<AAD-resource-URI>"
+$tokenAuthURI = $env:IDENTITY_ENDPOINT + "?resource=$resourceURI&api-version=2019-08-01"
+$tokenResponse = Invoke-RestMethod -Method Get -Headers @{"X-IDENTITY-HEADER"="$env:IDENTITY_HEADER"} -Uri $tokenAuthURI
+$accessToken = $tokenResponse.access_token
+```
+
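+As an illustrative follow-up, the token can then be sent to the target service as a bearer credential; the API path is a placeholder for whatever operation that service exposes:
+
+```powershell
+# Call the target service, passing the managed identity token as a bearer token.
+Invoke-RestMethod -Method Get -Uri "$resourceURI/<api-path>" -Headers @{ Authorization = "Bearer $accessToken" }
+```
+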
+# [HTTP GET](#tab/http)
+
+A raw HTTP GET request looks like the following example.
+
+The `X-IDENTITY-HEADER` header contains the GUID that is stored in the `IDENTITY_HEADER` environment variable.
+
+```http
+GET http://localhost:42356/msi/token?resource=https://vault.azure.net&api-version=2019-08-01 HTTP/1.1
+X-IDENTITY-HEADER: 853b9a84-5bfa-4b22-a3f3-0b9a43d9ad8a
+```
+
+A response might look like this example:
+
+```http
+HTTP/1.1 200 OK
+Content-Type: application/json
+
+{
+ "access_token": "eyJ0eXAi…",
+ "expires_on": "1586984735",
+ "resource": "https://vault.azure.net",
+ "token_type": "Bearer",
+ "client_id": "5E29463D-71DA-4FE0-8E69-999B57DB23B0"
+}
+
+```
+
+This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#service-to-service-access-token-response). To access Key Vault, you'll then add the value of `access_token` to a client connection with the vault.
+
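+For example, a sketch of such a request to Key Vault; the vault and secret names are placeholders, and the API version shown is one of the supported Key Vault REST API versions:
+
+```http
+GET https://<vault-name>.vault.azure.net/secrets/<secret-name>?api-version=7.3 HTTP/1.1
+Authorization: Bearer eyJ0eXAi…
+```
+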
+### REST endpoint reference
+
+> [!NOTE]
+> An older version of this endpoint, using the "2017-09-01" API version, used the `secret` header instead of `X-IDENTITY-HEADER` and only accepted the `clientid` property for user-assigned identities. It also returned `expires_on` in a timestamp format. `MSI_ENDPOINT` can be used as an alias for `IDENTITY_ENDPOINT`, and `MSI_SECRET` can be used as an alias for `IDENTITY_HEADER`. This version of the protocol is currently required for Linux Consumption hosting plans.
+
+A container app with a managed identity exposes the identity endpoint by defining two environment variables:
+
+- `IDENTITY_ENDPOINT` - the local URL from which your container app can request tokens.
+- `IDENTITY_HEADER` - a header used to help mitigate server-side request forgery (SSRF) attacks. The value is rotated by the platform.
+
+To get a token for a resource, make an HTTP GET request to this endpoint, including the following parameters:
+
+| Parameter name | In | Description|
+||||
+| resource | Query | The Azure AD resource URI of the resource for which a token should be obtained. This could be one of the [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) or any other resource URI. |
+| api-version | Query | The version of the token API to be used. Use "2019-08-01" or later. |
+| X-IDENTITY-HEADER | Header | The value of the `IDENTITY_HEADER` environment variable. This header mitigates server-side request forgery (SSRF) attacks. |
+| client_id | Query | (Optional) The client ID of the user-assigned identity to be used. Can't be used on a request that includes `principal_id`, `mi_res_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used.|
+| principal_id | Query | (Optional) The principal ID of the user-assigned identity to be used. `object_id` is an alias that may be used instead. Can't be used on a request that includes `client_id`, `mi_res_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
+| mi_res_id| Query | (Optional) The Azure resource ID of the user-assigned identity to be used. Can't be used on a request that includes `principal_id`, `client_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used.|
+
+> [!IMPORTANT]
+> If you're attempting to obtain tokens for user-assigned identities, you must include one of the optional properties. Otherwise, the token service will attempt to obtain a token for a system-assigned identity, which may or may not exist.
+
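+For example, a request for a user-assigned identity's token might look like the following sketch, which reuses the endpoint from the earlier example with an illustrative client ID placeholder:
+
+```http
+GET http://localhost:42356/msi/token?resource=https://vault.azure.net&api-version=2019-08-01&client_id=<CLIENT_ID> HTTP/1.1
+X-IDENTITY-HEADER: 853b9a84-5bfa-4b22-a3f3-0b9a43d9ad8a
+```
+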
+For more information on the REST endpoint, see [REST endpoint reference](#rest-endpoint-reference).
+
+---
+## View managed identities
+
+You can show the system-assigned and user-assigned managed identities using the following Azure CLI command. The output shows the managed identity type, tenant ID, and principal IDs of all managed identities assigned to your container app.
+
+```azurecli
+az containerapp identity show --name <APP_NAME> --resource-group <GROUP_NAME>
+```
+
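+For a container app with only a system-assigned identity, the output might resemble this sketch; the IDs are placeholders:
+
+```json
+{
+  "principalId": "<system-assigned-principal-id>",
+  "tenantId": "<tenant-id>",
+  "type": "SystemAssigned"
+}
+```
+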
+## Remove a managed identity
+
+When you remove a system-assigned identity, it's deleted from Azure Active Directory. System-assigned identities are also automatically removed from Azure Active Directory when you delete the container app resource itself. Removing user-assigned managed identities from your container app doesn't remove them from Azure Active Directory.
+
+# [Azure CLI](#tab/cli)
+
+To remove the system-assigned identity:
+
+```azurecli
+az containerapp identity remove --name <APP_NAME> --resource-group <GROUP_NAME> --system-assigned
+```
+
+To remove one or more user-assigned identities:
+
+```azurecli
+az containerapp identity remove --name <APP_NAME> --resource-group <GROUP_NAME> \
+ --user-assigned <IDENTITY1_RESOURCE_ID> <IDENTITY2_RESOURCE_ID>
+```
+
+To remove all user-assigned identities:
+
+```azurecli
+az containerapp identity remove --name <APP_NAME> --resource-group <GROUP_NAME> --user-assigned
+```
+
+# [ARM template](#tab/arm)
+
+To remove all identities, set the `type` of the container app's identity to `None` in the ARM template:
+
+```json
+"identity": {
+ "type": "None"
+}
+```
+
+---
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Monitor an app](monitor.md)
cosmos-db Optimize Dev Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-dev-test.md
This article describes the different options to use Azure Cosmos DB for developm
[Azure Cosmos DB emulator](local-emulator.md) is a local downloadable version that mimics the Azure Cosmos DB cloud service. You can write and test code that uses the Azure Cosmos DB APIs even if you have no network connection and without incurring any costs. Azure Cosmos DB emulator provides a local environment for development purposes with high fidelity to the cloud service. You can develop and test your application locally, without creating an Azure subscription. When you're ready to deploy your application to the cloud, update the connection string to connect to the Azure Cosmos DB endpoint in the cloud; no other modifications are needed. You can also [set up a CI/CD pipeline with the Azure Cosmos DB emulator](tutorial-setup-ci-cd.md) build task in Azure DevOps to run tests. You can get started by visiting the [Azure Cosmos DB emulator](local-emulator.md) article.
+## Try Azure Cosmos DB for free
+
+[Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) is a free-of-charge experience that lets you experiment with Azure Cosmos DB in the cloud without signing up for an Azure account or using your credit card. Try Azure Cosmos DB accounts are available for a limited time, currently 30 days, and you can renew them at any time. These accounts make it easy to evaluate Azure Cosmos DB, build and test an application, or work through the quickstarts and tutorials. You can also create a demo, perform unit testing, or even create a multi-region account and run an app on it without incurring any costs. In a Try Azure Cosmos DB account, you can have one shared throughput database with a maximum of 25 containers and 20,000 RU/s of throughput, or one container with up to 5,000 RU/s. To get started, see the [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) page.
+ ## Azure Cosmos DB free tier Azure Cosmos DB free tier makes it easy to get started, develop and test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 1000 RU/s and 25 GB of storage in the account free. Free tier lasts indefinitely for the lifetime of the account and comes with all the [benefits and features](introduction.md#key-benefits) of a regular Azure Cosmos DB account, including unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more. You can create a free tier account using Azure portal, CLI, PowerShell, and a Resource Manager template. To learn more, see how to [create a free tier account](free-tier.md) article and the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
-## Try Azure Cosmos DB for free
-
-[Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) is a free of charge experience that allows you to experiment with Azure Cosmos DB in the cloud without signing up for an Azure account or using your credit card. The Try Azure Cosmos DB accounts are available for a limited time, currently 30 days. You can renew them at any time. Try Azure Cosmos DB accounts makes it easy to evaluate Azure Cosmos DB, build and test an application or use the Quickstarts or tutorials. You can also create a demo, perform unit testing, or even create a multi-region account and run an app on it without incurring any costs. In a Try Azure Cosmos DB account, you can have one shared throughput database with a maximum of 25 containers and 20,000 RU/s of throughput, or one container with up to 5000 RU/s. To get started, see [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) page.
- ## Azure free account Azure Cosmos DB is included in the [Azure free account](https://azure.microsoft.com/free), which offers Azure credits and resources for free for a certain time period. Specifically for Azure Cosmos DB, this free account offers 25-GB storage and 400 RUs of provisioned throughput for the entire year. This experience enables any developer to easily test the features of Azure Cosmos DB or integrate it with other Azure services at zero cost. With Azure free account, you get a $200 credit to spend in the first 30 days. You wonΓÇÖt be charged, even if you start using the services until you choose to upgrade. To get started, visit [Azure free account](https://azure.microsoft.com/free) page.
You can get started with using the emulator or the free Azure Cosmos DB accounts
* Learn more about [Optimizing the cost of multi-region Azure Cosmos accounts](optimize-cost-regions.md) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Tutorial Sql Api Dotnet Bulk Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-sql-api-dotnet-bulk-import.md
Inside the `Main` method, add the following code to initialize the CosmosClient
[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=CreateClient)]
+> [!NOTE]
+> Once bulk execution is specified in the [CosmosClientOptions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions), it's effectively immutable for the lifetime of the CosmosClient. Changing the value has no effect.
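+
+For reference, a minimal sketch of opting into bulk execution at client construction time; the endpoint and key values are illustrative placeholders:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// Bulk execution is opted into once, when the client is constructed.
+CosmosClient client = new CosmosClient(
+    "<endpoint-url>",
+    "<authorization-key>",
+    new CosmosClientOptions { AllowBulkExecution = true });
+```
+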
+ After the bulk execution is enabled, the CosmosClient internally groups concurrent operations into single service calls. This way it optimizes the throughput utilization by distributing service calls across partitions, and finally assigning individual results to the original callers. You can then create a container to store all your items. Define `/pk` as the partition key, 50,000 RU/s as provisioned throughput, and a custom indexing policy that excludes all fields to optimize the write throughput. Add the following code after the CosmosClient initialization statement:
cost-management-billing Aws Integration Set Up Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
description: This article walks you through setting up and configuring AWS Cost and Usage report integration with Cost Management. Previously updated : 03/30/2022 Last updated : 04/13/2022
Use the **Cost & Usage Reports** page of the Billing and Cost Management console
7. Select **Next**. 8. For **S3 bucket**, choose **Configure**. 9. In the Configure S3 Bucket dialog box, enter a bucket name and the Region where you want to create a new bucket and choose **Next**.
-10. Select **I have confirmed that this policy is correct**, then click **Save**.
+10. Select **I have confirmed that this policy is correct**, then select **Save**.
11. (Optional) For Report path prefix, enter the report path prefix that you want prepended to the name of your report. If you don't specify a prefix, the default prefix is the name that you specified for the report. The date range has the `/report-name/date-range/` format. 12. For **Time unit**, choose **Hourly**.
-13. For **Report versioning**, choose whether you want each version of the report to overwrite the previous version, or if you want additional new reports.
+13. For **Report versioning**, choose whether you want each version of the report to overwrite the previous version, or if you want more new reports.
14. For **Enable data integration for**, no selection is required. 15. For **Compression**, select **GZIP**. 16. Select **Next**.
Use the Create a New Role wizard:
4. On the next page, select **Another AWS account**. 5. In **Account ID**, enter **432263259397**. 6. In **Options**, select **Require external ID (Best practice when a third party will assume this role)**.
-7. In **External ID**, enter the external ID which is a shared passcode between the AWS role and Cost Management. The same external ID is also used on the **New Connector** page in Cost Management. Microsoft recommends that you use a strong passcode policy when entering the external ID.
+7. In **External ID**, enter the external ID, which is a shared passcode between the AWS role and Cost Management. The same external ID is also used on the **New Connector** page in Cost Management. Microsoft recommends that you use a strong passcode policy when entering the external ID.
> [!NOTE] > Don't change the selection for **Require MFA**. It should remain cleared. 8. Select **Next: Permissions**.
-9. Select **Create policy**. A new browser tab opens. That's where you create a policy.
+9. Select **Create policy**. A new browser tab opens where you create a policy.
10. Select **Choose a service**. Configure permission for the Cost and Usage report:
Configure permissions for Policies
1. Select Access level > Read > **GetPolicyVersion**. 1. Select **Resources** > policy, and then select **Any**. These actions allow verification that only the minimal required set of permissions were granted to the connector. 1. Select role - **Add ARN**. The account number should be automatically populated.
-1. In **Role name with path** enter a role name and note it. You need to use it in the final role creation step.
+1. In **Role name with path**, enter a role name and note it. You need to use it in the final role creation step.
1. Select **Add**. 1. Select **Next: Tags**. You may enter tags you wish to use or skip this step. This step isn't required to create a connector in Cost Management. 1. Select **Next: Review Policy**.
The policy JSON should resemble the following example. Replace `bucketname` with
Use the following information to create an AWS connector and start monitoring your AWS costs.
+> [!NOTE]
+> The Connector for AWS remains active after the trial period ends if you set the auto-renew configuration to **On** during the initial setup. Otherwise, the connector is disabled following its trial. It may remain disabled for three months before it's permanently deleted. After the connector is deleted, the same connection can't be reactivated. For assistance with a disabled connector or to create a new connection after it's deleted, create a [support request in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+ ### Prerequisites - Ensure you have at least one management group enabled. A management group is required to link your subscription to the AWS service. For more information about creating a management group, see [Create a management group in Azure](../../governance/management-groups/create-management-group-portal.md). - Ensure that you're an administrator of the subscription.-- Complete the set up required for a new AWS connector, as described in the [Create a Cost and Usage report in AWS](#create-a-cost-and-usage-report-in-aws) section.
+- Complete the setup required for a new AWS connector, as described in the [Create a Cost and Usage report in AWS](#create-a-cost-and-usage-report-in-aws) section.
### Create a new connector
After you create the connector, we recommend that you assign access control to i
Assigning connector permissions to users after discovery occurs doesn't assign permissions to the existing AWS scopes. Instead, only new linked accounts are assigned permissions.
-## Take additional steps
+## Take other steps
- [Set up management groups](../../governance/management-groups/overview.md#initial-setup-of-management-groups), if you haven't already. - Check that new scopes are added to your scope picker. Select **Refresh** to view the latest data.
cost-management-billing Billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-subscription-transfer.md
tags: billing,top-support-issue
Previously updated : 11/17/2021 Last updated : 04/07/2022 # Transfer billing ownership of an Azure subscription to another account
-This article shows the steps needed to transfer billing ownership of an Azure subscription to another account. Before you transfer billing ownership for a subscription, read [About transferring billing ownership for an Azure subscription](subscription-transfer.md).
+This article shows the steps needed to transfer billing ownership of an Azure subscription to another account. Before you transfer billing ownership for a subscription, read [Azure subscription and reservation transfer hub](subscription-transfer.md) to ensure that your transfer type is supported.
If you want to keep your billing ownership but change subscription type, see [Switch your Azure subscription to another offer](switch-azure-offer.md). To control who can access resources in the subscription, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
Additionally, Azure shows a banner in the subscription's details window in the A
<a name="no-button"></a>
-The self-service subscription transfer isn't available for your billing account. Currently, we don't support transferring the billing ownership of subscriptions in Enterprise Agreement (EA) accounts in the Azure portal. Also, Microsoft Customer Agreement accounts that are created while working with a Microsoft representative don't support transferring billing ownership.
+The self-service subscription transfer isn't available for your billing account. For more information, see [Azure subscription and reservation transfer hub](subscription-transfer.md) to ensure that your transfer type is supported.
### Not all subscription types can transfer
cost-management-billing Billing Troubleshoot Azure Payment Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-troubleshoot-azure-payment-issues.md
tags: billing
Previously updated : 03/22/2022 Last updated : 04/13/2022
You might be using an email ID that differs from the one that is used for the su
To troubleshoot this issue, see [No subscriptions found sign-in error for Azure portal](no-subscriptions-found.md).
-## Unable to use a virtual or prepaid credit or debit card as a payment method.
+## Unable to use a virtual or prepaid credit card as a payment method
-* Virtual or prepaid credit cards aren't accepted as payment for Azure subscriptions.
-* Debit cards aren't accepted as payment for Azure subscriptions.
+Virtual or prepaid credit cards aren't accepted as payment for Azure subscriptions.
For more information, see [Troubleshoot a declined card at Azure sign-up](troubleshoot-declined-card.md).
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mpa-request-ownership.md
tags: billing
Previously updated : 01/05/2022 Last updated : 04/08/2022 # Get billing ownership of Azure subscriptions to your MPA account
-To provide a single combined invoice for managed services and Azure consumption, a Cloud Solution Provider (CSP) can take over billing ownership of Azure subscriptions from their customers with Direct Enterprise Agreements (EA).
+An Azure Expert MSP can request to transfer their customer's Enterprise subscriptions and reservations to the Microsoft Partner Agreement (MPA) that they manage. Supported billing ownership transfer options for subscriptions and reservations include:
+
+- A direct Enterprise Agreement transfer to MPA
+- An enterprise Microsoft Customer Agreement transfer to MPA
+
+> [!NOTE]
+> Indirect Enterprise Agreement transfer to a Microsoft Customer Agreement isn't supported.
This feature is available only for CSP Direct Bill Partners certified as [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp). It's subject to Microsoft governance and policies and might require review and approval for certain customers.
When you send or accept transfer request, you agree to terms and conditions. For
## Supported subscription types
-You can request billing ownership of the subscription types listed below.
+You can request billing ownership of the following subscription types.
-* [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/)\*
+* [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/)<sup>1</sup>
* [Microsoft Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/)
+* Azure Plan<sup>1</sup> [(Microsoft Customer Agreement in Enterprise Motion)](https://www.microsoft.com/Licensing/how-to-buy/microsoft-customer-agreement)
-\* You must convert a Dev/Test subscription to an EA Enterprise offer via a support ticket. An Enterprise Dev/Test subscription will be billed at a pay-as-you-go rate after it's transferred. Any discount offered via the Enterprise Dev/Test offer through the customer's EA won't be available to the CSP partner.
+<sup>1</sup> You must convert an EA Dev/Test subscription to an EA Enterprise offer, and an Azure Plan Dev/Test offer to an Azure plan, by using a support ticket. A Dev/Test subscription will be billed at a pay-as-you-go rate after conversion. There's currently no discount available through the Dev/Test offer to CSP partners.
## Additional information
If these two directories donΓÇÖt match, the subscriptions couldn't be transferre
### EA subscription in the non-organization directory
-The EA subscriptions from non-organization directories can be transferred as long as the directory has a reseller relationship with the CSP. If the directory doesnΓÇÖt have a reseller relationship, you need to make sure to have the organization user in the directory as a *Global Administrator* who can accept the partner relationship. The domain name portion of the username must either be the initial default domain name "[domain name]. onmicrosoft.com" or a verified, non-federated custom domain name such as "contoso.com."
+The EA subscriptions from non-organization directories can be transferred as long as the directory has a reseller relationship with the CSP. If the directory doesnΓÇÖt have a reseller relationship, you need to make sure to have the organization user in the directory as a *Global Administrator* who can accept the partner relationship. The domain name portion of the username must either be the initial default domain name *[domain name].onmicrosoft.com* or a verified, non-federated custom domain name such as *contoso.com*.
-To add new user to the directory, see [Quickstart: Add new users to Azure Active Directory to add the new user to the directory](../../active-directory/fundamentals/add-users-azure-active-directory.md).
+To add a new user to the directory, see [Quickstart: Add new users to Azure Active Directory to add the new user to the directory](../../active-directory/fundamentals/add-users-azure-active-directory.md).
## Check access to a Microsoft Partner Agreement
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 03/01/2022 Last updated : 04/07/2022
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| MCA - Enterprise | MOSP | <ul><li> Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li> Reservations don't automatically transfer and transferring them isn't supported. | | MCA - Enterprise | MCA - individual | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. | | MCA - Enterprise | MCA - Enterprise | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
-| MCA - Enterprise | MPA | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
+| MCA - Enterprise | MPA | <ul><li> Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program. <li> Self-service reservation transfers are supported. <li> There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-subscriptions-to-a-csp-partner). |
| Previous Azure offer in CSP | Previous Azure offer in CSP | <ul><li> Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li> Reservations don't automatically transfer and transferring them isn't supported. | | Previous Azure offer in CSP | MPA | For details, see [Transfer a customer's Azure subscriptions to a different CSP (under an Azure plan)](/partner-center/transfer-azure-subscriptions-under-azure-plan). | | MPA | EA | <ul><li> Automatic transfer isn't supported. Any transfer requires resources to move from the existing MPA product manually to a newly created or an existing EA product. <li> Use the information in the [Perform resource transfers](#perform-resource-transfers) section. <li> Reservations don't automatically transfer and transferring them isn't supported. |
cost-management-billing Troubleshoot Azure Sign Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-azure-sign-up.md
Here are some additional tips:
#### Credit card declined or not accepted
-Virtual or pre-paid credit or debit cards aren't accepted as payment for Azure subscriptions. To see what else may cause your card to be declined, see [Troubleshoot a declined card at Azure sign-up](./troubleshoot-declined-card.md).
+Virtual or pre-paid credit cards aren't accepted as payment for Azure subscriptions. To see what else may cause your card to be declined, see [Troubleshoot a declined card at Azure sign-up](./troubleshoot-declined-card.md).
#### Credit card form doesn't support my billing address
data-catalog Data Catalog Adopting Data Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-adopting-data-catalog.md
Title: Approach and process for adopting Azure Data Catalog description: This article presents an approach and process for organizations considering adopting Azure Data Catalog, including defining a vision, identifying key business use cases, and choosing a pilot project.-- Last updated 02/17/2022
data-catalog Data Catalog Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-common-scenarios.md
Title: Azure Data Catalog common scenarios description: An overview of common scenarios for Azure Data Catalog, including the registration and discovery of high-value data sources, enabling self-service business intelligence, and capturing existing knowledge about data sources and processes.-- Last updated 02/22/2022
data-catalog Data Catalog Developer Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-developer-concepts.md
Title: Azure Data Catalog developer concepts description: Introduction to the key concepts in Azure Data Catalog conceptual model, as exposed through the Catalog REST API.-- Last updated 02/16/2022
data-catalog Data Catalog Dsr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-dsr.md
Title: Supported data sources in Azure Data Catalog description: This article lists specifications of the currently supported data sources for Azure Data Catalog.-- Last updated 02/24/2022
data-catalog Data Catalog Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-get-started.md
Title: 'Quickstart: Create an Azure Data Catalog' description: This quickstart describes how to create an Azure Data Catalog using the Azure portal.-- Last updated 02/25/2022
data-catalog Data Catalog How To Annotate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-annotate.md
Title: How to annotate data sources in Azure Data Catalog description: How-to article highlighting how to annotate data assets in Azure Data Catalog, including friendly names, tags, descriptions, and experts.-- Last updated 02/18/2022
data-catalog Data Catalog How To Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-big-data.md
Title: How to catalog big data in Azure Data Catalog description: How-to article highlighting patterns for using Azure Data Catalog with 'big data' data sources, including Azure Blob Storage, Azure Data Lake, and Hadoop HDFS.-- Last updated 02/14/2022
data-catalog Data Catalog How To Business Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-business-glossary.md
Title: Set up the business glossary in Azure Data Catalog description: How-to article highlighting the business glossary in Azure Data Catalog for defining and using a common business vocabulary to tag registered data assets.-- Last updated 02/23/2022
data-catalog Data Catalog How To Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-connect.md
Title: How to connect to data sources in Azure Data Catalog description: How-to article highlighting how to connect to data sources discovered with Azure Data Catalog.-- Last updated 02/22/2022
data-catalog Data Catalog How To Data Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-data-profile.md
Title: How to use data profiling data sources in Azure Data Catalog description: How-to article highlighting how to include table- and column-level data profiles when registering data sources in Azure Data Catalog, and how to use data profiles to understand data sources.-- Last updated 02/18/2022
data-catalog Data Catalog How To Discover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-discover.md
Title: How to discover data sources in Azure Data Catalog description: This article highlights how to discover registered data assets with Azure Data Catalog, including searching and filtering and using the hit highlighting capabilities of the Azure Data Catalog portal.-- Last updated 02/24/2022
data-catalog Data Catalog How To Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-documentation.md
Title: How to document data sources in Azure Data Catalog description: How-to article highlighting how to document data assets in Azure Data Catalog.-- Last updated 02/17/2022
data-catalog Data Catalog How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-manage.md
Title: Manage data assets in Azure Data Catalog description: The article highlights how to control visibility and ownership of data assets registered in Azure Data Catalog.-- Last updated 02/15/2022
data-catalog Data Catalog How To Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-register.md
Title: Register data sources in Azure Data Catalog description: This article highlights how to register data sources in Azure Data Catalog, including the metadata fields extracted during registration.-- Last updated 02/25/2022
data-catalog Data Catalog How To Save Pin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-save-pin.md
Title: Save searches and pin data assets in Azure Data Catalog description: How-to for saving data sources and data assets for later use in Azure Data Catalog.-- Last updated 02/10/2022
data-catalog Data Catalog How To Secure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-secure-catalog.md
Title: How to secure access to Azure Data Catalog description: This article explains how to secure a data catalog and its data assets in Azure Data Catalog.-- Last updated 02/14/2022
data-catalog Data Catalog How To View Related Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-view-related-data-assets.md
Title: How to view related data assets in Azure Data Catalog description: This article explains how to view related data assets of a selected data asset in Azure Data Catalog.-- Last updated 02/11/2022
data-catalog Data Catalog Keyboard Shortcuts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-keyboard-shortcuts.md
Title: Keyboard shortcuts for Azure Data Catalog description: This article shows a list of keyboard shortcuts that you can use in Azure Data Catalog.-- Last updated 02/11/2022
data-catalog Data Catalog Migration To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-migration-to-azure-purview.md
Title: Migrate from Azure Data Catalog to Azure Purview description: Steps to migrate from Azure Data Catalog to Microsoft's unified data governance service--Azure Purview.-- Last updated 01/24/2022
data-catalog Data Catalog Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-samples.md
Title: Azure Data Catalog developer samples description: This article provides an overview of the available developer samples for the Data Catalog REST API. -- Last updated 02/16/2022
data-catalog Data Catalog Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-terminology.md
Title: Azure Data Catalog terminology description: This article provides an introduction to concepts and terms used in Azure Data Catalog documentation.-- Last updated 02/15/2022
data-catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/overview.md
Title: Introduction to Azure Data Catalog description: This article provides an overview of Microsoft Azure Data Catalog, including its features and the problems it addresses. Data Catalog enables any user to register, discover, understand, and consume data sources.-- Last updated 02/24/2022
data-catalog Register Data Assets Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/register-data-assets-tutorial.md
Title: 'Tutorial: Register data assets in Azure Data Catalog' description: This tutorial describes how to register data assets in your Azure Data Catalog. -- Last updated 02/24/2022
data-catalog Troubleshoot Policy Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/troubleshoot-policy-configuration.md
Title: How to troubleshoot Azure Data Catalog description: This article describes common troubleshooting concerns for Azure Data Catalog resources. -- Last updated 02/10/2022
data-factory Connector Dataworld https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dataworld.md
+
+ Title: Transform data in data.world (Preview)
+
+description: Learn how to transform data in data.world (Preview) by using Data Factory or Azure Synapse Analytics.
++++++ Last updated : 04/12/2022++
+# Transform data in data.world (Preview) using Azure Data Factory or Synapse Analytics
++
+This article outlines how to use Data Flow to transform data in data.world (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+
+## Supported capabilities
+
+This data.world connector is supported for the following activities:
+
+- [Mapping data flow](concepts-data-flow-overview.md)
+
+## Create a data.world linked service using UI
+
+Use the following steps to create a data.world linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory U I.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse U I.":::
+
+2. Search for data.world (Preview) and select the data.world (Preview) connector.
+
+ :::image type="content" source="media/connector-dataworld/dataworld-connector.png" alt-text="Screenshot showing selecting data.world connector.":::
+
+3. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-dataworld/configure-dataworld-linked-service.png" alt-text="Screenshot of configuration for data.world linked service.":::
+
+## Connector configuration details
+
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to data.world.
+
+## Linked service properties
+
+The following properties are supported for the data.world linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **Dataworld**. |Yes |
+| apiToken | Specify an API token for the data.world. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "DataworldLinkedService",
+ "properties": {
+ "type": "Dataworld",
+ "typeProperties": {
+ "apiToken": {
+ "type": "SecureString",
+ "value": "<API token>"
+ }
+ }
+ }
+}
+```
+
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read tables from data.world. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
++
+### Source transformation
+
+The following table lists the properties supported by the data.world source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Dataset name| The ID of the dataset in data.world.| Yes | String | datasetId |
+| Table name | The ID of the table within the dataset in data.world. | No (if `query` is specified) | String | tableId |
+| Query | Enter a SQL query to fetch data from data.world. An example is `select * from MyTable`.| No (if `tableId` is specified)| String | query |
+| Owner | The owner of the dataset in data.world. | Yes | String | owner |
+
+#### data.world source script example
+
+When you use data.world as source type, the associated data flow script is:
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'dataworld',
+ format: 'rest',
+ owner: 'owner1',
+ datasetId: 'dataset1',
+ tableId: 'MyTable') ~> DataworldSource
+```
+
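+If you fetch data with a query instead of a table ID, the script might instead look like this sketch; the query shown is illustrative:
+
+```
+source(allowSchemaDrift: true,
+    validateSchema: false,
+    store: 'dataworld',
+    format: 'rest',
+    owner: 'owner1',
+    datasetId: 'dataset1',
+    query: 'select * from MyTable') ~> DataworldSource
+```
+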
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-twilio.md
+
+ Title: Transform data in Twilio (Preview)
+
+description: Learn how to transform data in Twilio (Preview) by using Data Factory or Azure Synapse Analytics.
++++++ Last updated : 04/12/2022++
+# Transform data in Twilio (Preview) using Azure Data Factory or Synapse Analytics
++
+This article outlines how to use Data Flow to transform data in Twilio (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+
+## Supported capabilities
+
+This Twilio connector is supported for the following activities:
+
+- [Mapping data flow](concepts-data-flow-overview.md)
+
+## Create a Twilio linked service using UI
+
+Use the following steps to create a Twilio linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory U I.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse U I.":::
+
+2. Search for Twilio (Preview) and select the Twilio (Preview) connector.
+
+ :::image type="content" source="media/connector-twilio/twilio-connector.png" alt-text="Screenshot showing selecting Twilio connector.":::
+
+3. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-twilio/configure-twilio-linked-service.png" alt-text="Screenshot of configuration for Twilio linked service.":::
+
+## Connector configuration details
+
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to Twilio.
+
+## Linked service properties
+
+The following properties are supported for the Twilio linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **Twilio**. | Yes |
+| userName | The account SID of your Twilio account. | No |
+| password | The auth token of your Twilio account. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "TwilioLinkedService",
+ "properties": {
+ "type": "Twilio",
+ "typeProperties": {
+ "userName": "<account SID>",
+ "password": {
+ "type": "SecureString",
+ "value": "<auth token>"
+ }
+ }
+ }
+}
+```
+++
+### Source transformation
+
+When transforming data in mapping data flow, you can read resources from Twilio. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
+
+The following table lists the properties supported by the Twilio source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Resource | The type of resource that the data flow fetches from Twilio. | Yes | `Messages`<br>`Calls` | resource |
+| From | The phone number, with country code, that the message or call was sent from (for example, `+17755425856`). | No | String | from |
+| To | The phone number, with country code, that the message or call was sent to (for example, `+17755425856`). | No | String | to |
+
+#### Twilio source script example
+
+When you use Twilio as source type, the associated data flow script is:
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'twilio',
+ format: 'rest',
+ resource: 'Messages',
+ from: '+17755425856') ~> TwilioSource
+```
+
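+To fetch call records instead, a sketch using the `Calls` resource and the `to` filter might look like the following; the phone number is illustrative:
+
+```
+source(allowSchemaDrift: true,
+    validateSchema: false,
+    store: 'twilio',
+    format: 'rest',
+    resource: 'Calls',
+    to: '+17755425856') ~> TwilioSource
+```
+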
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-lake-analytics Data Lake Analytics Manage Use Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-portal.md
Last updated 12/05/2016+ # Manage Azure Data Lake Analytics using the Azure portal [!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)]
Use the Data Lake Analytics Developer role to enable U-SQL developers to use the
### Add users or security groups to a Data Lake Analytics account 1. In the Azure portal, go to your Data Lake Analytics account.
-2. Click **Access control (IAM)** > **Add role assignment**.
-3. Select a role.
-4. Add a user.
-5. Click **OK**.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+
+1. Assign a role to a user. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
>[!NOTE] >If a user or a security group needs to submit jobs, they also need permission on the store account. For more information, see [Secure data stored in Data Lake Store](../data-lake-store/data-lake-store-secure-data.md).
data-share Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/concepts-roles-permissions.md
Last updated 02/07/2022+ # Roles and requirements for Azure Data Share
For SQL snapshot-based sharing, a SQL user needs to be created from an external
### Data provider For storage and data lake snapshot-based sharing, to add a dataset in Azure Data Share, provider data share resource's managed identity needs to be granted access to the source Azure data store. For example, if using a storage account, the data share resource's managed identity is granted the *Storage Blob Data Reader* role. This is done automatically by the Azure Data Share service when user is adding dataset via Azure portal and the user has the proper permission. For example, user is an owner of the Azure data store, or is a member of a custom role that has the *Microsoft.Authorization/role assignments/write* permission assigned.
-Alternatively, user can have owner of the Azure data store add the data share resource's managed identity to the Azure data store manually. This action only needs to be performed once per data share resource. To create a role assignment for the data share resource's managed identity manually, follow the below steps.
+Alternatively, user can have owner of the Azure data store add the data share resource's managed identity to the Azure data store manually. This action only needs to be performed once per data share resource.
+
+To create a role assignment for the data share resource's managed identity manually, use the following steps:
1. Navigate to the Azure data store.+ 1. Select **Access Control (IAM)**.
-1. Select **Add a role assignment**.
-1. Under *Role*, select the role in the role assignment table above (for example, for storage account, select *Storage Blob Data Reader*).
-1. Under *Select*, type in the name of your Azure Data Share resource.
-1. Select *Save*.
-To learn more about role assignment, refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). If you're sharing data using REST APIs, you can create role assignment using API by referencing [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md).
+1. Select **Add > Add role assignment**.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
+
+1. On the **Roles** tab, select one of the roles listed in the role assignment table in the previous section.
+
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+
+1. Select **System-assigned managed identity**, search for your Azure Data Share resource, and then select it.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+To learn more about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). If you're sharing data using REST APIs, you can create role assignment using API by referencing [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md).
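+
+As a sketch, the same assignment with the Azure CLI might look like the following; the object ID, role, and scope values are illustrative:
+
+```azurecli
+az role assignment create \
+  --assignee-object-id <DATA_SHARE_MANAGED_IDENTITY_OBJECT_ID> \
+  --assignee-principal-type ServicePrincipal \
+  --role "Storage Blob Data Reader" \
+  --scope <STORAGE_ACCOUNT_RESOURCE_ID>
+```
+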
For SQL snapshot-based sharing, a SQL user needs to be created from an external provider in SQL Database with the same name as the Azure Data Share resource while connecting to SQL database using Azure Active Directory authentication. This user needs to be granted *db_datareader* permission. A sample script along with other prerequisites for SQL-based sharing can be found in the [Share from Azure SQL Database or Azure Synapse Analytics](how-to-share-from-sql.md) tutorial.
To receive data into storage account, consumer data share resource's managed ide
Alternatively, user can have owner of the storage account add the data share resource's managed identity to the storage account manually. This action only needs to be performed once per data share resource. To create a role assignment for the data share resource's managed identity manually, follow the below steps. 1. Navigate to the Azure data store.+ 1. Select **Access Control (IAM)**.
-1. Select **Add a role assignment**.
-1. Under *Role*, select the role in the role assignment table above (for example, for storage account, select *Storage Blob Data Reader*).
-1. Under *Select*, type in the name of your Azure Data Share resource.
-1. Select *Save*.
-To learn more about role assignment, refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). If you're receiving data using REST APIs, you can create role assignment using API by referencing [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md).
+1. Select **Add > Add role assignment**.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
+
+1. On the **Roles** tab, select one of the roles listed in the role assignment table in the previous section. For example, for a storage account, select **Storage Blob Data Reader**.
+
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+
+1. Select **System-assigned managed identity**, search for your Azure Data Share resource, and then select it.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+To learn more about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). If you're receiving data using REST APIs, you can create the role assignment by using the API. For details, see [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md).
For a SQL-based target, a SQL user needs to be created from an external provider in SQL Database, with the same name as the Azure Data Share resource, while you're connected to the SQL database using Azure Active Directory authentication. This user needs to be granted the *db_datareader*, *db_datawriter*, and *db_ddladmin* permissions. A sample script, along with other prerequisites for SQL-based sharing, can be found in the [Share from Azure SQL Database or Azure Synapse Analytics](how-to-share-from-sql.md) tutorial.
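
As a rough sketch of what such a script does (the linked tutorial has the authoritative version), the SQL user and role grants can be applied from PowerShell while authenticated with Azure AD. The server, database, and Data Share account names here are placeholders:

```azurepowershell-interactive
# Requires the SqlServer module and an Azure AD identity that's allowed to
# create users in the target database. Illustrative names only.
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token

$query = @"
CREATE USER [myDataShareAccount] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [myDataShareAccount];
ALTER ROLE db_datawriter ADD MEMBER [myDataShareAccount];
ALTER ROLE db_ddladmin ADD MEMBER [myDataShareAccount];
"@

Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mydatabase" `
    -AccessToken $token -Query $query
```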
databox Data Box Deploy Export Copy Data Via Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-copy-data-via-nfs.md
Previously updated : 12/18/2020 Last updated : 04/04/2022 #Customer intent: As an IT admin, I need to be able to copy data exported from Azure to Data Box, to an on-premises data server.
databox Data Box Deploy Export Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-copy-data.md
Previously updated : 05/17/2021 Last updated : 04/04/2022 # Customer intent: As an IT admin, I need to be able to copy data from Data Box to download from Azure to my on-premises server.
databox Data Box Deploy Set Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-set-up.md
Previously updated : 08/23/2021 Last updated : 04/06/2022 # Customer intent: As an IT admin, I need to be able to set up Data Box to upload on-premises data from my server onto Azure.
databox Data Box Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-overview.md
Previously updated : 01/28/2022 Last updated : 04/06/2022 #Customer intent: As an IT admin, I need to understand what Data Box is and how it works so I can use it to import on-premises data into Azure or export data from Azure.
databox Data Box Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-security.md
Previously updated : 12/16/2020 Last updated : 04/13/2022 # Azure Data Box security and data protection
The Data Box device is protected by the following features:
- A rugged device casing that withstands shocks, adverse transportation, and environmental conditions.
- Hardware and software tampering detection that prevents further device operations.
+- A Trusted Platform Module (TPM) that performs hardware-based, security-related functions. Specifically, the TPM manages and protects secrets and data that needs to be persisted on the device.
- Runs only Data Box-specific software.
- Boots up in a locked state.
- Controls device access via a device unlock passkey. This passkey is protected by an encryption key. You can use your own customer-managed key to protect the passkey. For more information, see [Use customer-managed keys in Azure Key Vault for Azure Data Box](data-box-customer-managed-encryption-key-portal.md).
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
Previously updated : 05/17/2019 Last updated : 04/13/2022 # Quickstart: Create and configure Azure DDoS Protection Standard
-Get started with Azure DDoS Protection Standard by using the Azure portal.
+Get started with Azure DDoS Protection Standard by using the Azure portal.
-A DDoS protection plan defines a set of virtual networks that have DDoS protection standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
+A DDoS protection plan defines a set of virtual networks that have DDoS protection standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
-In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
+In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
## Prerequisites
In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
## Create a DDoS protection plan

1. Select **Create a resource** in the upper left corner of the Azure portal.
-2. Search the term *DDoS*. When **DDoS protection plan** appears in the search results, select it.
-3. Select **Create**.
-4. Enter or select the following values, then select **Create**:
+1. Search the term *DDoS*. When **DDoS protection plan** appears in the search results, select it.
+1. Select **Create**.
+1. Enter or select the following values.
|Setting |Value |
| --- | --- |
- |Name | Enter _MyDdosProtectionPlan_. |
|Subscription | Select your subscription. |
- |Resource group | Select **Create new** and enter _MyResourceGroup_.|
- |Location | Enter _East US_. |
+ |Resource group | Select **Create new** and enter **MyResourceGroup**.|
+ |Name | Enter **MyDdosProtectionPlan**. |
+ |Region | Enter **East US**. |
-## Enable DDoS protection for a virtual network
+1. Select **Review + create**, and then select **Create**.
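
If you'd rather script these steps, a minimal Azure PowerShell sketch of the same plan creation, using the example names from the table above, looks like this:

```azurepowershell-interactive
# Create the resource group and the DDoS protection plan used in this quickstart.
New-AzResourceGroup -Name "MyResourceGroup" -Location "eastus"

New-AzDdosProtectionPlan -ResourceGroupName "MyResourceGroup" `
    -Name "MyDdosProtectionPlan" -Location "eastus"
```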
+## Enable DDoS protection for a virtual network
### Enable DDoS protection for a new virtual network

1. Select **Create a resource** in the upper left corner of the Azure portal.
-2. Select **Networking**, and then select **Virtual network**.
-3. Enter or select the following values, accept the remaining defaults, and then select **Create**:
+1. Select **Networking**, and then select **Virtual network**.
+1. Enter or select the following values.
| Setting | Value |
| --- | --- |
- | Name | Enter _MyVnet_. |
| Subscription | Select your subscription. |
| Resource group | Select **Use existing**, and then select **MyResourceGroup**. |
- | Location | Enter _East US_ |
- | DDoS Protection Standard | Select **Enable**. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant.|
+ | Name | Enter **MyVnet**. |
+ | Region | Enter **East US**. |
+
+1. Select **Next: IP Addresses** and enter the following values.
+
+ | Setting | Value |
+ | --- | --- |
+ | IPv4 address space | Enter **10.1.0.0/16**. |
+ | Subnet name | Under **Subnet name**, select the **Add subnet** link and enter **mySubnet**. |
+ | Subnet address range | Enter **10.1.0.0/24**. |
+
+1. Select **Add**.
+1. Select **Next: Security**.
+1. Under **DDoS Protection Standard**, select **Enable**.
+1. Select **MyDdosProtectionPlan** from the **DDoS protection plan** pane. The plan you select can be in the same or a different subscription than the virtual network, but both subscriptions must be associated with the same Azure Active Directory tenant.
+1. Select **Review + create**, and then select **Create**.
-You cannot move a virtual network to another resource group or subscription when DDoS Standard is enabled for the virtual network. If you need to move a virtual network with DDoS Standard enabled, disable DDoS Standard first, move the virtual network, and then enable DDoS standard. After the move, the auto-tuned policy thresholds for all the protected public IP addresses in the virtual network are reset.
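
For reference, a minimal Azure PowerShell sketch of the same virtual network configuration, assuming the plan created earlier in this quickstart:

```azurepowershell-interactive
# Create MyVnet with DDoS Protection Standard enabled and linked to the plan.
$plan = Get-AzDdosProtectionPlan -ResourceGroupName "MyResourceGroup" -Name "MyDdosProtectionPlan"
$subnet = New-AzVirtualNetworkSubnetConfig -Name "mySubnet" -AddressPrefix "10.1.0.0/24"

New-AzVirtualNetwork -Name "MyVnet" -ResourceGroupName "MyResourceGroup" `
    -Location "eastus" -AddressPrefix "10.1.0.0/16" -Subnet $subnet `
    -EnableDdosProtection -DdosProtectionPlanId $plan.Id
```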
### Enable DDoS protection for an existing virtual network

1. Create a DDoS protection plan by completing the steps in [Create a DDoS protection plan](#create-a-ddos-protection-plan), if you don't have an existing DDoS protection plan.
-2. Enter the name of the virtual network that you want to enable DDoS Protection Standard for in the **Search resources, services, and docs box** at the top of the Azure portal. When the name of the virtual network appears in the search results, select it.
-3. Select **DDoS protection**, under **SETTINGS**.
-4. Select **Standard**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then select **Save**. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant.
+1. Enter the name of the virtual network that you want to enable DDoS Protection Standard for in the **Search resources, services, and docs box** at the top of the Azure portal. When the name of the virtual network appears in the search results, select it.
+1. Select **DDoS protection**, under **SETTINGS**.
+1. Select **Standard**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then select **Save**. The plan you select can be in the same or a different subscription than the virtual network, but both subscriptions must be associated with the same Azure Active Directory tenant.
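
The equivalent change can also be sketched in Azure PowerShell, assuming the example names used earlier:

```azurepowershell-interactive
# Link an existing virtual network to the plan and enable protection.
$vnet = Get-AzVirtualNetwork -Name "MyVnet" -ResourceGroupName "MyResourceGroup"
$plan = Get-AzDdosProtectionPlan -ResourceGroupName "MyResourceGroup" -Name "MyDdosProtectionPlan"

$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork
```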
-### Configure an Azure DDoS Protection Plan using Azure Firewall Manager (preview)
+## Configure an Azure DDoS Protection Plan using Azure Firewall Manager (preview)
-Azure Firewall Manager is a platform to manage and protect your network resources at scale. You can associate your virtual networks with a DDoS protection plan within Azure Firewall Manager. This functionality is currently available in Public Preview. See [Configure an Azure DDoS Protection Plan using Azure Firewall Manager](../firewall-manager/configure-ddos.md)
+Azure Firewall Manager is a platform to manage and protect your network resources at scale. You can associate your virtual networks with a DDoS protection plan within Azure Firewall Manager. This functionality is currently available in Public Preview. See [Configure an Azure DDoS Protection Plan using Azure Firewall Manager](../firewall-manager/configure-ddos.md).
-### Enable DDoS protection for all virtual networks
+## Enable DDoS protection for all virtual networks
-This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d) will detect any virtual networks in a defined scope that do not have DDoS Protection Standard enabled, then optionally create a remediation task that will create the association to protect the VNet. See [Azure Policy built-in definitions for Azure DDoS Protection Standard](policy-reference.md) for full list of built-in policies.
+This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d) will detect any virtual networks in a defined scope that don't have DDoS Protection Standard enabled. The policy can then optionally create a remediation task that creates the association to protect the virtual network. See [Azure Policy built-in definitions for Azure DDoS Protection Standard](policy-reference.md) for a full list of built-in policies.
## Validate and test

First, check the details of your DDoS protection plan:

1. Select **All services** on the top, left of the portal.
-2. Enter *DDoS* in the **Filter** box. When **DDoS protection plans** appear in the results, select it.
-3. Select your DDoS protection plan from the list.
+1. Enter *DDoS* in the **Filter** box. When **DDoS protection plans** appears in the results, select it.
+1. Select your DDoS protection plan from the list.
-The _MyVnet_ virtual network should be listed.
+The _MyVnet_ virtual network should be listed.
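
You can make the same check from Azure PowerShell; the plan's `VirtualNetworks` property should include _MyVnet_:

```azurepowershell-interactive
# List the virtual networks linked to the plan.
Get-AzDdosProtectionPlan -ResourceGroupName "MyResourceGroup" -Name "MyDdosProtectionPlan" |
    Select-Object -ExpandProperty VirtualNetworks
```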
-### View protected resources
+## View protected resources
Under **Protected resources**, you can view your protected virtual networks and public IP addresses, or add more virtual networks to your DDoS protection plan:
-![View protected resources](./media/manage-ddos-protection/ddos-protected-resources.png)
## Clean up resources
You can keep your resources for the next tutorial. If no longer needed, delete the _MyResourceGroup_ resource group:
1. In the Azure portal, search for and select **Resource groups**, or select **Resource groups** from the Azure portal menu.
-2. Filter or scroll down to find the _MyResourceGroup_ resource group.
+1. Filter or scroll down to find the _MyResourceGroup_ resource group.
-3. Select the resource group, then select **Delete resource group**.
+1. Select the resource group, then select **Delete resource group**.
-4. Type the resource group name to verify, and then select **Delete**.
+1. Type the resource group name to verify, and then select **Delete**.
-To disable DDoS protection for a virtual network:
+To disable DDoS protection for a virtual network:
1. Enter the name of the virtual network you want to disable DDoS protection standard for in the **Search resources, services, and docs box** at the top of the portal. When the name of the virtual network appears in the search results, select it.
-2. Under **DDoS Protection Standard**, select **Disable**.
+1. Under **DDoS Protection Standard**, select **Disable**.
-If you want to delete a DDoS protection plan, you must first dissociate all virtual networks from it.
+If you want to delete a DDoS protection plan, you must first dissociate all virtual networks from it.
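
A scripted sketch of the same cleanup, using the example names from this quickstart:

```azurepowershell-interactive
# Disable DDoS Protection Standard on the virtual network first, then remove
# the plan and the resource group.
$vnet = Get-AzVirtualNetwork -Name "MyVnet" -ResourceGroupName "MyResourceGroup"
$vnet.DdosProtectionPlan = $null
$vnet.EnableDdosProtection = $false
$vnet | Set-AzVirtualNetwork

Remove-AzDdosProtectionPlan -ResourceGroupName "MyResourceGroup" -Name "MyDdosProtectionPlan"
Remove-AzResourceGroup -Name "MyResourceGroup"
```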
## Next steps
-To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials.
+To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials.
> [!div class="nextstepaction"]
> [View and configure DDoS protection telemetry](telemetry.md)
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Host-level threat detection for your Linux AKS nodes is available if you enable
## Availability

> [!IMPORTANT]
-> Microsoft Defender for Kubernetes has been replaced with [**Microsoft Defender for Containers**](defender-for-servers-introduction.md). If you've already enabled Defender for Kubernetes on a subscription, you can continue to use it. However, you won't get Defender for Containers' improvements and new features.
+> Microsoft Defender for Kubernetes has been replaced with [**Microsoft Defender for Containers**](defender-for-containers-introduction.md). If you've already enabled Defender for Kubernetes on a subscription, you can continue to use it. However, you won't get Defender for Containers' improvements and new features.
>
> This plan is no longer available for subscriptions where it isn't already enabled.
>
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Title: Defender for Cloud's integrated vulnerability assessment solution for Azure, hybrid, and multi-cloud machines description: Install a vulnerability assessment solution on your Azure machines to get recommendations in Microsoft Defender for Cloud that can help you protect your Azure and hybrid machines-- Previously updated : 11/16/2021 Last updated : 04/13/2022 # Defender for Cloud's integrated Qualys vulnerability scanner for Azure and hybrid machines
The vulnerability scanner extension works as follows:
| Microsoft | Windows | All |
| Amazon | Amazon Linux | 2015.09-2018.03 |
| Amazon | Amazon Linux 2 | 2017.03-2.0.2021 |
- | Red Hat | Enterprise Linux | 5.4+, 6, 7-7.9, 8-8.3 |
- | Red Hat | CentOS | 5.4+, 6, 7, 7.1-7.8, 8-8.4 |
+ | Red Hat | Enterprise Linux | 5.4+, 6, 7-7.9, 8-8.5, 9 beta |
+ | Red Hat | CentOS | 5.4-5.11, 6-6.7, 7-7.8, 8-8.5 |
| Red Hat | Fedora | 22-33 |
- | SUSE | Linux Enterprise Server (SLES) | 11, 12, 15 |
- | SUSE | openSUSE | 12, 13, 15.0-15.2 |
+ | SUSE | Linux Enterprise Server (SLES) | 11, 12, 15, 15 SP1 |
+ | SUSE | openSUSE | 12, 13, 15.0-15.3 |
| SUSE | Leap | 42.1 |
- | Oracle | Enterprise Linux | 5.11, 6, 7-7.9, 8-8.4 |
- | Debian | Debian | 7.x-10.x |
+ | Oracle | Enterprise Linux | 5.11, 6, 7-7.9, 8-8.5 |
+ | Debian | Debian | 7.x-11.x |
| Ubuntu | Ubuntu | 12.04 LTS, 14.04 LTS, 15.x, 16.04 LTS, 18.04 LTS, 19.10, 20.04 LTS |
Your machine might be in this tab because:
| Microsoft | Windows | All |
| Amazon | Amazon Linux | 2015.09-2018.03 |
| Amazon | Amazon Linux 2 | 2017.03-2.0.2021 |
- | Red Hat | Enterprise Linux | 5.4+, 6, 7-7.9, 8-8.3 |
- | Red Hat | CentOS | 5.4+, 6, 7, 7.1-7.8, 8-8.4 |
+ | Red Hat | Enterprise Linux | 5.4+, 6, 7-7.9, 8-8.5, 9 beta |
+ | Red Hat | CentOS | 5.4-5.11, 6-6.7, 7-7.8, 8-8.5 |
| Red Hat | Fedora | 22-33 |
- | SUSE | Linux Enterprise Server (SLES) | 11, 12, 15 |
- | SUSE | openSUSE | 12, 13, 15.0-15.2 |
+ | SUSE | Linux Enterprise Server (SLES) | 11, 12, 15, 15 SP1 |
+ | SUSE | openSUSE | 12, 13, 15.0-15.3 |
| SUSE | Leap | 42.1 |
- | Oracle | Enterprise Linux | 5.11, 6, 7-7.9, 8-8.4 |
- | Debian | Debian | 7.x-10.x |
+ | Oracle | Enterprise Linux | 5.11, 6, 7-7.9, 8-8.5 |
+ | Debian | Debian | 7.x-11.x |
| Ubuntu | Ubuntu | 12.04 LTS, 14.04 LTS, 15.x, 16.04 LTS, 18.04 LTS, 19.10, 20.04 LTS |

### What is scanned by the built-in vulnerability scanner?

The scanner runs on your machine to look for vulnerabilities of the machine itself. From the machine, it can't scan your network.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/11/2022 Last updated : 04/13/2022 # What's new in Microsoft Defender for Cloud?
Updates in April include:
- [New Defender for Servers plans](#new-defender-for-servers-plans)
- [Relocation of custom recommendations](#relocation-of-custom-recommendations)
- [PowerShell script to stream alerts to Splunk and QRadar](#powershell-script-to-stream-alerts-to-splunk-and-ibm-qradar)
-### PowerShell script to stream alerts to Splunk and IBM QRadar
-
-We recommend that you use Event Hubs and a built-in connector to export security alerts to Splunk and IBM QRadar. Now you can use a PowerShell script to set up the Azure resources needed to export security alerts for your subscription or tenant.
-
-Just download and run the PowerShell script. After you provide a few details of your environment, the script configures the resources for you. The script then produces output that you use in the SIEM platform to complete the integration.
-
-To learn more, see [Stream alerts to Splunk and QRadar](export-to-siem.md#stream-alerts-to-qradar-and-splunk).
+- [Deprecated the Azure Cache for Redis recommendation](#deprecated-the-azure-cache-for-redis-recommendation)
+- [New alert variant for Microsoft Defender for Storage (preview) to detect exposure of sensitive data](#new-alert-variant-for-microsoft-defender-for-storage-preview-to-detect-exposure-of-sensitive-data)
+- [Container scan alert title augmented with IP address reputation](#container-scan-alert-title-augmented-with-ip-address-reputation)
### New Defender for Servers plans
Microsoft Defender for Servers is now offered in two incremental plans.
- Microsoft Defender for Servers Plan 2, formerly Defender for Servers
- Microsoft Defender for Servers Plan 1, including support for Defender for Endpoint only
-While Microsoft Defender for Servers Plan 2 continues to provide complete protections from threats and vulnerabilities to your cloud and on-premises workloads, Microsoft Defender for Servers Plan 1 provides endpoint protection only, powered by Microsoft Defender for Endpoint and natively integrated with Defender for Cloud. Read more about the [Microsoft Defender for Servers plans](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).
+While Microsoft Defender for Servers Plan 2 continues to provide complete protections from threats and vulnerabilities to your cloud and on-premises workloads, Microsoft Defender for Servers Plan 1 provides endpoint protection only, powered by Microsoft Defender for Endpoint and natively integrated with Defender for Cloud. Read more about the [Microsoft Defender for Servers plans](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).
If you have been using Defender for Servers until now, no action is required.
-In addition, Defender for Cloud also begins gradual support for the [Defender for Endpoint unified agent for Windows Server 2012 R2 and 2016 (Preview)](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292). Defender for Servers Plan 1 deploys the new unified agent to Windows Server 2012 R2 and 2016 workloads. Defender for Servers Plan 2 deploys the legacy agent to Windows Server 2012 R2 and 2016 workloads, and will deploy the unified agent soon after it is approved for general use.
+In addition, Defender for Cloud also begins gradual support for the [Defender for Endpoint unified agent for Windows Server 2012 R2 and 2016 (Preview)](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292). Defender for Servers Plan 1 deploys the new unified agent to Windows Server 2012 R2 and 2016 workloads. Defender for Servers Plan 2 deploys the legacy agent to Windows Server 2012 R2 and 2016 workloads, and will deploy the unified agent soon after it's approved for general use.
### Relocation of custom recommendations
Use the new "recommendation type" filter to locate custom recommendations.
Learn more in [Create custom security initiatives and policies](custom-security-policies.md).
+### PowerShell script to stream alerts to Splunk and IBM QRadar
+
+We recommend that you use Event Hubs and a built-in connector to export security alerts to Splunk and IBM QRadar. Now you can use a PowerShell script to set up the Azure resources needed to export security alerts for your subscription or tenant.
+
+Just download and run the PowerShell script. After you provide a few details of your environment, the script configures the resources for you. The script then produces output that you use in the SIEM platform to complete the integration.
+
+To learn more, see [Stream alerts to Splunk and QRadar](export-to-siem.md#stream-alerts-to-qradar-and-splunk).
+
+### Deprecated the Azure Cache for Redis recommendation
+
+The recommendation `Azure Cache for Redis should reside within a virtual network` (Preview) has been deprecated. We've changed our guidance for securing Azure Cache for Redis instances. We recommend the use of a private endpoint to restrict access to your Azure Cache for Redis instance, instead of a virtual network.
+
+### New alert variant for Microsoft Defender for Storage (preview) to detect exposure of sensitive data
+
+Microsoft Defender for Storage alerts notify you when threat actors attempt, successfully or not, to scan and expose misconfigured, publicly open storage containers in an effort to exfiltrate sensitive information.
+
+To allow for faster triage and response when exfiltration of potentially sensitive data might have occurred, we've released a new variation of the existing `Publicly accessible storage containers have been exposed` alert.
+
+The new alert, `Publicly accessible storage containers with potentially sensitive data have been exposed`, is triggered with `High` severity after the successful discovery of publicly open storage containers whose names statistically are rarely exposed publicly, suggesting that they might hold sensitive information.
+
+| Alert (alert type) | Description | MITRE tactic | Severity |
+|--|--|--|--|
+|**PREVIEW - Publicly accessible storage containers with potentially sensitive data have been exposed** <br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery.Sensitive)| Someone has scanned your Azure Storage account and exposed container(s) that allow public access. One or more of the exposed containers have names that indicate that they may contain sensitive data. <br> <br> This usually indicates reconnaissance by a threat actor that is scanning for misconfigured publicly accessible storage containers that may contain sensitive data. <br> <br> After a threat actor successfully discovers a container, they may continue by exfiltrating the data. <br> ✔ Azure Blob Storage <br> ✖ Azure Files <br> ✖ Azure Data Lake Storage Gen2 | Collection | High |
+
+### Container scan alert title augmented with IP address reputation
+
+An IP address's reputation can indicate whether the scanning activity originates from a known threat actor, or from an actor that is using the Tor network to hide their identity. Both of these indicators suggest that there's malicious intent. The IP address's reputation is provided by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684).
+
+The addition of the IP address's reputation to the alert title provides a way to quickly evaluate the intent of the actor, and thus the severity of the threat.
+
+The following alerts will include this information:
+
+- `Publicly accessible storage containers have been exposed`
+
+- `Publicly accessible storage containers with potentially sensitive data have been exposed`
+
+- `Publicly accessible storage containers have been scanned. No publicly accessible data was discovered`
+
+For example, the added information to the title of the `Publicly accessible storage containers have been exposed` alert will look like this:
+
+- `Publicly accessible storage containers have been exposed`**`by a suspicious IP address`**
+
+- `Publicly accessible storage containers have been exposed`**`by a Tor exit node`**
+
+All of the alerts for Microsoft Defender for Storage will continue to include threat intelligence information in the IP entity under the alert's Related Entities section.
+
## March 2022

Updates in March include:
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in [What's new in Microsoft Defender for Cloud?](release-notes.md).
| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | May 2022 |
| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2022 |
| [Changes to vulnerability assessment](#changes-to-vulnerability-assessment) | May 2022 |
+| [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | May 2022 |
### Changes to recommendations for managing endpoint protection solutions
As part of this update, vulnerabilities that have medium and low severities, tha
Learn more about [vulnerability management](deploy-vulnerability-assessment-tvm.md)
+### Key Vault recommendations changed to "audit"
+
+The Key Vault recommendations listed here are currently disabled so that they don't impact your secure score. We will change their effect to "audit".
+
+| Recommendation name | Recommendation ID |
+| - | - |
+| Validity period of certificates stored in Azure Key Vault should not exceed 12 months | fc84abc0-eee6-4758-8372-a7681965ca44 |
+| Key Vault secrets should have an expiration date | 14257785-9437-97fa-11ae-898cfb24302b |
+| Key Vault keys should have an expiration date | 1aabfa0d-7585-f9f5-1d92-ecb40291d9f2 |
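+
One way to get ahead of this change is to set expiration dates on the affected objects. For example, you can do that for existing secrets and keys with Azure PowerShell; the vault and object names below are placeholders:

```azurepowershell-interactive
# Set 12-month expirations on an existing secret and key (illustrative names).
$expires = (Get-Date).AddMonths(12)
Update-AzKeyVaultSecret -VaultName "myVault" -Name "mySecret" -Expires $expires
Update-AzKeyVaultKey -VaultName "myVault" -Name "myKey" -Expires $expires
```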
## Next steps

For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md)
defender-for-iot How To Configure Windows Endpoint Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-configure-windows-endpoint-monitoring.md
- Title: Configure Windows endpoint monitoring for Defender for IoT devices
-description: Set up Windows endpoint monitoring (WMI) for Windows information on devices.
Previously updated : 02/01/2022----
-# Configure Windows endpoint monitoring (WMI)
-
-Use WMI to scan Windows systems for focused and accurate device information, such as service pack levels. You can scan specific IP address ranges and hosts. You can perform scheduled or manual scans. When a scan is finished, you can view the results in a CSV log file. The log contains all the IP addresses that were probed, and success and failure information for each address. There's also an error code, which is a free string derived from the exception. Note that:
--- You can run only one scan at a time.-- You get the best results with users who have domain or local administrator privileges.-- Only the scan of the last log is kept in the system.--
-## Set up a firewall rule
-
-Before you begin scanning, create a firewall rule that allows outgoing traffic from the sensor to the scanned subnet by using UDP port 135 and all TCP ports above 1024.
--
-## Set up scanning
-
-1. In Defender for IoT select **System Settings**.
-1. Under **Network monitoring**, select **Windows Endpoint Monitoring (WMI)**
-1. In the **Windows Endpoint Monitoring (WMI) dialog, select **Add ranges**. You can also import and export ranges.
-1. Specify the IP address range you want to scan. You can add multiple ranges.
-1. Add your user name and password, and ensure that **Enable** is toggled on.
-1. In **Scan will run**, specify when you want the automatic scan to run. You can set an hourly interval between scans, or a specific scan time.
-1. If you want to run a scan immediately with the configured settings, select **Manually scan**.
-1. Select **Save** to save the automatic scan settings.
-1. When the scan is finished, select to view/export scan results.
-
-## Next steps
-
-For more information, see [Work with device notifications](how-to-work-with-device-notifications.md).
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
Regardless of configuration, data detected by a specific sensor is also always a
## Extend support to proprietary protocols
-IoT and ICS devices can be secured using both embedded protocols and proprietary, custom, or non-standard protocols. Use the [Horizon Open Development Environment (ODE) SDK](references-horizon-sdk.md) to develop dissector plug-ins that decode network traffic, regardless of protocol type.
+IoT and ICS devices can be secured using both embedded protocols and proprietary, custom, or non-standard protocols. Use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins that decode network traffic, regardless of protocol type.
For example, in an environment running MODBUS, you might want to generate an alert when the sensor detects a write command to a memory register on a specific IP address and Ethernet destination. Or you might want to generate an alert when any access is performed to a specific IP address. Alerts are triggered when Horizon alert rule conditions are met. Use custom, condition-based alert triggering and messaging to help pinpoint specific network activity and effectively update your security, IT, and operational teams.-
-For more information, see [Horizon proprietary protocol dissector](references-horizon-sdk.md) and [Supported Protocols](concept-supported-protocols.md).
-
+Contact [ms-horizon-support@microsoft.com](mailto:ms-horizon-support@microsoft.com) for details about working with the Open Development Environment (ODE) SDK and creating protocol plugins.
## Extend Defender for IoT to enterprise networks
defender-for-iot Resources Manage Proprietary Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-manage-proprietary-protocols.md
- Title: Manage proprietary protocols (Horizon)
-description: Defender for IoT Horizon delivers an Open Development Environment (ODE) used to secure IoT and ICS devices running proprietary protocols.
Previously updated : 11/09/2021---
-# Defender for IoT Horizon
-
-Defender for IoT Horizon includes an Open Development Environment (ODE) used to secure IoT and ICS devices running proprietary protocols.
-
-Horizon provides:
-
- - Unlimited, full support for common, proprietary, custom protocols or protocols that deviate from any standard.
- - A new level of flexibility and scope for DPI development.
- - A tool that exponentially expands OT visibility and control, without the need to upgrade to new versions.
- - The security of allowing proprietary development without divulging sensitive information.
-
-Use the Horizon SDK to design dissector plugins that decode network traffic so it can be processed by automated Defender for IoT network analysis programs.
-
-Protocol dissectors are developed as external plugins and are integrated with an extensive range of Defender for IoT services, for example services that provide monitoring, alerting, and reporting capabilities.
-
-Contact <ms-horizon-support@microsoft.com> for details about working with the Open Development Environment (ODE) SDK and creating protocol plugins.
-
-Once the plugin is developed, you can use Horizon web console to:
-
- - Upload your plugin
- - Enable and disable plugins
- - Monitor and debug the plugin to evaluate performance
- - Create custom alerts based on proprietary protocols. Display them in the console and forward them to partner vendors.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/horizon-plugin.png" alt-text="Upload through your horizon plugin.":::
-
-This feature is available to Administrator, Cyberx, or Support users.
-
-To sign in to the Horizon console:
-
-1. Sign in to your sensor via CLI.
-2. In the file: `/var/cyberx/properties/horizon.properties` change the `ui.enabled` property to `true` (`horizon.properties:ui.enabled=true`)
-3. Sign in to the sensor console.
-4. Select **Horizon** from the main menu.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/horizon-from-the-menu.png" alt-text="Select Horizon from the main menu.":::
-
-The Horizon console displays the infrastructure plugins provided by Defender for IoT and any other plugin you created and uploaded.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/infrastructure.png" alt-text="Screenshot of the Horizon infrastructure.":::
-
-## Upload plugins
-
-After creating and testing your proprietary dissector plugin, you can upload and monitor it from the Horizon console.
-
-To upload:
-
-1. Select **UPLOAD** from the console.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/upload-a-plugin.png" alt-text="Select upload for your plugin.":::
-
-2. Drag or browse to your plugin. If the upload fails, an error message will be presented.
-
-Contact <ms-horizon-support@microsoft.com> for details about working with the Open Development Environment (ODE) SDK and creating protocol plugins.
-
-## Enable and disable plugins
-
-Use the toggle button to enable and disable plugins. When disabled, traffic is no longer monitored.
-
-Infrastructure plugins cannot be disabled.
-
-## Monitor plugin performance
-
-The Horizon console Overview window provides basic information about the plugins you uploaded and lets you disable and enable them.
--
-| Application | The name of the plugin you uploaded. |
-|--|--|
-| :::image type="icon" source="media/how-to-manage-proprietary-protocols/toggle-icon.png" border="false"::: | Toggle the plugin on or off. The sensor will not handle protocol traffic defined in the plugin when you toggle off the plugin. |
-| Time | The time the data was last analyzed. Updated every five seconds. |
-| PPS | The number of packets per second. |
-| Bandwidth | The average bandwidth detected within the last five seconds. |
-| Malforms | Malformed validations are used after the protocol has been positively validated. If there is a failure to process the packets based on the protocol, a failure response is returned.<br/> <br />This column indicates the number of malform errors in the past five seconds. |
-| Warnings | Packets match the structure and specification but there is unexpected behavior based on the plugin warning configuration. |
-| Errors | The number of packets that failed basic protocol validations that the packet matches the protocol definitions. The Number displayed here indicates that n umber of errors detected in the past five seconds. |
-| :::image type="icon" source="media/how-to-manage-proprietary-protocols/monitor-icon.png" border="false"::: | Review details about malform and warnings detected for your plugin. |
-
-### Plugin performance details
-
-You can monitor real-time plugin performance by the analyzing number of malform and warnings detected for your plugin. An option is available to freeze the screen and export for further investigation
--
-### Horizon logs
-
-Horizon dissection information is available for export in the dissection details, dissection logs, and exports logs.
--
-## Trigger Horizon alerts
-
-Enhance alert management in your enterprise by triggering custom alerts for any protocol based on Horizon framework traffic dissectors.
-
-These alerts can be used to communicate information:
-
- - About traffic detections based on protocols and underlying protocols in a proprietary Horizon plugin.
-
- - About a combination of protocol fields from all protocol layers. For example, in an environment running MODBUS, you may want to generate an alert when the sensor detects a write command to a memory register on a specific IP address and ethernet destination, or an alert when any access is performed to a specific IP address.
-
-Alerts are triggered when Horizon alert, rule conditions, are met.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/custom-alert-rules.png" alt-text="Sample custom rules for Horizon.":::
-
-In addition, working with Horizon custom alerts lets you write your own alert titles and messages. Protocol fields and values resolved can also be embedded in the alert message text.
-
-Using custom, conditioned-based alert triggering and messaging helps pinpoint specific network activity and effectively update your security, IT, and operational teams.
-
-### Working with Horizon alerts
-
-Alerts generated by Horizon custom alert rules are displayed in the sensor and management console Alerts window and in integrated partner systems when using Forwarding Rules.
-
-Alerts generated by Horizon can be acknowledged or muted. The learn option is not available for custom alerts as the alert events cannot be learned to policy baseline.
-
-Alert information is forwarded to partner vendors when Forwarding rules are used.
-
-The severity for Horizon custom alerts is critical.
-
-Horizon custom alerts include static text under the **Manage this Event** section indicating that the alert was generated by your organization's security team.
-
-### Required permissions
-
-Users defined as Defender for IoT users have permission to create Horizon Custom Alert Rules.
-
-### About creating rule conditions
-
-Rule conditions describe the network traffic that should be detected to trigger the alert. Rule conditions can comprise one or several sets of fields, operators, and values. Create condition sets, by using **AND**.
-
-When the rule condition or condition set is met, the alert is sent. You will be notified if the condition logic is not valid.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/and-condition.png" alt-text="Use the AND condition for your custom rule.":::
-
-You can also create several rules for one protocol. This means, an alert will be triggered for each rule you created, when the rule conditions are met.
-
-### About titles and messages
-
-Alert messages can contain alphanumeric characters you enter, as well as traffic variables detected. For example, include the detected source and destination addresses in the alert messages. Various languages are supported.
-
-### About alert recommendations
-
-Horizon custom alerts include static text under the **Manage this Event** section indicating that the alert was generated by your organization's security team. You can also work with alert comments to improve communication between individuals and teams reading your alert.
-
-## Create Horizon alert rules
-
-This article describes how to create the alert rule.
-
-To create Horizon custom alerts:
-
-1. Right-click a plugin from the plugins menu in the Horizon console.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/plugins-menu.png" alt-text="Right-click on a plugin from the menu.":::
-
-2. Select **Horizon Custom Alerts**. The **Rule** window opens for the plugin you selected.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/sample-rule-window.png" alt-text="The sample rule window opens for your plugin.":::
-
-3. Enter a title in the Title field.
-
-4. Enter an alert message in the Message field. Use curly brackets `{}` to include detected field parameters in the message. When you enter the first bracket, relevant fields appear.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/rule-window.png" alt-text="Use {} in the rule window to include detected fields.":::
-
-5. Define alert conditions.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/define-conditions.png" alt-text="Define the alert's conditions.":::
-
-6. Select a **Variable**. Variables represent fields configured in the plugin.
-
-7. Select an **Operator**:
-
- - Equal to
-
- - Not equal to
-
- - Less than
-
- - Less than or equal to
-
- - Greater than
-
- - Greater than or equal to
-
-8. Enter a **Value** as a number. If the variable you selected is a MAC address or IP address, the value must be converted from a dotted-decimal address to decimal format. Use an IP address conversion tool, for example <https://www.ipaddressguide.com/ip>.
-
- :::image type="content" source="media/how-to-manage-proprietary-protocols/ip-address-value.png" alt-text="Translated IP address value.":::
-
-9. Select **AND** to create a condition set.
-
-10. Select **SAVE**. The rule is added to the Rules section.
-
-### Edit and delete Horizon custom alert rules
-
-Use edit and delete options as required. Certain rules are embedded and cannot be edited or deleted.
-
-### Create multiple rules
-
-When you create multiple rules, alerts are triggered when any rule condition or condition sets are valid.
-
-## Next steps
-
-For more information, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
event-hubs Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/network-security.md
Title: Network security for Azure Event Hubs description: This article describes how to configure access from private endpoints Previously updated : 02/11/2022 Last updated : 04/13/2022 # Network security for Azure Event Hubs
Once configured to bound to at least one virtual network subnet service endpoint
The result is a private and isolated relationship between the workloads bound to the subnet and the respective Event Hubs namespace, in spite of the observable network address of the messaging service endpoint being in a public IP range. There is an exception to this behavior. Enabling a service endpoint, by default, enables the `denyall` rule in the [IP firewall](event-hubs-ip-filtering.md) associated with the virtual network. You can add specific IP addresses in the IP firewall to enable access to the Event Hub public endpoint. > [!IMPORTANT]
-> Virtual networks aren't supported in the **basic** and **premium** tiers.
+> This feature isn't supported in the **basic** tier.
### Advanced security scenarios enabled by VNet integration
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
description: In this quickstart, you learn how to create, provision, verify, upd
Previously updated : 04/23/2021 Last updated : 04/13/2022
This quickstart shows you how to create an ExpressRoute circuit using the Azure portal.
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * Review the [prerequisites](expressroute-prerequisites.md) and [workflows](expressroute-workflows.md) before you begin configuration.
-* You can view a video before beginning to better understand the steps.
## <a name="create"></a>Create and provision an ExpressRoute circuit
From a browser, navigate to the [Azure portal](https://portal.azure.com) and sign in with your Azure account.
> [!IMPORTANT] > Your ExpressRoute circuit is billed from the moment a service key is issued. Ensure that you perform this operation when the connectivity provider is ready to provision the circuit.
-You can create an ExpressRoute circuit by selecting the option to create a new resource.
+1. On the Azure portal menu, select **+ Create a resource**. Search for **ExpressRoute** and then select **Create**.
-1. On the Azure portal menu, select **Create a resource**. Select **Networking** > **ExpressRoute**, as shown in the following image:
+ :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/create-an-expressroute-circuit.png" alt-text="Create an ExpressRoute circuit":::
- :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/create-expressroute-circuit-menu.png" alt-text="Create an ExpressRoute circuit":::
-
-2. After you select **ExpressRoute**, you'll see the **Create ExpressRoute** page. Provide the **Resource Group**, **Region**, and **Name** for the circuit. Then select **Next: Configuration >**.
+1. On the **Create ExpressRoute** page, provide the **Resource Group**, **Region**, and **Name** for the circuit. Then select **Next: Configuration >**.
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-create-basic.png" alt-text="Configure the resource group and region":::
-3. When you're filling in the values on this page, make sure that you specify the correct SKU tier (Local, Standard, or Premium) and data metering billing model (Unlimited or Metered).
+1. When you're filling in the values on this page, make sure that you specify the correct SKU tier (Local, Standard, or Premium) and data metering billing model (Unlimited or Metered).
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-create-configuration.png" alt-text="Configure the circuit":::
- * **Port type** determines if you're connecting to a service provider or directly into Microsoft's global network at a peering location.
- * **Create new or import from classic** determines if a new circuit is being created or if you're migrating a classic circuit to Azure Resource Manager.
- * **Provider** is the internet service provider who you will be requesting your service from.
- * **Peering Location** is the physical location where you're peering with Microsoft.
-
- > [!IMPORTANT]
- > The Peering Location indicates the [physical location](expressroute-locations.md) where you are peering with Microsoft. This is **not** linked to "Location" property, which refers to the geography where the Azure Network Resource Provider is located. While they are not related, it is a good practice to choose a Network Resource Provider geographically close to the Peering Location of the circuit.
-
- * **SKU** determines whether an ExpressRoute local, ExpressRoute standard, or an ExpressRoute premium add-on is enabled. You can specify **Local** to get the local SKU, **Standard** to get the standard SKU or **Premium** for the premium add-on. You can change the SKU to enable the premium add-on.
- > [!IMPORTANT]
- > You cannot change the SKU from **Standard/Premium** to **Local**.
-
- * **Billing model** determines the billing type. You can specify **Metered** for a metered data plan and **Unlimited** for an unlimited data plan. You can change the billing type from **Metered** to **Unlimited**.
+ | Setting | Description |
+ | --- | --- |
+ | Port type | Select if you're connecting to a service provider or directly into Microsoft's global network at a peering location. |
+ | Create new or import from classic | Select if you're creating a new circuit or if you're migrating a classic circuit to Azure Resource Manager. |
+ | Provider | Select the internet service provider from whom you'll be requesting your service. |
+ | Peering Location | Select the physical location where you're peering with Microsoft. |
+ | SKU | Select the SKU for the ExpressRoute circuit. You can specify **Local** to get the local SKU, **Standard** to get the standard SKU or **Premium** for the premium add-on. You can change between Standard and Premium but not to Local once created. |
+ | Billing model | Select the billing type for egress data charge. You can specify **Metered** for a metered data plan and **Unlimited** for an unlimited data plan. You can change the billing type from **Metered** to **Unlimited**. |
+ | Allow classic operations | Enable this option to allow classic virtual networks to link to the circuit. |
> [!IMPORTANT]
- > You can not change the type from **Unlimited** to **Metered**.
+ > * The Peering Location indicates the [physical location](expressroute-locations.md) where you are peering with Microsoft. This is **not** linked to "Location" property, which refers to the geography where the Azure Network Resource Provider is located. While they're not related, it is a good practice to choose a Network Resource Provider geographically close to the Peering Location of the circuit.
+ > * You can't change the SKU from **Standard/Premium** to **Local**.
+ > * You can't change the type from **Unlimited** to **Metered**.
- * **Allow classic operation** will allow classic virtual networks to be link to the circuit.
+1. Select **Review + create** and then select **Create** to deploy the ExpressRoute circuit.
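
The same circuit can be sketched with Azure PowerShell; the provider, peering location, and bandwidth below are example values and must match an offering from your connectivity provider:

```azurepowershell-interactive
# Create an ExpressRoute circuit (example values only).
New-AzExpressRouteCircuit -Name "MyCircuit" -ResourceGroupName "MyResourceGroup" `
    -Location "East US" -SkuTier "Standard" -SkuFamily "MeteredData" `
    -ServiceProviderName "Equinix" -PeeringLocation "Silicon Valley" `
    -BandwidthInMbps 200
```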
### View the circuits and properties

**View all the circuits**
-You can view all the circuits that you created by selecting **All services > Networking > ExpressRoute circuits** on the left-side menu.
+You can view all the circuits that you created by searching for **ExpressRoute circuits** in the search box at the top of the portal.
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-circuit-menu.png" alt-text="Expressroute circuit menu":::
All ExpressRoute circuits created in the subscription will appear here.
**View the properties**
-You can view the properties of the circuit by selecting it. On the **Overview** page for your circuit, the service key appears in the service key field. Refer to the service key for your circuit and provide it to the service provider to complete the provisioning process. The service key is specific to your circuit.
+You can view the properties of the circuit by selecting it. On the Overview page for your circuit, you'll find the **Service Key**. Provide the service key to the service provider to complete the provisioning process. The service key is unique to your circuit.
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-circuit-overview.png" alt-text="View properties":::
You can do the following tasks with no downtime:
* Enable or disable an ExpressRoute Premium add-on for your ExpressRoute circuit.
-> [!IMPORTANT]
+ > [!IMPORTANT]
> Changing the SKU from **Standard/Premium** to **Local** is not supported.

* Increase the bandwidth of your ExpressRoute circuit, provided there's capacity available on the port.
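
Both of these changes can also be made from Azure PowerShell; a sketch with placeholder names:

```azurepowershell-interactive
# Enable the Premium add-on and increase bandwidth, then push the update.
$ckt = Get-AzExpressRouteCircuit -Name "MyCircuit" -ResourceGroupName "MyResourceGroup"
$ckt.Sku.Tier = "Premium"
$ckt.Sku.Name = "Premium_MeteredData"
$ckt.ServiceProviderProperties.BandwidthInMbps = 1000
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt
```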
expressroute Expressroute Howto Macsec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-macsec.md
To start the configuration, sign in to your Azure account and select the subscription that you want to use.
($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "your_existing_keyvault").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enableSoftDelete" -Value "true"
Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties
```
+
+ > [!NOTE]
+ > The Key Vault shouldn't be behind a private endpoint, because communication with the ExpressRoute management plane is required.
+ >
+
2. Create a user identity.

   ```azurepowershell-interactive
frontdoor How To Enable Private Link Internal Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-internal-load-balancer.md
In this section, you'll map the Private Link service to a private endpoint creat
| - | -- |
| Name | Enter a name to identify this custom origin. |
| Origin Type | Custom |
- | Host name | Select the host from the dropdown that you want as an origin. |
+ | Host name | The host name is used for SNI (SSL negotiation) and should match your server-side certificate. |
| Origin host header | You can customize the host header of the origin or leave it as default. |
| HTTP port | 80 (default) |
| HTTPS port | 443 (default) |
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
Title: Organize your resources with management groups - Azure Governance
description: Learn about management groups, how their permissions work, and how to use them. Last updated 08/17/2021

# What are Azure management groups?

If your organization has many Azure subscriptions, you may need a way to efficiently manage access, policies, and compliance for those subscriptions. _Management groups_ provide a governance scope above subscriptions. You organize subscriptions into management groups, and the governance conditions you apply
-cascade by inheritence to all associated subscriptions.
+cascade by inheritance to all associated subscriptions.
Management groups give you enterprise-grade management at scale no matter what type of subscriptions you might have.
creating a hierarchy for governance using management groups.
Diagram of a root management group holding both management groups and subscriptions. Some child management groups hold management groups, some hold subscriptions, and some hold both. One of the examples in the sample hierarchy is four levels of management groups with the child level being all subscriptions. :::image-end:::
-You can create a hierarchy that applies a policy, for example, which limits VM locations to the US
-West Region in the group called "Production". This policy will inherit onto all the Enterprise
+You can create a hierarchy that applies a policy, for example, which limits VM locations to the
+West US region in the management group called "Production". This policy will be inherited by all the Enterprise
Agreement (EA) subscriptions that are descendants of that management group and will apply to all VMs under those subscriptions. This security policy cannot be altered by the resource or subscription owner, allowing for improved governance.
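
As a concrete sketch of that example, you could assign the built-in "Allowed locations" policy at the scope of a management group; the management group ID "Production" here is a hypothetical placeholder:

```azurepowershell-interactive
# Assign the built-in "Allowed locations" policy definition at a management
# group scope, limiting deployments to West US.
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq 'Allowed locations' }

New-AzPolicyAssignment -Name 'allowed-locations' `
    -Scope '/providers/Microsoft.Management/managementGroups/Production' `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ listOfAllowedLocations = @('westus') }
```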
subscriptions.
## Root management group for each directory
-Each directory is given a single top-level management group called the "Root" management group. This
+Each directory is given a single top-level management group called the **root** management group. The
root management group is built into the hierarchy to have all management groups and subscriptions fold up to it. This root management group allows for global policies and Azure role assignments to be applied at the directory level. The [Azure AD Global Administrator needs to elevate
Administrator role of this root group initially. After elevating access, the administrator can
assign any Azure role to other directory users or groups to manage the hierarchy. As administrator, you can assign your own account as owner of the root management group.
-### Important facts about the Root management group
+### Important facts about the root management group
-- By default, the root management group's display name is **Tenant root group**. The ID is the Azure
- Active Directory ID.
-- To change the display name, your account must be assigned the Owner or Contributor role on the
+- By default, the root management group's display name is **Tenant root group**, and it operates itself as a management group. The ID is the same value as the Azure Active Directory (Azure AD) tenant ID.
+- To change the display name, your account must be assigned the **Owner** or **Contributor** role on the
root management group. See [Change the name of a management group](manage.md#change-the-name-of-a-management-group) to update the name of a management group.
the only users that can elevate themselves to gain access. Once they have access to the root management group, the global administrators can assign any Azure role to other users to manage it.
-- In SDK, the root management group, or 'Tenant Root', operates as a management group.

> [!IMPORTANT]
-> Any assignment of user access or policy assignment on the root management group **applies to all
+> Any assignment of user access or policy on the root management group **applies to all
> resources within the directory**. Because of this, all customers should evaluate the need to have
> items defined on this scope. User access and policy assignments should be "Must Have" only at this
> scope.
The reason for this process is to make sure there's only one management group hierarchy within the
directory. The single hierarchy within the directory allows administrative customers to apply global access and policies that other customers within the directory can't bypass. Anything assigned on the root will apply to the entire hierarchy, which includes all management groups, subscriptions,
-resource groups, and resources within that Azure AD Tenant.
+resource groups, and resources within that Azure AD tenant.
## Trouble seeing all subscriptions
-A few directories that started using management groups early in the preview before June 25 2018
+A few directories that started using management groups early in the preview before June 25, 2018
could see an issue where not all the subscriptions were within the hierarchy. The process to have all subscriptions in the hierarchy was put in place after a role or policy assignment was done on the root management group in the directory.
There are two options to resolve this issue.

-- Remove all Role and Policy assignments from the root management group
+- Remove all role and policy assignments from the root management group
  - By removing any policy and role assignments from the root management group, the service backfills all subscriptions into the hierarchy during the next overnight cycle. This process ensures there's no accidental access given or policy assignment to all of the tenant's subscriptions.
  - The best way to do this process without impacting your services is to apply the role or policy
- assignments one level below the Root management group. Then you can remove all assignments from
+ assignments one level below the root management group. Then you can remove all assignments from
  the root scope.
- Call the API directly to start the backfill process
  - Any customer in the directory can call the _TenantBackfillStatusRequest_ or
The following chart shows the list of roles and the supported actions on management groups.
|Resource Policy Contributor | | | | | | X | |
|User Access Administrator | | | | | X | X | |
-\*: MG Contributor and MG Reader only allow users to do those actions on the management group scope.
-\*\*: Role Assignments on the Root management group aren't required to move a subscription or
-management group to and from it. See [Manage your resources with management groups](manage.md) for
+\*: The **Management Group Contributor** and **Management Group Reader** roles allow users to perform those actions only on the management group scope.
+
+\*\*: Role assignments on the root management group aren't required to move a subscription or
+management group to and from it.
+
+See [Manage your resources with management groups](manage.md) for
details on moving items within the hierarchy.

## Azure custom role definition and assignment
will inherit down the hierarchy like any built-in role.
[Defining and creating a custom role](../../role-based-access-control/custom-roles.md) doesn't change with the inclusion of management groups. Use the full path to define the management group
-**/providers/Microsoft.Management/managementgroups/{groupId}**.
+**/providers/Microsoft.Management/managementgroups/{_groupId_}**.
Use the management group's ID and not the management group's display name. This is a common error, since both are custom-defined fields when creating a management group.
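
For illustration, a custom role that includes a management group in its assignable scopes might be defined as in the following sketch; the role name, action, and group ID here are hypothetical examples, not values taken from this article:

```json
{
  "Name": "Example Management Group Reader",
  "IsCustom": true,
  "Description": "Hypothetical custom role with a management group in its assignable scopes.",
  "Actions": [
    "Microsoft.Management/managementGroups/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/providers/Microsoft.Management/managementgroups/ContosoGroup"
  ]
}
```

Note that the assignable scope uses the management group's ID (here, `ContosoGroup`), not its display name.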
break this relationship.
There are a couple of options to fix this scenario:
- Remove the role assignment from the subscription before moving the subscription to a new parent MG.
-- Add the subscription to the Role Definition's assignable scope.
+- Add the subscription to the role definition's assignable scope.
- Change the assignable scope within the role definition. In the above example, you can update the
- assignable scopes from Marketing to Root Management Group so that the definition can be reached by
+ assignable scopes from Marketing to the root management group so that the definition can be reached by
  both branches of the hierarchy.
-- Create another Custom Role that is defined in the other branch. This new role requires the role
+- Create another custom role that is defined in the other branch. This new role requires the role
  assignment to be changed on the subscription also.

### Limitations
If you're doing the move action, you need:
- Management group write and Role Assignment write permissions on the child subscription or management group.
- - Built-in role example **Owner**
+ - Built-in role example: **Owner**
- Management group write access on the target parent management group.
  - Built-in role example: **Owner**, **Contributor**, **Management Group Contributor**
- Management group write access on the existing parent management group.
  - Built-in role example: **Owner**, **Contributor**, **Management Group Contributor**
-**Exception**: If the target or the existing parent management group is the Root management group,
-the permissions requirements don't apply. Since the Root management group is the default landing
+**Exception**: If the target or the existing parent management group is the root management group,
+the permissions requirements don't apply. Since the root management group is the default landing
spot for all new management groups and subscriptions, you don't need permissions on it to move an item.
-If the Owner role on the subscription is inherited from the current management group, your move
+If the **Owner** role on the subscription is inherited from the current management group, your move
targets are limited. You can only move the subscription to another management group where you have
-the Owner role. You can't move it to a management group where you're a contributor because you would
-lose ownership of the subscription. If you're directly assigned to the Owner role for the
+the **Owner** role. You can't move it to a management group where you're a **Contributor** because you would
+lose ownership of the subscription. If you're directly assigned to the **Owner** role for the
subscription (not inherited from the management group), you can move it to any management group
-where you're a contributor.
+where you're assigned the **Contributor** role.
> [!IMPORTANT]
> Azure Resource Manager caches management group hierarchy details for up to 30 minutes.
-> As a result, moving a management group may not immediately be reflected in the Azure portal.
+> As a result, moving a management group may not immediately be reflected in the Azure portal.
## Audit management groups using activity logs

Management groups are supported within
-[Azure Activity Log](../../azure-monitor/essentials/platform-logs-overview.md). You can search all
+[Azure Activity log](../../azure-monitor/essentials/platform-logs-overview.md). You can search all
events that happen to a management group in the same central location as other Azure resources. For
-example, you can see all Role Assignments or Policy Assignment changes made to a particular
+example, you can see all role assignments or policy assignment changes made to a particular
management group.

:::image type="content" source="./media/al-mg.png" alt-text="Screenshot of Activity Logs and operations related to the selected management group." border="false":::
-When looking to query on Management Groups outside of the Azure portal, the target scope for
-management groups looks like **"/providers/Microsoft.Management/managementGroups/{yourMgID}"**.
+When looking to query on management groups outside the Azure portal, the target scope for
+management groups looks like **"/providers/Microsoft.Management/managementGroups/{_management-group-id_}"**.
## Next steps
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/guest-configuration.md
for the same definitions using the same parameter values as machines in the prim
Guest configuration stores/processes customer data. By default, customer data is replicated to the [paired region.](../../../availability-zones/cross-region-replication-azure.md)
-For single resident region all customer data is stored and processed in the region.
+For the regions Singapore, Brazil South, and East Asia, all customer data is stored and processed in the region.
## Troubleshooting guest configuration
hdinsight Connect Install Beeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/connect-install-beeline.md
Title: Connect to Hive using Beeline or install Beeline locally to connect from your local - Azure HDInsight
+ Title: Connect to HiveServer2 using Beeline or install Beeline locally to connect from your local machine - Azure HDInsight
description: Learn how to use the Apache Beeline client to run Hive queries with Hadoop on HDInsight. Beeline is a utility for working with HiveServer2 over JDBC. Last updated 04/07/2021
-# Connect to Hive using Beeline or install Beeline locally to connect from your local
+# Connect to HiveServer2 using Beeline or install Beeline locally to connect from your local machine
-[Apache Beeline](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Beeline–NewCommandLineShell) is a Hive client that is included on the head nodes of your HDInsight cluster. This article describes how to connect to Hive using the Beeline client installed on your HDInsight cluster across different types of connections. It also discusses how to [Install the Beeline client locally](#install-beeline-client).
+[Apache Beeline](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Beeline–NewCommandLineShell) is a Hive client that is included on the head nodes of your HDInsight cluster. This article describes how to connect to HiveServer2 using the Beeline client installed on your HDInsight cluster across different types of connections. It also discusses how to [Install the Beeline client locally](#install-beeline-client).
## Types of connections
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
-+ Last updated 03/01/2022
The first step in configuring the FHIR service for export is to enable system wi
In this step, browse to your FHIR service in the Azure portal, and select the **Identity** blade. Set the **Status** option to **On**, and then select **Save**. **Yes** and **No** buttons will display. Select **Yes** to enable the managed identity for the FHIR service. Once the system identity has been enabled, you'll see a system-assigned GUID value.
-[ ![Enable Managed Identity](media/export-data/fhir-mi-enabled.png) ](media/export-data/fhir-mi-enabled.png#lightbox)
+[![Enable Managed Identity](media/export-data/fhir-mi-enabled.png)](media/export-data/fhir-mi-enabled.png#lightbox)
## Assign permissions to the FHIR service to access the storage account
-Browse to the **Access Control (IAM)** in the storage account, and then select **Add role assignment**. If the add role assignment option is grayed out, you'll need to ask your Azure Administrator to assign you permission to perform this task.
+1. Select **Access Control (IAM)**.
-For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
+1. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task.
+
+ :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
+
+1. On the **Roles** tab, select the [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role.
+
+ [![Screen shot showing user interface of Add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
-Add the role [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) to the FHIR service, and then select **Save**.
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
-[![Screen shot showing user interface of Add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) ](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
+1. Select **System-assigned managed identity**, and then select the FHIR service.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
Now you're ready to select the storage account in the FHIR service as a default storage account for export.
The final step is to assign the Azure storage account that the FHIR service will
To do this, select the **Export** blade in FHIR service and select the storage account. To search for the storage account, enter its name in the text field. You can also search for your storage account by using the available filters **Name**, **Resource group**, or **Region**.
-[![Screen shot showing user interface of FHIR Export Storage.](media/export-data/fhir-export-storage.png) ](media/export-data/fhir-export-storage.png#lightbox)
+[![Screen shot showing user interface of FHIR Export Storage.](media/export-data/fhir-export-storage.png)](media/export-data/fhir-export-storage.png#lightbox)
After you've completed this final step, you're ready to export the data using the $export command.
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md
Last updated 03/21/2022 +
Change the status to **On** to enable managed identity in FHIR service.
### Provide ACR access to the FHIR service
-1. Browse to the **Access control (IAM)** blade.
+1. Select **Access Control (IAM)**.
-1. Select **Add**, and then select **Add role assignment** to open the Add role assignment page.
+1. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task.
-1. Assign the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
+ :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
- [ ![Add role assignment page](../../../includes/role-based-access-control/media/add-role-assignment-page.png) ](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
+1. On the **Roles** tab, select the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
-For more information about assigning roles in the Azure portal, see [Screen image of Azure built-in roles.](../../role-based-access-control/role-assignments-portal.md).
+ [![Screen shot showing user interface of Add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
+
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+
+1. Select **System-assigned managed identity**, and then select the FHIR service.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
### Register the ACR servers in FHIR service
industrial-iot Tutorial Deploy Industrial Iot Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-deploy-industrial-iot-platform.md
In this tutorial, you learn:
- An Azure subscription must be created
- Download [Git](https://git-scm.com/downloads)
-- The Azure Active Directory (AAD) app registrations used for authentication require Global Administrator, Application
+- The Microsoft Azure Active Directory (Azure AD) app registrations used for authentication require Global Administrator, Application
  Administrator, or Cloud Application Administrator rights to provide tenant-wide admin consent (see below for further options)
- The supported operating systems for deployment are Windows, Linux, and Mac
- IoT Edge supports Windows 10 IoT Enterprise LTSC and Ubuntu Linux 16.04/18.04 LTS
-## Main Components
--- Minimum dependencies: IoT Hub, Cosmos DB, Service Bus, Event Hub, Key Vault, Storage-- Standard dependencies: Minimum + SignalR Service, AAD app
-registrations, Device Provisioning Service, Time Series Insights, Workbook, Log Analytics,
-Application Insights
-- Micro-- UI (Web app): App Service Plan (shared with microservices), App Service-- Simulation: Virtual machine, Virtual network, IoT Edge-- Azure Kubernetes Service-
-## Installation types
--- Minimum: Minimum dependencies-- Local: Minimum and the standard dependencies-- -- Simulation: Minimum dependencies and the simulation components-- App: Services and the UI-- All (default): App and the simulation-
-## Deployment
+## Main components
+
+The Azure Industrial IoT Platform is a Microsoft suite of modules (OPC Publisher, OPC Twin, Discovery) and services that are deployed on Azure. The cloud microservices (Registry, OPC Twin, OPC Publisher, Edge Telemetry Processor, Registry Onboarding Processor, Edge Event Processor, Registry Synchronization) are implemented as ASP.NET microservices with a REST interface and run on managed Azure Kubernetes Services or stand-alone on Azure App Service. The deployment can deploy the platform, an entire simulation environment and a Web UI (Industrial IoT Engineering Tool).
+The deployment script allows you to select which set of components to deploy.
+- Minimum dependencies:
+ - [IoT Hub](https://azure.microsoft.com/services/iot-hub/) to communicate with the edge and ingress raw OPC UA telemetry data
+ - [Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) to persist state that is not persisted in IoT Hub
+ - [Service Bus](https://azure.microsoft.com/services/service-bus/) as integration event bus
+ - [Event Hubs](https://azure.microsoft.com/services/event-hubs/) contains processed and contextualized OPC UA telemetry data
+ - [Key Vault](https://azure.microsoft.com/services/key-vault/) to manage secrets and certificates
+ - [Storage](https://azure.microsoft.com/product-categories/storage/) for Event Hubs checkpointing
+- Standard dependencies: Minimum +
+  - [SignalR Service](https://azure.microsoft.com/services/signalr-service/) used to scale out asynchronous API notifications
+  - Azure AD app registrations
+ - [Device Provisioning Service](https://docs.microsoft.com/azure/iot-dps/) used for deploying and provisioning the simulation gateways
+ - [Time Series Insights](https://azure.microsoft.com/services/time-series-insights/)
+ - Workbook, Log Analytics, [Application Insights](https://azure.microsoft.com/services/monitor/) for operations monitoring
+- Microservices:
+ - App Service Plan, [App Service](https://azure.microsoft.com/services/app-service/) for hosting the cloud microservices
+- UI (Web app):
+ - App Service Plan (shared with microservices), [App Service](https://azure.microsoft.com/services/app-service/) for hosting the Industrial IoT Engineering Tool cloud application
+- Simulation:
+ - [Virtual machine](https://azure.microsoft.com/services/virtual-machines/), Virtual network, IoT Edge used for a factory simulation to show the capabilities of the platform and to generate sample telemetry
+- [Azure Kubernetes Service](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-aks.md) should be used to host the cloud microservices
+
+## Deploy Azure IIoT Platform using the deployment script
1. To get started with the deployment of the IIoT Platform, clone the repository from the command prompt or terminal.

   ```
   git clone https://github.com/Azure/Industrial-IoT
   cd Industrial-IoT
   ```
-2. Start the guided deployment, the script will collect the required information, such as Azure account, subscription, target resource and group and application name.
+2. Start the guided deployment. The script will collect the required information, such as Azure account, subscription, target resource and group and application name.
   On Windows:

   ```
- .\deploy -version <version>
+ .\deploy -version <version> [-type <deploymentType>]
   ```

   On Linux or Mac:

   ```
- ./deploy.sh -version <version>
+ ./deploy.sh -version <version> [-type <deploymentType>]
   ```

   Replace \<version> with the version you want to deploy.
-3. The microservices and the UI are web applications that require authentication, this requires three app registrations in the AAD. If the required rights are missing, there are two possible solutions:
+ Replace \<deploymentType> with the type of deployment (optional parameter).
+
+   The types of deployment are the following:
- - Ask the AAD admin to grant tenant-wide admin consent for the application
- - An AAD admin can create the AAD applications. The deploy/scripts folder contains the aad- register.ps1 script to perform the AAD registration separately from the deployment. The output of the script is a file containing the relevant information to be used as part of deployment and must be passed to the deploy.ps1 script in the same folder using the -
- aadConfig argument.
+ - `minimum`: Minimum dependencies
+ - `local`: Minimum and standard dependencies
+ - `services`: Local and microservices
+ - `simulation`: Minimum dependencies and simulation components
+ - `app`: Services and UI
+ - `all` (default): App and simulation
+
+3. The microservices and the UI are web applications that require authentication, which requires three app registrations in Azure AD. If the required rights are missing, there are two possible solutions:
+
+ - Ask the Azure AD admin to grant tenant-wide admin consent for the application
+ - An Azure AD admin can create the Azure AD applications. The deploy/scripts folder contains the aad-register.ps1 script to perform the Azure AD registration separately from the deployment. The output of the script is a file containing the relevant information to be used as part of deployment and must be passed to the deploy.ps1 script in the same folder using the `-aadConfig` argument.
   ```bash
   cd deploy/scripts
   ./aad-register.ps1 -Name <application-name> -Output aad.json
   ./deploy.ps1 -aadConfig aad.json
   ```
-For production deployments that require staging, rollback, scaling, and resilience, the platform can be deployed into [Azure Kubernetes Service (AKS)](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-aks.md)
+
+## Other hosting and deployment methods
+
+Other hosting and deployment methods:
+
+- For production deployments that require staging, rollback, scaling, and resilience, the platform can be deployed into [Azure Kubernetes Service (AKS)](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-aks.md)
+- Deploying Azure Industrial IoT Platform microservices into an existing Kubernetes cluster using [Helm](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-helm.md).
+- Deploying [Azure Kubernetes Service (AKS) cluster on top of Azure Industrial IoT Platform created by deployment script and adding Azure Industrial IoT components into the cluster](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-add-aks-to-ps1.md).
References:
- [Deploying Azure Industrial IoT Platform](https://github.com/Azure/Industrial-IoT/tree/master/docs/deploy)
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
The response to this request looks like the following example:
}
```
-### List device groups
-
-Use the following request to retrieve a list of device groups from your application:
-
-```http
-GET https://{subdomain}.{baseDomain}/api/deviceGroups?api-version=1.1-preview
-```
-
-The response to this request looks like the following example:
-
-```json
-{
- "value": [
- {
- "id": "1dbb2610-04f5-47f8-81ca-ba38a24a6cf3",
- "displayName": "Thermostat - All devices",
- "organizations": [
- "seattle"
- ]
- },
- {
- "id": "b37511ca-1beb-4781-ae09-c2d73c9104bf",
- "displayName": "Cascade 500 - All devices",
- "organizations": [
- "redmond"
- ]
- },
- {
- "id": "788d08c6-2d11-4372-a994-71f63e108cef",
- "displayName": "RS40 Occupancy Sensor - All devices"
- }
- ]
-}
-```
-
-The organizations field is only used when an application has an organization hierarchy defined. To learn more about organizations, see [Manage IoT Central organizations](howto-edit-device-template.md)
-
### Use ODATA filters

You can use ODATA filters to filter the results returned by the list devices API.
The response to this request looks like the following example:
}
```
+## Device groups
+
+### Add a device group
+
+Use the following request to create a new device group.
+
+```http
+PUT https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+```
+
+When you create a device group, you define a `filter` that selects the devices to add to the group. A `filter` identifies a device template and any properties to match. The following example creates a device group that contains all devices associated with the "dtmi:modelDefinition:dtdlv2" template where the `provisioned` property is true:
+
+```json
+{
+ "displayName": "Device group 1",
+ "description": "Custom device group.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true",
+ "organizations": [
+ "seattle"
+ ]
+}
+```
+
+The request body has some required fields:
+
+* `@displayName`: Display name of the device group.
+* `@filter`: Query defining which devices should be in this group.
+* `@etag`: ETag used to prevent conflict in device updates.
+* `description`: Short summary of device group.
+
+The organizations field is only used when an application has an organization hierarchy defined. To learn more about organizations, see [Manage IoT Central organizations](howto-edit-device-template.md).
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "group1",
+ "displayName": "Device group 1",
+ "description": "Custom device group.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true",
+ "organizations": [
+ "seattle"
+ ]
+}
+```
+
+### Get a device group
+
+Use the following request to retrieve details of a device group from your application:
+
+```http
+GET https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+```
+
+* deviceGroupId - Unique ID for the device group.
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "475cad48-b7ff-4a09-b51e-1a9021385453",
+ "displayName": "DeviceGroupEntry1",
+ "description": "This is a default device group containing all the devices for this particular Device Template.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true",
+ "organizations": [
+ "seattle"
+ ]
+}
+```
+
+### Update a device group
+
+```http
+PATCH https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+```
+
+The sample request body looks like the following example, which updates the `displayName` of the device group:
+
+```json
+{
+ "displayName": "New group name"
+}
+
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "group1",
+ "displayName": "New group name",
+ "description": "Custom device group.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true",
+ "organizations": [
+ "seattle"
+ ]
+}
+```
+
+### Delete a device group
+
+Use the following request to delete a device group:
+
+```http
+DELETE https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+```
+
+### List device groups
+
+Use the following request to retrieve a list of device groups from your application:
+
+```http
+GET https://{subdomain}.{baseDomain}/api/deviceGroups?api-version=1.2-preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "value": [
+ {
+ "id": "475cad48-b7ff-4a09-b51e-1a9021385453",
+ "displayName": "DeviceGroupEntry1",
+ "description": "This is a default device group containing all the devices for this particular Device Template.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true",
+ "organizations": [
+ "seattle"
+ ]
+ },
+ {
+ "id": "c2d5ae1d-2cb7-4f58-bf44-5e816aba0a0e",
+ "displayName": "DeviceGroupEntry2",
+ "description": "This is a default device group containing all the devices for this particular Device Template.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:model1\"",
+ "organizations": [
+ "redmond"
+ ]
+ },
+ {
+ "id": "241ad72b-32aa-4216-aabe-91b240582c8d",
+ "displayName": "DeviceGroupEntry3",
+ "description": "This is a default device group containing all the devices for this particular Device Template.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:model2\" AND $simulated = true"
+ },
+ {
+ "id": "group4",
+ "displayName": "DeviceGroupEntry4",
+ "description": "This is a default device group containing all the devices for this particular Device Template.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:model3\""
+ }
+ ]
+}
+```
++
## Next steps

Now that you've learned how to manage devices with the REST API, a suggested next step is [How to control devices with the REST API](howto-control-devices-with-rest-api.md).
iot-edge How To Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-observability.md
+
+ Title: How to implement IoT Edge observability using monitoring and troubleshooting
+description: Learn how to build an observability solution for an IoT Edge System
++ Last updated : 04/01/2022+++++
+# How to implement IoT Edge observability using monitoring and troubleshooting
++
+In this article, you'll learn the concepts and techniques of implementing both observability dimensions, *measuring and monitoring* and *troubleshooting*. You'll learn about the following topics:
+* Define which indicators of service performance to monitor
+* Measure service performance indicators with metrics
+* Monitor metrics and detect issues with Azure Monitor workbooks
+* Perform basic troubleshooting with the curated workbooks
+* Perform deeper troubleshooting with distributed tracing and correlated logs
+* Optionally, deploy a sample scenario to Azure to reproduce what you learned
++
+## Scenario
+
+In order to go beyond abstract considerations, we'll use a *real-life* scenario: collecting ocean surface temperatures from sensors into Azure IoT.
+
+### La Niña
+
+![Illustration of La Nina solution collecting surface temperature from sensors into Azure IoT Edge](media/how-to-observability/la-nina-high-level.png)
+
+The La Niña service measures surface temperature in the Pacific Ocean to predict La Niña winters. There are a number of buoys in the ocean with IoT Edge devices that send the surface temperature to the Azure cloud. The telemetry data with the temperature is pre-processed by a custom module on the IoT Edge device before sending it to the cloud. In the cloud, the data is processed by backend Azure Functions and saved to Azure Blob Storage. The clients of the service (ML inference workflows, decision making systems, various UIs, etc.) can pick up messages with temperature data from the Azure Blob Storage.
+
+## Measuring and monitoring
+
+Let's build a measuring and monitoring solution for the La Niña service focusing on its business value.
+
+### What do we measure and monitor
+
+To understand what we're going to monitor, we must understand what the service actually does and what the service clients expect from the system. In this scenario, the expectations of a common La Niña service consumer may be categorized by the following factors:
+
+* **_Coverage_**. The data is coming from most installed buoys
+* **_Freshness_**. The data coming from the buoys is fresh and relevant
+* **_Throughput_**. The temperature data is delivered from the buoys without significant delays
+* **_Correctness_**. The ratio of lost messages (errors) is small
+
+Satisfaction of these factors means that the service works according to the client's expectations.
+
+The next step is to define instruments to measure values of these factors. This job can be done by the following Service Level Indicators (SLI):
+
+|**Service Level Indicator** | **Factors** |
+|-||
+|Ratio of on-line devices to the total number of devices| Coverage|
+|Ratio of devices reporting frequently to the number of reporting devices| Freshness, Throughput|
+|Ratio of devices successfully delivering messages to the total number of devices|Correctness|
+|Ratio of devices delivering messages fast to the total number of devices| Throughput |
+
+With that done, we can apply a sliding scale on each indicator and define exact threshold values that represent what it means for the client to be "satisfied". For this scenario, we have selected sample threshold values as laid out in the table below with formal Service Level Objectives (SLOs):
+
+|**Service Level Objective**|**Factor**|
+|-|-|
+|90% of devices reported metrics no longer than 10 mins ago (were online) for the observation interval| Coverage |
+|95% of online devices send temperature 10 times per minute for the observation interval| Freshness, Throughput |
+|99% of online devices deliver messages successfully with less than 5% of errors for the observation interval| Correctness |
+|95% of online devices deliver 90th percentile of messages within 50 ms for the observation interval|Throughput|
+
+The SLO definition must also describe how the indicator values are measured:
+
+- Observation interval: 24 hours. SLO statements must have been true for the last 24 hours. This means that if an SLI goes down and breaks a corresponding SLO, it will take 24 hours after the SLI has been fixed to consider the SLO good again.
+- Measurement frequency: 5 minutes. We take the measurements to evaluate SLI values every 5 minutes.
+- What is measured: the interaction between the IoT device and the cloud; further consumption of the temperature data is out of scope.
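+
+As a minimal sketch, these measurement settings and SLO thresholds could be captured in a single configuration object. The snippet below is purely illustrative; the key names are hypothetical, and only the values come from the tables above:
+
+```json
+{
+  // Hypothetical settings sketch; not an actual workbook schema
+  "observationIntervalHours": 24,
+  "measurementFrequencyMinutes": 5,
+  "sloThresholds": {
+    "coverage": 0.90,
+    "freshnessAndThroughput": 0.95,
+    "correctness": 0.99,
+    "throughputLatency": 0.95
+  }
+}
+```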
++
+### How do we measure
+
+At this point, it's clear what we're going to measure and what threshold values we're going to use to determine if the service performs according to the expectations.
+
+It's a common practice to measure service level indicators, like the ones we've defined, by means of **_metrics_**. This type of observability data is relatively small in volume. It's produced by various system components and collected in a central observability backend to be monitored with dashboards, workbooks, and alerts.
+
+Let's clarify what components the La Niña service consists of:
+
+![Diagram of La Nina components including IoT Edge device and Azure Services](media/how-to-observability/la-nina-metrics.png)
+
+There is an IoT Edge device with a `Temperature Sensor` custom module (C#) that generates a temperature value and sends it upstream with a telemetry message. This message is routed to another custom module, `Filter` (C#). This module checks the received temperature against a threshold window (0-100 degrees Celsius). If the temperature is within the window, the `Filter` module sends the telemetry message to the cloud.
+
+In the cloud, the message is processed by the backend. The backend consists of a chain of two Azure Functions and a storage account.
+The .NET function picks up the telemetry message from the IoT Hub events endpoint, processes it, and sends it to the Java function. The Java function saves the message to the storage account blob container.
+
+An IoT Edge device comes with the system modules `edgeHub` and `edgeAgent`. These modules expose [a list of built-in metrics](how-to-access-built-in-metrics.md) through a Prometheus endpoint. These metrics are collected and pushed to the Azure Monitor Log Analytics service by the [metrics collector module](how-to-collect-and-transport-metrics.md) running on the IoT Edge device. In addition to the system modules, the `Temperature Sensor` and `Filter` modules can be instrumented with some business-specific metrics too. However, the service level indicators that we've defined can be measured with the built-in metrics only. So, we don't really need to implement anything else at this point.
+
+In this scenario, we have a fleet of 10 buoys. One of the buoys has been intentionally set up to malfunction so that we can demonstrate the issue detection and the follow-up troubleshooting.
+
+### How do we monitor
+
+We're going to monitor Service Level Objectives (SLO) and corresponding Service Level Indicators (SLI) with Azure Monitor Workbooks. This scenario deployment includes the *La Nina SLO/SLI* workbook assigned to the IoT Hub.
+
+![Screenshot of IoT Hub monitoring showing the Workbooks | Gallery in the Azure portal](media/how-to-observability/dashboard-path.png)
+
+To achieve the best user experience, the workbooks are designed to follow the _glance_ -> _scan_ -> _commit_ concept:
+
+#### Glance
+
+At this level, we can see the whole picture at a single glance. The data is aggregated and represented at the fleet level:
+
+![Screenshot of the monitoring summary report in the Azure portal showing an issue with device coverage and data freshness](media/how-to-observability/glance.png)
+
+From what we can see, the service is not functioning according to the expectations. There is a violation of the *Data Freshness* SLO.
+Only 90% of the devices send the data frequently, but the service clients expect 95%.
+
+All SLO and threshold values are configurable on the workbook settings tab:
+
+![Screenshot of the workbook settings in the Azure portal](media/how-to-observability/workbook-settings.png)
+
+#### Scan
+
+By clicking on the violated SLO, we can drill down to the *scan* level and see how the devices contribute to the aggregated SLI value.
+
+![Screenshot of message frequency by device](media/how-to-observability/scan.png)
+
+There is a single device (out of 10) that sends the telemetry data to the cloud "rarely". In our SLO definition, we've stated that "frequently" means at least 10 times per minute. The frequency of this device is way below that threshold.
+
+#### Commit
+
+By clicking on the problematic device, we're drilling down to the *commit* level. This is a curated workbook, *Device Details*, that comes out of the box with the IoT Hub monitoring offering. The *La Nina SLO/SLI* workbook reuses it to bring in the details of the specific device performance.
+
+![Screenshot of messaging telemetry for a device in the Azure portal](media/how-to-observability/commit.png)
+
+## Troubleshooting
+
+*Measuring and monitoring* lets us observe and predict the system behavior, compare it to the defined expectations, and ultimately detect existing or potential issues. *Troubleshooting*, on the other hand, lets us identify and locate the cause of the issue.
+
+### Basic troubleshooting
+
+The *commit* level workbook gives a lot of detailed information about the device health. That includes resource consumption at the module and device level, message latency, frequency, queue length, and so on. In many cases, this information may help locate the root of the issue.
+
+In this scenario, all parameters of the troubled device look normal and it's not clear why the device sends messages less frequently than expected. This fact is also confirmed by the *messaging* tab of the device-level workbook:
+
+![Screenshot of sample messages in the Azure portal](media/how-to-observability/messages.png)
+
+The `Temperature Sensor` (tempSensor) module produced 120 telemetry messages, but only 49 of them went upstream to the cloud.
+
+The first step is to check the logs produced by the `Filter` module. Click the **Troubleshoot live!** button and select the `Filter` module.
+
+![Screenshot of the filter module log in the Azure portal](media/how-to-observability/basic-logs.png)
+
+Analysis of the module logs doesn't reveal the issue. The module receives messages, and there are no errors. Everything looks good here.
+
+### Deep troubleshooting
+
+There are two observability instruments that serve deep troubleshooting purposes: *traces* and *logs*. In this scenario, traces show how a telemetry message with the ocean surface temperature travels from the sensor to the storage in the cloud, what invokes what, and with what parameters. Logs give information on what is happening inside each system component during this process. The real power of *traces* and *logs* comes when they're correlated. With that, it's possible to read the logs of a specific system component, such as a module on the IoT device or a backend function, while it was processing a specific telemetry message.
+
+The La Niña service uses [OpenTelemetry](https://opentelemetry.io) to produce and collect traces and logs in Azure Monitor.
+
+![Diagram illustrating an IoT Edge device sending telemetry data to Azure Monitor](media/how-to-observability/la-nina-detailed.png)
+
+The IoT Edge modules `Temperature Sensor` and `Filter` export the logs and tracing data via OTLP (OpenTelemetry Protocol) to the [OpenTelemetryCollector](https://opentelemetry.io/docs/collector/) module, running on the same edge device. The `OpenTelemetryCollector` module, in turn, exports logs and traces to the Azure Monitor Application Insights service.
+
+The Azure .NET Function sends the tracing data to Application Insights with the [Azure Monitor OpenTelemetry direct exporter](../azure-monitor/app/opentelemetry-enable.md). It also sends correlated logs directly to Application Insights with a configured ILogger instance.
+
+The Java backend function uses [OpenTelemetry auto-instrumentation Java agent](../azure-monitor/app/java-in-process-agent.md) to produce and export tracing data and correlated logs to the Application Insights instance.
+
+By default, IoT Edge modules on the devices of the La Niña service are configured to not produce any tracing data, and the [logging level](/aspnet/core/fundamentals/logging) is set to `Information`. The amount of produced tracing data is regulated by a [ratio based sampler](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry/Trace/TraceIdRatioBasedSampler.cs#L35). The sampler is configured with a desired [probability](https://github.com/open-telemetry/opentelemetry-dotnet/blob/bdcf942825915666dfe87618282d72f061f7567e/src/OpenTelemetry/Trace/TraceIdRatioBasedSampler.cs#L35) of a given activity being included in a trace. By default, the probability is set to 0. With that in place, the devices don't flood Azure Monitor with detailed observability data when it's not requested.
+
+We've analyzed the `Information` level logs of the `Filter` module and realized that we need to dive deeper to locate the cause of the issue. We're going to update properties in the `Temperature Sensor` and `Filter` module twins and increase the `loggingLevel` to `Debug` and change the `traceSampleRatio` from `0` to `1`:
+
+![Screenshot of module troubleshooting showing updating FilterModule twin properties](media/how-to-observability/update-twin.png)
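+
+As a minimal sketch, the desired properties being set might look like the following in the module twin (the twin envelope is the standard IoT Hub format; the `loggingLevel` and `traceSampleRatio` property names come from this article):
+
+```json
+{
+  "properties": {
+    "desired": {
+      "loggingLevel": "Debug",
+      "traceSampleRatio": 1
+    }
+  }
+}
+```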
+
+With that in place, we have to restart the `Temperature Sensor` and `Filter` modules:
+
+![Screenshot of module troubleshooting showing Restart FilterModule button](media/how-to-observability/restart-module.png)
+
+In a few minutes, the traces and detailed logs will arrive in Azure Monitor from the troubled device. The entire end-to-end message flow from the sensor on the device to the storage in the cloud will be available for monitoring with the *application map* in Application Insights:
+
+![Screenshot of application map in Application Insights](media/how-to-observability/application-map.png)
+
+From this map, we can drill down to the traces. We can see that some of them look normal and contain all the steps of the flow, while some of them are very short: nothing happens after the `Filter` module.
+
+![Screenshot of monitoring traces](media/how-to-observability/traces.png)
+
+Let's analyze one of those short traces and find out what was happening in the `Filter` module, and why it didn't send the message upstream to the cloud.
+
+Our logs are correlated with the traces, so we can query logs specifying the `TraceId` and `SpanId` to retrieve logs corresponding exactly to this execution instance of the `Filter` module:
+
+![Sample trace query filtering based on Trace ID and Span ID.](media/how-to-observability/logs.png)
+
+The logs show that the module received a message with a temperature of 70.465 degrees. But the filtering threshold configured on this device is 30 to 70, so the message simply didn't pass the threshold. Apparently, this specific device was configured incorrectly. This is the cause of the issue we detected while monitoring the La Niña service performance with the workbook.
+
+Let's fix the `Filter` module configuration on this device by updating properties in the module twin. We also want to set the `loggingLevel` back to `Information` and the `traceSampleRatio` back to `0`:
+
+![Sample JSON showing the logging level and trace sample ratio values](media/how-to-observability/fix-issue.png)
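+
+A minimal sketch of the corrected desired properties, under the same assumptions as the earlier twin sketch:
+
+```json
+{
+  "properties": {
+    "desired": {
+      "loggingLevel": "Information",
+      "traceSampleRatio": 0
+    }
+  }
+}
+```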
+
+Having done that, we need to restart the module. In a few minutes, the device reports new metric values to Azure Monitor, which is reflected in the workbook charts:
+
+![Screenshot of Azure Monitor workbook chart](media/how-to-observability/fixed-workbook.png)
+
+We see that the message frequency on the problematic device got back to normal. If nothing else happens, the overall SLO value will become green again within the configured observation interval:
+
+![Screenshot of the monitoring summary report in the Azure portal](media/how-to-observability/green-workbook.png)
+
+## Try the sample
+
+At this point, you might want to deploy the scenario sample to Azure to reproduce the steps and play with your own use cases.
+
+In order to successfully deploy this solution, you need the following:
+
+- [PowerShell](/powershell/scripting/install/installing-powershell).
+- [Azure CLI](/cli/azure/install-azure-cli).
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free).
+
+1. Clone the [IoT Elms](https://github.com/Azure-Samples/iotedge-logging-and-monitoring-solution) repository.
+
+ ```sh
+   git clone https://github.com/Azure-Samples/iotedge-logging-and-monitoring-solution.git
+   ```
+1. Open a PowerShell console and run the `deploy-e2e-tutorial.ps1` script.
++
+ ```powershell
+ ./Scripts/deploy-e2e-tutorial.ps1
+   ./Scripts/deploy-e2e-tutorial.ps1
+   ```
+## Next steps
+
+In this article, you have set up a solution with end-to-end observability capabilities for monitoring and troubleshooting. The common challenge in such solutions for IoT systems is delivering observability data from the devices to the cloud. The devices in this scenario are supposed to be online and have a stable connection to Azure Monitor, which is not always the case in real life.
+
+Advance to follow-up articles such as [Distributed Tracing with IoT Edge](https://github.com/Azure-Samples/iotedge-logging-and-monitoring-solution/blob/main/docs/iot-edge-distributed-tracing.md) for recommendations and techniques to handle scenarios when the devices are normally offline or have limited or restricted connection to the observability backend in the cloud.
logic-apps Logic Apps Add Run Inline Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-add-run-inline-code.md
When you want to run a piece of code inside your logic app workflow, you can add
* Doesn't require working with the [**Variables** actions](../logic-apps/logic-apps-create-variables-store-values.md), which are not yet supported.
-* Uses Node.js version 8.11.1 for [multi-tenant based logic apps](logic-apps-overview.md) or [Node.js versions 10.x.x, 11.x.x, or 12.x.x](https://nodejs.org/en/download/releases/) for [single-tenant based logic apps](single-tenant-overview-compare.md).
+* Uses Node.js version 8.11.1 for [multi-tenant based logic apps](logic-apps-overview.md) or [Node.js versions 12.x.x or 14.x.x](https://nodejs.org/en/download/releases/) for [single-tenant based logic apps](single-tenant-overview-compare.md).
For more information, see [Standard built-in objects](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects).
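
Under the hood, an Inline Code action is a `JavaScriptCode` action in the workflow definition. The following is a minimal illustrative sketch; the action name and the snippet's assumption of a trigger output with a `subject` field are hypothetical:

```json
{
  "Execute_JavaScript_Code": {
    "type": "JavaScriptCode",
    "inputs": {
      "code": "var text = workflowContext.trigger.outputs.body.subject; return text.toUpperCase();"
    },
    "runAfter": {}
  }
}
```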
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Title: 'MLOps: ML model management'
-description: 'Learn about model management (MLOps) with Azure Machine Learning . Deploy, manage, track lineage and monitor your models to continuously improve them. '
+description: 'Learn about model management (MLOps) with Azure Machine Learning. Deploy, manage, track lineage and monitor your models to continuously improve them. '
When deploying to Azure Kubernetes Service, you can use controlled rollout to en
* Perform A/B testing by routing traffic to different versions of the endpoint.
* Switch between endpoint versions by updating the traffic percentage in endpoint configuration.
-For more information, see [Controlled rollout of ML models](how-to-deploy-azure-kubernetes-service.md#deploy-models-to-aks-using-controlled-rollout-preview).
+For more information, see [Controlled rollout of ML models](./how-to-safely-rollout-managed-endpoints.md).
### Analytics
Azure ML gives you the capability to track the end-to-end audit trail of all of
## Notify, automate, and alert on events in the ML lifecycle
-Azure ML publishes key events to Azure EventGrid, which can be used to notify and automate on events in the ML lifecycle. For more information, please see [this document](how-to-use-event-grid.md).
+Azure ML publishes key events to Azure Event Grid, which can be used to notify and automate on events in the ML lifecycle. For more information, please see [this document](how-to-use-event-grid.md).
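
For example, a model registration event is delivered in the standard Event Grid envelope. The following payload is an illustrative sketch; the IDs, names, and data fields are hypothetical:

```json
{
  // Illustrative sketch of an Event Grid event; field values are hypothetical
  "id": "00000000-0000-0000-0000-000000000000",
  "topic": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>",
  "subject": "models/mymodel:1",
  "eventType": "Microsoft.MachineLearningServices.ModelRegistered",
  "eventTime": "2022-04-01T00:00:00Z",
  "data": {
    "modelName": "mymodel",
    "modelVersion": "1"
  },
  "dataVersion": "2"
}
```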
## Monitor for operational & ML issues
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
Allows you to define a role scoped only to labeling data:
"Actions": [ "Microsoft.MachineLearningServices/workspaces/read", "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/write"
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/write"
],
- "NotActions": [
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read"
+ "NotActions": [
], "AssignableScopes": [ "/subscriptions/<subscription_id>"
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-kubernetes.md
Azure Machine Learning can deploy trained machine learning models to Azure Kuber
- The [Azure CLI extension for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).

-- If you plan on using an Azure Virtual Network to secure communication between your Azure ML workspace and the AKS cluster, read the [Network isolation during training & inference](./how-to-network-security-overview.md) article.
+- If you plan on using an Azure Virtual Network to secure communication between your Azure ML workspace and the AKS cluster, your workspace and its associated resources (storage, key vault, Azure Container Registry) must have private endpoints or service endpoints in the same VNET as AKS cluster's VNET. Please follow tutorial [create a secure workspace](./tutorial-create-secure-workspace.md) to add those private endpoints or service endpoints to your VNET.
## Limitations
Azure Machine Learning can deploy trained machine learning models to Azure Kuber
  Authorized IP ranges only works with Standard Load Balancer.

-- To attach an AKS cluster from a __different Azure subscription__, you (your Azure AD account) must be granted the **Contributor** role on the AKS cluster. Check your access in the [Azure portal](https://portal.azure.com/).

- If you want to use a private AKS cluster (using Azure Private Link), you must create the cluster first, and then **attach** it to the workspace. For more information, see [Create a private Azure Kubernetes Service cluster](../aks/private-clusters.md).
- Using a [public fully qualified domain name (FQDN) with a private AKS cluster](../aks/private-clusters.md) is __not supported__ with Azure Machine learning.
Azure Machine Learning can deploy trained machine learning models to Azure Kuber
- [Manually scale the node count in an AKS cluster](../aks/scale-cluster.md)
- [Set up cluster autoscaler in AKS](../aks/cluster-autoscaler.md)

-- __Do not directly update the cluster by using a YAML configuration__. While Azure Kubernetes Services supports updates via YAML configuration, Azure Machine Learning deployments will override your changes. The only two YAML fields that will not overwritten are __request limits__ and and __cpu and memory__.
+- __Do not directly update the cluster by using a YAML configuration__. While Azure Kubernetes Services supports updates via YAML configuration, Azure Machine Learning deployments will override your changes. The only two YAML fields that will not be overwritten are __request limits__ and __cpu and memory__.
- Creating an AKS cluster using the Azure Machine Learning studio UI, SDK, or CLI extension is __not__ idempotent. Attempting to create the resource again will result in an error that a cluster with the same name already exists.
For information on attaching an AKS cluster in the portal, see [Create compute t
## Create or attach an AKS cluster with TLS termination
-When you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md), you can enable TLS termination with **[AksCompute.provisioning_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none--load-balancer-type-none--load-balancer-subnet-none-)** and **[AksCompute.attach_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#attach-configuration-resource-group-none--cluster-name-none--resource-id-none--cluster-purpose-none-)** configuration objects. Both method return a configuration object that has an **enable_ssl** method, and you can use **enable_ssl** method to enable TLS.
+When you [create or attach an AKS cluster](how-to-create-attach-kubernetes.md), you can enable TLS termination with **[AksCompute.provisioning_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none--load-balancer-type-none--load-balancer-subnet-none-)** and **[AksCompute.attach_configuration()](/python/api/azureml-core/azureml.core.compute.akscompute#attach-configuration-resource-group-none--cluster-name-none--resource-id-none--cluster-purpose-none-)** configuration objects. Both methods return a configuration object that has an **enable_ssl** method, and you can use the **enable_ssl** method to enable TLS.
The following example shows how to enable TLS termination with automatic TLS certificate generation and configuration, using a Microsoft certificate under the hood.

```python
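# A minimal sketch, not the original snippet: it assumes the azureml-core
# SDK, and "contoso" and "myaks" are example names. With leaf_domain_label
# set, the service generates and manages a Microsoft certificate for
# "<leaf-domain-label>######.<azure-region>.cloudapp.azure.com".
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget

ws = Workspace.from_config()

# Configure a new cluster with TLS termination enabled.
provisioning_config = AksCompute.provisioning_configuration()
provisioning_config.enable_ssl(leaf_domain_label="contoso")

# Create the cluster and wait for provisioning to finish.
aks_target = ComputeTarget.create(ws, "myaks", provisioning_config)
aks_target.wait_for_completion(show_output=True)
```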
When you create or attach an AKS cluster, you can configure the cluster to use a
# [Create](#tab/akscreate)
-To create an AKS cluster that uses an Internal Load Balancer, use the the `load_balancer_type` and `load_balancer_subnet` parameters:
+To create an AKS cluster that uses an Internal Load Balancer, use the `load_balancer_type` and `load_balancer_subnet` parameters:
```python
from azureml.core.compute.aks import AksUpdateConfiguration
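# A minimal sketch, assuming the azureml-core SDK; "default" is an example
# subnet name and "myaks" an example cluster name. (AksUpdateConfiguration
# above is used when updating an already-attached cluster.)
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget

ws = Workspace.from_config()

# Provision a cluster whose scoring endpoint is exposed through an internal
# load balancer in the given subnet instead of a public IP.
provisioning_config = AksCompute.provisioning_configuration(
    load_balancer_type="InternalLoadBalancer",
    load_balancer_subnet="default",
)

aks_target = ComputeTarget.create(ws, "myaks", provisioning_config)
aks_target.wait_for_completion(show_output=True)
```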
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
DNS resolution within an existing VNet is under your control. For example, a fir
| `<ACR name>.azurecr.io` | Your Azure Container Registry (ACR) |
| `<account>.table.core.windows.net` | Azure Storage Account (table storage) |
| `<account>.blob.core.windows.net` | Azure Storage Account (blob storage) |
-| `api.azureml.ms` | Azure Active Directory (AAD) authentication |
+| `api.azureml.ms` | Azure Active Directory (Azure AD) authentication |
| `ingest-vienna<region>.kusto.windows.net` | Kusto endpoint for uploading telemetry |
| `<leaf-domain-label + auto-generated suffix>.<region>.cloudapp.azure.com` | Endpoint domain name, if autogenerated by Azure Machine Learning. If you used a custom domain name, you do not need this entry. |
Right after azureml-fe is deployed, it will attempt to start and this requires t
Once azureml-fe is started, it requires the following connectivity to function properly:
* Connect to Azure Storage to download dynamic configuration
-* Resolve DNS for AAD authentication server api.azureml.ms and communicate with it when the deployed service uses AAD authentication.
+* Resolve DNS for Azure AD authentication server api.azureml.ms and communicate with it when the deployed service uses Azure AD authentication.
* Query AKS API server to discover deployed models
* Communicate to deployed model PODs
replicas = ceil(concurrentRequests / maxReqPerContainer)
For more information on setting `autoscale_target_utilization`, `autoscale_max_replicas`, and `autoscale_min_replicas`, see the [AksWebservice](/python/api/azureml-core/azureml.core.webservice.akswebservice) module reference.
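As a rough illustration of how these parameters fit together (a minimal sketch assuming the azureml-core SDK; the utilization and replica values are example numbers, not recommendations), the following deployment configuration sets the autoscaling bounds and target utilization discussed above:

```python
from azureml.core.webservice import AksWebservice

# Example values only: keep each replica near 70% utilization and
# scale between 1 and 4 replicas as concurrent requests change.
aks_config = AksWebservice.deploy_configuration(
    autoscale_enabled=True,
    autoscale_target_utilization=70,
    autoscale_min_replicas=1,
    autoscale_max_replicas=4,
)
```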
-## Deploy models to AKS using controlled rollout (preview)
-
-Analyze and promote model versions in a controlled fashion using endpoints. You can deploy up to six versions behind a single endpoint. Endpoints provide the following capabilities:
-
-* Configure the __percentage of scoring traffic sent to each endpoint__. For example, route 20% of the traffic to endpoint 'test' and 80% to 'production'.
-
- > [!NOTE]
- > If you do not account for 100% of the traffic, any remaining percentage is routed to the __default__ endpoint version. For example, if you configure endpoint version 'test' to get 10% of the traffic, and 'prod' for 30%, the remaining 60% is sent to the default endpoint version.
- >
- > The first endpoint version created is automatically configured as the default. You can change this by setting `is_default=True` when creating or updating an endpoint version.
-
-* Tag an endpoint version as either __control__ or __treatment__. For example, the current production endpoint version might be the control, while potential new models are deployed as treatment versions. After evaluating performance of the treatment versions, if one outperforms the current control, it might be promoted to the new production/control.
-
- > [!NOTE]
- > You can only have __one__ control. You can have multiple treatments.
-
-You can enable app insights to view operational metrics of endpoints and deployed versions.
-
-### Create an endpoint
-Once you are ready to deploy your models, create a scoring endpoint and deploy your first version. The following example shows how to deploy and create the endpoint using the SDK. The first deployment will be defined as the default version, which means that unspecified traffic percentile across all versions will go to the default version.
-
-> [!TIP]
-> In the following example, the configuration sets the initial endpoint version to handle 20% of the traffic. Since this is the first endpoint, it's also the default version. And since we don't have any other versions for the other 80% of traffic, it is routed to the default as well. Until other versions that take a percentage of traffic are deployed, this one effectively receives 100% of the traffic.
-
-```python
-import azureml.core,
-from azureml.core.webservice import AksEndpoint
-from azureml.core.compute import AksCompute
-from azureml.core.compute import ComputeTarget
-# select a created compute
-compute = ComputeTarget(ws, 'myaks')
-
-# define the endpoint and version name
-endpoint_name = "mynewendpoint"
-version_name= "versiona"
-# create the deployment config and define the scoring traffic percentile for the first deployment
-endpoint_deployment_config = AksEndpoint.deploy_configuration(cpu_cores = 0.1, memory_gb = 0.2,
- enable_app_insights = True,
- tags = {'sckitlearn':'demo'},
- description = "testing versions",
- version_name = version_name,
- traffic_percentile = 20)
- # deploy the model and endpoint
- endpoint = Model.deploy(ws, endpoint_name, [model], inference_config, endpoint_deployment_config, compute)
- # Wait for he process to complete
- endpoint.wait_for_deployment(True)
- ```
-
-### Update and add versions to an endpoint
-
-Add another version to your endpoint and configure the scoring traffic percentile going to the version. There are two types of versions, a control and a treatment version. There can be multiple treatment versions to help compare against a single control version.
-
-> [!TIP]
-> The second version, created by the following code snippet, accepts 10% of traffic. The first version is configured for 20%, so only 30% of the traffic is configured for specific versions. The remaining 70% is sent to the first endpoint version, because it is also the default version.
-
-```python
-from azureml.core.webservice import AksEndpoint
-
-# add another model deployment to the same endpoint as above
-version_name_add = "versionb"
-endpoint.create_version(version_name = version_name_add,
- inference_config=inference_config,
- models=[model],
- tags = {'modelVersion':'b'},
- description = "my second version",
- traffic_percentile = 10)
-endpoint.wait_for_deployment(True)
-```
-
-Update existing versions or delete them in an endpoint. You can change the version's default type, control type, and the traffic percentile. In the following example, the second version increases its traffic to 40% and is now the default.
-
-> [!TIP]
-> After the following code snippet, the second version is now default. It is now configured for 40%, while the original version is still configured for 20%. This means that 40% of traffic is not accounted for by version configurations. The leftover traffic will be routed to the second version, because it is now default. It effectively receives 80% of the traffic.
-
-```python
-from azureml.core.webservice import AksEndpoint
-
-# update the version's scoring traffic percentage and if it is a default or control type
-endpoint.update_version(version_name=endpoint.versions["versionb"].name,
- description="my second version update",
- traffic_percentile=40,
- is_default=True,
- is_control_version_type=True)
-# Wait for the process to complete before deleting
-endpoint.wait_for_deployment(true)
-# delete a version in an endpoint
-endpoint.delete_version(version_name="versionb")
-
-```
-
## Web service authentication

When deploying to Azure Kubernetes Service, __key-based__ authentication is enabled by default. You can also enable __token-based__ authentication. Token-based authentication requires clients to use an Azure Active Directory account to request an authentication token, which is used to make requests to the deployed service.
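Key-based and token-based authentication are mutually exclusive on an AKS deployment. As a minimal sketch (assuming the azureml-core SDK; the flag values are illustrative), the following configuration turns token-based authentication on and key-based authentication off:

```python
from azureml.core.webservice import AksWebservice

# Require Azure AD tokens instead of keys; enabling token auth means
# key-based auth is disabled for this deployment.
deployment_config = AksWebservice.deploy_configuration(
    token_auth_enabled=True,
    auth_enabled=False,
)

# After deployment, a client would fetch a token to call the scoring URI:
# token, refresh_by = service.get_token()
```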
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
You have two options for AKS clusters in a virtual network:
**Default AKS clusters** have a control plane with public IP addresses. You can add a default AKS cluster to your VNet during the deployment or attach a cluster after it's created.
-**Private AKS clusters** have a control plane, which can only be accessed through private IPs. Private AKS clusters must be attached after the cluster is created.
+**Private AKS clusters** have a control plane, which can only be accessed through private IPs. Private AKS clusters must be attached after the cluster is created.
For detailed instructions on how to add default and private clusters, see [Secure an inferencing environment](how-to-secure-inferencing-vnet.md).
+Whether you use a default AKS cluster or a private AKS cluster, if your AKS cluster is behind a VNet, your workspace and its associated resources (storage, key vault, and ACR) must have private endpoints or service endpoints in the same VNet as the AKS cluster.
+ The following network diagram shows a secured Azure Machine Learning workspace with a private AKS cluster attached to the virtual network. :::image type="content" source="./media/how-to-network-security-overview/secure-inferencing-environment.svg" alt-text="Diagram showing an attached private AKS cluster.":::
-### Limitations
--- The workspace must have a private endpoint in the same VNet as the AKS cluster. For example, when using multiple private endpoints with the workspace, one private endpoint can be in the AKS VNet and another in the VNet that contains dependency services for the workspace. ## Optional: Enable public access
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
In this article you learn how to secure the following inferencing resources in a
### Azure Kubernetes Service
+* If your AKS cluster is behind a VNet, your workspace and its associated resources (storage, key vault, Azure Container Registry) must have private endpoints or service endpoints in the same VNet as the AKS cluster. Read the tutorial [create a secure workspace](./tutorial-create-secure-workspace.md) to add those private endpoints or service endpoints to your VNet.
* If your workspace has a __private endpoint__, the Azure Kubernetes Service cluster must be in the same Azure region as the workspace.
* Using a [public fully qualified domain name (FQDN) with a private AKS cluster](../aks/private-clusters.md) is __not supported__ with Azure Machine Learning.
machine-learning How To Train Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-cli.md
And run it:
:::code language="azurecli" source="~/azureml-examples-main/cli/train.sh" id="sklearn_iris":::
-To register a model, you can download the outputs and create a model from the local directory:
+To register a model, you can upload the model files from the run to the model registry:
:::code language="azurecli" source="~/azureml-examples-main/cli/train.sh" id="sklearn_download_register_model":::
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-identities.md
In this scenario, Azure Machine Learning service builds the training or inferenc
1. Grant the Managed Identity Operator role: ```azurecli-interactive
- az role assignment create --assignee <principal ID> --role managedidentityoperator --scope <UAI resource ID>
+ az role assignment create --assignee <principal ID> --role managedidentityoperator --scope <user-assigned managed identity resource ID>
```
- The UAI resource ID is Azure resource ID of the user assigned identity, in the format `/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<UAI name>`.
+ The user-assigned managed identity resource ID is the Azure resource ID of the user-assigned identity, in the format `/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user-assigned managed identity name>`.
1. Specify the external ACR and client ID of the __user-assigned managed identity__ in workspace connections by using [Workspace.set_connection method](/python/api/azureml-core/azureml.core.workspace.workspace#set-connection-name--category--target--authtype--value-):
In this scenario, Azure Machine Learning service builds the training or inferenc
category="ACR", target = "<acr url>", authType = "RegistryConnection",
- value={"ResourceId": "<UAI resource id>", "ClientId": "<UAI client ID>"})
+ value={"ResourceId": "<user-assigned managed identity resource id>", "ClientId": "<user-assigned managed identity client ID>"})
```
Once the configuration is complete, you can use the base images from private ACR when building environments for training or inference. The following code snippet demonstrates how to specify the base image ACR and image name in an environment definition:
Optionally, you can specify the managed identity resource URL and client ID in t
from azureml.core.container_registry import RegistryIdentity
identity = RegistryIdentity()
-identity.resource_id= "<UAI resource ID>"
-identity.client_id="<UAI client ID>"
+identity.resource_id= "<user-assigned managed identity resource ID>"
+identity.client_id="<user-assigned managed identity client ID>"
env.docker.base_image_registry.registry_identity = identity
env.docker.base_image = "my-acr.azurecr.io/my-repo/my-image:latest"
```
marketplace Determine Your Listing Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/determine-your-listing-type.md
Previously updated : 12/03/2021 Last updated : 04/12/2022 # Introduction to listing options
When you create an offer type, you choose one or more listing options. These opt
This table shows which listing options are available for each offer type:
-| Offer type | Free Trial | Test Drive | Contact Me | Get It Now `*` |
+| Offer type | Free Trial | Test Drive | Contact Me | Get It Now |
| | - | - | - | - |
-| Azure Application (Managed app) | | &#10004; | | &#10004; |
-| Azure Application (Solution template) | | | | &#10004; |
-| Azure Container | | | | &#10004; |
-| Azure Virtual Machine | &#10004; | &#10004; | | &#10004; |
+| Azure Application (Managed app) | | &#10004; | | &#10004;<sup>1</sup> |
+| Azure Application (Solution template) | | | | &#10004;<sup>1</sup> |
+| Azure Container | | | | &#10004;<sup>1</sup> |
+| Azure Virtual Machine | &#10004; | &#10004; | | &#10004;<sup>1</sup> |
| Consulting service | | | &#10004; | |
-| Dynamics 365 Business Central | &#10004; | &#10004; | &#10004; | &#10004; |
-| Dynamics 365 apps on Dataverse and Power Apps | &#10004; | &#10004; | &#10004; | &#10004; |
-| Dynamics 365 Operations Apps | &#10004; | &#10004; | &#10004; | &#10004; |
-| IoT Edge module | | | | &#10004; |
-| Managed Service | | | | &#10004; |
-| Power BI App | | | | &#10004; |
-| Software as a service | &#10004; | &#10004; | &#10004; | &#10004; |
+| Dynamics 365 Business Central | &#10004; | &#10004; | &#10004; | &#10004;<sup>1</sup> |
+| Dynamics 365 apps on Dataverse and Power Apps | &#10004; | &#10004; | &#10004; | &#10004;<sup>1</sup> <sup>2</sup> |
+| Dynamics 365 Operations Apps | &#10004; | &#10004; | &#10004; | &#10004;<sup>1</sup> |
+| IoT Edge module | | | | &#10004;<sup>1</sup> |
+| Managed Service | | | | &#10004;<sup>1</sup> |
+| Power BI App | | | | &#10004;<sup>1</sup> |
+| Software as a service | &#10004; | &#10004; | &#10004; | &#10004;<sup>1</sup> |
||||||
-\* The **Get It Now** listing option includes Get It Now (Free), bring your own license (BYOL), Subscription, and Usage-based pricing. For more information, see [Get It Now](#get-it-now).
+<sup>1</sup> The **Get It Now** listing option includes Get It Now (Free), bring your own license (BYOL), Subscription, and Usage-based pricing. For more information, see [Get It Now](#get-it-now).
+
+<sup>2</sup> Customers will see a **Get it now** button on the offer listing page in AppSource for offers configured for [ISV app license management](isv-app-license.md). Customers can select this button to contact you to purchase licenses for the app.
## Change the offer type
This option is a simple listing of your application or service. Customers use th
This listing option includes transactable offers (subscriptions or user-based pricing), bring your own license (BYOL) offers, and **Get It Now (Free)**. Transactable offers are sold through the commercial marketplace. Microsoft is responsible for billing and collections. Customers use the **Get It Now** button to get the offer.
+> [!NOTE]
+> Customers will see a **Get it now** button on the offer listing page in AppSource for offers configured for [ISV app license management](isv-app-license.md). Customers can select this button to contact you to purchase licenses for the app.
+
This table shows which offer types support the pricing options that are included with the **Get It Now** listing option.

| Offer type | Get It Now (Free) | BYOL | Subscription | Usage-based pricing |
This table shows which offer types support the pricing options that are included
<sup>2</sup> Priced per hour and billed monthly.
+<sup>3</sup> Customers will see a **Get it now** button on the offer listing page in AppSource for offers configured for [ISV app license management](isv-app-license.md).
+
### Get It Now (Free)

Use this listing option to offer your application for free. Customers use the **Get It Now** button to get your free offer.
The following table shows the options that are available for different offer typ
| IoT Edge module | | | Azure Marketplace | Azure Marketplace | |
| Managed service | | | | Azure Marketplace | |
| Consulting service | Both online stores | | | | |
-| SaaS | Both online stores | Both online stores | Both online stores | | Both online stores &#42; |
-| Microsoft 365 App | AppSource | AppSource | | | AppSource &#42;&#42; |
+| SaaS | Both online stores | Both online stores | Both online stores | | Both online stores <sup>1</sup>|
+| Microsoft 365 App | AppSource | AppSource | | | AppSource <sup>2</sup> |
| Dynamics 365 Business Central | AppSource | AppSource | | | |
-| Dynamics 365 apps on Dataverse and Power Apps | AppSource | AppSource | | | |
+| Dynamics 365 apps on Dataverse and Power Apps | AppSource | AppSource | | | AppSource <sup>3</sup> |
| Dynamics 365 Operations Apps | AppSource | AppSource | | | |
| Power BI App | | | AppSource | | |
|||||||
-&#42; SaaS transactable offers in AppSource only accept credit cards at this time.
+<sup>1</sup> SaaS transactable offers in AppSource only accept credit cards at this time.
+
+<sup>2</sup> Microsoft 365 add-ins are free to install and can be monetized using an SaaS offer. For more information, see [Monetize your app through the commercial marketplace](/office/dev/store/monetize-addins-through-microsoft-commercial-marketplace).
-&#42;&#42; Microsoft 365 add-ins are free to install and can be monetized using an SaaS offer. For more information, see [Monetize your app through the commercial marketplace](/office/dev/store/monetize-addins-through-microsoft-commercial-marketplace).
+<sup>3</sup> Applies to offers configured for [ISV app license management](isv-app-license.md).
## Marketplace Rewards
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-dynamics-365.md
Previously updated : 03/17/2022 Last updated : 04/13/2022 # Plan a Microsoft Dynamics 365 offer
These are the available licensing options for Dynamics 365 offer types:
| Offer type | Listing option | | | | | Dynamics 365 Operations Apps | Contact me |
-| Dynamics 365 apps on Dataverse and Power Apps | Get it now<br>Get it now (Free)<br>Free trial (listing)<br>Contact me |
+| Dynamics 365 apps on Dataverse and Power Apps | Get it now<sup>1</sup><br>Get it now (Free)<br>Free trial (listing)<br>Contact me |
| Dynamics 365 Business Central | Get it now (Free)<br>Free trial (listing)<br>Contact me | |||
+<sup>1</sup> Customers will see a **Get it now** button on the offer listing page in AppSource for offers configured for [ISV app license management](isv-app-license.md). Customers can select this button to contact you to purchase licenses for the app.
+
The following table describes the transaction process of each listing option.

| Licensing option | Transaction process |
The following table describes the transaction process of each listing option.
| Contact me | Collect customer contact information by connecting your Customer Relationship Management (CRM) system. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and marketplace source where they found your offer, will be sent to the CRM system that you've configured. For more information about configuring your CRM, see the **Customer leads** section of your offer type's **Offer setup** page. |
| Free trial (listing) | Offer your customers a one-, three- or six-month free trial. Offer listing free trials are created, managed, and configured by your service and do not have subscriptions managed by Microsoft. |
| Get it now (free) | List your offer to customers for free. |
-| Get it now | Enables you to manage your ISV app0 licenses in Partner Center.<br>Currently available to the following offer type only:<ul><li>Dynamics 365 apps on Dataverse and Power Apps</li></ul><br>For more information about this option, see [ISV app license management](isv-app-license.md). |
+| Get it now | Enables you to manage your ISV app licenses in Partner Center.<br>Currently available to the following offer type only:<ul><li>Dynamics 365 apps on Dataverse and Power Apps</li></ul><br>For more information about this option, see [ISV app license management](isv-app-license.md). |
||| ## Test drive
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/what-is-new.md
Previously updated : 04/12/2022 Last updated : 04/13/2022 # What's new in the Microsoft commercial marketplace
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date |
| | | |
+| Offers | ISVs can now offer custom prices, terms, conditions, and pricing for a specific customer through private offers. See [ISV to customer private offers](isv-customer.md) and the [FAQ](isv-customer-faq.yml). | 2022-04-06 |
| Offers | An ISV can now specify time-bound margins for CSP partners to incentivize them to sell it to their customers. When their partner makes a sale to a customer, Microsoft will pay the ISV the wholesale price. See [ISV to CSP Partner private offers](./isv-csp-reseller.md) and [the FAQs](./isv-csp-faq.yml). | 2022-02-15 |
| Analytics | We added a new [Customer Retention Dashboard](./customer-retention-dashboard.md) that provides vital insights into customer retention and engagement. See the [FAQ article](./analytics-faq.yml). | 2022-02-15 |
| Analytics | We added a Quality of Service (QoS) report query to the [List of system queries](./analytics-system-queries.md) used in the Create Report API. | 2022-01-27 |
Learn about important updates in the commercial marketplace program of Partner C
| Offers | We moved the list of categories and industries from our [Marketing Best Practices](gtm-offer-listing-best-practices.md) topic to their [own page](marketplace-categories-industries.md). | 2021-08-20 |
| Offers | The [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md) topic now includes a flowchart to help you determine the appropriate transactable offer type and pricing plan to sell your software in the commercial marketplace. | 2021-08-18 |
| Policy | Updated [certification](/legal/marketplace/certification-policies?context=/azure/marketplace/context/context) policy; see [change history](/legal/marketplace/offer-policies-change-history). | 2021-08-06 |
-|
+|
migrate Add Server Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/add-server-credentials.md
The types of server credentials supported are listed in the table below:
Type of credentials | Description |
-**Domain credentials** | You can add **Domain credentials** by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> To provide domain credentials, you need to specify the **Domain name** which must be provided in the FQDN format (for example, prod.corp.contoso.com). <br/><br/> You also need to specify a friendly name for credentials, username, and password. <br/><br/> The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. <br/><br/>For the appliance to validate the domain credentials with the domain controller, it should be able to resolve the domain name. Ensure that you have provided the correct domain name while adding the credentials else the validation will fail.<br/><br/> The appliance will not attempt to map the domain credentials that have failed validation. You need to have at least one successfully validated domain credential or at least one non-domain credential to start the discovery.<br/><br/>The domain credentials mapped automatically against the Windows servers will be used to perform software inventory and can also be used to discover web apps, and SQL Server instances and databases _(if you have configured Windows authentication mode on your SQL Servers)_.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.
+**Domain credentials** | You can add **Domain credentials** by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> To provide domain credentials, you need to specify the **Domain name** which must be provided in the FQDN format (for example, prod.corp.contoso.com). <br/><br/> You also need to specify a friendly name for credentials, username, and password. It is recommended to provide the credentials in the UPN format, for example, user1@contoso.com. <br/><br/> The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. <br/><br/>For the appliance to validate the domain credentials with the domain controller, it should be able to resolve the domain name. Ensure that you have provided the correct domain name while adding the credentials; otherwise, the validation will fail.<br/><br/> The appliance will not attempt to map the domain credentials that have failed validation. You need to have at least one successfully validated domain credential or at least one non-domain credential to start the discovery.<br/><br/>The domain credentials mapped automatically against the Windows servers will be used to perform software inventory and can also be used to discover web apps, and SQL Server instances and databases _(if you have configured Windows authentication mode on your SQL Servers)_.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.
**Non-domain credentials (Windows/Linux)** | You can add **Windows (Non-domain)** or **Linux (Non-domain)** by selecting the required option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password.
**SQL Server Authentication credentials** | You can add **SQL Server Authentication** credentials by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password. <br/><br/> You can add this type of credentials to discover SQL Server instances and databases running in your VMware environment, if you have configured SQL Server authentication mode on your SQL Servers.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.<br/><br/> You need to provide at least one successfully validated domain credential or at least one Windows (Non-domain) credential so that the appliance can complete the software inventory to discover SQL installed on the servers before it uses the SQL Server authentication credentials to discover the SQL Server instances and databases.
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
## Cluster configuration requirements

* All OpenShift Cluster operators must remain in a managed state. The list of cluster operators can be returned by running `oc get clusteroperators`.
-* The cluster must have a minimum of three worker nodes and three control plane nodes. Don't have taints that prevent OpenShift components to be scheduled. Don't scale the cluster workers to zero, or attempt a graceful cluster shutdown.
+* The cluster must have a minimum of three worker nodes and three manager nodes.
+* Don't scale the cluster workers to zero, or attempt a cluster shutdown. Deallocating or powering down any virtual machine in the cluster resource group is not supported.
+* Don't have taints that prevent OpenShift components from being scheduled.
* Don't remove or modify the cluster Prometheus and Alertmanager services.
* Don't remove Service Alertmanager rules.
* Security groups can't be modified. Any attempt to modify security groups will be reverted.
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* Non-RHCOS compute nodes aren't supported. For example, you can't use a RHEL compute node.
* Don't place policies within your subscription or management group that prevent SREs from performing normal maintenance against the Azure Red Hat OpenShift cluster. For example, don't require tags on the Azure Red Hat OpenShift RP-managed cluster resource group.
* Do not run extra workloads on the control plane nodes. While they can be scheduled on the control plane nodes, it will cause extra resource usage and stability issues that can affect the entire cluster.
+* Don't circumvent the deny assignment that is configured as part of the service, or perform administrative tasks that are normally prohibited by the deny assignment.
## Supported virtual machine sizes
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
Previously updated : 03/14/2022 Last updated : 04/13/2022 # Create and manage a self-hosted integration runtime
Here are the domains and outbound ports that you need to allow at both **corpora
| `download.microsoft.com` | 443 | Required to download the self-hosted integration runtime updates. If you have disabled auto-update, you can skip configuring this domain. |
| `login.windows.net`<br>`login.microsoftonline.com` | 443 | Required to sign in to Azure Active Directory. |
+> [!NOTE]
+> Because Azure Relay doesn't currently support service tags, use the AzureCloud or Internet service tag in NSG rules for the communication to Azure Relay and to Azure Purview.
+
Depending on the sources you want to scan, you also need to allow other domains and outbound ports for other Azure or external sources. A few examples are provided here:

| Domain names | Outbound ports | Description |
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Enabling Azure connections will allow Azure Purview to reach and connect the ser
A self-hosted integration runtime (SHIR) can be installed on a machine to connect with a resource in a private network.

1. [Create and install a self-hosted integration runtime](./manage-integration-runtimes.md) on a personal machine, or a machine inside the same VNet as your database server.
-1. Check your database server firewall to confirm that the SHIR machine has access through the firewall. Add the IP of the machine if it doesn't already have access.
+1. Check your database server networking configuration to confirm that there is a private endpoint accessible to the SHIR machine. Add the IP of the machine if it doesn't already have access.
1. If your Azure SQL Server is behind a private endpoint or in a VNet, you can use an [ingestion private endpoint](catalog-private-link-ingestion.md#deploy-self-hosted-integration-runtime-ir-and-scan-your-data-sources) to ensure end-to-end network isolation.

### Authentication for a scan
Select your method of authentication from the tabs below for scanning steps.
### Scoping and running the scan
-1. You can scope your scan to specific folders and subfolders by choosing the appropriate items in the list.
+1. You can scope your scan to specific database objects by choosing the appropriate items in the list.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-scope-scan.png" alt-text="Scope your scan.":::
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
This section describes how to register a Power BI tenant in Azure Purview for sa
#### Scan same tenant using Azure IR and Managed Identity
-This is a suitable scenario, if both Azure Purview and Power PI tenant are configured to allow public access in the network settings.
+This is a suitable scenario, if both Azure Purview and Power BI tenant are configured to allow public access in the network settings.
To create and run a new scan, do the following:
To create and run a new scan, do the following:
#### Scan same tenant using Self-hosted IR and Delegated authentication
-This scenario can be used when Azure Purview and Power PI tenant or both, are configured to use private endpoint and deny public access. Additionally, this option is also applicable if Azure Purview and Power PI tenant are configured to allow public access.
+This scenario can be used when Azure Purview, the Power BI tenant, or both are configured to use private endpoints and deny public access. This option is also applicable if Azure Purview and the Power BI tenant are configured to allow public access.
> [!IMPORTANT] > Additional configuration may be required for your Power BI tenant and Azure Purview account, if you are planning to scan Power BI tenant through private network where either Azure Purview account, Power BI tenant or both are configured with private endpoint with public access denied.
sentinel Kusto Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/kusto-overview.md
Last updated 01/06/2022
# Kusto Query Language in Microsoft Sentinel
-Kusto Query Language is the language you will use to work with and manipulate data in Microsoft Sentinel. The logs you feed into your workspace aren't worth much if you can't analyze them and get the important information hidden in all that data. Kusto Query Language has not only the power and flexibility to get that information, but the simplicity to help you get started quickly. If you have a background in scripting or working with databases, a lot of the content of this article will feel very familiar. If not, don't worry, as the intuitive nature of the language will quickly enable you to start writing your own queries and driving value for your organization.
+Kusto Query Language is the language you will use to work with and manipulate data in Microsoft Sentinel. The logs you feed into your workspace aren't worth much if you can't analyze them and get the important information hidden in all that data. Kusto Query Language has not only the power and flexibility to get that information, but the simplicity to help you get started quickly. If you have a background in scripting or working with databases, a lot of the content of this article will feel very familiar. If not, don't worry, as the intuitive nature of the language quickly enables you to start writing your own queries and driving value for your organization.
This article introduces the basics of Kusto Query Language, covering some of the most used functions and operators, which should address 75 to 80 percent of the queries you will write day to day. When you need more depth, or want to run more advanced queries, you can take advantage of the new **Advanced KQL for Microsoft Sentinel** workbook (see this [introductory blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/advanced-kql-framework-workbook-empowering-you-to-become-kql/ba-p/3033766)). See also the [official Kusto Query Language documentation](/azure/data-explorer/kusto/query/) as well as a variety of online courses (such as [Pluralsight's](https://www.pluralsight.com/courses/kusto-query-language-kql-from-scratch)).
Microsoft Sentinel is built on top of the Azure Monitor service and it uses Azur
[Kusto Query Language](/azure/data-explorer/kusto/query/) was developed as part of the [Azure Data Explorer](/azure/data-explorer/) service, and it's therefore optimized for searching through big-data stores in a cloud environment. Inspired by famed undersea explorer Jacques Cousteau (and pronounced accordingly "koo-STOH"), it's designed to help you dive deep into your oceans of data and explore their hidden treasures.
-Kusto Query Language is also used in Azure Monitor (and therefore in Microsoft Sentinel), including some additional Azure Monitor features, to retrieve, visualize, analyze, and parse data in Log Analytics data stores. In Microsoft Sentinel, you're using tools based on Kusto Query Language whenever you're visualizing and analyzing data and hunting for threats, whether in existing rules and workbooks, or in building your own.
+Kusto Query Language is also used in Azure Monitor (and therefore in Microsoft Sentinel), including some additional Azure Monitor features, which allow you to retrieve, visualize, analyze, and parse data in Log Analytics data stores. In Microsoft Sentinel, you're using tools based on Kusto Query Language whenever you're visualizing and analyzing data and hunting for threats, whether in existing rules and workbooks, or in building your own.
-Because Kusto Query Language is a part of nearly everything you do in Microsoft Sentinel, a clear understanding of how it works will help you get that much more out of your SIEM.
+Because Kusto Query Language is a part of nearly everything you do in Microsoft Sentinel, a clear understanding of how it works helps you get that much more out of your SIEM.
## What is a query?
sentinel Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/prerequisites.md
# Pre-deployment activities and prerequisites for deploying Microsoft Sentinel - This article introduces the pre-deployment activities and prerequisites for deploying Microsoft Sentinel. ## Pre-deployment activities
Before deploying Microsoft Sentinel, we recommend taking the following steps to
1. Determine which [data sources](connect-data-sources.md) you need and the data size requirements to help you accurately project your deployment's budget and timeline.
- You might determine this information during your business use case review, or by evaluating a current SIEM that you already have in place. If you already have a SIEM in place, analyze your data to understand which data sources provide the most value and should be ingested into Microsoft Sentinel.
+ You might determine this information during your business use case review, or by evaluating a current SIEM that you already have in place. If you already have a SIEM in place, analyze your data to understand which data sources provide the most value and should be ingested into Microsoft Sentinel.
1. Design your Microsoft Sentinel workspace. Consider parameters such as:
- - Whether you'll use a single tenant or multiple tenants
- - Any compliance requirements you have for data collection and storage
- - How to control access to Microsoft Sentinel data
+ - Whether you'll use a single tenant or multiple tenants
+ - Any compliance requirements you have for data collection and storage
+ - How to control access to Microsoft Sentinel data
- For more information, see [Workspace architecture best practices](best-practices-workspace-architecture.md) and [Sample workspace designs](sample-workspace-designs.md).
+ For more information, see [Workspace architecture best practices](best-practices-workspace-architecture.md) and [Sample workspace designs](sample-workspace-designs.md).
1. After the business use cases, data sources, and data size requirements have been identified, [start planning your budget](billing.md), considering cost implications for each planned scenario.
- Make sure that your budget covers the cost of data ingestion for both Microsoft Sentinel and Azure Log Analytics, any playbooks that will be deployed, and so on.
+ Make sure that your budget covers the cost of data ingestion for both Microsoft Sentinel and Azure Log Analytics, any playbooks that will be deployed, and so on.
- For more information, see:
+ For more information, see:
- - [Microsoft Sentinel costs and billing](billing.md)
- - [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel/)
- - [Log Analytics pricing](https://azure.microsoft.com/pricing/details/monitor/)
- - [Logic apps (playbooks) pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
- - [Integrating Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md)
+ - [Microsoft Sentinel costs and billing](billing.md)
+ - [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel/)
+ - [Log Analytics pricing](https://azure.microsoft.com/pricing/details/monitor/)
+ - [Logic apps (playbooks) pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
+ - [Integrating Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md)
1. Nominate an engineer or architect to lead the deployment, based on requirements and timelines. This individual should lead the deployment and be the main point of contact on your team.
Before deploying Microsoft Sentinel, make sure that your Azure tenant has the fo
- After you have a tenant, you must have an [Azure subscription](../cost-management-billing/manage/create-subscription.md) to track resource creation and billing. -- After you have a subscription, you'll need the [relevant permissions](../role-based-access-control/index.yml) to begin using your subscription. If you are using a new subscription, an admin or higher from the AAD tenant should be designated as the [owner/contributor](../role-based-access-control/rbac-and-directory-admin-roles.md) for the subscription.
+- After you have a subscription, you'll need the [relevant permissions](../role-based-access-control/index.yml) to begin using your subscription. If you are using a new subscription, an admin or higher from the Azure AD tenant should be designated as the [owner/contributor](../role-based-access-control/rbac-and-directory-admin-roles.md) for the subscription.
- - To maintain the least privileged access available, assign roles at the level of the resource group.
- - For more control over permissions and access, set up custom roles. For more information, see [Role-based access control](../role-based-access-control/custom-roles.md).
- - For extra separation between users and security users, you might want to use [resource-context](resource-context-rbac.md) or [table-level RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/table-level-rbac-in-azure-sentinel/ba-p/965043).
+ - To maintain the least privileged access available, assign roles at the level of the resource group.
+ - For more control over permissions and access, set up custom roles. For more information, see [Role-based access control](../role-based-access-control/custom-roles.md).
+ - For extra separation between users and security users, you might want to use [resource-context](resource-context-rbac.md) or [table-level RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/table-level-rbac-in-azure-sentinel/ba-p/965043).
- For more information about other roles and permissions supported for Microsoft Sentinel, see [Permissions in Microsoft Sentinel](roles.md).
+ For more information about other roles and permissions supported for Microsoft Sentinel, see [Permissions in Microsoft Sentinel](roles.md).
-- A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) is required to house all of the data that Microsoft Sentinel will be ingesting and using for its detections, analytics, and other features. For more information, see [Microsoft Sentinel workspace architecture best practices](best-practices-workspace-architecture.md).
+- A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) is required to house all of the data that Microsoft Sentinel will be ingesting and using for its detections, analytics, and other features. For more information, see [Microsoft Sentinel workspace architecture best practices](best-practices-workspace-architecture.md). Microsoft Sentinel doesn't support Log Analytics workspaces with a resource lock applied.
-> [!TIP]
-> When setting up your Microsoft Sentinel workspace, [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md) that's dedicated to Microsoft Sentinel and the resources that Microsoft Sentinel users including the Log Analytics workspace, any playbooks, workbooks, and so on.
->
-> A dedicated resource group allows for permissions to be assigned once, at the resource group level, with permissions automatically applied to any relevant resources. Managing access via a resource group helps to ensure that you're using Microsoft Sentinel efficiently without potentially issuing improper permissions. Without a resource group for Microsoft Sentinel, where resources are scattered among multiple resource groups, a user or service principal may find themselves unable to perform a required action or view data due to insufficient permissions.
->
-> To implement more access control to resources by tiers, use extra resource groups to house the resources that should be accessed only by those groups. Using multiple tiers of resource groups enables you to separate access between those tiers.
->
+When you set up your Microsoft Sentinel workspace, we recommend that you [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md) that's dedicated to Microsoft Sentinel and the resources that Microsoft Sentinel uses, including the Log Analytics workspace, any playbooks, workbooks, and so on.
-## Next steps
+A dedicated resource group allows for permissions to be assigned once, at the resource group level, with permissions automatically applied to any relevant resources. Managing access via a resource group helps to ensure that you're using Microsoft Sentinel efficiently without potentially issuing improper permissions. Without a resource group for Microsoft Sentinel, where resources are scattered among multiple resource groups, a user or service principal may find themselves unable to perform a required action or view data due to insufficient permissions.
+To implement more access control to resources by tiers, use extra resource groups to house the resources that should be accessed only by those groups. Using multiple tiers of resource groups enables you to separate access between those tiers.
+## Next steps
> [!div class="nextstepaction"]
->[On-board Microsoft Sentinel](quickstart-onboard.md)
-
>[On-board Microsoft Sentinel](quickstart-onboard.md)
> [!div class="nextstepaction"]
->[Get visibility into alerts](get-visibility.md)
+>[Get visibility into alerts](get-visibility.md)
+
service-fabric Service Fabric Cluster Creation Setup Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-setup-aad.md
# Set up Azure Active Directory for client authentication
+> [!WARNING]
+> At this time, Azure AD client authentication and the Managed Identity Token Service are mutually incompatible on Linux.
+ For clusters running on Azure, Azure Active Directory (Azure AD) is recommended to secure access to management endpoints. This article describes how to setup Azure AD to authenticate clients for a Service Fabric cluster.
+On Linux, you must complete the following steps before you create the cluster. On Windows, you also have the option to [configure Azure AD authentication for an existing cluster](https://github.com/Azure/Service-Fabric-Troubleshooting-Guides/blob/master/Security/Configure%20Azure%20Active%20Directory%20Authentication%20for%20Existing%20Cluster.md).
+ In this article, the term "application" will be used to refer to [Azure Active Directory applications](../active-directory/develop/developer-glossary.md#client-application), not Service Fabric applications; the distinction will be made where necessary. Azure AD enables organizations (known as tenants) to manage user access to applications. A Service Fabric cluster offers several entry points to its management functionality, including the web-based [Service Fabric Explorer][service-fabric-visualizing-your-cluster] and [Visual Studio][service-fabric-manage-application-in-visual-studio]. As a result, you will create two Azure AD applications to control access to the cluster: one web application and one native application. After the applications are created, you will assign users to read-only and admin roles.
-> [!NOTE]
-> On Linux, you must complete the following steps before you create the cluster. On Windows, you also have the option to [configure Azure AD authentication for an existing cluster](https://github.com/Azure/Service-Fabric-Troubleshooting-Guides/blob/master/Security/Configure%20Azure%20Active%20Directory%20Authentication%20for%20Existing%20Cluster.md).
-
> [!NOTE]
> It is a [known issue](https://github.com/microsoft/service-fabric/issues/399) that applications and nodes on Linux AAD-enabled clusters cannot be viewed in Azure Portal.
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
Uninstall from the UI or from a command prompt.
2. In a terminal, go to /usr/local/ASR.
3. Run the following command:
   ```
- uninstall.sh -Y
+ ./uninstall.sh -Y
```
## Install Site Recovery VSS provider on source machine
spring-cloud How To Enable End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-end-to-end-tls.md
- Title: Enable end-to-end Transport Layer Security-
-description: How to enable end-to-end Transport Layer Security for an application.
---- Previously updated : 03/24/2021---
-# Enable end-to-end TLS for an application
-
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-
-This article shows you how to enable end-to-end SSL/TLS to secure traffic from an ingress controller to applications that support HTTPS. After you enable end-to-end TLS and load a cert from keyvault, all communications within Azure Spring Cloud are secured with TLS.
-
-![Graph of communications secured by TLS.](media/enable-end-to-end-tls/secured-tls.png)
-
-## Prerequisites
--- A deployed Azure Spring Cloud instance. Follow our [quickstart on deploying via the Azure CLI](./quickstart.md) to get started.-- If you're unfamiliar with end-to-end TLS, see the [end-to-end TLS sample](https://github.com/Azure-Samples/spring-boot-secure-communications-using-end-to-end-tls-ssl).-- To securely load the required certificates into Spring Boot apps, you can use [keyvault spring boot starter](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/spring/azure-spring-boot-starter-keyvault-certificates).-
-## Enable end-to-end TLS on an existing app
-
-Use the command `az spring-cloud app update --enable-end-to-end-tls` to enable or disable end-to-end TLS for an app.
-
-```azurecli
-az spring-cloud app update --enable-end-to-end-tls -n app_name -s service_name -g resource_group_name
-az spring-cloud app update --enable-end-to-end-tls false -n app_name -s service_name -g resource_group_name
-```
-
-## Enable end-to-end TLS when you bind custom domain
-
-Use the command `az spring-cloud app custom-domain update --enable-end-to-end-tls` or `az spring-cloud app custom-domain bind --enable-end-to-end-tls` to enable or disable end-to-end TLS for an app.
-
-```azurecli
-az spring-cloud app custom-domain update --enable-end-to-end-tls -n app_name -s service_name -g resource_group_name
-az spring-cloud app custom-domain bind --enable-end-to-end-tls -n app_name -s service_name -g resource_group_name
-```
-
-## Enable end-to-end TLS using Azure portal
-To enable end-to-end TLS in the [Azure portal](https://portal.azure.com/), first create an app, and then enable the feature.
-
-1. Create an app in the portal as you normally would. Navigate to it in the portal.
-2. Scroll down to the **Settings** group in the left navigation pane.
-3. Select **End-to-end TLS**.
-4. Switch **End-to-end TLS** to *Yes*.
-
-![Enable End-to-end TLS in portal](./media/enable-end-to-end-tls/enable-tls.png)
-
-## Verify end-to-end TLS status
-
-Use the command `az spring-cloud app show` to check the value of `enableEndToEndTls`.
-
-```azurecli
-az spring-cloud app show -n app_name -s service_name -g resource_group_name
-```
-
-## Next steps
--- [Access Config Server and Service Registry](how-to-access-data-plane-azure-ad-rbac.md)
spring-cloud How To Enable Ingress To App Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-ingress-to-app-tls.md
+
+ Title: Enable ingress-to-app Transport Layer Security in Azure Spring Cloud
+
+description: How to enable ingress-to-app Transport Layer Security for an application.
++++ Last updated : 04/12/2022++
+# Enable ingress-to-app TLS for an application
+
+**This article applies to:** ✔️ Standard tier ✔️ Enterprise tier
+
+> [!NOTE]
+> This feature is not available in Basic tier.
+
+This article describes secure communications in Azure Spring Cloud. The article also explains how to enable ingress-to-app SSL/TLS to secure traffic from an ingress controller to applications that support HTTPS.
+
+The following picture shows the overall secure communication support in Azure Spring Cloud.
++
+## Secure communication model within Azure Spring Cloud
+
+This section explains the secure communication model shown in the overview diagram above.
+
+1. The request from the client to the application in Azure Spring Cloud comes into the ingress controller. The request can be either HTTP or HTTPS. The TLS certificate returned by the ingress controller is issued by the Microsoft Azure TLS issuing CA.
+
+ If the app has been mapped to an existing custom domain and is configured as HTTPS only, the request to the ingress controller can only be HTTPS. The TLS certificate returned by the ingress controller is the SSL binding certificate for that custom domain. The server side SSL/TLS verification for the custom domain is done in the ingress controller.
+
+2. The secure communication between the ingress controller and the applications in Azure Spring Cloud is controlled by the ingress-to-app TLS setting. You can control this setting through the portal or CLI, as explained later in this article. If ingress-to-app TLS is disabled, the communication between the ingress controller and the apps in Azure Spring Cloud uses HTTP. If ingress-to-app TLS is enabled, the communication uses HTTPS, independent of the communication between the clients and the ingress controller. The ingress controller won't verify the certificate returned from the apps because the ingress-to-app TLS encrypts the communication.
+
+3. Communication between the apps and the Azure Spring Cloud services is always HTTPS and handled by Azure Spring Cloud. Such services include config server, service registry, and Eureka server.
+
+4. You manage the communication between the applications. You can also take advantage of Azure Spring Cloud features to load certificates into the application's trust store. For more information, see [Use TLS/SSL certificates in an application](./how-to-use-tls-certificate.md).
+
+5. You manage the communication between applications and external services. To reduce your development effort, Azure Spring Cloud helps you manage your public certificates and loads them into your application's trust store. For more information, see [Use TLS/SSL certificates in an application](./how-to-use-tls-certificate.md).
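+
+As a quick check of the first point in this model, you can inspect the TLS certificate the ingress controller returns for your app. The following is a minimal sketch assuming PowerShell 7+; the host name is a hypothetical placeholder, so replace it with your app's URL.
+
+```powershell
+# Minimal sketch: show which certificate the ingress controller presents.
+# The host name below is a hypothetical placeholder.
+$hostName = "myapp.azuremicroservices.io"
+$tcp = [System.Net.Sockets.TcpClient]::new($hostName, 443)
+$ssl = [System.Net.Security.SslStream]::new($tcp.GetStream())
+$ssl.AuthenticateAsClient($hostName)
+# Cast to X509Certificate2 to read the subject, issuer, and expiry.
+[System.Security.Cryptography.X509Certificates.X509Certificate2]::new($ssl.RemoteCertificate) |
+    Format-List Subject, Issuer, NotAfter
+$ssl.Dispose()
+$tcp.Dispose()
+```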
+
+## Enable ingress-to-app TLS for an application
+
+The following section shows you how to enable ingress-to-app SSL/TLS to secure traffic from an ingress controller to applications that support HTTPS.
+
+### Prerequisites
+
+- A deployed Azure Spring Cloud instance. Follow our [quickstart on deploying via the Azure CLI](./quickstart.md) to get started.
+- If you're unfamiliar with ingress-to-app TLS, see the [end-to-end TLS sample](https://github.com/Azure-Samples/spring-boot-secure-communications-using-end-to-end-tls-ssl).
+- To securely load the required certificates into Spring Boot apps, you can use [keyvault spring boot starter](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/spring/azure-spring-boot-starter-keyvault-certificates).
+
+### Enable ingress-to-app TLS on an existing app
+
+Use the command `az spring-cloud app update --enable-ingress-to-app-tls` to enable or disable ingress-to-app TLS for an app.
+
+```azurecli
+az spring-cloud app update --enable-ingress-to-app-tls -n app_name -s service_name -g resource_group_name
+az spring-cloud app update --enable-ingress-to-app-tls false -n app_name -s service_name -g resource_group_name
+```
+
+### Enable ingress-to-app TLS when you bind a custom domain
+
+Use the command `az spring-cloud app custom-domain update --enable-ingress-to-app-tls` or `az spring-cloud app custom-domain bind --enable-ingress-to-app-tls` to enable or disable ingress-to-app TLS for an app.
+
+```azurecli
+az spring-cloud app custom-domain update --enable-ingress-to-app-tls -n app_name -s service_name -g resource_group_name
+az spring-cloud app custom-domain bind --enable-ingress-to-app-tls -n app_name -s service_name -g resource_group_name
+```
+
+### Enable ingress-to-app TLS using the Azure portal
+
+To enable ingress-to-app TLS in the [Azure portal](https://portal.azure.com/), first create an app, and then enable the feature.
+
+1. Create an app in the portal as you normally would. Navigate to it in the portal.
+2. Scroll down to the **Settings** group in the left navigation pane.
+3. Select **Ingress-to-app TLS**.
+4. Switch **Ingress-to-app TLS** to *Yes*.
+
+![Screenshot showing where to enable Ingress-to-app TLS in portal.](./media/enable-end-to-end-tls/enable-i2a-tls.png)
+
+### Verify ingress-to-app TLS status
+
+Use the command `az spring-cloud app show` to check the value of `enableEndToEndTls`.
+
+```azurecli
+az spring-cloud app show -n app_name -s service_name -g resource_group_name
+```
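+
+If you only want the flag itself, you can filter the response with the CLI's `--query` option. This is a small sketch run from PowerShell; the JMESPath assumes the flag is exposed at `properties.enableEndToEndTls`, so verify the path against your CLI version's output.
+
+```powershell
+# Sketch: extract just the ingress-to-app TLS flag from the app resource.
+# Assumption: the flag appears at properties.enableEndToEndTls in the response.
+az spring-cloud app show -n app_name -s service_name -g resource_group_name `
+    --query "properties.enableEndToEndTls" -o tsv
+```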
+
+## Next steps
+
+* [Access Config Server and Service Registry](how-to-access-data-plane-azure-ad-rbac.md)
spring-cloud How To Use Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-tls-certificate.md
Load certificate from specific path. alias = <certificate alias>, thumbprint = <
## Next steps
-* [Enable end-to-end Transport Layer Security](./how-to-enable-end-to-end-tls.md)
+* [Enable ingress-to-app Transport Layer Security](./how-to-enable-ingress-to-app-tls.md)
* [Access Config Server and Service Registry](./how-to-access-data-plane-azure-ad-rbac.md)
static-web-apps Branch Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/branch-environments.md
+
+ Title: Create branch preview environments in Azure Static Web Apps
+description: Expose stable URLs for specific branches to evaluate changes in Azure Static Web Apps
++++ Last updated : 03/29/2022+++
+# Create branch preview environments in Azure Static Web Apps
+
+You can configure your site to deploy every change made to branches that aren't a production branch. This preview deployment is published at a stable URL that includes the branch name. For example, if the branch is named `dev`, then the environment is available at a location like `<DEFAULT_HOST_NAME>-dev.<LOCATION>.azurestaticapps.net`.
+
+## Configuration
+
+To enable stable URL environments, make the following changes to your [configuration file](configuration.md).
+
+- Set the `production_branch` input on the `static-web-apps-deploy` GitHub action to your production branch name. This ensures changes to your production branch are deployed to the production environment, while changes to other branches are deployed to a preview environment.
+- List the branches you want to deploy to preview environments in the `on > push > branches` array in your workflow configuration so that changes to those branches also trigger the GitHub Actions deployment.
+ - Set this array to `**` if you want to track all branches.
+
+## Example
+
+The following example demonstrates how to enable branch preview environments.
+
+```yml
+name: Azure Static Web Apps CI/CD
+
+on:
+ push:
+ branches:
+ - main
+ - dev
+ - staging
+ pull_request:
+ types: [opened, synchronize, reopened, closed]
+ branches:
+ - main
+
+jobs:
+ build_and_deploy_job:
+ ...
+ name: Build and Deploy Job
+ steps:
+ - uses: actions/checkout@v2
+ with:
+ submodules: true
+ - name: Build And Deploy
+ id: builddeploy
+ uses: Azure/static-web-apps-deploy@v1
+ with:
+ ...
+ production_branch: "main"
+```
+
+> [!NOTE]
+> The `...` denotes code skipped for clarity.
+
+In this example, the preview environments are defined for the `dev` and `staging` branches. Each branch is deployed to a separate preview environment.
+
+## Next Steps
+
+> [!div class="nextstepaction"]
+> [Review pull requests in pre-production environments](./review-publish-pull-requests.md)
static-web-apps Preview Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/preview-environments.md
+
+ Title: Preview environments in Azure Static Web Apps
+description: Expose preview environments to evaluate changes in Azure Static Web Apps
++++ Last updated : 03/29/2022+++
+# Preview environments in Azure Static Web Apps
+
+By default, when you deploy a site to Azure Static Web Apps, [each pull request deploys a preview version of your site available through a temporary URL](review-publish-pull-requests.md). This version of the site allows you to review changes before merging pull requests. Once the pull request (PR) is closed, the temporary environment disappears.
+
+Beyond PR-driven temporary environments, you can enable preview environments that feature stable locations. The URLs for preview environments take on the following form:
+
+```text
+<DEFAULT_HOST_NAME>-<BRANCH_OR_ENVIRONMENT_NAME>.<LOCATION>.azurestaticapps.net
+```
+
+## Deployment types
+
+The following deployment types are available in Azure Static Web Apps.
+
+- **Production**: Changes to production branches are deployed into the production environment. Your custom domain points to this environment, and content served from this location is indexed by search engines.
+
+- [**Pull requests**](review-publish-pull-requests.md): Pull requests against your production branch deploy to a temporary environment that disappears after the pull request is closed. The URL for this environment includes the PR number as a suffix. For example, if you make your first PR, the preview location looks something like `<DEFAULT_HOST_NAME>-1.<LOCATION>.azurestaticapps.net`.
+
+- [**Branch**](branch-environments.md): You can optionally configure your site to deploy every change made to branches that aren't a production branch. This preview deployment lives for the entire lifetime of the branch and is published at a stable URL that includes the branch name. For example, if the branch is named `dev`, then the environment is available at a location like `<DEFAULT_HOST_NAME>-dev.<LOCATION>.azurestaticapps.net`.
+
+## Next Steps
+
+> [!div class="nextstepaction"]
+> [Review pull requests in pre-production environments](./review-publish-pull-requests.md)
static-web-apps Publish Gatsby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-gatsby.md
The following steps show you how to create a new static site app and deploy it t
1. Select the **Review + Create** button to verify the details are all correct.
-1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Action for deployment.
+1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions workflow for deployment.
1. Once the deployment completes, select **Go to resource**.
-1. On the resource screen, click the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Action to complete.
+1. On the resource screen, select the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Actions workflow to complete.
:::image type="content" source="./media/publish-gatsby/deployed-app.png" alt-text="Deployed application":::
static-web-apps Review Publish Pull Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/review-publish-pull-requests.md
To verify the changes in production, open your production URL to load the live
## Next steps

> [!div class="nextstepaction"]
-> [Setup a custom domain](custom-domain.md)
+> [Branch preview environments](branch-environments.md)
storage Storage Explorer Support Policy Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-support-policy-lifecycle.md
This table describes the release date and the end of support date for each relea
| Storage Explorer version | Release date | End of support date |
|:-:|:-:|:-:|
+| v1.23.1 | April 12, 2022 | April 12, 2023 |
+| v1.23.0 | March 2, 2022 | March 2, 2023 |
| v1.22.1 | January 25, 2022 | January 25, 2023 |
| v1.22.0 | December 14, 2021 | December 14, 2022 |
| v1.21.3 | October 25, 2021 | October 25, 2022 |
synapse-analytics Develop Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-statistics.md
Title: Create and update statistics using Azure Synapse SQL resources
-description: Recommendations and examples for creating and updating query-optimization statistics in Synapse SQL.
+description: Recommendations and examples for creating and updating query-optimization statistics in Azure Synapse SQL.
Previously updated : 04/19/2020 Last updated : 04/13/2022 -+
# Statistics in Synapse SQL
CREATE STATISTICS stats_col3 on dbo.table3 (col3);
#### Use a stored procedure to create statistics on all columns in a database
-SQL pool doesn't have a system stored procedure equivalent to sp_create_stats in SQL Server. This stored procedure creates a single column statistics object on every column of the database that doesn't already have statistics.
+SQL pool doesn't have a system stored procedure equivalent to `sp_create_stats` in SQL Server. This stored procedure creates a single column statistics object on every column of the database that doesn't already have statistics.
The following example will help you get started with your database design. Feel free to adapt it to your needs:
Automatic creation of statistics is done synchronously so you may incur slightly
### Manual creation of statistics
-Serverless SQL pool lets you create statistics manually. For CSV files, you have to create statistics manually because automatic creation of statistics isn't turned on for CSV files.
+Serverless SQL pool lets you create statistics manually. For CSV files, you have to create statistics manually because automatic creation of statistics isn't turned on for CSV files.
See the following examples for instructions on how to manually create statistics.
The following guiding principles are provided for updating your statistics:
For more information, see [Cardinality Estimation](/sql/relational-databases/performance/cardinality-estimation-sql-server).
-### Examples: Create statistics for column in OPENROWSET path
+### Examples: Create statistics for column in OPENROWSET path
-The following examples show you how to use various options for creating statistics. The options that you use for each column depend on the characteristics of your data and how the column will be used in queries.
+The following examples show you how to use various options for creating statistics in Azure Synapse serverless SQL pools. The options that you use for each column depend on the characteristics of your data and how the column will be used in queries. For more information on the stored procedures used in these examples, review [sys.sp_create_openrowset_statistics](/sql/relational-databases/system-stored-procedures/sp-create-openrowset-statistics) and [sys.sp_drop_openrowset_statistics](/sql/relational-databases/system-stored-procedures/sp-drop-openrowset-statistics), which apply to serverless SQL pools only.
> [!NOTE]
> You can create single-column statistics only at this moment.
>
-> Following permissions are required to execute sp_create_openrowset_statistics and sp_drop_openrowset_statistics: ADMINISTER BULK OPERATIONS or ADMINISTER DATABASE BULK OPERATIONS.
+> Following permissions are required to execute `sp_create_openrowset_statistics` and `sp_drop_openrowset_statistics`: ADMINISTER BULK OPERATIONS or ADMINISTER DATABASE BULK OPERATIONS.
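+
+As an illustration, the following sketch grants that permission from PowerShell. It assumes the SqlServer and Az.Accounts modules are installed; the endpoint, database, and principal names are placeholders.
+
+```powershell
+# Hedged sketch: grant the permission needed to run sp_create_openrowset_statistics
+# and sp_drop_openrowset_statistics. All names below are placeholders.
+$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net").Token
+Invoke-Sqlcmd -ServerInstance "myworkspace-ondemand.sql.azuresynapse.net" `
+    -Database "mydatabase" -AccessToken $token `
+    -Query "GRANT ADMINISTER DATABASE BULK OPERATIONS TO [user@contoso.com];"
+```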
The following stored procedure is used to create statistics:
FROM OPENROWSET(
### Examples: Update statistics
-To update statistics, you need to drop and create statistics. The following stored procedure is used to drop statistics:
+To update statistics, you need to drop and create statistics. For more information, review [sys.sp_create_openrowset_statistics](/sql/relational-databases/system-stored-procedures/sp-create-openrowset-statistics) and [sys.sp_drop_openrowset_statistics](/sql/relational-databases/system-stored-procedures/sp-drop-openrowset-statistics).
+
+The `sys.sp_drop_openrowset_statistics` stored procedure is used to drop statistics:
```sql
sys.sp_drop_openrowset_statistics [ @stmt = ] N'statement_text'
```

> [!NOTE]
-> Following permissions are required to execute sp_create_openrowset_statistics and sp_drop_openrowset_statistics: ADMINISTER BULK OPERATIONS or ADMINISTER DATABASE BULK OPERATIONS.
+> Following permissions are required to execute `sp_create_openrowset_statistics` and `sp_drop_openrowset_statistics`: ADMINISTER BULK OPERATIONS or ADMINISTER DATABASE BULK OPERATIONS.
Arguments:

[ @stmt = ] N'statement_text' - Specifies the same Transact-SQL statement used when the statistics were created.
-To update the statistics for the year column in the dataset, which is based on the population.csv file, you need to drop and create statistics:
+To update the statistics for the year column in the dataset, which is based on the `population.csv` file, you need to drop and create statistics:
```sql
EXEC sys.sp_drop_openrowset_statistics N'SELECT payment_type
WHERE st.[user_created] = 1
To further improve query performance for dedicated SQL pool, see [Monitor your workload](../sql-data-warehouse/sql-data-warehouse-manage-monitor.md?context=/azure/synapse-analytics/context/context) and [Best practices for dedicated SQL pool](best-practices-dedicated-sql-pool.md#maintain-statistics).
-To further improve query performance for serverless SQL pool see [Best practices for serverless SQL pool](best-practices-serverless-sql-pool.md)
+To further improve query performance for serverless SQL pool, see [Best practices for serverless SQL pool](best-practices-serverless-sql-pool.md).
virtual-desktop Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/agent-overview.md
The Azure Virtual Desktop agent is initially installed in one of two ways. If yo
## Agent update process
-The Azure Virtual Desktop service updates the agent whenever an update becomes available. Agent updates can include new functionality or fixes for previous issues. You must always have the latest stable version of the agent installed so your VMs don't lose connectivity or security. Once the initial version of the Azure Virtual Desktop agent is installed, the agent regularly queries the Azure Virtual Desktop service to determine if thereΓÇÖs a newer version of the agent, stack, or monitoring component available. If a newer version of any of the components has already been deployed, the updated component is automatically installed by the flighting system.
+The Azure Virtual Desktop service updates the agent whenever an update becomes available. Agent updates can include new functionality or fixes for previous issues. You must always have the latest stable version of the agent installed so your VMs don't lose connectivity or security. After you've installed the initial version of the Azure Virtual Desktop agent, the agent will regularly query the Azure Virtual Desktop service to determine if there's a newer version of the agent, stack, or monitoring agent available. If a newer version exists, the updated component is automatically installed by the flighting system, unless you've configured the Scheduled Agent Updates feature. If you've already configured the Scheduled Agent Updates feature, the agent will only install the updated components during the maintenance window that you specify. For more information, see [Scheduled Agent Updates (preview)](scheduled-agent-updates.md).
New versions of the agent are deployed at regular intervals in five-day periods to all Azure subscriptions. These update periods are called "flights". It takes 24 hours for all VMs in a single broker region to receive the agent update in a flight. Because of this, when a flight happens, you may see VMs in your host pool receive the agent update at different times. Also, if the VMs are in different regions, they might update on different days in the five-day period. The flight will update all VM agents in all subscriptions by the end of the deployment period. The Azure Virtual Desktop flighting system enhances service reliability by ensuring the stability and quality of the agent update.

Other important things you should keep in mind:

- The agent update isn't connected to Azure Virtual Desktop infrastructure build updates. When the Azure Virtual Desktop infrastructure updates, that doesn't mean that the agent has updated along with it.
+- Because VMs in your host pool may receive agent updates at different times, you'll need to be able to tell the difference between flighting issues and failed agent updates. If you go to the event logs for your VM at **Event Viewer** > **Windows Logs** > **Application** and see an event labeled "ID 3277," that means the Agent update didn't work. If you don't see that event, then the VM is in a different flight and will be updated later. See [Set up diagnostics to monitor agent updates](agent-updates-diagnostics.md) for more information about how to set up diagnostic logs to track updates and make sure they've been installed correctly.
- When the Geneva Monitoring agent updates to the latest version, the old GenevaTask task is located and disabled before creating a new task for the new monitoring agent. The earlier version of the monitoring agent isn't deleted in case the most recent version of the monitoring agent has a problem that requires reverting to the earlier version to fix. If the latest version has a problem, the old monitoring agent will be re-enabled to continue delivering monitoring data. All versions of the monitoring agent that are earlier than the last one you installed before the update will be deleted from your VM.
- Your VM keeps three versions of the agent and of the side-by-side stack at a time. This allows for quick recovery if something goes wrong with the update. The earliest version of the agent or stack is removed from the VM whenever the agent or stack updates. If you delete these components prematurely and the agent or stack has a failure, the agent or stack won't be able to roll back to an earlier version, which will put your VM in an unavailable state.
The agent update normally lasts 2-3 minutes on a new VM and shouldn't cause your
Now that you have a better understanding of the Azure Virtual Desktop agent, here are some resources that might help you:

- If you're experiencing agent or connectivity-related issues, check out the [Azure Virtual Desktop Agent issues troubleshooting guide](troubleshoot-agent.md).
+- To schedule agent updates, see the [Scheduled Agent Updates (preview) document](scheduled-agent-updates.md).
+- To set up diagnostics for this feature, see the [Scheduled Agent Updates Diagnostics guide](agent-updates-diagnostics.md).
+- To find information about the latest and previous agent versions, see the [Agent Updates version notes](whats-new-agent.md).
virtual-desktop Agent Updates Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/agent-updates-diagnostics.md
+
+ Title: Set up diagnostics for monitoring agent updates
+description: How to set up diagnostic reports to monitor agent updates.
++ Last updated : 03/28/2022+++
+# Set up diagnostics to monitor agent updates
+
+Diagnostic logs can tell you which agent version is installed for an update, when it was installed, and if the update was successful. If an update is unsuccessful, it might be because the session host was turned off during the update. If that happened, you should turn the session host back on.
+
+This article describes how to use diagnostic logs in a Log Analytics workspace to monitor agent updates.
+
+## Enable sending diagnostic logs to your Log Analytics workspace
+
+To enable sending diagnostic logs to your Log Analytics workspace:
+
+1. Create a Log Analytics workspace, if you haven't already. Next, get the workspace ID and primary key by following the instructions in [Use Log Analytics for the diagnostics feature](diagnostics-log-analytics.md#before-you-get-started).
+
+2. Send diagnostics to the Log Analytics workspace you created by following the instructions in [Push diagnostics data to your workspace](diagnostics-log-analytics.md#push-diagnostics-data-to-your-workspace).
+
+3. Follow the directions in [How to access Log Analytics](diagnostics-log-analytics.md#how-to-access-log-analytics) to access the logs in your workspace.
+
+> [!NOTE]
+> The log query results only cover the last 30 days of data in your deployment.
+
+## Use diagnostics to see when an update becomes available
+
+To see when agent component updates are available:
+
+1. Access the logs in your Log Analytics workspace.
+
+2. Select the **+** button to create a new query.
+
+3. Copy and paste the following Kusto query to see if agent component updates are available for the specified session host. Make sure to change the **sessionHostName** parameter to the name of your session host.
+
+> [!NOTE]
+> If you haven't enabled the Scheduled Agent Updates feature, you won't see anything in the NewPackagesAvailable field.
+
+```kusto
+WVDAgentHealthStatus
+| where TimeGenerated >= ago(30d)
+| where SessionHostName == "sessionHostName"
+| project TimeGenerated, AgentVersion, SessionHostName, LastUpgradeTimeStamp, UpgradeState, UpgradeErrorMsg, NewPackagesAvailable
+| sort by TimeGenerated desc
+| take 1
+```
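+
+If you'd rather run this check from a script than from the portal, the following sketch submits the same query with the Az.OperationalInsights PowerShell module. `$workspaceId` is a placeholder for your Log Analytics workspace ID.
+
+```powershell
+# Sketch: run the update-availability query from PowerShell.
+# Assumes the Az.OperationalInsights module and an authenticated session (Connect-AzAccount).
+$workspaceId = "<your-workspace-id>"   # placeholder
+$query = @'
+WVDAgentHealthStatus
+| where TimeGenerated >= ago(30d)
+| where SessionHostName == "sessionHostName"
+| project TimeGenerated, AgentVersion, SessionHostName, LastUpgradeTimeStamp, UpgradeState, UpgradeErrorMsg, NewPackagesAvailable
+| sort by TimeGenerated desc
+| take 1
+'@
+(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
+```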
+
+## Use diagnostics to see when agent updates are happening
+
+To see when agent updates are happening or to make sure that the Scheduled Agent Updates feature is working:
+
+1. Access the logs in your Log Analytics workspace.
+
+2. Select the **+** button to create a new query.
+
+3. Copy and paste the following Kusto query to see when the agent has updated for the specified session host. Make sure to change the **sessionHostName** parameter to the name of your session host.
+
+```kusto
+WVDAgentHealthStatus
+| where TimeGenerated >= ago(30d)
+| where SessionHostName == "sessionHostName"
+| project TimeGenerated, AgentVersion, SessionHostName, LastUpgradeTimeStamp, UpgradeState, UpgradeErrorMsg
+| summarize arg_min(TimeGenerated, *) by AgentVersion
+| sort by TimeGenerated asc
+```
+
+## Use diagnostics to check for unsuccessful agent updates
+
+To check if an agent component update was unsuccessful:
+
+1. Access the logs in your Log Analytics workspace.
+
+2. Select the **+** button to create a new query.
+
+3. Copy and paste the following Kusto query to see whether the specified session host missed a scheduled update. Make sure to change the **sessionHostName** parameter to the name of your session host.
+
+```kusto
+WVDAgentHealthStatus
+| where TimeGenerated >= ago(30d)
+| where SessionHostName == "sessionHostName"
+| where MaintenanceWindowMissed == true
+| project TimeGenerated, AgentVersion, SessionHostName, LastUpgradeTimeStamp, UpgradeState, UpgradeErrorMsg, MaintenanceWindowMissed
+| sort by TimeGenerated asc
+```
+
+## Next steps
+
+For more information about Scheduled Agent Updates and the agent components, check out the following articles:
+
+- To learn how to schedule agent updates, see [Scheduled Agent Updates (preview)](scheduled-agent-updates.md).
+- For more information about the Azure Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent, see [Getting Started with the Azure Virtual Desktop Agent](agent-overview.md).
+- Learn more about the latest and previous agent versions at [What's new in the Azure Virtual Desktop agent](whats-new-agent.md).
+- If you're experiencing agent or connectivity-related issues, see the [Azure Virtual Desktop Agent issues troubleshooting guide](troubleshoot-agent.md).
virtual-desktop Data Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/data-locations.md
Azure Virtual Desktop is currently available for all geographical locations. Adm
>Microsoft doesn't control or limit the regions where you or your users can access your user and app-specific data.

>[!IMPORTANT]
->Azure Virtual Desktop stores various types of information like host pool names, app group names, workspace names, and user principal names in a datacenter. While creating any of the service objects, the customer has to enter the location where the object needs to be created. The location of this object determines where the information for the object will be stored. The customer will choose an Azure region and the related information will be stored in the associated geography. For a list of all Azure regions and related geographies, visit [https://azure.microsoft.com/global-infrastructure/geographies/](https://azure.microsoft.com/global-infrastructure/geographies/).
+>Azure Virtual Desktop stores various types of information like host pool names, app group names, workspace names, and user principal names in a datacenter. While creating any of the service objects, the customer has to enter the location where the object needs to be created. The location of this object determines where the information for the object will be stored. The customer will choose an Azure region, and the related information will be stored in the associated geography. Customers also choose a region for the session host virtual machines in a separate step in the deployment process. This region can be any Azure region, so it can be the same region as the service objects or a different one. For a list of all Azure regions and related geographies, visit [https://azure.microsoft.com/global-infrastructure/geographies/](https://azure.microsoft.com/global-infrastructure/geographies/).
This article describes which information the Azure Virtual Desktop service stores. To learn more about the customer data definitions, see [How Microsoft categorizes data for online services](https://www.microsoft.com/trust-center/privacy/customer-data-definitions).
To set up the Azure Virtual Desktop service, the customer must create host pools
## Customer data
-The service doesn't directly store any user or app-related information, but it does store customer data like application names and user principal names because they're part of the object setup process. This information is stored in the geography associated with the region the customer created the object in.
+The service doesn't directly store any user-created or app-related information, but it does store customer data like application names and user principal names because they're part of the object setup process. This information is stored in the geography associated with the region the customer created the object in.
## Diagnostic data
Azure Virtual Desktop gathers service-generated diagnostic data whenever the cus
## Service-generated data
-To keep Azure Virtual Desktop reliable and scalable, we aggregate traffic patterns and usage to check the health and performance of the infrastructure control plane. For example, to understand how to ramp up regional infrastructure capacity as service usage increases, we process service usage log data. We then review the logs for peak times and decide which data centers to add to meet this capacity. We aggregate this information from all locations where the service infrastructure is, then send it to the US region. The data sent to the US region includes scrubbed data, but not customer data.
+To keep Azure Virtual Desktop reliable and scalable, we aggregate traffic patterns and usage to check the health and performance of the infrastructure control plane. For example, to understand how to ramp up regional infrastructure capacity as service usage increases, we process service usage log data. We then review the logs for peak times and decide which data centers to add to meet this capacity.
We currently support storing the aforementioned data in the following locations:
-- United States (US) (generally available)
-- Europe (EU) (generally available)
-- United Kingdom (UK) (generally available)
-- Canada (CA) (generally available)
+- United States (US)
+- Europe (EU)
+- United Kingdom (UK)
+- Canada (CA)
+
+In addition, we aggregate service-generated data from all locations where the service infrastructure is deployed, then send it to the US geography. The data sent to the US region includes scrubbed data, but not customer data.
More geographies will be added as the service grows. The stored information is encrypted at rest, and geo-redundant mirrors are maintained within the geography. Customer data, such as app settings and user data, resides in the location the customer chooses and isn't managed by the service.
virtual-desktop Scheduled Agent Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/scheduled-agent-updates.md
+
+ Title: Azure Virtual Desktop Scheduled Agent Updates preview
+description: How to use the Scheduled Agent Updates feature to choose a date and time to update your Azure Virtual Desktop agent components.
++ Last updated : 03/28/2022+++
+# Scheduled Agent Updates (preview) for Azure Virtual Desktop host pools
+
+> [!IMPORTANT]
+> The Scheduled Agent Updates feature is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The Scheduled Agent Updates feature (preview) lets you create up to two maintenance windows in which the Azure Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent are updated, so that updates don't happen during peak business hours. To monitor agent updates, you can use Log Analytics to see when agent component updates are available and when updates are unsuccessful.
+
+This article describes how the Scheduled Agent Updates feature works and how to set it up.
+
+>[!NOTE]
+> Azure Virtual Desktop (classic) doesn't support the Scheduled Agent Updates feature.
+
+>[!IMPORTANT]
+>The preview version of this feature currently has the following limitations:
+>
+> - You can only use the Scheduled Agent Updates feature in the Azure public cloud.
+> - You can only configure the Scheduled Agent Updates feature with the Azure portal or REST API.
+
+## Configure the Scheduled Agent Updates feature using the Azure portal
+
+To use the Azure portal to configure Scheduled Agent Updates:
+
+1. Open your browser and go to [the Azure portal](https://portal.azure.com).
+
+2. In the Azure portal, go to **Azure Virtual Desktop**.
+
+3. Select **Host pools**, then go to the host pool where you want to enable the feature. You can only configure this feature for existing host pools. You can't enable this feature when you create a new host pool.
+
+4. In the host pool, select **Scheduled Agent Updates**. Scheduled Agent Updates is disabled by default. This means that, unless you enable this setting, the agent can get updated at any time by the agent update flighting service. Select the **Scheduled agent updates** checkbox to enable the feature.
+
+ > [!div class="mx-imgBorder"]
+ > ![A screenshot showing the Scheduled Agent Updates options in the host pool table of contents and the checkbox for enabling Scheduled Agent Updates. Both are selected and highlighted with a red border.](media/agent-update-1.png)
+
+5. Enter your preferred time zone setting. If you select **Use local session host time zone**, Scheduled Agent Updates will automatically use the VM's local time zone. If you don't select **Use local session host time zone**, you'll need to specify a time zone.
+
+6. Select a day and time for the **Maintenance window**. If you'd like to make an optional second maintenance window, you can also select a date and time for it here. Since Scheduled Agent Updates is a host pool setting, the time zone setting and maintenance windows you configure will be applied to all session hosts in the host pool.
+
+7. Select **Apply** to apply your settings.
+
+ > [!div class="mx-imgBorder"]
+ > ![A screenshot showing the Scheduled Agent Updates schedule options.](media/agent-update-2.png)
+
+## Additional information
+
+### How the feature works
+
+The Scheduled Agent Updates feature updates the Azure Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent if any one or more of these components needs to be updated. Any reference to the agent components in this article refers to these three components. Scheduled Agent Updates doesn't apply to the initial installation of the agent components. When you install the agent on a virtual machine (VM), the agent will automatically install the side-by-side stack and the Geneva Monitoring agent regardless of which maintenance windows you set. Any non-critical updates after installation will only happen within your maintenance windows. Host pools with the Scheduled Agent Updates feature enabled will receive the agent update after the agent has been fully flighted to production. For more information about how agent flighting works, see [Agent update process](agent-overview.md#agent-update-process).
+The agent component update won't succeed if the session host VM is shut down or deallocated during the scheduled update time. If you enable Scheduled Agent Updates, make sure all session hosts in your host pool are on during your configured maintenance window time. The broker will attempt to update the agent components during each specified maintenance window up to four times. After the fourth try, the broker will install the update by force. This process gives time for installation retries if an update is unsuccessful, and also prevents session hosts from having outdated versions of agent components. If a critical agent component update is available, the broker will install the agent component by force for security purposes.
+
+### Maintenance window and time zone information
+
+- You must specify at least one maintenance window. Configuring the second maintenance window is optional. Creating two maintenance windows gives the agent components additional opportunities to update if the first update during one of the windows is unsuccessful.
+
+- All maintenance windows are two hours long to account for situations where all three agent components must be updated at the same time. For example, if your maintenance window is Saturday at 9:00 AM PST, the updates will happen between 9:00 AM PST and 11:00 AM PST.
+
+- The **Use session host local time** parameter isn't selected by default. If you want the agent component update to be in the same time zone for all session hosts in your host pool, you'll need to specify a single time zone for your maintenance windows. Having a single time zone helps when all your session hosts or users are located in the same time zone.
+
+- If you select **Use session host local time**, the agent component update will be in the local time zone of each session host in the host pool. Use this setting when all session hosts in your host pool or their assigned users are in different time zones. For example, let's say you have one host pool with session hosts in West US in the Pacific Standard Time zone and session hosts in East US in the Eastern Standard Time zone, and you've set the maintenance window to be Saturday at 9:00 PM. Enabling **Use session host local time** ensures that updates to all session hosts in the host pool will happen at 9:00 PM in their respective time zones. Disabling **Use session host local time** and setting the time zone to be Central Standard Time ensures that updates to the session hosts in the host pool will happen at 9:00 PM Central Standard Time, regardless of the session hosts' local time zones.
+
+- The local time zone for VMs you create using the Azure portal is set to Coordinated Universal Time (UTC) by default. If you want to change the VM time zone, run the [Set-TimeZone PowerShell cmdlet](/powershell/module/microsoft.powershell.management/set-timezone?view=powershell-7.1&preserve-view=true) on the VM.
+
+- To get a list of available time zones for a VM, run the [Get-TimeZone PowerShell cmdlet](/powershell/module/microsoft.powershell.management/get-timezone?view=powershell-7.1&preserve-view=true) on the VM, as shown in the sketch after this list.
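+
+The following minimal sketch, run locally on the session host, shows both cmdlets together; the time zone ID is just an example value.
+
+```powershell
+# List available time zone IDs, then apply one (example value shown).
+Get-TimeZone -ListAvailable | Select-Object Id, DisplayName
+Set-TimeZone -Id "Pacific Standard Time"
+```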
+
+## Next steps
+
+For more information related to Scheduled Agent Updates and agent components, check out the following resources:
+
+- Learn how to set up diagnostics for this feature at the [Scheduled Agent Updates Diagnostics guide](agent-updates-diagnostics.md).
+- Learn more about the Azure Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent at [Getting Started with the Azure Virtual Desktop Agent](agent-overview.md).
+- For more information about the current and earlier versions of the Azure Virtual Desktop agent, see [Azure Virtual Desktop agent updates](whats-new-agent.md).
+- If you're experiencing agent or connectivity-related issues, see the [Azure Virtual Desktop Agent issues troubleshooting guide](troubleshoot-agent.md).
virtual-desktop Shortpath Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/shortpath-public.md
+
+ Title: Azure Virtual Desktop RDP Shortpath for public networks (preview) - Azure
+
+description: How to set up RDP Shortpath for public networks for Azure Virtual Desktop (preview).
++ Last updated : 04/13/2022+++
+# Azure Virtual Desktop RDP Shortpath for public networks (preview)
+
+Remote Desktop Protocol (RDP) can use multiple types of network transport to establish a connection between the Remote Desktop client and the session host.
+
+- Reverse connect - by default, RDP uses a TCP-based [reverse connect transport](./network-connectivity.md). This transport provides the best compatibility with various networking configurations and has a high success rate for establishing RDP connections. This transport is also used as a fallback if the RDP Shortpath connection is unsuccessful.
+- RDP Shortpath for managed networks - a UDP-based transport designed for direct connectivity in controlled network setups, such as connectivity over ExpressRoute or in Azure Stack HCI deployments. For more information, see the [documentation](./shortpath.md).
+- RDP Shortpath for public networks, currently in preview, is described in this document.
+
+ > [!NOTE]
+ > During the preview, RDP Shortpath for managed networks is incompatible with RDP Shortpath for public networks. If you want to participate in the preview, refer to the [documentation](./shortpath.md) for disabling RDP Shortpath for managed networks.
+
+RDP Shortpath transport is a feature of Azure Virtual Desktop that establishes a direct UDP data flow between the Remote Desktop client and the session host. RDP uses this data flow to deliver Remote Desktop and RemoteApp while offering better reliability and consistent latency.
+
+## Key benefits
+
+Both RDP Shortpath for managed and public networks provide the same set of core benefits:
+
+- RDP Shortpath transport is based on the [Universal Rate Control Protocol (URCP)](https://www.microsoft.com/research/publication/urcp-universal-rate-control-protocol-for-real-time-communication-applications/). URCP enhances UDP with active monitoring of the network conditions and provides fair and full link utilization. URCP operates at low delay and loss levels as needed by Remote Desktop. URCP achieves the best performance by dynamically learning network parameters and providing the protocol with a rate control mechanism.
+- RDP Shortpath establishes the direct connectivity between the Remote Desktop client and the session host. Direct connectivity reduces dependency on the Azure Virtual Desktop gateways, improves the connection's reliability, and increases available bandwidth for each user session.
+- The removal of extra relay reduces round-trip time, which improves user experience with latency-sensitive applications and input methods.
+
+## Connection security
+
+RDP Shortpath for public networks extends RDP multi-transport capabilities. It doesn't replace the reverse connect transport but complements it. The initial session brokering is managed through the Azure Virtual Desktop infrastructure.
+Each RDP session uses a dynamically assigned UDP socket that accepts the Shortpath traffic previously authenticated over a reverse connect transport. This socket will ignore all connection attempts unless they match the reverse connect session. Before the UDP socket is open, any new RDP session must establish the unique reverse connect transport.
+RDP Shortpath uses a TLS connection between the client and the session host using the session host's certificates. By default, the certificate used for RDP encryption is self-generated by the OS during the deployment. If desired, customers may deploy centrally managed certificates issued by the enterprise certification authority. For more information about certificate configurations, see [Windows Server documentation](/troubleshoot/windows-server/remote/remote-desktop-listener-certificate-configurations).
+
+## Network Address Translation and firewalls
+
+Most Azure Virtual Desktop clients run on computers on the private network. Internet access is provided through a Network Address Translation (NAT) gateway device. Therefore, the NAT gateway modifies all network requests from the private network that are destined for the Internet. This modification shares a single public IP address across all of the computers on the private network.
+Because of IP packet modification, the recipient of the traffic will see the public IP address of the NAT gateway instead of the actual sender. When traffic comes back, the NAT gateway forwards it to the intended recipient without the sender's knowledge. In most scenarios, the computers hidden behind such a NAT aren't aware translation is happening and don't know the network address of the NAT gateway.
+
+NAT is also applicable to the Azure Virtual Networks, where all session hosts reside. When a session host tries to reach the network address on the Internet, the NAT Gateway or Azure Load Balancer performs the address translation. For more information about various types of Source Network Address Translation, see the [documentation](/azure/load-balancer/load-balancer-outbound-connections).
+
+Most networks typically include firewalls that inspect traffic and block it based on rules. Most customers configure their firewalls to prevent incoming connections (that is, unsolicited packets from the Internet sent without a request). Firewalls employ different techniques to track data flow to distinguish between solicited and unsolicited traffic. In the context of TCP, the firewall tracks SYN and ACK packets, and the process is straightforward. UDP firewalls usually use heuristics based on packet addresses to associate traffic with UDP flows and allow or block it.
+There are many different NAT implementations available. In most cases, NAT gateway and firewall are the functions of the same physical or virtual device.
+
+## How RDP Shortpath works for public networks
+
+RDP Shortpath uses a standardized set of methods for traversal of NAT gateways. As a result, user sessions directly establish a UDP flow between the client and the session host. More specifically, RDP Shortpath uses STUN protocol to discover the external IP address of the NAT router.
+There are four primary components used to establish the RDP Shortpath data flow:
+
+- Remote Desktop Client
+- Session Host
+- Azure Virtual Desktop Gateway
+- Azure Virtual Desktop STUN Server
+
+In Azure Virtual Desktop, every RDP connection starts with establishing the [reverse connect transport](./network-connectivity.md) over the Azure Virtual Desktop Gateway.
+After the user authentication, the client and session host establish the initial RDP transport, and the client and session host start exchanging their capabilities.
+If RDP Shortpath for public networks is enabled on the session host, then the session host initiates a process called Candidate Gathering.
+
+- At this stage, the session host enumerates all network interfaces assigned to a VM, including virtual interfaces like VPN and Teredo.
+- Remote Desktop Service allocates a UDP socket on each interface and stores the IP:Port pair in the candidate table as a *local candidate*.
+- Remote Desktop Service uses each UDP socket allocated in the previous step to try reaching the Azure Virtual Desktop STUN server on the public Internet. Communication is done by sending a small UDP packet to port 3478.
+- If the packet reaches the STUN server, the STUN server responds with the session host public IP and listener port. This information is stored in the candidate table as a *reflexive candidate*.
+
+After the session host gathers all candidates, the session host uses the established reverse connect transport to pass the candidate list to the client.
+When the client receives the list of candidates from the server, the client performs the candidate gathering on its side. Then the client sends its candidate list to the session host.
+After the session host and client exchange their candidate lists, both parties attempt to connect with each other using all the gathered candidates. This connection attempt is simultaneous on both sides.
+Many NAT gateways are configured to allow incoming traffic to a socket as soon as an outbound data transfer initializes it. This behavior of NAT gateways is the reason the simultaneous connection attempt is essential.
+After the initial packet exchange, the client and session host may establish one or many data flows. After that, Remote Desktop Protocol chooses the fastest network path. The client then establishes a secure TLS connection with the session host and initiates the RDP Shortpath transport.
+After RDP establishes the Shortpath, all Dynamic Virtual Channels (DVCs), including remote graphics, input, and device redirection move to the new transport.
+
+## Enabling the preview of RDP Shortpath for public networks
+
+To participate in the preview of RDP Shortpath, you need to enable the Shortpath functionality. You can configure RDP Shortpath on any number of session hosts used in your environment. There's no requirement to enable RDP Shortpath on all hosts in the pool.
+We recommend you use a validation host pool by following the steps in [Define your host pool as a validation host pool](create-validation-host-pool.md#define-your-host-pool-as-a-validation-host-pool).
+
+Follow the steps below to configure a session host:
+
+1. Connect to the session host.
+2. Open an elevated command prompt.
+3. Enable RDP Shortpath for public networks:
+
+```cmd
+REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations" /v ICEControl /t REG_DWORD /d 2 /f
+```
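+
+If you prefer PowerShell over `reg.exe`, the following equivalent sketch sets the same value:
+
+```powershell
+# Equivalent of the REG ADD command above, using the native registry provider.
+New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations" `
+    -Name ICEControl -PropertyType DWORD -Value 2 -Force
+```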
+
+## Disabling the preview of RDP Shortpath for public networks
+
+If you decide to stop participating in the preview, you can disable the RDP Shortpath functionality.
+
+Follow the steps below to configure a session host:
+
+1. Connect to the session host.
+2. Open an elevated command prompt.
+3. Disable RDP Shortpath for public networks:
+
+```cmd
+REG DELETE "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations" /v ICEControl /f
+```
+
+## Network configuration
+
+To support RDP Shortpath for public networks, you typically don't need any particular configuration. Azure Virtual Desktop client and Session host will automatically discover the direct data flow if it's possible in your network configuration. However, every environment is unique, and some network configurations may negatively affect the rate of success of the direct connection.
+Follow the recommendations below to increase the probability of a direct data flow.
+
+### Allow outbound UDP connectivity
+
+RDP Shortpath uses UDP to establish a data flow. If a firewall on your network blocks UDP traffic, RDP Shortpath will fail, and the connection will fall back to TCP-based reverse connect transport.
+Azure Virtual Desktop uses STUN servers provided by [Azure Communication Services](/communication-services) and Microsoft Teams.
+By the nature of the feature, outbound connectivity from the session hosts to the client is required. Unfortunately, you can't predict where your users are located in most cases. Therefore, we recommend allowing outbound UDP connectivity to the Internet.
+You can [limit the port range](#limiting-port-range-used-on-the-client-side) used to listen to the incoming UDP flow.
+Use the following table for reference when configuring firewalls for RDP Shortpath.
+
+#### Session host virtual network
+
+| Name | Source | Destination Port | Protocol | Destination | Action |
+|---|---|---|---|---|---|
+| RDP Shortpath Server Endpoint | VM Subnet | 1024-65535 | UDP | * | Allow |
+| STUN Access | VM Subnet | 3478 | UDP | 13.107.17.41/24, 13.107.64.0/18, 20.202.0.0/16, 52.112.0.0/14, 52.120.0.0/14 | Allow |
+
+#### Client network
+
+| Name | Source | Destination Port | Protocol | Destination | Action |
+|---|---|---|---|---|---|
+| RDP Shortpath Server Endpoint | Client network | 1024-65535 | UDP | Public IP addresses assigned to NAT Gateway or Azure Firewall | Allow |
+| STUN Access | Client network | 3478 | UDP | 13.107.17.41/24, 13.107.64.0/18, 20.202.0.0/16, 52.112.0.0/14, 52.120.0.0/14 | Allow |
+
+ > [!NOTE]
+ > The IP ranges for STUN servers used in the preview will change when the feature is released to General Availability.
+
+### Limiting port range used on the client side
+
+By default, RDP Shortpath for public networks uses an ephemeral port range (49152-65535) to establish a direct path between server and client. However, you may want to configure the server to use a smaller, predictable port range in some cases.
+To enable a limited port range, you can use the following command on the session host:
+
+```cmd
+REG ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v ICEEnableClientPortRange /t REG_DWORD /d 1 /f
+```
+
+When you enable this setting on the session host, the Azure Virtual Desktop client will randomly select the port from the range for every connection. If the specified port range is exhausted, the client's operating system will choose a port to use.
+By default, when the port range configuration is enabled, the client will choose a port from the range of 38300-39299.
+If you want to change the port numbers, you can customize a UDP port range for the Azure Virtual Desktop client.
+When choosing the base port and pool size, make sure the resulting upper bound (base port + pool size - 1) doesn't exceed 49151. For example, if you select 38300 as the port base and 1000 as the pool size, the upper bound will be 39299.
+To specify the port range, use the following commands, substituting the base port and the number of ports.
+
+```cmd
+reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v ICEClientPortBase /t REG_DWORD /d 38300 /f
+reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v ICEClientPortRange /t REG_DWORD /d 1000 /f
+```
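+
+To confirm the values took effect, you can read them back, for example with this small PowerShell sketch:
+
+```powershell
+# Read back the client port-range values set above.
+Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" |
+    Select-Object ICEEnableClientPortRange, ICEClientPortBase, ICEClientPortRange
+```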
+
+### Disabling RDP Shortpath on the client
+
+To disable RDP Shortpath for a specific client, you can use the following Group Policy to disable the UDP support:
+
+1. On the client, run **gpedit.msc**.
+2. Go to **Computer Configuration** > **Administration Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Connection Client**.
+3. Set the **Turn Off UDP On Client** setting to **Enabled**.
+
+### Teredo support
+
+While not required for RDP Shortpath, Teredo adds extra NAT traversal candidates and increases the chance of a successful RDP Shortpath connection in IPv4-only networks.
+You can enable Teredo on both the session host and the client side by running the following command:
+
+```cmd
+netsh interface Teredo set state type=enterpriseclient
+```
+
+### UPnP support
+
+To improve the chances of a direct connection on the Remote Desktop client side, RDP Shortpath may use [UPnP](/windows/win32/upnp/universal-plug-and-play-start-page) to configure a port mapping on the NAT router. UPnP is a standard technology used by various applications, such as Xbox, Delivery Optimization, and Teredo. UPnP is generally available on the routers typically found on a home network. The UPnP protocol is enabled by default on most home routers and access points, but it's often disabled on corporate networks.
+
+## General recommendations
+
+- Avoid using force tunneling configurations if your users access Azure Virtual Desktop over the Internet.
+- Make sure you aren't using double NAT or Carrier-Grade NAT (CGN) configurations.
+- Recommend that users don't disable UPnP on their home routers.
+- Avoid using cloud packet-inspection services.
+- Avoid using TCP-based VPN solutions.
+- Enable IPv6 connectivity or Teredo.
+
+## Verify your network connectivity
+
+Next, you'll need to make sure your network is using RDP Shortpath. You can verify the transport with either a "Connection Information" dialog or by using Log Analytics.
+
+### Connection Information dialog
+
+To make sure connections are using RDP Shortpath, open the "Connection Information" dialog by going to the **Connection** toolbar at the top of the screen and selecting the antenna icon, as shown in the following screenshot.
+++
+### Use Log Analytics
+
+If you're using [Azure Log Analytics](./diagnostics-log-analytics.md), you can monitor connections by querying the [WVDConnections table](/azure/azure-monitor/reference/tables/wvdconnections). A column named **UdpUse** indicates whether the Azure Virtual Desktop RDP stack is using the UDP protocol for the current user connection.
+The possible values are:
+
+- **0** - The user connection isn't using RDP Shortpath.
+- **1** - The user connection is using RDP Shortpath for managed networks.
+- **2** - The user connection is using RDP Shortpath for public networks.
+
+The following query lets you review connection information. You can run this query in the [Log Analytics query editor](../azure-monitor/logs/log-analytics-tutorial.md#write-a-query). In the query, replace `userupn` with the UPN of the user you want to look up.
+
+```kusto
+let Events = WVDConnections | where UserName == "userupn";
+Events
+| where State == "Connected"
+| project CorrelationId, UserName, ResourceAlias, StartTime=TimeGenerated, UdpUse, SessionHostName, SessionHostSxSStackVersion
+| join (Events
+| where State == "Completed"
+| project EndTime=TimeGenerated, CorrelationId, UdpUse)
+on CorrelationId
+| project StartTime, Duration = EndTime - StartTime, ResourceAlias, UdpUse, SessionHostName, SessionHostSxSStackVersion
+| sort by StartTime asc
+```
+
+You can verify whether RDP Shortpath is enabled for a specific user session by running the following Log Analytics query:
+
+```kusto
+WVDCheckpoints
+| where Name contains "Shortpath"
+```
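+
+If you'd rather run these checks from a script, the following sketch runs the same query with the `Az.OperationalInsights` PowerShell module. The workspace ID is a placeholder that you'd replace with your own Log Analytics workspace ID:
+
+```powershell
+# Run the Shortpath checkpoint query against a Log Analytics workspace.
+Connect-AzAccount
+$result = Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-id>' `
+    -Query 'WVDCheckpoints | where Name contains "Shortpath"'
+$result.Results | Format-Table
+```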
+
+## Troubleshooting
+
+### Verifying STUN server connectivity and NAT type
+
+If you're unable to establish a connection using the RDP Shortpath transport, you can use the following PowerShell script to validate connectivity to the STUN servers:
+
+```powershell
+function Test-StunEndpoint
+{
+ param
+ (
+ [Parameter(Mandatory)]
+ $UdpClient,
+ [Parameter(Mandatory)]
+ $StunEndpoint
+ )
+ $ipendpoint = $null
+ try
+ {
+ $UdpClient.client.ReceiveTimeout = 5000
+ $listenport = $UdpClient.client.localendpoint.port
+ $endpoint = New-Object -TypeName System.Net.IPEndPoint -ArgumentList ([IPAddress]::Any, $listenport)
+
+
+ [Byte[]] $payload =
+ 0x00, 0x01, # Message Type: 0x0001 (Binding Request)
+ 0x00, 0x00, # Message Length: 0 bytes excluding header
+ 0x21, 0x12, 0xa4, 0x42 # Magic Cookie: Always 0x2112A442
+
+ $LocalTransactionId = ([guid]::NewGuid()).ToByteArray()[1..12]
+ $payload = $payload + $LocalTransactionId
+ try
+ {
+ $null = $UdpClient.Send($payload, $payload.length, $StunEndpoint)
+ }
+ catch
+ {
+ throw "Unable to send data, check if $($StunEndpoint.AddressFamily) is configured"
+ }
+
+
+ try
+ {
+ $content = $UdpClient.Receive([ref]$endpoint)
+ }
+ catch
+ {
+ try
+ {
+ $null = $UdpClient.Send($payload, $payload.length, $StunEndpoint)
+ $content = $UdpClient.Receive([ref]$endpoint)
+ }
+ catch
+ {
+ try
+ {
+ $null = $UdpClient.Send($payload, $payload.length, $StunEndpoint)
+ $content = $UdpClient.Receive([ref]$endpoint)
+ }
+ catch
+ {
+ throw "Unable to receive data, check if firewall allows access to $($StunEndpoint.ToString())"
+ }
+ }
+ }
+
+
+ if (-not $content)
+ {
+ throw 'Null response.'
+ }
+
+ [Byte[]]$messageType = $content[0..1]
+ [Byte[]]$messageCookie = $content[4..7]
+ [Byte[]]$TransactionId = $content[8..19]
+ [Byte[]]$AttributeType = $content[20..21]
+ [Byte[]]$AttributeLength = $content[22..23]
+
+ if ([System.BitConverter]::IsLittleEndian)
+ {
+ [Array]::Reverse($AttributeLength)
+ }
+
+    # Validate the STUN binding response: message type, magic cookie, transaction ID, and attribute type.
+    if ([BitConverter]::ToString($messageType) -ne '01-01')
+    {
+        throw "Invalid message type: $([BitConverter]::ToString($messageType))"
+    }
+    if ([BitConverter]::ToString($messageCookie) -ne '21-12-A4-42')
+    {
+        throw "Invalid message cookie: $([BitConverter]::ToString($messageCookie))"
+    }
+
+    if ([BitConverter]::ToString($TransactionId) -ne [BitConverter]::ToString($LocalTransactionId))
+    {
+        throw "Invalid message id: $([BitConverter]::ToString($TransactionId))"
+    }
+    if ([BitConverter]::ToString($AttributeType) -ne '00-20')
+    {
+        throw "Invalid Attribute Type: $([BitConverter]::ToString($AttributeType))"
+    }
+ $ProtocolByte = $content[25]
+ if (-not (($ProtocolByte -eq 1) -or ($ProtocolByte -eq 2)))
+ {
+ throw "Invalid Address Type: $([BitConverter]::ToString($ProtocolByte))"
+ }
+ $portArray = $content[26..27]
+ if ([System.BitConverter]::IsLittleEndian)
+ {
+ [Array]::Reverse($portArray)
+ }
+
+ $port = [Bitconverter]::ToUInt16($portArray, 0) -bxor 0x2112
+
+ if ($ProtocolByte -eq 1)
+ {
+ $IPbytes = $content[28..31]
+ if ([System.BitConverter]::IsLittleEndian)
+ {
+ [Array]::Reverse($IPbytes)
+ }
+ $IPByte = [System.BitConverter]::GetBytes(([Bitconverter]::ToUInt32($IPbytes, 0) -bxor 0x2112a442))
+
+ if ([System.BitConverter]::IsLittleEndian)
+ {
+ [Array]::Reverse($IPByte)
+ }
+ $IP = [ipaddress]::new($IPByte)
+ }
+ elseif ($ProtocolByte -eq 2)
+ {
+        $IPbytes = $content[28..43] # 16-byte XOR-encoded IPv6 address
+ [Byte[]]$magic = $content[4..19]
+ for ($i = 0; $i -lt $IPbytes.Count; $i ++)
+ {
+ $IPbytes[$i] = $IPbytes[$i] -bxor $magic[$i]
+ }
+ $IP = [ipaddress]::new($IPbytes)
+ }
+ $ipendpoint = [IPEndpoint]::new($IP, $port)
+ }
+ catch
+ {
+ Write-Host -Object "Failed to communicate $($StunEndpoint.ToString()) with error: $_" -ForegroundColor Red
+ }
+ return $ipendpoint
+}
+
+$UdpClient6 = [Net.Sockets.UdpClient]::new([Net.Sockets.AddressFamily]::InterNetworkV6)
+$UdpClient = [Net.Sockets.UdpClient]::new([Net.Sockets.AddressFamily]::InterNetwork)
+
+
+$ipendpoint1 = Test-StunEndpoint -UdpClient $UdpClient -StunEndpoint ([IPEndpoint]::new(([Net.Dns]::GetHostAddresses('worldaz.turn.teams.microsoft.com')|Where-Object -FilterScript {$_.AddressFamily -EQ 'InterNetwork'})[0].Address, 3478))
+$ipendpoint2 = Test-StunEndpoint -UdpClient $UdpClient -StunEndpoint ([IPEndpoint]::new([ipaddress]::Parse('13.107.17.41'), 3478))
+$ipendpoint3 = Test-StunEndpoint -UdpClient $UdpClient6 -StunEndpoint ([IPEndpoint]::new([ipaddress]::Parse('2a01:111:202f::155'), 3478))
+
+$localendpoint1 = $UdpClient.Client.LocalEndPoint
+$localEndpoint2 = $UdpClient6.Client.LocalEndPoint
+
+if ($null -ne $ipendpoint1)
+{
+ if ($ipendpoint1.Port -eq $localendpoint1.Port)
+ {
+ Write-Host -Object 'Local NAT uses port preservation' -ForegroundColor Green
+ }
+ else
+ {
+        Write-Host -Object 'Local NAT does not use port preservation; a custom port range may not work with Shortpath' -ForegroundColor Red
+ }
+    if ($null -ne $ipendpoint2)
+ {
+ if ($ipendpoint1.Equals($ipendpoint2))
+ {
+ Write-Host -Object 'Local NAT reuses SNAT ports' -ForegroundColor Green
+ }
+ else
+ {
+        Write-Host -Object 'Local NAT does not reuse SNAT ports, preventing Shortpath from connecting to this endpoint' -ForegroundColor Red
+ }
+ }
+}
+Write-Output -InputObject "`nLocal endpoints:`n$localendpoint1`n$localEndpoint2"
+Write-Output -InputObject "`nDiscovered external endpoints:`n$ipendpoint1`n$ipendpoint2`n$ipendpoint3`n"
+
+$UdpClient.Close()
+$UdpClient6.Close()
+
+Pause
+
+```
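+
+If the script reports that it failed to communicate with one of the STUN endpoints, check that your firewall allows outbound UDP traffic to port 3478 from both the client and the session host networks.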
+
+## References
+
+- [RFC 8839](https://datatracker.ietf.org/doc/html/rfc8839) - Session Description Protocol (SDP) Offer/Answer Procedures for Interactive Connectivity Establishment (ICE)
+- [RFC 8489](https://datatracker.ietf.org/doc/html/rfc8489) - Session Traversal Utilities for NAT (STUN)
+- [RFC 2663](https://datatracker.ietf.org/doc/html/rfc2663) - IP Network Address Translator (NAT) Terminology and Considerations
+
+## Next steps
+
+- To learn about Azure Virtual Desktop network connectivity, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md).
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
The Azure Virtual Desktop Agent updates regularly. This article is where you'll
Make sure to check back here often to keep up with new updates.
+## Version 1.0.4230.1600
+
+This update was released in March 2022 and includes the following changes:
+
+- Fixed an issue where the agent health check result was empty for the first agent heartbeat.
+- Added Azure VM ID to the WVDAgentHealthStatus Log Analytics table.
+- Updated the agent's update logic to install the Geneva Monitoring agent sooner.
+ ## Version 1.0.4119.1500 This update was released in February 2022 and includes the following changes:
This update was released in February 2022 and includes the following changes:
This update was released in January 2022 and includes the following changes: - Added logging to better capture agent update telemetry.-- Updated the agent's Azure Instance Metadata Service health check to be Azure Stack HCI-friendly
+- Updated the agent's Azure Instance Metadata Service health check to be Azure Stack HCI-friendly.
## Version 1.0.3855.1400
virtual-machines Azure Disk Enc Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/azure-disk-enc-linux.md
For an example of template deployment based on schema v0.1, see the Azure Quicks
>[!WARNING] > - If you have previously used Azure Disk Encryption with Azure AD to encrypt a VM, you must continue to use this option to encrypt your VM.
-> - When encrypting Linux OS volumes, the VM should be considered unavailable. We strongly recommend to avoid SSH logins while the encryption is in progress to avoid issues blocking any open files that will need to be accessed during the encryption process. To check progress, use the [Get-AzVMDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) PowerShell cmdlet or the [vm encryption show](/cli/azure/vm/encryption#az-vm-encryption-show) CLI command. This process can be expected to take a few hours for a 30GB OS volume, plus additional time for encrypting data volumes. Data volume encryption time will be proportional to the size and quantity of the data volumes unless the encrypt format all option is used.
+> - When encrypting Linux OS volumes, the VM should be considered unavailable. We strongly recommend avoiding SSH logins while the encryption is in progress, to avoid issues with open files that need to be accessed during the encryption process. To check progress, use the [Get-AzVMDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) PowerShell cmdlet or the [vm encryption show](/cli/azure/vm/encryption#az-vm-encryption-show) CLI command. Expect this process to take a few hours for a 30-GB OS volume, plus additional time for encrypting data volumes. Data volume encryption time is proportional to the size and quantity of the data volumes; the `encrypt format all` option is faster than in-place encryption, but results in the loss of all data on the disks.
> - Disabling encryption on Linux VMs is only supported for data volumes. It is not supported on data or OS volumes if the OS volume has been encrypted. >[!NOTE]
virtual-machines Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/guest-configuration.md
and
[ConfigurationSetting](/rest/api/guestconfiguration/guestconfigurationassignments/createorupdate#configurationsetting) properties are each managed per-configuration rather than on the VM extension.
+## Guest Configuration resource provider error codes
+
+The following table lists the possible error codes you might see when enabling the extension.
+
+|Error Code|Description|
+|-|-|
+|NoComplianceReport|The VM hasn't reported compliance data.|
+|GCExtensionMissing|Guest Configuration extension is missing.|
+|ManagedIdentityMissing|Managed identity is missing.|
+|UserIdentityMissing|User assigned identity is missing.|
+|GCExtensionManagedIdentityMissing|Guest Configuration extension and managed identity are missing.|
+|GCExtensionUserIdentityMissing|Guest Configuration extension and user identity are missing.|
+|GCExtensionIdentityMissing|Guest Configuration extension, managed identity, and user identity are missing.|
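+
+To narrow down which component is missing, you can inspect the VM from PowerShell. This is a minimal sketch that assumes the `Az.Compute` module is installed; the resource group and VM names are placeholders:
+
+```powershell
+# Check for the Guest Configuration extension and a managed identity on a VM
+# (addresses GCExtensionMissing and ManagedIdentityMissing above).
+$vm = Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'
+$vm.Extensions | Where-Object { $_.Publisher -like 'Microsoft.GuestConfiguration*' } |
+    Select-Object Name, Publisher, VirtualMachineExtensionType
+$vm.Identity   # Expect SystemAssigned and/or UserAssigned
+```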
+ ## Next steps * For more information about Azure Policy's guest configuration, see [Understand Azure Policy's Guest Configuration](../../governance/policy/concepts/guest-configuration.md)
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-portal.md
Previously updated : 12/13/2021 Last updated : 03/29/2022
Create an SSH connection with the VM.
1. At your prompt, open an SSH connection to your virtual machine. Replace the IP address with the one from your VM, and replace the path to the `.pem` with the path to where the key file was downloaded. ```console
-ssh -i .\Downloads\myKey1.pem azureuser@10.111.12.123
+ssh -i .\Downloads\myKey.pem azureuser@10.111.12.123
``` > [!TIP]
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resize-vm.md
If the virtual machine is currently running, changing its size will cause it to
If your VM is still running and you don't see the size you want in the list, stopping the virtual machine may reveal more sizes. > [!WARNING]
- > If resizing a production VM, consider using [Azure Capacity Reservations](capacity-reservation-overview.md) to reserve Compute capacity in the region. Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected.
+ > Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected.
+ >
+ > If you are resizing a production VM, consider using [Azure Capacity Reservations](capacity-reservation-overview.md) to reserve Compute capacity in the region.
+
### [CLI](#tab/cli)
To resize a VM, you need the latest [Azure CLI](/cli/azure/install-az-cli2) inst
``` > [!WARNING]
- > If resizing a production VM, consider using [Azure Capacity Reservations](capacity-reservation-overview.md) to reserve Compute capacity in the region. Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected.
+ > Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected.
+ >
+ > If you are resizing a production VM, consider using [Azure Capacity Reservations](capacity-reservation-overview.md) to reserve Compute capacity in the region.
### [PowerShell](#tab/powershell)
Start-AzVM -ResourceGroupName $resourceGroup -Name $vmName
``` > [!WARNING]
- > If resizing a production VM, consider using [Azure Capacity Reservations](capacity-reservation-overview.md) to reserve Compute capacity in the region. Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected.
+ > Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected.
+ >
+ > If you are resizing a production VM, consider using [Azure Capacity Reservations](capacity-reservation-overview.md) to reserve Compute capacity in the region.
**Use PowerShell to resize a VM in an availability set**
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
Azure SQL Managed Instance has some network requirements. If your security admin
### What are the service limitation of Azure Virtual Network Manager?
-* A hub in a hub-and-spoke topology can be peered up to 250 spokes.
+* A connected group can have up to 250 virtual networks. Virtual networks in a mesh topology are in a connected group, so a mesh configuration has a limit of 250 virtual networks.
-* A mesh topology can have up to 250 virtual networks.
+* You can have network groups with or without direct connectivity enabled in the same hub-and-spoke configuration, as long as the total number of virtual networks peered to the hub **doesn't exceed 500**.
+ * If the network group peered with the hub **has direct connectivity enabled**, these virtual networks are in a *connected group*, so the network group has a limit of 250 virtual networks.
+ * If the network group peered with the hub **doesn't have direct connectivity enabled**, the network group can have up to the total limit for a hub-and-spoke topology.
-* The subnets in a virtual network can't talk to each other if they have the same address space in a mesh configuration.
+* A virtual network can be part of up to two connected groups.
+
+ **Example:**
+ * A virtual network can be part of two mesh configurations.
+ * A virtual network can be part of a mesh topology and a network group that has direct connectivity enabled in a hub-and-spoke topology.
+ * A virtual network can be part of two network groups with direct connectivity enabled in the same or different hub-and-spoke configuration.
+
+* You can have virtual networks with overlapping IP spaces in the same connected group. However, communication to an overlapped IP address will be dropped.
* The maximum number of IP prefixes in all admin rules combined is 1000.
Azure SQL Managed Instance has some network requirements. If your security admin
* Azure Virtual Network Manager doesn't have cross-tenant support in the public preview.
-* A virtual network can be part of up to two mesh configurations.
- ## Next steps Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal.
virtual-network Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md
A public IP address range that's brought to Azure must be owned by you and regis
### Provision
-After the previous steps are completed, the public IP range can complete the **Provisioning** phase. The range will be created as a custom IP prefix resource in your subscription. Public IP prefixes and public IPs can be derived from your range and associated to Azure resources. The IPs won't be advertised at this point and not reachable.
+After the previous steps are completed, the public IP range can complete the **Provisioning** phase. The range will be created as a custom IP prefix resource in your subscription. Public IP prefixes and public IPs can be derived from your range and associated with any Azure resource that supports Standard SKU public IPs. (IPs derived from a custom IP prefix can also be safeguarded with [DDoS Protection Standard](../../ddos-protection/ddos-protection-overview.md).) The IPs won't be advertised at this point and won't be reachable.
### Commission
virtual-network Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-cli.md
description: In this quickstart, learn to create a virtual network using the Azu
Previously updated : 03/06/2021 Last updated : 04/13/2022 #Customer intent: I want to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
ssh <publicIpAddress>
## Communicate between VMs
-To confirm private communication between the **myVM2** and **myVM1** VMs, enter this command:
+To confirm private communication between the **myVM2** and **myVM1** VMs, enter `ping myVM1 -c 4`.
+
+You'll receive a reply message like this:
```bash
-ping myVM1 -c 4
-```
-You'll receive four replies from *10.0.0.4*.
+azureuser@myVM2:~$ ping myVM1 -c 4
+PING myVM1.h0o2foz2r0tefncddcnfqm2lid.bx.internal.cloudapp.net (10.0.0.4) 56(84) bytes of data.
+64 bytes from myvm1.internal.cloudapp.net (10.0.0.4): icmp_seq=1 ttl=64 time=2.77 ms
+64 bytes from myvm1.internal.cloudapp.net (10.0.0.4): icmp_seq=2 ttl=64 time=1.95 ms
+64 bytes from myvm1.internal.cloudapp.net (10.0.0.4): icmp_seq=3 ttl=64 time=2.19 ms
+64 bytes from myvm1.internal.cloudapp.net (10.0.0.4): icmp_seq=4 ttl=64 time=1.85 ms
+
+--- myVM1.h0o2foz2r0tefncddcnfqm2lid.bx.internal.cloudapp.net ping statistics ---
+4 packets transmitted, 4 received, 0% packet loss, time 3003ms
+rtt min/avg/max/mdev = 1.859/2.195/2.770/0.357 ms
+
+```
Exit the SSH session with the **myVM2** VM.
virtual-network Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-portal.md
description: In this quickstart, learn how to create a virtual network using the Azure portal. Previously updated : 03/17/2021 Last updated : 04/13/2022
Create two VMs in the virtual network:
## Communicate between VMs
-1. In the bastion connection of **myVM1**, open PowerShell.
+1. In the Bastion connection of **myVM1**, open PowerShell.
-2. Enter `ping myvm2`.
+2. Enter `ping myVM2`.
- You'll receive a message similar to this output:
+    You'll get output like this:
```powershell
- Pinging myvm2.cs4wv3rxdjgedggsfghkjrxuqf.bx.internal.cloudapp.net [10.1.0.5] with 32 bytes of data:
- Reply from 10.1.0.5: bytes=32 time=3ms TTL=128
- Reply from 10.1.0.5: bytes=32 time=1ms TTL=128
- Reply from 10.1.0.5: bytes=32 time=1ms TTL=128
- Reply from 10.1.0.5: bytes=32 time=1ms TTL=128
+ PS C:\Users\myVM1> ping myVM2
- Ping statistics for 10.1.0.5:
- Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
- Approximate round trip times in milli-seconds:
- Minimum = 1ms, Maximum = 3ms, Average = 1ms
+ Pinging myVM2.ovvzzdcazhbu5iczfvonhg2zrb.bx.internal.cloudapp.net
+ Request timed out.
+ Request timed out.
+ Request timed out.
+ Request timed out.
+
+ Ping statistics for 10.0.0.5:
+ Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
+ ```
+
+    The ping fails because it uses the Internet Control Message Protocol (ICMP). By default, ICMP isn't allowed through your Windows firewall.
+
+1. To allow **myVM2** to ping **myVM1** in a later step, enter this command:
+
+ ```powershell
+    New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
```
-3. Close the bastion connection to **myVM1**.
+    That command allows inbound ICMP through the Windows firewall.
+
+3. Close the Bastion connection to **myVM1**.
4. Complete the steps in [Connect to myVM1](#connect-to-myvm1), but connect to **myVM2**.
-5. Open PowerShell on **myVM2**, enter `ping myvm1`.
- You'll receive something like this message:
+5. Open PowerShell on **myVM2** and enter `ping myVM1`.
+
+ You'll receive a successful reply message like this:
```powershell
- Pinging myvm1.cs4wv3rxdjgedggsfghkjrxuqf.bx.internal.cloudapp.net [10.1.0.4] with 32 bytes of data:
+ Pinging myVM1.cs4wv3rxdjgedggsfghkjrxuqf.bx.internal.cloudapp.net [10.1.0.4] with 32 bytes of data:
Reply from 10.1.0.4: bytes=32 time=1ms TTL=128 Reply from 10.1.0.4: bytes=32 time=1ms TTL=128 Reply from 10.1.0.4: bytes=32 time=1ms TTL=128
virtual-network Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-powershell.md
Title: Create a virtual network - quickstart - Azure PowerShell description: In this quickstart, you create a virtual network using the Azure portal. A virtual network lets Azure resources communicate with each other and with the internet.-+ Previously updated : 03/06/2021 Last updated : 04/13/2022 #Customer intent: I want to create a virtual network so that virtual machines can communicate with privately with each other and with the internet.
You'll have to create another user and password. Azure takes a few minutes to cr
To get the public IP address of the VM, use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress).
-This example returns the public IP address of the **myVm1** VM:
+This example returns the public IP address of the **myVM1** VM:
```azurepowershell-interactive $ip = @{
mstsc /v:<publicIpAddress>
## Communicate between VMs
-1. In the Remote Desktop of **myVm1**, open PowerShell.
+1. In the Remote Desktop of **myVM1**, open PowerShell.
-1. Enter `ping myVm2`.
+1. Enter `ping myVM2`.
- You'll get something like this back:
+    You'll get output like this:
```powershell
- PS C:\Users\myVm1> ping myVm2
+ PS C:\Users\myVM1> ping myVM2
- Pinging myVm2.ovvzzdcazhbu5iczfvonhg2zrb.bx.internal.cloudapp.net
+ Pinging myVM2.ovvzzdcazhbu5iczfvonhg2zrb.bx.internal.cloudapp.net
Request timed out. Request timed out. Request timed out.
mstsc /v:<publicIpAddress>
The ping fails, because it uses the Internet Control Message Protocol (ICMP). By default, ICMP isn't allowed through your Windows firewall.
-1. To allow **myVm2** to ping **myVm1** in a later step, enter this command:
+1. To allow **myVM2** to ping **myVM1** in a later step, enter this command:
```powershell New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
mstsc /v:<publicIpAddress>
That command lets ICMP inbound through the Windows firewall.
-1. Close the remote desktop connection to **myVm1**.
+1. Close the remote desktop connection to **myVM1**.
-1. Repeat the steps in [Connect to a VM from the internet](#connect-to-a-vm-from-the-internet). This time, connect to **myVm2**.
+1. Repeat the steps in [Connect to a VM from the internet](#connect-to-a-vm-from-the-internet). This time, connect to **myVM2**.
-1. From a command prompt on the **myVm2** VM, enter `ping myvm1`.
+1. From a command prompt on the **myVM2** VM, enter `ping myVM1`.
- You'll get something like this back:
+ You'll get a reply message like this:
```cmd
- C:\windows\system32>ping myVm1
+ C:\windows\system32>ping myVM1
- Pinging myVm1.e5p2dibbrqtejhq04lqrusvd4g.bx.internal.cloudapp.net [10.0.0.4] with 32 bytes of data:
+ Pinging myVM1.e5p2dibbrqtejhq04lqrusvd4g.bx.internal.cloudapp.net [10.0.0.4] with 32 bytes of data:
Reply from 10.0.0.4: bytes=32 time=2ms TTL=128 Reply from 10.0.0.4: bytes=32 time<1ms TTL=128 Reply from 10.0.0.4: bytes=32 time<1ms TTL=128
mstsc /v:<publicIpAddress>
Minimum = 0ms, Maximum = 2ms, Average = 0ms ```
- You receive replies from **myVm1**, because you allowed ICMP through the Windows firewall on the **myVm1** VM in a previous step.
+ You receive replies from **myVM1**, because you allowed ICMP through the Windows firewall on the **myVM1** VM in a previous step.
-1. Close the remote desktop connection to **myVm2**.
+1. Close the remote desktop connection to **myVM2**.
## Clean up resources
virtual-wan Global Hub Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/global-hub-profile.md
Azure Virtual WAN offers two types of connectivity for remote users: global and hub-based. Use the following sections to learn about profile types and how to download them.
-> [!IMPORTANT]
-> RADIUS authentication supports only the hub-based profile.
+ ## Global profile
The global profile associated with a User VPN configuration points to a load bal
For example, you can associate a VPN configuration with two Virtual WAN hubs, one in West US and one in Southeast Asia. If a user connects to the global profile associated with the User VPN configuration, they'll connect to the closest Virtual WAN hub based on their location.
+> [!IMPORTANT]
+> If a Point-to-site VPN configuration used for a global profile is configured to authenticate users using the RADIUS protocol, make sure **Use Remote/On-premises RADIUS server** is turned on for all Point-to-site VPN gateways using that configuration. Additionally, ensure your RADIUS server is configured to accept authentication requests from the RADIUS proxy IP addresses of **all** Point-to-site VPN gateways using this VPN configuration.
+ To download the global profile: 1. Go to the virtual WAN.
virtual-wan Nat Rules Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/nat-rules-vpn-gateway.md
In this example, we'll NAT VPN site 1 to 172.30.0.0.0/24. However, because the V
* Select the VPN site that is connected to the Virtual WAN hub via Link A. Select **Edit Site** and input 172.30.0.0/24 as the private address space for the VPN site.
- :::image type="content" source="./media/nat-rules-vpn-gateway/vpn-site-static.png" alt-text="Screenshot showing how to edit the Private Address space of a VPN site" lightbox="./media/nat-rules-vpn-gateway/vpn-site-static.png":::
+ :::image type="content" source="./media/nat-rules-vpn-gateway/vpn-site-static.png" alt-text="Screenshot showing how to edit the Private Address space of a VPN site" lightbox="./media/nat-rules-vpn-gateway/vpn-site-static.png":::
### <a name="considerationsnobgp"></a>Considerations if VPN sites are statically configured (not connected via BGP)
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Note that youΓÇÖll only be able to update your virtual hub router if all the res
The route limit for OpenVPN clients is 1000.
+### How is Virtual WAN SLA calculated?
+
+Virtual WAN is a networking-as-a-service platform that has a 99.95% SLA. However, Virtual WAN combines many different components such as Azure Firewall, Site-to-site VPN, ExpressRoute, Point-to-site VPN, and Virtual WAN Hub/Integrated Network Virtual Appliances.
+
+The SLA for each component is calculated individually. For example, if ExpressRoute has a 10-minute downtime, the availability of ExpressRoute would be calculated as (Maximum Available Minutes - downtime) / Maximum Available Minutes * 100.
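+
+For instance, in a 30-day month there are 43,200 maximum available minutes, so 10 minutes of ExpressRoute downtime works out to (43,200 - 10) / 43,200 * 100 ≈ 99.98% availability for that component.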
+ ## Next steps * For more information about Virtual WAN, see [About Virtual WAN](virtual-wan-about.md).