Updates from: 04/15/2022 01:04:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md
Data resides in **Asia Pacific** for the following countries/regions:
Data resides in **Australia** for the following countries/regions:
-> Australia and New Zealand
+> Australia (AU) and New Zealand (NZ)
The following countries/regions are in the process of being added to the list. For now, you can still use Azure AD B2C by picking any of the countries/regions above.
active-directory-b2c Find Help Open Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/find-help-open-support-ticket.md
If you're unable to find answers by using self-help resources, you can open an online support ticket.
1. Select a **[Severity](https://azure.microsoft.com/support/plans/response)**, and your preferred contact method.
+ > [!NOTE]
+ > Under **Advanced diagnostic information**, it's highly recommended that you allow the collection of advanced information by selecting **Yes**. It enables the Microsoft support team to investigate the issue faster.
:::image type="content" source="media/find-help-and-submit-support-ticket/find-help-and-submit-support-ticket-1.png" alt-text="Screenshot of how to find help and submit support ticket part 1.":::

:::image type="content" source="media/find-help-and-submit-support-ticket/find-help-and-submit-support-ticket-2.png" alt-text="Screenshot of how to find help and submit support ticket part 2.":::

+ 1. Select **Next**. Under **4. Review + create**, you'll see a summary of your support ticket.
active-directory-domain-services How To Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/how-to-data-retrieval.md
+
+ Title: Instructions for data retrieval from Azure Active Directory Domain Services | Microsoft Docs
+description: Learn how to retrieve data from Azure Active Directory Domain Services (Azure AD DS).
+Last updated : 04/14/2022
+# Azure AD DS instructions for data retrieval
+
+This document describes how to retrieve data from Azure Active Directory Domain Services (Azure AD DS).
++
+## Use Azure Active Directory to create, read, update, and delete user objects
+
+You can create a user in the Azure AD portal or by using Microsoft Graph PowerShell or the Microsoft Graph API. You can also read, update, and delete users. The next sections show how to do these operations in the Azure AD portal.
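As a minimal sketch, assuming the Microsoft.Graph PowerShell module and placeholder values for the name, UPN, mail nickname, and password, creating the same user from PowerShell could look like this:

```powershell
# Sketch: create a user with Microsoft Graph PowerShell (placeholder values).
Connect-MgGraph -Scopes 'User.ReadWrite.All'

$passwordProfile = @{
    Password                      = '<initial-password>'
    ForceChangePasswordNextSignIn = $true
}

New-MgUser -DisplayName 'Mary Parker' `
    -UserPrincipalName 'mary@contoso.com' `
    -MailNickname 'mary' `
    -AccountEnabled `
    -PasswordProfile $passwordProfile
```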
+
+### Create, read, or update a user
+
+You can create a new user using the Azure Active Directory portal.
+To add a new user, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that has the User Administrator role for the organization.
+
+1. Search for and select *Azure Active Directory* from any page.
+
+1. Select **Users**, and then select **New user**.
+
+ ![Add a user through Users - All users in Azure AD](./media/tutorial-create-management-vm/add-user-in-users-all-users.png)
+
+1. On the **User** page, enter information for this user:
+
+ - **Name**. Required. The first and last name of the new user. For example, *Mary Parker*.
+
+ - **User name**. Required. The user name of the new user. For example, `mary@contoso.com`.
+
+ - **Groups**. Optionally, you can add the user to one or more existing groups.
+
+ - **Directory role**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role.
+
+ - **Job info**: You can add more information about the user here.
+
+1. Copy the autogenerated password provided in the **Password** box. You'll need to give this password to the user to sign in for the first time.
+
+1. Select **Create**.
+
+The user is created and added to your Azure AD organization.
+
+To read or update a user, search for and select the user, such as _Mary Parker_. Change any property, and then select **Save**.
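A minimal sketch of the same read-and-update flow in Microsoft Graph PowerShell; the job-title change is a hypothetical example property:

```powershell
# Sketch: read a user by display name, then update one property.
$user = Get-MgUser -Filter "displayName eq 'Mary Parker'"
Update-MgUser -UserId $user.Id -JobTitle 'Marketing Manager'   # example property change
```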
+
+### Delete a user
+
+To delete a user, follow these steps:
+
+1. Search for and select the user you want to delete from your Azure AD tenant. For example, _Mary Parker_.
+
+1. Select **Delete user**.
+
+ ![Users - All users page with Delete user highlighted](./media/tutorial-create-management-vm/delete-user-all-users-blade.png)
++
+The user is deleted and no longer appears on the **Users - All users** page. The user can be seen on the **Deleted users** page for the next 30 days and can be restored during that time.
+
+When a user is deleted, any licenses consumed by the user are made available for other users.
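Continuing the sketch above, the same deletion in Microsoft Graph PowerShell:

```powershell
# Sketch: delete the user. For up to 30 days the account can still be
# restored from the Deleted users page in the portal.
Remove-MgUser -UserId $user.Id
```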
+
+## Use RSAT tools to connect to an Azure AD DS managed domain and view users
+
+Sign in to an administrative workstation with a user account that's a member of the *AAD DC Administrators* group. The following steps require installation of [Remote Server Administration Tools (RSAT)](tutorial-create-management-vm.md#install-active-directory-administrative-tools).
+
+1. From the **Start** menu, select **Windows Administrative Tools**. The Active Directory Administration Tools are listed.
+
+ ![List of Administrative Tools installed on the server](./media/tutorial-create-management-vm/list-admin-tools.png)
+
+1. Select **Active Directory Administrative Center**.
+1. To explore the managed domain, choose the domain name in the left pane, such as *aaddscontoso*. Two containers named *AADDC Computers* and *AADDC Users* are at the top of the list.
+
+ ![List the available containers part of the managed domain](./media/tutorial-create-management-vm/active-directory-administrative-center.png)
+
+1. To see the users and groups that belong to the managed domain, select the **AADDC Users** container. The user accounts and groups from your Azure AD tenant are listed in this container.
+
+ In the following example output, a user account named *Contoso Admin* and a group for *AAD DC Administrators* are shown in this container.
+
+ ![View the list of Azure AD DS domain users in the Active Directory Administrative Center](./media/tutorial-create-management-vm/list-azure-ad-users.png)
+
+1. To see the computers that are joined to the managed domain, select the **AADDC Computers** container. An entry for the current virtual machine, such as *myVM*, is listed. Computer accounts for all devices that are joined to the managed domain are stored in this *AADDC Computers* container.
+
+You can also use the *Active Directory Module for Windows PowerShell*, installed as part of the administrative tools, to manage common actions in your managed domain.
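As a minimal sketch, assuming the managed domain is *aaddscontoso.com* and the module was installed with the administrative tools, listing the synchronized users could look like this:

```powershell
# Sketch: list users in the AADDC Users container of the managed domain.
Import-Module ActiveDirectory
Get-ADUser -Filter * -SearchBase 'OU=AADDC Users,DC=aaddscontoso,DC=com' |
    Select-Object Name, SamAccountName, Enabled
```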
+
+## Next steps
+* [Azure AD DS Overview](overview.md)
active-directory Define Conditional Rules For Provisioning User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md
Scoping filters are configured as part of the attribute mappings for each Azure
g. **REGEX MATCH**. Clause returns "true" if the evaluated attribute matches a regular expression pattern. For example: ([1-9][0-9]) matches any number between 10 and 99 (case sensitive).
- h. **NOT REGEX MATCH**. Clause returns "true" if the evaluated attribute doesn't match a regular expression pattern.
+ h. **NOT REGEX MATCH**. Clause returns "true" if the evaluated attribute doesn't match a regular expression pattern. It will return "false" if the attribute is null or empty.
i. **Greater_Than.** Clause returns "true" if the evaluated attribute is greater than the value. The value specified on the scoping filter must be an integer and the attribute on the user must be an integer [0,1,2,...].
Scoping filters are configured as part of the attribute mappings for each Azure
> - The IsMemberOf filter is not supported currently.
> - The members attribute on a group is not supported currently.
> - EQUALS and NOT EQUALS are not supported for multi-valued attributes
+> - Scoping filters will return "false" if the value is null or empty
9. Optionally, repeat steps 7-8 to add more scoping clauses.
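A minimal PowerShell sketch of the REGEX MATCH semantics described above, using the example pattern (the null/empty case is shown with an empty string):

```powershell
# Sketch: how the REGEX MATCH / NOT REGEX MATCH clauses evaluate.
[regex]::IsMatch('42', '[1-9][0-9]')   # True  - a number between 10 and 99
[regex]::IsMatch('7',  '[1-9][0-9]')   # False - a single digit doesn't match
[regex]::IsMatch('',   '[1-9][0-9]')   # False - an empty value never matches
```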
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Previously updated : 02/16/2022
Last updated : 04/13/2022
Generate a user alias by taking first three letters of user's first name and first three letters of user's last name.
* **INPUT** (surname): "Doe"
* **OUTPUT**: "JohDoe"
+### Add a comma between last name and first name
+Add a comma between last name and first name.
+
+**Expression:**
+`Join(", ", "", [surname], [givenName])`
+
+**Sample input/output:**
+
+* **INPUT** (givenName): "John"
+* **INPUT** (surname): "Doe"
+* **OUTPUT**: "Doe, John"
+
## Related Articles

* [Automate User Provisioning/Deprovisioning to SaaS Apps](../app-provisioning/user-provisioning.md)
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
-# Azure AD on-premises application provisioning architecture (preview)
+# Azure AD on-premises application identity provisioning architecture (preview)
## Overview
You can define one or more matching attribute(s) and prioritize them based on th
## Provisioning agent questions

Some common questions are answered here.
-### What is the GA version of the provisioning agent?
-
-For the latest GA version of the provisioning agent, see [Azure AD connect provisioning agent: Version release history](provisioning-agent-release-version-history.md).
-
### How do I know the version of my provisioning agent?

1. Sign in to the Windows server where the provisioning agent is installed.
You can also check whether all the required ports are open.
- Microsoft Azure AD Connect Agent Updater
- Microsoft Azure AD Connect Provisioning Agent Package
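A minimal sketch for checking the two services above and outbound connectivity from the agent server (the endpoint shown is one common example, not an exhaustive list):

```powershell
# Sketch: verify the agent services are running and port 443 is reachable.
Get-Service -DisplayName 'Microsoft Azure AD Connect*' | Select-Object DisplayName, Status
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443
```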
-### Provisioning agent history
+## Provisioning agent history
This article lists the versions and features of Azure Active Directory Connect Provisioning Agent that have been released. The Azure AD team regularly updates the Provisioning Agent with new features and functionality. Don't use the same agent for on-premises provisioning and Cloud Sync / HR-driven provisioning. Microsoft provides direct support for the latest agent version and one version before.
-## Download link
+### Download link
You can download the latest version of the agent using [this link](https://aka.ms/onpremprovisioningagent).
-## 1.1.846.0
+### 1.1.846.0
April 11th, 2022 - released for download
-### Fixed issues
+#### Fixed issues
- We added support for ObjectGUID as an anchor for the generic LDAP connector when provisioning users into AD LDS.
active-directory Concept Fido2 Hardware Vendor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md
You can become a Microsoft-compatible FIDO2 security key vendor through the following process:
- Receive an overview of the device from the vendor
- Microsoft will share our test scripts with you. Our engineering team will be able to answer questions if you have any specific needs.
- You will complete and send all passed results to the Microsoft Engineering team
- - Once Microsoft confirms, you will send multiple hardware/solution samples of each device to Microsoft Engineering team
- - Upon receipt Microsoft Engineering team will conduct test script verification and user experience flow
4. Upon successful passing of all tests by the Microsoft Engineering team, Microsoft will confirm the vendor's device is listed in [the FIDO MDS](https://fidoalliance.org/metadata/).
5. Microsoft will add your FIDO2 Security Key to the Azure AD backend and to our list of approved FIDO2 vendors.
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 04/13/2022
Last updated : 04/14/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on April 13th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on April 14th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 E5 without Audio Conferencing | SPE_E5_NOPSTNCONF | cd2925a3-5076-4233-8931-638a8c94f773 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/> EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>RREMIUM_ENCRYPTION 
(617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics ΓÇô Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics ΓÇô Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 ΓÇô Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 ΓÇô Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) 
(e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft 365 F1 | M365_F1 | 44575883-256e-4a79-9da4-ebe9acabe2b2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 
(41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SharePoint Online Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft 365 F3 | SPE_F1 | 66b55226-6b4f-492c-910c-a3b7a3c9d993 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>CDS_O365_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>POWER_VIRTUAL_AGENTS_O365_F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>PROJECT_O365_F3 (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>WIN10_ENT_LOC_F1 (e041597c-9c7f-4ed9-99b0-2663301576f7)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>Common Data Service for Teams_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>Exchange Online Kiosk (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor 
Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan F1) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala Pro Plan 1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 K1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>Power Automate for Office 365 K1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>Power Virtual Agents for Office 365 F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>Project for Office (Plan F) (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>Windows 10 Enterprise E3 (local only) (e041597c-9c7f-4ed9-99b0-2663301576f7)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Microsoft 365 F5 Security Add-on | SPE_F5_SEC | 67ffe999-d9ca-49e1-9d2c-03fb28aa7a48 | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) |
| Microsoft 365 F5 Security + Compliance Add-on | SPE_F5_SECCOMP | 32b47245-eb31-44fc-b945-a8b1576c439f | AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP(871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Online Archiving (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics ΓÇô Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 ΓÇô Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/> Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Communications DLP(6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/> Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 
Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f) | | MICROSOFT FLOW FREE | FLOW_FREE | f30db892-07e9-47e9-837c-80727f46fd3d | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170) | COMMON DATA SERVICE - VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170) | | MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOV | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) |
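As a companion to the table above, a minimal Microsoft Graph PowerShell sketch (assuming the Microsoft.Graph module) that lists the string IDs, GUIDs, and service plans for the products in your own tenant:

```powershell
# Sketch: enumerate subscribed products and their service plans.
Connect-MgGraph -Scopes 'Organization.Read.All'
Get-MgSubscribedSku | ForEach-Object {
    '{0} ({1})' -f $_.SkuPartNumber, $_.SkuId
    $_.ServicePlans | ForEach-Object {
        '  {0} ({1})' -f $_.ServicePlanName, $_.ServicePlanId
    }
}
```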
api-management Api Management Sample Send Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-send-request.md
Title: Using API Management service to generate HTTP requests
description: Learn to use request and response policies in API Management to call external services from your API
Previously updated : 12/15/2016
Last updated : 04/14/2022

# Using external services from the Azure API Management service
The following example demonstrates how to send a message to a Slack chat room if the HTTP status code is 500 or greater.
```xml
<choose>
- <when condition="@(context.Response.StatusCode >= 500)">
- <send-one-way-request mode="new">
- <set-url>https://hooks.slack.com/services/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</set-url>
- <set-method>POST</set-method>
- <set-body>@{
- return new JObject(
- new JProperty("username","APIM Alert"),
- new JProperty("icon_emoji", ":ghost:"),
- new JProperty("text", String.Format("{0} {1}\nHost: {2}\n{3} {4}\n User: {5}",
- context.Request.Method,
- context.Request.Url.Path + context.Request.Url.QueryString,
- context.Request.Url.Host,
- context.Response.StatusCode,
- context.Response.StatusReason,
- context.User.Email
- ))
- ).ToString();
- }</set-body>
- </send-one-way-request>
- </when>
+ <when condition="@(context.Response.StatusCode >= 500)">
+ <send-one-way-request mode="new">
+ <set-url>https://hooks.slack.com/services/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</set-url>
+ <set-method>POST</set-method>
+ <set-body>@{
+ return new JObject(
+ new JProperty("username","APIM Alert"),
+ new JProperty("icon_emoji", ":ghost:"),
+ new JProperty("text", String.Format("{0} {1}\nHost: {2}\n{3} {4}\n User: {5}",
+ context.Request.Method,
+ context.Request.Url.Path + context.Request.Url.QueryString,
+ context.Request.Url.Host,
+ context.Response.StatusCode,
+ context.Response.StatusReason,
+ context.User.Email
+ ))
+ ).ToString();
+ }</set-body>
+ </send-one-way-request>
+ </when>
</choose>
```
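To try the webhook outside of API Management, a minimal sketch that posts the same payload shape (the URL is a placeholder for your own incoming webhook):

```powershell
# Sketch: send a test message to a Slack incoming webhook.
$webhookUrl = 'https://hooks.slack.com/services/<your-webhook-path>'
$payload = @{
    username   = 'APIM Alert'
    icon_emoji = ':ghost:'
    text       = 'Test alert from API Management'
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $webhookUrl -ContentType 'application/json' -Body $payload
```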
At the end, you get the following policy:
</send-request>
<choose>
- <!-- Check active property in response -->
- <when condition="@((bool)((IResponse)context.Variables["tokenstate"]).Body.As<JObject>()["active"] == false)">
- <!-- Return 401 Unauthorized with http-problem payload -->
- <return-response response-variable-name="existing response variable">
- <set-status code="401" reason="Unauthorized" />
- <set-header name="WWW-Authenticate" exists-action="override">
- <value>Bearer error="invalid_token"</value>
- </set-header>
- </return-response>
- </when>
- </choose>
+ <!-- Check active property in response -->
+ <when condition="@((bool)((IResponse)context.Variables["tokenstate"]).Body.As<JObject>()["active"] == false)">
+ <!-- Return 401 Unauthorized with http-problem payload -->
+ <return-response response-variable-name="existing response variable">
+ <set-status code="401" reason="Unauthorized" />
+ <set-header name="WWW-Authenticate" exists-action="override">
+ <value>Bearer error="invalid_token"</value>
+ </set-header>
+ </return-response>
+ </when>
+ </choose>
    <base />
</inbound>
```
Once you have this information, you can make requests to all the backend systems
</send-request>
<send-request mode="new" response-variable-name="throughputdata" timeout="20" ignore-error="true">
-<set-url>@($"https://production.acme.com/throughput?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")</set-url>
+ <set-url>@($"https://production.acme.com/throughput?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["toDate"]}")</set-url>
    <set-method>GET</set-method>
</send-request>
<send-request mode="new" response-variable-name="accidentdata" timeout="20" ignore-error="true">
-<set-url>@($"https://production.acme.com/accidentdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")</set-url>
+ <set-url>@($"https://production.acme.com/accidentdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["toDate"]}")</set-url>
    <set-method>GET</set-method>
</send-request>
```
-These requests execute in sequence, which is not ideal.
+API Management will send these requests sequentially.
### Responding

To construct the composite response, you can use the [return-response](./api-management-advanced-policies.md#ReturnResponse) policy. The `set-body` element can use an expression to construct a new `JObject` with all the component representations embedded as properties.

```xml
The complete policy looks as follows:
```xml
<policies>
- <inbound>
-
- <set-variable name="fromDate" value="@(context.Request.Url.Query["fromDate"].Last())">
- <set-variable name="toDate" value="@(context.Request.Url.Query["toDate"].Last())">
+ <inbound>
+        <set-variable name="fromDate" value="@(context.Request.Url.Query["fromDate"].Last())" />
+        <set-variable name="toDate" value="@(context.Request.Url.Query["toDate"].Last())" />
<send-request mode="new" response-variable-name="revenuedata" timeout="20" ignore-error="true"> <set-url>@($"https://accounting.acme.com/salesdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")"</set-url>
The complete policy looks as follows:
</send-request>
<send-request mode="new" response-variable-name="throughputdata" timeout="20" ignore-error="true">
- <set-url>@($"https://production.acme.com/throughput?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")"</set-url>
+    <set-url>@($"https://production.acme.com/throughput?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["toDate"]}")</set-url>
    <set-method>GET</set-method>
</send-request>
<send-request mode="new" response-variable-name="accidentdata" timeout="20" ignore-error="true">
- <set-url>@($"https://production.acme.com/accidentdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")"</set-url>
+    <set-url>@($"https://production.acme.com/accidentdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["toDate"]}")</set-url>
    <set-method>GET</set-method>
</send-request>
The complete policy looks as follows:
new JProperty("materialdata",((IResponse)context.Variables["materialdata"]).Body.As<JObject>()),
new JProperty("throughputdata",((IResponse)context.Variables["throughputdata"]).Body.As<JObject>()),
new JProperty("accidentdata",((IResponse)context.Variables["accidentdata"]).Body.As<JObject>())
- ).ToString())
+ ).ToString())
        </set-body>
    </return-response>
- </inbound>
- <backend>
- <base />
- </backend>
- <outbound>
- <base />
- </outbound>
+ </inbound>
+ <backend>
+ <base />
+ </backend>
+ <outbound>
+ <base />
+ </outbound>
</policies>
```
-In the configuration of the placeholder operation, you can configure the dashboard resource to be cached for at least an hour.
-
## Summary

Azure API Management service provides flexible policies that can be selectively applied to HTTP traffic and enables composition of backend services. Whether you want to enhance your API gateway with alerting functions, verification, validation capabilities or create new composite resources based on multiple backend services, the `send-request` and related policies open a world of possibilities.
app-service Nat Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/nat-gateway-integration.md
ms.assetid: 0a84734e-b5c1-4264-8d1f-77e781b28426
Previously updated : 08/04/2021
Last updated : 04/08/2022
ms.devlang: azurecli
For more information and pricing, go to the [NAT gateway overview](../../virtual
:::image type="content" source="./media/nat-gateway-integration/nat-gateway-overview.png" alt-text="Diagram shows Internet traffic flowing to a NAT gateway in an Azure Virtual Network.":::

> [!Note]
-> * Using NAT gateway with App Service is dependent on virtual network integration, and therefore **Standard**, **Premium**, **PremiumV2** or **PremiumV3** App Service plan is required.
+> * Using NAT gateway with App Service is dependent on virtual network integration, and therefore a supported App Service plan pricing tier is required.
> * When using NAT gateway together with App Service, all traffic to Azure Storage must be using private endpoint or service endpoint.
> * NAT gateway cannot be used together with App Service Environment v1 or v2.
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Title: Integrate your app with an Azure virtual network
description: Integrate your app in Azure App Service with Azure virtual networks.
Previously updated : 03/04/2022
Last updated : 04/08/2022
After your app integrates with your virtual network, it uses the same DNS server that your virtual network is configured with.
There are some limitations with using regional virtual network integration:
-* The feature is available from all App Service deployments in Premium v2 and Premium v3. It's also available in Standard but only from newer App Service deployments. If you're on an older deployment, you can only use the feature from a Premium v2 App Service plan. If you want to make sure you can use the feature in a Standard App Service plan, create your app in a Premium v3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you want after the plan is created.
+* The feature is available from all App Service deployments in Premium v2 and Premium v3. It's also available in Basic and Standard tier but only from newer App Service deployments. If you're on an older deployment, you can only use the feature from a Premium v2 App Service plan. If you want to make sure you can use the feature in a Standard App Service plan, create your app in a Premium v3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you want after the plan is created.
* The feature can't be used by Isolated plan apps that are in an App Service Environment.
* You can't reach resources across peering connections with classic virtual networks.
* The feature requires an unused subnet that's an IPv4 `/28` block or larger in an Azure Resource Manager virtual network.
The regional virtual network integration feature has no extra charge for use beyond the App Service plan pricing tier charge.
Three charges are related to the use of the gateway-required virtual network integration feature:
-* **App Service plan pricing tier charges**: Your apps need to be in a Standard, Premium, Premium v2, or Premium v3 App Service plan. For more information on those costs, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
+* **App Service plan pricing tier charges**: Your apps need to be in a Basic, Standard, Premium, Premium v2, or Premium v3 App Service plan. For more information on those costs, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
* **Data transfer costs**: There's a charge for data egress, even if the virtual network is in the same datacenter. Those charges are described in [Data transfer pricing details](https://azure.microsoft.com/pricing/details/data-transfers/).
* **VPN gateway costs**: There's a cost to the virtual network gateway that's required for the point-to-site VPN. For more information, see [VPN gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway/).
app-service Resources Kudu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/resources-kudu.md
It also provides other features, such as:
- Generates [custom deployment scripts](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
- Allows access with [REST API](https://github.com/projectkudu/kudu/wiki/REST-API).
-## RBAC permissions required to access Kudo
+## RBAC permissions required to access Kudu
To access Kudu in the browser with Azure Active Directory authentication, you need to be a member of a built-in or custom role.

- If using a built-in role, you must be a member of Website Contributor, Contributor, or Owner.
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
We're now ready to deploy our .NET app to the App Service.
| Instructions | Screenshot |
|:-|--:|
| [!INCLUDE [Deploy app service step 1](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-app-service-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-01-240px.png" alt-text="A screenshot showing how to install the Azure Account and App Service extensions in Visual Studio Code." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-01.png"::: |
-| [!INCLUDE [Deploy app service step 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-app-service-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-02-240px.png" alt-text="A screenshot showing how to use the Azure App Service extension to deploy an app to Azure from Visual Studio Code." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-02.png"::: |
+| [!INCLUDE [Deploy app service step 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-app-service-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-folder-small.png" alt-text="A screenshot showing how to deploy using the publish folder." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-folder.png"::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-workflow-small.png" alt-text="A screenshot showing the command palette deployment workflow." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-workflow.png"::: |
### [Deploy using Local Git](#tab/azure-cli-deploy)
application-gateway Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-bicep.md
+
+ Title: 'Quickstart: Direct web traffic using Bicep'
+
+description: In this quickstart, you learn how to use Bicep to create an Azure Application Gateway that directs web traffic to virtual machines in a backend pool.
+Last updated : 04/14/2022
+# Quickstart: Direct web traffic with Azure Application Gateway - Bicep
+
+In this quickstart, you use Bicep to create an Azure Application Gateway. Then you test the application gateway to make sure it works correctly.
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Review the Bicep file
+
+This Bicep file creates a simple setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-qs/).
++
+Multiple Azure resources are defined in the Bicep file:
+
+- [**Microsoft.Network/applicationgateways**](/azure/templates/microsoft.network/applicationgateways)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses): one for the application gateway, and two for the virtual machines.
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines): two virtual machines
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces): two for the virtual machines
+- [**Microsoft.Compute/virtualMachine/extensions**](/azure/templates/microsoft.compute/virtualmachines/extensions): to configure IIS and the web pages
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name myResourceGroupAG --location eastus
+ az deployment group create --resource-group myResourceGroupAG --template-file main.bicep --parameters adminUsername=<admin-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name myResourceGroupAG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName myResourceGroupAG -TemplateFile ./main.bicep -adminUsername "<admin-username>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-username\>** with the admin username for the backend servers. You'll also be prompted to enter **adminPassword**.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group myResourceGroupAG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName myResourceGroupAG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name myResourceGroupAG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myResourceGroupAG
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
keywords: document processing
<!-- markdownlint-disable MD029 -->

# Get started with the Form Recognizer Sample Labeling tool
-Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine-learning models to extract key-value pairs, text, and tables from your documents. You can use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
+Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine-learning models to extract key-value pairs, text, and tables from your documents. You can use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
The Form Recognizer Sample Labeling tool is an open source tool that enables you to test the latest features of Azure Form Recognizer and Optical Character Recognition (OCR)
The Form Recognizer Sample Labeling tool is an open source tool that enables you
## Prerequisites
-You will need the following to get started:
+You'll need the following to get started:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
Form Recognizer offers several prebuilt models to choose from. Each model has it
1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-1. On the sample tool home page select **Use prebuilt model to get data**.
+1. On the sample tool home page, select **Use prebuilt model to get data**.
:::image type="content" source="../media/label-tool/prebuilt-1.jpg" alt-text="Analyze results of Form Recognizer Layout":::
-1. Select the **Form Type** your would like to analyze from the dropdown window.
+1. Select the **Form Type** to analyze from the dropdown window.
1. Choose a URL for the file you would like to analyze from the options below:
The Azure Form Recognizer Layout API extracts text, tables, selection marks, and
1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-1. On the sample tool home page select **Use Layout to get text, tables and selection marks**.
+1. On the sample tool home page, select **Use Layout to get text, tables and selection marks**.
:::image type="content" source="../media/label-tool/layout-1.jpg" alt-text="Connection settings for Layout Form Recognizer tool.":::
Train a custom model to analyze and extract data from forms and documents specif
* Configure CORS
- [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you will need access to the CORS blade of your storage account.
-
- :::image type="content" source="../media/quickstarts/storage-cors-example.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
+ [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS blade of your storage account.
1. Select the CORS blade for the storage account.
+ :::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
+ 1. Start by creating a new CORS entry in the Blob service.
- 1. Set the **Allowed origins** to **https://fott-2-1.azurewebsites.net**.
+ 1. Set the **Allowed origins** to **<https://fott-2-1.azurewebsites.net>**.
+
+ :::image type="content" source="../media/quickstarts/storage-cors-example.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
+
+ > [!TIP]
+ > You can use the wildcard character '*' rather than a specified domain to allow all origin domains to make requests via CORS.
1. Select all eight available options for **Allowed methods**.
Train a custom model to analyze and extract data from forms and documents specif
1. Set the **Max Age** to 120 seconds or any acceptable value.
- 1. Click the save button at the top of the page to save the changes.
+ 1. Select the save button at the top of the page to save the changes.
CORS should now be configured to use the storage account from Form Recognizer Studio.
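If you'd rather script this step than click through the portal, the same rule can be set with the Azure CLI. This is a minimal sketch, not part of the article's procedure; the account name is a placeholder, and it assumes you're signed in with permission to manage the storage account:

```azurecli
# Add a CORS rule on the Blob service mirroring the portal steps above.
# <your-storage-account> is a placeholder; older CLI versions may not accept PATCH.
az storage cors add \
    --account-name <your-storage-account> \
    --services b \
    --origins https://fott-2-1.azurewebsites.net \
    --methods DELETE GET HEAD MERGE OPTIONS PATCH POST PUT \
    --allowed-headers "*" \
    --exposed-headers "*" \
    --max-age 120
```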
Train a custom model to analyze and extract data from forms and documents specif
1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-1. On the sample tool home page select **Use custom form to train a model with labels and get key-value pairs**.
+1. On the sample tool home page, select **Use custom form to train a model with labels and get key-value pairs**.
:::image type="content" source="../media/label-tool/custom-1.jpg" alt-text="Train a custom model.":::
Configure the **Project Settings** fields with the following values:
> * **Description**. Add a brief description.
> * **SAS URL**. Paste the shared access signature (SAS) URL for your Azure Blob Storage container.
- * To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read**, **Write**, **Delete** and **List** permissions are checked, and click **Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
+ * To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read**, **Write**, **Delete** and **List** permissions are checked, and select **Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
:::image type="content" source="../media/quickstarts/get-sas-url.png" alt-text="SAS location.":::
When you create or open a project, the main tag editor window opens. The tag edi
Select **Run OCR on all files** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
-The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left hand of the document to see the extracted table. Because the table content is automatically extracted, we will not be labeling the table content, but rather rely on the automated extraction.
+The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left hand of the document to see the extracted table. Because the table content is automatically extracted, we won't label the table content, but rather rely on the automated extraction.
:::image type="content" source="../media/label-tool/table-extraction.png" alt-text="Table visualization in Sample Labeling tool."::: ##### Apply labels to text
-Next, you will create tags (labels) and apply them to the text elements that you want the model to analyze. Note the sample label data set includes already labeled fields; we will add another field.
+Next, you'll create tags (labels) and apply them to the text elements that you want the model to analyze. Note the sample label data set includes already labeled fields; we'll add another field.
Use the tags editor pane to create a new tag you'd like to identify:
Choose the Train icon on the left pane to open the Training page. Then select th
#### Analyze a custom form
-1. Select the **Analyze** (light bulb) icon on the left to test your model.
+1. Select the **Analyze** (light bulb) icon on the left to test your model.
-1. Select source **Local file** and browse for a file to select from the sample dataset that you unzipped in the test folder.
+1. Select source **Local file** and browse for a file to select from the sample dataset that you unzipped in the test folder.
1. Choose the **Run analysis** button to get key/value pairs, text, and table predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag.
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
# Get started: Form Recognizer Studio | Preview

>[!NOTE]
-> Form Recognizer Studio is currently in public preview. Some features may not be supported or have limited capabilities.
+> Form Recognizer Studio is currently in public preview. Some features may not be supported or have limited capabilities.
[Form Recognizer Studio preview](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. Get started with exploring the pre-trained models with sample documents or your own. Create projects to build custom template models and reference the models in your applications using the [Python SDK preview](try-v3-python-sdk.md) and other quickstarts.
A **standard performance** [**Azure Blob Storage account**](https://portal.azure
[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS blade of your storage account.

1. Select the CORS blade for the storage account.
-2. Start by creating a new CORS entry in the Blob service.
-3. Set the **Allowed origins** to **https://formrecognizer.appliedai.azure.com**.
-4. Select all the available 8 options for **Allowed methods**.
-5. Approve all **Allowed headers** and **Exposed headers** by entering an * in each field.
-6. Set the **Max Age** to 120 seconds or any acceptable value.
-7. Select the save button at the top of the page to save the changes.
+
+ :::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
+
+1. Start by creating a new CORS entry in the Blob service.
+
+1. Set the **Allowed origins** to **<https://formrecognizer.appliedai.azure.com>**.
+
+ :::image type="content" source="../media/quickstarts/cors-updated-image.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
+
+ > [!TIP]
+ > You can use the wildcard character '*' rather than a specified domain to allow all origin domains to make requests via CORS.
+
+1. Select all 8 available options for **Allowed methods**.
+
+1. Approve all **Allowed headers** and **Exposed headers** by entering an * in each field.
+
+1. Set the **Max Age** to 120 seconds or any acceptable value.
+
+1. Select the save button at the top of the page to save the changes.
CORS should now be configured to use the storage account from Form Recognizer Studio.
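As with the Sample Labeling tool above, this CORS rule can also be scripted instead of set in the portal; a sketch with a placeholder account name:

```azurecli
az storage cors add \
    --account-name <your-storage-account> \
    --services b \
    --origins https://formrecognizer.appliedai.azure.com \
    --methods DELETE GET HEAD MERGE OPTIONS PATCH POST PUT \
    --allowed-headers "*" \
    --exposed-headers "*" \
    --max-age 120
```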
Use fixed tables to extract specific collection of values for a given set of fie
### Signature detection

>[!NOTE]
-> Signature fields are currently only supported for custom template models. When training a custom neural model, labeled signature fields are ignored.
+> Signature fields are currently only supported for custom template models. When training a custom neural model, labeled signature fields are ignored.
To label for signature detection (custom form only):
azure-app-configuration Integrate Kubernetes Deployment Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-kubernetes-deployment-helm.md
# Integrate with Kubernetes Deployment using Helm
+Applications hosted in Kubernetes can access data in App Configuration [using the App Configuration provider library](./enable-dynamic-configuration-aspnet-core.md). The App Configuration provider has built-in caching and refreshing capabilities so applications can have dynamic configuration without redeployment. If you prefer not to update your application, this tutorial shows how to bring data from App Configuration to your Kubernetes deployment by using Helm. This way, your application can continue accessing configuration from Kubernetes variables and secrets. You run a Helm upgrade when you want your application to pick up new configuration changes.
+ Helm provides a way to define, install, and upgrade applications running in Kubernetes. A Helm chart contains the information necessary to create an instance of a Kubernetes application. Configuration is stored outside of the chart itself, in a file called *values.yaml*. During the release process, Helm merges the chart with the proper configuration to run the application. For example, variables defined in *values.yaml* can be referenced as environment variables inside the running containers. Helm also supports creation of Kubernetes Secrets, which can be mounted as data volumes or exposed as environment variables.
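To make the flow concrete, here's a hedged sketch of the idea: export key-values from an App Configuration store into a Helm values file, then run a Helm upgrade so the release picks them up. The store, release, and chart names are placeholders, not values from this tutorial:

```bash
# Export key-values from App Configuration to a local Helm values file.
az appconfig kv export \
    --name <your-app-config-store> \
    --destination file \
    --path myvalues.yaml \
    --format yaml \
    --yes

# Re-deploy the release so the new values flow into the cluster.
helm upgrade myrelease ./mychart -f myvalues.yaml
```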
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
Each metric includes two versions. One metric measures performance for the entir
| Cache Read |The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. **This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](./cache-planning-faq.yml#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.** |
| Cache Write |The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client. |
| Connected Clients |The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there may still be a few instances of connected clients because of internal processes and connections. |
+| Connections Created Per Second | The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. |
+| Connections Closed Per Second | The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. |
| CPU |The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. |
| Errors | Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could have more added in the future. The error types represented now are as follows: <br/><ul><li>**Failover** - when a cache fails over (subordinate promotes to primary)</li><li>**Dataloss** - when there's data loss on the cache</li><li>**UnresponsiveClients** - when the clients aren't reading data from the server fast enough</li><li>**AOF** - when there's an issue related to AOF persistence</li><li>**RDB** - when there's an issue related to RDB persistence</li><li>**Import** - when there's an issue related to Import RDB</li><li>**Export** - when there's an issue related to Export RDB</li></ul> |
| Evicted Keys |The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit. This number maps to `evicted_keys` from the Redis INFO command. |
azure-functions Durable Functions Timers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-timers.md
You create a durable timer by calling the [`CreateTimer` (.NET)](/dotnet/api/mic
When you create a timer that expires at 4:30 pm, the underlying Durable Task Framework enqueues a message that becomes visible only at 4:30 pm. When running in the Azure Functions Consumption plan, the newly visible timer message will ensure that the function app gets activated on an appropriate VM.

> [!NOTE]
-> * Starting with [version 2.3.0](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.3.0) of the Durable Extension, Durable timers are unlimited for .NET apps. For JavaScript, Python, and PowerShell apps, as well as .NET apps using earlier versions of the extension, Durable timers are limited to seven days. When you are using an older extension version or a non-.NET language runtime and need a delay longer than seven days, use the timer APIs in a `while` loop to simulate a longer delay.
+> * Starting with [version 2.3.0](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.3.0) of the Durable Extension, Durable timers are unlimited for .NET apps. For JavaScript, Python, and PowerShell apps, as well as .NET apps using earlier versions of the extension, Durable timers are limited to six days. When you are using an older extension version or a non-.NET language runtime and need a delay longer than six days, use the timer APIs in a `while` loop to simulate a longer delay.
> * Always use `CurrentUtcDateTime` instead of `DateTime.UtcNow` in .NET or `currentUtcDateTime` instead of `Date.now` or `Date.UTC` in JavaScript when computing the fire time for durable timers. For more information, see the [orchestrator function code constraints](durable-functions-code-constraints.md) article.

## Usage for delay
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[CGI Federal, Inc.](https://www.cgi.com/en/us-federal)|
|[CGI Technologies and Solutions Inc.](https://www.cgi.com)|
|[Ciellos Inc.](https://www.ciellos.com/)|
-|[Ciracom Inc.](https://ciracom.com)|
+|[Ciracom Inc.](https://www.ciracomcloud.com)|
|[Clients First Business Solutions LLC](https://www.clientsfirst-us.com)|
|[ClearShark](https://clearshark.com/)|
|[CloudFit Software, LLC](https://www.cloudfitsoftware.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[HTS Voice & Data Systems, Inc.](https://www.hts-tx.com/)| |[HumanTouch LLC](https://www.humantouchllc.com/)| |[Hyertek Inc.](https://www.hyertek.com)|
-|[I10 Inc](http://i10agile.com/)|
+|I10 Inc|
|[I2, Inc. (IBM)](https://www.ibm.com/security/intelligence-analysis/i2)|
|[i3 Business Solutions, LLC](https://www.i3businesssolutions.com/)|
|[i3 LLC](http://i3llc.net/)|
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The following table shows the current support for the Azure Monitor agent with A
| Azure Monitor feature | Current support | More information |
|:|:|:|
| Text logs and Windows IIS logs | Public preview | [Collect text logs with Azure Monitor agent (preview)](data-collection-text-log.md) |
-| Windows Client OS installer | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
+| Windows client installer | Public preview | [Set up Azure Monitor agent on Windows client devices](azure-monitor-agent-windows-client.md) |
| [VM insights](../vm/vminsights-overview.md) | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |

The following table shows the current support for the Azure Monitor agent with Azure solutions.
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-unified-log.md
Location affects which region the alert rule is evaluated in. Queries are execut
Each Log Alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for Log Alerts configured for [at scale monitoring](#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query.
-Prices for Log Alert rules are availalble on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+Prices for Log Alert rules are available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
## View log alerts usage on your Azure bill
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
It's important to note that the following example doesn't cause the ApplicationI
For more information, see [ILogger configuration](ilogger.md#logging-level).
-### How can I get all custom ILogger error messages?
-
-Disable adaptive sampling. Examples of how to do this are provided in [Configure the Application Insights SDK](#configure-the-application-insights-sdk) section of this article.
-
### Some Visual Studio templates used the UseApplicationInsights() extension method on IWebHostBuilder to enable Application Insights. Is this usage still valid?

The extension method `UseApplicationInsights()` is still supported, but it's marked as obsolete in Application Insights SDK version 2.8.0 and later. It will be removed in the next major version of the SDK. To enable Application Insights telemetry, we recommend using `AddApplicationInsightsTelemetry()` because it provides overloads to control some configuration. Also, in ASP.NET Core 3.X apps, `services.AddApplicationInsightsTelemetry()` is the only way to enable Application Insights.
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Enabling monitoring on your ASP.NET based web applications running on [Azure App
| Data | ASP.NET Basic Collection | ASP.NET Recommended collection |
| --- | --- | --- |
- | Adds CPU, memory, and I/O usage trends |Yes |Yes |
+ | Adds CPU, memory, and I/O usage trends |No |Yes |
| Collects usage trends, and enables correlation from availability results to transactions | Yes |Yes |
| Collects exceptions unhandled by the host process | Yes |Yes |
| Improves APM metrics accuracy under load, when sampling is used | Yes |Yes |
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
Azure Application Insights displays data about your application in a Microsoft Azure *resource*. Creating a new resource is therefore part of [setting up Application Insights to monitor a new application][start]. After you have created your new resource, you can get its instrumentation key and use that to configure the Application Insights SDK. The instrumentation key links your telemetry to the resource.

> [!IMPORTANT]
-> [Classic Application Insights has been deprecated](https://azure.microsoft.com/updates/we-re-retiring-classic-application-insights-on-29-february-2024/). Please follow these [instructions on how upgrade to workspace-based Application Insights](convert-classic-resource.md).
+> On **February 29th, 2024,** [support for classic Application Insights will end](https://azure.microsoft.com/updates/we-re-retiring-classic-application-insights-on-29-february-2024). [Transition to workspace-based Application Insights](convert-classic-resource.md) to take advantage of [new capabilities](create-workspace-resource.md#new-capabilities). Newer regions introduced after February 2021 do not support creating classic Application Insights resources.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
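As a hedged illustration of creating a workspace-based resource from the command line (this requires the `application-insights` CLI extension, and all names are placeholders rather than values from this article):

```azurecli
# az extension add --name application-insights
az monitor app-insights component create \
    --app my-app-insights \
    --location westus2 \
    --resource-group exampleRG \
    --workspace "/subscriptions/<subscription-id>/resourceGroups/exampleRG/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```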
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
builder.AddApplicationInsights(
The Application Insights extension in Azure Web Apps uses the new provider. You can modify the filtering rules in the *appsettings.json* file for your application.
+### I can't see some of the logs from my application in the workspace.
+
+This can happen because of adaptive sampling. Adaptive sampling is enabled by default in all the latest versions of the Application Insights ASP.NET and ASP.NET Core Software Development Kits (SDKs). See [Sampling in Application Insights](/azure/azure-monitor/app/sampling) for more details.
+
## Next steps

* [Logging in .NET](/dotnet/core/extensions/logging)
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.2.10.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.10/applicationinsights-agent-3.2.10.jar) file.
+Download the [applicationinsights-agent-3.2.11.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.11/applicationinsights-agent-3.2.11.jar) file.
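If you're scripting the setup instead of downloading through a browser, the download step might look like this sketch, assuming `curl` is available:

```bash
curl -L -o applicationinsights-agent-3.2.11.jar \
    https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.11/applicationinsights-agent-3.2.11.jar
```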
> [!WARNING] >
Download the [applicationinsights-agent-3.2.10.jar](https://github.com/microsoft
#### Point the JVM to the jar file
-Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to your application's JVM args.
+Add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to your application's JVM args.
> [!TIP]
> For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to your applicatio
APPLICATIONINSIGHTS_CONNECTION_STRING = <Copy connection string from Application Insights Resource Overview>
```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.10.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.11.jar` with the following content:
```json
{
If you want to attach custom dimensions to your logs, use [Log4j 1.2 MDC](https:
### Send custom telemetry by using the 2.x SDK
-1. Add `applicationinsights-core-2.6.4.jar` to your application. All 2.x versions are supported by Application Insights Java 3.x. If you have a choice. it's worth using the latest version:
+1. Add `applicationinsights-core-2.6.4.jar` to your application. All 2.x versions are supported by Application Insights Java 3.x. If you have a choice, it's worth using the latest version:
```xml <dependency>
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Configure [App Services](../../app-service/configure-language-java.md#set-java-r
## Spring Boot
-Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` somewhere before `-jar`, for example:
```
-java -javaagent:path/to/applicationinsights-agent-3.2.10.jar -jar <myapp.jar>
+java -javaagent:path/to/applicationinsights-agent-3.2.11.jar -jar <myapp.jar>
```

## Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.10.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.11.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.10.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.11.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.10.jar -jar <myapp.jar>
+ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.11.jar -jar <myapp.jar>
```

## Tomcat 8 (Linux)
ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.10.jar -jar <my
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file:

```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.10.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.11.jar"
```

### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.10.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content:

```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.10.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.11.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.10.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.11.jar
```

Quotes aren't necessary, but if you want to include them, the proper placement is:

```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.10.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.11.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7

### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java
...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.10.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.11.jar -Xms1303m -Xmx1303m ..."
...
```

### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml
...
Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to the existing `j
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.2.10.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.2.11.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
```
--exec
--javaagent:path/to/applicationinsights-agent-3.2.10.jar
+-javaagent:path/to/applicationinsights-agent-3.2.11.jar
```

## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.2.10.jar>
+ -javaagent:path/to/applicationinsights-agent-3.2.11.jar>
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to the existing `j
## WebSphere 8

Open Management Console
-go to **servers > WebSphere application servers > Application servers**, choose the appropriate application servers and select:
+Go to **servers > WebSphere application servers > Application servers**, choose the appropriate application servers and select:
```
Java and Process Management > Process definition > Java Virtual Machine
```

In "Generic JVM arguments" add the following:

```
--javaagent:path/to/applicationinsights-agent-3.2.10.jar
+-javaagent:path/to/applicationinsights-agent-3.2.11.jar
```

After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line:

```
--javaagent:path/to/applicationinsights-agent-3.2.10.jar
+-javaagent:path/to/applicationinsights-agent-3.2.11.jar
```

## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.10.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.11.jar`.
You can specify your own configuration file path using either

* `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or
* `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.10.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.11.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
You can also set the connection string using the environment variable `APPLICATI
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.10.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.11.jar` is located.
```json
{
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from 3.2.10, you can capture request and response headers on your server (request) telemetry:
+Starting from 3.2.11, you can capture request and response headers on your server (request) telemetry:
```json
{
Again, the header names are case-insensitive, and the examples above will be cap
By default, http server requests that result in 4xx response codes are captured as errors.
-Starting from version 3.2.10, you can change this behavior to capture them as success if you prefer:
+Starting from version 3.2.11, you can change this behavior to capture them as success if you prefer:
```json
{
Starting from version 3.2.0, the following preview instrumentations can be enabl
``` > [!NOTE] > Akka instrumentation is available starting from version 3.2.2
-> Vertx HTTP Library instrumentation is available starting from version 3.2.10
+> Vertx HTTP Library instrumentation is available starting from version 3.2.11
## Metric interval
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.2.10.jar` is located.
+`applicationinsights-agent-3.2.11.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this article, we cover some of the common issues that you might face while in
## Check the self-diagnostic log file

By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory
-that holds the `applicationinsights-agent-3.2.9.jar` file.
+that holds the `applicationinsights-agent-3.2.11.jar` file.
This log file is the first place to check for hints to any issues you might be experiencing. If no log file is generated, check that your Java application has write permission to the directory that holds the
-`applicationinsights-agent-3.2.9.jar` file.
+`applicationinsights-agent-3.2.11.jar` file.
If still no log file is generated, check the stdout log from your Java application. Application Insights Java 3.x should log any errors to stdout that would prevent it from logging to its normal location.
azure-monitor Activity Logs Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-logs-insights.md
description: View the overview of Azure Activity logs of your resources
Previously updated : 03/14/2021 Last updated : 04/14/2022 #Customer intent: As an IT administrator, I want to track changes to resource groups or specific resources in a subscription and to see which administrators or services make these changes. # Activity logs insights (Preview)
+Activity logs insights let you view information about changes to resources and resource groups in your Azure subscription. It uses information from the [Activity log](activity-log.md) to also present data about which users or services performed particular activities in the subscription. This includes which administrators deleted, updated or created resources, and whether the activities failed or succeeded. This article explains how to enable and use Activity log insights.
-Activity logs insights let you view information about changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to view Activity log insights in the Azure portal.
-
-Before using Activity log insights, you'll have to [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
-
-## How does Activity logs insights work?
-
-Activity logs you send to a [Log Analytics workspace](/azure/azure-monitor/logs/log-analytics-workspace-overview) are stored in a table called AzureActivity.
-
-Activity logs insights are a curated [Log Analytics workbook](/azure/azure-monitor/visualize/workbooks-overview) with dashboards that visualize the data in the AzureActivity table. For example, which administrators deleted, updated or created resources, and whether the activities failed or succeeded.
+## Enable Activity log insights
+The only requirement to enable Activity log insights is to [configure the Activity log to export to a Log Analytics workspace](activity-log.md#send-to-log-analytics-workspace). Pre-built [workbooks](/azure/azure-monitor/visualize/workbooks-overview) curate this data, which is stored in the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) table in the workspace.
:::image type="content" source="media/activity-log/activity-logs-insights-main.png" lightbox="media/activity-log/activity-logs-insights-main.png" alt-text="A screenshot showing Azure Activity logs insights dashboards":::
azure-monitor Platform Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md
Processing data to stream logs is charged for [certain services](resource-logs-c
The charge is based on the number of bytes in the exported JSON formatted log data, measured in GB (10^9 bytes).
-Pricing is availalble on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+Pricing is available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
## Next steps
azure-monitor Access Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/access-api.md
# Access the Azure Monitor Log Analytics API
-You can communicate with the Azure Monitor Log Analytics API using this endpoint: [api.loganalytics.io](https://api.loganalytics.io/). To access the API, you must authenticate through Azure Active Directory (Azure AD).
+You can communicate with the Azure Monitor Log Analytics API using this endpoint: `https://api.loganalytics.io`. To access the API, you must authenticate through Azure Active Directory (Azure AD).
## Public API format
The public API format is:
https://{hostname}/{api-version}/workspaces/{workspaceId}/query?[parameters]
```

where:
+ - **hostname**: `https://api.loganalytics.io`
 - **api-version**: The API version. The current version is "v1"
 - **workspaceId**: Your workspace ID
 - **parameters**: The data required for this query
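Putting the format together, a minimal sketch of a query request (the workspace ID and token are placeholders; acquiring the Azure AD token is covered in the authentication article):

```bash
curl -X POST "https://api.loganalytics.io/v1/workspaces/<workspace-id>/query" \
    -H "Authorization: Bearer <access-token>" \
    -H "Content-Type: application/json" \
    -d '{"query": "AzureActivity | take 5"}'
```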
azure-monitor Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/authentication-authorization.md
Before beginning, make sure you have all the values required to make OAuth2 call
In the client credentials flow, the token is used with the ARM endpoint. A single request is made to receive a token, using the application permissions provided during the Azure AD application setup. The resource requested is: <https://management.azure.com/>.
-You can also use this flow to request a token to [https://api.loganalytics.io](https://api.loganalytics.io/). Replace the "resource" in the example.
+You can also use this flow to request a token to `https://api.loganalytics.io`. Replace the "resource" in the example.
#### Client Credentials Token URL (POST request)
azure-monitor Request Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/request-format.md
# Azure Monitor Log Analytics API request format

There are two endpoints through which you can communicate with the Log Analytics API:
-- A direct URL for the API: [api.loganalytics.io](https://api.loganalytics.io/)
+- A direct URL for the API: `https://api.loganalytics.io`
- Through Azure Resource Manager (ARM).

While the URLs are different, the query parameters are the same for each endpoint. Both endpoints require authorization through Azure Active Directory (Azure AD).
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 03/02/2022 Last updated : 04/14/2022

# Resource limits for Azure NetApp Files
Size: 4096 Blocks: 8 IO Block: 65536 directory
## `Maxfiles` limits <a name="maxfiles"></a>
-Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The `maxfiles` limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The `maxfiles` limit for a volume increases or decreases at the rate of 20 million files per TiB of provisioned volume size.
+Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The `maxfiles` limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The `maxfiles` limit for a volume increases or decreases at the rate of 21,251,126 files per TiB of provisioned volume size.
-The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 20 million. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
+The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 21,251,126. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
| Volume size (quota) | Automatic readjustment of the `maxfiles` limit |
|-|-|
-| <= 1 TiB | 20 million |
-| > 1 TiB but <= 2 TiB | 40 million |
-| > 2 TiB but <= 3 TiB | 60 million |
-| > 3 TiB but <= 4 TiB | 80 million |
-| > 4 TiB | 100 million |
+| <= 1 TiB | 21,251,126 |
+| > 1 TiB but <= 2 TiB | 42,502,252 |
+| > 2 TiB but <= 3 TiB | 63,753,378 |
+| > 3 TiB but <= 4 TiB | 85,004,504 |
+| > 4 TiB | 106,255,630 |
-If you have allocated at least 4 TiB of quota for a volume, you can initiate a [support request](#request-limit-increase) to increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
+If you have allocated at least 4 TiB of quota for a volume, you can initiate a [support request](#request-limit-increase) to increase the `maxfiles` (inodes) limit beyond 106,255,630. For every 106,255,630 files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 106,255,630 files to 212,511,260 files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
-You can increase the `maxfiles` limit to 500 million if your volume quota is at least 20 TiB.
+You can increase the `maxfiles` limit to 531,278,150 if your volume quota is at least 20 TiB.
## Request limit increase
azure-resource-manager Decompile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/decompile.md
Title: Decompile ARM template JSON to Bicep
description: Describes commands for decompiling Azure Resource Manager templates to Bicep files.
Previously updated : 11/22/2021 Last updated : 04/12/2022 +

# Decompiling ARM template JSON to Bicep

This article describes how to decompile Azure Resource Manager templates (ARM templates) to Bicep files. You must have the [Bicep CLI installed](./install.md) to run the conversion commands.
+> [!NOTE]
+> From Visual Studio Code, you can directly create resource declarations by importing from existing resources. For more information, see [Bicep commands](./visual-studio-code.md#bicep-commands).
+ Decompiling an ARM template helps you get started with Bicep development. If you have a library of ARM templates and want to use Bicep for future development, you can decompile them to Bicep. However, the Bicep file might need revisions to implement best practices for Bicep. This article shows how to run the `decompile` command in Azure CLI. If you're not using Azure CLI, run the command without `az` at the start of the command. For example, `az bicep decompile` becomes ``bicep decompile``.
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-github-actions.md
[GitHub Actions](https://docs.github.com/en/actions) is a suite of features in GitHub to automate your software development workflows.
-In this quickstart, you use the [GitHub Action for Azure Resource Manager deployment](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template) to automate deploying a Bicep file to Azure.
+In this quickstart, you use the [GitHub Actions for Azure Resource Manager deployment](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template) to automate deploying a Bicep file to Azure.
It provides a short introduction to GitHub Actions and Bicep files. If you want more detailed steps on setting up the GitHub Actions workflow and project, see [Learning path: Deploy Azure resources by using Bicep and GitHub Actions](/learn/paths/bicep-github-actions).
az group create -n exampleRG -l westus
## Generate deployment credentials
-Your GitHub action runs under an identity. Use the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command to create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) for the identity.
+Your GitHub Actions workflow runs under an identity. Use the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command to create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) for the identity.
Replace the placeholder `myApp` with the name of your application. Replace `{subscription-id}` with your subscription ID.
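The command itself isn't shown in this excerpt; a hedged sketch based on those instructions:

```azurecli
# The JSON output is typically stored as a GitHub secret for the workflow to use.
az ad sp create-for-rbac \
    --name myApp \
    --role contributor \
    --scopes /subscriptions/{subscription-id}/resourceGroups/exampleRG \
    --sdk-auth
```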
azure-resource-manager Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md
+
+ Title: Create Bicep files by using Visual Studio Code
+description: Describes how to create Bicep files by using Visual Studio Code
+ Last updated : 04/13/2022++
+# Create Bicep files by using Visual Studio Code
+
+This article shows you how to use Visual Studio Code to create Bicep files.
+
+## Install VS Code
+
+To set up your environment for Bicep development, see [Install Bicep tools](install.md). After completing those steps, you'll have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). You also have either the latest [Azure CLI](/cli/azure/) or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az).
+
+## Bicep commands
+
+Visual Studio Code comes with several Bicep commands.
+
+Open or create a Bicep file in VS Code, select the **View** menu and then select **Command Palette**. You can also use the key combination **[CTRL]+[SHIFT]+P** to bring up the command palette.
+
+![Visual Studio Code Bicep commands](./media/visual-studio-code/visual-studio-code-bicep-commands.png)
+
+### Build
+
+The `build` command converts a Bicep file to an Azure Resource Manager template (ARM template). The new JSON template is stored in the same folder with the same file name. If a file with the same file name exists, it overwrites the old file. For more information, see [Bicep CLI commands](./bicep-cli.md#bicep-cli-commands).
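The equivalent CLI invocation is, for example:

```azurecli
# Emits main.json in the same folder, overwriting any existing file.
az bicep build --file main.bicep
```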
+
+### Insert Resource
+
+The `insert resource` command adds a resource declaration in the Bicep file by providing the resource ID of an existing resource. After you select **Insert Resource**, enter the resource ID in the command palette. It takes a few moments to insert the resource.
+
+You can find the resource ID from the Azure portal, or by using:
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az resource list
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Get-AzResource
+```
+++
+Similar to exporting templates, the process tries to create a usable resource. However, most of the inserted resources require some modification before they can be used to deploy Azure resources.
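Since the full `az resource list` output is verbose, a query can narrow it to just the IDs; a sketch with a placeholder resource group name:

```azurecli
az resource list --resource-group exampleRG --query "[].id" --output tsv
```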
+
+For more information, see [Decompiling ARM template JSON to Bicep](./decompile.md).
+
+### Open Visualizer
+
+The visualizer shows the resources defined in the Bicep file with the resource dependency information. The diagram is the visualization of a [Linux virtual machine Bicep file](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-simple-linux/main.bicep).
+
+[![Visual Studio Code Bicep visualizer](./media/visual-studio-code/visual-studio-code-bicep-visualizer.png)](./media/visual-studio-code/visual-studio-code-bicep-visualizer-expanded.png#lightbox)
+
+## Next steps
+
+To walk through a quickstart, see [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md).
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers that are marked with **- registered** are registered by
| Microsoft.Marketplace | core |
| Microsoft.MarketplaceApps | core |
| Microsoft.MarketplaceOrdering - [registered](#registration) | core |
-| Microsoft.Media | [Media Services](/media-services/) |
+| Microsoft.Media | [Media Services](/azure/media-services/) |
| Microsoft.Microservices4Spring | [Azure Spring Cloud](../../spring-cloud/overview.md) |
| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) |
| Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) |
ResourceType : Microsoft.KeyVault/vaults

## Next steps
## Next steps
-For more information about resource providers, including how to register a resource provider, see [Azure resource providers and types](resource-providers-and-types.md).
+For more information about resource providers, including how to register a resource provider, see [Azure resource providers and types](resource-providers-and-types.md).
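For instance, registering a resource provider from the Azure CLI looks like this:

```azurecli
az provider register --namespace Microsoft.Media
# Check progress with: az provider show --namespace Microsoft.Media --query registrationState
```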
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
Title: 'Tutorial: Deploy Bastion: Azure portal'
+ Title: 'Tutorial: Deploy Bastion using manual settings: Azure portal'
description: Learn how to deploy Bastion using manual settings in the Azure portal. Previously updated : 03/14/2022 Last updated : 04/13/2022
-# Tutorial: Deploy Bastion using the Azure portal
+# Tutorial: Deploy Bastion using manual settings
This tutorial helps you deploy Azure Bastion from the Azure portal using manual settings. When you use manual settings, you can specify configuration values such as instance counts and the SKU at the time of deployment. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration.
You can use the following example values when creating this configuration, or yo
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, you can connect securely to any VM in the VNet using its private IP address.

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Type **Bastion** in the search.
-1. Under services, select **Bastions**.
-1. On the Bastions page, select **+ Create** to open the **Create a Bastion** page.
-1. On the **Create a Bastion** page, configure the required settings.
- :::image type="content" source="./media/tutorial-create-host-portal/review-create.png" alt-text="Screenshot of Create a Bastion portal page." lightbox="./media/tutorial-create-host-portal/review-create.png":::
+1. Go to your VNet.
-### Project details
+1. Click **Bastion** in the left pane to open the **Bastion** page.
-* **Subscription**: Select your Azure subscription.
+1. On the Bastion page, click **Configure manually**. This lets you configure specific additional settings before deploying Bastion to your VNet.
+ :::image type="content" source="./media/tutorial-create-host-portal/configure-manually.png" alt-text="Screenshot of Bastion page showing configure manually button." lightbox="./media/tutorial-create-host-portal/configure-manually.png":::
-* **Resource Group**: Select your Resource Group.
+1. On the **Create a Bastion** page, configure the settings for your bastion host. Project details are populated from your virtual network values. Configure the **Instance details** values.
-### Instance details
+ * **Name**: Type the name that you want to use for your bastion resource.
-* **Name**: Type the name that you want to use for your bastion resource.
+ * **Region**: The Azure public region in which the resource will be created. Choose the region in which your virtual network resides.
-* **Region**: The Azure public region in which the resource will be created. Choose the region in which your virtual network resides.
+ * **Tier:** The tier is also known as the **SKU**. For this tutorial, select **Standard**. The Standard SKU lets you configure the instance count for host scaling and other features. For more information about features that require the Standard SKU, see [Configuration settings - SKU](configuration-settings.md#skus).
-* **Tier:** The tier is also known as the **SKU**. For this tutorial, select **Standard**. The Standard SKU lets you configure the instance count for host scaling and other features. For more information about features that require the Standard SKU, see [Configuration settings - SKU](configuration-settings.md#skus).
+ * **Instance count:** This is the setting for **host scaling**. It's configured in scale unit increments. Use the slider or type a number to configure the instance count that you want. For this tutorial, you can select the instance count you'd prefer. For more information, see [Host scaling](configuration-settings.md#instance) and [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion).
-* **Instance count:** This is the setting for **host scaling**. It's configured in scale unit increments. Use the slider or type a number to configure the instance count that you want. For this tutorial, you can select the instance count you'd prefer. For more information, see [Host scaling](configuration-settings.md#instance) and [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion).
+ :::image type="content" source="./media/tutorial-create-host-portal/instance-values.png" alt-text="Screenshot of Bastion page instance values." lightbox="./media/tutorial-create-host-portal/instance-values.png":::
-### Configure virtual networks
+1. Configure the **virtual networks** settings. Select the VNet from the dropdown. If you don't see your VNet in the dropdown list, make sure you selected the correct Resource Group and Region in the previous settings on this page.
-* **Virtual network**: Select your virtual network. If you don't see your VNet in the dropdown list, make sure you selected the correct Resource Group and Region in the previous settings on this page.
+1. To configure the AzureBastionSubnet, click **Manage subnet configuration**.
-* **Subnet**: Once select a virtual network, the subnet field appears on the page. This is the subnet to which your Bastion instances will be deployed. In most cases, you won't already have the subnet **AzureBastionSubnet** configured. The subnet name must be **AzureBastionSubnet**. See the following steps to add the subnet.
+ :::image type="content" source="./media/tutorial-create-host-portal/select-vnet.png" alt-text="Screenshot of configure virtual networks section." lightbox="./media/tutorial-create-host-portal/select-vnet.png":::
-#### Manage subnet configuration
-
-To configure the bastion subnet:
-
-1. Select **Manage subnet configuration**. This takes you to the **Subnets** page.
-
- :::image type="content" source="./media/tutorial-create-host-portal/subnet.png" alt-text="Screenshot of Manage subnet configuration." lightbox="./media/tutorial-create-host-portal/subnet.png":::
1. On the **Subnets** page, select **+Subnet** to open the **Add subnet** page.
-1. Create a subnet using the following guidelines:
+1. On the **Add subnet** page, create the **AzureBastionSubnet** subnet using the following values. Leave the other values at their defaults.
- * The subnet must be named **AzureBastionSubnet**.
+ * The subnet name must be **AzureBastionSubnet**.
* The subnet must be at least **/26 or larger** (/26, /25, /24 etc.) to accommodate features available with the Standard SKU.
-1. You don't need to fill out additional fields on this page. Select **Save** at the bottom of the page to create the subnet.
+ Click **Save** at the bottom of the page to save your values.
-1. At the top of the **Subnets** page, select **Create a Bastion** to return to the Bastion configuration page.
+1. At the top of the **Subnets** page, click **Create a Bastion** to return to the Bastion configuration page.
 :::image type="content" source="./media/tutorial-create-host-portal/create-a-bastion.png" alt-text="Screenshot of Create a Bastion." lightbox="./media/tutorial-create-host-portal/create-a-bastion.png":::
-### Public IP address
-
-This is the public IP address of the Bastion host resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating. This IP address doesn't have anything to do with any of the VMs that you want to connect to.
-
-1. Select **Create new**.
-1. For **Public IP address name**, you can leave the default naming suggestion.
-1. For **Public IP address SKU**, this setting is prepopulated by default to **Standard**. Azure Bastion supports only the Standard public IP address SKU.
-1. For **Assignment**, this setting is prepopulated by default to **Static**. You can't change this setting.
+1. In the **Public IP address** section, configure the public IP address of the Bastion host resource, which is where RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource that you're creating. This IP address doesn't have anything to do with any of the VMs that you want to connect to. Create a new IP address. You can leave the default naming suggestion.
-### Review and create
+1. When you finish specifying the settings, select **Review + Create**. This validates the values.
-1. When you finish specifying the settings, select **Review + Create**. This validates the values. Once validation passes, you can deploy Bastion.
-1. Review your settings.
-1. At the bottom of the page, select **Create**.
-1. You'll see a message letting you know that your deployment is underway. Status will display on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
+1. Once validation passes, you can deploy Bastion. Click **Create**. You'll see a message letting you know that your deployment is in progress. Status will display on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
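If you'd rather script the same deployment, the portal settings above correspond to the `Microsoft.Network/bastionHosts` ARM resource. The following JSON is a minimal sketch, not a tested template: the resource names, location, scale unit count, and API version are illustrative assumptions, and it presumes the **AzureBastionSubnet** subnet and a Standard-SKU static public IP address already exist.

```json
{
  "type": "Microsoft.Network/bastionHosts",
  "apiVersion": "2021-05-01",
  "name": "VNet1-bastion",
  "location": "eastus",
  "sku": { "name": "Standard" },
  "properties": {
    "scaleUnits": 2,
    "ipConfigurations": [
      {
        "name": "IpConf",
        "properties": {
          "subnet": {
            "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'VNet1', 'AzureBastionSubnet')]"
          },
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses', 'VNet1-bastion-ip')]"
          }
        }
      }
    ]
  }
}
```

As in the portal flow, the subnet must be named **AzureBastionSubnet** and be /26 or larger.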
## <a name="connect"></a>Connect to a VM
cloudfoundry Cloudfoundry Deploy Your First App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/cloudfoundry-deploy-your-first-app.md
Now, when you deploy an application, it is automatically created in the new org
## Deploy an application
-Let's use a sample Cloud Foundry application called Hello Spring Cloud, which is written in Java and based on the [Spring Framework](https://spring.io) and [Spring Boot](https://projects.spring.io/spring-boot/).
+Let's use a sample Cloud Foundry application called Hello Spring Cloud, which is written in Java and based on the [Spring Framework](https://spring.io) and [Spring Boot](https://spring.io/projects/spring-boot).
### Clone the Hello Spring Cloud repository
Running the `cf app` command on the application shows that Cloud Foundry is crea
[cf-cli]: https://github.com/cloudfoundry/cli [cloudshell-docs]: ../cloud-shell/overview.md [cf-orgs-spaces-docs]: https://docs.cloudfoundry.org/concepts/roles.html
-[spring-boot]: https://projects.spring.io/spring-boot/
+[spring-boot]: https://spring.io/projects/spring-boot
[spring-framework]: https://spring.io [cf-push-docs]: https://docs.cloudfoundry.org/concepts/how-applications-are-staged.html [cloudfoundry-docs]: https://docs.cloudfoundry.org
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
The **Read** call takes images and documents as its input. They have the followi
* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF * For PDF and TIFF files, up to 2000 pages (only first two pages for the free tier) are processed. * The file size must be less than 50 MB (6 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.
+* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This corresponds to about 8-point font text at 150 DPI.
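As context for these input limits, the Read call accepts a document either as binary data or as a URL that points to it. Here's a hedged sketch of the JSON request body for the URL case; the URL is a placeholder:

```json
{
  "url": "https://example.com/sample-document.pdf"
}
```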
## Supported languages The Read API latest preview supports 164 languages for print text and 9 languages for handwritten text.
cognitive-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/azure-resources.md
When you move into the development phase of the project, you should consider:
Typically there are three parameters you need to consider: * **The throughput you need**:
- * Question answering is a free feature, and the throughput is currently capped at 10 transactions per second for both management APIs and prediction APIs.
+ * The throughput for question answering is currently capped at 10 transactions per second for both management APIs and prediction APIs.
* This should also influence your Azure **Cognitive Search** SKU selection, see more details [here](../../../../search/search-sku-tier.md). Additionally, you may need to adjust Cognitive Search [capacity](../../../../search/search-capacity-planning.md) with replicas. * **Size and the number of knowledge bases**: Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of knowledge bases you need based on number of different subject domains. One subject domain (for a single language) should be in one knowledge base.
Typically there are three parameters you need to consider:
For example, if your tier has 15 allowed indexes, you can publish 14 knowledge bases of the same language (one index per published knowledge base). The 15th index is used for all the knowledge bases for authoring and testing. If you choose to have knowledge bases in different languages, then you can only publish seven knowledge bases.
-* **Number of documents as sources**: question answering is a free feature, and there are no limits to the number of documents you can add as sources.
+* **Number of documents as sources**: There are no limits to the number of documents you can add as sources in question answering.
The following table gives you some high-level guidelines.
The following table gives you some high-level guidelines.
## Recommended settings
-Custom question answering is a free feature, and the throughput is currently capped at 10 transactions per second for both management APIs and prediction APIs. To target 10 transactions per second for your service, we recommend the S1 (one instance) SKU of Azure Cognitive Search.
+The throughput for question answering is currently capped at 10 transactions per second for both management APIs and prediction APIs. To target 10 transactions per second for your service, we recommend the S1 (one instance) SKU of Azure Cognitive Search.
## Keys in question answering
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
Use the following ports for Communication Services Azure direct routing:
|Traffic|From|To|Source port|Destination port| |: |: |: |: |: |
-|SIP/TLS|SIP Proxy|SBC|1024–65535|Defined on the SBC (For Office 365 GCC High/DoD only port 5061 must be used)|
+|SIP/TLS|SIP Proxy|SBC|1024–65535|Defined on the SBC|
SIP/TLS|SBC|SIP Proxy|Defined on the SBC|5061| ### Failover mechanism for SIP Signaling
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
Title: Connect to SQL Server, Azure SQL Database, or Azure SQL Managed Instance
-description: Automate tasks for SQL databases on premises or in the cloud using Azure Logic Apps.
+ Title: Connect to SQL databases
+description: Automate workflows for SQL databases on premises or in the cloud with Azure Logic Apps.
ms.suite: integration Previously updated : 03/24/2021 Last updated : 04/18/2022 tags: connectors # Connect to a SQL database from Azure Logic Apps
-This article shows how you can access data in your SQL database from inside a logic app with the SQL Server connector. That way, you can automate tasks, processes, or workflows that manage your SQL data and resources by creating logic apps. The SQL Server connector works for [SQL Server](/sql/sql-server/sql-server-technical-documentation) as well as [Azure SQL Database](../azure-sql/database/sql-database-paas-overview.md) and [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md).
+This article shows how to access your SQL database with the SQL Server connector in Azure Logic Apps. You can then create automated workflows that are triggered by events in your SQL database or other systems and manage your SQL data and resources.
-You can create logic apps that run when triggered by events in your SQL database or in other systems, such as Dynamics CRM Online. Your logic apps can also get, insert, and delete data along with running SQL queries and stored procedures. For example, you can build a logic app that automatically checks for new records in Dynamics CRM Online, adds items to your SQL database for any new records, and then sends email alerts about the added items.
+For example, you can use actions that get, insert, and delete data along with running SQL queries and stored procedures. You can create a workflow that checks for new records in a non-SQL database, does some processing work, uses the results to create new records in your SQL database, and then sends email alerts about those new records.
-If you're new to logic apps, review [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Quickstart: Create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). For connector-specific technical information, limitations, and known issues, see the [SQL Server connector reference page](/connectors/sql/).
+ The SQL Server connector supports the following SQL editions:
+
+* [SQL Server](/sql/sql-server/sql-server-technical-documentation)
+* [Azure SQL Database](../azure-sql/database/sql-database-paas-overview.md)
+* [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md)
+
+If you're new to Azure Logic Apps, review the following documentation:
+
+* [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)
+* [Quickstart: Create your first logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md)
## Prerequisites * An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An [SQL Server database](/sql/relational-databases/databases/create-a-database), [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md), or [Azure SQL Managed Instance](../azure-sql/managed-instance/instance-create-quickstart.md).
+* [SQL Server database](/sql/relational-databases/databases/create-a-database), [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md), or [SQL Managed Instance](../azure-sql/managed-instance/instance-create-quickstart.md).
- Your tables must have data so that your logic app can return results when calling operations. If you use Azure SQL Database, you can use sample databases, which are included.
+ The SQL Server connector requires that your tables contain data so that connector operations can return results when called. For example, if you use Azure SQL Database, you can try the connector operations with the included sample databases.
-* Your SQL server name, database name, your user name, and your password. You need these credentials so that you can authorize your logic to access your SQL server.
+* The information required to create a SQL database connection, such as your SQL server and database names. If you're using Windows Authentication or SQL Server Authentication to authenticate access, you also need your user name and password. You can usually find this information in the connection string.
- * For on-premises SQL Server, you can find these details in the connection string:
-
- `Server={your-server-address};Database={your-database-name};User Id={your-user-name};Password={your-password};`
+ > [!NOTE]
+ >
+ > If you use a SQL Server connection string that you copied directly from the Azure portal,
+ > you have to manually add your password to the connection string.
- * For Azure SQL Database, you can find these details in the connection string.
-
- For example, to find this string in the Azure portal, open your database. On the database menu, select either **Connection strings** or **Properties**:
+ * For a SQL database in Azure, the connection string has the following format:
`Server=tcp:{your-server-name}.database.windows.net,1433;Initial Catalog={your-database-name};Persist Security Info=False;User ID={your-user-name};Password={your-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;`
+ 1. To find this string in the [Azure portal](https://portal.azure.com), open your database.
+
+ 1. On the database menu, under **Properties**, select **Connection strings**.
+
+ * For an on-premises SQL server, the connection string has the following format:
+
+ `Server={your-server-address};Database={your-database-name};User Id={your-user-name};Password={your-password};`
+
+* The logic app workflow where you want to access your SQL database. If you want to start your workflow with a SQL Server trigger operation, you have to start with a blank workflow.
+ <a name="multi-tenant-or-ise"></a>
-* Based on whether your logic apps are going to run in global, multi-tenant Azure or an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), here are other requirements for connecting to on-premises SQL Server:
+* To connect to an on-premises SQL server, the following extra requirements apply, based on whether you have a Consumption logic app workflow in multi-tenant Azure Logic Apps or an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), or a Standard logic app workflow in [single-tenant Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md):
+
+ * Consumption logic app workflow
+
+ * In multi-tenant Azure Logic Apps, you need the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) installed on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md).
+
+ * In an ISE, for connections that don't use Windows Authentication or SQL Server Authentication, you don't need the on-premises data gateway and can use the ISE-versioned SQL Server connector. For SQL Server Authentication, you still have to use the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md). For Windows Authentication, the ISE-versioned SQL Server connector isn't supported, so you have to use the managed (non-ISE) SQL Server connector along with the data gateway.
+
+ * Standard logic app workflow
+
+ In single-tenant Azure Logic Apps, you can use the built-in SQL Server connector, which requires a connection string. If you want to use the managed SQL Server connector, you need to meet the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
+
+## Connector technical reference
+
+This connector is available for logic app workflows in multi-tenant Azure Logic Apps, ISEs, and single-tenant Azure Logic Apps.
+
+* For Consumption logic app workflows in multi-tenant Azure Logic Apps, this connector is available only as a managed connector. For more information, review the [managed SQL Server connector operations](/connectors/sql).
+
+* For Consumption logic app workflows in an ISE, this connector is available as a managed connector and as an ISE connector that's designed to run in an ISE. For more information, review the [managed SQL Server connector operations](/connectors/sql).
+
+* For Standard logic app workflows in single-tenant Azure Logic Apps, this connector is available as a managed connector and as a built-in connector that's designed to run in the same process as the single-tenant Azure Logic Apps runtime. However, the built-in version differs in the following ways:
+
+ * The built-in SQL Server connector has no triggers.
+
+ * The built-in SQL Server connector has only one operation: **Execute Query**
+
+For the managed SQL Server connector technical information, such as trigger and action operations, limits, and known issues, review the [SQL Server connector's reference page](/connectors/sql/), which is generated from the Swagger description.
+
+<a name="add-sql-trigger"></a>
+
+## Add a SQL Server trigger
+
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create logic app workflows:
+
+* Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
+
+* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
+
+### [Consumption](#tab/consumption)
+
+1. In the Azure portal, open your blank logic app workflow in the designer.
+
+1. Find and select the [managed SQL Server connector trigger](/connectors/sql) that you want to use.
+
+ 1. Under the designer search box, select **All**.
+
+ 1. In the designer search box, enter **sql server**.
+
+ 1. From the triggers list, select the SQL trigger that you want. This example continues with the trigger named **When an item is created**.
+
+ ![Screenshot showing the Azure portal, workflow designer for Consumption logic app, search box with "sql server", and the "When an item is created" trigger selected.](./media/connectors-create-api-sqlazure/select-sql-server-trigger-consumption.png)
+
+1. If you're connecting to your SQL database for the first time, you're prompted to [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+
+1. In the trigger, specify the interval and frequency for how often the trigger checks the table.
+
+1. To add other properties available for this trigger, open the **Add new parameter** list and select those properties.
+
+ This trigger returns only one row from the selected table, and nothing else. To perform other tasks, continue by adding either a [SQL Server connector action](#add-sql-action) or [another action](../connectors/apis-list.md) that performs the next task that you want in your logic app workflow.
+
+ For example, to view the data in this row, you can add other actions that create a file that includes the fields from the returned row, and then send email alerts. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
+
+1. On the designer toolbar, select **Save**.
+
+ Although this step automatically enables and publishes your logic app live in Azure, the only action that your logic app currently takes is to check your database based on your specified interval and frequency.
+
+### [Standard](#tab/standard)
+
+In Standard logic app workflows, only the managed SQL Server connector has triggers. The built-in SQL Server connector doesn't have any triggers.
+
+1. In the Azure portal, open your blank logic app workflow in the designer.
+
+1. Find and select the [managed SQL Server connector trigger](/connectors/sql) that you want to use.
+
+ 1. Under the designer search box, select **Azure**.
+
+ 1. In the designer search box, enter **sql server**.
+
+ 1. From the triggers list, select the SQL trigger that you want. This example continues with the trigger named **When an item is created**.
+
+ ![Screenshot showing the Azure portal, workflow designer for Standard logic app, search box with "sql server", and the "When an item is created" trigger selected.](./media/connectors-create-api-sqlazure/select-sql-server-trigger-standard.png)
+
+1. If you're connecting to your SQL database for the first time, you're prompted to [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+
+1. In the trigger, specify the interval and frequency for how often the trigger checks the table.
+
+1. To add other properties available for this trigger, open the **Add new parameter** list and select those properties.
+
+ This trigger returns only one row from the selected table, and nothing else. To perform other tasks, continue by adding either a [SQL connector action](#add-sql-action) or [another action](../connectors/apis-list.md) that performs the next task that you want in your logic app workflow.
+
+ For example, to view the data in this row, you can add other actions that create a file that includes the fields from the returned row, and then send email alerts. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
+
+1. On the designer toolbar, select **Save**.
+
+ Although this step automatically enables and publishes your logic app live in Azure, the only action that your logic app currently takes is to check your database based on your specified interval and frequency.
+++
+<a name="trigger-recurrence-shift-drift"></a>
+
+## Trigger recurrence shift and drift (daylight saving time)
+
+Recurring connection-based triggers where you need to create a connection first, such as the managed SQL Server trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). For recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
+
+To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected or specified start time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
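As a concrete reference, the recurrence settings live on the trigger's JSON definition in code view. Here's a minimal sketch of just the `recurrence` object, with illustrative values:

```json
"recurrence": {
  "frequency": "Day",
  "interval": 1,
  "startTime": "2022-03-01T09:00:00",
  "timeZone": "Eastern Standard Time"
}
```

The `timeZone` and `startTime` values fix only the first run. If the workflow must keep firing at the same local time across a DST change, edit `startTime` when DST starts and ends, as described above.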
+
+<a name="add-sql-action"></a>
+
+## Add a SQL Server action
+
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to edit logic app workflows:
+
+* Consumption logic app workflows: Visual Studio or Visual Studio Code
+
+* Standard logic app workflows: Visual Studio Code
+
+In this example, the logic app workflow starts with the [Recurrence trigger](../connectors/connectors-native-recurrence.md), and calls an action that gets a row from a SQL database.
+
+### [Consumption](#tab/consumption)
+
+1. In the Azure portal, open your logic app workflow in the designer.
+
+1. Find and select the [managed SQL Server connector action](/connectors/sql) that you want to use. This example continues with the action named **Get row**.
+
+ 1. Under the trigger or action where you want to add the SQL action, select **New step**.
+
+ Or, to add an action between existing steps, move your mouse over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
+
+ 1. In the **Choose an operation** box, under the designer search box, select **All**.
- * For logic apps in global, multi-tenant Azure that connect to on-premises SQL Server, you need to have the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) installed on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md).
+ 1. In the designer search box, enter **sql server**.
- * For logic apps in an ISE that connect to on-premises SQL Server and use Windows authentication, the ISE-versioned SQL Server connector doesn't support Windows authentication. So, you still need to use the data gateway and the non-ISE SQL Server connector. For other authentication types, you don't need to use the data gateway and can use the ISE-versioned connector.
+ 1. From the actions list, select the SQL Server action that you want. This example uses the **Get row** action, which gets a single record.
-* The logic app where you need access to your SQL database. To start your logic app with a SQL trigger, you need a [blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+ ![Screenshot showing the Azure portal, workflow designer for Consumption logic app, the search box with "sql server", and "Get row" selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-get-row-action-consumption.png)
+
+1. If you're connecting to your SQL database for the first time, you're prompted to [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+
+1. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want.
+
+ In this example, the table name is **SalesLT.Customer**.
+
+ ![Screenshot showing Consumption workflow designer and the "Get row" action with the example "Table name" property value and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-consumption.png)
+
+ This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions, for example, those that create a file that includes the fields from the returned row, and store that file in a cloud storage account. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
+
+1. When you're done, on the designer toolbar, select **Save**.
+
+### [Standard](#tab/standard)
+
+1. In the Azure portal, open your logic app workflow in the designer.
+
+1. Find and select the SQL Server connector action that you want to use.
+
+ 1. Under the trigger or action where you want to add the SQL Server action, select **New step**.
+
+ Or, to add an action between existing steps, move your mouse over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
+
+ 1. In the **Choose an operation** box, under the designer search box, select either of the following options:
+
+ * **Built-in** when you want to use built-in SQL Server actions such as **Execute Query**
+
+ ![Screenshot showing the Azure portal, workflow designer for Standard logic app, and designer search box with "Built-in" selected underneath.](./media/connectors-create-api-sqlazure/select-built-in-category-standard.png)
+
+ * **Azure** when you want to use [managed SQL Server connector actions](/connectors/sql) such as **Get row**
+
+ ![Screenshot showing the Azure portal, workflow designer for Standard logic app, and designer search box with "Azure" selected underneath.](./media/connectors-create-api-sqlazure/select-azure-category-standard.png)
+
+ 1. In the designer search box, enter **sql server**.
+
+ 1. From the actions list, select the SQL Server action that you want.
+
+ * Built-in actions
+
+ This example selects the only available built-in action named **Execute Query**.
+
+ ![Screenshot showing the designer search box with "sql server" and "Built-in" selected underneath with the "Execute Query" action selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-execute-query-action-standard.png)
+
+ * Managed actions
+
+ This example selects the action named **Get row**, which gets a single record.
+
+ ![Screenshot showing the designer search box with "sql server" and "Azure" selected underneath with the "Get row" action selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-get-row-action-standard.png)
+
+1. If you're connecting to your SQL database for the first time, you're prompted to [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+
+1. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want.
+
+ In this example, the table name is **SalesLT.Customer**.
+
+ ![Screenshot showing Standard workflow designer and "Get row" action with the example "Table name" property value and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-standard.png)
+
+ This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions, for example, those that create a file that includes the fields from the returned row, and store that file in a cloud storage account. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
+
+1. When you're done, on the designer toolbar, select **Save**.
++ <a name="create-connection"></a>
If you're new to logic apps, review [What is Azure Logic Apps](../logic-apps/log
[!INCLUDE [Create connection general intro](../../includes/connectors-create-connection-general-intro.md)]
-Now, continue with these steps:
+After you provide this information, continue with these steps:
-* [Connect to cloud-based Azure SQL Database or Managed Instance](#connect-azure-sql-db)
+* [Connect to cloud-based Azure SQL Database or SQL Managed Instance](#connect-azure-sql-db)
* [Connect to on-premises SQL Server](#connect-sql-server) <a name="connect-azure-sql-db"></a>
-### Connect to Azure SQL Database or Managed Instance
+### Connect to Azure SQL Database or SQL Managed Instance
-To access an Azure SQL Managed Instance without using the on-premises data gateway or integration service environment, you have to [set up the public endpoint on the Azure SQL Managed Instance](../azure-sql/managed-instance/public-endpoint-configure.md). The public endpoint uses port 3342, so make sure that you specify this port number when you create the connection from your logic app.
+To access a SQL Managed Instance without using the on-premises data gateway or integration service environment, you have to [set up the public endpoint on the SQL Managed Instance](../azure-sql/managed-instance/public-endpoint-configure.md). The public endpoint uses port 3342, so make sure that you specify this port number when you create the connection from your logic app.
+The first time that you add either a [SQL Server trigger](#add-sql-trigger) or [SQL Server action](#add-sql-action), and you haven't previously created a connection to your database, you're prompted to complete these steps:
-The first time that you add either a [SQL trigger](#add-sql-trigger) or [SQL action](#add-sql-action), and you haven't previously created a connection to your database, you're prompted to complete these steps:
+1. For **Connection name**, provide a name to use for your connection.
-1. For **Authentication Type**, select the authentication that's required and enabled on your database in Azure SQL Database or Azure SQL Managed Instance:
+1. For **Authentication type**, select the authentication that's required and enabled on your database in Azure SQL Database or SQL Managed Instance:
| Authentication | Description | |-|-|
- | [**Azure AD Integrated**](../azure-sql/database/authentication-aad-overview.md) | - Supports both the non-ISE and ISE SQL Server connector. <p><p>- Requires a valid identity in Azure Active Directory (Azure AD) that has access to your database. <p>For more information, see these topics: <p>- [Azure SQL Security Overview - Authentication](../azure-sql/database/security-overview.md#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](../azure-sql/database/logins-create-manage.md#authentication-and-authorization) <br>- [Azure SQL - Azure AD Integrated authentication](../azure-sql/database/authentication-aad-overview.md) |
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supports both the non-ISE and ISE SQL Server connector. <p><p>- Requires a valid user name and strong password that are created and stored in your database. <p>For more information, see these topics: <p>- [Azure SQL Security Overview - Authentication](../azure-sql/database/security-overview.md#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](../azure-sql/database/logins-create-manage.md#authentication-and-authorization) |
- | **Managed Identity** | - Supports both the non-ISE and ISE SQL Server connector. <p><p>- Requires a valid managed identity that has [access to your database](../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md), **SQL DB Contributor** role access to the SQL Server resource, and **Contributor** access to the resource group that includes the SQL Server resource. <p>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles).
- |||
+ | **Service principal (Azure AD application)** | - Available only for the managed SQL Server connector. <br><br>- Requires an Azure AD application and service principal. For more information, see [Create an Azure AD application and service principal that can access resources using the Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). |
+ | **Logic Apps Managed Identity** | - Available only for the managed SQL Server connector and ISE SQL Server connector. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles). |
+ | [**Azure AD Integrated**](../azure-sql/database/authentication-aad-overview.md) | - Available only for the managed SQL Server connector and ISE SQL Server connector. <br><br>- Requires a valid identity in Azure Active Directory (Azure AD) that has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](../azure-sql/database/security-overview.md#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](../azure-sql/database/logins-create-manage.md#authentication-and-authorization) <br>- [Azure SQL - Azure AD Integrated authentication](../azure-sql/database/authentication-aad-overview.md) |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Available only for the managed SQL Server connector and ISE SQL Server connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless of whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](../azure-sql/database/security-overview.md#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](../azure-sql/database/logins-create-manage.md#authentication-and-authorization) |
+
+ This connection and authentication information box looks similar to the following example, which selects **Azure AD Integrated**:
- This example continues with **Azure AD Integrated**:
+ * Consumption logic app workflows
- ![Screenshot that shows the "SQL Server" connection window with the opened "Authentication Type" list and "Azure AD Integrated" selected.](./media/connectors-create-api-sqlazure/select-azure-ad-authentication.png)
+ ![Screenshot showing the Azure portal, workflow designer, and "SQL Server" cloud connection information with selected authentication type for Consumption.](./media/connectors-create-api-sqlazure/select-azure-ad-sql-cloud-consumption.png)
-1. After you select **Azure AD Integrated**, select **Sign In**. Based on whether you use Azure SQL Database or Azure SQL Managed Instance, select your user credentials for authentication.
+ * Standard logic app workflows
+
+ ![Screenshot showing the Azure portal, workflow designer, and "SQL Server" cloud connection information with selected authentication type for Standard.](./media/connectors-create-api-sqlazure/select-azure-ad-sql-cloud-standard.png)
+
+1. After you select **Azure AD Integrated**, select **Sign in**. Based on whether you use Azure SQL Database or SQL Managed Instance, select your user credentials for authentication.
1. Select these values for your database: | Property | Required | Description | |-|-|-|
- | **Server name** | Yes | The address for your SQL server, for example, `Fabrikam-Azure-SQL.database.windows.net` |
- | **Database name** | Yes | The name for your SQL database, for example, `Fabrikam-Azure-SQL-DB` |
- | **Table name** | Yes | The table that you want to use, for example, `SalesLT.Customer` |
+ | **Server name** | Yes | The address for your SQL server, for example, **Fabrikam-Azure-SQL.database.windows.net** |
+ | **Database name** | Yes | The name for your SQL database, for example, **Fabrikam-Azure-SQL-DB** |
+ | **Table name** | Yes | The table that you want to use, for example, **SalesLT.Customer** |
|||| > [!TIP] > To provide your database and table information, you have these options: >
- > * Find this information in your database's connection string. For example, in the Azure portal, find and open your database. On the database menu, select either **Connection strings** or **Properties**, where you can find this string:
+ > * Find this information in your database's connection string. For example, in the Azure portal, find and open your database. On the database menu, select either **Connection strings** or **Properties**, where you can find the following string:
> > `Server=tcp:{your-server-address}.database.windows.net,1433;Initial Catalog={your-database-name};Persist Security Info=False;User ID={your-user-name};Password={your-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;` > > * By default, tables in system databases are filtered out, so they might not automatically appear when you select a system database. As an alternative, you can manually enter the table name after you select **Enter custom value** from the database list. >
- This example shows how these values might look:
+ This database information box looks similar to the following example:
+
+ * Consumption logic app workflows
- ![Create connection to SQL database](./media/connectors-create-api-sqlazure/azure-sql-database-create-connection.png)
+ ![Screenshot showing SQL cloud database cloud information with sample values for Consumption.](./media/connectors-create-api-sqlazure/azure-sql-database-information-consumption.png)
+
+ * Standard logic app workflows
+
+ ![Screenshot showing SQL cloud database information with sample values for Standard.](./media/connectors-create-api-sqlazure/azure-sql-database-information-standard.png)
1. Now, continue with the steps that you haven't completed yet in either [Add a SQL trigger](#add-sql-trigger) or [Add a SQL action](#add-sql-action).
The first time that you add either a [SQL trigger](#add-sql-trigger) or [SQL act
| Authentication | Description | |-|-|
- | [**Windows Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) | - Supports only the non-ISE SQL Server connector, which requires a data gateway resource that's previously created in Azure for your connection, regardless whether you use multi-tenant Azure or an ISE. <p><p>- Requires a valid Windows user name and password to confirm your identity through your Windows account. <p>For more information, see [Windows Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) |
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supports both the non-ISE and ISE SQL Server connector. <p><p>- Requires a valid user name and strong password that are created and stored in your SQL Server. <p>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Available only for the managed SQL Server connector and ISE SQL Server connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless of whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). |
+ | [**Windows Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) | - Available only for the managed SQL Server connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless of whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid Windows user name and password to confirm your identity through your Windows account. <br><br>For more information, see [Windows Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication). |
|||
- This example continues with **Windows Authentication**:
-
- ![Select authentication type to use](./media/connectors-create-api-sqlazure/select-windows-authentication.png)
- 1. Select or provide the following values for your SQL database: | Property | Required | Description |
The first time that you add either a [SQL trigger](#add-sql-trigger) or [SQL act
| **Username** | Yes | Your user name for the SQL server and database | | **Password** | Yes | Your password for the SQL server and database | | **Subscription** | Yes, for Windows authentication | The Azure subscription for the data gateway resource that you previously created in Azure |
- | **Connection Gateway** | Yes, for Windows authentication | The name for the data gateway resource that you previously created in Azure <p><p>**Tip**: If your gateway doesn't appear in the list, check that you correctly [set up your gateway](../logic-apps/logic-apps-gateway-connection.md). |
+ | **Connection Gateway** | Yes, for Windows authentication | The name for the data gateway resource that you previously created in Azure <br><br>**Tip**: If your gateway doesn't appear in the list, check that you correctly [set up your gateway](../logic-apps/logic-apps-gateway-connection.md). |
||| > [!TIP]
The first time that you add either a [SQL trigger](#add-sql-trigger) or [SQL act
> * `User ID={your-user-name}` > * `Password={your-password}`
- This example shows how these values might look:
+ This connection and authentication information box looks similar to the following example, which selects **Windows Authentication**:
- ![Create SQL Server connection completed](./media/connectors-create-api-sqlazure/sql-server-create-connection-complete.png)
+ * Consumption logic app workflows
-1. When you're ready, select **Create**.
+ ![Screenshot showing the Azure portal, workflow designer, and "SQL Server" on-premises connection information with selected authentication for Consumption.](./media/connectors-create-api-sqlazure/select-windows-authentication-consumption.png)
-1. Now, continue with the steps that you haven't completed yet in either [Add a SQL trigger](#add-sql-trigger) or [Add a SQL action](#add-sql-action).
-
-<a name="add-sql-trigger"></a>
-
-## Add a SQL trigger
-
-1. In the [Azure portal](https://portal.azure.com) or in Visual Studio, create a blank logic app, which opens the Logic App Designer. This example continues with the Azure portal.
-
-1. On the designer, in the search box, enter `sql server`. From the triggers list, select the SQL trigger that you want. This example uses the **When an item is created** trigger.
-
- ![Select "When an item is created" trigger](./media/connectors-create-api-sqlazure/select-sql-server-trigger.png)
-
-1. If you're connecting to your SQL database for the first time, you're prompted to [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
-
-1. In the trigger, specify the interval and frequency for how often the trigger checks the table.
-
-1. To add other available properties for this trigger, open the **Add new parameter** list.
-
- This trigger returns only one row from the selected table, and nothing else. To perform other tasks, continue by adding either a [SQL connector action](#add-sql-action) or [another action](../connectors/apis-list.md) that performs the next task that you want in your logic app workflow.
-
- For example, to view the data in this row, you can add other actions that create a file that includes the fields from the returned row, and then send email alerts. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
-
-1. On the designer toolbar, select **Save**.
-
- Although this step automatically enables and publishes your logic app live in Azure, the only action that your logic app currently takes is to check your database based on your specified interval and frequency.
-
-<a name="trigger-recurrence-shift-drift"></a>
-
-## Trigger recurrence shift and drift (daylight saving time)
-
-Recurring connection-based triggers where you need to create a connection first, such as the managed SQL Server trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
-
-To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected or specified start time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
-
-<a name="add-sql-action"></a>
-
-## Add a SQL action
-
-In this example, the logic app starts with the [Recurrence trigger](../connectors/connectors-native-recurrence.md), and calls an action that gets a row from a SQL database.
-
-1. In the [Azure portal](https://portal.azure.com) or in Visual Studio, open your logic app in Logic App Designer. This example continues the Azure portal.
-
-1. Under the trigger or action where you want to add the SQL action, select **New step**.
-
- ![Add an action to your logic app](./media/connectors-create-api-sqlazure/select-new-step-logic-app.png)
-
- Or, to add an action between existing steps, move your mouse over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
+ * Standard logic app workflows
-1. Under **Choose an action**, in the search box, enter `sql server`. From the actions list, select the SQL action that you want. This example uses the **Get row** action, which gets a single record.
+ ![Screenshot showing the Azure portal, workflow designer, and "SQL Server" on-premises connection information with selected authentication for Standard.](./media/connectors-create-api-sqlazure/select-windows-authentication-standard.png)
- ![Select SQL "Get row" action](./media/connectors-create-api-sqlazure/select-sql-get-row-action.png)
-
-1. If you're connecting to your SQL database for the first time, you're prompted to [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
-
-1. Select the **Table name**, which is `SalesLT.Customer` in this example. Enter the **Row ID** for the record that you want.
-
- ![Select table name and specify row ID](./media/connectors-create-api-sqlazure/specify-table-row-id.png)
-
- This action returns only one row from the selected table, nothing else. So, to view the data in this row, you might add other actions that create a file that includes the fields from the returned row, and store that file in a cloud storage account. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
-
-1. When you're done, on the designer toolbar, select **Save**.
+1. When you're ready, select **Create**.
- This step automatically enables and publishes your logic app live in Azure.
+1. Now, continue with the steps that you haven't completed yet in either [Add a SQL trigger](#add-sql-trigger) or [Add a SQL action](#add-sql-action).
<a name="handle-bulk-data"></a>
Sometimes, you have to work with result sets so large that the connector doesn't
* Create a [*stored procedure*](/sql/relational-databases/stored-procedures/stored-procedures-database-engine) that organizes the results the way that you want. The SQL connector provides many backend features that you can access by using Azure Logic Apps so that you can more easily automate business tasks that work with SQL database tables.
- When getting or inserting multiple rows, your logic app can iterate through these rows by using an [*until loop*](../logic-apps/logic-apps-control-flow-loops.md#until-loop) within these [limits](../logic-apps/logic-apps-limits-and-config.md). However, when your logic app has to work with record sets so large, for example, thousands or millions of rows, that you want to minimize the costs resulting from calls to the database.
+ When a SQL action gets or inserts multiple rows, your logic app workflow can iterate through these rows by using an [*until loop*](../logic-apps/logic-apps-control-flow-loops.md#until-loop) within these [limits](../logic-apps/logic-apps-limits-and-config.md). However, when your logic app has to work with very large record sets, for example, thousands or millions of rows, you want to minimize the costs that result from calls to the database.
To organize the results in the way that you want, you can create a stored procedure that runs in your SQL instance and uses the **SELECT - ORDER BY** statement. This solution gives you more control over the size and structure of your results. Your logic app calls the stored procedure by using the SQL Server connector's **Execute stored procedure** action. For more information, see [SELECT - ORDER BY Clause](/sql/t-sql/queries/select-order-by-clause-transact-sql).
Sometimes, you have to work with result sets so large that the connector doesn't
> For this task, you can use the [Azure Elastic Job Agent](../azure-sql/database/elastic-jobs-overview.md) > for [Azure SQL Database](../azure-sql/database/sql-database-paas-overview.md). For > [SQL Server on premises](/sql/sql-server/sql-server-technical-documentation)
- > and [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md),
+ > and [SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md),
> you can use the [SQL Server Agent](/sql/ssms/agent/sql-server-agent). To learn more, see > [Handle long-running stored procedure timeouts in the SQL connector for Azure Logic Apps](../logic-apps/handle-long-running-stored-procedures-sql-connector.md).
Sometimes, you have to work with result sets so large that the connector doesn't
When you call a stored procedure by using the SQL Server connector, the returned output is sometimes dynamic. In this scenario, follow these steps:
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
1. View the output format by performing a test run. Copy and save your sample output. 1. In the designer, under the action where you call the stored procedure, select **New step**.
-1. Under **Choose an action**, find and select the [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action) action.
+1. In the **Choose an operation** box, find and select the action named [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action).
1. In the **Parse JSON** action, select **Use sample payload to generate schema**.
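For example, the sample output that you copy from the test run might resemble the following sketch. This payload is hypothetical: the `ResultSets`/`Table1` wrapper and the column names depend entirely on your stored procedure, so use your own saved output instead:

```json
{
  "ResultSets": {
    "Table1": [
      {
        "CustomerID": 1,
        "FirstName": "Orlando",
        "LastName": "Gee"
      }
    ]
  },
  "ReturnCode": 0
}
```

After the **Parse JSON** action generates a schema from the payload, downstream actions can reference individual fields, such as `CustomerID`, as tokens.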
When you call a stored procedure by using the SQL Server connector, the returned
Connection problems can commonly happen, so to troubleshoot and resolve these kinds of issues, review [Solving connectivity errors to SQL Server](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server). Here are some examples:
-* `A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.`
-
-* `(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)`
-
-* `(provider: TCP Provider, error: 0 - No such host is known.) (Microsoft SQL Server, Error: 11001)`
+* **A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.**
-## Connector-specific details
+* **(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)**
-For technical information about this connector's triggers, actions, and limits, see the [connector's reference page](/connectors/sql/), which is generated from the Swagger description.
+* **(provider: TCP Provider, error: 0 - No such host is known.) (Microsoft SQL Server, Error: 11001)**
## Next steps
container-apps Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress.md
With ingress enabled, your container app features the following characteristics:
- Supports TLS termination
- Supports HTTP/1.1 and HTTP/2
- Supports WebSocket and gRPC
-- Endpoints always use TLS 1.2, terminated at the ingress point
+- HTTPS endpoints always use TLS 1.2, terminated at the ingress point
- Endpoints always expose ports 80 (for HTTP) and 443 (for HTTPS).
- By default, HTTP requests to port 80 are automatically redirected to HTTPS on 443.
+- Request timeout is 240 seconds.
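As a sketch of how this surfaces in practice, ingress can be enabled when you create the container app. The following assumes the `containerapp` Azure CLI extension; the resource names and image are placeholders:

```azurecli
# Sketch: create a container app with external ingress on port 80 (names are placeholders)
az containerapp create \
  --name my-container-app \
  --resource-group my-resource-group \
  --environment my-environment \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --ingress external \
  --target-port 80
```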
## Configuration
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
Previously updated : 03/25/2022 Last updated : 04/11/2022
These features include:
## Azure Monitor metrics
-The Azure Monitor metrics feature allows you to monitor your app's compute and network usage. These metrics are available to view and analyze through the [metrics explorer in the Azure portal](/../azure-monitor/essentials/metrics-getting-started). Metric data is also available through the [Azure CLI](/cli/azure/monitor/metrics), and Azure [PowerShell cmdlets](/powershell/module/az.monitor/get-azmetric).
+The Azure Monitor metrics feature allows you to monitor your app's compute and network usage. These metrics are available to view and analyze through the [metrics explorer in the Azure portal](../azure-monitor/essentials/metrics-getting-started.md). Metric data is also available through the [Azure CLI](/cli/azure/monitor/metrics), and Azure [PowerShell cmdlets](/powershell/module/az.monitor/get-azmetric).
### Available metrics for Container Apps
The metrics namespace is `microsoft.app/containerapps`.
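For example, here's a sketch of retrieving metric data with the Azure CLI; the `Requests` metric name and the resource ID shape are assumptions to adapt to your app:

```azurecli
# Sketch: query a container app metric over one-minute intervals (IDs and names are placeholders)
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.App/containerApps/<app-name>" \
  --metric Requests \
  --interval PT1M
```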
Using the Azure portal, navigate to your container app's **Overview** page. The **Monitoring** section displays the current CPU, memory, and network utilization for your container app. From this view, you can pin one or more charts to your dashboard. When you select a chart, it's opened in the metrics explorer.
The metrics page allows you to create and view charts to display your container
When you first navigate to the metrics explorer, you'll see the main page. From here, select the metric that you want to display. You can add more metrics to the chart by selecting **Add Metric** in the upper left. You can filter your metrics by revision or replica. For example, to filter by a replica, select **Add filter**, then select a replica from the *Value* drop-down. You can also filter by your container app's revision.

You can split the information in your chart by revision or replica. For example, to split by revision, select **Apply splitting** and select **Revision** as the value. Splitting is only available when the chart contains a single metric. You can view metrics across multiple container apps to view resource utilization over your entire application.

## Azure Monitor Log Analytics
You can run Log Analytic queries via the Azure portal, the Azure CLI or PowerShe
In the Azure portal, logs are available from either the **Monitor**->**Logs** page or by navigating to your container app and selecting the **Logs** menu item. From the Log Analytics interface, you can query the logs based on the **CustomLogs > ContainerAppConsoleLogs_CL** table. Here's an example of a simple query that displays log entries for the container app named *album-api*.
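A sketch of such a query, run through the Azure CLI (the workspace GUID is a placeholder, and the `ContainerAppName_s` column name is an assumption based on the custom table's naming):

```azurecli
# Sketch: query console logs for the app named album-api (workspace GUID is a placeholder)
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api' | take 100"
```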
For more information, see [Viewing Logs](monitor.md#viewing-logs).
## Azure Monitor alerts
-You can configure alerts to send notifications based on metrics values and Log Analytics queries. Alerts can be added from the metrics explorer and the Log Analytics interface in the Azure portal.
+You can configure alerts to send notifications based on metric values and Log Analytics queries. In the Azure portal, you can add alerts from the metrics explorer and the Log Analytics interface.
In the metrics explorer and the Log Analytics interface, alerts are based on existing charts and queries. You can manage your alerts from the **Monitor>Alerts** page. From this page, you can create metric and log alerts without existing metric charts or log queries. To learn more about alerts, refer to [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
-### Setting alerts in metrics explorer
+### Setting alerts in the metrics explorer
Metric alerts monitor metric data at set intervals and trigger when an alert rule condition is met. For more information, see [Metric alerts](../azure-monitor/alerts/alerts-metric-overview.md).
-In metrics explorer, you can create metric alerts based on Container Apps metrics. Once you create a metric chart, you're able to create alert rules based on the chart's settings. You can create an alert rule by selecting **New alert rule**.
+In the metrics explorer, you can create metric alerts based on Container Apps metrics. Once you create a metric chart, you're able to create alert rules based on the chart's settings. You can create an alert rule by selecting **New alert rule**.
When you create a new alert rule, the rule creation pane is opened to the **Condition** tab. An alert condition is started for you based on the metric that you selected for the chart. You then edit the condition to configure threshold and other settings. You can add more conditions to your alert rule by selecting the **Add condition** option in the **Create an alert rule** pane. When you add an alert condition, the **Select a signal** pane is opened. This pane lists the Container Apps metrics from which you can select for the condition. After you've selected the metric, you can configure the settings for your alert condition. For more information about configuring alerts, see [Manage metric alerts](../azure-monitor/alerts/alerts-metric.md).
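You can also create a comparable metric alert rule from the Azure CLI; in this sketch, the `UsageNanoCores` metric name, the scope, and the threshold are all assumptions:

```azurecli
# Sketch: metric alert on container app CPU usage (names, scope, and threshold are placeholders)
az monitor metrics alert create \
  --name cpu-usage-alert \
  --resource-group my-resource-group \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.App/containerApps/my-container-app" \
  --condition "avg UsageNanoCores > 500000000" \
  --description "Average CPU usage is above 0.5 cores"
```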
You can add alert splitting to the condition so you can receive individual alert
Example of setting a dimension for a condition:

Once you create the alert rule, it's a resource in your resource group. To manage your alert rules, navigate to **Monitor>Alerts**.
Once you create the alert rule, it's a resource in your resource group. To mana
You can use Log Analytics queries to periodically monitor logs and trigger alerts based on the results. The Log Analytics interface allows you to add alert rules to your queries. Once you have created and run a query, you're able to create an alert rule. Selecting **New alert rule** opens the **Create an alert rule** editor, where you can configure the settings for your alerts. To learn more about creating a log alert, see [Manage log alerts](../azure-monitor/alerts/alerts-log.md).
Enabling splitting will send individual alerts for each dimension you define. C
- container
- log message

To learn more about log alerts, refer to [Log alerts in Azure Monitor](../azure-monitor/alerts/alerts-unified-log.md).
To learn more about log alerts, refer to [Log alerts in Azure Monitor](../azure-
## Next steps

-- [Health probes in Azure Container Apps](health-probes.md)
-- [Monitor an App in Azure Container Apps](monitor.md)
+> [!div class="nextstepaction"]
+> [Health probes in Azure Container Apps](health-probes.md)
+> [Monitor an App in Azure Container Apps](monitor.md)
container-registry Container Registry Auth Aci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-aci.md
The following articles contain additional details on working with service princi
<!-- IMAGES --> <!-- LINKS - External -->
-[acr-scripts-cli]: https://github.com/Azure/azure-docs-cli-python-samples/tree/master/container-registry
+[acr-scripts-cli]: https://github.com/Azure/azure-docs-cli-python-samples/tree/master/container-registry/create-registry/create-registry-service-principal-assign-role.sh
[acr-scripts-psh]: https://github.com/Azure/azure-docs-powershell-samples/tree/master/container-registry

<!-- LINKS - Internal -->
container-registry Container Registry Auth Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-kubernetes.md
This command returns a new, valid password for your service principal.
## Create an image pull secret
-Kubernetes uses an *image pull secret* to store information needed to authenticate to your registry. To create the pull secret for an Azure container registry, you provide the service principal ID, password, and the registry URL.
+Kubernetes uses an *image pull secret* to store information needed to authenticate to your registry. To create the pull secret for an Azure container registry, you provide the service principal ID, password, and the registry URL.
Create an image pull secret with the following `kubectl` command:
kubectl create secret docker-registry <secret-name> \
--docker-username=<service-principal-ID> \
--docker-password=<service-principal-password>
```
+
where:

| Value | Description |
spec:
In the preceding example, `my-awesome-app:v1` is the name of the image to pull from the Azure container registry, and `acr-secret` is the name of the pull secret you created to access the registry. When you deploy the pod, Kubernetes automatically pulls the image from your registry, if it is not already present on the cluster.

## Next steps

* For more about working with service principals and Azure Container Registry, see [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md)
* Learn more about image pull secrets in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)

<!-- IMAGES --> <!-- LINKS - External -->
-[acr-scripts-cli]: https://github.com/Azure/azure-docs-cli-python-samples/tree/master/container-registry
+[acr-scripts-cli]: https://github.com/Azure/azure-docs-cli-python-samples/tree/master/container-registry/create-registry/create-registry-service-principal-assign-role.sh
[acr-scripts-psh]: https://github.com/Azure/azure-docs-powershell-samples/tree/master/container-registry

<!-- LINKS - Internal -->
container-registry Container Registry Auth Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-service-principal.md
For example, configure your web application to use a service principal that prov
You should use a service principal to provide registry access in **headless scenarios**. That is, an application, service, or script that must push or pull container images in an automated or otherwise unattended manner. For example:
- * *Pull*: Deploy containers from a registry to orchestration systems including Kubernetes, DC/OS, and Docker Swarm. You can also pull from container registries to related Azure services such as [Azure Container Instances](container-registry-auth-aci.md), [App Service](../app-service/index.yml), [Batch](../batch/index.yml), [Service Fabric](../service-fabric/index.yml), and others.
+* *Pull*: Deploy containers from a registry to orchestration systems including Kubernetes, DC/OS, and Docker Swarm. You can also pull from container registries to related Azure services such as [Azure Container Instances](container-registry-auth-aci.md), [App Service](../app-service/index.yml), [Batch](../batch/index.yml), [Service Fabric](../service-fabric/index.yml), and others.
> [!TIP]
- > A service principal is recommended in several [Kubernetes scenarios](authenticate-kubernetes-options.md) to pull images from an Azure container registry. With Azure Kubernetes Service (AKS), you can also use an automated mechanism to authenticate with a target registry by enabling the cluster's [managed identity](../aks/cluster-container-registry-integration.md).
+ > A service principal is recommended in several [Kubernetes scenarios](authenticate-kubernetes-options.md) to pull images from an Azure container registry. With Azure Kubernetes Service (AKS), you can also use an automated mechanism to authenticate with a target registry by enabling the cluster's [managed identity](../aks/cluster-container-registry-integration.md).
* *Push*: Build container images and push them to a registry using continuous integration and deployment solutions like Azure Pipelines or Jenkins. For individual access to a registry, such as when you manually pull a container image to your development workstation, we recommend using your own [Azure AD identity](container-registry-authentication.md#individual-login-with-azure-ad) instead for registry access (for example, with [az acr login][az-acr-login]).
Once you have a service principal that you've granted access to your container r
* **User name** - service principal's **application (client) ID**
* **Password** - service principal's **password (client secret)**
-Each value has the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+Each value has the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
> [!TIP]
> You can regenerate the password (client secret) of a service principal by running the [az ad sp credential reset](/cli/azure/ad/sp/credential#az-ad-sp-credential-reset) command.
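A minimal sketch (the identifier is a placeholder; depending on your CLI version, the parameter may be `--id` rather than `--name`):

```azurecli
# Sketch: regenerate the client secret for a service principal (app ID is a placeholder)
az ad sp credential reset --name "<service-principal-app-id>"
```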
Once logged in, Docker caches the credentials.
### Use with certificate
-If you've added a certificate to your service principal, you can sign into the Azure CLI with certificate-based authentication, and then use the [az acr login][az-acr-login] command to access a registry. Using a certificate as a secret instead of a password provides additional security when you use the CLI.
+If you've added a certificate to your service principal, you can sign into the Azure CLI with certificate-based authentication, and then use the [az acr login][az-acr-login] command to access a registry. Using a certificate as a secret instead of a password provides additional security when you use the CLI.
A self-signed certificate can be created when you [create a service principal](/cli/azure/create-an-azure-service-principal-azure-cli). Or, add one or more certificates to an existing service principal. For example, if you use one of the scripts in this article to create or update a service principal with rights to pull or push images from a registry, add a certificate using the [az ad sp credential reset][az-ad-sp-credential-reset] command.
A service principal can also be used in Azure scenarios that require pulling ima
To create a service principal that can authenticate with a container registry in a cross-tenant scenario:
-* Create a [multitenant app](../active-directory/develop/single-and-multi-tenant-apps.md) (service principal) in Tenant A
+* Create a [multitenant app](../active-directory/develop/single-and-multi-tenant-apps.md) (service principal) in Tenant A
* Provision the app in Tenant B
* Grant the service principal permissions to pull from the registry in Tenant B
* Update the service or app in Tenant A to authenticate using the new service principal
For example steps, see [Pull images from a container registry to an AKS cluster
## Service principal renewal
-The service principal is created with one-year validity. You have options to extend the validity further than one year, or can provide expiry date of your choice using the [`az ad sp credential reset`](/cli/azure/ad/sp/credential#az-ad-sp-credential-reset) command.
+The service principal is created with one-year validity. You can extend the validity beyond one year, or provide an expiry date of your choice, by using the [`az ad sp credential reset`](/cli/azure/ad/sp/credential#az-ad-sp-credential-reset) command.
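For example, here's a sketch that assumes the `--end-date` parameter and a placeholder application ID:

```azurecli
# Sketch: extend a service principal credential to a chosen expiry date (values are placeholders)
az ad sp credential reset --name "<service-principal-app-id>" --end-date "2025-12-31"
```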
## Next steps
The service principal is created with one-year validity. You have options to ext
* For an example of using an Azure key vault to store and retrieve service principal credentials for a container registry, see the tutorial to [build and deploy a container image using ACR Tasks](container-registry-tutorial-quick-task.md). <!-- LINKS - External -->
-[acr-scripts-cli]: https://github.com/Azure/azure-docs-cli-python-samples/tree/master/container-registry
+[acr-scripts-cli]: https://github.com/Azure/azure-docs-cli-python-samples/tree/master/container-registry/create-registry/create-registry-service-principal-assign-role.sh
[acr-scripts-psh]: https://github.com/Azure/azure-docs-powershell-samples/tree/master/container-registry

<!-- LINKS - Internal -->
container-registry Container Registry Tutorial Multistep Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-multistep-task.md
# Tutorial: Run a multi-step container workflow in the cloud when you commit source code
-In addition to a [quick task](container-registry-tutorial-quick-task.md), ACR Tasks supports multi-step, multi-container-based workflows that can automatically trigger when you commit source code to a Git repository.
+In addition to a [quick task](container-registry-tutorial-quick-task.md), ACR Tasks supports multi-step, multi-container-based workflows that can automatically trigger when you commit source code to a Git repository.
In this tutorial, you learn how to use example YAML files to define multi-step tasks that build, run, and push one or more container images to a registry when you commit source code. To create a task that only automates a single image build on code commit, see [Tutorial: Automate container image builds in the cloud when you commit source code](container-registry-tutorial-build-task.md). For an overview of ACR Tasks, see [Automate OS and framework patching with ACR Tasks](container-registry-tasks-overview.md).

In this tutorial:

> [!div class="checklist"]
+>
> * Define a multi-step task using a YAML file
> * Create a task
> * Optionally add credentials to the task to enable access to another registry
steps:
This multi-step task does the following:
-1. Runs a `build` step to build an image from the Dockerfile in the working directory. The image targets the `Run.Registry`, the registry where the task is run, and is tagged with a unique ACR Tasks run ID.
+1. Runs a `build` step to build an image from the Dockerfile in the working directory. The image targets the `Run.Registry`, the registry where the task is run, and is tagged with a unique ACR Tasks run ID.
1. Runs a `cmd` step to run the image in a temporary container. This example starts a long-running container in the background and returns the container ID, then stops the container. In a real-world scenario, you might include steps to test the running container to ensure it runs correctly.
1. In a `push` step, pushes the image that was built to the run registry.
az acr task create \
--git-access-token $GIT_PAT
```
-This task specifies that any time code is committed to the *main* branch in the repository specified by `--context`, ACR Tasks will run the multi-step task from the code in that branch. The YAML file specified by `--file` from the repository root defines the steps.
+This task specifies that any time code is committed to the *main* branch in the repository specified by `--context`, ACR Tasks will run the multi-step task from the code in that branch. The YAML file specified by `--file` from the repository root defines the steps.
Output from a successful [az acr task create][az-acr-task-create] command is similar to the following:
steps:
This multi-step task does the following:

1. Runs two `build` steps to build images from the Dockerfile in the working directory:
- * The first targets the `Run.Registry`, the registry where the task is run, and is tagged with the ACR Tasks run ID.
+ * The first targets the `Run.Registry`, the registry where the task is run, and is tagged with the ACR Tasks run ID.
   * The second targets the registry identified by the value of `regDate`, which you set when you create the task (or provide through an external `values.yaml` file passed to `az acr task create`). This image is tagged with the run date.
1. Runs a `cmd` step to run one of the built containers. This example starts a long-running container in the background and returns the container ID, then stops the container. In a real-world scenario, you might test a running container to ensure it runs correctly.
1. In a `push` step, pushes the images that were built, the first to the run registry, the second to the registry identified by `regDate`.
az acr task create \
To push images to the registry identified by the value of `regDate`, use the [az acr task credential add][az-acr-task-credential-add] command to add login credentials for that registry to the task.
-For this example, we recommend that you create a [service principal](container-registry-auth-service-principal.md) with access to the registry scoped to the *AcrPush* role, so that it has permissions to push images. To create the service principal, see this [Azure CLI script](https://github.com/Azure-Samples/azure-cli-samples/blob/master/container-registry/service-principal-create/service-principal-create.sh).
+For this example, we recommend that you create a [service principal](container-registry-auth-service-principal.md) with access to the registry scoped to the *AcrPush* role, so that it has permissions to push images. To create the service principal, use the following script:
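The script below is a sketch of that pattern; the service principal name is a placeholder, and the registry name matches the example that follows:

```azurecli
# Sketch: create a service principal scoped to the AcrPush role on one registry.
# The service principal name is a placeholder.
ACR_REGISTRY_ID=$(az acr show --name mycontainerregistrydate --query id --output tsv)

az ad sp create-for-rbac \
  --name acr-task-push-sp \
  --scopes $ACR_REGISTRY_ID \
  --role acrpush
```

The command outputs the application ID and password that the next step consumes.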
+ Pass the service principal application ID and password in the following `az acr task credential add` command. Be sure to update the login server name *mycontainerregistrydate* with the name of your second registry:
In this tutorial, you learned how to create multi-step, multi-container-based ta
[az-acr-task-create]: /cli/azure/acr/task#az-acr-task-create
[az-acr-task-run]: /cli/azure/acr/task#az-acr-task-run
[az-acr-task-list-runs]: /cli/azure/acr/task#az-acr-task-list-runs
-[az-acr-task-credential-add]: /cli/azure/acr/task/credential#az-acr-task-credential-add
+[az-acr-task-credential-add]: /cli/azure/acr/task/credential#az-acr-task-credential-add
[az-login]: /cli/azure/reference-index#az-login

<!-- IMAGES -->
cosmos-db How To Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-time-to-live.md
async function createcontainerWithTTL(db: Database, containerDefinition: Contain
In addition to setting a default time to live on a container, you can set a time to live for an item. Setting time to live at the item level will override the default TTL of the item in that container.
-* To set the TTL on an item, you need to provide a non-zero positive number, which indicates the period, in seconds, to expire the item after the last modified timestamp of the item `_ts`.
+* To set the TTL on an item, you need to provide a non-zero positive number, which indicates the period, in seconds, to expire the item after the last modified timestamp of the item `_ts`. You can also provide `-1` if the item shouldn't expire.
* If the item doesn't have a TTL field, then by default, the TTL set to the container will apply to the item.
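Item-level TTL is set in the item body itself; the container-level default that applies when the field is absent can be managed from the Azure CLI, as in this sketch with placeholder names:

```azurecli
# Sketch: set a container-level default TTL of one hour (names are placeholders).
# Items without their own ttl field inherit this value; -1 keeps items from expiring by default.
az cosmosdb sql container update \
  --account-name <cosmos-account> \
  --resource-group <resource-group> \
  --database-name <database> \
  --name <container> \
  --ttl 3600
```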
container = database.createContainerIfNotExists(containerProperties, 400).block(
Learn more about time to live in the following article:
-* [Time to live](time-to-live.md)
+* [Time to live](time-to-live.md)
cost-management-billing Export Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/export-subscriptions.md
- Title: Export your Azure subscription top level information
-description: Describes how you can view all Azure subscription IDs associated with your account.
--
-tags: billing
--- Previously updated : 09/15/2021---
-# Export and view your top-level Subscription information
-If you need to view the set of subscription IDs associated with your user credentials, [download a .json file with your subscription information from the Azure Account Center](https://account.azure.com/subscriptions/download).
--
-The downloaded .json file provides the following information:
-- Email: The email address associated with your account.-- Puid: The unique identifier associated with your billing account.-- SubscriptionIds: A list of subscriptions that belong to your account, enumerated by subscription ID.-
-### subscriptions.json sample
-
-```json
-{
- "Email":"admin@contoso.com",
- "Puid":"00052xxxxxxxxxxx",
- "SubscriptionIds":[
- "38124d4d-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "7c8308f1-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "39a25f2b-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "52ec2489-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "e42384b2-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "90757cdc-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- ]
-}
-```
cost-management-billing Microsoft 365 Account For Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/microsoft-365-account-for-azure-subscription.md
If you already have both a Microsoft 365 account and an Azure subscription, see
Save time and avoid account proliferation by signing up for Azure using your Microsoft 365 user name and password.
-1. Sign up at [Azure.com](https://account.azure.com/signup?offer=MS-AZR-0044p&appId=docs).
+1. Sign up at [Azure.com](https://signup.azure.com/signup?offer=MS-AZR-0044p&appId=docs).
2. Sign in by using your Microsoft 365 user name and password. The account you use doesn't need to have administrator permissions. If you have more than one Microsoft 365 account, make sure you use the credentials for the Microsoft 365 account that you want to associate with your Azure subscription. ![Screenshot that shows the sign-in page.](./media/microsoft-365-account-for-azure-subscription/billing-sign-in-with-office-365-account.png)
data-factory Connector Azure Database For Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-postgresql.md
Previously updated : 12/20/2021 Last updated : 04/14/2022 # Copy and transform data in Azure Database for PostgreSQL using Azure Data Factory or Synapse Analytics
This Azure Database for PostgreSQL connector is supported for the following acti
- [Mapping data flow](concepts-data-flow-overview.md) - [Lookup activity](control-flow-lookup-activity.md)
-Currently, data flow supports Azure database for PostgreSQL Single Server but not Flexible Server or Hyperscale (Citus); data flow in Azure Synapse Analytics supports all PostgreSQL flavors.
+The three activities work on all Azure Database for PostgreSQL deployment options:
+
+* [Single Server](../postgresql/single-server/index.yml)
+* [Flexible Server](../postgresql/flexible-server/index.yml)
+* [Hyperscale (Citus)](../postgresql/hyperscale/index.yml)
## Getting started
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 03/25/2022 Last updated : 04/13/2022
data-factory Connector Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-twilio.md
The following properties are supported for the Twilio linked service:
} ```
+## Mapping data flow properties
+When transforming data in mapping data flow, you can read resources from Twilio. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
-### Source transformation
-When transforming data in mapping data flow, you can read resources from Twilio. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
+### Source transformation
The table below lists the properties supported by the Twilio source. You can edit these properties in the **Source options** tab.
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Previously updated : 03/25/2022 Last updated : 04/13/2022 # Source transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Azure SQL Database](connector-azure-sql-database.md#mapping-data-flow-properties) | | ✓/✓ |
| [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#mapping-data-flow-properties) | | ✓/✓ |
| [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md#mapping-data-flow-properties) | | ✓/✓ |
+| [data.world (Preview)](connector-dataworld.md#mapping-data-flow-properties) | | -/✓ |
| [Dataverse](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
| [Dynamics 365](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
| [Dynamics CRM](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [SQL Server](connector-sql-server.md) | | ✓/✓ |
| [REST](connector-rest.md#mapping-data-flow-properties) | | ✓/✓ |
| [TeamDesk (Preview)](connector-teamdesk.md#mapping-data-flow-properties) | | -/✓ |
+| [Twilio (Preview)](connector-twilio.md#mapping-data-flow-properties) | | -/✓ |
| [Zendesk (Preview)](connector-zendesk.md#mapping-data-flow-properties) | | -/✓ |

Settings specific to these connectors are located on the **Source options** tab. Information and data flow script examples on these settings are located in the connector documentation.
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md
Microsoft Sentinel includes built-in connectors for Microsoft Defender for Cloud
When you connect Defender for Cloud to Microsoft Sentinel, the status of Defender for Cloud alerts that get ingested into Microsoft Sentinel is synchronized between the two services. So, for example, when an alert is closed in Defender for Cloud, that alert is also shown as closed in Microsoft Sentinel. If you change the status of an alert in Defender for Cloud, the status of the alert in Microsoft Sentinel is also updated, but the statuses of any Microsoft Sentinel **incidents** that contain the synchronized Microsoft Sentinel alert aren't updated.
-You can enable the preview feature **bi-directional alert synchronization** to automatically sync the status of the original Defender for Cloud alerts with Microsoft Sentinel incidents that contain the copies of those Defender for Cloud alerts. So, for example, when a Microsoft Sentinel incident that contains a Defender for Cloud alert is closed, Defender for Cloud automatically closes the corresponding original alert.
+You can enable the **bi-directional alert synchronization** feature to automatically sync the status of the original Defender for Cloud alerts with Microsoft Sentinel incidents that contain the copies of those Defender for Cloud alerts. So, for example, when a Microsoft Sentinel incident that contains a Defender for Cloud alert is closed, Defender for Cloud automatically closes the corresponding original alert.
Learn more in [Connect alerts from Microsoft Defender for Cloud](../sentinel/connect-azure-security-center.md).
You can set up your Azure environment to support continuous export using either:
Enter the required parameters and the script performs all of the steps for you. When the script finishes, it outputs the information you'll use to install the solution in the SIEM platform.
-- In the Azure portal
+- The Azure portal
Here's an overview of the steps you'll do in the Azure portal:

 1. Create an Event Hubs namespace and event hub.
 2. Define a policy for the event hub with "Send" permissions.
- 3. **If you are streaming your alerts to QRadar SIEM** - Create an event hub "Listen" policy, then copy and save the connection string of the policy that you'll use in QRadar.
+ 3. **If you're streaming alerts to QRadar** - Create an event hub "Listen" policy, then copy and save the connection string of the policy that you'll use in QRadar.
 4. Create a consumer group, then copy and save the name that you'll use in the SIEM platform.
- 5. Enable continuous export of your security alerts to the defined event hub.
- 6. **If you are streaming your alerts to QRadar SIEM** - Create a storage account, then copy and save the connection string to the account that you'll use in QRadar.
- 7. **If you are streaming your alerts to Splunk SIEM**:
- 1. Create a Microsoft Azure Active Directory application.
+ 5. Enable continuous export of security alerts to the defined event hub.
+ 6. **If you're streaming alerts to QRadar** - Create a storage account, then copy and save the connection string to the account that you'll use in QRadar.
+ 7. **If you're streaming alerts to Splunk**:
+ 1. Create an Azure Active Directory (AD) application.
 2. Save the Tenant, App ID, and App password.
 3. Give permissions to the Azure AD Application to read from the event hub you created before.
To view the event schemas of the exported data types, visit the [Event Hubs even
## Use the Microsoft Graph Security API to stream alerts to third-party applications
-As an alternative to Sentinel and Azure Monitor, you can use Defender for Cloud's built-in integration with [Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api). No configuration is required and there are no additional costs.
+As an alternative to Microsoft Sentinel and Azure Monitor, you can use Defender for Cloud's built-in integration with [Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api). No configuration is required and there are no additional costs.
You can use this API to stream alerts from your **entire tenant** (and data from many Microsoft Security products) into third-party SIEMs and other popular platforms:
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md
The patch for this situation needs to update both the model and the twin's tempe
You may optionally decide to use the `sourceTime` field on twin properties to record timestamps for when property updates are observed in the real world. Azure Digital Twins natively supports `sourceTime` in the metadata for each twin property. For more information about this field and other fields on digital twins, see [Digital twin JSON format](concepts-twins-graph.md#digital-twin-json-format).
-This property can only be written using the latest version of the [Azure Digital Twins APIs/SDKs](concepts-apis-sdks.md). The value must comply to ISO 8601 date and time format.
+The minimum REST API version to support this field is the [2021-06-30-preview](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/preview/2021-06-30-preview) version. To work with this field using the [Azure Digital Twins SDKs](concepts-apis-sdks.md), we recommend using the latest version of the SDK to make sure this field is included (keep in mind that the latest version of an SDK may be in beta or preview).
+
+The `sourceTime` value must comply with the ISO 8601 date and time format.
Here's an example of a JSON Patch document that updates both the value and the `sourceTime` field of a `Temperature` property:
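A sketch of such a patch, applied through the Azure CLI's IoT extension (the instance and twin IDs are placeholders, and the metadata path is an assumption based on the digital twin JSON format):

```azurecli
# Sketch: patch a twin's Temperature value and its sourceTime metadata (names are placeholders)
az dt twin update \
  --dt-name <instance-hostname> \
  --twin-id <twin-id> \
  --json-patch '[
    {"op": "replace", "path": "/Temperature", "value": 25.5},
    {"op": "replace", "path": "/$metadata/Temperature/sourceTime", "value": "2022-04-14T12:00:00Z"}
  ]'
```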
frontdoor Front Door How To Redirect Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-how-to-redirect-https.md
You can use the Azure portal to [create a Front Door](quickstart-create-front-do
:::image type="content" source="./media/front-door-url-redirect/front-door-designer-routing-rule.png" alt-text="Front Door configuration designer routing rule":::
-1. Under the *Route Details* section, set the *Route Type* to **Redirect**. Ensure that the *Redirect type* get set to **Found (302)** and *Redirect protocol* get set to **HTTPS only**.
+1. Under the *Route Details* section, set the *Route Type* to **Redirect**. Set the *Redirect type* to **Moved (301)** and the *Redirect protocol* to **HTTPS only**.
:::image type="content" source="./media/front-door-url-redirect/front-door-redirect-config-example.png" alt-text="Add an HTTP to HTTPS redirect route":::
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
In order for the certificate to be automatically rotated to the latest version w
If you want to change the secret version from 'Latest' to a specified version or vice versa, add a new certificate.
-##### How to switch between certificate types
+## How to switch between certificate types
1. You can change an existing Azure managed certificate to a user-managed certificate by selecting the certificate state to open the **Certificate details** page.
If you want to change the secret version from 'Latest' to a specified versio
*Bring Your Own Certificate (BYOC)*. Then follow the same steps as earlier to choose a certificate. Select **Update** to change the certificate associated with a domain.

> [!NOTE]
- > It may take up to an hour for the new certificate to be deployed when you switch between certificate types.
+ > * It may take up to an hour for the new certificate to be deployed when you switch between certificate types.
+ > * If your domain state is Approved, switching the certificate type between BYOC and managed certificate won't cause any downtime. When you switch to a managed certificate, you'll continue to be served by the previous certificate until domain ownership is re-validated and the domain state becomes Approved.
+ > * If you switch from BYOC to managed certificate, domain re-validation is required. If you switch from managed certificate to BYOC, you're not required to re-validate the domain.
> :::image type="content" source="../media/how-to-configure-https-custom-domain/certificate-details-page.png" alt-text="Screenshot of certificate details page.":::
hdinsight Create Cluster Error Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/create-cluster-error-dictionary.md
description: Learn how to troubleshoot errors that occur when creating Azure HDI
Previously updated : 08/24/2020 Last updated : 04/14/2022
hdinsight Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/disk-encryption.md
description: This article describes the two layers of encryption available for data at rest on Azure HDInsight clusters. Previously updated : 08/10/2020 Last updated : 04/14/2022 ms.devlang: azurecli
hdinsight Apache Domain Joined Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-architecture.md
Previously updated : 03/11/2020 Last updated : 04/14/2022 # Use Enterprise Security Package in HDInsight
If federation is being used and password hashes are synced correctly, but you're
- [Configure HDInsight clusters with ESP](apache-domain-joined-configure-using-azure-adds.md)
- [Configure Apache Hive policies for HDInsight clusters with ESP](apache-domain-joined-run-hive.md)
-- [Manage HDInsight clusters with ESP](apache-domain-joined-manage.md)
+- [Manage HDInsight clusters with ESP](apache-domain-joined-manage.md)
hdinsight Apache Domain Joined Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-manage.md
Title: Manage Enterprise Security Package clusters - Azure HDInsight
description: Learn how to manage Azure HDInsight clusters with Enterprise Security Package. Previously updated : 12/04/2019 Last updated : 04/14/2022 # Manage HDInsight clusters with Enterprise Security Package
HDInsight Enterprise Security Package has the following roles:
## Next steps

- For configuring a HDInsight cluster with Enterprise Security Package, see [Configure HDInsight clusters with ESP](./apache-domain-joined-configure-using-azure-adds.md).
-- For configuring Hive policies and run Hive queries, see [Configure Apache Hive policies for HDInsight clusters with ESP](apache-domain-joined-run-hive.md).
+- For configuring Hive policies and running Hive queries, see [Configure Apache Hive policies for HDInsight clusters with ESP](apache-domain-joined-run-hive.md).
hdinsight Apache Domain Joined Run Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-run-kafka.md
Title: Tutorial - Apache Kafka & Enterprise Security - Azure HDInsight
description: Tutorial - Learn how to configure Apache Ranger policies for Kafka in Azure HDInsight with Enterprise Security Package. Previously updated : 05/19/2020 Last updated : 04/14/2022 # Tutorial: Configure Apache Kafka policies in HDInsight with Enterprise Security Package (Preview)
hdinsight Encryption In Transit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/encryption-in-transit.md
Title: Azure HDInsight Encryption in transit
description: Learn about security features to provide encryption in transit for your Azure HDInsight cluster. Previously updated : 08/24/2020 Last updated : 04/14/2022 # IPSec Encryption in transit for Azure HDInsight
hdinsight General Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/general-guidelines.md
Title: Enterprise security general guidelines in Azure HDInsight
description: Some best practices that should make Enterprise Security Package deployment and management easier. Previously updated : 02/13/2020 Last updated : 04/14/2022 # Enterprise security general information and guidelines in Azure HDInsight
hdinsight Hdinsight Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/hdinsight-security-overview.md
description: Learn the various methods to ensure enterprise security in Azure HD
Previously updated : 08/24/2020 Last updated : 04/14/2022 #Customer intent: As a user of Azure HDInsight, I want to learn the means that Azure HDInsight offers to ensure security for the enterprise.
hdinsight Identity Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/identity-broker.md
Title: Azure HDInsight ID Broker (HIB)
description: Learn about Azure HDInsight ID Broker to simplify authentication for domain-joined Apache Hadoop clusters. Previously updated : 11/03/2020 Last updated : 04/14/2022 # Azure HDInsight ID Broker (HIB)
When the cluster is deleted, HDInsight delete the app and there is no need to cl
* [Configure an HDInsight cluster with Enterprise Security Package by using Azure Active Directory Domain Services](apache-domain-joined-configure-using-azure-adds.md)
* [Synchronize Azure Active Directory users to an HDInsight cluster](../hdinsight-sync-aad-users-to-cluster.md)
-* [Monitor cluster performance](../hdinsight-key-scenarios-to-monitor.md)
+* [Monitor cluster performance](../hdinsight-key-scenarios-to-monitor.md)
hdinsight Ldap Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/ldap-sync.md
Title: LDAP sync in Ranger and Apache Ambari in Azure HDInsight
description: Address the LDAP sync in Ranger and Ambari and provide general guidelines. Previously updated : 02/14/2020 Last updated : 04/14/2022 # LDAP sync in Ranger and Apache Ambari in Azure HDInsight
hdinsight Apache Hadoop Linux Tutorial Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started-bicep.md
+
+ Title: 'Quickstart: Create Apache Hadoop cluster in Azure HDInsight using Bicep'
+description: In this quickstart, you create Apache Hadoop cluster in Azure HDInsight using Bicep
+++++ Last updated : 04/14/2022
+#Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Bicep
++
+# Quickstart: Create Apache Hadoop cluster in Azure HDInsight using Bicep
+
+In this quickstart, you use Bicep to create an [Apache Hadoop](./apache-hadoop-introduction.md) cluster in Azure HDInsight. Hadoop was the original open-source framework for distributed processing and analysis of big data sets on clusters. The Hadoop ecosystem includes related software and utilities, including Apache Hive, Apache HBase, Spark, Kafka, and many others.
+
+
+Currently HDInsight comes with [seven different cluster types](../hdinsight-overview.md#cluster-types-in-hdinsight). Each cluster type supports a different set of components. All cluster types support Hive. For a list of supported components in HDInsight, see [What's new in the Hadoop cluster versions provided by HDInsight?](../hdinsight-component-versioning.md)
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/hdinsight-linux-ssh-password/).
++
+Two Azure resources are defined in the Bicep file:
+
+* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage Account.
+* [Microsoft.HDInsight/cluster](/azure/templates/microsoft.hdinsight/clusters): create an HDInsight cluster.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters clusterName=<cluster-name> clusterType=<cluster-type> clusterLoginUserName=<cluster-username> sshUserName=<ssh-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -clusterName "<cluster-name>" -clusterType "<cluster-type>" -clusterLoginUserName "<cluster-username>" -sshUserName "<ssh-username>"
+ ```
+
+
+
+ You need to provide values for the parameters:
+
+ * Replace **\<cluster-name\>** with the name of the HDInsight cluster to create.
+ * Replace **\<cluster-type\>** with the type of the HDInsight cluster to create. Allowed strings include: `hadoop`, `interactivehive`, `hbase`, `storm`, and `spark`.
+ * Replace **\<cluster-username\>** with the credentials used to submit jobs to the cluster and to log in to cluster dashboards.
+ * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster. The username cannot be admin.
+
+ You'll also be prompted to enter the following:
+
+ * **clusterLoginPassword**, which must be at least 10 characters long and contain one digit, one uppercase letter, one lowercase letter, and one non-alphanumeric character except single-quote, double-quote, backslash, right-bracket, full-stop. It also must not contain three consecutive characters from the cluster username or SSH username.
+ * **sshPassword**, which must be 6-72 characters long and must contain at least one digit, one uppercase letter, and one lowercase letter. It must not contain any three consecutive characters from the cluster login name.
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you learned how to create an Apache Hadoop cluster in HDInsight using Bicep. In the next article, you learn how to perform an extract, transform, and load (ETL) operation using Hadoop on HDInsight.
+
+> [!div class="nextstepaction"]
+> [Extract, transform, and load data using Interactive Query on HDInsight](../interactive-query/interactive-query-tutorial-analyze-flight-data.md)
hdinsight Network Virtual Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/network-virtual-appliance.md
You can optionally enable one or more of the following service endpoints which w
You can get the list of dependent FQDNs (mostly Azure Storage and Azure Service Bus) for configuring your network virtual appliance [in this repo](https://github.com/Azure-Samples/hdinsight-fqdn-lists/). For the regional list, see [here](https://github.com/Azure-Samples/hdinsight-fqdn-lists/tree/main/Public). These dependencies are used by the HDInsight resource provider (RP) to create and monitor/manage clusters successfully. They include telemetry/diagnostic logs, provisioning metadata, cluster-related configurations, scripts, and so on. This FQDN dependency list might change as future HDInsight updates are released.
-The list below only gives a few FQDNs that may be needed for OS and security patching or certificate validations *after* the cluster is created and during the lifetime of cluster operations:
+The list below only gives a few FQDNs that may be needed for OS and security patching or certificate validations during cluster creation and during the lifetime of cluster operations:
| **Runtime Dependencies FQDNs** |
|---|
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/about-iot-dps.md
There are many provisioning scenarios in which DPS is an excellent choice for ge
Provisioning of nested edge devices (parent/child hierarchies) is not currently supported by DPS.
->[!NOTE]
->**Data residency consideration:**
->
->DPS uses the same [device provisioning endpoint](concepts-service.md#device-provisioning-endpoint) for all provisioning service instances, and will perform traffic load balancing to the nearest available service endpoint. As a result, authentication secrets may be temporarily transferred outside of the region where the DPS instance was initially created. However, once the device is connected, the device data will flow directly to the original region of the DPS instance.
->
->To ensure that your data doesn't leave the region that your DPS instance was created in, use a private endpoint. To learn how to set up private endpoints, see [Azure IoT Device Provisioning Service (DPS) support for virtual networks](virtual-network-support.md#private-endpoint-limitations).
- ## Behind the scenes All the scenarios listed in the previous section can be done using DPS for zero-touch provisioning with the same flow. Many of the manual steps traditionally involved in provisioning are automated with DPS to reduce the time to deploy IoT devices and lower the risk of manual error. The following section describes what goes on behind the scenes to get a device provisioned. The first step is manual, all of the following steps are automated.
DPS only supports HTTPS connections for service operations.
DPS is available in many regions. The updated list of existing and newly announced regions for all services is at [Azure Regions](https://azure.microsoft.com/regions/). You can check availability of the Device Provisioning Service on the [Azure Status](https://azure.microsoft.com/status/) page.
-> [!NOTE]
-> DPS is global and not bound to a location. However, you must specify a region in which the metadata associated with your DPS profile will reside.
+### Data residency consideration
+
+Device Provisioning Service doesn't store or process customer data outside of the geography where you deploy the service instance. For more information, see [Cross-region replication in Azure](../availability-zones/cross-region-replication-azure.md).
+
+However, by default, DPS uses the same [device provisioning endpoint](concepts-service.md#device-provisioning-endpoint) for all provisioning service instances, and performs traffic load balancing to the nearest available service endpoint. As a result, authentication secrets may be temporarily transferred outside of the region where the DPS instance was initially created. Once the device is connected, though, the device data flows directly to the original region of the DPS instance.
+
+To ensure that your data doesn't leave the region that your DPS instance was created in, use a private endpoint. To learn how to set up private endpoints, see [Azure IoT Device Provisioning Service (DPS) support for virtual networks](virtual-network-support.md#private-endpoint-limitations).
## Quotas and Limits
iot-dps Iot Dps Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-ha-dr.md
You don't need to take any action to use availability zones in supported regions
## Disaster recovery and Microsoft-initiated failover
-DPS leverages [paired regions](../availability-zones/cross-region-replication-azure.md) to enable automatic failover. Microsoft-initiated failover is exercised by Microsoft in rare situations when an entire region goes down to failover all the DPS instances from the affected region to its corresponding paired region. This process is a default option (there is no way for users to opt out) and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve user consent before the user's DPS instance is failed over.
+DPS leverages [paired regions](../availability-zones/cross-region-replication-azure.md) to enable automatic failover. Microsoft-initiated failover is exercised by Microsoft in rare situations when an entire region goes down to failover all the DPS instances from the affected region to its corresponding paired region. This process is a default option and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve user consent before the user's DPS instance is failed over.
+
+The only users who are able to opt-out of this feature are those deploying to the Brazil South and Southeast Asia (Singapore) regions.
+
+>[!NOTE]
+>Azure IoT Hub Device Provisioning Service doesn't store or process customer data outside of the geography where you deploy the service instance. For more information, see [Cross-region replication in Azure](../availability-zones/cross-region-replication-azure.md).
## Disable disaster recovery
iot-hub Iot Concepts And Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-concepts-and-iot-hub.md
An example of a command is rebooting a device. IoT Hub implements commands by al
IoT Hub gives you the ability to unlock the value of your device data with other Azure services so you can shift to predictive problem-solving rather than reactive management. Connect your IoT hub with other Azure services to do machine learning, analytics, and AI to act on real-time data, optimize processing, and gain deeper insights.
+>[!NOTE]
+>Azure IoT Hub doesn't store or process customer data outside of the geography where you deploy the service instance. For more information, see [Cross-region replication in Azure](../availability-zones/cross-region-replication-azure.md).
+ ### Built-in endpoint collects device data by default

A built-in endpoint collects data from your device by default. The data is collected using a request-response pattern over dedicated IoT device endpoints, is available for a maximum duration of seven days, and can be used to take actions on a device. Here is the data accepted by the device endpoint:
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ha-dr.md
Once the failover operation for the IoT hub completes, all operations from the d
## Microsoft-initiated failover
-Microsoft-initiated failover is exercised by Microsoft in rare situations to failover all the IoT hubs from an affected region to the corresponding geo-paired region. This process is a default option (no way for users to opt out) and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve a user consent before the user's hub is failed over. Microsoft-initiated failover has a recovery time objective (RTO) of 2-26 hours.
+Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over all the IoT hubs from an affected region to the corresponding geo-paired region. This process is a default option and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve user consent before the user's hub is failed over. Microsoft-initiated failover has a recovery time objective (RTO) of 2-26 hours.
The large RTO is because Microsoft must perform the failover operation on behalf of all the affected customers in that region. If you're running a less critical IoT solution that can sustain a downtime of roughly a day, you can depend on this option to satisfy the overall disaster recovery goals for your IoT solution. The total time for runtime operations to become fully operational once this process is triggered is described in the "Time to recover" section.
+The only users who can opt out of this feature are those deploying to the Brazil South and Southeast Asia (Singapore) regions. For more information, see [Disable disaster recovery](#disable-disaster-recovery).
+
+>[!NOTE]
+>Azure IoT Hub doesn't store or process customer data outside of the geography where you deploy the service instance. For more information, see [Cross-region replication in Azure](../availability-zones/cross-region-replication-azure.md).
+ ## Manual failover

If your business uptime goals aren't satisfied by the RTO that Microsoft-initiated failover provides, consider using manual failover to trigger the failover process yourself. The RTO using this option could be anywhere from 10 minutes to a couple of hours. The RTO is currently a function of the number of devices registered against the IoT hub instance being failed over. You can expect the RTO for a hub hosting approximately 100,000 devices to be in the ballpark of 15 minutes. The total time for runtime operations to become fully operational once this process is triggered is described in the "Time to recover" section.
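If you script manual failover rather than triggering it in the portal, the management SDKs expose it as a long-running operation. The following is a hedged Python sketch; the `begin_manual_failover` call and its input shape reflect the `azure-mgmt-iothub` package as we understand it, so verify against the current SDK reference before relying on it:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.iothub import IotHubClient

# Placeholders throughout; the operation name and input shape are assumptions
# to check against the azure-mgmt-iothub docs for your SDK version.
client = IotHubClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.iot_hub.begin_manual_failover(
    "<iot-hub-name>",
    "<resource-group>",
    {"failover_region": "<geo-paired-region>"},
)
poller.result()  # blocks until the hub is running in the paired region
```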
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
In order to ensure a client/IoT Hub connection stays alive, both the service and
|Language |Default keep-alive interval |Configurable |
|---|---|---|
|Node.js | 180 seconds | No |
-|Java | 230 seconds | No |
+|Java | 230 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-java/blob/main/device/iot-device-client/src/main/java/com/microsoft/azure/sdk/iot/device/ClientOptions.java#L64) |
|C | 240 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/Iothub_sdk_options.md#mqtt-transport) |
|C# | 300 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/src/Transport/Mqtt/MqttTransportSettings.cs#L89) |
|Python | 60 seconds | No |
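If you connect over MQTT directly instead of through a device SDK, you choose the keep-alive yourself. Below is a minimal sketch with the open-source `paho-mqtt` package (1.x API); the host name, device ID, and SAS token are placeholders, and the username format follows the IoT Hub MQTT convention described in this article:

```python
import paho.mqtt.client as mqtt

hub = "<your-hub>.azure-devices.net"   # placeholder host name
device_id = "<device-id>"              # placeholder device ID
sas_token = "<device SAS token>"       # placeholder credential

client = mqtt.Client(client_id=device_id)  # paho-mqtt 1.x constructor
client.username_pw_set(
    username=f"{hub}/{device_id}/?api-version=2021-04-12",
    password=sas_token,
)
client.tls_set()  # IoT Hub requires TLS on port 8883
# Keep-alive in seconds, comparable to the SDK defaults in the table above.
client.connect(hub, port=8883, keepalive=180)
client.loop_start()
```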
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-overview.md
Last updated 01/25/2022
# Azure Key Vault soft-delete overview

> [!IMPORTANT]
-> You must enable soft-delete on your key vaults immediately. The ability to opt out of soft-delete will be deprecated soon. See full details [here](soft-delete-change.md)
+> You must enable soft-delete on your key vaults immediately. The ability to opt out of soft-delete is deprecated and will be removed in February 2025. See full details [here](soft-delete-change.md).
> [!IMPORTANT]
> When a Key Vault is soft-deleted, services that are integrated with the Key Vault will be deleted. For example: Azure RBAC role assignments and Event Grid subscriptions. Recovering a soft-deleted Key Vault will not restore these services. They will need to be recreated.
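If you prefer to turn soft-delete on programmatically rather than in the portal, a hedged sketch with the `azure-mgmt-keyvault` Python package follows; the patch-style `update` call and the dict parameter shape are assumptions to verify against your SDK version:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.keyvault import KeyVaultManagementClient

# Placeholders: subscription ID, resource group, and vault name.
client = KeyVaultManagementClient(DefaultAzureCredential(), "<subscription-id>")
client.vaults.update(
    "<resource-group>",
    "<vault-name>",
    {"properties": {"enable_soft_delete": True}},  # parameter shape is an assumption
)
```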
load-balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/overview.md
+
+ Title: What is Basic Azure Load Balancer?
+description: Overview of Basic Azure Load Balancer.
++++ Last updated : 04/14/2022+++
+# What is Basic Azure Load Balancer?
+
+Basic Azure Load Balancer is a SKU of Azure Load Balancer. A basic load balancer provides limited features and capabilities. Microsoft recommends a standard SKU Azure Load Balancer for production environments.
+
+For more information on a standard SKU Azure Load Balancer, see [What is Azure Load Balancer?](../load-balancer-overview.md).
+
+For more information about the Azure Load Balancer SKUs, see [SKUs](../skus.md).
+
+## Load balancer types
+
+An Azure Load Balancer is available in two types:
+
+A **[public load balancer](../components.md#frontend-ip-configurations)** can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public load balancers are used to load balance internet traffic to your VMs.
+
+An **[internal (or private) load balancer](../components.md#frontend-ip-configurations)** is used where private IPs are needed at the frontend only. Internal load balancers are used to load balance traffic inside a virtual network. A load balancer frontend can be accessed from an on-premises network in a hybrid scenario.
+
+## Next steps
+
+For more information on creating a basic load balancer, see:
+
+- [Quickstart: Create a basic internal load balancer - Azure portal](./quickstart-basic-internal-load-balancer-portal.md)
+- [Quickstart: Create a basic public load balancer - Azure portal](./quickstart-basic-public-load-balancer-portal.md)
+
marketplace Azure Private Plan Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-private-plan-troubleshooting.md
While troubleshooting the Azure Subscription Hierarchy, keep these things in min
## Troubleshooting Checklist

-- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs use the [Azure Subscription ID. (video guide)](/media-services/latest/setup-azure-subscription-how-to?tabs=portal)
+- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs use the [Azure Subscription ID. (video guide)](/azure/media-services/latest/setup-azure-subscription-how-to?tabs=portal)
- ISV to ensure that the Customer is not buying through a CSP. Private Plans are not available on a CSP-managed subscription.
- Customer to ensure they are logging in with an email ID that is registered under the same tenant ID (use the same user ID they used in step #1 above).
- ISV to ask the customer to find the Private Plan in Azure Marketplace: [Private plans in Azure Marketplace](/marketplace/private-plans)
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-your-marketplace-benefits.md
description: Go-To-Market Services - Describes Microsoft resources that publishe
Previously updated : 03/21/2021 Last updated : 04/14/2022
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Previously updated : 02/28/2022 Last updated : 04/14/2022

# Supported PostgreSQL major versions in Azure Database for PostgreSQL - Flexible Server
Azure Database for PostgreSQL - Flexible Server currently supports the following
## PostgreSQL version 13
-The current minor release is **13.5**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/13/static/release-13-5.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+The current minor release is **13.6**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/13/static/release-13-6.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
## PostgreSQL version 12
-The current minor release is **12.9**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/12/static/release-12-9.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **12.10**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/12/static/release-12-10.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 11
-The current minor release is **11.14**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-14.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **11.15**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-15.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
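To confirm which minor version your server is actually running (for example, after a maintenance window), query the server directly. A minimal sketch with the `psycopg2` Python driver, using placeholder connection details:

```python
import psycopg2

# Placeholders for your flexible server's connection details.
conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    dbname="postgres",
    user="<admin-user>",
    password="<password>",
    sslmode="require",
)
with conn, conn.cursor() as cur:
    cur.execute("SHOW server_version;")
    print(cur.fetchone()[0])  # for example: 13.6
```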
## PostgreSQL version 10 and older
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 02/28/2022 Last updated : 04/14/2022

# Release notes - Azure Database for PostgreSQL - Flexible Server

This page provides the latest news and updates regarding feature additions, engine version support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL.
+## Release: April 2022
+
+* Support for the [latest PostgreSQL minors](./concepts-supported-versions.md) 13.6, 12.10, and 11.15 for newly created servers<sup>$</sup>.
+
+<sup>**$**</sup> New servers get these features automatically. In your existing servers, these features are enabled during your server's future maintenance window.
+ ## Release: February 2022

* Support for the [latest PostgreSQL minors](./concepts-supported-versions.md) 13.5, 12.9, and 11.14 for newly created servers<sup>$</sup>.
private-5g-core Monitor Private 5G Core With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-with-log-analytics.md
You can find information on how to create a Log Analytics dashboard in [Create a
## Estimate costs
-Log Analytics will ingest an average of 8GB of data a day for each log streamed to it by a single packet core instance. [Monitor usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) provides information on how to estimate the cost of using Log Analytics to monitor Azure Private 5G Core.
+Log Analytics will ingest an average of 1.4 GB of data a day for each log streamed to it by a single packet core instance. [Monitor usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) provides information on how to estimate the cost of using Log Analytics to monitor Azure Private 5G Core.
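As a rough worked example of what that ingestion rate means for cost, the sketch below multiplies the per-log figure out to a monthly total. The number of log types and the per-GB price are placeholder assumptions; take the real price from the Azure Monitor pricing page for your region:

```python
# Hedged back-of-envelope estimate; only the 1.4 GB/day figure comes from
# this article. Log count and per-GB price are placeholder assumptions.
gb_per_day_per_log = 1.4
log_types = 3          # hypothetical number of logs streamed by the instance
days = 30
price_per_gb = 2.50    # placeholder; check Azure Monitor pricing for your region

monthly_gb = gb_per_day_per_log * log_types * days
print(f"{monthly_gb:.0f} GB/month, ~${monthly_gb * price_per_gb:.0f}")
```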
## Next steps

- [Enable Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md)
purview Concept Data Owner Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-data-owner-policies.md
A policy published to a data source could contain references to an asset belongi
## Next steps

Check the tutorials on how to create policies in Azure Purview that work on specific data systems such as Azure Storage:
-* [Access provisioning by data owner to Azure Storage datasets](tutorial-data-owner-policies-storage.md)
-* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
+* [Access provisioning by data owner to Azure Storage datasets](how-to-data-owner-policies-storage.md)
+* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
purview Concept Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-self-service-data-access-policy.md
This article helps you understand Azure Purview Self-service data access policy.
## Important limitations
-The self-service data access policy is only supported when the prerequisites mentioned in [data use governance](./tutorial-data-owner-policies-storage.md) are satisfied.
+The self-service data access policy is only supported when the prerequisites mentioned in [data use governance](./how-to-enable-data-use-governance.md#prerequisites) are satisfied.
## Overview
With self-service data access workflow, data consumers can not only find data as
A default self-service data access workflow template is provided with every Azure Purview account. The default template can be amended to add more approvers and/or set the approver's email address. For more details, see [Create and enable self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md).
-Whenever a data consumer requests access to a dataset, the notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from Azure purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. Self-service data access policy gets auto-generated only if the data source is registered for **data use governance**. The pre-requisites mentioned within the [data use governance](./tutorial-data-owner-policies-storage.md) have to be satisfied.
+Whenever a data consumer requests access to a dataset, a notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from the Azure Purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. A self-service data access policy is auto-generated only if the data source is registered for **data use governance**. The prerequisites mentioned in [data use governance](./how-to-enable-data-use-governance.md#prerequisites) have to be satisfied.
## Next steps

If you would like to preview these features in your environment, follow the links below.
-- [Enable data use governance](./tutorial-data-owner-policies-storage.md)
+- [Enable data use governance](./how-to-enable-data-use-governance.md#prerequisites)
- [Create self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md)
- [Working with policies at file level](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
- [Working with policies at folder level](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
purview How To Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-resource-group.md
+
+ Title: Resource group and subscription access provisioning by data owner
+description: Step-by-step guide showing how a data owner can create access policies to resource groups or subscriptions.
+++++ Last updated : 4/08/2022+++
+# Resource group and subscription access provisioning by data owner (preview)
+
+[Policies](concept-data-owner-policies.md) in Azure Purview allow you to enable access to data sources that have been registered to a collection. You can also [register an entire Azure resource group or subscription to a collection](register-scan-azure-multiple-sources.md), which will allow you to scan all available data sources in that resource group or subscription. If you create a single access policy against a registered resource group or subscription, a data owner can enable access to **all** available data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards.
+
+This article describes how a data owner can create a single access policy for **all available** data sources in a subscription or a resource group.
+
+> [!IMPORTANT]
+> Currently, these are the available data sources for access policies:
+> - Blob storage
+> - Azure Data Lake Storage (ADLS) Gen2
+
+## Prerequisites
++
+## Configuration
+
+### Register the subscription or resource group for data use governance
+The subscription or resource group needs to be registered with Azure Purview to later define access policies.
+
+To register your resource, follow the **Prerequisites** and **Register** sections of this guide:
+
+- [Register multiple sources in Azure Purview](register-scan-azure-multiple-sources.md#prerequisites)
+
+After you've registered your resources, you'll need to enable data use governance. Data use governance affects the security of your data, as it allows your users to manage access to resources from within Azure Purview.
+
+To ensure you securely enable data use governance, and follow best practices, follow this guide to enable data use governance for your resource group or subscription:
+
+- [How to enable data use governance](./how-to-enable-data-use-governance.md)
+
+In the end, your resource will have the **Data use governance** toggle set to **Enabled**, as shown in the picture:
++
+## Create and publish a data owner policy
+Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides security group *sg-Finance* *modify* access to resource group *finance-rg*:
++
+>[!Important]
+> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s).
+
+## Additional information
+- Creating a policy at subscription or resource group level will enable the Subjects to access Azure Storage system containers, for example, *$logs*. If this is undesired, first scan the data source and then create finer-grained policies for each (that is, at container or subcontainer level).
+
+### Limits
+The limit for Azure Purview policies that can be enforced by Storage accounts is 100 MB per subscription, which roughly equates to 5000 policies.
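A quick sanity check on those numbers, using only the figures stated above: 100 MB spread across roughly 5,000 policies implies an average policy size of about 20 KB:

```python
# Arithmetic check of the documented limit; both inputs come from this article.
limit_bytes = 100 * 1024 * 1024   # 100 MB enforcement cap per subscription
approx_policies = 5000
print(limit_bytes / approx_policies / 1024)  # ~20 KB average per policy
```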
+
+## Next steps
+Check the blog, demo, and related tutorials:
+
+* [Concepts for Azure Purview data owner policies](./concept-data-owner-policies.md)
+* [Data owner policies on an Azure Storage account](./how-to-data-owner-policies-storage.md)
+* [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
+* [Demo of data owner access policies for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
+
+ Title: Access provisioning by data owner to Azure Storage datasets
+description: Step-by-step guide showing how data owners can create access policies to datasets in Azure Storage
+++++ Last updated : 04/08/2022+++
+# Access provisioning by data owner to Azure Storage datasets (preview)
++
+[Policies](concept-data-owner-policies.md) in Azure Purview allow you to enable access to data sources that have been registered to a collection.
+
+This article describes how a data owner can use Azure Purview to enable access to datasets in Azure Storage. Currently, these Azure Storage sources are supported:
+- Blob storage
+- Azure Data Lake Storage (ADLS) Gen2
+
+## Prerequisites
++
+## Configuration
+
+### Register the data sources in Azure Purview for data use governance
+The Azure Storage resources need to be registered with Azure Purview to later define access policies.
+
+To register your resources, follow the **Prerequisites** and **Register** sections of these guides:
+
+- [Register and scan Azure Storage Blob - Azure Purview](register-scan-azure-blob-storage-source.md#prerequisites)
+
+- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Azure Purview](register-scan-adls-gen2.md#prerequisites)
+
+After you've registered your resources, you'll need to enable data use governance. Data use governance affects the security of your data, as it allows your users to manage access to resources from within Azure Purview.
+
+To ensure you securely enable data use governance, and follow best practices, follow this guide to enable data use governance for your resource group or subscription:
+
+- [How to enable data use governance](./how-to-enable-data-use-governance.md)
+
+In the end, your resource will have the **Data use governance** toggle set to **Enabled**, as shown in the picture:
++
+## Create and publish a data owner policy
+Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides group *Contoso Team* *read* access to Storage account *marketinglake1*:
+++
+>[!Important]
+> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s).
++
+## Additional information
+- Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the App that requests the data must execute a direct access by providing a fully qualified name to the data object. If the App attempts to crawl down the hierarchy starting from the Storage account or Container, and there's no access at that level, the request will fail. The following documents show examples of how to perform a direct access (a short sketch also follows this list). See also blogs in the *Next steps* section of this article.
+ - [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster)
+ - [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob)
+- Creating a policy at Storage account level will enable the Subjects to access system containers, for example *$logs*. If this is undesired, first scan the data source(s) and then create finer-grained policies for each (that is, at container or subcontainer level).
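To illustrate what such a direct access looks like, here's a minimal Python sketch using the `azure-storage-blob` and `azure-identity` packages; the account, container, and blob path are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

# Fully qualified blob URL -- account, container, and path are placeholders.
blob = BlobClient.from_blob_url(
    "https://<account>.blob.core.windows.net/<container>/<folder>/<file>.csv",
    credential=DefaultAzureCredential(),
)
data = blob.download_blob().readall()  # direct read; no hierarchy crawling
```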
++
+### Limits
+- The limit for Azure Purview policies that can be enforced by Storage accounts is 100 MB per subscription, which roughly equates to 5000 policies.
+
+### Known issues
+
+> [!Warning]
+> **Known issues** related to Policy creation
+> - Do not create policy statements based on Azure Purview resource sets. Even if displayed in Azure Purview policy authoring UI, they are not yet enforced. Learn more about [resource sets](concept-resource-sets.md).
+
+### Policy action mapping
+
+This section contains a reference of how actions in Azure Purview data policies map to specific actions in Azure Storage.
+
+| **Azure Purview policy action** | **Data source specific actions** |
+||--|
+|||
+| *Read* |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/read |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
+|||
+| *Modify* |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/read |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/write |
+| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/delete |
+|||
++
+## Next steps
+Check the blog, demo, and related tutorials:
+
+* [Demo of access policy for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Concepts for Azure Purview data owner policies](./concept-data-owner-policies.md)
+* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
+* [Blog: What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
+* [Blog: Accessing data when folder level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
+* [Blog: Accessing data when file level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policy-authoring-generic.md
Steps to update or delete a policy in Azure Purview are as follows.
For specific guides on creating policies, you can follow these tutorials:

-- [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
-- [Enable Azure Purview data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
+- [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
+- [Enable Azure Purview data owner policies on an Azure Storage account](./how-to-data-owner-policies-storage.md)
purview How To Delete Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-delete-self-service-data-access-policy.md
This guide describes how to delete self-service data access policies that have b
Self-service policies must exist for them to be deleted. Refer to the articles below to create self-service policies:

-- [Enable Data Use Governance](./tutorial-data-owner-policies-storage.md)
+- [Enable Data Use Governance](./how-to-enable-data-use-governance.md)
- [Create a self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md)
- [Approve self-service data access request](how-to-workflow-manage-requests-approvals.md)
purview How To Enable Data Use Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-governance.md
To disable data use governance for a source, resource group, or subscription, a
## Next steps

- [Create data owner policies for your resources](how-to-data-owner-policy-authoring-generic.md)
-- [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
-- [Enable Azure Purview data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
+- [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
+- [Enable Azure Purview data owner policies on an Azure Storage account](./how-to-data-owner-policies-storage.md)
purview How To Monitor With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-with-azure-monitor.md
Last updated 04/07/2022+

# Azure Purview metrics in Azure Monitor
To add a user to the **Monitoring Reader** role, the owner of Azure Purview acco
1. Go to the [Azure portal](https://portal.azure.com) and search for the Azure Purview account name.
-2. Select **Access control (IAM)**.
+1. Select **Access control (IAM)**.
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/access-iam.png" alt-text="Screenshot showing how to access IAM.":::
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-3. Select **Add a role assignment**.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/add-role-assignment.png" alt-text="Screenshot showing how to add role assignment.":::
+ | Setting | Value |
+ | --- | --- |
+ | Role | Monitoring Reader |
+ | Assign access to | User, group, or service principal |
+ | Members | &lt;Azure AD account user&gt; |
-4. Select the Role **Monitoring Reader** and set assign access to **Azure AD user, group, or service principal**. And assign the AAD account to access the metrics.
-
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/add-monitoring-reader.png" alt-text="Screenshot showing how to add monitoring reader role.":::
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot showing Add role assignment page in Azure portal.":::
## Metrics visualization
purview How To View Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-view-self-service-data-access-policy.md
This guide describes how to view self-service data access policies that have bee
Self-service policies must exist for them to be viewed. Refer to the articles below to create self-service policies:

-- [Enable Data Use Governance](./tutorial-data-owner-policies-storage.md)
+- [Enable Data Use Governance](./how-to-enable-data-use-governance.md)
- [Create a self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md)
- [Approve self-service data access request](how-to-workflow-manage-requests-approvals.md)
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
To enable data use governance, follow [the data use governance guide](how-to-ena
Now that you've prepared your storage account and environment for access policies, you can follow one of these configuration guides to create your policies:
-* [Single storage account](./tutorial-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single storage account in your subscription.
-* [All sources in a subscription or resource group](./tutorial-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
+* [Single storage account](./how-to-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single storage account in your subscription.
+* [All sources in a subscription or resource group](./how-to-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
Or you can follow the [generic guide for creating data access policies](how-to-data-owner-policy-authoring-generic.md).
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
To enable data use governance, follow [the data use governance guide](how-to-ena
Now that you've prepared your storage account and environment for access policies, you can follow one of these configuration guides to create your policies:
-* [Single storage account](./tutorial-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single storage account in your subscription.
-* [All sources in a subscription or resource group](./tutorial-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
+* [Single storage account](./how-to-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single storage account in your subscription.
+* [All sources in a subscription or resource group](./how-to-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
Or you can follow the [generic guide for creating data access policies](how-to-data-owner-policy-authoring-generic.md).
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
This article outlines how to register multiple Azure sources and how to authenti
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
|---|---|---|---|---|---|---|
-| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| [Yes](tutorial-data-owner-policies-resource-group.md) | [Source Dependant](catalog-lineage-user-guide.md)|
+| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| [Yes](how-to-data-owner-policies-resource-group.md) | [Source Dependant](catalog-lineage-user-guide.md)|
## Prerequisites
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-resource-group.md
- Title: Resource group and subscription access provisioning by data owner
-description: Step-by-step guide showing how a data owner can create access policies to resource groups or subscriptions.
----- Previously updated : 3/14/2022---
-# Tutorial: Resource group and subscription access provisioning by data owner (preview)
-
-This tutorial describes how a data owner can leverage Azure Purview to enable access to ALL data sources in a subscription or a resource group. This can be achieved through a single policy statement, and will cover all existing data sources, as well as data sources that are created afterwards. However, at this point, only the following data sources are supported:
-- Blob storage-- Azure Data Lake Storage (ADLS) Gen2-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
-> * Prerequisites
-> * Configure permissions
-> * Register a data asset for Data use governance
-> * Create and publish a policy
-
-## Prerequisites
--
-## Configuration
-
-### Register the subscription or resource group in Azure Purview for Data use governance
-The subscription or resource group needs to be registered with Azure Purview to later define access policies. You can follow this guide:
--- [Register multiple sources - Azure Purview](register-scan-azure-multiple-sources.md)-
-Follow this link to [Enable the resource group or subscription for access policies](./how-to-enable-data-use-governance.md) in Azure Purview by setting the **Data use governance** toggle to **Enabled**, as shown in the picture.
-
-![Image shows how to register a resource group or subscription for policy.](./media/tutorial-data-owner-policies-resource-group/register-resource-group-for-policy.png)
-
-## Create and publish a data owner policy
-Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides security group *sg-Finance* *modify* access to resource group *finance-rg*:
-
-![Image shows a sample data owner policy giving access to a resource group.](./media/tutorial-data-owner-policies-resource-group/data-owner-policy-example-resource-group.png)
-
->[!Important]
-> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s).
-
-## Additional information
-- Creating a policy at subscription or resource group level will enable the Subjects to access Azure Storage system containers e.g., *$logs*. If this is undesired, first scan the data source and then create finer-grained policies for each (i.e., at container or sub-container level).-
-### Limits
-The limit for Azure Purview policies that can be enforced by Storage accounts is 100MB per subscription, which roughly equates to 5000 policies.
-
-## Next steps
-Check blog, demo and related tutorials
-
-* [Concepts for Azure Purview data owner policies](./concept-data-owner-policies.md)
-* [Data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
-* [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
-* [Demo of data owner access policies for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
Title: Access provisioning by data owner to Azure Storage datasets
-description: Step-by-step guide showing how data owners can create access policies to datasets in Azure Storage
--
+ Title: Tutorial to provision access for Azure Storage
+description: This tutorial describes how a data owner can create access policies for Azure Storage resources.
++ Previously updated : 03/14/2022- Last updated : 04/08/2022

# Tutorial: Access provisioning by data owner to Azure Storage datasets (preview)

[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-This tutorial describes how a data owner can leverage Azure Purview to enable access to datasets in Azure Storage. At this point, only the following data sources are supported:
-- Blob storage-- Azure Data Lake Storage (ADLS) Gen2
+[Policies](concept-data-owner-policies.md) in Azure Purview allow you to enable access to data sources that have been registered to a collection. This tutorial describes how a data owner can use Azure Purview to enable access to datasets in Azure Storage.
In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Prerequisites
-> * Configure permissions
-> * Register a data asset for Data use governance
-> * Create and publish a policy
+> * Prepare your Azure environment
+> * Configure permissions to allow Azure Purview to connect to your resources
+> * Register your Azure Storage resource for data use governance
+> * Create and publish a policy for your resource group or subscription
## Prerequisites

[!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)] [!INCLUDE [Azure Storage specific pre-requisites](./includes/access-policies-prerequisites-storage.md)]

## Configuration

[!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
-### Register the data sources in Azure Purview for Data use governance
-Register and scan each Storage account with Azure Purview to later define access policies. You can follow these guides:
+### Register the data sources in Azure Purview for data use governance
+
+Your Azure Storage account needs to be registered in Azure Purview to later define access policies, and during registration we'll enable data use governance. **Data use governance** is a feature in Azure Purview that allows users to manage access to a resource from within Azure Purview. This allows you to centralize data discovery and access management; however, it's a feature that directly impacts your data security.
+
+> [!WARNING]
+> Before enabling data use governance for any of your resources, read through our [**data use governance article**](how-to-enable-data-use-governance.md).
+>
+> This article includes data use governance best practices to help you ensure that your information is secure.
++
+To register your resource and enable data use governance, follow these steps:
+
+> [!Note]
+> You need to be an owner of the subscription or resource group to be able to add a managed identity on an Azure resource.
+
+1. From the [Azure portal](https://portal.azure.com), find the Azure Blob storage account that you would like to register.
+
+ :::image type="content" source="media/tutorial-data-owner-policies-storage/register-blob-storage-acct.png" alt-text="Screenshot that shows the storage account":::
+
+1. Select **Access Control (IAM)** in the left navigation and then select **+ Add** --> **Add role assignment**.
+
+ :::image type="content" source="media/tutorial-data-owner-policies-storage/register-blob-access-control.png" alt-text="Screenshot that shows the access control for the storage account":::
+
+1. Set the **Role** to **Storage Blob Data Reader** and enter your _Azure Purview account name_ under the **Select** input box. Then, select **Save** to give this role assignment to your Azure Purview account.
+
+ :::image type="content" source="media/tutorial-data-owner-policies-storage/register-blob-assign-permissions.png" alt-text="Screenshot that shows the details to assign permissions for the Azure Purview account":::
+
+1. If you have a firewall enabled on your Storage account, follow these steps as well:
+ 1. Go into your Azure Storage account in [Azure portal](https://portal.azure.com).
+ 1. Navigate to **Security + networking** > **Networking**.
+ 1. Choose **Selected Networks** under **Allow access from**.
+ 1. In the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account** and select **Save**.
+
+ :::image type="content" source="media/tutorial-data-owner-policies-storage/register-blob-permission.png" alt-text="Screenshot that shows the exceptions to allow trusted Microsoft services to access the storage account.":::
+
+1. Once you have set up authentication for your storage account, go to the [Azure Purview Studio](https://web.purview.azure.com/).
+1. Select **Data Map** on the left menu.
+
+ :::image type="content" source="media/tutorial-data-owner-policies-storage/select-data-map.png" alt-text="Screenshot that shows the far left menu in the Azure Purview Studio open with Data Map highlighted.":::
+
+1. Select **Register**.
+
+ :::image type="content" source="media/tutorial-data-owner-policies-storage/select-register.png" alt-text="Screenshot that shows Azure Purview Studio Data Map sources, with the register button highlighted at the top.":::
+
+1. On **Register sources**, select **Azure Blob Storage**.
+
+ :::image type="content" source="media/tutorial-data-owner-policies-storage/select-azure-blob-storage.png" alt-text="Screenshot that shows the tile for Azure Multiple on the screen for registering multiple sources.":::
+
+1. Select **Continue**.
+1. On the **Register sources (Azure)** screen, do the following:
+ 1. In the **Name** box, enter a friendly name that the data source will be listed with in the catalog.
+ 1. In the **Subscription** dropdown list, select the subscription where your storage account is housed. Then select your storage account under **Storage account name**. In **Select a collection**, select the collection where you'd like to register your Azure Storage account.
+
+ :::image type="content" source="media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Screenshot that shows the boxes for selecting a storage account.":::
+
+ 1. In the **Select a collection** box, select a collection or create a new one (optional).
+ 1. Set the *Data use governance* toggle to **Enabled**, as shown in the image below.
+
+ :::image type="content" source="./media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Screenshot that shows Data use governance toggle set to active on the registered resource page.":::
+
+ >[!TIP]
+ >If the data use governance toggle is greyed out and can't be selected:
+ > 1. Confirm you have followed all prerequisites to enable Data use governance across your resources.
+ > 1. Confirm that you have selected a storage account to be registered.
+ > 1. It may be that this resource is already registered in another Azure Purview account. Hover over it to see the name of the Azure Purview account that has already registered the data resource. Only one Azure Purview account can register a resource for data use governance at a time.
+
+ 1. Select **Register** to register the storage account with Azure Purview with data use governance enabled.
-- [Register and scan Azure Storage Blob - Azure Purview](register-scan-azure-blob-storage-source.md)
+>[!TIP]
+> For more information about data use governance, including best practices or known issues, see our [data use governance article](how-to-enable-data-use-governance.md).
-- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Azure Purview](register-scan-adls-gen2.md)
+## Create a data owner policy
-Follow this link to [Enable the data source for access policies](./how-to-enable-data-use-governance.md) in Azure Purview by setting the **Data use governance** toggle to **Enabled**, as shown in the picture.
+1. Sign in to the [Azure Purview Studio](https://web.purview.azure.com/resource/).
-![Image shows how to register a data source for policy.](./media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png)
+1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
-## Create and publish a data owner policy
-Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides group *Contoso Team* *read* access to Storage account *marketinglake1*:
+1. Select the **New Policy** button in the policy page.
-![Image shows a sample data owner policy giving access to an Azure Storage account.](./media/tutorial-data-owner-policies-storage/data-owner-policy-example-storage.png)
+ :::image type="content" source="./media/access-policies-common/policy-onboard-guide-1.png" alt-text="Data owner can access the Policy functionality in Azure Purview when it wants to create policies.":::
+1. The new policy page will appear. Enter the policy **Name** and **Description**.
+
+1. To add policy statements to the new policy, select the **New policy statement** button. This will bring up the policy statement builder.
+
+ :::image type="content" source="./media/access-policies-common/create-new-policy.png" alt-text="Data owner can create a new policy statement.":::
+
+1. Select the **Effect** button and choose *Allow* from the drop-down list.
+
+1. Select the **Action** button and choose *Read* or *Modify* from the drop-down list.
+
+1. Select the **Data Resources** button to bring up the window to enter Data resource information, which will open to the right.
+
+1. Under the **Data Resources** panel, do one of two things, depending on the granularity of the policy:
+ - To create a broad policy statement that covers an entire data source, resource group, or subscription that was previously registered, use the **Data sources** box and select its **Type**.
+ - To create a fine-grained policy, use the **Assets** box instead. Enter the **Data Source Type** and the **Name** of a previously registered and scanned data source. See example in the image.
+
+ :::image type="content" source="./media/access-policies-common/select-data-source-type.png" alt-text="Screenshot showing the policy editor, with Data Resources selected, and Data source Type highlighted in the data resources menu.":::
+
+1. Select the **Continue** button and traverse the hierarchy to select an underlying data object (for example, a folder or file). Select **Recursive** to apply the policy from that point in the hierarchy down to any child data objects. Then select the **Add** button. This will take you back to the policy editor.
+
+ :::image type="content" source="./media/access-policies-common/select-asset.png" alt-text="Screenshot showing the Select asset menu, and the Add button highlighted.":::
+
+1. Select the **Subjects** button and enter the subject identity as a principal, group, or MSI. Then select the **OK** button. This will take you back to the policy editor.
+
+ :::image type="content" source="./media/access-policies-common/select-subject.png" alt-text="Screenshot showing the Subject menu, with a subject select from the search and the OK button highlighted at the bottom.":::
+
+1. Repeat steps 5 to 11 to enter any more policy statements.
+
+1. Select the **Save** button to save the policy.
+
+ :::image type="content" source="./media/tutorial-data-owner-policies-storage/data-owner-policy-example-storage.png" alt-text="Screenshot showing a sample data owner policy giving access to an Azure Storage account.":::
+
+## Publish a data owner policy
+
+1. Sign in to the [Azure Purview Studio](https://web.purview.azure.com/resource/).
+
+1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
+
+ :::image type="content" source="./media/access-policies-common/policy-onboard-guide-2.png" alt-text="Screenshot showing the Azure Purview studio with the leftmost menu open, Policy Management highlighted, and Data Policies selected on the next page.":::
+
+1. The Policy portal will present the list of existing policies in Azure Purview. Locate the policy that needs to be published, and select the **Publish** button in the top-right corner of the page.
+
+ :::image type="content" source="./media/access-policies-common/publish-policy.png" alt-text="Screenshot showing the policy editing menu with the Publish button highlighted in the top right of the page.":::
+
+1. A list of data sources is displayed. You can enter a name to filter the list. Then, select each data source where this policy is to be published and then select the **Publish** button.
+
+ :::image type="content" source="./media/access-policies-common/select-data-sources-publish-policy.png" alt-text="Screenshot showing with Policy publish menu with a data resource selected and the publish button highlighted.":::
>[!Important] > - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s).
+## Clean up resources
-## Additional information
-- Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the App that requests the data must execute a direct access by providing a fully qualified name to the data object. If the App attempts to crawl down the hierarchy starting from the Storage account or Container, and there is no access at that level, the request will fail. The following documents show examples of how to do perform a direct access. See also blogs in the *Next steps* section of this tutorial.
- - [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster)
- - [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob)
-- Creating a policy at Storage account level will enable the Subjects to access system containers e.g., *$logs*. If this is undesired, first scan the data source(s) and then create finer-grained policies for each (i.e., at container or sub-container level).
+To delete a policy in Azure Purview, follow these steps:
+1. Sign in to the [Azure Purview Studio](https://web.purview.azure.com/resource/).
-### Limits
-- The limit for Azure Purview policies that can be enforced by Storage accounts is 100MB per subscription, which roughly equates to 5000 policies.
+1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
-### Known issues
+ :::image type="content" source="./media/access-policies-common/policy-onboard-guide-2.png" alt-text="Screenshot showing the leftmost menu open, Policy Management highlighted, and Data Policies selected on the next page.":::
-> [!Warning]
-> **Known issues** related to Policy creation
-> - Do not create policy statements based on Azure Purview resource sets. Even if displayed in Azure Purview policy authoring UI, they are not yet enforced. Learn more about [resource sets](concept-resource-sets.md).
+1. The Policy portal will present the list of existing policies in Azure Purview. Select the policy that needs to be updated.
-### Policy action mapping
+1. The policy details page will appear, including Edit and Delete options. Select the **Edit** button, which brings up the policy statement builder. Now, any parts of the statements in this policy can be updated. To delete the policy, use the **Delete** button.
-This section contains a reference of how actions in Azure Purview data policies map to specific actions in Azure Storage.
-
-| **Azure Purview policy action** | **Data source specific actions** |
-||--|
-|||
-| *Read* |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/read |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
-|||
-| *Modify* |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/read |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/write |
-| |<sub>Microsoft.Storage/storageAccounts/blobServices/containers/delete |
-|||
+ :::image type="content" source="./media/access-policies-common/edit-policy.png" alt-text="Screenshot showing an open policy with the Edit button highlighted in the top menu on the page.":::
## Next steps
-Check blog, demo and related tutorials
-
-* [Demo of access policy for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
-* [Concepts for Azure Purview data owner policies](./concept-data-owner-policies.md)
-* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
-* [Blog: What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
-* [Blog: Accessing data when folder level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
-* [Blog: Accessing data when file level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
+
+Check our demo and related tutorials:
+
+> [!div class="nextstepaction"]
+> [Demo of access policy for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+> [Concepts for Azure Purview data owner policies](./concept-data-owner-policies.md)
+> [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
search Search Features List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-features-list.md
The following table summarizes features by category. For more information about
| REST | [**Service REST API**](/rest/api/searchservice/) is for data plane operations, including all operations related to indexing, queries, and AI enrichment. You can also use this client library to retrieve system information and statistics. <br/><br/>[**Management REST API**](/rest/api/searchmanagement/) is for service creation and provisioning through Azure Resource Manager. You can also use this API to manage keys and capacity.| | Azure SDK for .NET | [**Azure.Search.Documents**](/dotnet/api/overview/azure/search.documents-readme) is for data plane operations, including all operations related to indexing, queries, and AI enrichment. You can also use this client library to retrieve system information and statistics. <br/><br/>[**Microsoft.Azure.Management.Search**](/dotnet/api/microsoft.azure.management.search) is for service creation and provisioning through Azure Resource Manager. You can also use this API to manage keys and capacity.| | Azure SDK for Java | [**com.azure.search.documents**](/java/api/com.azure.search.documents) is for data plane operations, including all operations related to indexing, queries, and AI enrichment. You can also use this client library to retrieve system information and statistics. <br/><br/>[**com.microsoft.azure.management.search**](/java/api/overview/azure/search/management) is for service creation and provisioning through Azure Resource Manager. You can also use this API to manage keys and capacity.|
-| Azure SDK for Python | [**azure-search-documents**](/python/api/overview/azure/search-documents-readme) is for data plane operations, including all operations related to indexing, queries, and AI enrichment. You can also use this client library to retrieve system information and statistics. <br/><br/>[**azure-mgmt-search**](/python/api/overview/azure/search/management) is for service creation and provisioning through Azure Resource Manager. You can also use this API to manage keys and capacity. |
+| Azure SDK for Python | [**azure-search-documents**](/python/api/overview/azure/search-documents-readme) is for data plane operations, including all operations related to indexing, queries, and AI enrichment. You can also use this client library to retrieve system information and statistics. <br/><br/>[**azure-mgmt-search**](/python/api/azure-mgmt-search/) is for service creation and provisioning through Azure Resource Manager. You can also use this API to manage keys and capacity. |
| Azure SDK for JavaScript/TypeScript | [**azure/search-documents**](/javascript/api/@azure/search-documents/) is for data plane operations, including all operations related to indexing, queries, and AI enrichment. You can also use this client library to retrieve system information and statistics. <br/><br/>[**azure/arm-search**](/javascript/api/@azure/arm-search/) is for service creation and provisioning through Azure Resource Manager. You can also use this API to manage keys and capacity. | ## See also
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
Title: Create an index alias
-description: Create an alias to define a secondary name that can be used to refer to an index for querying, indexing, and other operations.
+description: Create an alias to define a secondary name that can be used to refer to an index for querying and indexing.
To create an alias in Visual Studio Code:
## Send requests
-Once you've created your alias, you're ready to start using it. Aliases can be used for all [document operations](/rest/api/searchservice/document-operations) including querying, indexing, suggestions, and autocomplete.
+Once you've created your alias, you're ready to start using it. Aliases can be used for [querying](/rest/api/searchservice/search-documents) and [indexing](/rest/api/searchservice/addupdate-or-delete-documents).
In the query below, instead of sending the request to `hotel-samples-index`, you can instead send the request to `my-alias` and it will be routed accordingly.
POST /indexes/my-alias/docs/search?api-version=2021-04-30-preview
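Aliases work the same way for indexing requests. As a hedged sketch (the `HotelId` and `HotelName` fields and document values here are hypothetical, not from the article), an upload through the alias would look like:

```http
POST /indexes/my-alias/docs/index?api-version=2021-04-30-preview
{
  "value": [
    {
      "@search.action": "upload",
      "HotelId": "100",
      "HotelName": "Example Hotel"
    }
  ]
}
```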
If you expect that you may need to make updates to your index definition for your production indexes, you should use an alias rather than the index name for requests in your client-side application. Scenarios that require you to create a new index are outlined under these [rebuild conditions](search-howto-reindex.md#rebuild-conditions). > [!NOTE]
-> You can only use an alias with [document operations](/rest/api/searchservice/document-operations). Aliases can't be used to get or update an index definition, can't be used with the Analyze Text API, and can't be used as the `targetIndexName` on an indexer.
+> You can only use an alias for [querying](/rest/api/searchservice/search-documents) and [indexing](/rest/api/searchservice/addupdate-or-delete-documents). Aliases can't be used to get or update an index definition, can't be used with the Analyze Text API, and can't be used as the `targetIndexName` on an indexer.
## Swap indexes
sentinel Authentication Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/authentication-normalization-schema.md
The following filtering parameters are available:
| Name | Type | Description | |-|--|-|
-| **starttime** | datetime | Filter only DNS queries that ran at or after this time. |
-| **endtime** | datetime | Filter only DNS queries that finished running at or before this time. |
+| **starttime** | datetime | Filter only authentication events that ran at or after this time. |
+| **endtime** | datetime | Filter only authentication events that finished running at or before this time. |
| **targetusername_has** | string | Filter only authentication events that have any of the listed user names. |
-For example, to filter only DNS queries from the last day to a specific user, use:
+For example, to filter only authentication events from the last day to a specific user, use:
```kql imAuthentication (targetusername_has = 'johndoe', starttime = ago(1d), endtime=now())
sentinel Best Practices Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-workspace-architecture.md
# Microsoft Sentinel workspace architecture best practices - When planning your Microsoft Sentinel workspace deployment, you must also design your Log Analytics workspace architecture. Decisions about the workspace architecture are typically driven by business and technical requirements. This article reviews key decision factors to help you determine the right workspace architecture for your organization, including: - Whether you'll use a single tenant or multiple tenants
For more information, see [Design your Microsoft Sentinel workspace architecture
See our video: [Architecting SecOps for Success: Best Practices for Deploying Microsoft Sentinel](https://youtu.be/DyL9MEMhqmI) - ## Tenancy considerations While fewer workspaces are simpler to manage, you may have specific needs for multiple tenants and workspaces. For example, many organizations have a cloud environment that contains multiple [Azure Active Directory (Azure AD) tenants](../active-directory/develop/quickstart-create-new-tenant.md), resulting from mergers and acquisitions or due to identity separation requirements. When determining how many tenants and workspaces to use, consider that most Microsoft Sentinel features operate by using a single workspace or Microsoft Sentinel instance, and Microsoft Sentinel ingests all logs housed within the workspace.
-> [!IMPORTANT]
-> Costs are one of the main considerations when determining Microsoft Sentinel architecture. For more information, see [Microsoft Sentinel costs and billing](billing.md).
->
+Costs are one of the main considerations when determining Microsoft Sentinel architecture. For more information, see [Microsoft Sentinel costs and billing](billing.md).
+ ### Working with multiple tenants If you have multiple tenants, such as if you're a managed security service provider (MSSP), we recommend that you create at least one workspace for each Azure AD tenant to support built-in, [service to service data connectors](connect-data-sources.md#service-to-service-integration) that work only within their own Azure AD tenant.
Use [Azure Lighthouse](../lighthouse/how-to/onboard-customer.md) to help manage
> [!NOTE] > [Partner data connectors](data-connectors-reference.md) are often based on API or agent collections, and therefore are not attached to a specific Azure AD tenant.
->
---
+> >
## Compliance considerations After your data is collected, stored, and processed, compliance can become an important design requirement, with a significant impact on your Microsoft Sentinel architecture. Having the ability to validate and prove who has access to what data under all conditions is a critical data sovereignty requirement in many countries and regions, and assessing risks and getting insights in Microsoft Sentinel workflows is a priority for many customers.
To start validating your compliance, assess your data sources, and how and where
> [!NOTE] > The [Log Analytics agent](connect-windows-security-events.md) supports TLS 1.2 to ensure data security in transit between the agent and the Log Analytics service, as well as the FIPS 140 standard.
->
+> >
> If you are sending data to a geography or region that is different from your Microsoft Sentinel workspace, regardless of whether or not the sending resource resides in Azure, consider using a workspace in the same geography or region.
->
-
+> >
## Region considerations Use separate Microsoft Sentinel instances for each region. While Microsoft Sentinel can be used in multiple regions, you may have requirements to separate data by team, region, or site, or regulations and controls that make multi-region models impossible or more complex than needed. Using separate instances and workspaces for each region helps to avoid bandwidth / egress costs for moving data across regions.
Consider the following when working with multiple regions:
- Bandwidth costs vary depending on the source and destination region and collection method. For more information, see:
- - [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/)
- - [Data transfers charges using Log Analytics ](../azure-monitor/usage-estimated-costs.md#data-transfer-charges).
+ - [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/)
+ - [Data transfers charges using Log Analytics ](../azure-monitor/usage-estimated-costs.md#data-transfer-charges).
- Use templates for your analytics rules, custom queries, workbooks, and other resources to make your deployments more efficient. Deploy the templates instead of manually deploying each resource in each region.
For more information, see [Permissions in Microsoft Sentinel](roles.md).
The following image shows a simplified version of a workspace architecture where security and operations teams need access to different sets of data, and resource-context RBAC is used to provide the required permissions. - [ ![Diagram of a sample architecture for resource-context RBAC.](media/resource-context-rbac/resource-context-rbac-sample.png) ](media/resource-context-rbac/resource-context-rbac-sample.png#lightbox) In this image, the Microsoft Sentinel workspace is placed in a separate subscription to better isolate permissions. > [!NOTE] > Another option would be to place Microsoft Sentinel under a separate management group that's dedicated to security, which would ensure that only minimal permission assignments are inherited. Within the security team, several groups are assigned permissions according to their functions. Because these teams have access to the entire workspace, they'll have access to the full Microsoft Sentinel experience, restricted only by the Microsoft Sentinel roles they're assigned. For more information, see [Permissions in Microsoft Sentinel](roles.md).
->
-
+> >
In addition to the security subscription, a separate subscription is used for the applications teams to host their workloads. The applications teams are granted access to their respective resource groups, where they can manage their resources. This separate subscription and resource-context RBAC allows these teams to view logs generated by any resources they have access to, even when the logs are stored in a workspace where they *don't* have direct access. The applications teams can access their logs via the **Logs** area of the Azure portal, to show logs for a specific resource, or via Azure Monitor, to show all of the logs they can access at the same time. Azure resources have built-in support for resource-context RBAC, but may require additional fine-tuning when working with non-Azure resources. For more information, see [Explicitly configure resource-context RBAC](resource-context-rbac.md#explicitly-configure-resource-context-rbac).
For example, consider if the organization whose architecture is described in the
### Access considerations with multiple workspaces
-If you have different entities, subsidiaries, or geographies within your organization, each with their own security teams that need access to Microsoft Sentinel, use separate workspaces for each entity or subsidiary. Implement the separate workspaces within a single Azure AD tenant, or across multiple tenants using Azure Lighthouse.
+If you have different entities, subsidiaries, or geographies within your organization, each with their own security teams that need access to Microsoft Sentinel, use separate workspaces for each entity or subsidiary. Implement the separate workspaces within a single Azure AD tenant, or across multiple tenants using Azure Lighthouse.
Your central SOC team may also use an additional, optional Microsoft Sentinel workspace to manage centralized artifacts such as analytics rules or workbooks. For more information, see [Simplify working with multiple workspaces](#simplify-working-with-multiple-workspaces). - ## Technical best practices for creating your workspace Use the following best practice guidance when creating the Log Analytics workspace you'll use for Microsoft Sentinel:
Use the following best practice guidance when creating the Log Analytics workspa
- **Use a dedicated workspace cluster if your projected data ingestion is around or more than 1 TB per day**. A [dedicated cluster](../azure-monitor/logs/logs-dedicated-clusters.md) enables you to secure resources for your Microsoft Sentinel data, which enables better query performance for large data sets. Dedicated clusters also provide the option for more encryption and control of your organization's keys.
+Don't apply a resource lock to a Log Analytics workspace you'll use for Microsoft Sentinel. A resource lock on a workspace can cause many Microsoft Sentinel operations to fail.
+ ## Simplify working with multiple workspaces If you do need to work with multiple workspaces, simplify your incident management and investigation by [condensing and listing all incidents from each Microsoft Sentinel instance in a single location](multiple-workspace-view.md).
union Update, workspace("contosoretail-it").Update, workspace("WORKSPACE ID").Up
``` For more information, see [Extend Microsoft Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md).
-## Next steps
-
+## Next steps
> [!div class="nextstepaction"]
->[Design your Microsoft Sentinel workspace architecture](design-your-workspace-architecture.md)
-
+> >[Design your Microsoft Sentinel workspace architecture](design-your-workspace-architecture.md)
> [!div class="nextstepaction"]
->[Microsoft Sentinel sample workspace designs](sample-workspace-designs.md)
-
+> >[Microsoft Sentinel sample workspace designs](sample-workspace-designs.md)
> [!div class="nextstepaction"]
->[On-board Microsoft Sentinel](quickstart-onboard.md)
-
+> >[On-board Microsoft Sentinel](quickstart-onboard.md)
> [!div class="nextstepaction"]
->[Get visibility into alerts](get-visibility.md)
+> >[Get visibility into alerts](get-visibility.md)
+
service-bus-messaging Service Bus Filter Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-filter-examples.md
sys.correlationid like 'abc-%'
> [!NOTE] > - For a list of system properties, see [Messages, payloads, and serialization](service-bus-messages-payloads.md).
-> - Use system property names from [Microsoft.Azure.ServiceBus.Message](/dotnet/api/microsoft.azure.servicebus.message#properties) in your filters even when you use [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage) from the new [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus) namespace to send and receive messages. The `Subject` from [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage) maps to `Label` in [Microsoft.Azure.ServiceBus.Message](/dotnet/api/microsoft.azure.servicebus.message#properties).
+> - Use system property names from [Microsoft.Azure.ServiceBus.Message](/dotnet/api/microsoft.azure.servicebus.message#properties) in your filters even when you use [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage) from the new [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus) namespace to send and receive messages.
+> - `Subject` from [Azure.Messaging.ServiceBus.ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage) maps to `Label` in [Microsoft.Azure.ServiceBus.Message](/dotnet/api/microsoft.azure.servicebus.message#properties).
## Filter on message properties
-Here are the examples of using message properties in a filter. You can access message properties using `user.property-name` or just `property-name`.
+Here are examples of using application or user properties in a filter. You can access application properties set by using [Azure.Messaging.ServiceBus.ServiceBusMessage.ApplicationProperties](/dotnet/api/azure.messaging.servicebus.servicebusmessage.applicationproperties) (latest) or user properties set by [Microsoft.Azure.ServiceBus.Message.UserProperties](/dotnet/api/microsoft.azure.servicebus.message.userproperties) (deprecated) using the syntax `user.property-name` or just `property-name`.
```csharp MessageProperty = 'A'
-SuperHero like 'SuperMan%'
+user.SuperHero like 'SuperMan%'
``` ## Filter on message properties with special characters
filter.Properties["color"] = "Red";
It's equivalent to: `sys.ReplyTo = 'johndoe@contoso.com' AND sys.Label = 'Important' AND color = 'Red'`
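For reference, here's a minimal sketch of attaching that combined correlation filter as a rule with the current `Azure.Messaging.ServiceBus.Administration` client. The topic, subscription, and rule names are placeholders, and `Subject` stands in for `sys.Label`:

```csharp
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<YOUR SERVICE BUS NAMESPACE - CONNECTION STRING>");

// Correlation filter on two system properties plus one application property.
var filter = new CorrelationRuleFilter
{
    ReplyTo = "johndoe@contoso.com",
    Subject = "Important" // Subject maps to sys.Label
};
filter.ApplicationProperties["color"] = "Red";

await adminClient.CreateRuleAsync("<topic>", "<subscription>",
    new CreateRuleOptions("ImportantRedRule", filter));
```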
+## .NET example for creating subscription filters
+Here's a .NET C# example that creates the following Service Bus entities:
+- Service Bus topic named `topicfiltersampletopic`
+- Subscription to the topic named `AllOrders` with a True Rule filter, which is equivalent to a SQL rule filter with expression `1=1`.
+- Subscription named `ColorBlueSize10Orders` with a SQL filter expression `color='blue' AND quantity=10`
+- Subscription named `ColorRed` with a SQL filter expression `color='red'` and an action
+- Subscription named `HighPriorityRedOrders` with a correlation filter expression `Subject = "red", CorrelationId = "high"`
+
+See the inline code comments for more details.
+
+```csharp
+namespace CreateTopicsAndSubscriptionsWithFilters
+{
+ using Azure.Messaging.ServiceBus.Administration;
+ using System;
+ using System.Threading.Tasks;
+
+ public class Program
+ {
+ // Service Bus Administration Client object to create topics and subscriptions
+ static ServiceBusAdministrationClient adminClient;
+
+ // connection string to the Service Bus namespace
+ static readonly string connectionString = "<YOUR SERVICE BUS NAMESPACE - CONNECTION STRING>";
+
+ // name of the Service Bus topic
+ static readonly string topicName = "topicfiltersampletopic";
+
+ // names of subscriptions to the topic
+ static readonly string subscriptionAllOrders = "AllOrders";
+ static readonly string subscriptionColorBlueSize10Orders = "ColorBlueSize10Orders";
+ static readonly string subscriptionColorRed = "ColorRed";
+ static readonly string subscriptionHighPriorityRedOrders = "HighPriorityRedOrders";
+
+ public static async Task Main()
+ {
+ try
+ {
+
+ Console.WriteLine("Creating the Service Bus Administration Client object");
+ adminClient = new ServiceBusAdministrationClient(connectionString);
+
+ Console.WriteLine($"Creating the topic {topicName}");
+ await adminClient.CreateTopicAsync(topicName);
+
+ Console.WriteLine($"Creating the subscription {subscriptionAllOrders} for the topic with a True filter ");
+ // Create a True Rule filter with an expression that always evaluates to true
+ // It's equivalent to using SQL rule filter with 1=1 as the expression
+ await adminClient.CreateSubscriptionAsync(
+ new CreateSubscriptionOptions(topicName, subscriptionAllOrders),
+ new CreateRuleOptions("AllOrders", new TrueRuleFilter()));
++
+ Console.WriteLine($"Creating the subscription {subscriptionColorBlueSize10Orders} with a SQL filter");
+ // Create a SQL filter with color set to blue and quantity to 10
+ await adminClient.CreateSubscriptionAsync(
+ new CreateSubscriptionOptions(topicName, subscriptionColorBlueSize10Orders),
+ new CreateRuleOptions("BlueSize10Orders", new SqlRuleFilter("color='blue' AND quantity=10")));
+
+ Console.WriteLine($"Creating the subscription {subscriptionColorRed} with a SQL filter");
+ // Create a SQL filter with color equals to red and a SQL action with a set of statements
+ await adminClient.CreateSubscriptionAsync(topicName, subscriptionColorRed);
+ // remove the $Default rule
+ await adminClient.DeleteRuleAsync(topicName, subscriptionColorRed, "$Default");
+ // now create the new rule. notice that user. prefix is used for the user/application property
+ await adminClient.CreateRuleAsync(topicName, subscriptionColorRed, new CreateRuleOptions
+ {
+ Name = "RedOrdersWithAction",
+ Filter = new SqlRuleFilter("user.color='red'"),
+ Action = new SqlRuleAction("SET quantity = quantity / 2; REMOVE priority;SET sys.CorrelationId = 'low';")
+
+ }
+ );
+
+ Console.WriteLine($"Creating the subscription {subscriptionHighPriorityRedOrders} with a correlation filter");
+ // Create a correlation filter with color set to Red and priority set to High
+ await adminClient.CreateSubscriptionAsync(
+ new CreateSubscriptionOptions(topicName, subscriptionHighPriorityRedOrders),
+ new CreateRuleOptions("HighPriorityRedOrders", new CorrelationRuleFilter() {Subject = "red", CorrelationId = "high"} ));
+
+ // delete resources
+ //await adminClient.DeleteTopicAsync(topicName);
+ }
+ catch (Exception e)
+ {
+ Console.WriteLine(e.ToString());
+ }
+ }
+ }
+}
+```
+
+## .NET example for sending and receiving messages
+
+```csharp
+namespace SendAndReceiveMessages
+{
+ using System;
+ using System.Text;
+ using System.Threading.Tasks;
+ using Azure.Messaging.ServiceBus;
+ using Newtonsoft.Json;
+
+ public class Program
+ {
+ const string TopicName = "TopicFilterSampleTopic";
+ const string SubscriptionAllMessages = "AllOrders";
+ const string SubscriptionColorBlueSize10Orders = "ColorBlueSize10Orders";
+ const string SubscriptionColorRed = "ColorRed";
+ const string SubscriptionHighPriorityOrders = "HighPriorityRedOrders";
+
+ // connection string to your Service Bus namespace
+ static string connectionString = "<YOUR SERVICE BUS NAMESPACE - CONNECTION STRING>";
+
+ // the client that owns the connection and can be used to create senders and receivers
+ static ServiceBusClient client;
+
+ // the sender used to publish messages to the topic
+ static ServiceBusSender sender;
+
+ // the receiver used to receive messages from the subscription
+ static ServiceBusReceiver receiver;
+
+ public async Task SendAndReceiveTestsAsync(string connectionString)
+ {
+ // This sample demonstrates how to use advanced filters with ServiceBus topics and subscriptions.
+ // The sample creates a topic and 3 subscriptions with different filter definitions.
+ // Each receiver will receive matching messages depending on the filter associated with a subscription.
+
+ // Send sample messages.
+ await this.SendMessagesToTopicAsync(connectionString);
+
+ // Receive messages from subscriptions.
+ await this.ReceiveAllMessageFromSubscription(connectionString, SubscriptionAllMessages);
+ await this.ReceiveAllMessageFromSubscription(connectionString, SubscriptionColorBlueSize10Orders);
+ await this.ReceiveAllMessageFromSubscription(connectionString, SubscriptionColorRed);
+ await this.ReceiveAllMessageFromSubscription(connectionString, SubscriptionHighPriorityOrders);
+ }
++
+ async Task SendMessagesToTopicAsync(string connectionString)
+ {
+ // Create the clients that we'll use for sending and processing messages.
+ client = new ServiceBusClient(connectionString);
+ sender = client.CreateSender(TopicName);
+
+ Console.WriteLine("\nSending orders to topic.");
+
+ // Now we can start sending orders.
+ await Task.WhenAll(
+ SendOrder(sender, new Order()),
+ SendOrder(sender, new Order { Color = "blue", Quantity = 5, Priority = "low" }),
+ SendOrder(sender, new Order { Color = "red", Quantity = 10, Priority = "high" }),
+ SendOrder(sender, new Order { Color = "yellow", Quantity = 5, Priority = "low" }),
+ SendOrder(sender, new Order { Color = "blue", Quantity = 10, Priority = "low" }),
+ SendOrder(sender, new Order { Color = "blue", Quantity = 5, Priority = "high" }),
+ SendOrder(sender, new Order { Color = "blue", Quantity = 10, Priority = "low" }),
+ SendOrder(sender, new Order { Color = "red", Quantity = 5, Priority = "low" }),
+ SendOrder(sender, new Order { Color = "red", Quantity = 10, Priority = "low" }),
+ SendOrder(sender, new Order { Color = "red", Quantity = 5, Priority = "low" }),
+ SendOrder(sender, new Order { Color = "yellow", Quantity = 10, Priority = "high" }),
+ SendOrder(sender, new Order { Color = "yellow", Quantity = 5, Priority = "low" }),
+ SendOrder(sender, new Order { Color = "yellow", Quantity = 10, Priority = "low" })
+ );
+
+ Console.WriteLine("All messages sent.");
+ }
+
+ async Task SendOrder(ServiceBusSender sender, Order order)
+ {
+ var message = new ServiceBusMessage(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(order)))
+ {
+ CorrelationId = order.Priority,
+ Subject = order.Color,
+ ApplicationProperties =
+ {
+ { "color", order.Color },
+ { "quantity", order.Quantity },
+ { "priority", order.Priority }
+ }
+ };
+ await sender.SendMessageAsync(message);
+
+ Console.WriteLine("Sent order with Color={0}, Quantity={1}, Priority={2}", order.Color, order.Quantity, order.Priority);
+ }
+
+ async Task ReceiveAllMessageFromSubscription(string connectionString, string subsName)
+ {
+ var receivedMessages = 0;
+
+ receiver = client.CreateReceiver(TopicName, subsName, new ServiceBusReceiverOptions() { ReceiveMode = ServiceBusReceiveMode.ReceiveAndDelete } );
+
+ // Create a receiver from the subscription client and receive all messages.
+ Console.WriteLine("\nReceiving messages from subscription {0}.", subsName);
+
+ while (true)
+ {
+ var receivedMessage = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(10));
+ if (receivedMessage != null)
+ {
+ foreach (var prop in receivedMessage.ApplicationProperties)
+ {
+ Console.Write("{0}={1},", prop.Key, prop.Value);
+ }
+ Console.WriteLine("CorrelationId={0}", receivedMessage.CorrelationId);
+ receivedMessages++;
+ }
+ else
+ {
+ // No more messages to receive.
+ break;
+ }
+ }
+ Console.WriteLine("Received {0} messages from subscription {1}.", receivedMessages, subsName);
+ }
+
+ public static async Task Main()
+ {
+ try
+ {
+ Program app = new Program();
+ await app.SendAndReceiveTestsAsync(connectionString);
+ }
+ catch (Exception e)
+ {
+ Console.WriteLine(e.ToString());
+ }
+ }
+ }
+
+ class Order
+ {
+ public string Color
+ {
+ get;
+ set;
+ }
+
+ public int Quantity
+ {
+ get;
+ set;
+ }
+
+ public string Priority
+ {
+ get;
+ set;
+ }
+ }
+}
+```
## Next steps
service-fabric Service Fabric Cluster Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-capacity.md
The number of initial node types depends upon the purpose of your cluster and th
* ***Will your cluster span across Availability Zones?***
- Service Fabric supports clusters that span across [Availability Zones](../availability-zones/az-overview.md) by deploying node types that are pinned to specific zones, ensuring high-availability of your applications. Availability Zones require additional node type planning and minimum requirements. For details, see [Recommended topology for spanning a primary node type across Availability Zones](service-fabric-cross-availability-zones.md#recommended-topology-for-spanning-a-primary-node-type-across-availability-zones).
+ Service Fabric supports clusters that span across [Availability Zones](../availability-zones/az-overview.md) by deploying node types that are pinned to specific zones, ensuring high-availability of your applications. Availability Zones require additional node type planning and minimum requirements. For details, see [Topology for spanning a primary node type across Availability Zones](service-fabric-cross-availability-zones.md#topology-for-spanning-a-primary-node-type-across-availability-zones).
When determining the number and properties of node types for the initial creation of your cluster, keep in mind that you can always add, modify, or remove (non-primary) node types once your cluster is deployed. [Primary node types can also be scaled up or down](service-fabric-scale-up-primary-node-type.md) in running clusters, though to do so you will need to create a new node type, move the workload over, and then remove the original primary node type.
service-fabric Service Fabric Cross Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cross-availability-zones.md
To support clusters that span across Availability Zones, Azure Service Fabric pr
Sample templates are available at [Service Fabric cross-Availability Zone templates](https://github.com/Azure-Samples/service-fabric-cluster-templates).
-## Recommended topology for spanning a primary node type across Availability Zones
+## Topology for spanning a primary node type across Availability Zones
+
+>[!NOTE]
+>The benefit of spanning the primary node type across availability zones is only realized with three zones, not just two.
* The cluster reliability level set to `Platinum` * A single public IP resource using Standard SKU
spring-cloud How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-config-server.md
Now that your configuration files are saved in a repository, you need to connect
> [!CAUTION] > Some Git repository servers use a *personal-token* or an *access-token*, such as a password, for **Basic Authentication**. You can use that kind of token as a password in Azure Spring Cloud, because it will never expire. But for other Git repository servers, such as Bitbucket and Azure DevOps Server, the *access-token* expires in one or two hours. This means that the option isn't viable when you use those repository servers with Azure Spring Cloud.
- > GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for Github. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
+ > GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
* **SSH**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the **Authentication** ("pencil" icon) button. In the **Edit Authentication** pane, in the **Authentication type** drop-down list, select **SSH**, and then enter your **Private key**. Optionally, specify your **Host key** and **Host key algorithm**. Be sure to include your public key in your Config Server repository. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.
spring-cloud How To Enable Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-availability-zone.md
+
+ Title: Create an Azure Spring Cloud instance with availability zone enabled
+
+description: How to create an Azure Spring Cloud instance with availability zone enabled.
++++ Last updated : 04/14/2022++
+# Create Azure Spring Cloud instance with availability zone enabled
++
+**This article applies to:** ✔️ Standard tier ✔️ Enterprise tier
+
+> [!NOTE]
+> This feature is not available in Basic tier.
+
+This article explains availability zones in Azure Spring Cloud, and how to enable them.
+
+In Microsoft Azure, [Availability Zones (AZ)](../availability-zones/az-overview.md) are unique physical locations within an Azure region. Each zone is made up of one or more data centers that are equipped with independent power, cooling, and networking. Availability zones protect your applications and data from data center failures.
+
+When a service in Azure Spring Cloud has availability zone enabled, Azure automatically spreads the application's deployment instances across all three zones in the selected region. If the deployment's instance count is larger than three and divisible by three, the instances are spread evenly across the zones. Otherwise, the extra instances are spread across the remaining zones. For example, a deployment with seven instances is spread 3-2-2 across the three zones.
+
+## How to create an instance in Azure Spring Cloud with availability zone enabled
+
+>[!NOTE]
+> You can only enable availability zone when creating your instance. You can't enable or disable availability zone after creation of the service instance.
+
+You can enable availability zone in Azure Spring Cloud using the [Azure CLI](/cli/azure/install-azure-cli) or [Azure portal](https://portal.azure.com).
+
+# [Azure CLI](#tab/azure-cli)
+
+To create a service in Azure Spring Cloud with availability zone enabled using the Azure CLI, include the `--zone-redundant` parameter when you create your service in Azure Spring Cloud.
+
+```azurecli
+az spring-cloud create --name <MyService> \
+    --resource-group <MyResourceGroup> \
+    --location <MyLocation> \
+    --zone-redundant true
+```
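+
+After the instance is created, you can confirm the setting from the CLI. This is a hedged sketch that assumes the service resource exposes a `properties.zoneRedundant` flag:
+
+```azurecli
+az spring-cloud show --name <MyService> \
+    --resource-group <MyResourceGroup> \
+    --query "properties.zoneRedundant"
+```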
+
+# [Azure portal](#tab/portal)
+
+To create a service in Azure Spring Cloud with availability zone enabled using the Azure portal, enable the Zone Redundant option when creating the instance.
+
+![Image of where to enable availability zone using the portal.](media/spring-cloud-availability-zone/availability-zone-portal.png)
+++
+## Region availability
+
+Azure Spring Cloud currently supports availability zones in the following regions:
+- Central US
+- West US 2
+- East US
+- Australia East
+- North Europe
+- East US 2
+- West Europe
+- South Central US
+- UK South
+- Brazil South
+- France Central
+
+## Pricing
+
+There's no extra cost for enabling the availability zone.
+
+## Next steps
+
+* [Plan for disaster recovery](disaster-recovery.md)
spring-cloud How To Maven Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-maven-deploy-apps.md
+
+ Title: "Tutorial: Deploy Spring Boot applications using Maven"
+
+description: Use Maven to deploy applications to Azure Spring Cloud.
++++ Last updated : 04/07/2022+++
+# Deploy Spring Boot applications using Maven
+
+**This article applies to:** ✔️ Java ❌ C#
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article shows you how to use the Azure Spring Cloud Maven plugin to configure and deploy applications to Azure Spring Cloud.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An already provisioned Azure Spring Cloud instance.
+* [JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install)
+* [Apache Maven](https://maven.apache.org/download.cgi)
+* [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) with the Azure Spring Cloud extension. You can install the extension by using the following command: `az extension add --name spring-cloud`
+
+## Generate a Spring Cloud project
+
+To create a Spring Cloud project for use in this article, use the following steps:
+
+1. Navigate to [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.5.7&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with the recommended dependencies for Azure Spring Cloud. This link uses the following URL to provide default settings for you.
+
+ ```url
+ https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.5.7&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
+ ```
+
+ The following image shows the recommended Spring Initializr setup for this sample project.
+
+ :::image type="content" source="media/how-to-maven-deploy-apps/initializr-page.png" alt-text="Screenshot of Spring Initializr.":::
+
+ This example uses Java version 8. If you want to use Java version 11, change the option under **Project Metadata**.
+
+1. Select **Generate** when all the dependencies are set.
+1. Download and unpack the package, then create a web controller for a web application. Add the file *src/main/java/com/example/hellospring/HelloController.java* with the following contents:
+
+ ```java
+ package com.example.hellospring;
+
+ import org.springframework.web.bind.annotation.RestController;
+ import org.springframework.web.bind.annotation.RequestMapping;
+
+ @RestController
+ public class HelloController {
+
+ @RequestMapping("/")
+ public String index() {
+ return "Greetings from Azure Spring Cloud!";
+ }
+
+ }
+ ```
+
+## Build the Spring applications locally
+
+To build the project by using Maven, run the following commands:
+
+```bash
+cd hellospring
+mvn clean package -DskipTests -Denv=cloud
+```
+
+Compiling the project takes several minutes. After it's completed, you should have individual JAR files for each service in their respective folders.
+
+## Provision an instance of Azure Spring Cloud
+
+The following procedure creates an instance of Azure Spring Cloud using the Azure portal.
+
+1. In a new tab, open the [Azure portal](https://portal.azure.com/).
+
+2. From the top search box, search for **Azure Spring Cloud**.
+
+3. Select **Azure Spring Cloud** from the results.
+
+ ![ASC icon start](media/spring-cloud-quickstart-launch-app-portal/find-spring-cloud-start.png)
+
+4. On the Azure Spring Cloud page, select **Create**.
+
+ ![ASC icon add](media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png)
+
+5. Fill out the form on the Azure Spring Cloud **Create** page. Consider the following guidelines:
+
+ - **Subscription**: Select the subscription you want to be billed for this resource.
+ - **Resource group**: Creating new resource groups for new resources is a best practice. You will use this resource group in later steps as **\<resource group name\>**.
+ - **Service Details/Name**: Specify the **\<service instance name\>**. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+ - **Location**: Select the region for your service instance.
+
+ ![ASC portal start](media/spring-cloud-quickstart-launch-app-portal/portal-start.png)
+
+6. Select **Review and create**.
++
+## Generate configurations and deploy to Azure Spring Cloud
+
+To generate configurations and deploy the app, follow these steps:
+
+1. Run the following command from the *hellospring* root folder, which contains the POM file. If you've already signed in with Azure CLI, the command will automatically pick up the credentials. Otherwise, the command will prompt you with sign-in instructions. For more information, see [Authentication](https://github.com/microsoft/azure-maven-plugins/wiki/Authentication) in the [azure-maven-plugins](https://github.com/microsoft/azure-maven-plugins) repository on GitHub.
+
+ ```bash
+ mvn com.microsoft.azure:azure-spring-cloud-maven-plugin:1.7.0:config
+ ```
+
+ You'll be asked to select:
+
+ * **Subscription ID** - the subscription you used to create an Azure Spring Cloud instance.
+ * **Service instance** - the name of your Azure Spring Cloud instance.
+ * **App name** - an app name of your choice, or use the default value `artifactId`.
+ * **Public endpoint** - *true* to expose the app to public access; otherwise, *false*.
+
+1. Verify that the `appName` element in the POM file has the correct value. The relevant portion of the POM file should look similar to the following example.
+
+ ```xml
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>azure-spring-cloud-maven-plugin</artifactId>
+ <version>1.7.0</version>
+ <configuration>
+ <subscriptionId>xxxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx</subscriptionId>
+ <clusterName>v-spr-cld</clusterName>
+ <appName>customers-service</appName>
+ ```
+
+ The POM file now contains the plugin dependencies and configurations.
+
+1. Deploy the app using the following command.
+
+ ```bash
+ mvn azure-spring-cloud:deploy
+ ```
+
+## Verify the services
+
+After deployment has completed, you can access the app at `https://<service instance name>-hellospring.azuremicroservices.io/`.
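+
+For a quick check, request the app root from the command line; it should return the greeting from the `HelloController` created earlier:
+
+```bash
+# Expect "Greetings from Azure Spring Cloud!" in the response.
+curl https://<service instance name>-hellospring.azuremicroservices.io/
+```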
++
+## Clean up resources
+
+If you plan to continue working with the example application, you might want to leave the resources in place. When no longer needed, delete the resource group containing your Azure Spring Cloud instance. To delete the resource group by using Azure CLI, use the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+* [Prepare Spring application for Azure Spring Cloud](how-to-prepare-app-deployment.md)
+* [Learn more about Azure Spring Cloud Maven Plugin](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Spring-Cloud)
spring-cloud Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart.md
In order to deploy to Azure, you must sign in with your Azure account, then choo
1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Cloud app** dialog. The plug-in will run the command `mvn package -DskipTests` on the `hellospring` app and deploy the jar generated by the `package` command. #### [Visual Studio Code](#tab/VS-Code)+ To deploy a simple Spring Boot web app to Azure Spring Cloud, follow the steps in [Build and Deploy Java Spring Boot Apps to Azure Spring Cloud with Visual Studio Code](https://code.visualstudio.com/docs/java/java-spring-cloud#_download-and-test-the-spring-boot-app).
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
description: Learn about change feed logs in Azure Blob Storage and how to use t
Previously updated : 03/29/2022 Last updated : 04/13/2022
The following example shows a change event record in JSON format that uses event
"sequencer": "00000000000000010000000000000002000000000000001d", "previousInfo": { "SoftDeleteSnapshot": "2022-02-17T13:08:42.4825913Z",
- "WasBlobSoftDeleted": true,
+ "WasBlobSoftDeleted": "true",
"BlobVersion": "2024-02-17T16:11:52.0781797Z", "LastVersion" : "2022-02-17T16:11:52.0781797Z", "PreviousTier": "Hot"
The following example shows a change event record in JSON format that uses event
"sequencer": "00000000000000010000000000000002000000000000001d", "previousInfo": { "SoftDeleteSnapshot": "2022-02-17T13:08:42.4825913Z",
- "WasBlobSoftDeleted": true,
+ "WasBlobSoftDeleted": "true",
"BlobVersion": "2024-02-17T16:11:52.0781797Z", "LastVersion" : "2022-02-17T16:11:52.0781797Z", "PreviousTier": "Hot"
The following example shows a change event record in JSON format that uses event
"sequencer": "00000000000000010000000000000002000000000000001d", "previousInfo": { "SoftDeleteSnapshot": "2022-02-17T13:12:11.5726507Z",
- "WasBlobSoftDeleted": true,
+ "WasBlobSoftDeleted": "true",
"BlobVersion": "2024-02-17T16:11:52.0781797Z", "LastVersion" : "2022-02-17T16:11:52.0781797Z", "PreviousTier": "Hot"
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
Title: Authorize data operations
+ Title: Authorize operations for data access
description: Learn about the different ways to authorize access to data in Azure Storage. Azure Storage supports authorization with Azure Active Directory, Shared Key authorization, or shared access signatures (SAS), and also supports anonymous access to blobs.
Previously updated : 11/16/2021 Last updated : 04/14/2022 + # Authorize access to data in Azure Storage
-Each time you access data in your storage account, your client application makes a request over HTTP/HTTPS to Azure Storage. By default, every resource in Azure Storage is secured, and every request to a secure resource must be authorized. Authorization ensures that the client application has the appropriate permissions to access data in your storage account.
+Each time you access data in your storage account, your client application makes a request over HTTP/HTTPS to Azure Storage. By default, every resource in Azure Storage is secured, and every request to a secure resource must be authorized. Authorization ensures that the client application has the appropriate permissions to access a particular resource in your storage account.
+
+## Understand authorization for data operations
The following table describes the options that Azure Storage offers for authorizing access to data:
Each authorization option is briefly described below:
- **Shared Key authorization** for blobs, files, queues, and tables. A client using Shared Key passes a header with every request that is signed using the storage account access key. For more information, see [Authorize with Shared Key](/rest/api/storageservices/authorize-with-shared-key/).
- You can disallow Shared Key authorization for a storage account. When Shared Key authorization is disallowed, clients must use Azure AD to authorize requests for data in that storage account. For more information, see [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md).
+ Microsoft recommends that you disallow Shared Key authorization for your storage account. When Shared Key authorization is disallowed, clients must use Azure AD or a user delegation SAS to authorize requests for data in that storage account. For more information, see [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md).
-- **Shared access signatures** for blobs, files, queues, and tables. Shared access signatures (SAS) provide limited delegated access to resources in a storage account. Adding constraints on the time interval for which the signature is valid or on permissions it grants provides flexibility in managing access. For more information, see [Using shared access signatures (SAS)](storage-sas-overview.md).
+- **Shared access signatures** for blobs, files, queues, and tables. Shared access signatures (SAS) provide limited delegated access to resources in a storage account via a signed URL. The signed URL specifies the permissions granted to the resource and the interval over which the signature is valid. A service SAS or account SAS is signed with the account key, while the user delegation SAS is signed with Azure AD credentials and applies to blobs only. For more information, see [Using shared access signatures (SAS)](storage-sas-overview.md). A sketch of creating a user delegation SAS appears after this list.
- **Anonymous public read access** for containers and blobs. When anonymous access is configured, then clients can read blob data without authorization. For more information, see [Manage anonymous read access to containers and blobs](../blobs/anonymous-read-access-configure.md). You can disallow anonymous public read access for a storage account. When anonymous public read access is disallowed, then users cannot configure containers to enable anonymous access, and all requests must be authorized. For more information, see [Prevent anonymous public read access to containers and blobs](../blobs/anonymous-read-access-prevent.md).
-
+ - **Storage Local Users** can be used to access blobs with SFTP or files with SMB. Storage Local Users support container level permissions for authorization. See [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](../blobs/secure-file-transfer-protocol-support-how-to.md) for more information on how Storage Local Users can be used with SFTP. + ## Next steps - Authorize access with Azure Active Directory to either [blob](../blobs/authorize-access-azure-active-directory.md), [queue](../queues/authorize-access-azure-active-directory.md), or [table](../tables/authorize-access-azure-active-directory.md) resources. - [Authorize with Shared Key](/rest/api/storageservices/authorize-with-shared-key/) - [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md)
-
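To make the user delegation SAS option above concrete, here's a hedged sketch using `Azure.Storage.Blobs` and `Azure.Identity` (the account, container, and blob names are placeholders):

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

// Authenticate to the service with Azure AD credentials.
var serviceClient = new BlobServiceClient(
    new Uri("https://<account>.blob.core.windows.net"),
    new DefaultAzureCredential());

// Request a user delegation key, then use it to sign a one-hour, read-only SAS for a single blob.
var delegationKey = await serviceClient.GetUserDelegationKeyAsync(
    DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddHours(1));

var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "<container>",
    BlobName = "<blob>",
    Resource = "b", // "b" = blob
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);

string sasToken = sasBuilder.ToSasQueryParameters(delegationKey.Value, "<account>").ToString();
```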
storage Manage Storage Analytics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-logs.md
[Azure Storage Analytics](storage-analytics.md) provides logs for blobs, queues, and tables. You can use the [Azure portal](https://portal.azure.com) to configure which logs are recorded for your account. This article shows you how to enable and manage logs. To learn how to enable metrics, see [Enable and manage Azure Storage Analytics metrics (classic)](). There are costs associated with examining and storing monitoring data in the Azure portal. For more information, see [Storage Analytics](storage-analytics.md). > [!NOTE]
-> We recommend that you use Azure Storage logs in Azure Monitor instead of Storage Analytics logs. Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues,and tables. To learn more, see any of the following articles:
+> We recommend that you use Azure Storage logs in Azure Monitor instead of Storage Analytics logs. See any of the following articles:
> > - [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md) > - [Monitoring Azure Files](../files/storage-files-monitoring.md) > - [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md) > - [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
-For an in-depth guide on using Storage Analytics and other tools to identify, diagnose, and troubleshoot Azure Storage-related issues, see [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](storage-monitoring-diagnosing-troubleshooting.md).
- <a id="configure-logging"></a> ## Enable logs
storage Manage Storage Analytics Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-metrics.md
[Azure Storage Analytics](storage-analytics.md) provides metrics for all storage services: blobs, queues, and tables. You can use the [Azure portal](https://portal.azure.com) to configure which metrics are recorded for your account, and configure charts that provide visual representations of your metrics data. This article shows you how to enable and manage metrics. To learn how to enable logs, see [Enable and manage Azure Storage Analytics logs (classic)](manage-storage-analytics-logs.md).
-We recommend you review [Azure Monitor for Storage](./storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json) (preview). It is a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. It does not require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
+We recommend you review [Azure Monitor for Storage](./storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json). It is a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services' performance, capacity, and availability. It does not require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
> [!NOTE] > There are costs associated with examining monitoring data in the Azure portal. For more information, see [Storage Analytics](storage-analytics.md).
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
Last updated 04/01/2022 -+ ms.devlang: azurecli
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-keys-manage.md
Previously updated : 01/25/2022- Last updated : 04/14/2022++
To bring a storage account into compliance, rotate the account access keys.
- [Azure storage account overview](storage-account-overview.md) - [Create a storage account](storage-account-create.md)
+- [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md)
storage Storage Analytics Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-analytics-logging.md
Storage Analytics logs detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis. This means that most requests will result in a log record, but the completeness and timeliness of Storage Analytics logs are not guaranteed. > [!NOTE]
-> We recommend that you use Azure Storage logs in Azure Monitor instead of Storage Analytics logs. Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues,and tables. To learn more, see any of the following articles:
+> We recommend that you use Azure Storage logs in Azure Monitor instead of Storage Analytics logs. To learn more, see any of the following articles:
> > - [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md) > - [Monitoring Azure Files](../files/storage-files-monitoring.md)
storage Storage Configure Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md
Previously updated : 10/14/2020 Last updated : 04/14/2022 -+
storage Storage Use Azcopy Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-files.md
You can synchronize the contents of a local file system with a file share or syn
> [!NOTE] > Currently, this scenario is supported for accounts that have enabled hierarchical namespace via the blob endpoint.
+> [!Warning]
+> AzCopy sync is supported but not fully recommended for Azure Files. AzCopy sync doesn't support differential copies at scale, and some file fidelity might be lost. To learn more, see [Migrate to Azure file shares](https://docs.microsoft.com/azure/storage/files/storage-files-migration-overview#file-copy-tools).
+ ### Guidelines - The [sync](storage-ref-azcopy-sync.md) command compares file names and last modified timestamps. Set the `--delete-destination` optional flag to a value of `true` or `prompt` to delete files in the destination directory if those files no longer exist in the source directory.
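A minimal sketch of the sync scenario described above, assuming a placeholder local path, storage account, share name, and SAS token:

```azcopy
# Sketch: sync a local directory to an Azure file share.
# The local path, <storage-account>, <file-share>, and <SAS-token> are placeholders.
azcopy sync "C:\local\data" "https://<storage-account>.file.core.windows.net/<file-share>?<SAS-token>" --delete-destination=prompt
```

With `--delete-destination=prompt`, AzCopy asks before deleting destination files that no longer exist in the source.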
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Improvements to the Synapse Machine Learning library v0.9.5 (previously called M
### Synapse SQL
-* COPY schema discovery for complex data ingestion. To learn more, see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_12) or [how Github leveraged this functionality in Introducing Automatic Schema Discovery with auto table creation for complex datatypes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/introducing-automatic-schema-discovery-with-auto-table-creation/ba-p/3068927).
+* COPY schema discovery for complex data ingestion. To learn more, see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_12) or [how GitHub leveraged this functionality in Introducing Automatic Schema Discovery with auto table creation for complex datatypes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/introducing-automatic-schema-discovery-with-auto-table-creation/ba-p/3068927).
* Serverless SQL pools now support the HASHBYTES function. HASHBYTES is a T-SQL function that hashes values. Learn how to [use hash values to distribute data](/sql/t-sql/functions/hashbytes-transact-sql), or see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_13).
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
Title: Azure Virtual Desktop session host autoscale preview
description: How to use the autoscale feature to allocate resources in your deployment. Previously updated : 10/19/2021 Last updated : 04/14/2022
Before you create your first scaling plan, make sure you follow these guidelines
To start creating a scaling plan, you'll first need to create a custom Role-based Access Control (RBAC) role in your subscription. This role will allow Azure Virtual Desktop to manage the power state of all VMs in your subscription. It will also let the service apply actions on both host pools and VMs when there are no active user sessions. Creating this RBAC role at any level lower than your subscription, such as at the host pool or VM level, will prevent the autoscale feature from working properly.
+>[!IMPORTANT]
+>You must have global admin permissions in order to assign the RBAC role to the service principal.
+ To create the custom role, follow the instructions in [Azure custom roles](../role-based-access-control/custom-roles.md) while using the following JSON template. This template already includes all the permissions you need. For more detailed instructions, see [Assign custom roles with the Azure portal](#assign-custom-roles-with-the-azure-portal).+ ```json { "properties": {
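Once the JSON template is saved locally, one way to register it is with the Azure CLI. This is a sketch; the file name `avd-autoscale-role.json` is an assumed placeholder:

```azurecli
# Sketch: register the custom role from the saved JSON template.
# avd-autoscale-role.json is an assumed local file name.
az role definition create --role-definition "@avd-autoscale-role.json"
```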
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
Title: Start virtual machine connect - Azure
description: How to configure the start virtual machine on connect feature. Previously updated : 09/17/2021 Last updated : 04/14/2022
The following Remote Desktop clients support the Start VM on Connect feature:
## Create a custom role for Start VM on Connect
-Before you can configure the Start VM on Connect feature, you'll need to assign a subscription-level custom RBAC (role-based access control) role to the Azure Virtual Desktop service principal . This role will let Azure Virtual Desktop manage the VMs in your subscription. This role grants Azure Virtual Desktop the permissions to turn on VMs, check their status, and report diagnostic info. If you want to know more about Azure custom RBAC roles, take a look at [Azure custom roles](../role-based-access-control/custom-roles.md).
+Before you can configure the Start VM on Connect feature, you'll need to assign a subscription-level custom RBAC (role-based access control) role to the Azure Virtual Desktop service principal. This role will let Azure Virtual Desktop manage the VMs in your subscription. This role grants Azure Virtual Desktop the permissions to turn on VMs, check their status, and report diagnostic info. If you want to know more about Azure custom RBAC roles, take a look at [Azure custom roles](../role-based-access-control/custom-roles.md).
+
+>[!IMPORTANT]
+>You must have global admin permissions in order to assign the RBAC role to the service principal.
>[!NOTE] >If your VMs and host pool are in different subscriptions, the RBAC role needs to be created in the subscription that the VMs are in.
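As a sketch of the subscription-scoped assignment, where the object ID, role name, and subscription ID are all placeholders, assigning the custom role to the Azure Virtual Desktop service principal with the Azure CLI might look like:

```azurecli
# Sketch: assign the custom role to the Azure Virtual Desktop service principal
# at subscription scope. All bracketed values are placeholders.
az role assignment create \
    --assignee <avd-service-principal-object-id> \
    --role "<custom-role-name>" \
    --scope "/subscriptions/<subscription-id>"
```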
virtual-desktop Whats New Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-azure-monitor.md
For example, a release with a version number of 1.2.31 is on the first major rel
When one of the numbers is increased, all numbers after it must change, too. One release has one version number. However, not all version numbers track releases. For example, patch numbers can be somewhat arbitrary.
-## Version 1.0.0
+## Version 1.1.10
+
+This update was released in February 2022 and has the following changes:
+
+- We added support for [category groups](../azure-monitor/essentials/diagnostic-settings.md#resource-logs) for resource logs.
+
+## Version 1.1.8
+
+This update was released in November 2021 and has the following changes:
+
+- We added a dynamic check of the host pool and workspace Log Analytics tables to show instances where diagnostics may not be configured.
+- We updated the source table for session history and the calculations for users per core.
+
+## Version 1.1.7
+
+This update was released in November 2021 and has the following changes:
+
+- We increased the session host limit to 1000 for the configuration workbook to allow for larger deployments.
+
+## Version 1.1.6
+
+This update was released in October 2021 and has the following changes:
+
+- We updated content to reflect the change from *Windows Virtual Desktop* to *Azure Virtual Desktop*.
+
+## Version 1.1.4
+
+This update was released in October 2021 and has the following changes:
+
+- We updated data usage reporting in the configuration workbook to include the agent health table.
+
+## Version 1.1.3
+
+This update was released in September 2021 and has the following changes:
+
+- We updated filtering behavior to make use of resource IDs.
+
+## Version 1.1.2
+
+This update was released in August 2021 and has the following changes:
-Release date: March 21st, 2021.
+- We updated some formatting in the workbooks.
+
+## Version 1.1.1
+
+This update was released in July 2021 and has the following changes:
+
+- We added the Workbooks gallery for quick access to Azure Virtual Desktop related Azure workbooks.
+
+## Version 1.1.0
+
+This update was released July 2021 and has the following changes:
+
+- We added a **Data Generated** tab to the configuration workbook that details storage space usage for Azure Virtual Desktop Insights, giving more insight into Log Analytics usage.
+
+## Version 1.0.4
+
+This update was released in June 2021 and has the following changes:
+
+- We made some changes to formatting and layout for better use of whitespace.
+- We changed the sort order for **User Input Delay** details in **Host Performance** to descending.
+
+## Version 1.0.3
+
+This update was released in May 2021 and has the following changes:
+
+- We updated formatting to prevent truncation of text.
+
+## Version 1.0.2
+
+This update was released in May 2021 and has the following changes:
+
+- We resolved an issue with the users per core calculation in the **Utilization** tab.
+
+## Version 1.0.1
+
+This update was released in April 2021 and has the following changes:
+
+- We made a formatting update for columns containing sparklines.
+
+## Version 1.0.0
-In this version, we made the following changes:
+This update was released in March 2021 and has the following changes:
- We introduced a new visual indicator for high-impact errors and warnings from the Azure Virtual Desktop agent event log on the host diagnostics page.
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
Supported distributions and versions:
- OpenSUSE 13.1+ - SUSE Linux Enterprise Server 12 - Debian 9, 8, 7-- Red Hat Enterprise Linux (RHEL) 7, 6.7+
+- Red Hat Enterprise Linux (RHEL) 8, 7, 6.7+
### Prerequisites
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-ps-findimage.md
Another way to find an image in a location is to run the [az vm image list-publi
```azurecli az vm image list \ --location westus \
- --publisher Canonical \
- --offer UbuntuServer \
+ --publisher Canonical \
+ --offer UbuntuServer \
--sku 18.04-LTS \ --all --output table ```
virtual-machines Sap Ascs Ha Multi Sid Wsfc Azure Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md
vm-windows Previously updated : 08/12/2020 Last updated : 04/14/2022
Currently you can use Azure Premium SSD disks as an Azure shared disk for the SA
- Locally redundant storage (LRS) for premium shared disk (skuName - Premium_LRS) is supported with deployment in availability set. - Zone-redundant storage (ZRS) for premium shared disk (skuName - Premium_ZRS) is supported with deployment in availability zones. - Azure shared disk value [maxShares](../../disks-shared-enable.md?tabs=azure-cli#disk-sizes) determines how many cluster nodes can use the shared disk. Typically for SAP ASCS/SCS instance you will configure two nodes in Windows Failover Cluster, therefore the value for `maxShares` must be set to two.-- When using [Azure proximity placement group](../../windows/proximity-placement-groups.md) for SAP system, all virtual machines sharing a disk must be part of the same PPG.
+- An [Azure proximity placement group](../../windows/proximity-placement-groups.md) is not required for an Azure shared disk. But if you are using a PPG for an SAP system, all virtual machines sharing a disk must be part of the same PPG.
For further details on limitations for Azure shared disk, please review carefully the [limitations](../../disks-shared.md#limitations) section of Azure Shared Disk documentation.
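A minimal sketch of creating such a disk with the Azure CLI, assuming placeholder names and an illustrative 512-GiB size; note `--max-shares 2` for the two cluster nodes:

```azurecli
# Sketch: create a premium shared disk usable by two cluster nodes.
# <resource-group> and <shared-disk-name> are placeholders; the size is illustrative.
az disk create \
    --resource-group <resource-group> \
    --name <shared-disk-name> \
    --size-gb 512 \
    --sku Premium_LRS \
    --max-shares 2
```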
virtual-machines Sap High Availability Guide Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-guide-wsfc-shared-disk.md
vm-windows Previously updated : 07/29/2021 Last updated : 04/14/2022
Currently you can use Azure Premium SSD disks as an Azure shared disk for the SA
- Locally redundant storage (LRS) for premium shared disk (skuName - Premium_LRS) is supported with deployment in Azure availability set. - Zone-redundant storage (ZRS) for premium shared disk (skuName - Premium_ZRS) is supported with deployment in Azure availability zones. - Azure shared disk value [maxShares](../../disks-shared-enable.md?tabs=azure-cli#disk-sizes) determines how many cluster nodes can use the shared disk. Typically for SAP ASCS/SCS instance you will configure two nodes in Windows Failover Cluster, therefore the value for `maxShares` must be set to two.-- When using [Azure proximity placement group](../../windows/proximity-placement-groups.md) for SAP system, all virtual machines sharing a disk must be part of the same PPG.
+- An [Azure proximity placement group](../../windows/proximity-placement-groups.md) is not required for an Azure shared disk. But if you are using a PPG for an SAP system, all virtual machines sharing a disk must be part of the same PPG.
For further details on limitations for Azure shared disk, please review carefully the [limitations](../../disks-shared.md#limitations) section of Azure Shared Disk documentation.
virtual-machines Sap High Availability Infrastructure Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-infrastructure-wsfc-shared-disk.md
vm-windows Previously updated : 10/16/2020 Last updated : 04/14/2022
Based on your deployment type, the host names and the IP addresses of the scenar
The steps mentioned in the document remain same for both deployment type. But if your cluster is running in availability set, you need to deploy LRS for Azure premium shared disk (Premium_LRS) and if the cluster is running in availability zone deploy ZRS for Azure premium shared disk (Premium_ZRS). > [!Note]
-> When using [Azure proximity placement group](../../windows/proximity-placement-groups.md) for SAP system, all virtual machines sharing a disk must be part of the same PPG.
+> An [Azure proximity placement group](../../windows/proximity-placement-groups.md) is not required for an Azure shared disk. But if you are using a PPG for an SAP system, all virtual machines sharing a disk must be part of the same PPG.
## <a name="fe0bd8b5-2b43-45e3-8295-80bee5415716"></a> Create Azure internal load balancer
As Enqueue Replication Server 2 (ERS2) is also clustered, ERS2 virtual IP addres
- Make sure that Idle timeout (minutes) is set to the maximum value of 30, and that Floating IP (direct server return) is Enabled. - > [!TIP] > With the [Azure Resource Manager Template for WSFC for SAP ASCS/SCS instance with Azure Shared Disk](https://github.com/robotechredmond/301-shared-disk-sap), you can automate the infrastructure preparation, using an Azure shared disk for one SAP SID with ERS1. > The template will create two Windows 2019 or 2016 VMs, create an Azure shared disk, and attach it to the VMs. An Azure internal load balancer will also be created and configured.
virtual-network Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-template.md
If you don't have an Azure subscription, create a [free account](https://azure.m
The template used in this quickstart is from [Azure Quickstart templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.network/vnet-two-subnets/azuredeploy.json). The following Azure resources are defined in the template: - [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks): create an Azure virtual network.
Deploy Resource Manager template to Azure:
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fvnet-two-subnets%2Fazuredeploy.json) 2. In the portal, on the **Create a Virtual Network with two Subnets** page, type or select the following values:
- - **Resource group**: Select **Create new**, type a name for the resource group, and select **OK**.
+ - **Resource group**: Select **Create new**, type **CreateVNetQS-rg** for the resource group name, and select **OK**.
- **Virtual Network Name**: Type a name for the new virtual network. 3. Select **Review + create**, and then select **Create**.
+1. When the deployment completes, select the **Go to resource** button to review the deployed resources.
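If you prefer the command line over the portal, a sketch of an equivalent deployment with the Azure CLI follows; the resource group name matches the steps above, and the location value is illustrative:

```azurecli
# Sketch: deploy the same quickstart template with the Azure CLI.
# The location value is illustrative.
az group create --name CreateVNetQS-rg --location eastus
az deployment group create \
    --resource-group CreateVNetQS-rg \
    --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/vnet-two-subnets/azuredeploy.json"
```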
## Review deployed resources
-Explore the resources that were created with the virtual network.
+Explore the resources that were created with the virtual network by browsing the settings blades for **VNet1**.
+
+1. On the **Overview** tab, you will see the defined address space of **10.0.0.0/16**.
+
+2. On the **Subnets** tab, you will see the deployed subnets of **Subnet1** and **Subnet2** with the appropriate values from the template.
To learn about the JSON syntax and properties for a virtual network in a template, see [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks).
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules.
-You can use service tags to define network access controls on [network security groups](./network-security-groups-overview.md#security-rules), [Azure Firewall](../firewall/service-tags.md), and [user-defined routes](./virtual-networks-udr-overview.md#service-tags-for-user-defined-routes-preview). Use service tags in place of specific IP addresses when you create security rules and routes. By specifying the service tag name, such as **ApiManagement**, in the appropriate *source* or *destination* field of a security rule, you can allow or deny the traffic for the corresponding service. By specifying the service tag name in the address prefix of a route, you can route traffic intended for any of the prefixes encapsulated by the service tag to a desired next hop type.
+You can use service tags to define network access controls on [network security groups](./network-security-groups-overview.md#security-rules), [Azure Firewall](../firewall/service-tags.md), and [user-defined routes](./virtual-networks-udr-overview.md#service-tags-for-user-defined-routes). Use service tags in place of specific IP addresses when you create security rules and routes. By specifying the service tag name, such as **ApiManagement**, in the appropriate *source* or *destination* field of a security rule, you can allow or deny the traffic for the corresponding service. By specifying the service tag name in the address prefix of a route, you can route traffic intended for any of the prefixes encapsulated by the service tag to a desired next hop type.
> [!NOTE] > As of March 2022, using service tags in place of explicit address prefixes in [user defined routes](./virtual-networks-udr-overview.md#user-defined) is out of preview and generally available.
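As a minimal sketch of the security-rule usage, allowing inbound traffic from the **ApiManagement** service tag with the Azure CLI might look like the following; the resource names, priority, and port are illustrative placeholders:

```azurecli
# Sketch: use a service tag as the source of an NSG security rule.
# Resource names, the priority, and the port are illustrative placeholders.
az network nsg rule create \
    --resource-group <resource-group> \
    --nsg-name <nsg-name> \
    --name AllowApiManagementInbound \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes ApiManagement \
    --destination-port-ranges 3443
```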
virtual-network Tutorial Connect Virtual Networks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-portal.md
ms.devlang: azurecli
virtual-network Previously updated : 07/06/2021 Last updated : 04/14/2022
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
You can specify the following next hop types when creating a user-defined route:
You cannot specify **VNet peering** or **VirtualNetworkServiceEndpoint** as the next hop type in user-defined routes. Routes with the **VNet peering** or **VirtualNetworkServiceEndpoint** next hop types are only created by Azure, when you configure a virtual network peering, or a service endpoint.
-### Service Tags for user-defined routes (Preview)
+### Service Tags for user-defined routes
You can now specify a [Service Tag](service-tags-overview.md) as the address prefix for a user-defined route instead of an explicit IP range. A Service Tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to user-defined routes and reducing the number of routes you need to create. You can currently create 25 or fewer routes with Service Tags in each route table.
-> [!IMPORTANT]
-> Service Tags for user-defined routes is currently in preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
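A minimal sketch of creating such a route with the Azure CLI, using the **Storage** service tag as the prefix; the resource names and next-hop IP address are placeholders:

```azurecli
# Sketch: use a service tag as the address prefix of a user-defined route.
# Resource names and the next-hop IP address are placeholders.
az network route-table route create \
    --resource-group <resource-group> \
    --route-table-name <route-table-name> \
    --name route-storage-via-nva \
    --address-prefix Storage \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address <nva-private-ip>
```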
- #### Exact Match When there is an exact prefix match between a route with an explicit IP prefix and a route with a Service Tag, preference is given to the route with the explicit prefix. When multiple routes with Service Tags have matching IP prefixes, routes will be evaluated in the following order:
The route table for *Subnet2* contains all Azure-created default routes and the
* [Configure BGP for an Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br> * [Use BGP with ExpressRoute](../expressroute/expressroute-routing.md?toc=%2fazure%2fvirtual-network%2ftoc.json#route-aggregation-and-prefix-limits)<br> * [View all routes for a subnet](diagnose-network-routing-problem.md). A user-defined route table only shows you the user-defined routes, not the default, and BGP routes for a subnet. Viewing all routes shows you the default, BGP, and user-defined routes for the subnet a network interface is in.<br>
-* [Determine the next hop type](../network-watcher/diagnose-vm-network-routing-problem.md?toc=%2fazure%2fvirtual-network%2ftoc.json) between a virtual machine and a destination IP address. The Azure Network Watcher next hop feature enables you to determine whether traffic is leaving a subnet and being routed to where you think it should be.
+* [Determine the next hop type](../network-watcher/diagnose-vm-network-routing-problem.md?toc=%2fazure%2fvirtual-network%2ftoc.json) between a virtual machine and a destination IP address. The Azure Network Watcher next hop feature enables you to determine whether traffic is leaving a subnet and being routed to where you think it should be.