Updates from: 05/28/2022 01:11:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md
This table shows support for authenticating Azure Active Directory (Azure AD) an
| | USB | NFC | BLE | USB | NFC | BLE | USB | NFC | BLE | USB | NFC | BLE | | **Windows** | ![Chrome supports USB on Windows for Azure AD accounts.][y] | ![Chrome supports NFC on Windows for Azure AD accounts.][y] | ![Chrome supports BLE on Windows for Azure AD accounts.][y] | ![Edge supports USB on Windows for Azure AD accounts.][y] | ![Edge supports NFC on Windows for Azure AD accounts.][y] | ![Edge supports BLE on Windows for Azure AD accounts.][y] | ![Firefox supports USB on Windows for Azure AD accounts.][y] | ![Firefox supports NFC on Windows for Azure AD accounts.][y] | ![Firefox supports BLE on Windows for Azure AD accounts.][y] | ![Safari supports USB on Windows for Azure AD accounts.][n] | ![Safari supports NFC on Windows for Azure AD accounts.][n] | ![Safari supports BLE on Windows for Azure AD accounts.][n] | | **macOS** | ![Chrome supports USB on macOS for Azure AD accounts.][y] | ![Chrome supports NFC on macOS for Azure AD accounts.][n] | ![Chrome supports BLE on macOS for Azure AD accounts.][n] | ![Edge supports USB on macOS for Azure AD accounts.][y] | ![Edge supports NFC on macOS for Azure AD accounts.][n] | ![Edge supports BLE on macOS for Azure AD accounts.][n] | ![Firefox supports USB on macOS for Azure AD accounts.][n] | ![Firefox supports NFC on macOS for Azure AD accounts.][n] | ![Firefox supports BLE on macOS for Azure AD accounts.][n] | ![Safari supports USB on macOS for Azure AD accounts.][n] | ![Safari supports NFC on macOS for Azure AD accounts.][n] | ![Safari supports BLE on macOS for Azure AD accounts.][n] |
-| **ChromeOS** | ![Chrome supports USB on ChromeOS for Azure AD accounts.][y] | ![Chrome supports NFC on ChromeOS for Azure AD accounts.][n] | ![Chrome supports BLE on ChromeOS for Azure AD accounts.][n] | ![Edge supports USB on ChromeOS for Azure AD accounts.][n] | ![Edge supports NFC on ChromeOS for Azure AD accounts.][n] | ![Edge supports BLE on ChromeOS for Azure AD accounts.][n] | ![Firefox supports USB on ChromeOS for Azure AD accounts.][n] | ![Firefox supports NFC on ChromeOS for Azure AD accounts.][n] | ![Firefox supports BLE on ChromeOS for Azure AD accounts.][n] | ![Safari supports USB on ChromeOS for Azure AD accounts.][n] | ![Safari supports NFC on ChromeOS for Azure AD accounts.][n] | ![Safari supports BLE on ChromeOS for Azure AD accounts.][n] |
+| **ChromeOS** | ![Chrome supports USB on ChromeOS for Azure AD accounts.][y]* | ![Chrome supports NFC on ChromeOS for Azure AD accounts.][n] | ![Chrome supports BLE on ChromeOS for Azure AD accounts.][n] | ![Edge supports USB on ChromeOS for Azure AD accounts.][n] | ![Edge supports NFC on ChromeOS for Azure AD accounts.][n] | ![Edge supports BLE on ChromeOS for Azure AD accounts.][n] | ![Firefox supports USB on ChromeOS for Azure AD accounts.][n] | ![Firefox supports NFC on ChromeOS for Azure AD accounts.][n] | ![Firefox supports BLE on ChromeOS for Azure AD accounts.][n] | ![Safari supports USB on ChromeOS for Azure AD accounts.][n] | ![Safari supports NFC on ChromeOS for Azure AD accounts.][n] | ![Safari supports BLE on ChromeOS for Azure AD accounts.][n] |
| **Linux** | ![Chrome supports USB on Linux for Azure AD accounts.][y] | ![Chrome supports NFC on Linux for Azure AD accounts.][n] | ![Chrome supports BLE on Linux for Azure AD accounts.][n] | ![Edge supports USB on Linux for Azure AD accounts.][n] | ![Edge supports NFC on Linux for Azure AD accounts.][n] | ![Edge supports BLE on Linux for Azure AD accounts.][n] | ![Firefox supports USB on Linux for Azure AD accounts.][n] | ![Firefox supports NFC on Linux for Azure AD accounts.][n] | ![Firefox supports BLE on Linux for Azure AD accounts.][n] | ![Safari supports USB on Linux for Azure AD accounts.][n] | ![Safari supports NFC on Linux for Azure AD accounts.][n] | ![Safari supports BLE on Linux for Azure AD accounts.][n] | | **iOS** | ![Chrome supports USB on iOS for Azure AD accounts.][n] | ![Chrome supports NFC on iOS for Azure AD accounts.][n] | ![Chrome supports BLE on iOS for Azure AD accounts.][n] | ![Edge supports USB on iOS for Azure AD accounts.][n] | ![Edge supports NFC on iOS for Azure AD accounts.][n] | ![Edge supports BLE on iOS for Azure AD accounts.][n] | ![Firefox supports USB on iOS for Azure AD accounts.][n] | ![Firefox supports NFC on iOS for Azure AD accounts.][n] | ![Firefox supports BLE on iOS for Azure AD accounts.][n] | ![Safari supports USB on iOS for Azure AD accounts.][n] | ![Safari supports NFC on iOS for Azure AD accounts.][n] | ![Safari supports BLE on iOS for Azure AD accounts.][n] | | **Android** | ![Chrome supports USB on Android for Azure AD accounts.][n] | ![Chrome supports NFC on Android for Azure AD accounts.][n] | ![Chrome supports BLE on Android for Azure AD accounts.][n] | ![Edge supports USB on Android for Azure AD accounts.][n] | ![Edge supports NFC on Android for Azure AD accounts.][n] | ![Edge supports BLE on Android for Azure AD accounts.][n] | ![Firefox supports USB on Android for Azure AD accounts.][n] | ![Firefox supports NFC on Android for Azure AD accounts.][n] | ![Firefox supports BLE on Android for Azure AD accounts.][n] | ![Safari supports USB on Android for Azure AD accounts.][n] | ![Safari supports NFC on Android for Azure AD accounts.][n] | ![Safari supports BLE on Android for Azure AD accounts.][n] | -
+*Key Registration is currently not supported with ChromeOS/Chrome Browser.
## Unsupported browsers
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Title: Terms of use - Azure Active Directory | Microsoft Docs
+ Title: Terms of use in Azure Active Directory
description: Get started using Azure Active Directory terms of use to present information to employees or guests before getting access. Previously updated : 01/12/2022 Last updated : 05/26/2022 -+
Azure AD terms of use policies have the following capabilities:
To use and configure Azure AD terms of use policies, you must have: -- Azure AD Premium P1, P2, EMS E3, or EMS E5 subscription.
+- Azure AD Premium P1, P2, EMS E3, or EMS E5 licenses.
- If you don't have one of these subscriptions, you can [get Azure AD Premium](../fundamentals/active-directory-get-started-premium.md) or [enable Azure AD Premium trial](https://azure.microsoft.com/trial/get-started-active-directory/). - One of the following administrator accounts for the directory you want to configure: - Global Administrator
Azure AD terms of use policies use the PDF format to present content. The PDF fi
Once you've completed your terms of use policy document, use the following procedure to add it.
-1. Sign in to Azure as a Global Administrator, Security Administrator, or Conditional Access Administrator.
-1. Navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
-
- ![Conditional Access - Terms of use blade](./media/terms-of-use/tou-blade.png)
-
-1. Click **New terms**.
-
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Select **New terms**.
+
![New term of use pane to specify your terms of use settings](./media/terms-of-use/new-tou.png) 1. In the **Name** box, enter a name for the terms of use policy that will be used in the Azure portal.
-1. In the **Display name** box, enter a title that users see when they sign in.
1. For **Terms of use document**, browse to your finalized terms of use policy PDF and select it. 1. Select the language for your terms of use policy document. The language option allows you to upload multiple terms of use policies, each with a different language. The version of the terms of use policy that an end user will see will be based on their browser preferences.
+1. In the **Display name** box, enter a title that users see when they sign in.
1. To require end users to view the terms of use policy before accepting them, set **Require users to expand the terms of use** to **On**. 1. To require end users to accept your terms of use policy on every device they're accessing from, set **Require users to consent on every device** to **On**. Users may be required to install other applications if this option is enabled. For more information, see [Per-device terms of use](#per-device-terms-of-use). 1. If you want to expire terms of use policy consents on a schedule, set **Expire consents** to **On**. When set to On, two more schedule settings are displayed.
Once you've completed your terms of use policy document, use the following proce
| Alice | Jan 1 | Jan 31 | Mar 2 | Apr 1 | | Bob | Jan 15 | Feb 14 | Mar 16 | Apr 15 |
- It is possible to use the **Expire consents** and **Duration before re-acceptance required (days)** settings together, but typically you use one or the other.
+ It's possible to use the **Expire consents** and **Duration before re-acceptance required (days)** settings together, but typically you use one or the other.
1. Under **Conditional Access**, use the **Enforce with Conditional Access policy template** list to select the template to enforce the terms of use policy.
- ![Conditional Access drop-down list to select a policy template](./media/terms-of-use/conditional-access-templates.png)
- | Template | Description | | | |
- | **Access to cloud apps for all guests** | A Conditional Access policy will be created for all guests and all cloud apps. This policy impacts the Azure portal. Once this is created, you might be required to sign out and sign in. |
- | **Access to cloud apps for all users** | A Conditional Access policy will be created for all users and all cloud apps. This policy impacts the Azure portal. Once this is created, you'll be required to sign out and sign in. |
| **Custom policy** | Select the users, groups, and apps that this terms of use policy will be applied to. | | **Create Conditional Access policy later** | This terms of use policy will appear in the grant control list when creating a Conditional Access policy. |
- >[!IMPORTANT]
- >Conditional Access policy controls (including terms of use policies) do not support enforcement on service accounts. We recommend excluding all service accounts from the Conditional Access policy.
+ > [!IMPORTANT]
+ > Conditional Access policy controls (including terms of use policies) do not support enforcement on service accounts. We recommend excluding all service accounts from the Conditional Access policy.
Custom Conditional Access policies enable granular terms of use policies, down to a specific cloud application or group of users. For more information, see [Quickstart: Require terms of use to be accepted before accessing cloud apps](require-tou.md).
-1. Click **Create**.
+1. Select **Create**.
If you selected a custom Conditional Access template, then a new screen appears that allows you to create the custom Conditional Access policy.
Once you've completed your terms of use policy document, use the following proce
You should now see your new terms of use policies.
- ![New terms of use listed in the terms of use blade](./media/terms-of-use/create-tou.png)
- ## View report of who has accepted and declined The Terms of use blade shows a count of the users who have accepted and declined. These counts and who accepted/declined are stored for the life of the terms of use policy.
The Terms of use blade shows a count of the users who have accepted and declined
![Terms of use blade listing the number of users who have accepted and declined](./media/terms-of-use/view-tou.png)
-1. For a terms of use policy, click the numbers under **Accepted** or **Declined** to view the current state for users.
+1. For a terms of use policy, select the numbers under **Accepted** or **Declined** to view the current state for users.
![Terms of use consents pane listing the users that have accepted](./media/terms-of-use/accepted-tou.png)
-1. To view the history for an individual user, click the ellipsis (**...**) and then **View History**.
+1. To view the history for an individual user, select the ellipsis (**...**) and then **View History**.
![View History context menu for a user](./media/terms-of-use/view-history-menu.png)
If you want to view more activity, Azure AD terms of use policies include audit
To get started with Azure AD audit logs, use the following procedure:
-1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
1. Select a terms of use policy.
-1. Click **View audit logs**.
-
- ![Terms of use blade with the View audit logs option highlighted](./media/terms-of-use/audit-tou.png)
-
+1. Select **View audit logs**.
1. On the Azure AD audit logs screen, you can filter the information using the provided lists to target specific audit log information.
- You can also click **Download** to download the information in a .csv file for use locally.
+ You can also select **Download** to download the information in a .csv file for use locally.
![Azure AD audit logs screen listing date, target policy, initiated by, and activity](./media/terms-of-use/audit-logs-tou.png)
- If you click a log, a pane appears with more activity details.
+ If you select a log, a pane appears with more activity details.
![Activity details for a log showing activity, activity status, initiated by, target policy](./media/terms-of-use/audit-log-activity-details.png)
Users can review and see the terms of use policies that they've accepted by usin
You can edit some details of terms of use policies, but you can't modify an existing document. The following procedure describes how to edit the details.
-1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy you want to edit.
-1. Click **Edit terms**.
-1. In the Edit terms of use pane, you can change the following:
- - **Name** – this is the internal name of the ToU that isn't shared with end users
- - **Display name** – this is the name that end users can see when viewing the ToU
- - **Require users to expand the terms of use** – Setting this to **On** will force the end user to expand the terms of use policy document before accepting it.
+1. Select **Edit terms**.
+1. In the Edit terms of use pane, you can change the following options:
+ - **Name** – the internal name of the ToU that isn't shared with end users
+ - **Display name** – the name that end users can see when viewing the ToU
+ - **Require users to expand the terms of use** – Setting this option to **On** will force the end user to expand the terms of use policy document before accepting it.
- (Preview) You can **update an existing terms of use** document - You can add a language to an existing ToU
You can edit some details of terms of use policies, but you can't modify an exis
![Edit showing different language options ](./media/terms-of-use/edit-terms-use.png)
-1. Once you're done, click **Save** to save your changes.
+1. Once you're done, select **Save** to save your changes.
## Update the version or PDF of an existing terms of use
-1. Sign in to Azure and navigate to [Terms of use](https://aka.ms/catou)
-2. Select the terms of use policy you want to edit.
-3. Click **Edit terms**.
-4. For the language that you would like to update a new version, click **Update** under the action column
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Select the terms of use policy you want to edit.
+1. Select **Edit terms**.
+1. For the language that you want to update with a new version, select **Update** under the action column.
![Edit terms of use pane showing name and expand options](./media/terms-of-use/edit-terms-use.png)
-5. In the pane on the right, upload the pdf for the new version
-6. There's also a toggle option here **Require reaccept** if you want to require your users to accept this new version the next time they sign in. If you require your users to reaccept, next time they try to access the resource defined in your conditional access policy they'll be prompted to accept this new version. If you don't require your users to reaccept, their previous consent will stay current and only new users who haven't consented before or whose consent expires will see the new version. Until the session expires, **Require reaccept** not require users to accept the new TOU. If you want to ensure reaccept, delete and recreate or create a new TOU for this case.
+1. In the pane on the right, upload the PDF for the new version.
+1. There's also a **Require reaccept** toggle option if you want to require your users to accept this new version the next time they sign in. If you require your users to reaccept, the next time they try to access the resource defined in your Conditional Access policy, they'll be prompted to accept this new version. If you don't require your users to reaccept, their previous consent stays current, and only new users who haven't consented before or whose consent expires will see the new version. Until the session expires, **Require reaccept** doesn't require users to accept the new ToU. If you want to ensure users reaccept, delete and recreate the ToU or create a new ToU for this case.
![Edit terms of use re-accept option highlighted](./media/terms-of-use/re-accept.png)
-7. Once you've uploaded your new pdf and decided on reaccept, click Add at the bottom of the pane.
-8. You'll now see the most recent version under the Document column.
+1. Once you've uploaded your new PDF and decided on reaccept, select **Add** at the bottom of the pane.
+1. You'll now see the most recent version under the Document column.
## View previous versions of a ToU
-1. Sign in to Azure and navigate to **Terms of use** at https://aka.ms/catou.
-2. Select the terms of use policy for which you want to view a version history.
-3. Click on **Languages and version history**
-4. Click on **See previous versions.**
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Select the terms of use policy for which you want to view a version history.
+1. Select **Languages and version history**
+1. Select **See previous versions.**
![document details including language versions](./media/terms-of-use/document-details.png)
-5. You can click on the name of the document to download that version
+1. You can select the name of the document to download that version
## See who has accepted each version
-1. Sign in to Azure and navigate to **Terms of use** at https://aka.ms/catou.
-2. To see who has currently accepted the ToU, click on the number under the **Accepted** column for the ToU you want.
-3. By default, the next page will show you the current state of each users acceptance to the ToU
-4. If you would like to see the previous consent events, you can select **All** from the **Current State** drop-down. Now you can see each users events in details about each version and what happened.
-5. Alternatively, you can select a specific version from the **Version** drop-down to see who has accepted that specific version.
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. To see who has currently accepted the ToU, select the number under the **Accepted** column for the ToU you want.
+1. By default, the next page will show you the current state of each user's acceptance of the ToU.
+1. If you would like to see the previous consent events, you can select **All** from the **Current State** drop-down. Now you can see each user's events in detail for each version and what happened.
+1. Alternatively, you can select a specific version from the **Version** drop-down to see who has accepted that specific version.
## Add a ToU language The following procedure describes how to add a ToU language.
-1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy you want to edit.
-1. Click **Edit Terms**
-1. Click **Add language** at the bottom of the page.
+1. Select **Edit Terms**
+1. Select **Add language** at the bottom of the page.
1. In the Add terms of use language pane, upload your localized PDF, and select the language. ![Terms of use selected and showing the Languages tab in the details pane](./media/terms-of-use/select-language.png)
-1. Click **Add language**.
-1. Click **Save**
+1. Select **Add language**.
+1. Select **Save**
-1. Click **Add** to add the language.
+1. Select **Add** to add the language.
## Per-device terms of use
If a user is using browser that isn't supported, they'll be asked to use a diffe
You can delete old terms of use policies using the following procedure.
-1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy you want to remove.
-1. Click **Delete terms**.
-1. In the message that appears asking if you want to continue, click **Yes**.
+1. Select **Delete terms**.
+1. In the message that appears asking if you want to continue, select **Yes**.
![Message asking for confirmation to delete terms of use](./media/terms-of-use/delete-tou.png)
You can configure a Conditional Access policy for the Microsoft Intune Enrollmen
A: Terms of use can only be accepted when authenticating interactively. **Q: How do I see when/if a user has accepted a terms of use?**<br />
-A: On the Terms of use blade, click the number under **Accepted**. You can also view or search the accept activity in the Azure AD audit logs. For more information, see View report of who has accepted and declined and [View Azure AD audit logs](#view-azure-ad-audit-logs).
+A: On the Terms of use blade, select the number under **Accepted**. You can also view or search the accept activity in the Azure AD audit logs. For more information, see View report of who has accepted and declined and [View Azure AD audit logs](#view-azure-ad-audit-logs).
**Q: How long is information stored?**<br /> A: The user counts in the terms of use report and who accepted/declined are stored for the life of the terms of use. The Azure AD audit logs are stored for 30 days.
A: The user counts in the terms of use report and who accepted/declined are stor
A: The terms of use report is stored for the lifetime of that terms of use policy, while the Azure AD audit logs are stored for 30 days. Also, the terms of use report only displays the users current consent state. For example, if a user declines and then accepts, the terms of use report will only show that user's accept. If you need to see the history, you can use the Azure AD audit logs. **Q: If hyperlinks are in the terms of use policy PDF document, will end users be able to click them?**<br />
-A: Yes, end users are able to select hyperlinks to other pages but links to sections within the document are not supported. Also, hyperlinks in terms of use policy PDFs do not work when accessed from the Azure AD MyApps/MyAccount portal.
+A: Yes, end users are able to select hyperlinks to other pages but links to sections within the document aren't supported. Also, hyperlinks in terms of use policy PDFs don't work when accessed from the Azure AD MyApps/MyAccount portal.
**Q: Can a terms of use policy support multiple languages?**<br /> A: Yes. Currently there are 108 different languages an administrator can configure for a single terms of use policy. An administrator can upload multiple PDF documents and tag those documents with a corresponding language (up to 108). When end users sign in, we look at their browser language preference and display the matching document. If there's no match, we display the default document, which is the first document that is uploaded.
A: You can [review previously accepted terms of use policies](#how-users-can-rev
A: If you've configured both Azure AD terms of use and [Intune terms and conditions](/intune/terms-and-conditions-create), the user will be required to accept both. For more information, see the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409). **Q: What endpoints does the terms of use service use for authentication?**<br />
-A: Terms of use utilize the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com and https://account.activedirectory.windowsazure.com. If your organization has an allowlist of URLs for enrollment, you will need to add these endpoints to your allowlist, along with the Azure AD endpoints for sign-in.
+A: Terms of use utilize the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com and https://account.activedirectory.windowsazure.com. If your organization has an allowlist of URLs for enrollment, you'll need to add these endpoints to your allowlist, along with the Azure AD endpoints for sign-in.
## Next steps
active-directory Mobile App Quickstart Portal Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
-> [!div renderon="portal" class="sxs-lookup display-on-portal"]
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
> # Quickstart: Sign in users and call the Microsoft Graph API from an Android app > > In this quickstart, you download and run a code sample that demonstrates how an Android application can sign in users and get an access token to call the Microsoft Graph API.
> ### Step 1: Configure your application in the Azure portal > For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker. >
-> <button id="makechanges" class="nextstepaction" class="configure-app-button"> Make this change for me </button>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
> > > [!div id="appconfigured" class="alert alert-info"] > > ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
> ### Step 2: Download the project > > Run the project using Android Studio.
-> <a href='https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip'><button id="downloadsample" class="download-sample-button">Download the code sample</button></a>
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
> > > ### Step 3: Your app is configured and ready to run
> Move on to the Android tutorial in which you build an Android app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API. > > > [!div class="nextstepaction"]
-> > [Tutorial: Sign in users and call the Microsoft Graph from an Android application](tutorial-v2-android.md)
+> > [Tutorial: Sign in users and call the Microsoft Graph from an Android application](tutorial-v2-android.md)
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
-> [!div renderon="portal" class="sxs-lookup display-on-portal"]
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
> # Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app > > In this quickstart, you download and run a code sample that demonstrates how a native iOS or macOS application can sign in users and get an access token to call the Microsoft Graph API.
> #### Step 1: Configure your application > For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker. >
-> <button id="makechanges" class="nextstepaction" class="configure-app-button"> Make this change for me </button>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
> > > [!div id="appconfigured" class="alert alert-info"] > > ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes > > #### Step 2: Download the sample project >
-> <a href='https://github.com/Azure-Samples/active-directory-ios-swift-native-v2/archive/master.zip'><button id="downloadsample" class="downloadsample_ios">Download the code sample for iOS</button></a>
->
-> <a href='https://github.com/Azure-Samples/active-directory-macOS-swift-native-v2/archive/master.zip'><button id="downloadsample" class="downloadsample_ios">Download the code sample for macOS</button></a>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample_ios" class="download-sample-button">Download the code sample for iOS</button>
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample_macos" class="download-sample-button">Download the code sample for macOS</button>
> > #### Step 3: Install dependencies >
> Move on to the step-by-step tutorial in which you build an iOS or macOS app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API. > > > [!div class="nextstepaction"]
-> > [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md)
+> > [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md)
active-directory Refresh Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/refresh-tokens.md
Before reading through this article, it's recommended that you go through the fo
## Refresh token lifetime
-Refresh tokens have a longer lifetime than access tokens. The default lifetime for the tokens is 90 days and they replace themselves with a fresh token upon every use. As such, whenever a refresh token is used to acquire a new access token, a new refresh token is also issued. The Microsoft identity platform doesn't revoke old refresh tokens when used to fetch new access tokens. Securely delete the old refresh token after acquiring a new one. Refresh tokens need to be stored safely like access tokens or application credentials.
+Refresh tokens have a longer lifetime than access tokens. The default lifetime for the refresh tokens is 24 hours for [single page apps](reference-third-party-cookies-spas.md) and 90 days for all other scenarios. Refresh tokens replace themselves with a fresh token upon every use. The Microsoft identity platform doesn't revoke old refresh tokens when used to fetch new access tokens. Securely delete the old refresh token after acquiring a new one. Refresh tokens need to be stored safely like access tokens or application credentials.
+
+>[!IMPORTANT]
+> Refresh tokens sent to a redirect URI registered as `spa` expire after 24 hours. Additional refresh tokens acquired using the initial refresh token carry over that expiration time, so apps must be prepared to rerun the authorization code flow using interactive authentication to get a new refresh token every 24 hours. Users do not have to enter their credentials and usually don't even see any related user experience, just a reload of your application. The browser must visit the login page in a top-level frame to show the login session. This is due to [privacy features in browsers that block third party cookies](reference-third-party-cookies-spas.md).
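To see how an app handles this in practice, here's a minimal sketch using `@azure/msal-browser` (a library commonly used in SPAs): silent acquisition keeps succeeding while the cached refresh token is valid, and the app falls back to an interactive redirect once it has expired. The client ID, redirect URI, and scopes are placeholder values.

```typescript
import {
  PublicClientApplication,
  InteractionRequiredAuthError,
} from "@azure/msal-browser";

// Placeholder app registration values; substitute your own.
const msalInstance = new PublicClientApplication({
  auth: { clientId: "<client-id>", redirectUri: "http://localhost:3000" },
});

const request = { scopes: ["User.Read"] };

export async function getAccessToken(): Promise<string> {
  // Required once at startup in recent msal-browser versions.
  await msalInstance.initialize();

  // Assumes the user has already signed in and an account is cached.
  const account = msalInstance.getAllAccounts()[0];

  try {
    // Silent renewal uses the cached refresh token behind the scenes.
    const result = await msalInstance.acquireTokenSilent({ ...request, account });
    return result.accessToken;
  } catch (error) {
    if (error instanceof InteractionRequiredAuthError) {
      // The refresh token has expired (24 hours for SPAs), so rerun the
      // authorization code flow interactively; users typically just see a reload.
      await msalInstance.acquireTokenRedirect(request);
    }
    throw error;
  }
}
```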
## Refresh token expiration
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Because subdomains inherit the authentication type of the root domain by default
Use the following command to promote the subdomain:

```http
-POST https://graph.microsoft.com/v1.0/domains/foo.contoso.com/promote
+POST https://graph.windows.net/{tenant-id}/domains/foo.contoso.com/promote
```
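For illustration, the same request can be sent from a script. This is only a sketch: it assumes you already hold a valid access token for the `https://graph.windows.net` endpoint, and the `api-version=1.6` query parameter and the environment variable name are assumptions to adjust for your environment.

```typescript
// Sketch: call the promote endpoint shown above.
// Assumes a Node 18+ runtime (global fetch) and an access token for
// https://graph.windows.net; api-version=1.6 is an assumed convention.
const tenantId = "<tenant-id>";                      // placeholder
const accessToken = process.env.GRAPH_TOKEN ?? "";   // placeholder

async function promoteSubdomain(domain: string): Promise<void> {
  const url = `https://graph.windows.net/${tenantId}/domains/${domain}/promote?api-version=1.6`;
  const response = await fetch(url, {
    method: "POST",
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) {
    throw new Error(`Promote failed: ${response.status} ${await response.text()}`);
  }
}

promoteSubdomain("foo.contoso.com").catch(console.error);
```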
### Promote command error conditions

Invoking API with a federated verified subdomain with user references | POST | 4
- [Add custom domain names](../fundamentals/add-custom-domain.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) - [Manage domain names](domains-manage.md)-- [ForceDelete a custom domain name with Microsoft Graph API](/graph/api/domain-forcedelete)
+- [ForceDelete a custom domain name with Microsoft Graph API](/graph/api/domain-forcedelete)
active-directory Silverfort Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
-# Tutorial: Configure Silverfort with Azure Active Directory for secure hybrid access
+# Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort
-In this tutorial, learn how to integrate Silverfort with Azure Active Directory (Azure AD). [Silverfort](https://www.silverfort.com/) uses innovative agent-less and proxy-less technology to connect all your assets on-premises and in the cloud to Azure AD. This solution enables organizations to apply identity protection, visibility, and user experience across all environments in Azure AD. It enables universal risk-based monitoring and assessment of authentication activity for on-premises and cloud environments, and proactively prevents threats.
+[Silverfort](https://www.silverfort.com/) uses innovative agent-less and proxy-less technology to connect all your assets on-premises and in the cloud to Azure AD. This solution enables organizations to apply identity protection, visibility, and user experience across all environments in Azure AD. It enables universal risk-based monitoring and assessment of authentication activity for on-premises and cloud environments, and proactively prevents threats.
-Silverfort can seamlessly connect any type of asset into Azure AD, as if it was a modern web application. For example:
+In this tutorial, learn how to integrate your existing on-premises Silverfort implementation with Azure Active Directory (Azure AD) for [hybrid access](../devices/concept-azure-ad-join-hybrid.md).
+
+Silverfort seamlessly connects assets with Azure AD. These **bridged** assets appear as regular applications in Azure AD and can be protected with Conditional Access, single-sign-on (SSO), multifactor authentication, auditing and more. Use Silverfort to connect assets including:
- Legacy and homegrown applications
Silverfort can seamlessly connect any type of asset into Azure AD, as if it was
- Infrastructure and industrial systems
-These **bridged** assets appear as regular applications in Azure AD and can be protected with Conditional Access, single-sign-on (SSO), multifactor authentication, auditing and more.
-
-This solution combines all corporate assets and third-party Identity and Access Management (IAM) platforms. For example, Active Directory, Active Directory Federation Services (ADFS), and Remote Authentication Dial-In User Service (RADIUS) on Azure AD, including hybrid and multi-cloud environments.
+Silverfort integrates your corporate assets and third-party Identity and Access Management (IAM) platforms. This includes Active Directory, Active Directory Federation Services (ADFS), and Remote Authentication Dial-In User Service (RADIUS) on Azure AD, including hybrid and multi-cloud environments.
-## Scenario description
+Follow the steps in this tutorial to configure and test the Silverfort Azure AD bridge in your Azure AD tenant to communicate with your existing Silverfort implementation. Once configured, you can create Silverfort authentication policies that bridge authentication requests from various identity sources to Azure AD for SSO. After an application is bridged, it can be managed in Azure AD.
-In this guide, you'll configure and test the Silverfort Azure AD bridge in your Azure AD tenant.
+## Silverfort with Azure AD Authentication Architecture
-Once configured, you can create Silverfort authentication policies that bridge authentication requests from various identity sources to Azure AD for SSO. Once an application is bridged, it can be managed in Azure AD.
-
-The following diagram shows the components included in the solution and sequence of authentication orchestrated by Silverfort.
+The following diagram describes the authentication architecture orchestrated by Silverfort in a hybrid environment.
![image shows the architecture diagram](./media/silverfort-azure-ad-integration/silverfort-architecture-diagram.png)
The following diagram shows the components included in the solution and sequence
## Prerequisites
-To set up SSO for an application that you added to your Azure AD tenant, you'll need:
+You must already have Silverfort deployed in your tenant or infrastructure to complete this tutorial. To deploy Silverfort in your tenant or infrastructure, [contact Silverfort](https://www.silverfort.com/). You'll need to install the Silverfort Desktop app on relevant workstations.
+
+This tutorial requires you to set up the Silverfort Azure AD Adapter in your Azure AD tenant. You'll need:
- An Azure account with an active subscription. You can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - One of the following roles in your Azure account - Global administrator, Cloud application administrator, Application administrator, or Owner of the service principal. -- An application that supports SSO and that was already pre-configured and added to the Azure AD gallery. The Silverfort application in the Azure AD gallery is already pre-configured. You'll need to add it as an Enterprise application from the gallery.-
-## Onboard with Silverfort
-
-To deploy Silverfort in your tenant or infrastructure, [contact Silverfort](https://www.silverfort.com/). Install Silverfort Desktop app on relevant workstations.
+- The Silverfort Azure AD Adapter application in the Azure AD gallery is pre-configured to support SSO. You'll need to add the Silverfort Azure AD Adapter to your tenant as an Enterprise application from the gallery.
## Configure Silverfort and create a policy 1. From a browser, log in to the **Silverfort admin console**.
-2. In the main menu, navigate to **Settings**, and then scroll to
+2. In the main menu, navigate to **Settings** and then scroll to
**Azure AD Bridge Connector** in the General section. Confirm your tenant ID, and then select **Authorize**. ![image shows azure ad bridge connector](./media/silverfort-azure-ad-integration/azure-ad-bridge-connector.png)
To deploy Silverfort in your tenant or infrastructure, [contact Silverfort](http
![image shows enterprise application](./media/silverfort-azure-ad-integration/enterprise-application.png)
-5. In the Silverfot admin console, navigate to the **Policies** page, and select **Create Policy**.
+5. In the Silverfort admin console, navigate to the **Policies** page and select **Create Policy**.
-6. The **New Policy** dialog will appear. Enter a **Policy Name**, that would indicate the application name that will be created in Azure. For example, if you're adding multiple servers or applications under this policy, name it to reflect the resources covered by the policy. In the example, we'll create a policy for the *SL-APP1* server.
+6. The **New Policy** dialog will appear. Enter a **Policy Name** that indicates the name of the application that will be created in Azure. For example, if you're adding multiple servers or applications under this policy, name it to reflect the resources covered by the policy. In the example, we'll create a policy for the *SL-APP1* server.
![image shows define policy](./media/silverfort-azure-ad-integration/define-policy.png)
To deploy Silverfort in your tenant or infrastructure, [contact Silverfort](http
![image shows add policy](./media/silverfort-azure-ad-integration/add-policy.png)
-14. Return to the Azure AD console, and navigate to **Enterprise applications**. The new Silverfort application should now appear. This application can now be included in [CA policies](../authentication/tutorial-enable-azure-mfa.md?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json%23create-a-conditional-access-policy).
+14. Return to the Azure AD console, and navigate to **Enterprise applications**. The new Silverfort application should now appear. This application can now be included in [Conditional Access policies](../authentication/tutorial-enable-azure-mfa.md?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json%23create-a-conditional-access-policy).
## Next steps - [Silverfort Azure AD adapter](https://azuremarketplace.microsoft.com/marketplace/apps/aad.silverfortazureadadapter?tab=overview) - [Silverfort resources](https://www.silverfort.com/resources/)+
+- [Contact Silverfort](https://www.silverfort.com/company/contact/)
active-directory How To Use Vm Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md
Last updated 01/11/2022 -
+ms.tool: azure-cli, azure-powershell
ms.devlang: azurecli
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Last updated 01/11/2022 -+
+ms.tool: azure-cli, azure-powershell
ms.devlang: azurecli #Customer intent: As an administrator, I want to know how to access Cosmos DB from a virtual machine using a managed identity
active-directory Pim Resource Roles Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md
na Previously updated : 05/24/2022 Last updated : 10/07/2021
Select an alert to see a report that lists the users or roles that triggered the
## Alerts
-Alert | Severity | Trigger | Recommendation
- | | |
-**Too many owners assigned to a resource** |Medium |Too many users have the owner role. |Review the users in the list and reassign some to less privileged roles.
-**Too many permanent owners assigned to a resource** |Medium |Too many users are permanently assigned to a role. |Review the users in the list and re-assign some to require activation for role use.
-**Duplicate role created** |Medium |Multiple roles have the same criteria. |Use only one of these roles.
-**Roles are being assigned outside of Privileged Identity Management (Preview)** | High | A role is managed directly through the Azure IAM resource blade or the Azure Resource Manager API | Review the users in the list and remove them from privileged roles assigned outside of Privilege Identity Management.
-
-> [!Note]
-> During the public preview of the **Roles are being assigned outside of Privileged Identity Management (Preview)** alert, Microsoft supports only permissions that are assigned at the subscription level.
+| Alert | Severity | Trigger | Recommendation |
+| | | | |
+| **Too many owners assigned to a resource** |Medium |Too many users have the owner role. |Review the users in the list and reassign some to less privileged roles. |
+| **Too many permanent owners assigned to a resource** |Medium |Too many users are permanently assigned to a role. |Review the users in the list and re-assign some to require activation for role use. |
+| **Duplicate role created** |Medium |Multiple roles have the same criteria. |Use only one of these roles. |
### Severity
active-directory Concept Usage Insights Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md
na Previously updated : 05/13/2019 Last updated : 05/27/2022 -+ # Usage and insights report in the Azure Active Directory portal
To access the data from the usage and insights report, you need:
## Use the report
-The usage and insights report shows the list of applications with one or more sign-in attempts, and allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate.
+The usage and insights report shows the list of applications with one or more sign-in attempts, and allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate. The sign-in graph per application only counts interactive user sign-ins.
Clicking **Load more** at the bottom of the list allows you to view additional applications on the page. You can select the date range to view all applications that have been used within the range.
active-directory Timeclock 365 Saml Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timeclock-365-saml-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Timeclock 365 SAML | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Timeclock 365 SAML'
description: Learn how to configure single sign-on between Azure Active Directory and Timeclock 365 SAML.
Previously updated : 09/02/2021 Last updated : 05/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Timeclock 365 SAML
+# Tutorial: Azure AD SSO integration with Timeclock 365 SAML
In this tutorial, you'll learn how to integrate Timeclock 365 SAML with Azure Active Directory (Azure AD). When you integrate Timeclock 365 SAML with Azure AD, you can:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Timeclock 365 SAML supports **SP** initiated SSO.
-* Timeclock 365 SAML supports [Automated user provisioning](timeclock-365-provisioning-tutorial.md).
+* Timeclock 365 SAML supports [Automated user provisioning](timeclock-365-saml-provisioning-tutorial.md).
## Adding Timeclock 365 SAML from the gallery
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Click on **Create** button to create the test user. > [!NOTE]
-> Timeclock 365 SAML also supports automatic user provisioning, you can find more details [here](./timeclock-365-provisioning-tutorial.md) on how to configure automatic user provisioning.
+> Timeclock 365 SAML also supports automatic user provisioning. You can find more details [here](./timeclock-365-saml-provisioning-tutorial.md) on how to configure automatic user provisioning.
## Test SSO
active-directory Whimsical Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whimsical-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Whimsical for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Whimsical.
++
+writer: twimmers
+
+ms.assetid: 4457a724-ed81-4f7b-bb3e-70beea80cb51
++++ Last updated : 05/11/2022+++
+# Tutorial: Configure Whimsical for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Whimsical and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Whimsical](https://whimsical.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Whimsical
+> * Remove users in Whimsical when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Whimsical
+> * [Single sign-on](benq-iam-tutorial.md) to Whimsical (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* To use SCIM, SAML has to be enabled and correctly configured.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Whimsical](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Whimsical to support provisioning with Azure AD
+1. To enable SCIM, you must first set up SAML SSO with Azure AD.
+1. Go to **Workspace Settings**, which you'll find under your workspace name in the top left.
+1. Enable SCIM provisioning and select **Reveal** to retrieve the token.
+1. In the **Provisioning** tab in Azure AD, set **Provisioning Mode** to **Automatic**, and paste `https://whimsical.com/public-api/scim-v2/?aadOptscim062020` into **Tenant URL**.
+
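If you'd like to sanity-check the tenant URL and token before configuring provisioning in the portal, a standard SCIM request should return a JSON response. This is a hedged sketch: it assumes Whimsical's endpoint accepts a plain `GET /Users` call (per the SCIM standard, RFC 7644) at the base URL without the `?aadOptscim062020` flag, and the environment variable name is a placeholder.

```typescript
// Sketch: verify the SCIM base URL and secret token with a standard SCIM request
// before running Test Connection in the portal. Assumes Node 18+ (global fetch).
const scimBaseUrl = "https://whimsical.com/public-api/scim-v2";     // without the ?aadOptscim062020 flag
const secretToken = process.env.WHIMSICAL_SCIM_TOKEN ?? "";          // the token revealed in Workspace Settings

async function checkScimEndpoint(): Promise<void> {
  const response = await fetch(`${scimBaseUrl}/Users?count=1`, {
    headers: {
      Authorization: `Bearer ${secretToken}`,
      Accept: "application/scim+json",
    },
  });
  console.log(`SCIM endpoint responded with ${response.status}`);
  console.log(await response.text());
}

checkScimEndpoint().catch(console.error);
```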
+## Step 3. Add Whimsical from the Azure AD application gallery
+
+Add Whimsical from the Azure AD application gallery to start managing provisioning to Whimsical. If you have previously set up Whimsical for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Whimsical, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+## Step 5. Configure automatic user provisioning to Whimsical
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Whimsical based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Whimsical in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Whimsical**.
+
+ ![The Whimsical link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provision tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Whimsical Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Whimsical. If the connection fails, ensure your Whimsical account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Whimsical**.
+
+9. Review the user attributes that are synchronized from Azure AD to Whimsical in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Whimsical for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Whimsical API supports filtering users based on that attribute (see the sketch after this procedure for what such a filter request looks like). Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |userName|String|&check;
+ |externalId|String|
+ |active|Boolean|
+ |displayName|String|
+
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for Whimsical, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users and/or groups that you would like to provision to Whimsical by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
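As noted in step 9, the matching attribute must be filterable by the Whimsical API. The following sketch shows what such a SCIM filter request on `userName` (the attribute marked as supported for filtering) looks like; the base URL, token variable, and sample user are placeholders for illustration.

```typescript
// Sketch: a SCIM filter request on the matching attribute (userName), similar to
// what the provisioning service issues during matching. Assumes Node 18+ (global fetch).
const scimBaseUrl = "https://whimsical.com/public-api/scim-v2";
const secretToken = process.env.WHIMSICAL_SCIM_TOKEN ?? "";

async function findUserByUserName(userName: string): Promise<void> {
  const filter = encodeURIComponent(`userName eq "${userName}"`);
  const response = await fetch(`${scimBaseUrl}/Users?filter=${filter}`, {
    headers: {
      Authorization: `Bearer ${secretToken}`,
      Accept: "application/scim+json",
    },
  });
  const body = await response.json();
  // A SCIM ListResponse reports totalResults for the matched users.
  console.log(`Status ${response.status}, totalResults: ${body.totalResults}`);
}

findUserByUserName("alice@contoso.com").catch(console.error);
```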
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md
Last updated 01/03/2022
Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+## May 2022
+
+### Unlimited number of subscriptions
+It is now easier to get an overview of the optimization opportunities available to your organization: there is no need to spend time and effort applying filters and processing subscriptions in batches.
+
+To learn more, visit [Get started with Azure Advisor](advisor-get-started.md).
+
+### Tag filtering
+
+You can now get Advisor recommendations scoped to a business unit, workload, or team. Filter recommendations and calculate scores using tags you have already assigned to Azure resources, resource groups and subscriptions. Apply tag filters to:
+
+* Identify cost saving opportunities by business units
+* Compare scores for workloads to optimize critical ones first
+
+To learn more, visit [How to filter Advisor recommendations using tags](advisor-tag-filtering.md).
+ ## January 2022 [**Shutdown/Resize your virtual machines**](advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances) recommendation was enhanced to increase the quality, robustness, and applicability.
advisor Advisor Tag Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-tag-filtering.md
+
+ Title: Review optimization opportunities by workload, environment or team
+description: Review optimization opportunities by workload, environment or team
++ Last updated : 05/25/2022++
+# Review optimization opportunities by workload, environment or team
+
+You can now get Advisor recommendations and scores scoped to a workload, environment, or team using resource tag filters. Filter recommendations and calculate scores using tags you have already assigned to Azure resources, resource groups and subscriptions. Use tag filters to:
+
+* Identify cost saving opportunities by team
+* Compare scores for workloads to optimize the critical ones first
+
+> [!TIP]
+> For more information on how to use resource tags to organize and govern your Azure resources, please see the [Cloud Adoption Framework's guidance](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-tagging) and [Build a cloud governance strategy on Azure](/learn/modules/build-cloud-governance-strategy-azure/).
+
+## How to filter recommendations using tags
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Search for and select [Advisor](https://aka.ms/azureadvisordashboard) from any page.
+1. On the Advisor dashboard, click on the **Add Filter** button.
+1. In the **Filter** field, select the tag, and then select one or more values.
+1. Click **Apply**. Summary tiles will be updated to reflect the filter.
+1. Click on any of the categories to review recommendations.
+
+ [ ![Screenshot of the Azure Advisor dashboard that shows count of recommendations after tag filter is applied.](./media/tags/overview-tag-filters.png) ](./media/tags/overview-tag-filters.png#lightbox)
+
+
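+If you prefer to script this instead of using the portal steps above, you can approximate the same tag-scoped view with Azure Resource Graph. The sketch below is not an Advisor feature: it assumes the Azure CLI `resource-graph` extension is installed and uses a hypothetical `env=prod` tag, joining Advisor recommendations to the tagged resources they target.
+
+```azurecli
+az graph query -q "
+advisorresources
+| where type =~ 'microsoft.advisor/recommendations'
+| extend targetId = tolower(tostring(properties.resourceMetadata.resourceId))
+| join kind=inner (
+    resources
+    | where tostring(tags['env']) =~ 'prod'
+    | project targetId = tolower(id)
+  ) on targetId
+| project name, category = tostring(properties.category), impact = tostring(properties.impact), targetId
+"
+```
+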
+## How to calculate scores using resource tags
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Search for and select [Advisor](https://aka.ms/azureadvisordashboard) from any page.
+1. Select **Advisor score (preview)** from the navigation menu on the left.
+1. Click on the **Add Filter** button.
+1. In the **Filter** field, select the tag, and then select one or more values.
+1. Click **Apply**. Advisor score will be updated to only include resources impacted by the filter.
+1. Click on any of the categories to review recommendations.
+
+ [ ![Screenshot of the Azure Advisor score dashboard that shows score and recommendations after tag filter is applied.](./media/tags/score-tag-filters.png) ](./media/tags/score-tag-filters.png#lightbox)
+
+> [!NOTE]
+> Not all capabilities are available when tag filters are used. For example, tag filters are not supported for security score and score history.
+
+## Next steps
+
+To learn more about tagging, see:
+- [Define your tagging strategy - Cloud Adoption Framework](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-tagging)
+- [Tag resources, resource groups, and subscriptions for logical organization - Azure Resource Manager](/azure/azure-resource-manager/management/tag-resources?tabs=json)
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md
Last updated 06/10/2021-
+ms.tool: azure-cli, azure-powershell
ms.devlang: azurecli
aks Howto Deploy Java Liberty App With Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app-with-postgresql.md
The steps in this section guide you through creating an Azure Database for Postg
Use the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command to create the DB server. The following example creates a DB server named *youruniquedbname*. Make sure *youruniquedbname* is unique within Azure. > [!TIP]
- > To help ensure a globally unique name, prepend a disambiguation string such as your intitials and the MMDD of today's date.
+ > To help ensure a globally unique name, prepend a disambiguation string such as your initials and the MMDD of today's date.
```bash
In directory *liberty/config*, the *server.xml* is used to configure the DB conn
After the offer is successfully deployed, an AKS cluster will be generated automatically. The AKS cluster is configured to connect to the ACR. Before we get started with the application, we need to extract the namespace configured for the AKS.
-1. Run following command to print the current deployment file, using the `appDeploymentTemplateYamlEncoded` you saved above. The output contains all the variables we need.
+1. Run the following command to print the current deployment file, using the `appDeploymentTemplateYamlEncoded` you saved above. The output contains all the variables we need.
```bash echo <appDeploymentTemplateYamlEncoded> | base64 -d
aks Use Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-tags.md
Title: Use Azure tags in Azure Kubernetes Service (AKS)
description: Learn how to use Azure provider tags to track resources in Azure Kubernetes Service (AKS). Previously updated : 02/08/2022 Last updated : 05/26/2022 # Use Azure tags in Azure Kubernetes Service (AKS)
When you create or update an AKS cluster with the `--tags` parameter, the follow
* The AKS cluster * The route table that's associated with the cluster * The public IP that's associated with the cluster
+* The load balancer that's associated with the cluster
* The network security group that's associated with the cluster * The virtual network that's associated with the cluster
+* The AKS-managed kubelet MSI associated with the cluster
+* The AKS-managed add-on MSI associated with the cluster
+* The private DNS zone associated with the private cluster
+* The private endpoint associated with the private cluster
+
+> [!NOTE]
+> Azure Private DNS only supports 15 tags. For more information, see [Tag resources, resource groups, and subscriptions](../azure-resource-manager/management/tag-resources.md).
To create a cluster and assign Azure tags, run `az aks create` with the `--tags` parameter, as shown in the following command. Running the command creates a *myAKSCluster* in the *myResourceGroup* with the tags *dept=IT* and *costcenter=9999*.
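
The following is a minimal sketch of that command. It assumes the resource group already exists and adds `--generate-ssh-keys` for convenience; adjust the parameters to your environment.

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --tags dept=IT costcenter=9999 \
    --generate-ssh-keys
```
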
parameters:
> > Any updates that you make to tags through Kubernetes will retain the value that's set through Kubernetes. For example, if your disk has tags *dept=IT* and *costcenter=5555* set by Kubernetes, and you use the portal to set the tags *team=beta* and *costcenter=3333*, the new list of tags would be *dept=IT*, *team=beta*, and *costcenter=5555*. If you then remove the disk through Kubernetes, the disk would have the tag *team=beta*.
-[install-azure-cli]: /cli/azure/install-azure-cli
+[install-azure-cli]: /cli/azure/install-azure-cli
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
The Web Application Routing solution makes it easy to access applications that a
The add-on deploys four components: an [nginx ingress controller][nginx], [Secrets Store CSI Driver][csi-driver], [Open Service Mesh (OSM)][osm], and [External-DNS][external-dns] controller. - **Nginx ingress Controller**: The ingress controller exposed to the internet.-- **External-dns**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone.
+- **External-DNS controller**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone.
- **CSI driver**: Connector used to communicate with keyvault to retrieve SSL certificates for ingress controller. - **OSM**: A lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.-- **External-DNS controller**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone. ## Prerequisites - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli).
+- An Azure Key Vault containing any application certificates.
+- A DNS solution.
### Install the `aks-preview` Azure CLI extension
You can also enable Web Application Routing on an existing AKS cluster using the
az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons web_application_routing ```
-After the cluster is deployed or updated, use the [az aks show][az-aks-show] command to retrieve the DNS zone name.
- ## Connect to your AKS cluster To connect to the Kubernetes cluster from your local computer, you use [kubectl][kubectl], the Kubernetes command-line client.
If you use the Azure Cloud Shell, `kubectl` is already installed. You can also i
az aks install-cli ```
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. The following example gets credentials for the AKS cluster named *MyAKSCluster* in the *MyResourceGroup*:
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. The following example gets credentials for the AKS cluster named *myAKSCluster* in *myResourceGroup*:
```azurecli
-az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
+az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
``` ## Create the application namespace
Copy the identity's object ID:
### Grant access to Azure Key Vault
+Obtain the vault URI for your Azure Key Vault:
+
+```azurecli
+az keyvault show --resource-group myResourceGroup --name myapp-contoso
+```
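+
+The command above returns the full key vault resource. If you only want the URI value itself, a JMESPath query can narrow the output; this is a convenience sketch using the same resource names:
+
+```azurecli
+az keyvault show --resource-group myResourceGroup --name myapp-contoso --query properties.vaultUri --output tsv
+```
+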
+ Grant `GET` permissions for Web Application Routing to retrieve certificates from Azure Key Vault: ```azurecli
annotations:
These annotations in the service manifest would direct Web Application Routing to create an ingress servicing `myapp.contoso.com` connected to the keyvault `myapp-contoso`.
-Create a file named **samples-web-app-routing.yaml** and copy in the following YAML. On line 29-31, update `<MY_HOSTNAME>` and `<MY_KEYVAULT_URI>` with the DNS zone name collected in the previous step of this article.
+Create a file named **samples-web-app-routing.yaml** and copy in the following YAML. On line 29-31, update `<MY_HOSTNAME>` with your DNS host name and `<MY_KEYVAULT_URI>` with the vault URI collected in the previous step of this article.
```yaml apiVersion: apps/v1
Use the [kubectl apply][kubectl-apply] command to create the resources.
kubectl apply -f samples-web-app-routing.yaml -n hello-web-app-routing ```
-The following example shows the created resources:
+The following example output shows the created resources:
```bash
-$ kubectl apply -f samples-web-app-routing.yaml -n hello-web-app-routing
- deployment.apps/aks-helloworld created service/aks-helloworld created ```
service/aks-helloworld created
## Verify the managed ingress was created ```bash
-$ kubectl get ingress -n hello-web-app-routing -n hello-web-app-routing
+$ kubectl get ingress -n hello-web-app-routing
``` Open a web browser to *<MY_HOSTNAME>*, for example *myapp.contoso.com* and verify you see the demo application. The application may take a few minutes to appear.
az aks disable-addons --addons web_application_routing --name myAKSCluster --re
When the Web Application Routing add-on is disabled, some Kubernetes resources may remain in the cluster. These resources include *configMaps* and *secrets*, and are created in the *app-routing-system* namespace. To maintain a clean cluster, you may want to remove these resources.
-Look for *addon-web-application-routing* resources using the following [kubectl get][kubectl-get] commands:
- ## Clean up Remove the associated Kubernetes objects created in this article using `kubectl delete`.
service "aks-helloworld" deleted
[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete [kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs [ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
-[ingress-resource]: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
+[ingress-resource]: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
app-service Configure Vnet Integration Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-vnet-integration-enable.md
Last updated 10/20/2021
+ms.tool: azure-cli, azure-powershell
# Enable virtual network integration in Azure App Service
app-service Provision Resource Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/provision-resource-terraform.md
Last updated 8/26/2021
+ms.tool: terraform
application-gateway Application Gateway Websocket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-websocket.md
To establish a WebSocket connection, a specific HTTP-based handshake is exchange
![Diagram compares a client interacting with a web server, connecting twice to get two replies, with a WebSocket interaction, where a client connects to a server once to get multiple replies.](./media/application-gateway-websocket/websocket.png)
+> [!NOTE]
+> As described, the HTTP protocol is used only to perform a handshake when establishing a WebSocket connection. Once the handshake is completed, a WebSocket connection gets opened for transmitting the data, and the Web Application Firewall (WAF) cannot parse any contents. Therefore, WAF does not perform any inspections on such data.
+ ### Listener configuration element An existing HTTP listener can be used to support WebSocket traffic. The following is a snippet of an httpListeners element from a sample template file. You would need both HTTP and HTTPS listeners to support WebSocket and secure WebSocket traffic. Similarly you can use the portal or Azure PowerShell to create an application gateway with listeners on port 80/443 to support WebSocket traffic.
Another reason for this is that application gateway backend health probe support
## Next steps
-After learning about WebSocket support, go to [create an application gateway](quick-create-powershell.md) to get started with a WebSocket enabled web application.
+After learning about WebSocket support, go to [create an application gateway](quick-create-powershell.md) to get started with a WebSocket enabled web application.
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
As of March 15, 2021, Key Vault recognizes Application Gateway as a trusted serv
When you're using a restricted Key Vault, use the following steps to configure Application Gateway to use firewalls and virtual networks: > [!TIP]
-> The following steps are not required if your Key Vault has a Private Endpoint enabled. The application gateway can access the Key Vault using the private IP address.
+> Steps 1-3 are not required if your Key Vault has a Private Endpoint enabled. The application gateway can access the Key Vault using the private IP address.
1. In the Azure portal, in your Key Vault, select **Networking**. 1. On the **Firewalls and virtual networks** tab, select **Selected networks**.
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-overview.md
description: Learn about regions and availability zones and how they work to hel
Previously updated : 03/30/2022 Last updated : 05/30/2022
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
description: Learn what services are supported by availability zones and underst
Previously updated : 03/25/2022 Last updated : 05/30/2022
azure-arc Active Directory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-introduction.md
# Azure Arc-enabled SQL Managed Instance with Active Directory authentication + Azure Arc-enabled data services support Active Directory (AD) for Identity and Access Management (IAM). The Arc-enabled SQL Managed Instance uses an existing on-premises Active Directory (AD) domain for authentication. + This article describes how to enable Azure Arc-enabled SQL Managed Instance with Active Directory (AD) Authentication. The article demonstrates two possible AD integration modes: - Customer-managed keytab (CMK) - System-managed keytab (SMK)
azure-arc Active Directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-prerequisites.md
This document explains how to prepare to deploy Azure Arc-enabled data services with Active Directory (AD) authentication. Specifically the article describes Active Directory objects you need to configure before the deployment of Kubernetes resources. + [The introduction](active-directory-introduction.md#compare-ad-integration-modes) describes two different integration modes: - *System-managed keytab* mode allows the system to create and manage the AD accounts for each SQL Managed Instance. - *Customer-managed keytab* mode allows you to create and manage the AD accounts for each SQL Managed Instance.
azure-arc Configure Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-managed-instance.md
Previously updated : 02/22/2022 Last updated : 05/27/2022
To view the changes made to the Azure Arc-enabled SQL managed instance, you can
az sql mi-arc show -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s ```
+## Configure readable secondaries
+
+When you deploy Azure Arc-enabled SQL Managed Instance in the `BusinessCritical` service tier with two or more replicas, one secondary replica is automatically configured as `readableSecondary` by default. This setting can be changed, either to add or to remove the readable secondaries, as follows:
+
+```azurecli
+az sql mi-arc update --name <sqlmi name> --readable-secondaries <value> --k8s-namespace <namespace> --use-k8s
+```
+
+For example, the following command resets the readable secondaries to 0.
+
+```azurecli
+az sql mi-arc update --name sqlmi1 --readable-secondaries 0 --k8s-namespace mynamespace --use-k8s
+```
+## Configure replicas
+
+You can also scale up or down the number of replicas deployed in the `BusinessCritical` service tier as follows:
+
+```azurecli
+az sql mi-arc update --name <sqlmi name> --replicas <value> --k8s-namespace <namespace> --use-k8s
+```
+
+For example, the following command scales down the number of replicas from 3 to 2.
+
+```azurecli
+az sql mi-arc update --name sqlmi1 --replicas 2 --k8s-namespace mynamespace --use-k8s
+```
+
+> [!NOTE]
+> If you scale down from 2 replicas to 1 replica, you may run into a conflict with the pre-configured `--readable-secondaries` setting. In that case, first set `--readable-secondaries` to 0 before scaling down the replicas, as shown in the example below.
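+
+A hedged sketch of that sequence, using the same `sqlmi1` instance and namespace as the earlier examples:
+
+```azurecli
+# Remove the readable secondary first, then scale down to a single replica.
+az sql mi-arc update --name sqlmi1 --readable-secondaries 0 --k8s-namespace mynamespace --use-k8s
+az sql mi-arc update --name sqlmi1 --replicas 1 --k8s-namespace mynamespace --use-k8s
+```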
++ ## Configure Server options You can configure server configuration settings for Azure Arc-enabled SQL managed instance after creation time. This article describes how to configure settings like enabling or disabling mssql Agent, enable specific trace flags for troubleshooting scenarios.
azure-arc Configure Transparent Data Encryption Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-manually.md
# Enable transparent data encryption on Azure Arc-enabled SQL Managed Instance
-This article describes how to enable transparent data encryption on a database created in an Azure Arc-enabled SQL Managed Instance.
+This article describes how to enable transparent data encryption on a database created in an Azure Arc-enabled SQL Managed Instance. In this article, the term *managed instance* refers to a deployment of Azure Arc-enabled SQL Managed Instance.
## Prerequisites
-Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created and have connected to it.
+Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created and connect to it.
- [An Azure Arc-enabled SQL Managed Instance created](./create-sql-managed-instance.md) - [Connect to Azure Arc-enabled SQL Managed Instance](./connect-managed-instance.md)
-## Turn on transparent data encryption on a database in Azure Arc-enabled SQL Managed Instance
+## Turn on transparent data encryption on a database in the managed instance
-Turning on transparent data encryption in Azure Arc-enabled SQL Managed Instance follows the same steps as SQL Server on-premises. Follow the steps described in [SQL Server's transparent data encryption guide](/sql/relational-databases/security/encryption/transparent-data-encryption#enable-tde).
+Turning on transparent data encryption in the managed instance follows the same steps as SQL Server on-premises. Follow the steps described in [SQL Server's transparent data encryption guide](/sql/relational-databases/security/encryption/transparent-data-encryption#enable-tde).
-After creating the necessary credentials, it's highly recommended to back up any newly created credentials.
+After you create the necessary credentials, back up any newly created credentials.
-## Back up a transparent data encryption credential from Azure Arc-enabled SQL Managed Instance
+## Back up a transparent data encryption credential
-When backing up from Azure Arc-enabled SQL Managed Instance, the credentials will be stored within the container. It isn't necessary to store the credentials on a persistent volume, but you may use the mount path for the data volume within the container if you'd like: `/var/opt/mssql/data`. Otherwise, the credentials will be stored in-memory in the container. Below is an example of backing up a certificate from Azure Arc-enabled SQL Managed Instance.
+When you back up credentials from the managed instance, the credentials are stored within the container. To store credentials on a persistent volume, specify the mount path in the container. For example, `var/opt/mssql/data`. The following example backs up a certificate from the managed instance:
> [!NOTE]
-> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. `kubectl` can mistake the drive in the path as a pod name. For example, `kubectl` might mistake `C` to be a pod name in `C:\folder`. Users can avoid this issue by using relative paths or removing the `C:` from the provided path while in the `C:` drive. This issue also applies to environment variables on Windows like `$HOME`.
+> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below.
1. Back up the certificate from the container to `/var/opt/mssql/data`.
When backing up from Azure Arc-enabled SQL Managed Instance, the credentials wil
2. Copy the certificate from the container to your file system.
+### [Windows](#tab/windows)
+
+ ```console
+ kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-certificate-path> > <local-certificate-path>
+ ```
+
+ Example:
+
+ ```console
+ kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.crt > $HOME\sqlcerts\servercert.crt
+ ```
+
+### [Linux](#tab/linux)
```console kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-certificate-path> <local-certificate-path> ```
When backing up from Azure Arc-enabled SQL Managed Instance, the credentials wil
Example: ```console
- kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.crt ./sqlcerts/servercert.crt
+ kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.crt $HOME/sqlcerts/servercert.crt
``` ++ 3. Copy the private key from the container to your file system.
+### [Windows](#tab/windows)
+ ```console
+ kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-private-key-path> > <local-private-key-path>
+ ```
+
+ Example:
+
+ ```console
+ kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.key > $HOME\sqlcerts\servercert.key
+ ```
+
+### [Linux](#tab/linux)
```console kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-private-key-path> <local-private-key-path> ```
When backing up from Azure Arc-enabled SQL Managed Instance, the credentials wil
Example: ```console
- kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.key ./sqlcerts/servercert.key
+ kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.key $HOME/sqlcerts/servercert.key
``` ++ 4. Delete the certificate and private key from the container. ```console
When backing up from Azure Arc-enabled SQL Managed Instance, the credentials wil
kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" ```
-## Restore a transparent data encryption credential to Azure Arc-enabled SQL Managed Instance
+## Restore a transparent data encryption credential to a managed instance
-Similar to above, restore the credentials by copying them into the container and running the corresponding T-SQL afterwards.
+Similar to above, to restore the credentials, copy them into the container and run the corresponding T-SQL afterwards.
> [!NOTE]
-> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. `kubectl` can mistake the drive in the path as a pod name. For example, `kubectl` might mistake `C` to be a pod name in `C:\folder`. Users can avoid this issue by using relative paths or removing the `C:` from the provided path while in the `C:` drive. This issue also applies to environment variables on Windows like `$HOME`.
+> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below.
1. Copy the certificate from your file system to the container.
+### [Windows](#tab/windows)
+ ```console
+ type <local-certificate-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-certificate-path>
+ ```
+
+ Example:
+ ```console
+ type $HOME\sqlcerts\servercert.crt | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.crt
+ ```
+
+### [Linux](#tab/linux)
```console kubectl cp --namespace <namespace> --container arc-sqlmi <local-certificate-path> <pod-name>:<pod-certificate-path> ```
Similar to above, restore the credentials by copying them into the container and
Example: ```console
- kubectl cp --namespace arc-ns --container arc-sqlmi ./sqlcerts/servercert.crt sql-0:/var/opt/mssql/data/servercert.crt
+ kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.crt sql-0:/var/opt/mssql/data/servercert.crt
``` ++ 2. Copy the private key from your file system to the container.
+### [Windows](#tab/windows)
+ ```console
+ type <local-private-key-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-private-key-path>
+ ```
+
+ Example:
+ ```console
+ type $HOME\sqlcerts\servercert.key | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.key
+ ```
+
+### [Linux](#tab/linux)
```console kubectl cp --namespace <namespace> --container arc-sqlmi <local-private-key-path> <pod-name>:<pod-private-key-path> ```
Similar to above, restore the credentials by copying them into the container and
Example: ```console
- kubectl cp --namespace arc-ns --container arc-sqlmi ./sqlcerts/servercert.key sql-0:/var/opt/mssql/data/servercert.key
+ kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.key sql-0:/var/opt/mssql/data/servercert.key
``` ++ 3. Create the certificate using file paths from `/var/opt/mssql/data`. ```sql
azure-arc Connect Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-active-directory-sql-managed-instance.md
This article describes how to connect to SQL Managed Instance endpoint using Active Directory (AD) authentication. Before you proceed, make sure you have an AD-integrated Azure Arc-enabled SQL Managed Instance deployed already. + See [Tutorial ΓÇô Deploy AD-integrated SQL Managed Instance](deploy-active-directory-sql-managed-instance.md) to deploy Azure Arc-enabled SQL Managed Instance with Active Directory authentication enabled. > [!NOTE]
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
Previously updated : 03/24/2022 Last updated : 05/27/2022
Optionally, you can specify certificates for logs and metrics UI dashboards. See
After the extension and custom location are created, proceed to deploy the Azure Arc data controller as follows. ```azurecli
-az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --profile-name <profile name> --auto-upload-logs true --auto-upload-metrics true --custom-location <name of custom location> --storage-class <storageclass>
+az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --profile-name <profile name> --auto-upload-metrics true --custom-location <name of custom location> --storage-class <storageclass>
# Example
-az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-logs true --auto-upload-metrics true --custom-location mycustomlocation --storage-class mystorageclass
+az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-metrics true --custom-location mycustomlocation --storage-class mystorageclass
``` If you want to create the Azure Arc data controller using a custom configuration template, follow the steps described in [Create custom configuration profile](create-custom-configuration-template.md) and provide the path to the file as follows: ```azurecli
-az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --path ./azure-arc-custom --auto-upload-logs true --auto-upload-metrics true --custom-location <name of custom location>
+az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --custom-location <name of custom location>
# Example
-az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --path ./azure-arc-custom --auto-upload-logs true --auto-upload-metrics true --custom-location mycustomlocation
+az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --custom-location mycustomlocation
``` ## Monitor the status of Azure Arc data controller deployment
azure-arc Deploy Active Directory Connector Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-cli.md
This article explains how to deploy an Active Directory (AD) connector using Azure CLI. The AD connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance. + ## Prerequisites ### Install tools
azure-arc Deploy Active Directory Connector Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-portal.md
Active Directory (AD) connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instances. + This article explains how to deploy, manage, and delete an Active Directory (AD) connector in directly connected mode from the Azure portal. ## Prerequisites
azure-arc Deploy Active Directory Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance-cli.md
This article explains how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory (AD) authentication using Azure CLI. + See these articles for specific instructions: - [Tutorial ΓÇô Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md)
azure-arc Deploy Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md
This article explains how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory (AD) authentication. + Before you proceed, complete the steps explained in [Customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md) or [Deploy a system-managed keytab AD connector](deploy-system-managed-keytab-active-directory-connector.md) ## Prerequisites
azure-arc Deploy Customer Managed Keytab Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-customer-managed-keytab-active-directory-connector.md
This article explains how to deploy Active Directory (AD) connector in customer-managed keytab mode. The connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance. + ## Active Directory connector in customer-managed keytab mode In customer-managed keytab mode, an Active Directory connector deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of the two upstream DNS
azure-arc Deploy System Managed Keytab Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-system-managed-keytab-active-directory-connector.md
This article explains how to deploy Active Directory connector in system-managed keytab mode. It is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance. + ## Active Directory connector in system-managed keytab mode In System-Managed Keytab mode, an Active Directory connector deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of the two upstream DNS
azure-arc Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-guide.md
description: Introduction to troubleshooting resources
--++ Previously updated : 07/30/2021 Last updated : 05/27/2022
This article identifies troubleshooting resources for Azure Arc-enabled data services.
+## Logs upload related errors
+
+If you deployed Azure Arc data controller in the `direct` connectivity mode using `kubectl`, and have not created a secret for the Log Analytics workspace credentials, you may see the following error messages in the Data Controller CR (Custom Resource):
+
+```output
+"status": {
+  "azure": {
+    "uploadStatus": {
+      "logs": {
+        "lastUploadTime": "YYYY-MM-DDTHH:MM:SS.SSSSSSZ",
+        "message": "spec.settings.azure.autoUploadLogs is true, but failed to get log-workspace-secret secret."
+      },
+
+```
+
+To resolve the above error, create a secret with the Log Analytics Workspace credentials containing the `WorkspaceID` and the `SharedAccessKey` as follows:
+
+```yaml
+apiVersion: v1
+data:
+ primaryKey: <base64 encoding of Azure Log Analytics workspace primary key>
+ workspaceId: <base64 encoding of Azure Log Analytics workspace Id>
+kind: Secret
+metadata:
+ name: log-workspace-secret
+ namespace: <your datacontroller namespace>
+type: Opaque
+
+```
+
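+As an alternative to authoring the YAML manually, you can let `kubectl` handle the base64 encoding. The following is a minimal sketch that assumes the same secret name and data controller namespace:
+
+```console
+kubectl create secret generic log-workspace-secret \
+  --from-literal=workspaceId='<Azure Log Analytics workspace Id>' \
+  --from-literal=primaryKey='<Azure Log Analytics workspace primary key>' \
+  --namespace <your datacontroller namespace>
+```
+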
+## Metrics upload related errors in direct connected mode
+
+If you configured automatic upload of metrics in the direct connected mode and the permissions needed for the MSI have not been properly granted (as described in [Upload metrics](upload-metrics.md)), you might see an error in your logs as follows:
+
+```output
+'Metric upload response: {"error":{"code":"AuthorizationFailed","message":"Check Access Denied Authorization for AD object XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX over scope /subscriptions/XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX/resourcegroups/my-resource-group/providers/microsoft.azurearcdata/sqlmanagedinstances/arc-dc, User Tenant Id: XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX. Microsoft.Insights/Metrics/write was not allowed, Microsoft.Insights/Telemetry/write was notallowed. Warning: Principal will be blocklisted if the service principal is not granted proper access while it hits the GIG endpoint continuously."}}
+```
+
+To resolve the above error, retrieve the MSI for the Azure Arc data controller extension and grant the required roles as described in [Upload metrics](upload-metrics.md).
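+
+For illustration, a hedged sketch of that role assignment follows. The object ID placeholder is the extension MSI you retrieve as described in [Upload metrics](upload-metrics.md), and the scope shown is an assumption that you should adjust to your environment.
+
+```azurecli
+az role assignment create \
+  --assignee <extension MSI object id> \
+  --role "Monitoring Metrics Publisher" \
+  --scope /subscriptions/<subscription id>/resourceGroups/<resource group>
+```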
++
+## Usage upload related errors in direct connected mode
+
+If you deployed your Azure Arc data controller in the direct connected mode, the permissions needed to upload your usage information are automatically granted for the Azure Arc data controller extension MSI. If the automatic upload process runs into permissions-related issues, you might see an error in your logs as follows:
+
+```
+identified that your data controller stopped uploading usage data to Azure. The error was:
+
+{"lastUploadTime":"2022-05-05T20:10:47.6746860Z","message":"Data controller upload response: {\"error\":{\"code\":\"AuthorizationFailed\",\"message\":\"The client 'XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX' with object id 'XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX' does not have authorization to perform action 'microsoft.azurearcdata/datacontrollers/write' over scope '/subscriptions/XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX/resourcegroups/my-resource-group/providers/microsoft.azurearcdata/datacontrollers/arc-dc' or the scope is invalid. If access was recently granted, please refresh your credentials.\"}}"}
+```
+
+To resolve the permissions issue, retrieve the MSI and grant the required roles as described in [Upload metrics](upload-metrics.md).
++ ## Resources by type [Scenario: Troubleshooting PostgreSQL Hyperscale server groups](troubleshoot-postgresql-hyperscale-server-group.md)
azure-arc Upload Usage Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-usage-data.md
description: Upload usage Azure Arc-enabled data services data to Azure
--++ Previously updated : 11/03/2021 Last updated : 05/27/2022 # Upload usage data to Azure in **indirect** mode
-Periodically, you can export out usage information. The export and upload of this information creates and updates the data controller, SQL managed instance, and PostgreSQL Hyperscale server group resources in Azure.
+Periodically, you can export out usage information. The export and upload of this information creates and updates the data controller, SQL managed instance, and PostgreSQL resources in Azure.
> [!NOTE] > Usage information is automatically uploaded for Azure Arc data controller deployed in **direct** connectivity mode. The instructions in this article only apply to uploading usage information for Azure Arc data controller deployed in **indirect** connectivity mode.
Usage information such as inventory and resource usage can be uploaded to Azure
az arcdata dc export --type usage --path usage.json --k8s-namespace <namespace> --use-k8s ```
- This command creates a `usage.json` file with all the Azure Arc-enabled data resources such as SQL managed instances and PostgreSQL Hyperscale instances etc. that are created on the data controller.
+ This command creates a `usage.json` file with all the Azure Arc-enabled data resources such as SQL managed instances and PostgreSQL instances etc. that are created on the data controller.
For now, the file is not encrypted so that you can see the contents. Feel free to open in a text editor and see what the contents look like.
-You will notice that there are two sets of data: `resources` and `data`. The `resources` are the data controller, PostgreSQL Hyperscale server groups, and SQL Managed Instances. The `resources` records in the data capture the pertinent events in the history of a resource - when it was created, when it was updated, and when it was deleted. The `data` records capture how many cores were available to be used by a given instance for every hour.
+You will notice that there are two sets of data: `resources` and `data`. The `resources` are the data controller, PostgreSQL, and SQL Managed Instances. The `resources` records in the data capture the pertinent events in the history of a resource - when it was created, when it was updated, and when it was deleted. The `data` records capture how many cores were available to be used by a given instance for every hour.
Example of a `resource` entry:
Example of a `data` entry:
az arcdata dc upload --path usage.json ```
+## Upload frequency
+
+In the **indirect** mode, usage information needs to be uploaded to Azure at least once every 30 days. It is highly recommended to upload more frequently, such as daily or weekly. If usage information is not uploaded for more than 32 days, you will see some degradation of the service, such as being unable to provision any new resources.
+
+There will be two types of notifications for delayed usage uploads - warning phase and degraded phase. In the warning phase there will be a message such as `Billing data for the Azure Arc data controller has not been uploaded in {0} hours. Please upload billing data as soon as possible.`.
+
+In the degraded phase, the message will look like `Billing data for the Azure Arc data controller has not been uploaded in {0} hours. Some functionality will not be available until the billing data is uploaded.`.
+
+The Azure portal **Overview** page for the data controller and the custom resource status of the data controller in your Kubernetes cluster both indicate the last upload date and the status message(s).
+++ ## Automating uploads (optional) If you want to upload metrics and logs on a scheduled basis, you can create a script and run it on a timer every few minutes. Below is an example of automating the uploads using a Linux shell script.
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-configurations.md
Title: "Configurations and GitOps - Azure Arc-enabled Kubernetes"
+ Title: "GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes"
Last updated 05/24/2022
description: "This article provides a conceptual overview of GitOps and configur
keywords: "Kubernetes, Arc, Azure, containers, configuration, GitOps"
-# Configurations and GitOps with Azure Arc-enabled Kubernetes
+# GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes
> [!NOTE] > This document is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about GitOps with Flux v2](./conceptual-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible. In relation to Kubernetes, GitOps is the practice of declaring the desired state of Kubernetes cluster configurations (deployments, namespaces, etc.) in a Git repository. This declaration is followed by a polling and pull-based deployment of these cluster configurations using an operator. The Git repository can contain:+ * YAML-format manifests describing any valid Kubernetes resources, including Namespaces, ConfigMaps, Deployments, DaemonSets, etc. * Helm charts for deploying applications.
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
Title: "Conceptual overview Azure Kubernetes Configuration Management (GitOps)"
+ Title: "GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes"
description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 5/3/2022 Last updated : 5/26/2022
-# GitOps in Azure
+# GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes
Azure provides configuration management capability using GitOps in Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters. You can easily enable and use GitOps in these clusters.
azure-functions Azure Functions Az Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/azure-functions-az-redundancy.md
Title: Azure Functions availability zone support on Elastic Premium plans
description: Learn how to use availability zone redundancy with Azure Functions for high-availability function applications on Elastic Premium plans. Previously updated : 09/07/2021 Last updated : 03/24/2022 # Goal: Introduce AZ Redundancy in Azure Functions elastic premium plans to customers + a tutorial on how to get started with ARM templates # Azure Functions support for availability zone redundancy
-Availability zone (AZ) support for Azure Functions is now available on Elastic Premium and Dedicated (App Service) plans. A Zone Redundant Azure Function application will automatically balance its instances between availability zones for higher availability. This document focuses on zone redundancy support for Elastic Premium Function plans. For zone redundancy on Dedicated plans, refer [here](../app-service/how-to-zone-redundancy.md).
+Availability zone (AZ) support for Azure Functions is now available on Premium (Elastic Premium) and Dedicated (App Service) plans. A zone-redundant Functions application automatically balances its instances between availability zones for higher availability. This article focuses on zone redundancy support for Premium plans. For zone redundancy on Dedicated plans, refer [here](../app-service/how-to-zone-redundancy.md).
+ ## Overview
-An [availability zone](../availability-zones/az-overview.md#availability-zones) is a high-availability offering that protects your applications and data from datacenter failures. Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there&#39;s a minimum of three separate zones in all enabled regions. You can build high availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating in other zones.
+An [availability zone](../availability-zones/az-overview.md#availability-zones) is a high-availability offering that protects your applications and data from datacenter failures. Availability zones are unique physical locations within an Azure region. Each zone comprises one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. You can build high-availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating into other zones.
-A zone redundant function app will automatically distribute load the instances that your app runs on between the availability zones in the region. For Zone Redundant Elastic Premium apps, even as the app scales in and out, the instances the app is running on are still evenly distributed between availability zones.
+A zone redundant function app automatically distributes the instances your app runs on between the availability zones in the region. For apps running in a zone-redundant Premium plan, even as the app scales in and out, the instances the app is running on are still evenly distributed between availability zones.
## Requirements
-> [!IMPORTANT]
-> When selecting a [storage account](storage-considerations.md#storage-account-requirements) for your function app, be sure to use a [zone redundant storage account (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). Otherwise, in the case of a zonal outage, Functions may show unexpected behavior due to its dependency on Storage.
+When hosting in a zone-redundant Premium plan, the following requirements must be met.
+- You must use a [zone redundant storage account (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) for your function app's [storage account](storage-considerations.md#storage-account-requirements). If you use a different type of storage account, Functions may show unexpected behavior during a zonal outage.
- Both Windows and Linux are supported.-- Must be hosted on an [Elastic Premium](functions-premium-plan.md) or Dedicated hosting plan. Instructions on zone redundancy with Dedicated (App Service) hosting plan can be found [here](../app-service/how-to-zone-redundancy.md).
+- Must be hosted on an [Elastic Premium](functions-premium-plan.md) or Dedicated hosting plan. Instructions on zone redundancy with Dedicated (App Service) hosting plan can be found [in this article](../app-service/how-to-zone-redundancy.md).
- Availability zone (AZ) support isn't currently available for function apps on [Consumption](consumption-plan.md) plans.-- Zone redundant plans must specify a minimum instance count of 3.-- Function apps on an Elastic Premium plan additionally must have a minimum [always ready instances](functions-premium-plan.md#always-ready-instances) count of 3.-- Can be enabled in any of the following regions:
+- Zone redundant plans must specify a minimum instance count of three.
+- Function apps hosted on a Premium plan must also have a minimum [always ready instances](functions-premium-plan.md#always-ready-instances) count of three.
+
+Zone-redundant Premium plans can currently be enabled in any of the following regions:
- West US 2 - West US 3 - Central US
+ - South Central US
- East US - East US 2 - Canada Central
A zone redundant function app will automatically distribute load the instances t
- Japan East - Southeast Asia - Australia East-- At this time, must be created through [ARM template](../azure-resource-manager/templates/index.yml). ## How to deploy a function app on a zone redundant Premium plan
-For initial creation of a zone redundant Elastic Premium Functions plan, you need to deploy via [ARM templates](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md). Then, once successfully created, you can view and interact with the Function Plan via the Azure portal and CLI tooling. An ARM template is only needed for the initial creation of the Function Plan. A guide to hosting Functions on Premium plans can be found [here](functions-infrastructure-as-code.md#deploy-on-premium-plan). Once the zone redundant plan is created and deployed, any function app hosted on your new plan will now be zone redundant.
+There are currently two ways to deploy a zone-redundant premium plan and function app. You can use either the [Azure portal](https://portal.azure.com) or an ARM template.
+
+# [Azure portal](#tab/azure-portal)
+
+1. Open the Azure portal and navigate to the **Create Function App** page. Information on creating a function app in the portal can be found [here](functions-create-function-app-portal.md#create-a-function-app).
+
+1. In the **Basics** page, fill out the fields for your function app. Pay special attention to the fields in the table below (also highlighted in the screenshot below), which have specific requirements for zone redundancy.
+
+ | Setting | Suggested value | Notes for Zone Redundancy |
+    | --- | --- | --- |
+    | **Region** | Preferred region | The region in which this new function app is created. You must pick a region that is AZ enabled from the [list above](#requirements). |
+
+    ![Screenshot of Basics tab of function app create page.](./media/functions-az-redundancy/azure-functions-basics-az.png)
+
+1. In the **Hosting** page, fill out the fields for your function app hosting plan. Pay special attention to the fields in the table below (also highlighted in the screenshot below), which have specific requirements for zone redundancy.
+
+ | Setting | Suggested value | Notes for Zone Redundancy |
+    | --- | --- | --- |
+ | **Storage Account** | A [zone-redundant storage account](storage-considerations.md#storage-account-requirements) | As mentioned above in the [requirements](#requirements) section, we strongly recommend using a zone-redundant storage account for your zone redundant function app. |
+ | **Plan Type** | Functions Premium | This article details how to create a zone redundant app in a Premium plan. Zone redundancy isn't currently available in Consumption plans. Information on zone redundancy on app service plans can be found [in this article](../app-service/how-to-zone-redundancy.md). |
+ | **Zone Redundancy** | Enabled | This field populates the flag that determines if your app is zone redundant or not. You won't be able to select `Enabled` unless you have chosen a region supporting zone redundancy, as mentioned in step 2. |
+
+    ![Screenshot of Hosting tab of function app create page.](./media/functions-az-redundancy/azure-functions-hosting-az.png)
-The only properties to be aware of while creating a zone redundant Function plan are the new **zoneRedundant** property and the Function Plan instance count (**capacity**) fields. The **zoneRedundant** property must be set to **true** and the **capacity** property should be set based on the workload requirement, but no less than 3. Choosing the right capacity varies based on several factors and high availability/fault tolerance strategies. A good rule of thumb is to ensure sufficient instances for the application such that losing one zone of instances leaves sufficient capacity to handle expected load.
+1. For the rest of the function app creation process, create your function app as normal. There are no fields in the rest of the creation process that affect zone redundancy.
+
+# [ARM template](#tab/arm-template)
+
+You can use an [ARM template](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md) to deploy to a zone-redundant Premium plan. A guide to hosting Functions on Premium plans can be found [here](functions-infrastructure-as-code.md#deploy-on-premium-plan).
+
+The only properties to be aware of while creating a zone-redundant hosting plan are the new `zoneRedundant` property and the plan's instance count (`capacity`) fields. The `zoneRedundant` property must be set to `true` and the `capacity` property should be set based on the workload requirement, but not less than `3`. Choosing the right capacity varies based on several factors and high availability/fault tolerance strategies. A good rule of thumb is to ensure sufficient instances for the application such that losing one zone of instances leaves sufficient capacity to handle expected load.
> [!IMPORTANT]
-> Azure function Apps hosted on an elastic premium, zone redundant Function plan must have a minimum [always ready instance](functions-premium-plan.md#always-ready-instances) count of 3. This is to enforce that a zone redundant function app always has enough instances to satisfy at least one worker per zone.
+> Azure Functions apps hosted on an elastic premium, zone-redundant plan must have a minimum [always ready instance](functions-premium-plan.md#always-ready-instances) count of 3. This requirement makes sure that a zone-redundant function app always has enough instances to satisfy at least one worker per zone.
-Below is an ARM template snippet for a zone redundant, Premium Function Plan, showing the new **zoneRedundant** field and the **capacity** specification.
+The following ARM template snippet is for a zone-redundant Premium plan, showing the `zoneRedundant` field and the `capacity` specification.
-```
- "resources": [
- {
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-01-15",
- "name": "your_plan_name_here",
- "location": "Central US",
- "sku": {
- "name": "EP3",
- "tier": "ElasticPremium",
- "size": "EP3",
- "family": "EP",
- "capacity": 3
- },
- "kind": "elastic",
- "properties": {
- "perSiteScaling": false,
- "elasticScaleEnabled": true,
- "maximumElasticWorkerCount": 20,
- "isSpot": false,
- "reserved": false,
- "isXenon": false,
- "hyperV": false,
- "targetWorkerCount": 0,
- "targetWorkerSizeId": 0,
- "zoneRedundant": true
- }
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2021-01-15",
+ "name": "your_plan_name_here",
+ "location": "Central US",
+ "sku": {
+ "name": "EP3",
+ "tier": "ElasticPremium",
+ "size": "EP3",
+ "family": "EP",
+ "capacity": 3
+ },
+ "kind": "elastic",
+ "properties": {
+ "perSiteScaling": false,
+ "elasticScaleEnabled": true,
+ "maximumElasticWorkerCount": 20,
+ "isSpot": false,
+ "reserved": false,
+ "isXenon": false,
+ "hyperV": false,
+ "targetWorkerCount": 0,
+ "targetWorkerSizeId": 0,
+ "zoneRedundant": true
}
- ]
+ }
+]
```
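The always ready instance minimum called out in the note above is set on the function app (site) resource rather than on the plan. The following is a minimal sketch, not a complete site definition; it assumes the `minimumElasticInstanceCount` site configuration property and uses placeholder resource names:

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2021-01-15",
  "name": "your_app_name_here",
  "location": "Central US",
  "kind": "functionapp",
  "dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms', 'your_plan_name_here')]"
  ],
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'your_plan_name_here')]",
    "siteConfig": {
      "minimumElasticInstanceCount": 3
    }
  }
}
```

Setting the value to `3` keeps the app in line with the three-instance minimum that the zone-redundant plan requires.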
-To learn more, see [Automate resource deployment for your function app in Azure Functions](functions-infrastructure-as-code.md).
+To learn more about these templates, see [Automate resource deployment in Azure Functions](functions-infrastructure-as-code.md).
+++
+After the zone-redundant plan is created and deployed, any function app hosted on your new plan is considered zone-redundant.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Improve the performance and reliability of Azure Functions](performance-reliability.md)
++
azure-functions Durable Functions Http Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-http-api.md
GET /admin/extensions/DurableTaskExtension/instances
&createdTimeFrom={timestamp} &createdTimeTo={timestamp} &runtimeStatus={runtimeStatus1,runtimeStatus2,...}
+ &instanceIdPrefix={prefix}
&showInput=[true|false] &top={integer} ```
GET /runtime/webhooks/durableTask/instances?
&createdTimeFrom={timestamp} &createdTimeTo={timestamp} &runtimeStatus={runtimeStatus1,runtimeStatus2,...}
+ &instanceIdPrefix={prefix}
&showInput=[true|false] &top={integer} ```
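For example, the following sketch combines the new `instanceIdPrefix` filter with the existing filters; the `orders-` prefix and the other parameter values are placeholders:

```
GET /runtime/webhooks/durableTask/instances?instanceIdPrefix=orders-&runtimeStatus=Running&top=10
```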
Request parameters for this API include the default set mentioned previously as
| **`createdTimeFrom`** | Query string | Optional parameter. When specified, filters the list of returned instances that were created at or after the given ISO8601 timestamp.| | **`createdTimeTo`** | Query string | Optional parameter. When specified, filters the list of returned instances that were created at or before the given ISO8601 timestamp.| | **`runtimeStatus`** | Query string | Optional parameter. When specified, filters the list of returned instances based on their runtime status. To see the list of possible runtime status values, see the [Querying instances](durable-functions-instance-management.md) article. |
+| **`instanceIdPrefix`** | Query string | Optional parameter. When specified, filters the list of returned instances to include only instances whose instance ID starts with the specified prefix string. Available starting with [version 2.7.2](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask/2.7.2) of the extension. |
| **`top`** | Query string | Optional parameter. When specified, limits the number of instances returned by the query. | ### Response
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The above sample value of `1800` sets a timeout of 30 minutes. To learn more, se
## WEBSITE\_CONTENTAZUREFILECONNECTIONSTRING
-Connection string for storage account where the function app code and configuration are stored in event-driven scaling plans running on Windows. For more information, see [Create a function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
+Connection string for the storage account where the function app code and configuration are stored in event-driven scaling plans. For more information, see [Create a function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
|Key|Sample value| ||| |WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|`DefaultEndpointsProtocol=https;AccountName=...`|
-Only used when deploying to a Windows or Linux Premium plan or to a Windows Consumption plan. Not supported for Linux Consumption plans or Windows or Linux Dedicated plans. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+This setting is used for Consumption and Premium plan apps on both Windows and Linux. It's not used for Dedicated plan apps, which aren't dynamically scaled by Functions.
+
+Changing or removing this setting can prevent your function app from starting. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
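When this setting is defined in an ARM template, it typically appears alongside `WEBSITE_CONTENTSHARE` in the site's application settings. The following is a minimal, hedged sketch; the storage account name, key, and file share name are placeholders:

```json
"siteConfig": {
  "appSettings": [
    {
      "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
      "value": "DefaultEndpointsProtocol=https;AccountName=<storage-account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
    },
    {
      "name": "WEBSITE_CONTENTSHARE",
      "value": "<file-share-name>"
    }
  ]
}
```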
## WEBSITE\_CONTENTOVERVNET
azure-functions Functions Create First Function Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-function-bicep.md
+
+ Title: Create your function app resources in Azure using Bicep
+description: Create and deploy to Azure a simple HTTP triggered serverless function using Bicep.
++ Last updated : 05/12/2022+++++
+# Quickstart: Create and deploy Azure Functions resources using Bicep
+
+In this article, you use Bicep to create a function that responds to HTTP requests.
+
+Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
++
+## Prerequisites
+
+### Azure account
+
+Before you begin, you must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/function-app-create-dynamic/).
++
+The following four Azure resources are created by this Bicep file:
+++ [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage account, which is required by Functions.++ [**Microsoft.Web/serverfarms**](/azure/templates/microsoft.web/serverfarms): create a serverless Consumption hosting plan for the function app.++ [**Microsoft.Web/sites**](/azure/templates/microsoft.web/sites): create a function app.++ [**microsoft.insights/components**](/azure/templates/microsoft.insights/components): create an Application Insights instance for monitoring.+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters appInsightsLocation=<app-location>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -appInsightsLocation "<app-location>"
+ ```
+
+
+
+ > [!NOTE]
+    > Replace **\<app-location\>** with the region for Application Insights, which is usually the same region as the resource group.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use Azure CLI or Azure PowerShell to validate the deployment.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Visit function app welcome page
+
+1. Use the output from the previous validation step to retrieve the unique name created for your function app.
+1. Open a browser and enter the following URL: **https://\<appName\>.azurewebsites.net**. Make sure to replace **\<appName\>** with the unique name created for your function app.
+
+When you visit the URL, you should see a page like this:
++
+## Clean up resources
+
+If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place as you'll build on what you've already done.
+
+Otherwise, if you no longer need the resources, use Azure CLI, PowerShell, or Azure portal to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+Now that you've published your first function, learn more by adding an output binding to your function.
+
+# [Visual Studio Code](#tab/visual-studio-code)
+
+> [!div class="nextstepaction"]
+> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md)
+
+# [Visual Studio](#tab/visual-studio)
+
+> [!div class="nextstepaction"]
+> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs.md)
+
+# [Command line](#tab/command-line)
+
+> [!div class="nextstepaction"]
+> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md)
++
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
Title: Develop and run Azure Functions locally
description: Learn how to code and test Azure Functions on your local computer before you run them on Azure Functions. Previously updated : 09/04/2018 Last updated : 05/19/2022 # Code and test Azure Functions locally
-While you're able to develop and test Azure Functions in the [Azure portal], many developers prefer a local development experience. Functions makes it easy to use your favorite code editor and development tools to create and test functions on your local computer. Your local functions can connect to live Azure services, and you can debug them on your local computer using the full Functions runtime.
+While you're able to develop and test Azure Functions in the [Azure portal], many developers prefer a local development experience. Functions makes it easier to use your favorite code editor and development tools to create and test functions on your local computer. Your local functions can connect to live Azure services, and you can debug them on your local computer using the full Functions runtime.
This article provides links to specific development environments for your preferred language. It also provides some shared guidance for local development, such as working with the [local.settings.json file](#local-settings-file).
The way in which you develop functions on your local computer depends on your [l
|[Visual Studio Code](functions-develop-vs-code.md)| [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). | | [Command prompt or terminal](functions-run-local.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. | | [Visual Studio 2019](functions-develop-vs.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio 2019](https://www.visualstudio.com/vs/) and later versions. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
-| [Maven](./create-first-function-cli-java.md) (various) | [Java](functions-reference-java.md) | Maven archetype supports Core Tools to enable development of Java functions. Version 2.x supports development on Linux, macOS, and Windows. To learn more, see [Create your first function with Java and Maven](./create-first-function-cli-java.md). Also supports development using [Eclipse](functions-create-maven-eclipse.md) and [IntelliJ IDEA](functions-create-maven-intellij.md) |
+| [Maven](./create-first-function-cli-java.md) (various) | [Java](functions-reference-java.md) | Maven archetype supports Core Tools to enable development of Java functions. Version 2.x supports development on Linux, macOS, and Windows. To learn more, see [Create your first function with Java and Maven](./create-first-function-cli-java.md). Also supports development using [Eclipse](functions-create-maven-eclipse.md) and [IntelliJ IDEA](functions-create-maven-intellij.md). |
[!INCLUDE [Don't mix development environments](../../includes/functions-mixed-dev-environments.md)]
-Each of these local development environments lets you create function app projects and use predefined Functions templates to create new functions. Each uses the Core Tools so that you can test and debug your functions against the real Functions runtime on your own machine just as you would any other app. You can also publish your function app project from any of these environments to Azure.
+Each of these local development environments lets you create function app projects and use predefined function templates to create new functions. Each uses the Core Tools so that you can test and debug your functions against the real Functions runtime on your own machine just as you would any other app. You can also publish your function app project from any of these environments to Azure.
## Local settings file
These settings are supported when you run projects locally:
| Setting | Description | | | -- | | **`IsEncrypted`** | When this setting is set to `true`, all values are encrypted with a local machine key. Used with `func settings` commands. Default value is `false`. You might want to encrypt the local.settings.json file on your local computer when it contains secrets, such as service connection strings. The host automatically decrypts settings when it runs. Use the `func settings decrypt` command before trying to read locally encrypted settings. |
-| **`Values`** | Collection of application settings used when a project is running locally. These key-value (string-string) pairs correspond to application settings in your function app in Azure, like [`AzureWebJobsStorage`]. Many triggers and bindings have a property that refers to a connection string app setting, like `Connection` for the [Blob storage trigger](functions-bindings-storage-blob-trigger.md#configuration). For these properties, you need an application setting defined in the `Values` array. See the subsequent table for a list of commonly used settings. <br/>Values must be strings and not JSON objects or arrays. Setting names can't include a double underline (`__`) and should not include a colon (`:`). Double underline characters are reserved by the runtime, and the colon is reserved to support [dependency injection](functions-dotnet-dependency-injection.md#working-with-options-and-settings). |
+| **`Values`** | Collection of application settings used when a project is running locally. These key-value (string-string) pairs correspond to application settings in your function app in Azure, like [`AzureWebJobsStorage`]. Many triggers and bindings have a property that refers to a connection string app setting, like `Connection` for the [Blob storage trigger](functions-bindings-storage-blob-trigger.md#configuration). For these properties, you need an application setting defined in the `Values` array. See the subsequent table for a list of commonly used settings. <br/>Values must be strings and not JSON objects or arrays. Setting names can't include a double underline (`__`) and shouldn't include a colon (`:`). Double underline characters are reserved by the runtime, and the colon is reserved to support [dependency injection](functions-dotnet-dependency-injection.md#working-with-options-and-settings). |
| **`Host`** | Settings in this section customize the Functions host process when you run projects locally. These settings are separate from the host.json settings, which also apply when you run projects in Azure. | | **`LocalHttpPort`** | Sets the default port used when running the local Functions host (`func host start` and `func run`). The `--port` command-line option takes precedence over this setting. For example, when running in Visual Studio IDE, you may change the port number by navigating to the "Project Properties -> Debug" window and explicitly specifying the port number in a `host start --port <your-port-number>` command that can be supplied in the "Application Arguments" field. | | **`CORS`** | Defines the origins allowed for [cross-origin resource sharing (CORS)](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing). Origins are supplied as a comma-separated list with no spaces. The wildcard value (\*) is supported, which allows requests from any origin. |
The following application settings can be included in the **`Values`** array whe
| Setting | Values | Description | |--|--|--| |**`AzureWebJobsStorage`**| Storage account connection string, or<br/>`UseDevelopmentStorage=true`| Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the [`AzureWebJobsStorage`] reference.<br/>When you have the [Azurite Emulator](../storage/common/storage-use-azurite.md) installed locally and you set [`AzureWebJobsStorage`] to `UseDevelopmentStorage=true`, Core Tools uses the emulator. The emulator is useful during development, but you should test with an actual storage connection before deployment.|
-|**`AzureWebJobs.<FUNCTION_NAME>.Disabled`**| `true`\|`false` | To disable a function when running locally, add `"AzureWebJobs.<FUNCTION_NAME>.Disabled": "true"` to the collection, where `<FUNCTION_NAME>` is the name of the function. To learn more, see [How to disable functions in Azure Functions](disable-function.md#localsettingsjson) |
+|**`AzureWebJobs.<FUNCTION_NAME>.Disabled`**| `true`\|`false` | To disable a function when running locally, add `"AzureWebJobs.<FUNCTION_NAME>.Disabled": "true"` to the collection, where `<FUNCTION_NAME>` is the name of the function. To learn more, see [How to disable functions in Azure Functions](disable-function.md#localsettingsjson). |
|**`FUNCTIONS_WORKER_RUNTIME`** | `dotnet`<br/>`node`<br/>`java`<br/>`powershell`<br/>`python`| Indicates the targeted language of the Functions runtime. Required for version 2.x and higher of the Functions runtime. This setting is generated for your project by Core Tools. To learn more, see the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) reference.|
-| **`FUNCTIONS_WORKER_RUNTIME_VERSION`** | `~7` |Indicates that PowerShell 7 be used when running locally. If not set, then PowerShell Core 6 is used. This setting is only used when running locally. When running in Azure, the PowerShell runtime version is determined by the `powerShellVersion` site configuration setting, which can be [set in the portal](functions-reference-powershell.md#changing-the-powershell-version). |
+| **`FUNCTIONS_WORKER_RUNTIME_VERSION`** | `~7` | Indicates to use PowerShell 7 when running locally. If not set, then PowerShell Core 6 is used. This setting is only used when running locally. When your app runs in Azure, the PowerShell runtime version is determined by the `powerShellVersion` site configuration setting, which can be [set in the portal](functions-reference-powershell.md#changing-the-powershell-version). |
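For illustration, the following is a minimal local.settings.json sketch that combines the settings described above; the worker runtime, function name, and `Host` values are placeholders to adjust for your own project:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobs.HttpExample.Disabled": "true"
  },
  "Host": {
    "LocalHttpPort": 7071,
    "CORS": "*"
  }
}
```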
## Next steps
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
Title: Develop Azure Functions by using Visual Studio Code
description: Learn how to develop and test Azure Functions by using the Azure Functions extension for Visual Studio Code. ms.devlang: csharp, java, javascript, powershell, python- Previously updated : 02/21/2021+ Last updated : 05/19/2022 #Customer intent: As an Azure Functions developer, I want to understand how Visual Studio Code supports Azure Functions so that I can more efficiently create, publish, and maintain my Functions projects.
Before you install and run the [Azure Functions extension][Azure Functions exten
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-Other resources that you need, like an Azure storage account, are created in your subscription when you [publish by using Visual Studio Code](#publish-to-azure).
+Other resources that you need, like an Azure storage account, are created in your subscription when you [publish by using Visual Studio Code](#publish-to-azure).
### Run local requirements
These prerequisites are only required to [run and debug your functions locally](
# [C\#](#tab/csharp)
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-+ The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
+* The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
-+ [.NET Core CLI tools](/dotnet/core/tools/?tabs=netcore2x).
+* [.NET Core CLI tools](/dotnet/core/tools/?tabs=netcore2x).
# [Java](#tab/java)
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-+ [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug).
+* [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug).
-+ [Java 8](/azure/developer/jav#java-versions).
+* [Java 8](/azure/developer/jav#java-versions).
-+ [Maven 3 or later](https://maven.apache.org/)
+* [Maven 3 or later](https://maven.apache.org/).
# [JavaScript](#tab/nodejs)
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-+ [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.
+* [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.
# [PowerShell](#tab/powershell)
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-+ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
+* [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
-+ Both [.NET Core 3.1 runtime](https://dotnet.microsoft.com/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet/2.1)
+* Both [.NET Core 3.1 runtime](https://dotnet.microsoft.com/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet/2.1).
-+ The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
+* The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
# [Python](#tab/python)
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-+ [Python 3.x](https://www.python.org/downloads/). For version information, see [Python versions](functions-reference-python.md#python-version) by the Azure Functions runtime.
+* [Python 3.x](https://www.python.org/downloads/). For version information, see [Python versions](functions-reference-python.md#python-version) supported by the Azure Functions runtime.
-+ [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
+* [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
The Functions extension lets you create a function app project, along with your
1. From **Azure: Functions**, select the **Create Function** icon:
- ![Create a function](./media/functions-develop-vs-code/create-function.png)
+    :::image type="content" source="./media/functions-develop-vs-code/create-function.png" alt-text="Screenshot of the Create Function icon.":::
1. Select the folder for your function app project, and then **Select a language for your function project**. 1. Select the **HTTP trigger** function template, or you can select **Skip for now** to create a project without a function. You can always [add a function to your project](#add-a-function-to-your-project) later.
- ![Choose the HTTP trigger template](./media/functions-develop-vs-code/create-function-choose-template.png)
+ :::image type="content" source="./media/functions-develop-vs-code/select-http-trigger.png" alt-text="Screenshot for selecting H T T P trigger.":::
1. Type **HttpExample** for the function name and select Enter, and then select **Function** authorization. This authorization level requires you to provide a [function key](functions-bindings-http-webhook-trigger.md#authorization-keys) when you call the function endpoint.
- ![Select Function authorization](./media/functions-develop-vs-code/create-function-auth.png)
+ :::image type="content" source="./media/functions-develop-vs-code/create-function-auth.png" alt-text="Screenshot for creating function authorization.":::
- A function is created in your chosen language and in the template for an HTTP-triggered function.
+1. From the dropdown list, select **Add to workspace**.
- ![HTTP-triggered function template in Visual Studio Code](./media/functions-develop-vs-code/new-function-full.png)
+    :::image type="content" source="./media/functions-develop-vs-code/add-to-workplace.png" alt-text="Screenshot of selecting Add to workspace.":::
+
+1. In the **Do you trust the authors of the files in this folder?** window, select **Yes**.
+
+ :::image type="content" source="./media/functions-develop-vs-code/select-author-file.png" alt-text="Screenshot to confirm trust in authors of the files.":::
+
+1. A function is created in your chosen language and in the template for an HTTP-triggered function.
+
+ :::image type="content" source="./media/functions-develop-vs-code/new-function-created.png" alt-text="Screenshot for H T T P-triggered function template in Visual Studio Code.":::
### Generated project files
Depending on your language, these other files are created:
# [Java](#tab/java)
-+ A pom.xml file in the root folder that defines the project and deployment parameters, including project dependencies and the [Java version](functions-reference-java.md#java-versions). The pom.xml also contains information about the Azure resources that are created during a deployment.
+* A pom.xml file in the root folder that defines the project and deployment parameters, including project dependencies and the [Java version](functions-reference-java.md#java-versions). The pom.xml also contains information about the Azure resources that are created during a deployment.
-+ A [Functions.java file](functions-reference-java.md#triggers-and-annotations) in your src path that implements the function.
+* A [Functions.java file](functions-reference-java.md#triggers-and-annotations) in your src path that implements the function.
# [JavaScript](#tab/nodejs)
Depending on your language, these other files are created:
# [PowerShell](#tab/powershell) * An HttpExample folder that contains the [function.json definition file](functions-reference-powershell.md#folder-structure) and the run.ps1 file, which contains the function code.
-
+ # [Python](#tab/python)
-
+ * A project-level requirements.txt file that lists packages required by Functions.
-
+ * An HttpExample folder that contains the [function.json definition file](functions-reference-python.md#folder-structure) and the \_\_init\_\_.py file, which contains the function code.
-At this point, you can [add input and output bindings](#add-input-and-output-bindings) to your function.
+At this point, you can [add input and output bindings](#add-input-and-output-bindings) to your function.
You can also [add a new function to your project](#add-a-function-to-your-project). ## Install binding extensions
Replace `<TARGET_VERSION>` in the example with a specific version of the package
## Add a function to your project
-You can add a new function to an existing project by using one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
+You can add a new function to an existing project by using one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
The results of this action depend on your project's language:
The `msg` parameter is an `ICollector<T>` type, which represents a collection of
Messages are sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=csharp) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=csharp) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
# [Java](#tab/java)
Update the function method to add the following parameter to the `Run` method de
:::code language="java" source="~/functions-quickstart-java/functions-add-output-binding-storage-queue/src/main/java/com/function/Function.java" range="20-21":::
-The `msg` parameter is an `OutputBinding<T>` type, where is `T` is a string that is written to an output binding when the function completes. The following code sets the message in the output binding:
+The `msg` parameter is an `OutputBinding<T>` type, where `T` is a string that is written to an output binding when the function completes. The following code sets the message in the output binding:
:::code language="java" source="~/functions-quickstart-java/functions-add-output-binding-storage-queue/src/main/java/com/function/Function.java" range="33-34"::: This message is sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=java) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=java).
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=java) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=java).
# [JavaScript](#tab/nodejs)
In your function code, the `msg` binding is accessed from the `context`, as in t
This message is sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=javascript) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=javascript) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
# [PowerShell](#tab/powershell)
To learn more, see the [Queue storage output binding reference article](function
This message is sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=powershell) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=powershell).
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=powershell) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=powershell).
# [Python](#tab/python)
The following code adds string data from the request to the output queue:
This message is sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=python) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=python).
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=python) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=python).
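In the script-based languages (JavaScript, PowerShell, and Python), the queue output binding discussed above is declared in the function's function.json file. The following is a minimal sketch of what that binding entry might look like, assuming a binding named `msg`, a queue named `outqueue`, and the default `AzureWebJobsStorage` connection; adjust these names for your own app:

```json
{
  "type": "queue",
  "direction": "out",
  "name": "msg",
  "queueName": "outqueue",
  "connection": "AzureWebJobsStorage"
}
```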
To learn more, see the [Queue storage output binding reference article](function
Visual Studio Code lets you publish your Functions project directly to Azure. In the process, you create a function app and related resources in your Azure subscription. The function app provides an execution context for your functions. The project is packaged and deployed to the new function app in your Azure subscription.
-When you publish from Visual Studio Code to a new function app in Azure, you can choose either a quick function app create path using defaults or an advanced path, where you have more control over the remote resources created.
+When you publish from Visual Studio Code to a new function app in Azure, you can choose either a quick function app create path that uses defaults, or an advanced path that gives you more control over the remote resources created.
-When you publish from Visual Studio Code, you take advantage of the [Zip deploy](functions-deployment-technologies.md#zip-deploy) technology.
+When you publish from Visual Studio Code, you take advantage of the [Zip deploy](functions-deployment-technologies.md#zip-deploy) technology.
### Quick function app create
The following steps publish your project to a new function app created with adva
1. If you're not signed in, you're prompted to **Sign in to Azure**. You can also **Create a free Azure account**. After signing in from the browser, go back to Visual Studio Code.
-1. If you have multiple subscriptions, **Select a subscription** for the function app, and then select **+ Create New Function App in Azure... _Advanced_**. This _Advanced_ option gives you more control over the resources you create in Azure.
+1. If you have multiple subscriptions, **Select a subscription** for the function app, and then select **+ Create New Function App in Azure... _Advanced_**. This _Advanced_ option gives you more control over the resources you create in Azure.
1. Following the prompts, provide this information:
To call an HTTP-triggered function from a client, you need the URL of the functi
The function URL is copied to the clipboard, along with any required keys passed by the `code` query parameter. Use an HTTP tool to submit POST requests, or a browser for GET requests to the remote function.
-When getting the URL of functions in Azure, the extension uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
+When the extension gets the URL of functions in Azure, it uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
## Republish project files
When you set up [continuous deployment](functions-continuous-deployment.md), you
## Run functions
-The Azure Functions extension lets you run individual functions, either in your project on your local development computer or in your Azure subscription.
+The Azure Functions extension lets you run individual functions. You can run functions either in your project on your local development computer or in your Azure subscription.
For HTTP trigger functions, the extension calls the HTTP endpoint. For other kinds of triggers, it calls administrator APIs to start the function. The message body of the request sent to the function depends on the type of trigger. When a trigger requires test data, you're prompted to enter data in a specific JSON format.
-### Run functions in Azure
+### Run functions in Azure
-To execute a function in Azure from Visual Studio Code.
+To execute a function in Azure from Visual Studio Code:
-1. In the command pallet, enter **Azure Functions: Execute function now** and choose your Azure subscription.
+1. In the command palette, enter **Azure Functions: Execute function now** and choose your Azure subscription.
-1. Choose your function app in Azure from the list. If you don't see your function app, make sure you're signed in to the correct subscription.
+1. Choose your function app in Azure from the list. If you don't see your function app, make sure you're signed in to the correct subscription.
-1. Choose the function you want to run from the list and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** should indicate the format of the body. If your function app has no functions, a notification error is shown with this error.
+1. Choose the function you want to run from the list and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** should indicate the format of the body. If your function app has no functions, an error notification is shown.
1. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
-
+ You can also run your function from the **Azure: Functions** area by right-clicking (Ctrl-clicking on Mac) the function you want to run from your function app in your Azure subscription and choosing **Execute Function Now...**.
-When running functions in Azure, the extension uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
+When you run your functions in Azure from Visual Studio Code, the extension uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
### Run functions locally
-The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the [local.settings.json file](#local-settings). To run your Functions project locally, you must meet [additional requirements](#run-local-requirements).
+The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the [local.settings.json file](#local-settings). To run your Functions project locally, you must meet [more requirements](#run-local-requirements).
#### Configure the project to run locally
For more information, see [Local settings file](#local-settings).
#### <a name="debugging-functions-locally"></a>Debug functions locally
-To debug your functions, select F5. If you haven't already downloaded [Core Tools][Azure Functions Core Tools], you're prompted to do so. When Core Tools is installed and running, output is shown in the Terminal. This is the same as running the `func host start` Core Tools command from the Terminal, but with extra build tasks and an attached debugger.
+To debug your functions, select F5. If you haven't already downloaded [Core Tools][Azure Functions Core Tools], you're prompted to do so. When Core Tools is installed and running, output is shown in the Terminal. This step is the same as running the `func host start` Core Tools command from the Terminal, but with extra build tasks and an attached debugger.
-When the project is running, you can use the **Execute Function Now...** feature of the extension to trigger your functions as you would when the project is deployed to Azure. With the project running in debug mode, breakpoints are hit in Visual Studio Code as you would expect.
+When the project is running, you can use the **Execute Function Now...** feature of the extension to trigger your functions as you would when the project is deployed to Azure. With the project running in debug mode, breakpoints are hit in Visual Studio Code as you would expect.
+1. In the command palette, enter **Azure Functions: Execute function now** and choose **Local project**.
+1. In the command pallet, enter **Azure Functions: Execute function now** and choose **Local project**.
-1. Choose the function you want to run in your project and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** should indicate the format of the body. If your function app has no functions, a notification error is shown with this error.
+1. Choose the function you want to run in your project and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** should indicate the format of the body. If your function app has no functions, an error notification is shown.
1. When the function runs locally and after the response is received, a notification is raised in Visual Studio Code. Information about the function execution is shown in **Terminal** panel.
-Running functions locally doesn't require using keys.
+Running functions locally doesn't require using keys.
[!INCLUDE [functions-local-settings-file](../../includes/functions-local-settings-file.md)]
The settings in the local.settings.json file in your project should be the same
The easiest way to publish the required settings to your function app in Azure is to use the **Upload settings** link that appears after you publish your project:
-![Upload application settings](./media/functions-develop-vs-code/upload-app-settings.png)
You can also publish settings by using the **Azure Functions: Upload Local Setting** command in the command palette. You can add individual settings to application settings in Azure by using the **Azure Functions: Add New Setting** command.
If the local file is encrypted, it's decrypted, published, and encrypted again.
View existing app settings in the **Azure: Functions** area by expanding your subscription, your function app, and **Application Settings**.
-![View function app settings in Visual Studio Code](./media/functions-develop-vs-code/view-app-settings.png)
### Download settings from Azure
When you [run functions locally](#run-functions-locally), log data is streamed t
When you're developing an application, it's often useful to see logging information in near-real time. You can view a stream of log files being generated by your functions. This output is an example of streaming logs for a request to an HTTP-triggered function:
-![Streaming logs output for HTTP trigger](media/functions-develop-vs-code/streaming-logs-vscode-console.png)
To learn more, see [Streaming logs](functions-monitoring.md#streaming-logs).
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
Title: Develop Azure Functions using Visual Studio description: Learn how to develop and test Azure Functions by using Azure Functions Tools for Visual Studio 2019. ms.devlang: csharp-+ Previously updated : 12/10/2020 Last updated : 05/19/2022 # Develop Azure Functions using Visual Studio
Unless otherwise noted, procedures and examples shown are for Visual Studio 2019
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] > [!NOTE]
-> In Visual Studio 2017, the Azure development workload installs Azure Functions Tools as a separate extension. When you update your Visual Studio 2017 installation, make sure that you're using the [most recent version](#check-your-tools-version) of the Azure Functions tools. The following sections show you how to check and (if needed) update your Azure Functions Tools extension in Visual Studio 2017.
+> In Visual Studio 2017, the Azure development workload installs Azure Functions Tools as a separate extension. When you update your Visual Studio 2017 installation, make sure that you're using the [most recent version](#check-your-tools-version) of the Azure Functions Tools. The following sections show you how to check and (if needed) update your Azure Functions Tools extension in Visual Studio 2017.
> > Skip these sections if you're using Visual Studio 2019.
For a full list of the bindings supported by Functions, see [Supported bindings]
## Run functions locally
-Azure Functions Core Tools lets you run Azure Functions project on your local development computer. When you press F5 to debug a Functions project the local Functions host (func.exe) is started listening on a local port (usually 7071). Any callable function endpoints are written to the output, and you can use these for testing your functions. For more information, see [Work with Azure Functions Core Tools](functions-run-local.md). You're prompted to install these tools the first time you start a function from Visual Studio.
+Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. When you press F5 to debug a Functions project, the local Functions host (func.exe) starts listening on a local port (usually 7071). Any callable function endpoints are written to the output, and you can use these endpoints to test your functions. For more information, see [Work with Azure Functions Core Tools](functions-run-local.md). You're prompted to install these tools the first time you start a function from Visual Studio.
To start your function in Visual Studio in debug mode:
For a more detailed testing scenario using Visual Studio, see [Testing functions
## Publish to Azure
-When you publish from Visual Studio, it uses one of two deployment methods:
+When you publish from Visual Studio, it uses one of the two deployment methods:
* [Web Deploy](functions-deployment-technologies.md#web-deploy-msdeploy): Packages and deploys Windows apps to any IIS server. * [Zip Deploy with run-From-package enabled](functions-deployment-technologies.md#zip-deploy): Recommended for Azure Functions deployments.
Use the following steps to publish your project to a function app in Azure.
## Function app settings
-Because Visual Studio doesn't upload these settings automatically when you publish the project, any settings you add in the local.settings.json you must also add to the function app in Azure.
+Visual Studio doesn't upload these settings automatically when you publish the project. Any settings you add in the local.settings.json file must also be added to the function app in Azure.
The easiest way to upload the required settings to your function app in Azure is to select the **Manage Azure App Service settings** link that appears after you successfully publish your project.
To learn more about monitoring using Application Insights, see [Monitor Azure Fu
## Testing functions
-This section describes how to create a C# function app project in Visual Studio and run and tests with [xUnit](https://github.com/xunit/xunit).
+This section describes how to create a C# function app project in Visual Studio and how to run and test it with [xUnit](https://github.com/xunit/xunit).
![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
Now that the projects are created, you can create the classes used to run the au
Each function takes an instance of [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) to handle message logging. Some tests either don't log messages or have no concern for how logging is implemented. Other tests need to evaluate messages logged to determine whether a test is passing.
-You'll create a new class named `ListLogger` which holds an internal list of messages to evaluate during a testing. To implement the required `ILogger` interface, the class needs a scope. The following class mocks a scope for the test cases to pass to the `ListLogger` class.
+You'll create a new class named `ListLogger`, which holds an internal list of messages to evaluate during testing. To implement the required `ILogger` interface, the class needs a scope. The following class mocks a scope for the test cases to pass to the `ListLogger` class.
Create a new class in *Functions.Tests* project named **NullScope.cs** and enter the following code:
The members implemented in this class are:
- **Http_trigger_should_return_string_from_member_data**: This test uses xUnit attributes to provide sample data to the HTTP function. -- **Timer_should_log_message**: This test creates an instance of `ListLogger` and passes it to a timer functions. Once the function is run, then the log is checked to ensure the expected message is present.
+- **Timer_should_log_message**: This test creates an instance of `ListLogger` and passes it to a timer function. Once the function is run, then the log is checked to ensure the expected message is present.
If you want to access application settings in your tests, you can [inject](functions-dotnet-dependency-injection.md) an `IConfiguration` instance with mocked environment variable values into your function. ### Run tests
-To run the tests, navigate to the **Test Explorer** and click **Run all**.
+To run the tests, navigate to the **Test Explorer** and select **Run all**.
![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png) ### Debug tests
-To debug the tests, set a breakpoint on a test, navigate to the **Test Explorer** and click **Run > Debug Last Run**.
+To debug the tests, set a breakpoint on a test, navigate to the **Test Explorer** and select **Run > Debug Last Run**.
## Next steps For more information about the Azure Functions Core Tools, see [Work with Azure Functions Core Tools](functions-run-local.md).
-For more information about developing functions as .NET class libraries, see [Azure Functions C# developer reference](functions-dotnet-class-library.md). This article also links to examples of how to use attributes to declare the various types of bindings supported by Azure Functions.
+For more information about developing functions as .NET class libraries, see [Azure Functions C# developer reference](functions-dotnet-class-library.md). This article also links to examples on how to use attributes to declare the various types of bindings supported by Azure Functions.
azure-functions Functions Event Hub Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-hub-cosmos-db.md
az functionapp create \
--storage-account $STORAGE_ACCOUNT \ --consumption-plan-location $LOCATION \ --runtime java \
- --functions-version 2
+ --functions-version 3
``` # [Cmd](#tab/cmd)
az functionapp create ^
--storage-account %STORAGE_ACCOUNT% ^ --consumption-plan-location %LOCATION% ^ --runtime java ^
- --functions-version 2
+ --functions-version 3
```
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
The Azure Functions Elastic Premium plan is a dynamic scale hosting option for function apps. For other hosting plan options, see the [hosting plan article](functions-scale.md).
->[!IMPORTANT]
->Azure Functions runs on the Azure App Service platform. In the App Service platform, plans that host Premium plan function apps are referred to as *Elastic* Premium plans, with SKU names like `EP1`. If you choose to run your function app on a Premium plan, make sure to create a plan with an SKU name that starts with "E", such as `EP1`. App Service plan SKU names that start with "P", such as `P1V2` (Premium V2 Small plan), are actually [Dedicated hosting plans](dedicated-plan.md). Because they are Dedicated and not Elastic Premium, plans with SKU names starting with "P" won't scale dynamically and may increase your costs.
Premium plan hosting provides the following benefits to your functions:
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Title: Python developer reference for Azure Functions description: Understand how to develop functions with Python Previously updated : 11/4/2020 Last updated : 05/19/2022 ms.devlang: python-+ # Azure Functions Python developer guide
def main(req):
return f'Hello, {user}!' ```
-You can also explicitly declare the attribute types and return type in the function using Python type annotations. This helps you use the intellisense and autocomplete features provided by many Python code editors.
+You can also explicitly declare the attribute types and return type in the function by using Python type annotations. Declaring these types helps you use the IntelliSense and autocomplete features provided by many Python code editors.
```python import azure.functions
The main project folder (<project_root>) can contain the following files:
Each function has its own code file and binding configuration file (function.json).
-When deploying your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself, which means `host.json` should be in the package root. We recommend that you maintain your tests in a folder along with other functions, in this example `tests/`. For more information, see [Unit Testing](#unit-testing).
+When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself, which means `host.json` should be in the package root. We recommend that you maintain your tests in a folder along with other functions, in this example `tests/`. For more information, see [Unit Testing](#unit-testing).
## Import behavior
from . import example #(relative)
> [!NOTE] > The *shared_code/* folder needs to contain an \_\_init\_\_.py file to mark it as a Python package when using absolute import syntax.
-The following \_\_app\_\_ import and beyond top-level relative import are deprecated, since it is not supported by static type checker and not supported by Python test frameworks:
+The following \_\_app\_\_ import and beyond top-level relative import are deprecated, since they aren't supported by static type checkers and aren't supported by Python test frameworks:
```python from __app__.shared_code import my_first_helper_function #(deprecated __app__ import)
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.
### Log custom telemetry
-By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure), which sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
+By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). This extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
>[!NOTE] >To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1`. You also need to switch to using the Application Insights connection string by adding the [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string) setting to your [application settings](functions-how-to-use-azure-function-app-settings.md#settings), if it's not already there.
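The following is a minimal sketch of how the OpenCensus extension can be wired into an HTTP-triggered function. The import path, the `OpenCensusExtension.configure()` call, and the `context.tracer` usage follow the extension's published sample; the traced URL and span name are illustrative placeholders, so verify the details against the current package before relying on them.

```python
import json
import logging

import requests
import azure.functions as func
from opencensus.extension.azure.functions import OpenCensusExtension
from opencensus.trace import config_integration

# Trace outgoing calls made with the requests library.
config_integration.trace_integrations(['requests'])

# Configure the extension once per worker; spans are exported to the
# Application Insights resource connected to the function app.
OpenCensusExtension.configure()


def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    logging.info('Executing HTTP trigger with the OpenCensus extension.')

    # Spans created from context.tracer surface as dependency telemetry.
    with context.tracer.span("call-example-endpoint"):
        response = requests.get(url='https://example.com')  # placeholder URL

    return func.HttpResponse(json.dumps({'status': response.status_code}))
```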
Likewise, you can set the `status_code` and `headers` for the response message i
## Web frameworks
-You can leverage WSGI and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
+You can use WSGI and ASGI-compatible frameworks, such as Flask and FastAPI, with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
First, the function.json file must be updated to include a `route` in the HTTP trigger, as shown in the following example:
The host.json file must also be updated to include an HTTP `routePrefix`, as sho
} ```
-Update the Python code file `init.py`, depending on the interface used by your framework. The following example shows either an ASGI hander approach or a WSGI wrapper approach for Flask:
+Update the Python code file `init.py`, depending on the interface used by your framework. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask:
# [ASGI](#tab/asgi)
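For orientation, here's a minimal sketch of the ASGI handler approach using FastAPI and the `AsgiMiddleware` class from the azure-functions package. The app object, route, and function names are illustrative rather than the article's own tabbed sample.

```python
import azure.functions as func
import fastapi

# A regular FastAPI app; the route below is illustrative only.
app = fastapi.FastAPI()


@app.get("/hello/{name}")
async def get_name(name: str):
    return {"name": name}


def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    # Hand the HTTP-triggered request to the ASGI app through the middleware.
    return func.AsgiMiddleware(app).handle(req, context)
```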
Name of the function.
ID of the current function invocation. `trace_context`
-Context for distributed tracing. Please refer to [`Trace Context`](https://www.w3.org/TR/trace-context/) for more information..
+Context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/).
`retry_context`
-Context for retries to the function. Please refer to [`retry-policies`](./functions-bindings-errors.md#retry-policies-preview) for more information.
+Context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies-preview).
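To make these context attributes concrete, a minimal HTTP-triggered handler could log them as shown below. This is a sketch only: it assumes the parameter is named `context` so the runtime injects the `azure.functions.Context` object, and that a retry policy is configured so `retry_context` is populated.

```python
import logging

import azure.functions as func


def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    # Invocation metadata exposed by the context object.
    logging.info("Function name: %s", context.function_name)
    logging.info("Invocation ID: %s", context.invocation_id)

    # W3C Trace Context value for distributed tracing (mirrors the traceparent header).
    logging.info("Traceparent: %s", context.trace_context.Traceparent)

    # Retry metadata; only meaningful when a retry policy is configured.
    if context.retry_context:
        logging.info("Retry count: %s", context.retry_context.retry_count)

    return func.HttpResponse("OK")
```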
## Global variables
-It is not guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. In order to cache the results of an expensive computation, declare it as a global variable.
+It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. In order to cache the results of an expensive computation, declare it as a global variable.
```python CACHED_DATA = None
Azure Functions supports the following Python versions:
| 3.x | 3.9<br/> 3.8<br/>3.7<br/>3.6 | | 2.x | 3.7<br/>3.6 |
-<sup>*</sup>Official CPython distributions
+<sup>*</sup>Official Python distributions
To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The Functions runtime version is set by the `--functions-version` option. The Python version is set when the function app is created and can't be changed.
-When running locally, the runtime uses the available Python version.
+When you run your functions locally, the runtime uses the available Python version.
### Changing Python version
-To set a Python function app to a specific language version, you need to specify the language as well as the version of the language in `LinuxFxVersion` field in site config. For example, to change Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
-
-To learn more about Azure Functions runtime support policy, please refer to this [article](./language-support-policy.md)
-
-To see the full list of supported Python versions functions apps, please refer to this [article](./supported-languages.md)
+To set a Python function app to a specific language version, you need to specify the language and the version of the language in the `linuxFxVersion` field in the site config. For example, to change a Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
+To learn more about the Azure Functions runtime support policy, see the [language support policy article](./language-support-policy.md).
+To see the full list of Python versions supported by function apps, see the [supported languages article](./supported-languages.md).
# [Azure CLI](#tab/azurecli-linux)
az functionapp config set --name <FUNCTION_APP> \
--linux-fx-version <LINUX_FX_VERSION> ```
-Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<LINUX_FX_VERSION>` with the Python version you want to use, prefixed by `python|` e.g. `python|3.9`
+Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Finally, replace `<LINUX_FX_VERSION>` with the Python version you want to use, prefixed by `python|`, for example `python|3.9`.
You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az-login) to sign in.
pip install -r requirements.txt
## Publishing to Azure
-When you're ready to publish, make sure that all your publicly available dependencies are listed in the requirements.txt file, which is located at the root of your project directory.
+When you're ready to publish, make sure that all your publicly available dependencies are listed in the requirements.txt file. You can locate this file at the root of your project directory.
-Project files and folders that are excluded from publishing, including the virtual environment folder, are listed in the .funcignore file.
+Project files and folders that are excluded from publishing, including the virtual environment folder, are listed in the .funcignore file in the root directory of your project.
There are three build actions supported for publishing your Python project to Azure: remote build, local build, and builds using custom dependencies.
You can also use Azure Pipelines to build your dependencies and publish using co
### Remote build
-When using remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use remote build when developing Python apps on Windows. If your project has custom dependencies, you can [use remote build with extra index URL](#remote-build-with-extra-index-url).
+When you use remote build, dependencies are restored on the server, and native dependencies match the production environment. This results in a smaller deployment package to upload. Use remote build when developing Python apps on Windows. If your project has custom dependencies, you can [use remote build with extra index URL](#remote-build-with-extra-index-url).
Dependencies are obtained remotely based on the contents of the requirements.txt file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, the Azure Functions Core Tools requests a remote build when you use the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish your Python project to Azure.
func azure functionapp publish <APP_NAME> --build local
Remember to replace `<APP_NAME>` with the name of your function app in Azure.
-Using the `--build local` option, project dependencies are read from the requirements.txt file and those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in a larger deployment package being uploaded to Azure. If for some reason, dependencies in your requirements.txt file can't be acquired by Core Tools, you must use the custom dependencies option for publishing.
+When you use the `--build local` option, project dependencies are read from the requirements.txt file and those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in a larger deployment package being uploaded to Azure. If for some reason the dependencies in your requirements.txt file can't be acquired by Core Tools, you must use the custom dependencies option for publishing.
We don't recommend using local builds when developing locally on Windows.
If your project uses packages not publicly available to our tools, you can make
pip install --target="<PROJECT_DIR>/.python_packages/lib/site-packages" -r requirements.txt ```
-When using custom dependencies, you should use the `--no-build` publishing option, since you have already installed the dependencies into the project folder.
+When using custom dependencies, you should use the `--no-build` publishing option, since you've already installed the dependencies into the project folder.
```command func azure functionapp publish <APP_NAME> --no-build
Remember to replace `<APP_NAME>` with the name of your function app in Azure.
## Unit Testing
-Functions written in Python can be tested like other Python code using standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the `azure.functions` package. Since the [`azure.functions`](https://pypi.org/project/azure-functions/) package is not immediately available, be sure to install it via your `requirements.txt` file as described in the [package management](#package-management) section above.
+Functions written in Python can be tested like other Python code using standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the `azure.functions` package. Since the [`azure.functions`](https://pypi.org/project/azure-functions/) package isn't immediately available, be sure to install it via your `requirements.txt` file as described in the [package management](#package-management) section above.
Take *my_second_function* as an example, following is a mock test of an HTTP triggered function:
from os import listdir
filesDirListInTemp = listdir(tempFilePath) ```
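Building on the mock-testing guidance above, the following is a minimal sketch of such a unit test. The `my_second_function` module, its `main` entry point, and the request parameters are illustrative placeholders; adjust them to your own function.

```python
import unittest

import azure.functions as func

# Illustrative import; point this at the module that contains your function's main().
from my_second_function import main


class TestFunction(unittest.TestCase):
    def test_my_second_function(self):
        # Construct a mock HTTP request for the trigger binding.
        req = func.HttpRequest(
            method='GET',
            body=None,
            url='/api/my_second_function',
            params={'value': '21'})

        # Call the function entry point directly with the mock request.
        resp = main(req)

        # Check the output; assert on whatever your function is expected to return.
        self.assertEqual(resp.status_code, 200)
```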
-We recommend that you maintain your tests in a folder separate from the project folder. This keeps you from deploying test code with your app.
+We recommend that you maintain your tests in a folder separate from the project folder. This action keeps you from deploying test code with your app.
## Preinstalled libraries
-There are a few libraries come with the Python Functions runtime.
+There are a few libraries that come with the Python Functions runtime.
### Python Standard Library
-The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On the Unix-based systems, they are provided by package collections.
+The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On the Unix-based systems, they're provided by package collections.
To view the full details of the list of these libraries, see the links below:
Extensions are imported in your function code much like a standard Python librar
Review the information for a given extension to learn more about the scope in which the extension runs.
-Extensions implement a Python worker extension interface that lets the Python worker process call into the extension code during the function execution lifecycle. To learn more, see [Creating extensions](#creating-extensions).
+Extensions implement a Python worker extension interface. This interface lets the Python worker process call into the extension code during the function execution lifecycle. To learn more, see [Creating extensions](#creating-extensions).
### Using extensions
You can use a Python worker extension library in your Python functions by follow
1. Add the extension package in the requirements.txt file for your project.
1. Install the library into your app.
1. Add the application setting `PYTHON_ENABLE_WORKER_EXTENSIONS`:
- + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file)
+ + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file).
+ Azure: add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings).
1. Import the extension module into your function trigger.
1. Configure the extension instance, if needed. Configuration requirements should be called-out in the extension's documentation.
function-level-extension==1.0.0
``` ```python+ # <project_root>/Trigger/__init__.py from function_level_extension import FuncExtension
def main(req, context):
### Creating extensions
-Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
+Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
To learn how to create, package, publish, and consume a Python worker extension package, see [Develop Python worker extensions for Azure Functions](develop-python-worker-extensions.md).
By default, a host instance for Python can process only one function invocation
## <a name="shared-memory"></a>Shared memory (preview)
-To improve throughput, Functions lets your out-of-process Python language worker share memory with the Functions host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
+To improve throughput, Functions lets your out-of-process Python language worker share memory with the Functions host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
For example, you might enable shared memory to reduce bottlenecks when using Blob storage bindings to transfer payloads larger than 1 MB. This functionality is available only for function apps running in Premium and Dedicated (App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory). -
+
## Known issues and FAQ Following is a list of troubleshooting guides for common issues: * [ModuleNotFoundError and ImportError](recover-python-functions.md#troubleshoot-modulenotfounderror)
-* [Cannot import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc)
+* [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc)
All known issues and feature requests are tracked using [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
azure-monitor Profiler Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-containers.md
ms.contributor: charles.weininger Previously updated : 04/25/2022 Last updated : 05/26/2022 # Profile live Azure containers with Application Insights
In this article, you'll learn the various ways you can:
} ```
+1. Enable Application Insights and Profiler in `Startup.cs`:
+
+ ```csharp
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddApplicationInsightsTelemetry(); // Add this line of code to enable Application Insights.
+ services.AddServiceProfiler(); // Add this line of code to Enable Profiler
+ services.AddControllersWithViews();
+ }
+ ```
+ ## Pull the latest ASP.NET Core build/runtime images 1. Navigate to the .NET Core 6.0 example directory.
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-overview.md
ms.contributor: charles.weininger Previously updated : 05/11/2022 Last updated : 05/26/2022
For these metrics, you can get a value of greater than 100% by consuming multipl
## Limitations
-The default data retention period is five days. The maximum data ingested per day is 10 GB.
+The default data retention period is five days.
There are no charges for using the Profiler service. To use it, your web app must be hosted in the basic tier of the Web Apps feature of Azure App Service, at minimum.
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler.md
To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps
:::image type="content" source="./media/profiler/enable-profiler.png" alt-text="Screenshot of enabling Profiler on your app.":::
-## Enable Profiler manually
+## Enable Profiler using app settings
If your Application Insights resource is in a different subscription from your App Service, you'll need to enable Profiler manually by creating app settings for your Azure App Service. You can automate the creation of these settings using a template or other means. The settings needed to enable the profiler:
azure-monitor Container Insights Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-analyze.md
For information about how to enable Container insights, see [Onboard Container i
Azure Monitor provides a multi-cluster view that shows the health status of all monitored Kubernetes clusters running Linux and Windows Server 2019 deployed across resource groups in your subscriptions. It shows clusters discovered across all environments that aren't monitored by the solution. You can immediately understand cluster health, and from here, you can drill down to the node and controller performance page or navigate to see performance charts for the cluster. For AKS clusters that were discovered and identified as unmonitored, you can enable monitoring for them at any time.
-The main differences in monitoring a Windows Server cluster with Container insights compared to a Linux cluster are described [here](container-insights-overview.md#what-does-container-insights-provide) in the overview article.
+The main differences in monitoring a Windows Server cluster with Container insights compared to a Linux cluster are described in [Features of Container insights](container-insights-overview.md#features-of-container-insights) in the overview article.
-## Sign in to the Azure portal
-
-Sign in to the [Azure portal](https://portal.azure.com).
## Multi-cluster view from Azure Monitor
azure-monitor Container Insights Azure Redhat Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-azure-redhat-setup.md
- Title: Configure Azure Red Hat OpenShift v3.x with Container insights | Microsoft Docs
-description: This article describes how to configure monitoring of a Kubernetes cluster with Azure Monitor hosted on Azure Red Hat OpenShift version 3 and higher.
- Previously updated : 06/30/2020--
-# Configure Azure Red Hat OpenShift v3 with Container insights
-
->[!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired June 2022.
->
-> As of October 2020 you will no longer be able to create new 3.11 clusters.
-> Existing 3.11 clusters will continue to operate until June 2022 but will no be longer supported after that date.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](../../openshift/tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:aro-feedback@microsoft.com).
-
-Container insights provides rich monitoring experience for the Azure Kubernetes Service (AKS) and AKS Engine clusters. This article describes how to enable monitoring of Kubernetes clusters hosted on [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) version 3 and latest supported version of version 3, to achieve a similar monitoring experience.
-
->[!NOTE]
->Support for Azure Red Hat OpenShift is a feature in public preview at this time.
->
-
-Container insights can be enabled for new, or one or more existing deployments of Azure Red Hat OpenShift using the following supported methods:
--- For an existing cluster from the Azure portal or using Azure Resource Manager template.-- For a new cluster using Azure Resource Manager template, or while creating a new cluster using the [Azure CLI](/cli/azure/openshift#az-openshift-create).-
-## Supported and unsupported features
-
-Container insights supports monitoring Azure Red Hat OpenShift as described in the [Overview](container-insights-overview.md) article, except for the following features:
--- Live Data (preview)-- [Collect metrics](container-insights-update-metrics.md) from cluster nodes and pods and storing them in the Azure Monitor metrics database-
-## Prerequisites
--- A [Log Analytics workspace](../logs/workspace-design.md).-
- Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). To create your own workspace, it can be created through [Azure Resource Manager](../logs/resource-manager-workspace.md), through [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or in the [Azure portal](../logs/quick-create-workspace.md).
--- To enable and access the features in Container insights, at a minimum you need to be a member of the Azure *Contributor* role in the Azure subscription, and a member of the [*Log Analytics Contributor*](../logs/manage-access.md#azure-rbac) role of the Log Analytics workspace configured with Container insights.--- To view the monitoring data, you are a member of the [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role permission with the Log Analytics workspace configured with Container insights.-
-## Identify your Log Analytics workspace ID
-
- To integrate with an existing Log Analytics workspace, start by identifying the full resource ID of your Log Analytics workspace. The resource ID of the workspace is required for the parameter `workspaceResourceId` when you enable monitoring using the Azure Resource Manager template method.
-
-1. List all the subscriptions that you have access to by running the following command:
-
- ```azurecli
- az account list --all -o table
- ```
-
- The output will look like the following:
-
- ```azurecli
- Name CloudName SubscriptionId State IsDefault
- -- - --
- Microsoft Azure AzureCloud 0fb60ef2-03cc-4290-b595-e71108e8f4ce Enabled True
- ```
-
-1. Copy the value for **SubscriptionId**.
-
-1. Switch to the subscription that hosts the Log Analytics workspace by running the following command:
-
- ```azurecli
- az account set -s <subscriptionId of the workspace>
- ```
-
-1. Display the list of workspaces in your subscriptions in the default JSON format by running the following command:
-
- ```
- az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json
- ```
-
-1. In the output, find the workspace name, and then copy the full resource ID of that Log Analytics workspace under the field **ID**.
-
-## Enable for a new cluster using an Azure Resource Manager template
-
-Perform the following steps to deploy an Azure Red Hat OpenShift cluster with monitoring enabled. Before proceeding, review the tutorial [Create an Azure Red Hat OpenShift cluster](../../openshift/tutorial-create-cluster.md) to understand the dependencies that you need to configure so your environment is set up correctly.
-
-This method includes two JSON templates. One template specifies the configuration to deploy the cluster with monitoring enabled, and the other contains parameter values that you configure to specify the following:
--- The Azure Red Hat OpenShift cluster resource ID.--- The resource group the cluster is deployed in.--- [Azure Active Directory tenant ID](../../openshift/howto-create-tenant.md#create-a-new-azure-ad-tenant) noted after performing the steps to create one or one already created.--- [Azure Active Directory client application ID](../../openshift/howto-aad-app-configuration.md#create-an-azure-ad-app-registration) noted after performing the steps to create one or one already created.--- [Azure Active Directory Client secret](../../openshift/howto-aad-app-configuration.md#create-a-client-secret) noted after performing the steps to create one or one already created.--- [Azure AD security group](../../openshift/howto-aad-app-configuration.md#create-an-azure-ad-security-group) noted after performing the steps to create one or one already created.--- Resource ID of an existing Log Analytics workspace. See [Identify your Log Analytics workspace ID](#identify-your-log-analytics-workspace-id) to learn how to get this information.--- The number of master nodes to create in the cluster.--- The number of compute nodes in the agent pool profile.--- The number of infrastructure nodes in the agent pool profile.-
-If you are unfamiliar with the concept of deploying resources by using a template, see:
--- [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)--- [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)-
-If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.65 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-1. Download and save to a local folder, the Azure Resource Manager template and parameter file, to create a cluster with the monitoring add-on using the following commands:
-
- `curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/scripts/onboarding/aro/enable_monitoring_to_new_cluster/newClusterWithMonitoring.json`
-
- `curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/scripts/onboarding/aro/enable_monitoring_to_new_cluster/newClusterWithMonitoringParam.json`
-
-2. Sign in to Azure
-
- ```azurecli
- az login
- ```
-
- If you have access to multiple subscriptions, run `az account set -s {subscription ID}` replacing `{subscription ID}` with the subscription you want to use.
-
-3. Create a resource group for your cluster if you don't already have one. For a list of Azure regions that supports OpenShift on Azure, see [Supported Regions](../../openshift/supported-resources.md#azure-regions).
-
- ```azurecli
- az group create -g <clusterResourceGroup> -l <location>
- ```
-
-4. Edit the JSON parameter file **newClusterWithMonitoringParam.json** and update the following values:
-
- - *location*
- - *clusterName*
- - *aadTenantId*
- - *aadClientId*
- - *aadClientSecret*
- - *aadCustomerAdminGroupId*
- - *workspaceResourceId*
- - *masterNodeCount*
- - *computeNodeCount*
- - *infraNodeCount*
-
-5. The following step deploys the cluster with monitoring enabled by using the Azure CLI.
-
- ```azurecli
- az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./newClusterWithMonitoring.json --parameters @./newClusterWithMonitoringParam.json
- ```
-
- The output resembles the following:
-
- ```output
- provisioningState : Succeeded
- ```
-
-## Enable for an existing cluster
-
-Perform the following steps to enable monitoring of an Azure Red Hat OpenShift cluster deployed in Azure. You can accomplish this from the Azure portal or using the provided templates.
-
-### From the Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. On the Azure portal menu or from the Home page, select **Azure Monitor**. Under the **Insights** section, select **Containers**.
-
-3. On the **Monitor - containers** page, select **Non-monitored clusters**.
-
-4. From the list of non-monitored clusters, find the cluster in the list and click **Enable**. You can identify the results in the list by looking for the value **ARO** under the column **CLUSTER TYPE**.
-
-5. On the **Onboarding to Container insights** page, if you have an existing Log Analytics workspace in the same subscription as the cluster, select it from the drop-down list.
- The list preselects the default workspace and location that the cluster is deployed to in the subscription.
-
- ![Enable monitoring for non-monitored clusters](./media/container-insights-onboard/kubernetes-onboard-brownfield-01.png)
-
- >[!NOTE]
- >If you want to create a new Log Analytics workspace for storing the monitoring data from the cluster, follow the instructions in [Create a Log Analytics workspace](../logs/quick-create-workspace.md). Be sure to create the workspace in the same subscription that the RedHat OpenShift cluster is deployed to.
-
-After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
-
-### Enable using an Azure Resource Manager template
-
-This method includes two JSON templates. One template specifies the configuration to enable monitoring, and the other contains parameter values that you configure to specify the following:
--- The Azure RedHat OpenShift cluster resource ID.--- The resource group the cluster is deployed in.--- A Log Analytics workspace. See [Identify your Log Analytics workspace ID](#identify-your-log-analytics-workspace-id) to learn how to get this information.-
-If you are unfamiliar with the concept of deploying resources by using a template, see:
--- [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)--- [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)-
-If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.65 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-1. Download the template and parameter file to update your cluster with the monitoring add-on using the following commands:
-
- `curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/scripts/onboarding/aro/enable_monitoring_to_existing_cluster/existingClusterOnboarding.json`
-
- `curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/scripts/onboarding/aro/enable_monitoring_to_existing_cluster/existingClusterParam.json`
-
-2. Sign in to Azure
-
- ```azurecli
- az login
- ```
-
- If you have access to multiple subscriptions, run `az account set -s {subscription ID}` replacing `{subscription ID}` with the subscription you want to use.
-
-3. Specify the subscription of the Azure RedHat OpenShift cluster.
-
- ```azurecli
- az account set --subscription "Subscription Name"
- ```
-
-4. Run the following command to identify the cluster location and resource ID:
-
- ```azurecli
- az openshift show -g <clusterResourceGroup> -n <clusterName>
- ```
-
-5. Edit the JSON parameter file **existingClusterParam.json** and update the values *aroResourceId* and *aroResourceLocation*. The value for **workspaceResourceId** is the full resource ID of your Log Analytics workspace, which includes the workspace name.
-
-6. To deploy with Azure CLI, run the following commands:
-
- ```azurecli
- az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./ExistingClusterOnboarding.json --parameters @./existingClusterParam.json
- ```
-
- The output resembles the following:
-
- ```output
- provisioningState : Succeeded
- ```
-
-## Next steps
--- With monitoring enabled to collect health and resource utilization of your RedHat OpenShift cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.--- By default, the containerized agent collects the stdout/ stderr container logs of all the containers running in all the namespaces except kube-system. To configure container log collection specific to particular namespace or namespaces, review [Container Insights agent configuration](container-insights-agent-config.md) to configure desired data collection settings to your ConfigMap configurations file.--- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md)--- To learn how to stop monitoring your cluster with Container insights, see [How to Stop Monitoring Your Azure Red Hat OpenShift cluster](./container-insights-optout-openshift-v3.md).
azure-monitor Container Insights Azure Redhat4 Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-azure-redhat4-setup.md
- Title: Configure Azure Red Hat OpenShift v4.x with Container insights | Microsoft Docs
-description: This article describes how to configure monitoring for a Kubernetes cluster with Azure Monitor that's hosted on Azure Red Hat OpenShift version 4 or later.
- Previously updated : 03/05/2021--
-# Configure Azure Red Hat OpenShift v4.x with Container insights
-
-Container insights provides a rich monitoring experience for Azure Kubernetes Service (AKS) and AKS engine clusters. This article describes how to achieve a similar monitoring experience by enabling monitoring for Kubernetes clusters that are hosted on [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) version 4.x.
-
->[!NOTE]
-> We are phasing out Container Insights support for Azure Red Hat OpenShift v4.x by May 2022. We recommend customers to migrate Container Insights on Azure Arc enabled Kubernetes, which offers an upgraded experience and 1-click onboarding. For more information, please visit our [documentation](./container-insights-enable-arc-enabled-clusters.md)
->
--
->[!NOTE]
->Support for Azure Red Hat OpenShift is a feature in public preview at this time.
->
-
-You can enable Container insights for one or more existing deployments of Azure Red Hat OpenShift v4.x by using the supported methods described in this article.
-
-For an existing cluster, run this [Bash script in the Azure CLI](/cli/azure/openshift#az-openshift-create&preserve-view=true).
-
-## Supported and unsupported features
-
-Container insights supports monitoring Azure Red Hat OpenShift v4.x as described in [Container insights overview](container-insights-overview.md), except for the following features:
--- Live Data (preview)-- [Collecting metrics](container-insights-update-metrics.md) from cluster nodes and pods and storing them in the Azure Monitor metrics database-
-## Prerequisites
--- The Azure CLI version 2.0.72 or later --- The [Helm 3](https://helm.sh/docs/intro/install/) CLI tool--- Latest version of [OpenShift CLI](https://docs.openshift.com/container-platform/4.7/cli_reference/openshift_cli/getting-started-cli.html)--- [Bash version 4](https://www.gnu.org/software/bash/)--- The [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool--- A [Log Analytics workspace](../logs/workspace-design.md).-
- Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). To create your own workspace, it can be created through [Azure Resource Manager](../logs/resource-manager-workspace.md), through [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or in the [Azure portal](../logs/quick-create-workspace.md).
--- To enable and access the features in Container insights, you need to have, at minimum, an Azure *Contributor* role in the Azure subscription and a [*Log Analytics Contributor*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.--- To view the monitoring data, you need to have [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.-
-## Enable monitoring for an existing cluster
-
-To enable monitoring for an Azure Red Hat OpenShift version 4 or later cluster that's deployed in Azure by using the provided Bash script, do the following:
-
-1. Sign in to Azure by running the following command:
-
- ```azurecli
- az login
- ```
-
-1. Download and save to a local folder the script that configures your cluster with the monitoring add-in by running the following command:
-
- `curl -o enable-monitoring.sh -L https://aka.ms/enable-monitoring-bash-script`
-
-1. Connect to ARO v4 cluster using the instructions in [Tutorial: Connect to an Azure Red Hat OpenShift 4 cluster](../../openshift/tutorial-connect-cluster.md).
--
-### Integrate with an existing workspace
-
-In this section, you enable monitoring of your cluster using the Bash script you downloaded earlier. To integrate with an existing Log Analytics workspace, start by identifying the full resource ID of your Log Analytics workspace that's required for the `logAnalyticsWorkspaceResourceId` parameter, and then run the command to enable the monitoring add-in against the specified workspace.
-
-If you don't have a workspace to specify, you can skip to the [Integrate with the default workspace](#integrate-with-the-default-workspace) section and let the script create a new workspace for you.
-
-1. List all the subscriptions that you have access to by running the following command:
-
- ```azurecli
- az account list --all -o table
- ```
-
- The output will look like the following:
-
- ```azurecli
- Name CloudName SubscriptionId State IsDefault
- -- - --
- Microsoft Azure AzureCloud 0fb60ef2-03cc-4290-b595-e71108e8f4ce Enabled True
- ```
-
-1. Copy the value for **SubscriptionId**.
-
-1. Switch to the subscription that hosts the Log Analytics workspace by running the following command:
-
- ```azurecli
- az account set -s <subscriptionId of the workspace>
- ```
-
-1. Display the list of workspaces in your subscriptions in the default JSON format by running the following command:
-
- ```
- az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json
- ```
-
-1. In the output, find the workspace name, and then copy the full resource ID of that Log Analytics workspace under the field **ID**.
-
-1. To enable monitoring, run the following command. Replace the values for the `azureAroV4ClusterResourceId` and `logAnalyticsWorkspaceResourceId` parameters.
-
- ```bash
- export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>"
- export logAnalyticsWorkspaceResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>"
- ```
-
- Here is the command you must run once you have populated the variables with Export commands:
-
- `bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId --workspace-id $logAnalyticsWorkspaceResourceId`
-
-After you've enabled monitoring, it might take about 15 minutes before you can view the health metrics for the cluster.
-
-### Integrate with the default workspace
-
-In this section, you enable monitoring for your Azure Red Hat OpenShift v4.x cluster by using the Bash script that you downloaded.
-
-In this example, you're not required to pre-create or specify an existing workspace. This command simplifies the process for you by creating a default workspace in the default resource group of the cluster subscription, if one doesn't already exist in the region.
-
-The default workspace that's created is in the format of *DefaultWorkspace-\<GUID>-\<Region>*.
-
-Replace the value for the `azureAroV4ClusterResourceId` parameter.
-
-```bash
-export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>"
-```
-
-For example:
-
-`bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId
-
-After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
-
-### Enable monitoring from the Azure portal
-
-The multi-cluster view in Container insights highlights your Azure Red Hat OpenShift clusters that don't have monitoring enabled under the **Unmonitored clusters** tab. The **Enable** option next to your cluster doesn't initiate onboarding of monitoring from the portal. You're redirected to this article to enable monitoring manually by following the steps that were outlined earlier in this article.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. On the left pane or from the home page, select **Azure Monitor**.
-
-1. In the **Insights** section, select **Containers**.
-
-1. On the **Monitor - containers** page, select **Unmonitored clusters**.
-
-1. In the list of non-monitored clusters, select the cluster, and then select **Enable**.
-
- You can identify the results in the list by looking for the **ARO** value in the **Cluster Type** column. After you select **Enable**, you're redirected to this article.
-
-## Next steps
--- Now that you've enabled monitoring to collect health and resource utilization of your RedHat OpenShift version 4.x cluster and the workloads that are running on them, learn [how to use](container-insights-analyze.md) Container insights.--- By default, the containerized agent collects the *stdout* and *stderr* container logs of all the containers that are running in all the namespaces except kube-system. To configure a container log collection that's specific to a particular namespace or namespaces, review [Container Insights agent configuration](container-insights-agent-config.md) to configure the data collection settings you want for your *ConfigMap* configuration file.--- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md).--- To learn how to stop monitoring your cluster by using Container insights, see [How to stop monitoring your Azure Red Hat OpenShift cluster](./container-insights-optout-openshift-v3.md).
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
Last updated 05/29/2020
This article provides pricing guidance for Container insights to help you understand the following:
-* How to estimate costs up-front before you enable this Insight
-
+* How to estimate costs up-front before you enable Container Insights.
* How to measure costs after Container insights has been enabled for one or more containers
* How to control the collection of data and make cost reductions

Azure Monitor Logs collects, indexes, and stores data generated by your Kubernetes cluster.
The Azure Monitor pricing model is primarily based on the amount of data ingeste
The following is a summary of the types of data collected from a Kubernetes cluster with Container insights that influence cost and can be customized based on your usage:

- Stdout, stderr container logs from every monitored container in every Kubernetes namespace in the cluster
- Container environment variables from every monitored container in the cluster
- Completed Kubernetes jobs/pods in the cluster that don't require monitoring
- Active scraping of Prometheus metrics
- [Diagnostic log collection](../../aks/monitor-aks.md#configure-monitoring) of Kubernetes master node logs in your AKS cluster to analyze log data generated by master components such as the *kube-apiserver* and *kube-controller-manager*.

## What is collected from Kubernetes clusters
azure-monitor Container Insights Enable Aks Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks-policy.md
Last updated 02/04/2021
# Enable AKS monitoring addon using Azure Policy
-This article describes how to enable AKS Monitoring Addon using Azure Custom Policy. Monitoring Addon Custom Policy can be assigned either at subscription or resource group scope. If Azure Log Analytics workspace and AKS cluster are in different subscriptions then the managed identity used by the policy assignment has to have the required role permissions on both the subscriptions or least on the resource of the Log Analytics workspace. Similarly, if the policy is scoped to the resource group, then the managed identity should have the required role permissions on the Log Analytics workspace if the workspace not in the selected resource group scope.
+This article describes how to enable AKS Monitoring Addon using Azure Custom Policy.
+## Permissions required
The Monitoring Addon requires the following roles on the managed identity used by Azure Policy:

- [azure-kubernetes-service-contributor-role](../../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role)
- [log-analytics-contributor](../../role-based-access-control/built-in-roles.md#log-analytics-contributor)
+Monitoring Addon Custom Policy can be assigned at either the subscription or resource group scope. If the Log Analytics workspace and AKS cluster are in different subscriptions, then the managed identity used by the policy assignment must have the required role permissions on both subscriptions, or at least on the Log Analytics workspace resource. Similarly, if the policy is scoped to the resource group, then the managed identity must have the required role permissions on the Log Analytics workspace if the workspace isn't in the selected resource group scope.
++ ## Create and assign policy definition using Azure portal ### Create policy definition
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
Title: "Monitor Azure Arc-enabled Kubernetes clusters" Previously updated : 04/05/2021
+ Title: Monitor Azure Arc-enabled Kubernetes clusters
Last updated : 05/24/2022
-description: "Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor"
+description: Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor.
# Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters
description: "Collect metrics and logs of Azure Arc-enabled Kubernetes clusters
## Prerequisites -- You've met the pre-requisites listed under the [generic cluster extensions documentation](../../azure-arc/kubernetes/extensions.md#prerequisites).-- A Log Analytics workspace: Azure Monitor Container Insights supports a Log Analytics workspace in the regions listed under Azure [products by region page](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md), or [Azure portal](../logs/quick-create-workspace.md).-- You need to have [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then [Log Analytics Contributor](../logs/manage-access.md#azure-rbac) role assignment is needed on the Log Analytics workspace.
+- The prerequisites listed under the [generic cluster extensions documentation](../../azure-arc/kubernetes/extensions.md#prerequisites).
+- A Log Analytics workspace. Azure Monitor Container Insights supports a Log Analytics workspace in the regions listed under the Azure [products by region page](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace using [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md), or the [Azure portal](../logs/quick-create-workspace.md).
+- [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then [Log Analytics Contributor](../logs/manage-access.md#azure-rbac) role assignment is needed on the Log Analytics workspace.
- To view the monitoring data, you need to have [Log Analytics Reader](../logs/manage-access.md#azure-rbac) role assignment on the Log Analytics workspace. - The following endpoints need to be enabled for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
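With these prerequisites in place, the extension-based onboarding that this article goes on to describe is driven by `az k8s-extension create`. The following is a minimal sketch with placeholder values; the `--configuration-settings` key shown is an assumption about how the workspace is passed and can be omitted to let a default workspace be used.

```azurecli
# Sketch: enable Container insights on an Azure Arc-enabled Kubernetes cluster.
# Cluster name, resource group, and workspace resource ID are placeholders.
az k8s-extension create \
  --name azuremonitor-containers \
  --cluster-name <arc-cluster-name> \
  --resource-group <resource-group> \
  --cluster-type connectedClusters \
  --extension-type Microsoft.AzureMonitor.Containers \
  --configuration-settings logAnalyticsWorkspaceResourceID=<workspace-resource-id>
```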
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
Title: Monitor an Azure Kubernetes Service (AKS) cluster deployed | Microsoft Docs description: Learn how to enable monitoring of an Azure Kubernetes Service (AKS) cluster with Container insights already deployed in your subscription. Previously updated : 09/12/2019 Last updated : 05/24/2022 # Enable monitoring of Azure Kubernetes Service (AKS) cluster already deployed- This article describes how to set up Container insights to monitor managed Kubernetes clusters hosted on [Azure Kubernetes Service](../../aks/index.yml) that have already been deployed in your subscription.
-You can enable monitoring of an AKS cluster that's already deployed using one of the supported methods:
-
-* Azure CLI
-* [Terraform](#enable-using-terraform)
-* [From Azure Monitor](#enable-from-azure-monitor-in-the-portal) or [directly from the AKS cluster](#enable-directly-from-aks-cluster-in-the-portal) in the Azure portal
-* With the [provided Azure Resource Manager template](#enable-using-an-azure-resource-manager-template) by using the Azure PowerShell cmdlet `New-AzResourceGroupDeployment` or with Azure CLI.
- If you're connecting an existing AKS cluster to an Azure Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription in which the Log Analytics workspace was created. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
-## Sign in to the Azure portal
-
-Sign in to the [Azure portal](https://portal.azure.com).
- ## Enable using Azure CLI The following step enables monitoring of your AKS cluster using Azure CLI. In this example, you are not required to pre-create or specify an existing workspace. This command simplifies the process for you by creating a default workspace in the default resource group of the AKS cluster subscription if one does not already exist in the region. The default workspace created resembles the format of *DefaultWorkspace-\<GUID>-\<Region>*.
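A minimal sketch of that step, assuming placeholder cluster and resource group names:

```azurecli
# Sketch: enable the monitoring addon on an existing AKS cluster. A default
# workspace is created if one doesn't already exist in the region.
az aks enable-addons --addons monitoring --name <aks-cluster-name> --resource-group <resource-group>
```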
azure-monitor Container Insights Enable New Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-new-cluster.md
Title: Monitor a new Azure Kubernetes Service (AKS) cluster | Microsoft Docs description: Learn how to enable monitoring for a new Azure Kubernetes Service (AKS) cluster with Container insights subscription. Previously updated : 04/25/2019 Last updated : 05/24/2022 ms.devlang: azurecli
ms.devlang: azurecli
This article describes how to set up Container insights to monitor a managed Kubernetes cluster hosted on [Azure Kubernetes Service](../../aks/index.yml) that you are preparing to deploy in your subscription.
-You can enable monitoring of an AKS cluster using one of the supported methods:
-
-* Azure CLI
-* Terraform
## Enable using Azure CLI
To enable monitoring of a new AKS cluster created with Azure CLI, follow the ste
## Enable using Terraform
-If you are [deploying a new AKS cluster using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://www.terraform.io/docs/providers/azurerm/r/log_analytics_workspace.html) if you do not chose to specify an existing one.
+If you are [deploying a new AKS cluster using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you do not choose to specify an existing one.
>[!NOTE] >If you choose to use Terraform, you must be running the Terraform Azure RM Provider version 1.17.0 or above.
-To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://www.terraform.io/docs/providers/azurerm/r/log_analytics_solution.html) and complete the profile by including the [**addon_profile**](https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html#addon_profile) and specify **oms_agent**.
+To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) and complete the profile by including the [**addon_profile**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specify **oms_agent**.
After you've enabled monitoring and all configuration tasks are completed successfully, you can monitor the performance of your cluster in either of two ways:
azure-monitor Container Insights Gpu Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-gpu-monitoring.md
Title: Configure GPU monitoring with Container insights | Microsoft Docs
+ Title: Configure GPU monitoring with Container insights
description: This article describes how you can configure monitoring Kubernetes clusters with NVIDIA and AMD GPU enabled nodes with Container insights. Previously updated : 03/27/2020 Last updated : 05/24/2022 # Configure GPU monitoring with Container insights
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
The following configurations are officially supported with Container insights. I
Before you start, make sure that you have the following: -- A [Log Analytics workspace](../logs/workspace-design.md).-
- Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). To create your own workspace, it can be created through [Azure Resource Manager](../logs/resource-manager-workspace.md), through [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or in the [Azure portal](../logs/quick-create-workspace.md).
+- [Log Analytics workspace](../logs/design-logs-deployment.md). Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or the [Azure portal](../logs/quick-create-workspace.md) (a CLI sketch follows the note below).
>[!NOTE] >Enabling monitoring of multiple clusters with the same cluster name to the same Log Analytics workspace is not supported. Cluster names must be unique.
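In addition to the creation options listed above, a workspace can also be created with the Azure CLI; a minimal sketch with placeholder names:

```azurecli
# Sketch: create a Log Analytics workspace with the Azure CLI.
# Resource group, workspace name, and region are placeholders.
az monitor log-analytics workspace create \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --location <region>
```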
azure-monitor Container Insights Livedata Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-metrics.md
Title: View metrics in real-time with Container insights | Microsoft Docs
+ Title: View metrics in real-time with Container insights
description: This article describes the real-time view of metrics without using kubectl with Container insights. Previously updated : 10/15/2019 Last updated : 05/24/2022
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
Title: View Live Data with Container insights | Microsoft Docs
+ Title: View Live Data with Container insights
description: This article describes the real-time view of Kubernetes logs, events, and pod metrics without using kubectl in Container insights. Previously updated : 03/04/2021 Last updated : 05/24/2022
Container insights includes the Live Data feature, which is an advanced diagnost
This article provides a detailed overview and helps you understand how to use this feature.
-For help setting up or troubleshooting the Live Data feature, review our [setup guide](container-insights-livedata-setup.md). This feature directly access the Kubernetes API, and additional information about the authentication model can be found [here](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
+For help setting up or troubleshooting the Live Data feature, review our [setup guide](container-insights-livedata-setup.md). This feature directly accesses the Kubernetes API, and additional information about the authentication model can be found [here](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
## View AKS resource live logs Use the following procedure to view the live logs for pods, deployments, and replica sets with or without Container insights from the AKS resource view.
The pane title shows the name of the Pod the container is grouped with.
### Filter events
-While viewing events, you can additionally limit the results using the **Filter** pill found to the right of the search bar. Depending on what resource you have selected, the pill lists a Pod, Namespace, or cluster to chose from.
+While viewing events, you can additionally limit the results using the **Filter** pill found to the right of the search bar. Depending on what resource you have selected, the pill lists a Pod, Namespace, or cluster to choose from.
## View metrics
The Live Data feature includes search functionality. In the **Search** field, yo
### Scroll Lock and Pause
-To suspend autoscroll and control the behavior of the pane, allowing you to manually scroll through the new data read, you can use the **Scroll** option. To re-enable autoscroll, simply select the **Scroll** option again. You can also pause retrieval of log or event data by selecting the the **Pause** option, and when you are ready to resume, simply select **Play**.
+To suspend autoscroll and control the behavior of the pane, allowing you to manually scroll through the new data read, you can use the **Scroll** option. To re-enable autoscroll, simply select the **Scroll** option again. You can also pause retrieval of log or event data by selecting the **Pause** option, and when you are ready to resume, simply select **Play**.
![Live Data console pane pause live view](./media/container-insights-livedata-overview/livedata-pane-scroll-pause-example.png)
azure-monitor Container Insights Livedata Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-setup.md
Title: Set up Container insights Live Data (preview) | Microsoft Docs
+ Title: Configure live data in Container insights
description: This article describes how to set up the real-time view of container logs (stdout/stderr) and events without using kubectl with Container insights. Previously updated : 01/08/2020 Last updated : 05/24/2022
-# How to set up the Live Data (preview) feature
+# How to configure Live Data in Container insights
-To view Live Data (preview) with Container insights from Azure Kubernetes Service (AKS) clusters, you need to configure authentication to grant permission to access to your Kubernetes data. This security configuration allows real-time access to your data through the Kubernetes API directly in the Azure portal.
+To view Live Data with Container insights from Azure Kubernetes Service (AKS) clusters, you need to configure authentication to grant permission to access to your Kubernetes data. This security configuration allows real-time access to your data through the Kubernetes API directly in the Azure portal.
This feature supports the following methods to control access to the logs, events, and metrics:
This feature supports the following methods to control access to the logs, event
These instructions require administrative access to your Kubernetes cluster and, if you're configuring Azure Active Directory (AD) for user authentication, administrative access to Azure AD.
-This article explains how to configure authentication to control access to the Live Data (preview) feature from the cluster:
+This article explains how to configure authentication to control access to the Live Data feature from the cluster:
- Kubernetes role-based access control (Kubernetes RBAC) enabled AKS cluster - Azure Active Directory integrated AKS cluster.
This article explains how to configure authentication to control access to the L
## Authentication model
-The Live Data (preview) features utilizes the Kubernetes API, identical to the `kubectl` command-line tool. The Kubernetes API endpoints utilize a self-signed certificate, which your browser will be unable to validate. This feature utilizes an internal proxy to validate the certificate with the AKS service, ensuring the traffic is trusted.
+The Live Data feature utilizes the Kubernetes API, identical to the `kubectl` command-line tool. The Kubernetes API endpoints utilize a self-signed certificate, which your browser will be unable to validate. This feature utilizes an internal proxy to validate the certificate with the AKS service, ensuring the traffic is trusted.
The Azure portal prompts you to validate your login credentials for an Azure Active Directory cluster, and redirects you to the client registration setup during cluster creation (and re-configured in this article). This behavior is similar to the authentication process required by `kubectl`.
The Azure portal prompts you to validate your login credentials for an Azure Act
## Using clusterMonitoringUser with Kubernetes RBAC-enabled clusters
-To eliminate the need to apply additional configuration changes to allow the Kubernetes user role binding **clusterUser** access to the Live Data (preview) feature after [enabling Kubernetes RBAC](#configure-kubernetes-rbac-authorization) authorization, AKS has added a new Kubernetes cluster role binding called **clusterMonitoringUser**. This cluster role binding has all the necessary permissions out-of-the-box to access the Kubernetes API and the endpoints for utilizing the Live Data (preview) feature.
+To eliminate the need to apply additional configuration changes to allow the Kubernetes user role binding **clusterUser** access to the Live Data feature after [enabling Kubernetes RBAC](#configure-kubernetes-rbac-authorization) authorization, AKS has added a new Kubernetes cluster role binding called **clusterMonitoringUser**. This cluster role binding has all the necessary permissions out-of-the-box to access the Kubernetes API and the endpoints for utilizing the Live Data feature.
-In order to utilize the Live Data (preview) feature with this new user, you need to be a member of the [Azure Kubernetes Service Cluster User](../../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role on the AKS cluster resource. Container insights, when enabled, is configured to authenticate using the clusterMonitoringUser by default. If the clusterMonitoringUser role binding does not exist on a cluster, **clusterUser** is used for authentication instead. Contributor gives you access to the clusterMonitoringUser (if it exists) and Azure Kuberenetes Service Cluster User gives you access to the clusterUser. Any of these two roles give sufficient access to use this feature.
+In order to utilize the Live Data feature with this new user, you need to be a member of the [Azure Kubernetes Service Cluster User](../../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role on the AKS cluster resource. Container insights, when enabled, is configured to authenticate using the clusterMonitoringUser by default. If the clusterMonitoringUser role binding does not exist on a cluster, **clusterUser** is used for authentication instead. Contributor gives you access to the clusterMonitoringUser (if it exists) and Azure Kubernetes Service Cluster User gives you access to the clusterUser. Either of these two roles gives sufficient access to use this feature.
AKS released this new role binding in January 2020, so clusters created before January 2020 do not have it. If you have a cluster that was created before January 2020, the new **clusterMonitoringUser** can be added to an existing cluster by performing a PUT operation on the cluster, or any other operation that results in a PUT operation on the cluster, such as updating the cluster version.
For more information on advanced security setup in Kubernetes, review the [Kuber
## Grant permission
-Each Azure AD account must be granted permission to the appropriate APIs in Kubernetes in order to access the Live Data (preview) feature. The steps to grant the Azure Active Directory account are similar to the steps described in the [Kubernetes RBAC authentication](#configure-kubernetes-rbac-authorization) section. Before applying the yaml configuration template to your cluster, replace **clusterUser** under **ClusterRoleBinding** with the desired user.
+Each Azure AD account must be granted permission to the appropriate APIs in Kubernetes in order to access the Live Data feature. The steps to grant the Azure Active Directory account are similar to the steps described in the [Kubernetes RBAC authentication](#configure-kubernetes-rbac-authorization) section. Before applying the yaml configuration template to your cluster, replace **clusterUser** under **ClusterRoleBinding** with the desired user.
>[!IMPORTANT] >If the user you grant the Kubernetes RBAC binding for is in the same Azure AD tenant, assign permissions based on the userPrincipalName. If the user is in a different Azure AD tenant, query for and use the objectId property.
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Title: Metric alerts from Container insights description: This article reviews the recommended metric alerts available from Container insights in public preview. Previously updated : 10/28/2020 Last updated : 05/24/2022
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Title: Enable Container insights | Microsoft Docs
+ Title: Enable Container insights
description: This article describes how to enable and configure Container insights so that you can understand how your container is performing and what performance-related issues have been identified. Previously updated : 06/30/2020- Last updated : 05/24/2022 # Enable Container insights
+This article provides an overview of the requirements and options that are available for configuring Container insights to monitor the performance of workloads that are deployed to Kubernetes environments. You can enable Container insights for a new deployment or for one or more existing deployments of Kubernetes by using a number of supported methods.
-This article provides an overview of the options that are available for setting up Container insights to monitor the performance of workloads that are deployed to Kubernetes environments and hosted on:
+## Supported configurations
+Container insights supports the following environments:
- [Azure Kubernetes Service (AKS)](../../aks/index.yml) - [Azure Arc-enabled Kubernetes cluster](../../azure-arc/kubernetes/overview.md)
This article provides an overview of the options that are available for setting
- [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) version 4.x - [Red Hat OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/https://docsupdatetracker.net/index.html) version 4.x
-You can enable Container insights for a new deployment or for one or more existing deployments of Kubernetes by using any of the following supported methods:
--- The Azure portal-- Azure PowerShell-- The Azure CLI-- [Terraform and AKS](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks)-
-For any non-AKS kubernetes cluster, you will need to first connect your cluster to [Azure Arc](../../azure-arc/kubernetes/overview.md) before enabling monitoring.
+## Supported Kubernetes versions
+The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).
## Prerequisites- Before you start, make sure that you've met the following requirements:
-> [!IMPORTANT]
-> Log Analytics Containerized Linux Agent (replicaset pod) makes API calls to all the Windows nodes on Kubelet Secure Port (10250) within the cluster to collect Node and Container Performance related Metrics.
-Kubelet secure port (:10250) should be opened in the cluster's virtual network for both inbound and outbound for Windows Node and container performance related metrics collection to work.
->
-> If you have a Kubernetes cluster with Windows nodes, then please review and configure the Network Security Group and Network Policies to make sure the Kubelet secure port (:10250) is opened for both inbound and outbound in cluster's virtual network.
-
+**Log Analytics workspace**
+Container insights supports a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) in the regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). For a list of the supported mapping pairs to use for the default workspace, see [Region mappings supported by Container insights](container-insights-region-mapping.md).
-- You have a Log Analytics workspace.
+You can let the onboarding experience create a default workspace in the default resource group of the AKS cluster subscription. If you already have a workspace though, then you will most likely want to use that one. See [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) for details.
- Container insights supports a Log Analytics workspace in the regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor).
+An AKS cluster can be attached to a Log Analytics workspace in a different Azure subscription in the same Azure AD tenant. This can't currently be done with the Azure portal, but it can be done with the Azure CLI or a Resource Manager template.
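A sketch of the cross-subscription attach with the Azure CLI; all values are placeholders, and the workspace is identified by its full resource ID:

```azurecli
# Sketch: attach an existing AKS cluster to a workspace in another subscription.
az aks enable-addons --addons monitoring \
  --name <aks-cluster-name> \
  --resource-group <cluster-resource-group> \
  --workspace-resource-id "/subscriptions/<workspace-subscription-id>/resourceGroups/<workspace-resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```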
- You can create a workspace when you enable monitoring for your new AKS cluster, or you can let the onboarding experience create a default workspace in the default resource group of the AKS cluster subscription.
-
- If you choose to create the workspace yourself, you can create it through:
- - [Azure Resource Manager](../logs/resource-manager-workspace.md)
- - [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json)
- - [The Azure portal](../logs/quick-create-workspace.md)
-
- For a list of the supported mapping pairs to use for the default workspace, see [Region mapping for Container insights](container-insights-region-mapping.md).
-- You are a member of the *Log Analytics contributor* group for enabling container monitoring. For more information about how to control access to a Log Analytics workspace, see [Manage workspaces](../logs/manage-access.md).
+**Permissions**
+To enable container monitoring, you require the following permissions:
-- You are a member of the [*Owner* group](../../role-based-access-control/built-in-roles.md#owner) on the AKS cluster resource.
+- Member of the [Log Analytics contributor](../logs/manage-access.md#azure-rbac) role.
+- Member of the [*Owner* group](../../role-based-access-control/built-in-roles.md#owner) on any AKS cluster resources.
- [!INCLUDE [log-analytics-agent-note](../../../includes/log-analytics-agent-note.md)]
+To view data after container monitoring is enabled, you require the following permissions:
-- To view the monitoring data, you need to have [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.
+- Member of [Log Analytics reader](../logs/manage-access.md#azure-rbac) role if you aren't already a member of [Log Analytics contributor](../logs/manage-access.md#azure-rbac).
-- Prometheus metrics aren't collected by default. Before you [configure the agent](container-insights-prometheus-integration.md) to collect the metrics, it's important to review the [Prometheus documentation](https://prometheus.io/) to understand what data can be scraped and what methods are supported.-- An AKS cluster can be attached to a Log Analytics workspace in a different Azure subscription in the same Azure AD Tenant. This cannot currently be done with the Azure Portal, but can be done with Azure CLI or Resource Manager template.
+**Prometheus**
+Prometheus metrics aren't collected by default. Before you [configure the agent](container-insights-prometheus-integration.md) to collect the metrics, it's important to review the [Prometheus documentation](https://prometheus.io/) to understand what data can be scraped and what methods are supported.
-## Supported configurations
+**Kubelet secure port**
+The Log Analytics containerized Linux agent (replicaset pod) makes API calls to all the Windows nodes on the Kubelet secure port (10250) within the cluster to collect node and container performance-related metrics. The Kubelet secure port (:10250) should be open for both inbound and outbound traffic in the cluster's virtual network for Windows node and container performance metrics collection to work.
-Container insights officially supports the following configurations:
+If you have a Kubernetes cluster with Windows nodes, review and configure the network security group and network policies to make sure the Kubelet secure port (:10250) is open for both inbound and outbound traffic in the cluster's virtual network.
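As an illustration only, an NSG rule pair for port 10250 might look like the following sketch; the NSG name, rule names, and priorities are placeholders and should follow your own network design rather than be taken as the required configuration.

```azurecli
# Sketch: allow the Kubelet secure port (10250) inbound and outbound in the
# cluster's virtual network. All names and priorities are placeholders.
az network nsg rule create --resource-group <resource-group> --nsg-name <cluster-nsg> \
  --name AllowKubeletSecurePortIn --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 10250 --priority 200

az network nsg rule create --resource-group <resource-group> --nsg-name <cluster-nsg> \
  --name AllowKubeletSecurePortOut --direction Outbound --access Allow --protocol Tcp \
  --destination-port-ranges 10250 --priority 210
```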
-- Environments: Azure Red Hat OpenShift, Kubernetes on-premises, and the AKS engine on Azure and Azure Stack. For more information, see [the AKS engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).-- The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).-- We recommend connecting your cluster to [Azure Arc](../../azure-arc/kubernetes/overview.md) and enabling monitoring through Container Insights via Azure Arc.
-> [!IMPORTANT]
-> Please note that the monitoring add-on is not currently supported for AKS clusters configured with the [HTTP Proxy (preview)](../../aks/http-proxy.md)
## Network firewall requirements
The following table lists the proxy and firewall configuration information for A
| `*.oms.opinsights.azure.us` | 443 | OMS onboarding | | `dc.services.visualstudio.com` | 443 | For agent telemetry that uses Azure Public Cloud Application Insights |
-## Components
+## Agent
+Container insights relies on a containerized Log Analytics agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
-Your ability to monitor performance relies on a containerized Log Analytics agent for Linux that's specifically developed for Container insights. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
+The agent version is *microsoft/oms:ciprod04202018* or later, and it's represented by a date in the following format: *mmddyyyy*. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS). To track which versions are released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
-The agent version is microsoft/oms:ciprod04202018 or later, and it's represented by a date in the following format: *mmddyyyy*.
>[!NOTE] >With the general availability of Windows Server support for AKS, an AKS cluster with Windows Server nodes has a preview agent installed as a daemonset pod on each individual Windows server node to collect logs and forward it to Log Analytics. For performance metrics, a Linux node that's automatically deployed in the cluster as part of the standard deployment collects and forwards the data to Azure Monitor on behalf all Windows nodes in the cluster.
-When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS). To track which versions are released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
> [!NOTE]
-> If you've already deployed an AKS cluster, you've enabled monitoring by using either the Azure CLI or a provided Azure Resource Manager template, as demonstrated later in this article. You can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent.
->
-> The template needs to be deployed in the same resource group as the cluster.
+> If you've already deployed an AKS cluster and enabled monitoring using either the Azure CLI or an Azure Resource Manager template, you can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent. The template needs to be deployed in the same resource group as the cluster.
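To check which agent image (and therefore which *ciprod mmddyyyy* version) a cluster is currently running, one option is to query the agent workload with `kubectl`. This sketch assumes the agent daemonset uses the default `omsagent` name in the `kube-system` namespace, which may differ on your cluster.

```bash
# Sketch: print the image used by the Container insights agent daemonset.
# Assumes the default "omsagent" name in the kube-system namespace.
kubectl get daemonset omsagent --namespace kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```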
+## Installation options
To enable Container insights, use one of the methods described in the following table:
-| Deployment state | Method | Description |
-||--|-|
-| New Kubernetes cluster | [Create an AKS cluster by using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md)| You can enable monitoring for a new AKS cluster that you create by using the Azure CLI. |
-| | [Create an AKS cluster by using Terraform](container-insights-enable-new-cluster.md#enable-using-terraform)| You can enable monitoring for a new AKS cluster that you create by using the open-source tool Terraform. |
-| | [Create an OpenShift cluster by using an Azure Resource Manager template](container-insights-azure-redhat-setup.md#enable-for-a-new-cluster-using-an-azure-resource-manager-template) | You can enable monitoring for a new OpenShift cluster that you create by using a preconfigured Azure Resource Manager template. |
-| | [Create an OpenShift cluster by using the Azure CLI](/cli/azure/openshift#az-openshift-create) | You can enable monitoring when you deploy a new OpenShift cluster by using the Azure CLI. |
-| Existing AKS cluster | [Enable monitoring of an AKS cluster by using the Azure CLI](container-insights-enable-existing-clusters.md#enable-using-azure-cli) | You can enable monitoring for an AKS cluster that's already deployed by using the Azure CLI. |
-| |[Enable for AKS cluster using Terraform](container-insights-enable-existing-clusters.md#enable-using-terraform) | You can enable monitoring for an AKS cluster that's already deployed by using the open-source tool Terraform. |
-| | [Enable for AKS cluster from Azure Monitor](container-insights-enable-existing-clusters.md#enable-from-azure-monitor-in-the-portal)| You can enable monitoring for one or more AKS clusters that are already deployed from the multi-cluster page in Azure Monitor. |
-| | [Enable from AKS cluster](container-insights-enable-existing-clusters.md#enable-directly-from-aks-cluster-in-the-portal)| You can enable monitoring directly from an AKS cluster in the Azure portal. |
-| | [Enable for AKS cluster using an Azure Resource Manager template](container-insights-enable-existing-clusters.md#enable-using-an-azure-resource-manager-template)| You can enable monitoring for an AKS cluster by using a preconfigured Azure Resource Manager template. |
-| Existing non-AKS Kubernetes cluster | [Enable for non-AKS Kubernetes cluster by using the Azure CLI](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-cli). | You can enable monitoring for your Kubernetes clusters that are hosted outside of Azure and enabled with Azure Arc, this includes hybrid, OpenShift, and multi-cloud using Azure CLI. |
-| | [Enable for non-AKS Kubernetes cluster using an Azure Resource Manager template](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-resource-manager) | You can enable monitoring for your clusters enabled with Arc by using a preconfigured Azure Resource Manager template. |
-| | [Enable for non-AKS Kubernetes cluster from Azure Monitor](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-portal) | You can enable monitoring for one or more clusters enabled with Arc that are already deployed from the multicluster page in Azure Monitor. |
+| Deployment state | Method |
+||--|
+| New Kubernetes cluster | [Enable monitoring for a new AKS cluster using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md)|
+| | [Enable for a new AKS cluster by using the open-source tool Terraform](container-insights-enable-new-cluster.md#enable-using-terraform)|
+| | [Enable for a new OpenShift cluster by using an Azure Resource Manager template](container-insights-azure-redhat-setup.md#enable-for-a-new-cluster-using-an-azure-resource-manager-template) |
+| | [Enable for a new OpenShift cluster by using the Azure CLI](/cli/azure/openshift#az-openshift-create) |
+| Existing AKS cluster | [Enable monitoring for an existing AKS cluster using the Azure CLI](container-insights-enable-existing-clusters.md#enable-using-azure-cli) |
+| |[Enable for an existing AKS cluster using Terraform](container-insights-enable-existing-clusters.md#enable-using-terraform) |
+| | [Enable for an existing AKS cluster from Azure Monitor](container-insights-enable-existing-clusters.md#enable-from-azure-monitor-in-the-portal)|
+| | [Enable directly from an AKS cluster in the Azure portal](container-insights-enable-existing-clusters.md#enable-directly-from-aks-cluster-in-the-portal)|
+| | [Enable for AKS cluster using an Azure Resource Manager template](container-insights-enable-existing-clusters.md#enable-using-an-azure-resource-manager-template)|
+| Existing non-AKS Kubernetes cluster | [Enable for non-AKS Kubernetes cluster hosted outside of Azure and enabled with Azure Arc using the Azure CLI](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-cli). |
+| | [Enable for non-AKS Kubernetes cluster hosted outside of Azure and enabled with Azure Arc using a preconfigured Azure Resource Manager template](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-resource-manager) |
+| | [Enable for non-AKS Kubernetes cluster hosted outside of Azure and enabled with Azure Arc from the multicluster page in Azure Monitor](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-portal) |
## Next steps
+Once you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment. To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
-Now that you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment. To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
azure-monitor Container Insights Optout Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-hybrid.md
Title: How to stop monitoring your hybrid Kubernetes cluster | Microsoft Docs description: This article describes how you can stop monitoring of your hybrid Kubernetes cluster with Container insights. Previously updated : 06/16/2020 Last updated : 05/24/2022
azure-monitor Container Insights Optout Openshift V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v3.md
Title: How to stop monitoring your Azure Red Hat OpenShift v3 cluster | Microsoft Docs description: This article describes how you can stop monitoring of your Azure Red Hat OpenShift cluster with Container insights. Previously updated : 04/24/2020 Last updated : 05/24/2022
azure-monitor Container Insights Optout Openshift V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v4.md
Title: How to stop monitoring your Azure and Red Hat OpenShift v4 cluster | Microsoft Docs description: This article describes how you can stop monitoring of your Azure Red Hat OpenShift and Red Hat OpenShift version 4 cluster with Container insights. Previously updated : 04/24/2020 Last updated : 05/24/2022
azure-monitor Container Insights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout.md
Title: How to Stop Monitoring Your Azure Kubernetes Service cluster | Microsoft Docs description: This article describes how you can discontinue monitoring of your Azure AKS cluster with Container insights. Previously updated : 08/19/2019 Last updated : 05/24/2022 ms.devlang: azurecli
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Title: Overview of Container insights | Microsoft Docs description: This article describes Container insights that monitors AKS Container Insights solution and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure. + Last updated 09/08/2020
Container insights is a feature designed to monitor the performance of container
- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine) - [Azure Container Instances](../../container-instances/container-instances-overview.md) - Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises-- [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) - [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) (preview) Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Docker, Moby, and any CRI compatible runtime such as CRI-O and ContainerD. Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications.
-Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are written to the metrics store and log data is written to the logs store associated with your [Log Analytics](../logs/log-query-overview.md) workspace.
+Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md), and log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
-![Container insights architecture](./media/container-insights-overview/azmon-containers-architecture-01.png)
-## What does Container insights provide?
+## Features of Container insights
-Container insights delivers a comprehensive monitoring experience using different features of Azure Monitor. These features enable you to understand the performance and health of your Kubernetes cluster running Linux and Windows Server 2019 operating system, and the container workloads. With Container insights you can:
+Container insights delivers a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads.
-* Identify AKS containers that are running on the node and their average processor and memory utilization. This knowledge can help you identify resource bottlenecks.
-* Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances.
-* Identify where the container resides in a controller or a pod. This knowledge can help you view the controller's or pod's overall performance.
-* Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod.
-* Understand the behavior of the cluster under average and heaviest loads. This knowledge can help you identify capacity needs and determine the maximum load that the cluster can sustain.
-* Configure alerts to proactively notify you or record it when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.
-* Integrate with [Prometheus](https://prometheus.io/docs/introduction/overview/) to view application and workload metrics it collects from nodes and Kubernetes using [queries](container-insights-log-query.md) to create custom alerts, dashboards, and perform detailed analysis.
-* Monitor container workloads [deployed to AKS Engine](https://github.com/Azure/aks-engine) on-premises and [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).
-* Monitor container workloads [deployed to Azure Red Hat OpenShift](../../openshift/intro-openshift.md).
+- Identify resource bottlenecks by identifying AKS containers running on the node and their average processor and memory utilization.
+- Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances.
+- View the controller's or pod's overall performance by identifying where the container resides in a controller or a pod.
+- Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod.
+- Identify capacity needs and determine the maximum load that the cluster can sustain by understanding the behavior of the cluster under average and heaviest loads.
+- Configure alerts to proactively notify you or record it when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.
+- Integrate with [Prometheus](https://prometheus.io/docs/introduction/overview/) to view application and workload metrics it collects from nodes and Kubernetes using [queries](container-insights-log-query.md) to create custom alerts, dashboards, and perform detailed analysis.
+- Monitor container workloads [deployed to AKS Engine](https://github.com/Azure/aks-engine) on-premises and [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).
+- Monitor container workloads [deployed to Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
- >[!NOTE]
- >Support for Azure Red Hat OpenShift is a feature in public preview at this time.
- >
-* Monitor container workloads [deployed to Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
-The main differences in monitoring a Windows Server cluster compared to a Linux cluster are the following:
+Check out the following video providing an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. Note that the video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.
-- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows node and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.-- Disk storage capacity information isn't available for Windows nodes.-- Only pod environments are monitored, not Docker environments.-- With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.-
-Check out the following video providing an intermediate level deep dive to help you learn about monitoring your AKS cluster with Container insights.
+[!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
-> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
-## How do I access this feature?
-You can access Container insights two ways, from Azure Monitor or directly from the selected AKS cluster. From Azure Monitor, you have a global perspective of all the containers deployed, which are monitored and which are not, allowing you to search and filter across your subscriptions and resource groups, and then drill into Container insights from the selected container. Otherwise, you can access the feature directly from a selected AKS container from the AKS page.
+## How to access Container insights
+Access Container insights in the Azure portal from Azure Monitor or directly from the selected AKS cluster. The Azure Monitor menu gives you the global perspective of all the containers deployed and which of them are monitored, allowing you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
![Overview of methods to access Container insights](./media/container-insights-overview/azmon-containers-experience.png) +
+## Differences between Windows and Linux clusters
+The main differences in monitoring a Windows Server cluster compared to a Linux cluster include the following:
+
+- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows node and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
+- Disk storage capacity information isn't available for Windows nodes.
+- Only pod environments are monitored, not Docker environments.
+- With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.
+ ## Next steps To begin monitoring your Kubernetes cluster, review [How to enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-persistent-volumes.md
Title: Configure PV monitoring with Container insights | Microsoft Docs description: This article describes how you can configure monitoring Kubernetes clusters with persistent volumes with Container insights. Previously updated : 03/03/2021 Last updated : 05/24/2022 # Configure PV monitoring with Container insights
azure-monitor Container Insights Region Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-region-mapping.md
When enabling Container insights, only certain regions are supported for linking a Log Analytics workspace and an AKS cluster, and collecting custom metrics submitted to Azure Monitor. ## Log Analytics workspace supported mappings- Supported AKS regions are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service). The Log Analytics workspace must be in the same region except for the regions listed in the following table. Watch [AKS release notes](https://github.com/Azure/AKS/releases) for updates.
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
Title: Reports in Container insights description: Describes reports available to analyze data collected by Container insights. Previously updated : 03/02/2021 Last updated : 05/24/2022 # Reports in Container insights
azure-monitor Container Insights Transition Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-solution.md
Title: "Transition from the Container Monitoring Solution to using Container Insights"
+ Title: Transition from the Container Monitoring Solution to using Container Insights
Last updated 1/18/2022
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
Title: How to Troubleshoot Container insights | Microsoft Docs description: This article describes how you can troubleshoot and resolve issues with Container insights. Previously updated : 03/25/2021 Last updated : 05/24/2022
You can also manually grant this role from the Azure portal by performing the fo
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). ## Container insights is enabled but not reporting any information-
-If Container insights is successfully enabled and configured, but you cannot view status information or no results are returned from a log query, you diagnose the problem by following these steps:
+Use the following steps to diagnose the problem if you can't view status information or no results are returned from a log query:
1. Check the status of the agent by running the command:
If Container insights is successfully enabled and configured, but you cannot vie
omsagent 1 1 1 1 3h ```
-4. Check the status of the pod to verify that it is running using the command: `kubectl get pods --namespace=kube-system`
+4. Check the status of the pod to verify that it's running using the command: `kubectl get pods --namespace=kube-system`
The output should resemble the following example with a status of *Running* for the omsagent:
The table below summarizes known errors you may encounter while using Container
| Error messages | Action | | - | | | Error Message `No data for selected filters` | It may take some time to establish monitoring data flow for newly created clusters. Allow at least 10 to 15 minutes for data to appear for your cluster. |
-| Error Message `Error retrieving data` | While Azure Kubernetes Service cluster is setting up for health and performance monitoring, a connection is established between the cluster and Azure Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error may occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted and if it was, you will need to re-enable monitoring of your cluster with Container insights and specify an existing or create a new workspace. To re-enable, you will need to [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. |
-| `Error retrieving data` after adding Container insights through az aks cli | When enable monitoring using `az aks cli`, Container insights may not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Solutions** from the pane on the left-hand side. To resolve this issue, you will need to redeploy the solution by following the instructions on [how to deploy Container insights](container-insights-onboard.md) |
+| Error Message `Error retrieving data` | While Azure Kubernetes Service cluster is setting up for health and performance monitoring, a connection is established between the cluster and Azure Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error may occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted. If it was, you'll need to re-enable monitoring of your cluster with Container insights and either specify an existing workspace or create a new one. To re-enable, you'll need to [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. |
+| `Error retrieving data` after adding Container insights through az aks cli | When you enable monitoring using `az aks cli`, Container insights may not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Solutions** from the pane on the left-hand side. To resolve this issue, you'll need to redeploy the solution by following the instructions on [how to deploy Container insights](container-insights-onboard.md) |
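The disable and re-enable cycle referenced in the `Error retrieving data` row can be sketched as follows; the cluster, resource group, and workspace values are placeholders.

```azurecli
# Sketch: re-enable monitoring against an existing or newly created workspace
# after the original workspace was deleted.
az aks disable-addons --addons monitoring --name <aks-cluster-name> --resource-group <resource-group>
az aks enable-addons --addons monitoring --name <aks-cluster-name> --resource-group <resource-group> \
  --workspace-resource-id <workspace-resource-id>
```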
-To help diagnose the problem, we have provided a [troubleshooting script](https://github.com/microsoft/Docker-Provider/tree/ci_dev/scripts/troubleshoot).
+To help diagnose the problem, we've provided a [troubleshooting script](https://github.com/microsoft/Docker-Provider/tree/ci_dev/scripts/troubleshoot).
-## Container insights agent ReplicaSet Pods are not scheduled on non-Azure Kubernetes cluster
+## Container insights agent ReplicaSet Pods aren't scheduled on non-Azure Kubernetes cluster
Container insights agent ReplicaSet pods have a dependency on the following node selectors on the worker (or agent) nodes for scheduling:
If your worker nodes don't have node labels attached, then agent ReplicaSet Po
## Performance charts don't show CPU or memory of nodes and containers on a non-Azure cluster
-Container insights agent Pods uses the cAdvisor endpoint on the node agent to gather the performance metrics. Verify the containerized agent on the node is configured to allow `cAdvisor port: 10255` to be opened on all nodes in the cluster to collect performance metrics.
+Container insights agent pods use the cAdvisor endpoint on the node agent to gather the performance metrics. Verify the containerized agent on the node is configured to allow `cAdvisor port: 10255` to be opened on all nodes in the cluster to collect performance metrics.
-## Non-Azure Kubernetes cluster are not showing in Container insights
+## Non-Azure Kubernetes clusters aren't showing in Container insights
To view the non-Azure Kubernetes cluster in Container insights, Read access is required on the Log Analytics workspace supporting this Insight and on the Container Insights solution resource **ContainerInsights (*workspace*)**.
To view the non-Azure Kubernetes cluster in Container insights, Read access is r
``` azurecli az role assignment list --assignee "SP/UserassignedMSI for omsagent" --scope "/subscriptions/<subid>/resourcegroups/<RG>/providers/Microsoft.ContainerService/managedClusters/<clustername>" --role "Monitoring Metrics Publisher" ```
- For clusters with MSI, the user assigned client id for omsagent changes every time monitoring is enabled and disabled, so the role assignment should exist on the current msi client id.
+ For clusters with MSI, the user assigned client ID for omsagent changes every time monitoring is enabled and disabled, so the role assignment should exist on the current MSI client ID.
3. For clusters with Azure Active Directory pod identity enabled and using MSI:
To view the non-Azure Kubernetes cluster in Container insights, Read access is r
``` ## Installation of Azure Monitor Containers Extension fails with an error containing "manifests contain a resource that already exists" on Azure Arc-enabled Kubernetes cluster
-The error _manifests contain a resource that already exists_ indicates that resources of the Container Insights agent already exist on the Azure Arc Enabled Kubernetes cluster. This indicates that the container insights agent is already installed either through azuremonitor-containers HELM chart or Monitoring Addon if it is AKS Cluster which is connected Azure Arc. The solution to this issue, is to clean up the existing resources of container insights agent if it exists and then enable Azure Monitor Containers Extension.
+The error _manifests contain a resource that already exists_ indicates that resources of the Container insights agent already exist on the Azure Arc Enabled Kubernetes cluster. This means the agent is already installed, either through the azuremonitor-containers Helm chart or through the monitoring add-on if it's an AKS cluster that's connected to Azure Arc. To resolve this issue, clean up the existing Container insights agent resources if they exist, and then enable the Azure Monitor Containers Extension.
### For non-AKS clusters
-1. Against the K8s cluster which is connected to Azure Arc, run below command to verify whether the azmon-containers-release-1 helm chart release exists or not:
+1. Against the Kubernetes cluster that's connected to Azure Arc, run the following command to verify whether the azmon-containers-release-1 Helm chart release exists:
`helm list -A`
The error _manifests contain a resource that already exists_ indicates that reso
`helm del azmon-containers-release-1` ### For AKS clusters
-1. Run below commands and look for omsagent addon profile to verify the AKS monitoring addon enabled or not:
+1. Run the following commands and look for the omsagent addon profile to verify whether the AKS monitoring addon is enabled:
``` az account set -s <clusterSubscriptionId> az aks show -g <clusterResourceGroup> -n <clusterName> ```
-2. If there is omsagent addon profile config with log analytics workspace resource Id in the output of the above command indicates that, AKS Monitoring addon enabled and which needs to be disabled:
+2. If the output includes an omsagent addon profile config with a Log Analytics workspace resource ID, the AKS monitoring addon is enabled and needs to be disabled:
`az aks disable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName>`
-If above steps didnΓÇÖt resolve the installation of Azure Monitor Containers Extension issues, please create a ticket to Microsoft for further investigation.
+If the above steps didn't resolve the installation issues for the Azure Monitor Containers Extension, create a support ticket with Microsoft for further investigation.
## Next steps
azure-monitor Azure Cli Application Insights Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-cli-application-insights-component.md
Last updated 09/10/2012--
+ms.tool: azure-cli
# Manage Application Insights components by using Azure CLI
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups
**Example**
-This example configures the `ContainerLog` table for Basic Logs.
+This example configures the `ContainerLogV2` table for Basic Logs.
+
+Container insights uses the `ContainerLog` table by default. To switch to `ContainerLogV2`, follow these [instructions](../containers/container-insights-logging-v2.md) before you convert the table to Basic Logs.
**Sample request** ```http
-PATCH https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLog?api-version=2021-12-01-preview
+PATCH https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLogV2?api-version=2021-12-01-preview
``` Use this request body to change to Basic Logs:
Status code: 200
"schema": {...} }, "id": "subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace",
- "name": "ContainerLog"
+ "name": "ContainerLogV2"
} ```
For example:
- To set Basic Logs: ```azurecli
- az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLog --plan Basic
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Basic
``` - To set Analytics Logs: ```azurecli
- az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLog --plan Analytics
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Analytics
```
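After either update, you can confirm the table's current plan from the CLI. This is a sketch that assumes the `show` subcommand and reuses the same sample names; check the `plan` property in the output.

```azurecli
# Sketch: show the table and confirm its current plan (Basic or Analytics).
az monitor log-analytics workspace table show --subscription ContosoSID \
  --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLogV2
```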
GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{
**Sample Request** ```http
-GET https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLog?api-version=2021-12-01-preview
+GET https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLogV2?api-version=2021-12-01-preview
```
Status code: 200
"provisioningState": "Succeeded" }, "id": "subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace",
- "name": "ContainerLog"
+ "name": "ContainerLogV2"
} ```
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
na Previously updated : 03/15/2022 Last updated : 05/27/2022 # Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes
Azure NetApp Files supports fetching of extended groups from the LDAP name servi
When it's determined that LDAP will be used for operations such as name lookup and fetching extended groups, the following process occurs: 1. Azure NetApp Files uses an LDAP client configuration to make a connection attempt to the ADDS/AADDS LDAP server that is specified in the [Azure NetApp Files AD configuration](create-active-directory-connections.md).
-1. If the TCP connection over the defined ADDS/AADDS LDAP service port is successful, then the Azure NetApp Files LDAP client attempts to ΓÇ£bindΓÇ¥ (log in) to the ADDS/AADDS LDAP server (domain controller) by using the defined credentials in the LDAP client configuration.
+1. If the TCP connection over the defined ADDS/AADDS LDAP service port is successful, then the Azure NetApp Files LDAP client attempts to "bind" (sign in) to the ADDS/AADDS LDAP server (domain controller) by using the defined credentials in the LDAP client configuration.
1. If the bind is successful, then the Azure NetApp Files LDAP client uses the RFC 2307bis LDAP schema to make an LDAP search query to the ADDS/AADDS LDAP server (domain controller). The following information is passed to the server in the query: * [Base/user DN](configure-ldap-extended-groups.md#ldap-search-scope) (to narrow search scope)
The following information is passed to the server in the query:
![Screenshot that shows Create a Volume page with LDAP option.](../media/azure-netapp-files/create-nfs-ldap.png) 7. Optional - You can enable local NFS client users not present on the Windows LDAP server to access an NFS volume that has LDAP with extended groups enabled. To do so, enable the **Allow local NFS users with LDAP** option as follows:
- 1. Click **Active Directory connections**. On an existing Active Directory connection, click the context menu (the three dots `…`), and select **Edit**.
+ 1. Select **Active Directory connections**. On an existing Active Directory connection, select the context menu (the three dots `…`), and select **Edit**.
2. On the **Edit Active Directory settings** window that appears, select the **Allow local NFS users with LDAP** option. ![Screenshot that shows the Allow local NFS users with LDAP option](../media/azure-netapp-files/allow-local-nfs-users-with-ldap.png)
The following information is passed to the server in the query:
* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) * [Create and manage Active Directory connections](create-active-directory-connections.md) * [Configure NFSv4.1 domain](azure-netapp-files-configure-nfsv41-domain.md#configure-nfsv41-domain)
+* [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md)
* [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md) * [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md)
azure-netapp-files Configure Nfs Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-nfs-clients.md
na Previously updated : 09/22/2021 Last updated : 05/27/2022 # Configure an NFS client for Azure NetApp Files
-The NFS client configuration described in this article is part of the setup when you [configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) or [create a dual-protocol volume](create-volumes-dual-protocol.md). A wide variety of Linux distributions are available to use with Azure NetApp Files. This article describes configurations for two of the more commonly used environments: RHEL 8 and Ubuntu 18.04.
+The NFS client configuration described in this article is part of the setup when you [configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md), [create a dual-protocol volume](create-volumes-dual-protocol.md), or use [NFSv3/NFSv4.1 with LDAP](configure-ldap-extended-groups.md). A wide variety of Linux distributions are available to use with Azure NetApp Files. This article describes configurations for two of the more commonly used environments: RHEL 8 and Ubuntu 18.04.
## Requirements and considerations
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Azure enables you to create and manage support requests, also known as support t
> The Azure portal URL is specific to the Azure cloud where your organization is deployed. > >- Azure portal for commercial use is: [https://portal.azure.com](https://portal.azure.com)
->- Azure portal for Germany is: [https://portal.microsoftazure.de](https://portal.microsoftazure.de)
>- Azure portal for the United States government is: [https://portal.azure.us](https://portal.azure.us) Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. For technical support, you need a support plan. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans).
Follow these links to learn more:
* [Azure support ticket REST API](/rest/api/support) * Engage with us on [Twitter](https://twitter.com/azuresupport) * Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure)
-* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
+* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization description: Shows how to apply tags to organize Azure resources for billing and managing. Previously updated : 05/03/2022 Last updated : 05/25/2022 # Use tags to organize your Azure resources and management hierarchy
-You apply tags to your Azure resources, resource groups, and subscriptions to logically organize them by values that make sense for your organization. Each tag consists of a name and a value pair. For example, you can apply the name _Environment_ and the value _Production_ to all the resources in production.
+Tags are metadata elements that you apply to your Azure resources. They're key-value pairs that help you identify resources based on settings that are relevant to your organization. If you want to track the deployment environment for your resources, add a key named Environment. To identify the resources deployed to production, give them a value of Production. Fully formed, the key-value pair becomes Environment = Production.
+
+You can apply tags to your Azure resources, resource groups, and subscriptions.
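As a hedged illustration of that Environment = Production pair with Azure CLI (the subscription and resource group names are placeholders, and note that `az tag create` replaces any tags already on the entity, as described later in this article):

```azurecli
# Illustration only: apply Environment=Production to a resource group.
az tag create \
  --resource-id /subscriptions/<subscription-id>/resourceGroups/<resource-group> \
  --tags Environment=Production
```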
For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
Resource tags support all cost-accruing services. To ensure that cost-accruing s
> Tags are stored as plain text. Never add sensitive values to tags. Sensitive values could be exposed through many methods, including cost reports, commands that return existing tag definitions, deployment histories, exported templates, and monitoring logs. > [!IMPORTANT]
-> Tag names are case-insensitive for operations. A tag with a tag name, regardless of casing, is updated or retrieved. However, the resource provider might keep the casing you provide for the tag name. You'll see that casing in cost reports.
+> Tag names are case-insensitive for operations. A tag with a tag name, regardless of the casing, is updated or retrieved. However, the resource provider might keep the casing you provide for the tag name. You'll see that casing in cost reports.
> > Tag values are case-sensitive.
Resource tags support all cost-accruing services. To ensure that cost-accruing s
There are two ways to get the required access to tag resources. -- You can have write access to the `Microsoft.Resources/tags` resource type. This access lets you tag any resource, even if you don't have access to the resource itself. The [Tag Contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) role grants this access. Currently, the tag contributor role can't apply tags to resources or resource groups through the portal. It can apply tags to subscriptions through the portal. It supports all tag operations through PowerShell and REST API.
+- You can have write access to the `Microsoft.Resources/tags` resource type. This access lets you tag any resource, even if you don't have access to the resource itself. The [Tag Contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) role grants this access (see the example after this list). The Tag Contributor role, however, can't apply tags to resources or resource groups through the portal. It can apply tags to subscriptions through the portal, and it supports all tag operations through Azure PowerShell and the REST API.
-- You can have write access to the resource itself. The [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role grants the required access to apply tags to any entity. To apply tags to only one resource type, use the contributor role for that resource. For example, to apply tags to virtual machines, use the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
+- You can have write access to the resource itself. The [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role grants the required access to apply tags to any entity. To apply tags to only one resource type, use the contributor role for that resource. To apply tags to virtual machines, for example, use the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
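For example, to grant the first kind of access (the Tag Contributor role) at subscription scope with Azure CLI, a sketch with placeholder values:

```azurecli
# Sketch: assign the Tag Contributor role at subscription scope.
# <assignee> can be a user, group, or service principal; <subscription-id> is a placeholder.
az role assignment create \
  --assignee <assignee> \
  --role "Tag Contributor" \
  --scope /subscriptions/<subscription-id>
```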
## PowerShell ### Apply tags
-Azure PowerShell offers two commands for applying tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You must have the `Az.Resources` module 1.12.0 or later. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) 3.6.1 or later.
+Azure PowerShell offers two commands to apply tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You need the `Az.Resources` module version 1.12.0 or later. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) version 3.6.1 or later.
-The `New-AzTag` replaces all tags on the resource, resource group, or subscription. When calling the command, pass in the resource ID of the entity you wish to tag.
+The `New-AzTag` replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
The following example applies a set of tags to a storage account:
Properties :
Status Normal ```
-If you run the command again but this time with different tags, notice that the earlier tags are removed.
+If you run the command again, but this time with different tags, notice that the earlier tags disappear.
```azurepowershell-interactive $tags = @{"Team"="Compliance"; "Environment"="Production"}
$tags = @{"Dept"="Finance"; "Status"="Normal"}
Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Merge ```
-Notice that the two new tags were added to the two existing tags.
+Notice that the two new tags are added to the existing tags.
```output Properties :
Properties :
Environment Production ```
-Each tag name can have only one value. If you provide a new value for a tag, the old value is replaced even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
+Each tag name can have only one value. If you provide a new value for a tag, it replaces the old value even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
```azurepowershell-interactive $tags = @{"Status"="Green"}
Properties :
Environment Production ```
-When you set the `-Operation` parameter to `Replace`, the existing tags are replaced by the new set of tags.
+When you set the `-Operation` parameter to `Replace`, the new set of tags replaces the existing tags.
```azurepowershell-interactive $tags = @{"Project"="ECommerce"; "CostCenter"="00123"; "Team"="Web"}
Properties :
Project ECommerce ```
-The same commands also work with resource groups or subscriptions. You pass in the identifier for the resource group or subscription you want to tag.
+The same commands also work with resource groups or subscriptions. Pass in the identifier of the resource group or subscription you want to tag.
To add a new set of tags to a resource group, use:
$resource | ForEach-Object { Update-AzTag -Tag @{ "Dept"="IT"; "Environment"="Te
### List tags
-To get the tags for a resource, resource group, or subscription, use the [Get-AzTag](/powershell/module/az.resources/get-aztag) command and pass in the resource ID for the entity.
+To get the tags for a resource, resource group, or subscription, use the [Get-AzTag](/powershell/module/az.resources/get-aztag) command and pass the resource ID of the entity.
To see the tags for a resource, use:
To get resource groups that have a specific tag name and value, use:
### Remove tags
-To remove specific tags, use `Update-AzTag` and set `-Operation` to `Delete`. Pass in the tags you want to delete.
+To remove specific tags, use `Update-AzTag` and set `-Operation` to `Delete`. Pass in the tags that you want to delete.
```azurepowershell-interactive $removeTags = @{"Project"="ECommerce"; "Team"="Web"}
Remove-AzTag -ResourceId "/subscriptions/$subscription"
### Apply tags
-Azure CLI offers two commands for applying tags: [az tag create](/cli/azure/tag#az-tag-create) and [az tag update](/cli/azure/tag#az-tag-update). You must have Azure CLI 2.10.0 or later. You can check your version with `az version`. To update or install, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+Azure CLI offers two commands to apply tags: [az tag create](/cli/azure/tag#az-tag-create) and [az tag update](/cli/azure/tag#az-tag-update). You need Azure CLI version 2.10.0 or later. You can check your version with `az version`. To update or install it, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-The `az tag create` replaces all tags on the resource, resource group, or subscription. When calling the command, pass in the resource ID of the entity you wish to tag.
+The `az tag create` replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
The following example applies a set of tags to a storage account:
When the command completes, notice that the resource has two tags.
}, ```
-If you run the command again but this time with different tags, notice that the earlier tags are removed.
+If you run the command again, but this time with different tags, notice that the earlier tags disappear.
```azurecli-interactive az tag create --resource-id $resource --tags Team=Compliance Environment=Production
To add tags to a resource that already has tags, use `az tag update`. Set the `-
az tag update --resource-id $resource --operation Merge --tags Dept=Finance Status=Normal ```
-Notice that the two new tags were added to the two existing tags.
+Notice that the two new tags are added to the existing tags.
```output "properties": {
Notice that the two new tags were added to the two existing tags.
}, ```
-Each tag name can have only one value. If you provide a new value for a tag, the old value is replaced even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
+Each tag name can have only one value. If you provide a new value for a tag, the new value replaces the old one, even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
```azurecli-interactive az tag update --resource-id $resource --operation Merge --tags Status=Green
az tag update --resource-id $resource --operation Merge --tags Status=Green
}, ```
-When you set the `--operation` parameter to `Replace`, the existing tags are replaced by the new set of tags.
+When you set the `--operation` parameter to `Replace`, the new set of tags replaces the existing tags.
```azurecli-interactive az tag update --resource-id $resource --operation Replace --tags Project=ECommerce CostCenter=00123 Team=Web
Only the new tags remain on the resource.
}, ```
-The same commands also work with resource groups or subscriptions. You pass in the identifier for the resource group or subscription you want to tag.
+The same commands also work with resource groups or subscriptions. Pass in the identifier of the resource group or subscription you want to tag.
To add a new set of tags to a resource group, use:
az tag update --resource-id /subscriptions/$sub --operation Merge --tags Team="W
### List tags
-To get the tags for a resource, resource group, or subscription, use the [az tag list](/cli/azure/tag#az-tag-list) command and pass in the resource ID for the entity.
+To get the tags for a resource, resource group, or subscription, use the [az tag list](/cli/azure/tag#az-tag-list) command and pass the resource ID of the entity.
To see the tags for a resource, use:
az group list --tag Dept=Finance
### Remove tags
-To remove specific tags, use `az tag update` and set `--operation` to `Delete`. Pass in the tags you want to delete.
+To remove specific tags, use `az tag update` and set `--operation` to `Delete`. Pass in the tags that you want to delete.
```azurecli-interactive az tag update --resource-id $resource --operation Delete --tags Project=ECommerce Team=Web ```
-The specified tags are removed.
+You've removed the specified tags.
```output "properties": {
az tag delete --resource-id $resource
### Handling spaces
-If your tag names or values include spaces, enclose them in double quotes.
+If your tag names or values include spaces, enclose them in quotation marks.
```azurecli-interactive az tag update --resource-id $group --operation Merge --tags "Cost Center"=Finance-1222 Location="West US"
az tag update --resource-id $group --operation Merge --tags "Cost Center"=Financ
## ARM templates
-You can tag resources, resource groups, and subscriptions during deployment with an Azure Resource Manager template (ARM template).
+You can tag resources, resource groups, and subscriptions during deployment with an ARM template.
> [!NOTE] > The tags you apply through an ARM template or Bicep file overwrite any existing tags.
resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
### Apply an object
-You can define an object parameter that stores several tags, and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that is applied to the tag element.
+You can define an object parameter that stores several tags and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that's applied to the tag element.
# [JSON](#tab/json)
resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
### Apply tags from resource group
-To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When getting the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
+To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When you get the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
# [JSON](#tab/json)
resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
### Apply tags to resource groups or subscriptions
-You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. The tags are applied to the target resource group or subscription for the deployment. Each time you deploy the template you replace any tags there were previously applied.
+You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. The tags are applied to the target resource group or subscription for the deployment. Each time you deploy the template, you replace any previously applied tags.
# [JSON](#tab/json)
resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
-To apply the tags to a resource group, use either PowerShell or Azure CLI. Deploy to the resource group that you want to tag.
+To apply the tags to a resource group, use either Azure PowerShell or Azure CLI. Deploy to the resource group that you want to tag.
```azurepowershell-interactive New-AzResourceGroupDeployment -ResourceGroupName exampleGroup -TemplateFile https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
To work with tags through the Azure REST API, use:
## SDKs
-For samples of applying tags with SDKs, see:
+For examples of applying tags with SDKs, see:
* [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/resourcemanager/Azure.ResourceManager/samples/Sample2_ManagingResourceGroups.md) * [Java](https://github.com/Azure-Samples/resources-java-manage-resource-group/blob/master/src/main/java/com/azure/resourcemanager/resources/samples/ManageResourceGroup.java)
For samples of applying tags with SDKs, see:
## Inherit tags
-Tags applied to the resource group or subscription aren't inherited by the resources. To apply tags from a subscription or resource group to the resources, see [Azure Policies - tags](tag-policies.md).
+Resources don't inherit the tags you apply to a resource group or a subscription. To apply tags from a subscription or resource group to the resources, see [Azure Policies - tags](tag-policies.md).
## Tags and billing
-You can use tags to group your billing data. For example, if you're running multiple VMs for different organizations, use the tags to group usage by cost center. You can also use tags to categorize costs by runtime environment, such as the billing usage for VMs running in the production environment.
+You can use tags to group your billing data. If you're running multiple VMs for different organizations, for example, use the tags to group usage by cost center. You can also use tags to categorize costs by runtime environment, such as the billing usage for VMs running in the production environment.
-You can retrieve information about tags by downloading the usage file, a comma-separated values (CSV) file available from the Azure portal. For more information, see [Download or view your Azure billing invoice and daily usage data](../../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md). For services that support tags with billing, the tags appear in the **Tags** column.
+You can retrieve information about tags by downloading the usage file available from the Azure portal. For more information, see [Download or view your Azure billing invoice and daily usage data](../../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md). For services that support tags with billing, the tags appear in the **Tags** column.
For REST API operations, see [Azure Billing REST API Reference](/rest/api/billing/).
For REST API operations, see [Azure Billing REST API Reference](/rest/api/billin
The following limitations apply to tags: * Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
-* Each resource, resource group, and subscription can have a maximum of 50 tag name/value pairs. If you need to apply more tags than the maximum allowed number, use a JSON string for the tag value. The JSON string can contain many values that are applied to a single tag name. A resource group or subscription can contain many resources that each have 50 tag name/value pairs.
-* The tag name is limited to 512 characters, and the tag value is limited to 256 characters. For storage accounts, the tag name is limited to 128 characters, and the tag value is limited to 256 characters.
-* Tags can't be applied to classic resources such as Cloud Services.
-* Azure IP Groups and Azure Firewall Policies don't support PATCH operations, which means they don't support updating tags through the portal. Instead, use the update commands for those resources. For example, you can update tags for an IP group with the [az network ip-group update](/cli/azure/network/ip-group#az-network-ip-group-update) command.
+* Each resource, resource group, and subscription can have a maximum of 50 tag name-value pairs. If you need to apply more tags than the maximum allowed number, use a JSON string for the tag value (see the example after these limitations). The JSON string can contain many values that are applied to a single tag name. A resource group or subscription can contain many resources that each have 50 tag name-value pairs.
+* The tag name has a limit of 512 characters and the tag value has a limit of 256 characters. For storage accounts, the tag name has a limit of 128 characters and the tag value has a limit of 256 characters.
+* Classic resources such as Cloud Services don't support tags.
+* Azure IP Groups and Azure Firewall Policies don't support PATCH operations, which means you can't update their tags through the portal. Instead, use the update commands for those resources. For example, you can update tags for an IP group with the [az network ip-group update](/cli/azure/network/ip-group#az-network-ip-group-update) command.
* Tag names can't contain these characters: `<`, `>`, `%`, `&`, `\`, `?`, `/` > [!NOTE]
- > * Azure DNS zones don't support the use of spaces in the tag or a tag that starts with a number. Azure DNS tag names do not support special and unicode characters. The value can contain all characters.
+ > * Azure Domain Name System (DNS) zones don't support the use of spaces in the tag or a tag that starts with a number. Azure DNS tag names don't support special and unicode characters. The value can contain all characters.
> > * Traffic Manager doesn't support the use of spaces, `#` or `:` in the tag name. The tag name can't start with a number. >
The following limitations apply to tags:
> > * The following Azure resources only support 15 tags: > * Azure Automation
- > * Azure CDN
+ > * Azure Content Delivery Network (CDN)
> * Azure DNS (Zone and A records) > * Azure Private DNS (Zone, A records, and virtual network link)
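As referenced in the limitation about the 50 tag name-value pair maximum, here is a sketch of packing several values into one tag by using a JSON string as the value. The tag name and values are invented for illustration, and `$resource` is assumed to hold a resource ID as in the earlier Azure CLI examples.

```azurecli
# Illustration: store several related values in a single tag as a JSON string (within the 256-character value limit).
az tag update --resource-id $resource --operation Merge \
  --tags CostCenters='{"finance":"0001","marketing":"0002"}'
```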
azure-signalr Signalr Quickstart Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-dotnet-core.md
The code for this tutorial is available for download in the [AzureSignalR-sample
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note-dotnet.md)]
+Ready to start?
+
+> [!div class="nextstepaction"]
+> [Step by step build](#prerequisites)
+
+> [!div class="nextstepaction"]
+> [Try chat demo now](https://asrs-simplechat-live-demo.azurewebsites.net/)
+ ## Prerequisites * Install the [.NET Core SDK](https://dotnet.microsoft.com/download).
azure-signalr Signalr Tutorial Build Blazor Server Chat App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-build-blazor-server-chat-app.md
description: In this tutorial, you learn how to build and modify a Blazor Server
Previously updated : 09/09/2020 Last updated : 05/22/2022 ms.devlang: csharp
This tutorial shows you how to build and modify a Blazor Server app. You'll lear
> * Quick-deploy to Azure App Service in Visual Studio. > * Migrate from local SignalR to Azure SignalR Service.
+Ready to start?
+
+> [!div class="nextstepaction"]
+> [Step by step build](#prerequisites)
+
+> [!div class="nextstepaction"]
+> [Try Blazor demo now](https://asrs-blazorchat-live-demo.azurewebsites.net/chatroom)
+ ## Prerequisites * Install [.NET Core 3.0 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.0) (Version >= 3.0.100)
azure-web-pubsub Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/overview.md
There are many different ways to program with Azure Web PubSub service, as some
- **Use provided SDKs to manage the WebSocket connections in self-host app servers** - Azure Web PubSub service provides SDKs in C#, JavaScript, Java and Python to manage the WebSocket connections easily, including broadcast messages to the connections, add connections to some groups, or close the connections, etc. - **Send messages from server to clients via REST API** - Azure Web PubSub service provides REST API to enable applications to post messages to clients connected, in any REST capable programming languages.
+## Quick start
+
+> [!div class="nextstepaction"]
+> [Play with chat demo](https://azure.github.io/azure-webpubsub/demos/chat)
+
+> [!div class="nextstepaction"]
+> [Build a chat app](tutorial-build-chat.md)
+ ## Next steps [!INCLUDE [next step](includes/include-next-step.md)]
azure-web-pubsub Quickstart Use Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-use-sdk.md
Now let's use Azure Web PubSub SDK to publish a message to the connected client.
console.log('Usage: node publish <message>'); return 1; }
- const hub = "pubsub";
+ const hub = "myHub1";
let service = new WebPubSubServiceClient(process.env.WebPubSubConnectionString, hub); // by default it uses `application/json`, specify contentType as `text/plain` if you want plain-text service.sendToAll(process.argv[2], { contentType: "text/plain" });
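A hedged usage note for the snippet above: the environment variable name and the `publish` script come from the snippet itself, while the connection string and message text are placeholders.

```bash
# Set the connection string the script reads from process.env, then publish a message to hub "myHub1".
export WebPubSubConnectionString="<your-web-pubsub-connection-string>"
node publish "Hello, Web PubSub"
```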
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md
Title: Troubleshoot the Azure Backup agent description: In this article, learn how to troubleshoot the installation and registration of the Azure Backup agent. Previously updated : 04/05/2022 Last updated : 05/31/2022
We recommend that you check the following before you start troubleshooting Micro
- You can use [Add Exclusion rules to existing policy](./backup-azure-manage-mars.md#add-exclusion-rules-to-existing-policy) to exclude unsupported, missing, or deleted files from your backup policy to ensure successful backups. -- Avoid deleting and recreating protected folders with the same names in the top-level folder. Doing so could result in the backup completing with warnings with the error *A critical inconsistency was detected, therefore changes cannot be replicated.* If you need to delete and recreate folders, then consider doing so in subfolders under the protected top-level folder.
+- Avoid deleting and recreating protected folders with the same names in the top-level folder. Doing so could result in the backup completing with warnings and the error: *A critical inconsistency was detected, therefore changes cannot be replicated.* If you need to delete and recreate folders, then consider doing so in subfolders under the protected top-level folder.
## Failed to set the encryption key for secure backups
We recommend that you check the following before you start troubleshooting Micro
| Error | Possible causes | Recommended actions | ||||
-| <br />Error 34506. The encryption passphrase stored on this computer is not correctly configured. | <li> The scratch folder is located on a volume that doesn't have enough space. <li> The scratch folder has been incorrectly moved. <li> The OnlineBackup.KEK file is missing. | <li>Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS Agent.<li>Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](./backup-azure-file-folder-backup-faq.yml).<li> Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. |
+| <br />Error 34506. The encryption passphrase stored on this computer is not correctly configured. | <li> The scratch folder is located on a volume that doesn't have enough space. <li> The scratch folder has been incorrectly moved. <li> The OnlineBackup.KEK file is missing. | <li>Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS Agent.<li>Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](./backup-azure-file-folder-backup-faq.yml).<li> Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. <li> If you've recently moved your scratch folder, ensure that the path of your scratch folder location matches the values of the registry key entries shown below: <br><br> **Registry path**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config` <br> **Registry Key**: ScratchLocation <br> **Value**: *New cache folder location* <br><br>**Registry path**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config\CloudBackupProvider` <br> **Registry Key**: ScratchLocation <br> **Value**: *New cache folder location* |
## Backups don't run according to schedule
cdn Cdn Azure Cli Create Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/scripts/cli/cdn-azure-cli-create-endpoint.md
Last updated 03/09/2021 -
+ms.devlang: azurecli
+ms.tool: azure-cli
# Create an Azure CDN profile and endpoint using the Azure CLI
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 5/11/2022 Last updated : 5/26/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
->[!NOTE]
-
->The May Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the May Guest OS. This list is subject to change.
## May 2022 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 22-05 | [5013941] | Latest Cumulative Update(LCU) | 6.44 | May 10, 2022 |
-| Rel 22-05 | [5011486] | IE Cumulative Updates | 2.123, 3.110, 4.103 | Mar 8, 2022 |
-| Rel 22-05 | [5013944] | Latest Cumulative Update(LCU) | 7.12 | May 10, 2022 |
-| Rel 22-05 | [5013952] | Latest Cumulative Update(LCU) | 5.68 | May 10, 2022 |
-| Rel 22-05 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | 2.123 | May 10, 2022 |
-| Rel 22-05 | [5012141] | .NET Framework 4.5.2 Security and Quality Rollup | 2.123 | Apr 12, 2022 |
-| Rel 22-05 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | 4.103 | May 10, 2022 |
-| Rel 22-05 | [5012142] | .NET Framework 4.5.2 Security and Quality Rollup | 4.103 | Apr 12, 2022 |
-| Rel 22-05 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | 3.110 | May 10, 2022 |
-| Rel 22-05 | [5012140] | . NET Framework 4.5.2 Security and Quality Rollup | 3.110 | Apr 12, 2022 |
-| Rel 22-05 | [5013641] | . NET Framework 3.5 and 4.7.2 Cumulative Update | 6.44 | May 10, 2022 |
-| Rel 22-05 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | 7.12 | May 10, 2022 |
-| Rel 22-05 | [5014012] | Monthly Rollup | 2.123 | May 10, 2022 |
-| Rel 22-05 | [5014017] | Monthly Rollup | 3.110 | May 10, 2022 |
-| Rel 22-05 | [5014011] | Monthly Rollup | 4.103 | May 10, 2022 |
-| Rel 22-05 | [5014027] | Servicing Stack update | 3.110 | May 10, 2022 |
-| Rel 22-05 | [5014025] | Servicing Stack update | 4.103 | May 10, 2022 |
-| Rel 22-05 | [4578013] | Standalone Security Update | 4.103 | Aug 19, 2020 |
-| Rel 22-05 | [5014026] | Servicing Stack update | 5.68 | May 10, 2022 |
-| Rel 22-05 | [5011649] | Servicing Stack update | 2.123 | Mar 8, 2022 |
-| Rel 22-05 | [4494175] | Microcode | 5.68 | Sep 1, 2020 |
-| Rel 22-05 | [4494174] | Microcode | 6.44 | Sep 1, 2020 |
+| Rel 22-05 | [5013941] | Latest Cumulative Update(LCU) | [6.44] | May 10, 2022 |
+| Rel 22-05 | [5011486] | IE Cumulative Updates | [2.123], [3.110], [4.103] | Mar 8, 2022 |
+| Rel 22-05 | [5013944] | Latest Cumulative Update(LCU) | [7.12] | May 10, 2022 |
+| Rel 22-05 | [5013952] | Latest Cumulative Update(LCU) | [5.68] | May 10, 2022 |
+| Rel 22-05 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | [2.123] | May 10, 2022 |
+| Rel 22-05 | [5012141] | .NET Framework 4.5.2 Security and Quality Rollup | [2.123] | Apr 12, 2022 |
+| Rel 22-05 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | [4.103] | May 10, 2022 |
+| Rel 22-05 | [5012142] | .NET Framework 4.5.2 Security and Quality Rollup | [4.103] | Apr 12, 2022 |
+| Rel 22-05 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | [3.110] | May 10, 2022 |
+| Rel 22-05 | [5012140] | .NET Framework 4.5.2 Security and Quality Rollup | [3.110] | Apr 12, 2022 |
+| Rel 22-05 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.44] | May 10, 2022 |
+| Rel 22-05 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | [7.12] | May 10, 2022 |
+| Rel 22-05 | [5014012] | Monthly Rollup | [2.123] | May 10, 2022 |
+| Rel 22-05 | [5014017] | Monthly Rollup | [3.110] | May 10, 2022 |
+| Rel 22-05 | [5014011] | Monthly Rollup | [4.103] | May 10, 2022 |
+| Rel 22-05 | [5014027] | Servicing Stack update | [3.110] | May 10, 2022 |
+| Rel 22-05 | [5014025] | Servicing Stack update | [4.103] | May 10, 2022 |
+| Rel 22-05 | [4578013] | Standalone Security Update | [4.103] | Aug 19, 2020 |
+| Rel 22-05 | [5014026] | Servicing Stack update | [5.68] | May 10, 2022 |
+| Rel 22-05 | [5011649] | Servicing Stack update | [2.123] | Mar 8, 2022 |
+| Rel 22-05 | [4494175] | Microcode | [5.68] | Sep 1, 2020 |
+| Rel 22-05 | [4494174] | Microcode | [6.44] | Sep 1, 2020 |
[5013941]: https://support.microsoft.com/kb/5013941 [5011486]: https://support.microsoft.com/kb/5011486
The following tables show the Microsoft Security Response Center (MSRC) updates
[5011649]: https://support.microsoft.com/kb/5011649 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174
+[2.123]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.110]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.103]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.68]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.44]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.12]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## April 2022 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 4/30/2022 Last updated : 5/26/2022 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **May 26, 2022**
+The May Guest OS has released.
+ ###### **April 30, 2022** The April Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.12_202205-01 | May 26, 2022 | Post 7.14 |
| WA-GUEST-OS-7.11_202204-01 | April 30, 2022 | Post 7.13 |
-| WA-GUEST-OS-7.10_202203-01 | March 19, 2022 | Post 7.12 |
+|~~WA-GUEST-OS-7.10_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-7.9_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-7.8_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-7.6_202112-01~~| January 10, 2022 | March 2, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.44_202205-01 | May 26, 2022 | Post 6.46 |
| WA-GUEST-OS-6.43_202204-01 | April 30, 2022 | Post 6.45 |
-| WA-GUEST-OS-6.42_202203-01 | March 19, 2022 | Post 6.44 |
+|~~WA-GUEST-OS-6.42_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-6.41_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-6.40_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-6.38_202112-01~~| January 10, 2022 | March 2, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.68_202205-01 | May 26, 2022 | Post 5.70 |
| WA-GUEST-OS-5.67_202204-01 | April 30, 2022 | Post 5.69 |
-| WA-GUEST-OS-5.66_202203-01 | March 19, 2022 | Post 5.68 |
+|~~WA-GUEST-OS-5.66_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-5.65_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-5.64_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-5.62_202112-01~~| January 10, 2022 | March 2, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.103_202205-01 | May 26, 2022 | Post 4.105 |
| WA-GUEST-OS-4.102_202204-01 | April 30, 2022 | Post 4.104 |
-| WA-GUEST-OS-4.101_202203-01 | March 19, 2022 | Post 4.103 |
+|~~WA-GUEST-OS-4.101_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-4.100_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-4.99_202201-02~~| February 11 , 2022 | March 19, 2022 | |~~WA-GUEST-OS-4.97_202112-01~~| January 10 , 2022 | March 2, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.110_202205-01 | May 26, 2022 | Post 3.112 |
| WA-GUEST-OS-3.109_202204-01 | April 30, 2022 | Post 3.111 |
-| WA-GUEST-OS-3.108_202203-01 | March 19, 2022 | Post 3.110 |
+|~~WA-GUEST-OS-3.108_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-3.107_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-3.106_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-3.104_202112-01~~| January 10, 2022 | March 2, 2022|
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.123_202205-01 | May 26, 2022 | Post 2.125 |
| WA-GUEST-OS-2.122_202204-01 | April 30, 2022 | Post 2.124 |
-| WA-GUEST-OS-2.121_202203-01 | March 19, 2022 | Post 2.123 |
+|~~WA-GUEST-OS-2.121_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-2.120_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-2.119_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-2.117_202112-01~~| January 10, 2022 | March 2, 2022 |
cloud-shell Example Terraform Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/example-terraform-bash.md
vm-linux
Last updated 11/15/2017
+ms.tool: terraform
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Next, the phoneme sequence goes into the neural acoustic model to predict acoust
Neural text-to-speech voice models are trained by using deep neural networks based on the recording samples of human voices. For more information, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911). To learn more about how a neural vocoder is trained, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
-You can adapt the neural text-to-speech engine to fit your needs. To create a custom neural voice, use [Speech Studio](https://speech.microsoft.com/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Custom Neural Voice can use text provided by the user to convert text into speech in real time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [web portal](https://speech.microsoft.com/audiocontentcreation).
+You can adapt the neural text-to-speech engine to fit your needs. To create a custom neural voice, use [Speech Studio](https://aka.ms/speechstudio/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Custom Neural Voice can use text provided by the user to convert text into speech in real time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [web portal](https://speech.microsoft.com/audiocontentcreation).
## Custom Neural Voice project types
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
Here's more information about the sequence of steps shown in the previous diagra
1. [Choose a model](how-to-custom-speech-choose-model.md) and create a Custom Speech project. Use a <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. 1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the Microsoft speech-to-text offering for your applications, tools, and products.
-1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://speech.microsoft.com/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data.
+1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data.
1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech-to-text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required. 1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended. 1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint.
cognitive-services How To Custom Commands Integrate Remote Skills https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-integrate-remote-skills.md
In this article, you will learn how to export a Custom Commands application as a remote skill.
+> [!NOTE]
+> Exporting a Custom Commands application as a remote skill is a limited preview feature.
+ ## Prerequisites > [!div class="checklist"] > * [Understanding of Bot Framework Skill](/azure/bot-service/skills-conceptual)
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
In this article, you'll learn how to deploy an endpoint for a Custom Speech mode
To create a custom endpoint, follow these steps:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Deploy models**. If this is your first endpoint, you'll notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
An endpoint can be updated to use another model that was created by the same Spe
To use a new model and redeploy the custom endpoint:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Deploy models**. 1. Select the link to an endpoint by name, and then select **Change model**. 1. Select the new model that you want the endpoint to use.
Logging data is available for export if you configured it while creating the end
To download the endpoint logs:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Deploy models**. 1. Select the link by endpoint name. 1. Under **Content logging**, select **Download log**.
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
You can test the accuracy of your custom model by creating a test. A test requir
Follow these steps to create a test:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Test models**. 1. Select **Create new test**. 1. Select **Evaluate accuracy** > **Next**.
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
If you plan to train a model with audio data, use a Speech resource in a [region
After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.md), follow these instructions to start training your model:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Train custom models**. 1. Select **Train a new model**. 1. On the **Select a baseline model** page, select a base model, and then select **Next**. If you aren't sure, select the most recent model from the top of the list.
cognitive-services How To Custom Speech Transcription Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-transcription-editor.md
Datasets in the **Training and testing dataset** tab can't be updated. You can i
To import a dataset to the Editor, follow these steps:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**. 1. Select **Import data** 1. Select datasets. You can select audio data only, audio + human-labeled data, or both. For audio-only data, you can use the default models to automatically generate machine transcription after importing to the editor.
Once a dataset has been imported to the Editor, you can start editing the datase
To edit a dataset's transcription in the Editor, follow these steps:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**. 1. Select the link to a dataset by name. 1. From the **Audio + text files** table, select the link to an audio file by name.
Datasets in the Editor can be exported to the **Training and testing dataset** t
To export datasets from the Editor, follow these steps:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**. 1. Select the link to a dataset by name. 1. Select one or more rows from the **Audio + text files** table.
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
You need audio or text data for testing the accuracy of Microsoft speech recogni
To upload your own datasets in Speech Studio, follow these steps:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Speech datasets** > **Upload data**. 1. Select the **Training data** or **Testing data** tab. 1. Select a dataset type, and then select **Next**.
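Dataset uploads can also be scripted instead of using the Studio UI. The sketch below is an assumption-heavy illustration against the Speech-to-text REST API v3.0 `datasets` operation; the path, the `kind`/`contentUrl`/`locale`/`displayName` fields, and all values shown are placeholders to verify against the current REST reference:

```python
import requests

# Assumed values and payload shape; verify against the Speech-to-text REST API v3.0 reference.
region = "eastus"
key = "YourSubscriptionKey"
url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/datasets"

body = {
    "kind": "Acoustic",                                       # audio + human-labeled transcript data
    "contentUrl": "https://example.com/training-data.zip",    # hypothetical, publicly reachable archive
    "locale": "en-US",
    "displayName": "My training dataset",
}

resp = requests.post(url, json=body, headers={"Ocp-Apim-Subscription-Key": key})
resp.raise_for_status()
print(resp.json().get("self"))  # URL of the created dataset resource
```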
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
A Speech service subscription is required before you can use Custom Neural Voice
Once you've created an Azure account and a Speech service subscription, you'll need to sign in to Speech Studio and connect your subscription. 1. Get your Speech service subscription key from the Azure portal.
-1. Sign in to [Speech Studio](https://speech.microsoft.com), and then select **Custom Voice**.
+1. Sign in to [Speech Studio](https://aka.ms/speechstudio/customvoice), and then select **Custom Voice**.
1. Select your subscription and create a speech project. 1. If you want to switch to another Speech subscription, select the **cog** icon at the top.
Content like data, models, tests, and endpoints are organized into projects in S
To create a custom voice project:
-1. Sign in to [Speech Studio](https://speech.microsoft.com).
+1. Sign in to [Speech Studio](https://aka.ms/speechstudio/customvoice).
1. Select **Text-to-Speech** > **Custom Voice** > **Create project**. See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects.
After the recordings are ready, follow [Prepare training data](how-to-custom-voi
### Training
-After you've prepared the training data, go to [Speech Studio](https://aka.ms/custom-voice) to create your custom neural voice. Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
+After you've prepared the training data, go to [Speech Studio](https://aka.ms/speechstudio/customvoice) to create your custom neural voice. Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
### Testing
cognitive-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-custom-neural-voice.md
Before you can migrate to custom neural voice, your [application](https://aka.ms
> Even without an Azure account, you can listen to voice samples in [Speech Studio](https://aka.ms/customvoice) and determine the right voice for your business needs. 1. Learn more about our [limited access policy](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and then [apply here](https://aka.ms/customneural).
-2. Once your application is approved, you will be provided with the access to the "neural" training feature. Make sure you log in to [Speech Studio](https://speech.microsoft.com) using the same Azure subscription that you provide in your application.
+2. Once your application is approved, you will be provided with the access to the "neural" training feature. Make sure you log in to [Speech Studio](https://aka.ms/speechstudio/customvoice) using the same Azure subscription that you provide in your application.
> [!IMPORTANT] > To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence. You can find the statement in multiple languages [here](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to the Speech Studio as shown below to create a voice talent profile, which is used to verify against your training data when you create a voice model. Read more about the [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here.
cognitive-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/improve-accuracy-phrase-list.md
Now try Speech Studio to see how phrase list can improve recognition accuracy.
> [!NOTE] > You may be prompted to select your Azure subscription and Speech resource, and then acknowledge billing for your region.
-1. Sign in to [Speech Studio](https://speech.microsoft.com/).
-1. Select **Real-time Speech-to-text**.
+1. Go to **Real-time Speech-to-text** in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool).
1. You test speech recognition by uploading an audio file or recording audio with a microphone. For example, select **record audio with a microphone** and then say "Hi Rehaan, this is Jessie from Contoso bank. " Then select the red button to stop recording. 1. You should see the transcription result in the **Test results** text box. If "Rehaan", "Jessie", or "Contoso" were recognized incorrectly, you can add the terms to a phrase list in the next step. 1. Select **Show advanced options** and turn on **Phrase list**.
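The same phrase list capability is available in client code through the Speech SDK's `PhraseListGrammar`. A minimal Python sketch with placeholder key and region:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

# Bias recognition toward terms that are otherwise hard to recognize.
phrase_list = speechsdk.PhraseListGrammar.from_recognizer(recognizer)
phrase_list.addPhrase("Rehaan")
phrase_list.addPhrase("Jessie")
phrase_list.addPhrase("Contoso")

result = recognizer.recognize_once()
print(result.text)
```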
cognitive-services Quickstart Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstart-custom-commands-application.md
At this time, Custom Commands supports speech subscriptions created in regions t
## Go to the Speech Studio for Custom Commands
-1. In a web browser, go to [Speech Studio](https://speech.microsoft.com/).
+1. In a web browser, go to [Speech Studio](https://aka.ms/speechstudio/customcommands).
1. Enter your credentials to sign in to the portal. The default view is your list of Speech subscriptions.
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
# Speech service supported regions
-The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs. You can perform custom configurations to your speech experience, for all regions, at the [Speech Studio](https://speech.microsoft.com).
+The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs. You can perform custom configurations to your speech experience, for all regions, at the [Speech Studio](https://aka.ms/speechstudio/).
Keep in mind the following points:
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
Datasets for customer-created data assets, such as customized speech models, cus
While some customers use our default endpoints to transcribe audio or standard voices for speech synthesis, other customers create assets for customization.
-These assets are backed up regularly and automatically by the repositories themselves, so **no data loss will occur** if a region becomes unavailable. However, you must take steps to ensure service continuity in the event of a region outage.
+These assets are backed up regularly and automatically by the repositories themselves, so **no data loss will occur** if a region becomes unavailable. However, you must take steps to ensure service continuity if there's a region outage.
## How to monitor service availability
-If you use our default endpoints, you should configure your client code to monitor for errors, and if errors persist, be prepared to re-direct to another region of your choice where you have a service subscription.
+If you use the default endpoints, you should configure your client code to monitor for errors. If errors persist, be prepared to redirect to another region where you have a service subscription.
Follow these steps to configure your client to monitor for errors:
3. From Azure portal, create Speech Service resources for each region. - If you have set a specific quota, you may also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md).
-4. Note that each region has its own STS token service. For the primary region and any backup regions your client configuration file needs to know the:
+4. Each region has its own STS token service. For the primary region and any backup regions, your client configuration file needs to know the:
- Regional Speech service endpoints - [Regional subscription key and the region code](./rest-speech-to-text.md)
-5. Configure your code to monitor for connectivity errors (typically connection timeouts and service unavailability errors). Here is sample code in C#: [GitHub: Adding Sample for showing a possible candidate for switching regions](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/fa6428a0837779cbeae172688e0286625e340942/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L965).
+5. Configure your code to monitor for connectivity errors (typically connection timeouts and service unavailability errors). Here's sample code in C#: [GitHub: Adding Sample for showing a possible candidate for switching regions](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/fa6428a0837779cbeae172688e0286625e340942/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L965).
1. Since networks experience transient errors, retry when a connectivity issue occurs only once. 2. If errors persist, redirect traffic to the new STS token service and Speech service endpoint. (For text-to-speech, see the reference sample code: [GitHub: TTS public voice switching region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L880).)
-The recovery from regional failures for this usage type can be instantaneous and at a very low cost. All that is required is the development of this functionality on the client side. The data loss that will incur assuming no backup of the audio stream will be minimal.
+The recovery from regional failures for this usage type can be instantaneous and at a low cost. All that is required is the development of this functionality on the client side. Assuming no backup of the audio stream, the data loss incurred will be minimal.
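In Python, the pattern that the C# sample above demonstrates might look like the following sketch: retry transient failures, then rebuild the recognizer against a backup region you've provisioned. Regions, keys, retry counts, and the audio file are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder configuration: a primary region plus one or more provisioned backup regions.
REGIONS = [("eastus", "PrimaryRegionKey"), ("westus2", "BackupRegionKey")]
MAX_ATTEMPTS_PER_REGION = 3

def recognize_with_failover(filename: str) -> str:
    for region, key in REGIONS:
        config = speechsdk.SpeechConfig(subscription=key, region=region)
        for attempt in range(MAX_ATTEMPTS_PER_REGION):
            audio = speechsdk.audio.AudioConfig(filename=filename)
            recognizer = speechsdk.SpeechRecognizer(speech_config=config, audio_config=audio)
            result = recognizer.recognize_once()
            if result.reason == speechsdk.ResultReason.RecognizedSpeech:
                return result.text
            if result.reason == speechsdk.ResultReason.Canceled:
                details = speechsdk.CancellationDetails.from_result(result)
                print(f"{region} attempt {attempt + 1} failed: {details.reason}")
        # Errors persisted in this region: fall through to the next configured region.
    raise RuntimeError("All configured regions failed")

print(recognize_with_failover("sample.wav"))
```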
## Custom endpoint recovery
-Data assets, models or deployments in one region cannot be made visible or accessible in any other region.
+Data assets, models or deployments in one region can't be made visible or accessible in any other region.
You should create Speech Service resources in both a main and a secondary region by following the same steps as used for default endpoints. ### Custom Speech
-Custom Speech Service does not support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
+Custom Speech Service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
1. Create your custom model in one main region (Primary). 2. Run the [Model Copy API](https://eastus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to replicate the custom model to all prepared regions (Secondary).
Custom Speech Service does not support automatic failover. We suggest the follow
- If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md). 4. Configure your client to fail over on persistent errors as with the default endpoints usage.
-Your client code can monitor availability of your deployed models in your primary region, and redirect their audio traffic to the secondary region when the primary fails. If you do not require real-time failover, you can still follow these steps to prepare for a manual failover.
+Your client code can monitor the availability of your deployed models in your primary region, and redirect audio traffic to the secondary region when the primary region fails. If you don't require real-time failover, you can still follow these steps to prepare for a manual failover.
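Step 2 above (replicating the custom model) can be scripted as well. This sketch assumes the v3.0 `copyto` operation and a `targetSubscriptionKey` body field, as suggested by the Model Copy API link; confirm the exact path and payload in the Speech-to-text REST reference for your API version:

```python
import requests

# Assumed operation and payload; verify against the Speech-to-text v3.0 REST reference.
primary_region = "eastus"
primary_key = "PrimaryRegionKey"
secondary_key = "SecondaryRegionKey"   # key of the Speech resource in the backup region
model_id = "YourCustomModelId"

url = (f"https://{primary_region}.api.cognitive.microsoft.com"
       f"/speechtotext/v3.0/models/{model_id}/copyto")

resp = requests.post(url,
                     json={"targetSubscriptionKey": secondary_key},
                     headers={"Ocp-Apim-Subscription-Key": primary_key})
resp.raise_for_status()
print(resp.json().get("self"))  # URL of the copied model in the target resource
```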
#### Offline failover
-If you do not require real-time failover you can decide to import your data, create and deploy your models in the secondary region at a later time with the understanding that these tasks will take time to complete.
+If you don't require real-time failover, you can import your data and create and deploy your models in the secondary region at a later time, with the understanding that these tasks will take time to complete.
#### Failover time requirements
This section provides general guidance about timing. The times were recorded to
- Model copy API call: **10 mins** - Client code reconfiguration and deployment: **Depending on the client system**
-It is nonetheless advisable to create keys for a primary and secondary region for production models with real-time requirements.
+It's nonetheless advisable to create keys for a primary and secondary region for production models with real-time requirements.
### Custom Voice
-Custom Voice does not support automatic failover. Handle real-time synthesis failures with these two options.
+Custom Voice doesn't support automatic failover. Handle real-time synthesis failures with these two options.
**Option 1: Fail over to public voice in the same region.**
Check the [public voices available](./language-support.md#prebuilt-neural-voices
**Option 2: Fail over to custom voice on another region.** 1. Create and deploy your custom voice in one main region (primary).
-2. Copy your custom voice model to another region (the secondary region) in [Speech Studio](https://speech.microsoft.com).
+2. Copy your custom voice model to another region (the secondary region) in [Speech Studio](https://aka.ms/speechstudio/).
3. Go to Speech Studio and switch to the Speech resource in the secondary region. Load the copied model and create a new endpoint. - Voice model deployment usually finishes **in 3 minutes**.
- - Note: additional endpoint is subjective to additional charges. [Check the pricing for model hosting here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+ - Each endpoint is subject to extra charges. [Check the pricing for model hosting here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
4. Configure your client to fail over to the secondary region. See sample code in C#: [GitHub: custom voice failover to secondary region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L920). ### Speaker Recognition
-Speaker Recognition uses [Azure paired regions](../../availability-zones/cross-region-replication-azure.md) to automatically failover operations. Speaker enrollments and voice signatures are backed up regularly to prevent data loss and to be used in case of an outage.
+Speaker Recognition uses [Azure paired regions](../../availability-zones/cross-region-replication-azure.md) to automatically fail over operations. Speaker enrollments and voice signatures are backed up regularly to prevent data loss and to be used if there's an outage.
-During an outage, Speaker Recognition service will automatically failover to a paired region and use the backed up data to continue processing requests until the main region is back online.
+During an outage, Speaker Recognition service will automatically fail over to a paired region and use the backed-up data to continue processing requests until the main region is back online.
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
If you have multiple phrases to add, call `.addPhrase()` for each phrase to add
# [Custom speech-to-text](#tab/cstt)
-The custom speech-to-text container relies on a Custom Speech model. The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://speech.microsoft.com/customspeech).
+The custom speech-to-text container relies on a Custom Speech model. The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://aka.ms/speechstudio/customspeech).
The custom speech **Model ID** is required to run the container. For more information about how to get the model ID, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
How to get information for the base model:
How to get information for the custom model:
-1. Go to the [Speech Studio](https://speech.microsoft.com/) portal.
+1. Go to the [Speech Studio](https://aka.ms/speechstudio/customspeech) portal.
1. Sign in if necessary, and go to **Custom Speech**. 1. Select your project, and go to **Deployment**. 1. Select the required endpoint.
You aren't able to see the existing value of the concurrent request limit parame
To create an increase request, you provide your deployment region and the custom endpoint ID. To get it, perform the following actions:
-1. Go to the [Speech Studio](https://speech.microsoft.com/) portal.
+1. Go to the [Speech Studio](https://aka.ms/speechstudio/customvoice) portal.
1. Sign in if necessary, and go to **Custom Voice**. 1. Select your project, and go to **Deployment**. 1. Select the required endpoint.
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
# What is Speech Studio?
-[Speech Studio](https://speech.microsoft.com) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
+[Speech Studio](https://aka.ms/speechstudio/) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
## Speech Studio features In Speech Studio, the following Speech service features are available as project types:
-* **Real-time speech-to-text**: Quickly test speech-to-text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech-to-text works on your audio samples. To explore the full functionality, see [What is speech-to-text?](speech-to-text.md).
+* [Real-time speech-to-text](https://aka.ms/speechstudio/speechtotexttool): Quickly test speech-to-text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech-to-text works on your audio samples. To explore the full functionality, see [What is speech-to-text?](speech-to-text.md).
-* **Custom Speech**: Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md).
+* [Custom Speech](https://aka.ms/speechstudio/customspeech): Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md).
-* **Pronunciation assessment**: Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
+* [Pronunciation assessment](https://aka.ms/speechstudio/pronunciationassessment): Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
-* **Voice Gallery**: Build apps and services that speak naturally. Choose from more than 170 voices in over 70 languages and variants. Bring your scenarios to life with highly expressive and human-like neural voices.
+* [Voice Gallery](https://aka.ms/speechstudio/voicegallery): Build apps and services that speak naturally. Choose from more than 170 voices in over 70 languages and variants. Bring your scenarios to life with highly expressive and human-like neural voices.
-* **Custom Voice**: Create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
+* [Custom Voice](https://aka.ms/speechstudio/customvoice): Create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
-* **Audio Content Creation**: Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots, with the easy-to-use [Audio Content Creation](how-to-audio-content-creation.md) tool. With Speech Studio, you can export these audio files to use in your applications.
+* [Audio Content Creation](https://aka.ms/speechstudio/audiocontentcreation): Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots, with the easy-to-use [Audio Content Creation](how-to-audio-content-creation.md) tool. With Speech Studio, you can export these audio files to use in your applications.
-* **Custom Keyword**: A custom keyword is a word or short phrase that you can use to voice-activate a product. You create a custom keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
+* [Custom Keyword](https://aka.ms/speechstudio/customkeyword): A custom keyword is a word or short phrase that you can use to voice-activate a product. You create a custom keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
-* **Custom Commands**: Easily build rich, voice-command apps that are optimized for voice-first interaction experiences. Custom Commands provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively lower complexity. The feature helps you focus on building the best solution for your voice-command scenarios. For more information, see the [Develop Custom Commands applications](how-to-develop-custom-commands-application.md) guide. Also see [Integrate with a client application by using the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
+* [Custom Commands](https://aka.ms/speechstudio/customcommands): Easily build rich, voice-command apps that are optimized for voice-first interaction experiences. Custom Commands provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively lower complexity. The feature helps you focus on building the best solution for your voice-command scenarios. For more information, see the [Develop Custom Commands applications](how-to-develop-custom-commands-application.md) guide. Also see [Integrate with a client application by using the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
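To make one of the project types above concrete, here is a hedged example: a minimal Python sketch of the Pronunciation assessment feature through the Speech SDK, with placeholder key, region, audio file, and reference text:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
audio_config = speechsdk.audio.AudioConfig(filename="spoken-sentence.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

# Score the spoken audio against a known reference sentence.
assessment = speechsdk.PronunciationAssessmentConfig(
    reference_text="The quick brown fox jumps over the lazy dog.",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
assessment.apply_to(recognizer)

result = recognizer.recognize_once()
scores = speechsdk.PronunciationAssessmentResult(result)
print(scores.accuracy_score, scores.fluency_score, scores.completeness_score)
```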
## Next steps
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/voice-assistants.md
The first step in creating a voice assistant is to decide what you want it to do
| If you want... | Consider using... | Examples | |-||-| |Open-ended conversation with robust skills integration and full deployment control | Azure Bot Service bot with [Direct Line Speech](direct-line-speech.md) channel | <ul><li>"I need to go to Seattle"</li><li>"What kind of pizza can I order?"</li></ul>
-|Voice-command or simple task-oriented conversations with simplified authoring and hosting | [Custom Commands](custom-commands.md) | <ul><li>"Turn on the overhead light"</li><li>"Make it 5 degrees warmer"</li><li>More examples at [Speech Studio](https://speech.microsoft.com/customcommands)</li></ul>
+|Voice-command or simple task-oriented conversations with simplified authoring and hosting | [Custom Commands](custom-commands.md) | <ul><li>"Turn on the overhead light"</li><li>"Make it 5 degrees warmer"</li><li>More examples at [Speech Studio](https://aka.ms/speechstudio/customcommands)</li></ul>
If you aren't yet sure what you want your assistant to do, we recommend [Direct Line Speech](direct-line-speech.md) as the best option. It offers integration with a rich set of tools and authoring aids, such as the [Virtual Assistant solution and enterprise template](/azure/bot-service/bot-builder-enterprise-template-overview) and the [QnA Maker service](../qnamaker/overview/overview.md), to build on common patterns and use your existing knowledge sources.
cognitive-services Training And Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/training-and-model.md
Title: "Legacy: What are trainings and models? - Custom Translator"
+ Title: "Legacy: What are trainings and modeling? - Custom Translator"
description: A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive data sets are required: training dataset, tuning dataset, and testing dataset.
#Customer intent: As a Custom Translator user, I want to understand the concept of a model and training, so that I can efficiently use training, tuning, and testing datasets that help me build a translation model.
-# What are trainings and models?
+# What are training and modeling?
A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive document types are required: training, tuning, and testing. A dictionary document type can also be provided. For more information, _see_ [Sentence alignment](./sentence-alignment.md#suggested-minimum-number-of-sentences).
The test data should include parallel documents where the target language senten
You don't need more than 2,500 sentences as the testing data. When you let the system choose the testing set automatically, it will use a random subset of sentences from your bilingual training documents, and exclude these sentences from the training material itself.
-You can view the custom translations of the testing set, and compare them to the translations provided in your testing set, by navigating to the test tab within a model.
+You can view the custom translations of the testing set, and compare them to the translations provided in your testing set, by navigating to the test tab within a model.
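Because the three document types must be mutually exclusive, an automatically chosen testing set is simply a random held-out subset of the bilingual training sentences. The following generic Python sketch (not Custom Translator tooling; the data and split sizes are illustrative) shows that idea, capped at the 2,500-sentence test maximum mentioned above:

```python
import random

# parallel_sentences: list of (source, target) pairs from your bilingual documents.
parallel_sentences = [(f"source sentence {i}", f"target sentence {i}") for i in range(10_000)]

random.seed(7)
random.shuffle(parallel_sentences)

test_size = min(2_500, len(parallel_sentences) // 10)   # no more than 2,500 test sentences
tuning_size = len(parallel_sentences) // 10

testing = parallel_sentences[:test_size]
tuning = parallel_sentences[test_size:test_size + tuning_size]
training = parallel_sentences[test_size + tuning_size:]  # excluded from test and tuning material

print(len(training), len(tuning), len(testing))
```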
cognitive-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/use-asynchronously.md
Previously updated : 12/03/2021 Last updated : 05/27/2022
The Language service enables you to send API requests asynchronously, using eith
Currently, the following features are available to be used asynchronously: * Entity linking
-* Extractive summarization
+* Document summarization
+* Conversation summarization
* Key phrase extraction * Language detection * Named Entity Recognition (NER)
-* Personally Identifiable Information (PII) detection
+* Customer content detection
* Sentiment analysis and opinion mining * Text Analytics for health
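For the features above, asynchronous requests can be submitted as a single batched job from client code. A minimal sketch using the `azure-ai-textanalytics` Python client, with placeholder endpoint and key; which actions are available depends on your API version and region:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import (
    TextAnalyticsClient,
    RecognizeEntitiesAction,
    ExtractKeyPhrasesAction,
)

client = TextAnalyticsClient(endpoint="https://<your-resource>.cognitiveservices.azure.com/",
                             credential=AzureKeyCredential("YourLanguageKey"))

documents = ["The restaurant had great food and friendly staff."]

# Submit several analyses as one asynchronous job and poll for the results.
poller = client.begin_analyze_actions(
    documents,
    actions=[RecognizeEntitiesAction(), ExtractKeyPhrasesAction()],
)
for document_results in poller.result():
    for action_result in document_results:
        print(action_result)
```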
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Document summarization supports the following features:
This documentation contains the following article types: * [**Quickstarts**](quickstart.md?pivots=rest-api&tabs=conversation-summarization) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to/document-summarization.md) contain instructions for using the service in more specific or customized ways.
+* [**How-to guides**](how-to/conversation-summarization.md) contain instructions for using the service in more specific or customized ways.
Conversation summarization is a broad topic, consisting of several approaches to represent relevant information in text. The conversation summarization feature described in this documentation enables you to use abstractive text summarization to produce a summary of issues and resolutions in transcripts of web chats and service calls between customer-service agents and your customers.
cognitive-services Concept Active Inactive Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-active-inactive-events.md
Title: Active and inactive events - Personalizer description: This article discusses the use of active and inactive events within the Personalizer service.--++ ms.
cognitive-services Concept Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-active-learning.md
Title: Learning policy - Personalizer description: Learning settings determine the *hyperparameters* of the model training. Two models of the same data that are trained on different learning settings will end up different.--++ ms.
cognitive-services Concept Apprentice Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-apprentice-mode.md
Title: Apprentice mode - Personalizer description: Learn how to use apprentice mode to gain confidence in a model without changing any code.--++ ms.
Last updated 05/01/2020
# Use Apprentice mode to train Personalizer without affecting your existing application
-Due to the nature of **real-world** Reinforcement Learning, a Personalizer model can only be trained in a production environment. When deploying a new use case, the Personalizer model is not performing efficiently because it takes time for the model to be sufficiently trained. **Apprentice mode** is a learning behavior that eases this situation and allows you to gain confidence in the model – without the developer changing any code.
+Due to the nature of **real-world** Reinforcement Learning, a Personalizer model can only be trained in a production environment. When deploying a new use case, the Personalizer model isn't performing efficiently because it takes time for the model to be sufficiently trained. **Apprentice mode** is a learning behavior that eases this situation and allows you to gain confidence in the model – without the developer changing any code.
[!INCLUDE [Important Blue Box - Apprentice mode pricing tier](./includes/important-apprentice-mode.md)]
Apprentice mode gives you trust in the Personalizer service and its machine lear
The two main reasons to use Apprentice mode are:
-* Mitigating **Cold Starts**: Apprentice mode helps manage and assess the cost of a "new" model's learning time - when it is not returning the best action and not achieved a satisfactory level of effectiveness of around 60-80%.
+* Mitigating **Cold Starts**: Apprentice mode helps manage and assess the cost of a "new" model's learning time - when it isn't returning the best action and hasn't achieved a satisfactory level of effectiveness of around 60-80%.
* **Validating Action and Context Features**: Features sent in actions and context may be inadequate or inaccurate - too little, too much, incorrect, or too specific to train Personalizer to attain the ideal effectiveness rate. Use [feature evaluations](concept-feature-evaluation.md) to find and fix issues with features. ## When should you use Apprentice mode? Use Apprentice mode to train Personalizer to improve its effectiveness through the following scenarios while leaving the experience of your users unaffected by Personalizer:
-* You are implementing Personalizer in a new use case.
-* You have significantly changed the features you send in Context or Actions.
-* You have significantly changed when and how you calculate rewards.
+* You're implementing Personalizer in a new use case.
+* You've significantly changed the features you send in Context or Actions.
+* You've significantly changed when and how you calculate rewards.
-Apprentice mode is not an effective way of measuring the impact Personalizer is having on reward scores. To measure how effective Personalizer is at choosing the best possible action for each Rank call, use [Offline evaluations](concepts-offline-evaluation.md).
+Apprentice mode isn't an effective way of measuring the impact Personalizer is having on reward scores. To measure how effective Personalizer is at choosing the best possible action for each Rank call, use [Offline evaluations](concepts-offline-evaluation.md).
## Who should use Apprentice mode?
Apprentice mode is useful for developers, data scientists and business decision
* **Data scientists** can use Apprentice mode to validate that the features are effective for training the Personalizer models, and that the reward wait times aren't too long or short.
-* **Business Decision Makers** can use Apprentice mode to assess the potential of Personalizer to improve results (i.e. rewards) compared to existing business logic. This allows them to make a informed decision impacting user experience, where real revenue and user satisfaction are at stake.
+* **Business Decision Makers** can use Apprentice mode to assess the potential of Personalizer to improve results (i.e. rewards) compared to existing business logic. This allows them to make an informed decision impacting user experience, where real revenue and user satisfaction are at stake.
## Comparing Behaviors - Apprentice mode and Online mode
Learning when in Apprentice mode differs from Online mode in the following ways.
|--|--|--| |Impact on User Experience|You can use existing user behavior to train Personalizer by letting it observe (not affect) what your **default action** would have been and the reward it obtained. This means your users' experience and the business results from them won't be impacted.|Display top action returned from Rank call to affect user behavior.| |Learning speed|Personalizer will learn more slowly when in Apprentice mode than when learning in Online mode. Apprentice mode can only learn by observing the rewards obtained by your **default action**, which limits the speed of learning, as no exploration can be performed.|Learns faster because it can both exploit the current model and explore for new trends.|
-|Learning effectiveness "Ceiling"|Personalizer can approximate, very rarely match, and never exceed the performance of your base business logic (the reward total achieved by the **default action** of each Rank call). This approximation cieling is reduced by exploration. For example, with exploration at 20% it is very unlikely apprentice mode performance will exceed 80%, and 60% is a reasonable target at which to graduate to online mode.|Personalizer should exceed applications baseline, and over time where it stalls you should conduct on offline evaluation and feature evaluation to continue to get improvements to the model. |
-|Rank API value for rewardActionId|The users' experience doesn't get impacted, as _rewardActionId_ is always the first action you send in the Rank request. In other words, the Rank API does nothing visible for your application during Apprentice mode. Reward APIs in your application should not change how it uses the Reward API between one mode and another.|Users' experience will be changed by the _rewardActionId_ that Personalizer chooses for your application. |
+|Learning effectiveness "Ceiling"|Personalizer can approximate, very rarely match, and never exceed the performance of your base business logic (the reward total achieved by the **default action** of each Rank call). This approximation ceiling is reduced by exploration. For example, with exploration at 20% it's very unlikely apprentice mode performance will exceed 80%, and 60% is a reasonable target at which to graduate to online mode.|Personalizer should exceed applications baseline, and over time where it stalls you should conduct on offline evaluation and feature evaluation to continue to get improvements to the model. |
+|Rank API value for rewardActionId|The users' experience doesn't get impacted, as _rewardActionId_ is always the first action you send in the Rank request. In other words, the Rank API does nothing visible for your application during Apprentice mode. Reward APIs in your application shouldn't change how it uses the Reward API between one mode and another.|Users' experience will be changed by the _rewardActionId_ that Personalizer chooses for your application. |
|Evaluations|Personalizer keeps a comparison of the reward totals that your default business logic is getting, and the reward totals Personalizer would be getting if in Online mode at that point. A comparison is available in the Azure portal for that resource.|Evaluate Personalizer's effectiveness by running [Offline evaluations](concepts-offline-evaluation.md), which let you compare the total rewards Personalizer has achieved against the potential rewards of the application's baseline.| A note about apprentice mode's effectiveness:
Apprentice Mode attempts to train the Personalizer model by attempting to imitat
### Scenarios where Apprentice Mode May Not be Appropriate: #### Editorially chosen Content:
-In some scenarios such as news or entertainment, the baseline item could be manually assigned by an editorial team. This means humans are using their knowledge about the broader world, and understanding of what may be appealing content, to choose specific articles or media out of a pool, and flagging them as "preferred" or "hero" articles. Because these editors are not an algorithm, and the factors considered by editors can be nuanced and not included as features of the context and actions, Apprentice mode is unlikely to be able to predict the next baseline action. In these situations you can:
+In some scenarios such as news or entertainment, the baseline item could be manually assigned by an editorial team. This means humans are using their knowledge about the broader world, and understanding of what may be appealing content, to choose specific articles or media out of a pool, and flagging them as "preferred" or "hero" articles. Because these editors aren't an algorithm, and the factors considered by editors can be nuanced and not included as features of the context and actions, Apprentice mode is unlikely to be able to predict the next baseline action. In these situations you can:
-* Test Personalizer in Online Mode: Apprentice mode not predicting baselines does not imply Personalizer can't achieve as-good or even better results. Consider putting Personalizer in Online Mode for a period of time or in an A/B test if you have the infrastructure, and then run an Offline Evaluation to assess the difference.
+* Test Personalizer in Online Mode: Apprentice mode not predicting baselines doesn't imply Personalizer can't achieve as-good or even better results. Consider putting Personalizer in Online Mode for a period of time or in an A/B test if you have the infrastructure, and then run an Offline Evaluation to assess the difference.
* Add editorial considerations and recommendations as features: Ask your editors what factors influence their choices, and see if you can add those as features in your context and action. For example, editors in a media company may highlight content while a certain celebrity is in the news: This knowledge could be added as a Context feature. ### Factors that will improve and accelerate Apprentice Mode
-If apprentice mode is learning and attaining Matched rewards above zero but seems to be growing slowly (not getting to 60%..80% matched rewards within 2 weeks), it is possible that the challenge is having too little data. Taking the following steps could accelerate the learning.
+If apprentice mode is learning and attaining Matched rewards above zero but seems to be growing slowly (not getting to 60% to 80% matched rewards within two weeks), it's possible that the challenge is having too little data. Taking the following steps could accelerate the learning.
1. Adding more events with positive rewards over time: Apprentice mode will perform better in use cases where your application gets more than 100 positive rewards per day. For example, if a website rewarding a click has 2% clickthrough, it should be having at least 5,000 visits per day to have noticeable learning. 2. Try a reward score that is simpler and happens more frequently. For example going from "Did users finish reading the article" to "Did users start reading the article". 3. Adding differentiating features: You can do a visual inspection of the actions in a Rank call and their features. Does the baseline action have features that are differentiated from other actions? If they look mostly the same, add more features that will make them less similar.
-4. Reducing Actions per Event: Personalizer will use the Explore % setting to discover preferences and trends. When a Rank call has more actions, the chance of an Action being chosen for exploration becomes lower. Reduce the number of actions sent in each Rank call to a smaller number, to less than 10. This can be a temporary adjustement to show that Apprentice Mode has the right data to match rewards.
+4. Reducing Actions per Event: Personalizer will use the Explore % setting to discover preferences and trends. When a Rank call has more actions, the chance of an Action being chosen for exploration becomes lower. Reduce the number of actions sent in each Rank call to a smaller number, to less than 10. This can be a temporary adjustment to show that Apprentice Mode has the right data to match rewards.
## Using Apprentice mode to train with historical data If you have a significant amount of historical data that you'd like to use to train Personalizer, you can use Apprentice mode to replay the data through Personalizer.
-Set up the Personalizer in Apprentice Mode and create a script that calls Rank with the actions and context features from the historical data. Call the Reward API based on your calculations of the records in this data. You will need approximately 50,000 historical events to see some results but 500,000 is recommended for higher confidence in the results.
+Set up the Personalizer in Apprentice Mode and create a script that calls Rank with the actions and context features from the historical data. Call the Reward API based on your calculations of the records in this data. You'll need approximately 50,000 historical events to see some results but 500,000 is recommended for higher confidence in the results.
-When training from historical data, it is recommended that the data sent in (features for context and actions, their layout in the JSON used for Rank requests, and the calculation of reward in this training data set), matches the data (features and calculation of reward) available from the existing application.
+When training from historical data, it's recommended that the data sent in (features for context and actions, their layout in the JSON used for Rank requests, and the calculation of reward in this training data set), matches the data (features and calculation of reward) available from the existing application.
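A minimal sketch of such a replay script against the Personalizer Rank and Reward REST endpoints follows. The endpoint, key, and the shape of the historical records are placeholders, and the reward values are assumed to have been computed from your own business logic:

```python
import requests
import uuid

ENDPOINT = "https://<your-personalizer>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "YourPersonalizerKey"}

# historical_events: assumed shape - each record carries the context features,
# the candidate actions with their features, and the reward you computed post hoc.
historical_events = [
    {
        "context": [{"weather": "hot"}, {"dayOfWeek": "Saturday"}],
        "actions": [
            # In Apprentice mode, the first action sent acts as the default (baseline) action.
            {"id": "ice-cream", "features": [{"category": "dessert", "temperature": "cold"}]},
            {"id": "warm-tea", "features": [{"category": "drink", "temperature": "hot"}]},
        ],
        "reward": 1.0,
    },
]

for event in historical_events:
    event_id = str(uuid.uuid4())
    rank_body = {
        "eventId": event_id,
        "contextFeatures": event["context"],
        "actions": event["actions"],
    }
    rank = requests.post(f"{ENDPOINT}/personalizer/v1.0/rank", json=rank_body, headers=HEADERS)
    rank.raise_for_status()

    reward = requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
                           json={"value": event["reward"]}, headers=HEADERS)
    reward.raise_for_status()
```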
Offline and post-facto data tends to be more incomplete and noisier and differs in format. While training from historical data is possible, the results from doing so may be inconclusive and not a good predictor of how well Personalizer will learn, especially if the features vary between past data and the existing application.
Typically for Personalizer, when compared to training with historical data, chan
## Using Apprentice Mode versus A/B Tests
-It is only useful to do A/B tests of Personalizer treatments once it has been validated and is learning in Online mode. In Apprentice mode, only the **default action** is used, which means all users would effectively see the control experience.
+It's only useful to do A/B tests of Personalizer treatments once it has been validated and is learning in Online mode. In Apprentice mode, only the **default action** is used, which means all users would effectively see the control experience.
Even if Personalizer is just the _treatment_, the same challenge is present when validating the data is good for training Personalizer. Apprentice mode could be used instead, with 100% of traffic, and with all users getting the control (unaffected) experience.
cognitive-services Concept Auto Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-auto-optimization.md
Title: Auto-optimize - Personalizer description: This article provides a conceptual overview of the auto-optimize feature for Azure Personalizer service.--++ ms.
cognitive-services Concept Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-feature-evaluation.md
Title: Feature evaluation - Personalizer description: When you run an Evaluation in your Personalizer resource from the Azure portal, Personalizer provides information about what features of context and actions are influencing the model. --++ ms.
When you run an Evaluation in your Personalizer resource from the [Azure portal]
This is useful in order to: * Imagine additional features you could use, getting inspiration from what features are more important in the model.
-* See what features are not important, and potentially remove them or further analyze what may be affecting usage.
+* See what features aren't important, and potentially remove them or further analyze what may be affecting usage.
* Provide guidance to editorial or curation teams about new content or products worth bringing into the catalog. * Troubleshoot common problems and mistakes that happen when sending features to Personalizer.
To see feature importance results, you must run an evaluation. The evaluation cr
The resulting information about feature importance represents the current Personalizer online model. The evaluation analyzes feature importance of the model saved at the end date of the evaluation period, after undergoing all the training done during the evaluation, with the current online learning policy.
-The feature importance results do not represent other policies and models tested or created during the evaluation. The evaluation will not include features sent to Personalizer after the end of the evaluation period.
+The feature importance results don't represent other policies and models tested or created during the evaluation. The evaluation won't include features sent to Personalizer after the end of the evaluation period.
## How to interpret the feature importance evaluation
Personalizer evaluates features by creating "groups" of features that have simil
Information about each Feature includes:
-* Whether the feature comes from Context or Actions.
-* Feature Key and Value.
+* Whether the feature comes from Context or Actions
+* Feature Key and Value
-For example, an ice cream shop ordering app may see "Context.Weather:Hot" as a very important feature.
+For example, an ice cream shop ordering app may see `Context.Weather:Hot` as a very important feature.
Personalizer displays correlations of features that, when taken into account together, produce higher rewards.
-For example, you may see "Context.Weather:Hot *with* Action.MenuItem:IceCream" as well as "Context.Weather:Cold *with* Action.MenuItem:WarmTea:
+For example, you may see `Context.Weather:Hot` *with* `Action.MenuItem:IceCream` as well as `Context.Weather:Cold` *with* `Action.MenuItem:WarmTea:`.
## Actions you can take based on feature evaluation
For example, you may see "Context.Weather:Hot *with* Action.MenuItem:IceCream" a
Get inspiration from the more important features in the model. For example, if you see "Context.MobileBattery:Low" in a video mobile app, you may think that connection type also makes customers choose to see one video clip over another, and then add features about connectivity type and bandwidth to your app.
-### See what features are not important
+### See what features aren't important
-Potentially remove unimportant features or further analyze what may affect usage. Features may rank low for many reasons. One could be that genuinely the feature doesn't affect user behavior. But it could also mean that the feature is not apparent to the user.
+Potentially remove unimportant features or further analyze what may affect usage. Features may rank low for many reasons. One could be that genuinely the feature doesn't affect user behavior. But it could also mean that the feature isn't apparent to the user.
For example, a video site could see that "Action.VideoResolution=4k" is a low-importance feature, contradicting user research. The cause could be that the application doesn't even mention or show the video resolution, so users wouldn't change their behavior based on it. ### Provide guidance to editorial or curation teams
-Provide guidance about new content or products worth bringing into the catalog. Personalizer is designed to be a tool that augments human insight and teams. One way it does this is by providing information to editorial groups on what is it about products, articles or content that drives behavior. For example, the video application scenario may show that there is an important feature called "Action.VideoEntities.Cat:true", prompting the editorial team to bring in more cat videos.
+Provide guidance about new content or products worth bringing into the catalog. Personalizer is designed to be a tool that augments human insight and teams. One way it does this is by providing information to editorial groups on what it is about products, articles, or content that drives behavior. For example, the video application scenario may show that there's an important feature called "Action.VideoEntities.Cat:true", prompting the editorial team to bring in more cat videos.
### Troubleshoot common problems and mistakes
Common problems and mistakes can be fixed by changing your application code so i
Common mistakes when sending features include the following:
-* Sending personally identifiable information (PII). PII specific to one individual (such as name, phone number, credit card numbers, IP Addresses) should not be used with Personalizer. If your application needs to track users, use a non-identifying UUID or some other UserID number. In most scenarios this is also problematic.
-* With large numbers of users, it is unlikely that each user's interaction will weigh more than all the population's interaction, so sending user IDs (even if non-PII) will probably add more noise than value to the model.
-* Sending date-time fields as precise timestamps instead of featurized time values. Having features such as Context.TimeStamp.Day=Monday or "Context.TimeStamp.Hour"="13" is more useful. There will be at most 7 or 24 feature values for each. But "Context.TimeStamp":"1985-04-12T23:20:50.52Z" is so precise that there will be no way to learn from it because it will never happen again.
+* Sending personally identifiable information (PII). PII specific to one individual (such as name, phone number, credit card numbers, or IP addresses) shouldn't be used with Personalizer. If your application needs to track users, use a non-identifying UUID or some other UserID number. In most scenarios this is also problematic (see the next point).
+* With large numbers of users, it's unlikely that each user's interaction will weigh more than all the population's interaction, so sending user IDs (even if non-PII) will probably add more noise than value to the model.
+* Sending date-time fields as precise timestamps instead of featurized time values. Having features such as `Context.TimeStamp.Day=Monday` or `"Context.TimeStamp.Hour"="13"` is more useful. There will be at most 7 or 24 feature values for each. But `"Context.TimeStamp":"1985-04-12T23:20:50.52Z"` is so precise that there will be no way to learn from it because it will never happen again. (A short sketch follows this list.)
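The following is a minimal sketch, not taken from the article, of how featurized time values might be shaped as context features in C#. The property names (`dayOfWeek`, `hourOfDay`, `weather`) are illustrative assumptions; the resulting list would be passed as the context features of a Rank call.

```csharp
// Minimal sketch (assumed names): build coarse, repeatable time features instead of a raw timestamp.
using System;
using System.Collections.Generic;

static class ContextFeatureBuilder
{
    public static IList<object> Build(DateTimeOffset now)
    {
        return new List<object>
        {
            // At most 7 and 24 distinct values, so the model can learn from repetition.
            new { time = new { dayOfWeek = now.DayOfWeek.ToString(), hourOfDay = now.Hour.ToString() } },
            // Hypothetical extra context feature, echoing the article's weather example.
            new { weather = "Hot" }
        };
    }
}
```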
## Next steps
cognitive-services Concept Multi Slot Personalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-multi-slot-personalization.md
Title: Multi-slot personalization description: Learn where and when to use single-slot and multi-slot personalization with the Personalizer Rank and Reward APIs. --++
cognitive-services Concept Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-rewards.md
Title: Reward score - Personalizer description: The reward score indicates how well the personalization choice, RewardActionID, resulted for the user. The value of the reward score is determined by your business logic, based on observations of user behavior. Personalizer trains its machine learning models by evaluating the rewards.--++ ms.
cognitive-services Concepts Exploration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-exploration.md
Title: Exploration - Personalizer description: With exploration, Personalizer is able to continue delivering good results, even as user behavior changes. Choosing an exploration setting is a business decision about the proportion of user interactions to explore with, in order to improve the model.--++ ms.
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
Title: "Features: Action and context - Personalizer" description: Personalizer uses features, information about actions and context, to make better ranking suggestions. Features can be very generic, or specific to an item.--++ ms.
cognitive-services Concepts Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-offline-evaluation.md
Title: Use the Offline Evaluation method - Personalizer description: This article will explain how to use offline evaluation to measure effectiveness of your app and analyze your learning loop.--++ ms.
cognitive-services Concepts Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-reinforcement-learning.md
Title: Reinforcement Learning - Personalizer description: Personalizer uses information about actions and current context to make better ranking suggestions. The information about these actions and context are attributes or properties that are referred to as features.--++ ms.
While there are many subtypes and styles of reinforcement learning, this is how
* Your application provides information about each alternative and the context of the user. * Your application computes a _reward score_.
-Unlike some approaches to reinforcement learning, Personalizer does not require a simulation to work in. Its learning algorithms are designed to react to an outside world (versus control it) and learn from each data point with an understanding that it is a unique opportunity that cost time and money to create, and that there is a non-zero regret (loss of possible reward) if suboptimal performance happens.
+Unlike some approaches to reinforcement learning, Personalizer doesn't require a simulation to work in. Its learning algorithms are designed to react to an outside world (versus control it) and learn from each data point with an understanding that it's a unique opportunity that cost time and money to create, and that there's a non-zero regret (loss of possible reward) if suboptimal performance happens.
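As a rough illustration of this loop (not part of the original article), here is a minimal Rank-and-Reward sketch assuming the `Microsoft.Azure.CognitiveServices.Personalizer` .NET SDK; the endpoint, key, action IDs, feature values, and reward logic are placeholders.

```csharp
// Sketch of a single Rank-and-Reward cycle; endpoint, key, actions, and reward value are placeholders.
using System.Collections.Generic;
using Microsoft.Azure.CognitiveServices.Personalizer;
using Microsoft.Azure.CognitiveServices.Personalizer.Models;

var client = new PersonalizerClient(new ApiKeyServiceClientCredentials("<your-key>"))
{
    Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
};

// Each alternative (action) carries its own features.
var actions = new List<RankableAction>
{
    new RankableAction(id: "pasta", features: new List<object> { new { cuisine = "italian" } }),
    new RankableAction(id: "ice cream", features: new List<object> { new { type = "dessert" } })
};

// Context features describe the user and the situation.
var context = new List<object> { new { weather = "Hot", timeOfDay = "afternoon" } };

RankResponse response = client.Rank(new RankRequest(actions, context));

// After observing user behavior, your application computes a reward score (a placeholder value here).
double reward = 1.0;
client.Reward(response.EventId, new RewardRequest(reward));
```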
## What type of reinforcement learning algorithms does Personalizer use?
The explore/exploit traffic allocation is made randomly following the percentage
John Langford coined the name Contextual Bandits (Langford and Zhang [2007]) to describe a tractable subset of reinforcement learning and has worked on a half-dozen papers improving our understanding of how to learn in this paradigm: * Beygelzimer et al. [2011]
-* Dudík et al. [2011a,b]
+* Dudík et al. [2011a, b]
* Agarwal et al. [2014, 2012] * Beygelzimer and Langford [2009] * Li et al. [2010]
cognitive-services Concepts Scalability Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-scalability-performance.md
Title: Scalability and Performance - Personalizer description: "High-performance and high-traffic websites and applications have two main factors to consider with Personalizer for scalability and performance: latency and training throughput."--++ ms.
Some applications require low latencies when returning a rank. Low latencies are
Personalizer works by updating a model that is retrained based on messages sent asynchronously by Personalizer after the Rank and Reward API calls. These messages are sent using an Azure Event Hub for the application.
- It is unlikely most applications will reach the maximum joining and training throughput of Personalizer. While reaching this maximum will not slow down the application, it would imply Event Hub queues are getting filled internally faster than they can be cleaned up.
+ It's unlikely most applications will reach the maximum joining and training throughput of Personalizer. While reaching this maximum won't slow down the application, it would imply event hub queues are getting filled internally faster than they can be cleaned up.
## How to estimate your throughput requirements * Estimate the average number of bytes per ranking event by adding the lengths of the context and action JSON documents. * Divide 20 MB/sec by this estimated average number of bytes.
-For example, if your average payload has 500 features and each is an estimated 20 characters, then each event is approximately 10kb. With these estimates, 20,000,000 / 10,000 = 2,000 events/sec, which is about 173 million events/day.
+For example, if your average payload has 500 features and each is an estimated 20 characters, then each event is approximately 10 KB. With these estimates, 20,000,000 / 10,000 = 2,000 events/sec, which is about 173 million events/day.
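To make the arithmetic explicit, here is a tiny sketch (not from the article) that reproduces the estimate above; the feature count and size are the article's example figures.

```csharp
// Reproduces the throughput estimate: ~500 features * ~20 characters ≈ 10,000 bytes per event.
using System;

double averageBytesPerEvent = 500 * 20;            // ≈ 10 KB per ranking event
double maxBytesPerSecond = 20_000_000;             // 20 MB/sec joining/training throughput
double eventsPerSecond = maxBytesPerSecond / averageBytesPerEvent;   // ≈ 2,000 events/sec
double eventsPerDay = eventsPerSecond * 60 * 60 * 24;                // ≈ 173 million events/day

Console.WriteLine($"{eventsPerSecond:N0} events/sec, {eventsPerDay:N0} events/day");
```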
-If you are reaching these limits, please contact our support team for architecture advice.
+If you're reaching these limits, please contact our support team for architecture advice.
## Next steps
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/encrypt-data-at-rest.md
Title: Personalizer service encryption of data at rest description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Personalizer, and how to enable and manage CMK. -+ Last updated 08/28/2020-+ #Customer intent: As a user of the Personalizer service, I want to learn how encryption at rest works.
cognitive-services Ethics Responsible Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/ethics-responsible-use.md
Title: Ethics and responsible use - Personalizer description: These guidelines are aimed at helping you to implement personalization in a way that helps you build trust in your company and service. Be sure to pause to research, learn and deliberate on the impact of the personalization on people's lives. When in doubt, seek guidance.--++ ms.
cognitive-services How Personalizer Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-personalizer-works.md
Title: How Personalizer Works - Personalizer description: The Personalizer _loop_ uses machine learning to build the model that predicts the top action for your content. The model is trained exclusively on your data that you sent to it with the Rank and Reward calls.--++ ms.
cognitive-services How To Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-create-resource.md
Title: Create Personalizer resource description: In this article, learn how to create a personalizer resource in the Azure portal for each feedback loop. --++ ms.
cognitive-services How To Learning Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-learning-behavior.md
Title: Configure learning behavior description: Apprentice mode gives you confidence in the Personalizer service and its machine learning capabilities, and provides metrics that the service is sent information that can be learned from ΓÇô without risking online traffic.--++ ms.
cognitive-services How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-manage-model.md
Title: Manage model and learning settings - Personalizer description: The machine-learned model and learning settings can be exported for backup in your own source control system.--++ ms.
From the Resource management's section for **Model and learning settings**, revi
## Clear data for your learning loop 1. In the Azure portal, for your Personalizer resource, on the **Model and learning settings** page, select **Clear data**.
-1. In order to clear all data, and reset the learning loop to the original state, select all 3 check boxes.
+1. In order to clear all data, and reset the learning loop to the original state, select all three check boxes.
![In Azure portal, clear data from Personalizer resource.](./media/settings/clear-data-from-personalizer-resource.png) |Value|Purpose| |--|--|
- |Logged personalization and reward data.|This logging data is used in offline evaluations. Clear the data if you are resetting your resource.|
+ |Logged personalization and reward data.|This logging data is used in offline evaluations. Clear the data if you're resetting your resource.|
|Reset the Personalizer model.|This model changes on every retraining. This frequency of training is specified in **upload model frequency** on the **Configuration** page. |
- |Set the learning policy to default.|If you have changed the learning policy as part of an offline evaluation, this resets to the original learning policy.|
+ |Set the learning policy to default.|If you've changed the learning policy as part of an offline evaluation, this resets to the original learning policy.|
1. Select **Clear selected data** to begin the clearing process. Status is reported in Azure notifications, in the top-right navigation.
cognitive-services How To Multi Slot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-multi-slot.md
Title: How to use multi-slot with Personalizer description: Learn how to use multi-slot with Personalizer to improve content recommendations provided by the service. --++
cognitive-services How To Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-offline-evaluation.md
Title: How to perform offline evaluation - Personalizer description: This article will show you how to use offline evaluation to measure effectiveness of your app and analyze your learning loop.--++ ms.
cognitive-services How To Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-settings.md
Title: Configure Personalizer description: Service configuration includes how the service treats rewards, how often the service explores, how often the model is retrained, and how much data is stored.--++ ms.
cognitive-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/quickstart-personalizer-sdk.md
Title: "Quickstart: Create and use learning loop with SDK - Personalizer" description: This quickstart shows you how to create and manage your knowledge base using the Personalizer client library.--++ ms.
cognitive-services Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/terminology.md
Title: Terminology - Personalizer description: Personalizer uses terminology from reinforcement learning. These terms are used in the Azure portal and the APIs.--++ ms.
cognitive-services Tutorial Use Azure Notebook Generate Loop Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-azure-notebook-generate-loop-data.md
Title: "Tutorial: Azure Notebook - Personalizer" description: This tutorial simulates a Personalizer loop _system in an Azure Notebook, which suggests which type of coffee a customer should order. The users and their preferences are stored in a user dataset. Information about the coffee is also available and stored in a coffee dataset.--++ ms.
cognitive-services Tutorial Use Personalizer Chat Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-personalizer-chat-bot.md
Title: Use Personalizer in chat bot - Personalizer description: Customize a C# .NET chat bot with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.--++ ms.
cognitive-services Tutorial Use Personalizer Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-personalizer-web-app.md
Title: Use web app - Personalizer description: Customize a C# .NET web app with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.--++ ms.
cognitive-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/what-is-personalizer.md
Title: What is Personalizer? description: Personalizer is a cloud-based service that allows you to choose the best experience to show to your users, learning from their real-time behavior.--++ ms.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/whats-new.md
Title: What's new - Personalizer description: This article contains news about Personalizer.--++ ms.
cognitive-services Where Can You Use Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/where-can-you-use-personalizer.md
Title: Where and how to use - Personalizer description: Personalizer can be applied in any situation where your application can select the right item, action, or product to display - in order to make the experience better, achieve better business results, or improve productivity.--++ ms.
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
Title: Azure direct routing provisioning and configuration - Azure Communication Services
-description: Learn how to add a Session Border Controller and configure voice routing for Azure Communication Services direct routing
+ Title: Use direct routing to connect existing telephony service
+description: Learn how to add a Session Border Controller and configure voice routing for Azure Communication Services direct routing.
Previously updated : 06/30/2021 Last updated : 05/26/2022 +
-# Session Border Controllers and voice routing
+# Use direct routing to connect to existing telephony service
Azure Communication Services direct routing enables you to connect your existing telephony infrastructure to Azure. The article lists the high-level steps required for connecting a supported Session Border Controller (SBC) to direct routing and how voice routing works for the enabled Communication resource. [!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)]
For information about whether Azure Communication Services direct routing is the
### Configure using Azure portal 1. In the left navigation, select Direct routing under Voice Calling - PSTN and then select Configure from the Session Border Controller tab.
-1. Enter a fully qualified domain name and signaling port for the SBC.
-
-- SBC certificate must match the name; wildcard certificates are supported.-- The *.onmicrosoft.com domain canΓÇÖt be used for the FQDN of the SBC.
-For the full list of requirements, refer to [Azure direct routing infrastructure requirements](./direct-routing-infrastructure.md).
- :::image type="content" source="../media/direct-routing-provisioning/add-session-border-controller.png" alt-text="Adding Session Border Controller.":::
-- When you're done, select Next.
-If everything set up correctly, you should see exchange of OPTIONS messages between Microsoft and your Session Border Controller, user your SBC monitoring/logs to validate the connection.
+2. Enter a fully qualified domain name and signaling port for the SBC.
+ - SBC certificate must match the name; wildcard certificates are supported.
+   - The *.onmicrosoft.com domain can't be used for the FQDN of the SBC.
+
+ For the full list of requirements, refer to [Azure direct routing infrastructure requirements](./direct-routing-infrastructure.md).
+
+ :::image type="content" source="../media/direct-routing-provisioning/add-session-border-controller.png" alt-text="Screenshot of Adding Session Border Controller.":::
+
+3. When you're done, select Next.
+
+ If everything is set up correctly, you should see an exchange of OPTIONS messages between Microsoft and your Session Border Controller. Use your SBC monitoring/logs to validate the connection.
## Voice routing considerations
-Azure Communication Services direct routing has a routing mechanism that allows a call to be sent to a specific Session Border Controller (SBC) based on the called number pattern.
-When you add a direct routing configuration to a resource, all calls made from this resourceΓÇÖs instances (identities) will try a direct routing trunk first. The routing is based on a dialed number and a match in voice routes configured for the resource. If there's a match, the call goes through the direct routing trunk. If there's no match, the next step is to process the `alternateCallerId` parameter of the `callAgent.startCall` method. If the resource is enabled for Voice Calling (PSTN) and has at least one number purchased from Microsoft, the `alternateCallerId` is checked. If the `alternateCallerId` matches one of a purchased number for the resource, the call is routed through the Voice Calling (PSTN) using Microsoft infrastructure. If `alternateCallerId` parameter doesn't match any of the purchased numbers, the call will fail. The diagram below demonstrates the Azure Communication Services voice routing logic.
+Azure Communication Services direct routing has a routing mechanism that allows a call to be sent to a specific SBC based on the called number pattern.
+
+When you add a direct routing configuration to a resource, all calls made from this resource's instances (identities) will try a direct routing trunk first. The routing is based on a dialed number and a match in voice routes configured for the resource.
+
+- If there's a match, the call goes through the direct routing trunk.
+- If there's no match, the next step is to process the `alternateCallerId` parameter of the `callAgent.startCall` method.
+- If the resource is enabled for Voice Calling (PSTN) and has at least one number purchased from Microsoft, the `alternateCallerId` is checked.
+- If the `alternateCallerId` matches a purchased number for the resource, the call is routed through the Voice Calling (PSTN) using Microsoft infrastructure.
+- If `alternateCallerId` parameter doesn't match any of the purchased numbers, the call will fail.
+
+The diagram below demonstrates the Azure Communication Services voice routing logic; a simplified sketch of the same decision flow follows.
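The following is a simplified, hypothetical sketch of that decision flow (it is not the Communication Services SDK); the route patterns, purchased numbers, and method names are assumptions used only to make the order of checks concrete.

```csharp
// Hypothetical sketch of the routing decision described above; names and inputs are assumptions.
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

enum RouteDecision { DirectRoutingTrunk, MicrosoftPstn, CallFails }

static class VoiceRouting
{
    public static RouteDecision Decide(
        string dialedNumber,
        IEnumerable<string> voiceRoutePatterns,   // regex patterns configured on the resource
        string alternateCallerId,                 // value passed to callAgent.startCall
        IEnumerable<string> purchasedNumbers)     // numbers purchased from Microsoft
    {
        // 1. A dialed-number match against any voice route sends the call to the direct routing trunk.
        if (voiceRoutePatterns.Any(pattern => Regex.IsMatch(dialedNumber, pattern)))
            return RouteDecision.DirectRoutingTrunk;

        // 2. Otherwise, alternateCallerId is checked against purchased numbers for PSTN routing.
        if (alternateCallerId != null && purchasedNumbers.Contains(alternateCallerId))
            return RouteDecision.MicrosoftPstn;

        // 3. No match on either path: the call fails.
        return RouteDecision.CallFails;
    }
}
```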
## Voice routing examples The following examples display voice routing in a call flow.
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
### Configure using Azure portal
-Give your Voice Route a name, specify the number pattern using regular expressions, and select SBC for that pattern.
+Give your voice route a name, specify the number pattern using regular expressions, and select SBC for that pattern.
Here are some examples of basic regular expressions: - `^\+\d+$` - matches a telephone number with one or more digits that starts with a plus - `^\+1(\d{10})$` - matches a telephone number with ten digits after a `+1`
You can select multiple SBCs for a single pattern. In such a case, the routing a
### Delete using Azure portal
-#### To delete a Voice Route:
+#### To delete a voice route:
1. In the left navigation, go to Direct routing under Voice Calling - PSTN and then select the Voice Routes tab. 1. Select route or routes you want to delete using a checkbox. 1. Select Remove.
communication-services Virtual Visits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md
Previously updated : 01/10/2022 Last updated : 05/24/2022
Azure and Teams are interoperable. This interoperability gives organizations cho
- **Microsoft 365 + Azure hybrid.** Combine Microsoft 365 Teams and Bookings with a custom Azure application for the consumer experience. Organizations take advantage of Microsoft 365's employee familiarity but customize and embed the consumer visit experience in their own application. - **Azure custom.** Build the entire solution on Azure primitives: the business experience, the consumer experience, and scheduling systems.
-![Diagram of virtual visit implementation options](./media/sample-builder/virtual-visit-options.svg)
+![Diagram of virtual visit implementation options](./media/virtual-visits/virtual-visit-options.svg)
These three **implementation options** are columns in the table below, while each row provides a **use case** and the **enabling technologies**.
There are other ways to customize and combine Microsoft tools to deliver a virtu
## Extend Microsoft 365 with Azure The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. These hybrid configurations are popular because they combine employee familiarity with Microsoft 365 and the ability to customize the consumer experience. They're also a good launching point for understanding more complex and customized architectures. The diagram below shows user steps for a virtual visit:
-![High-level architecture of a hybrid virtual visits solution](./media/sample-builder/virtual-visit-arch.svg)
+![High-level architecture of a hybrid virtual visits solution](./media/virtual-visits/virtual-visit-arch.svg)
1. Consumer schedules the visit using Microsoft 365 Bookings. 2. Consumer gets a visit reminder through SMS and Email. 3. Provider joins the visit using Microsoft Teams.
In this section we're going to use a Sample Builder tool to deploy a Microsoft
This sample takes advantage of the Microsoft 365 Bookings app to power the consumer scheduling experience and create meetings for providers. Thus, the first step is creating a Bookings calendar and getting the Booking page URL from https://outlook.office.com/bookings/calendar.
-![Booking configuration experience](./media/sample-builder/bookings-url.png)
+![Screenshot of Booking configuration experience](./media/virtual-visits/bookings-url.png)
+
+Make sure online meeting is enabled for the calendar by going to https://outlook.office.com/bookings/services.
+
+![Screenshot of Booking services configuration experience](./media/virtual-visits/bookings-services.png)
+
+Then make sure "Add online meeting" is enabled.
+
+![Screenshot of Booking services online meeting configuration experience](./media/virtual-visits/bookings-services-online-meeting.png)
### Step 2 - Sample Builder Use the Sample Builder to customize the consumer experience. You can reach the Sample Builder using this [link](https://aka.ms/acs-sample-builder), or by navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard and configure whether Chat or Screen Sharing should be enabled. Change themes and text to match your application. You can preview your configuration live from the page in both Desktop and Mobile browser form factors.
-[ ![Sample builder start page](./media/sample-builder/sample-builder-start.png)](./media/sample-builder/sample-builder-start.png#lightbox)
+[ ![Screenshot of Sample builder start page](./media/virtual-visits/sample-builder-start.png)](./media/virtual-visits/sample-builder-start.png#lightbox)
### Step 3 - Deploy At the end of the Sample Builder wizard, you can **Deploy to Azure** or download the code as a zip. The sample builder code is publicly available on [GitHub](https://github.com/Azure-Samples/communication-services-virtual-visits-js).
-[ ![Sample builder deployment page](./media/sample-builder/sample-builder-landing.png)](./media/sample-builder/sample-builder-landing.png#lightbox)
+[ ![Screenshot of Sample builder deployment page](./media/virtual-visits/sample-builder-landing.png)](./media/virtual-visits/sample-builder-landing.png#lightbox)
The deployment launches an Azure Resource Manager (ARM) template that deploys the themed application you configured.
-![Sample builder arm template](./media/sample-builder/sample-builder-arm.png)
+![Screenshot of Sample builder arm template](./media/virtual-visits/sample-builder-arm.png)
After walking through the ARM template, you can select **Go to resource group**.
-![Screenshot of a completed Azure Resource Manager Template](./media/sample-builder/azure-complete-deployment.png)
+![Screenshot of a completed Azure Resource Manager Template](./media/virtual-visits/azure-complete-deployment.png)
### Step 4 - Test The Sample Builder creates three resources in the selected Azure subscriptions. The **App Service** is the consumer front end, powered by Azure Communication Services.
-![produced azure resources in azure portal](./media/sample-builder/azure-resources.png)
+![Screenshot of produced azure resources in azure portal](./media/virtual-visits/azure-resources.png)
+
+Opening the App Service's URL and navigating to `https://<YOUR URL>/VISIT` allows you to try out the consumer experience and join a Teams meeting. `https://<YOUR URL>/BOOK` embeds the Booking experience for consumer scheduling.
+
+![Screenshot of final view of azure app service](./media/virtual-visits/azure-resource-final.png)
+
+### Step 5 - Set deployed app URL in Bookings
-Opening the App ServiceΓÇÖs URL and navigating to `https://<YOUR URL>/VISITS` allows you to try out the consumer experience and join a Teams meeting. `https://<YOUR URL>/BOOK` embeds the Booking experience for consumer scheduling.
+Copy your application URL into your calendar's Business Information settings by going to https://outlook.office.com/bookings/businessinformation.
-![final view of azure app service](./media/sample-builder/azure-resource-final.png)
+![Screenshot of final view of bookings business information](./media/virtual-visits/bookings-acs-app-integration-url.png)
## Going to production The Sample Builder gives you the basics of a Microsoft 365 and Azure virtual visit: consumer scheduling via Bookings, consumer joins via custom app, and the provider joins via Teams. However, there are several things to consider as you take this scenario to production. ### Launching patterns
-Consumers want to jump directly to the virtual visit from the scheduling reminders they receive from Bookings. In Bookings, you can provide a URL prefix that will be used in reminders. If your prefix is `https://<YOUR URL>/VISITS`, Bookings will point users to `https://<YOUR URL>/VISITS?=<TEAMID>.`
+Consumers want to jump directly to the virtual visit from the scheduling reminders they receive from Bookings. In Bookings, you can provide a URL prefix that will be used in reminders. If your prefix is `https://<YOUR URL>/VISIT`, Bookings will point users to `https://<YOUR URL>/VISIT?MEETINGURL=<MEETING URL>`.
### Integrate into your existing app The app service generated by the Sample Builder is a stand-alone artifact, designed for desktop and mobile browsers. However, you may already have a website or mobile application and need to migrate these experiences to that existing codebase. The code generated by the Sample Builder should help, but you can also use:
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers.md
Get started with a sample Redis Cache and Python Custom Application [here](https
[Gramine](https://grapheneproject.io/) is a lightweight guest OS, designed to run a single Linux application with minimal host requirements. Gramine can run applications in an isolated environment. There's tooling support for converting existing Docker container applications to Gramine Shielded Containers (GSCs).
-For more information, see the Gramine's [sample application and deployment on AKS](https://graphene.readthedocs.io/en/latest/cloud-deployment.html#azure-kubernetes-service-aks)
+For more information, see Gramine's [sample application and deployment on AKS](https://github.com/gramineproject/contrib/tree/master/Examples/aks-attestation).
### Occlum
Do you have questions about your implementation? Do you want to become an enable
- [Deploy AKS cluster with Intel SGX Confidential VM Nodes](./confidential-enclave-nodes-aks-get-started.md) - [Microsoft Azure Attestation](../attestation/overview.md) - [Intel SGX Confidential Virtual Machines](virtual-machine-solutions-sgx.md)-- [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)
+- [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
properties:
periodSeconds: 3 - type: readiness tcpSocket:
- - port: 8081
+ port: 8081
initialDelaySeconds: 10 periodSeconds: 3 - type: startup
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
You can use your existing MongoDB apps with API for MongoDB by just changing the
This API stores data in a column-oriented schema. Apache Cassandra offers a highly distributed, horizontally scaling approach to storing large volumes of data while offering a flexible approach to a column-oriented schema. Cassandra API in Azure Cosmos DB aligns with this philosophy of approaching distributed NoSQL databases. Cassandra API is wire protocol compatible with Apache Cassandra. You should consider Cassandra API if you want to benefit from the elasticity and fully managed nature of Azure Cosmos DB and still use most of the native Apache Cassandra features, tools, and ecosystem. This means that with Cassandra API you don't need to manage the OS, Java VM, garbage collector, read/write performance, nodes, clusters, etc.
-You can use Apache Cassandra client drivers to connect to the Cassandra API. The Cassandra API enables you to interact with data using the Cassandra Query Language (CQL), and tools like CQL shell, Cassandra client drivers that you're already familiar with. Cassandra API currently only supports OLTP scenarios. Using Cassandra API, you can also use the unique features of Azure Cosmos DB such as change feed. To learn more, see [Cassandra API](cassandra-introduction.md) article.
+You can use Apache Cassandra client drivers to connect to the Cassandra API. The Cassandra API enables you to interact with data using the Cassandra Query Language (CQL) and tools like CQL shell and the Cassandra client drivers that you're already familiar with. Cassandra API currently only supports OLTP scenarios. Using Cassandra API, you can also use the unique features of Azure Cosmos DB such as [change feed](cassandra-change-feed.md). To learn more, see the [Cassandra API](cassandra-introduction.md) article. If you're already familiar with Apache Cassandra, but new to Azure Cosmos DB, we recommend our article on [how to adapt to the Cassandra API if you are coming from Apache Cassandra](./cassandr).
## Gremlin API
cosmos-db Create Graph Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-console.md
You need to have an Azure subscription to create an Azure Cosmos DB account for
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-You also need to install the [Gremlin Console](https://tinkerpop.apache.org/download.html). The **recommended version is v3.4.3** or earlier. (To use Gremlin Console on Windows, you need to install [Java Runtime](https://www.oracle.com/technetwork/java/javase/overview/https://docsupdatetracker.net/index.html), minimum requires Java 8 but it is preferable to use Java 11).
+You also need to install the [Gremlin Console](https://tinkerpop.apache.org/download.html). The **recommended version is v3.4.13**. (To use Gremlin Console on Windows, you need to install the [Java Runtime](https://www.oracle.com/technetwork/java/javase/overview/https://docsupdatetracker.net/index.html); Java 8 is the minimum requirement, but Java 11 is preferable.)
## Create a database account
cosmos-db Create Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-dotnet.md
Now let's clone a Gremlin API app from GitHub, set the connection string, and ru
5. Restore the NuGet packages in the project. The restore operation should include the Gremlin.Net driver, and the Newtonsoft.Json package.
-6. You can also install the Gremlin.Net@v3.4.6 driver manually using the NuGet package manager, or the [NuGet command-line utility](/nuget/install-nuget-client-tools):
+6. You can also install the Gremlin.Net@v3.4.13 driver manually using the NuGet package manager, or the [NuGet command-line utility](/nuget/install-nuget-client-tools):
```bash
- nuget install Gremlin.NET -Version 3.4.6
+ nuget install Gremlin.NET -Version 3.4.13
``` > [!NOTE]
-> The Gremlin API currently only [supports Gremlin.Net up to v3.4.6](gremlin-support.md#compatible-client-libraries). If you install the latest version, you'll receive errors when using the service.
+> The supported Gremlin.NET driver version for Gremlin API is available [here](gremlin-support.md#compatible-client-libraries). The latest released versions of Gremlin.NET may have incompatibilities, so check the linked table for compatibility updates.
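For context, a minimal connection sketch with a supported Gremlin.Net 3.4.x version might look like the following; the host, database, graph, and key values are placeholders, and the GraphSON 2 serializer choice reflects general Cosmos DB Gremlin guidance rather than anything stated in this article.

```csharp
// Sketch only: placeholder host, database, graph, and key values.
using System;
using System.Threading.Tasks;
using Gremlin.Net.Driver;
using Gremlin.Net.Structure.IO.GraphSON;

class Program
{
    static async Task Main()
    {
        var server = new GremlinServer(
            hostname: "<your-account>.gremlin.cosmos.azure.com",
            port: 443,
            enableSsl: true,
            username: "/dbs/<database>/colls/<graph>",
            password: "<primary-key>");

        // Cosmos DB Gremlin API expects GraphSON 2 serialization (an assumption based on general guidance).
        using var client = new GremlinClient(
            server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType);

        var results = await client.SubmitAsync<dynamic>("g.V().count()");
        foreach (var result in results)
        {
            Console.WriteLine(result);
        }
    }
}
```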
## Review the code
cosmos-db Create Graph Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-java.md
In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API
- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). - [Git](https://www.git-scm.com/downloads). -- [Gremlin-driver 3.4.0](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver/3.4.0), this dependency is mentioned in the quickstart sample's pom.xml
+- [Gremlin-driver 3.4.13](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver/3.4.13). This dependency is referenced in the quickstart sample's pom.xml.
## Create a database account
cosmos-db Gremlin Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/gremlin-support.md
Azure Cosmos DB Graph engine closely follows [Apache TinkerPop](https://tinkerpo
The following table shows popular Gremlin drivers that you can use against Azure Cosmos DB:
-| Download | Source | Getting Started | Supported connector version |
+| Download | Source | Getting Started | Supported/Recommended connector version |
| | | | |
-| [.NET](https://tinkerpop.apache.org/docs/3.4.6/reference/#gremlin-DotNet) | [Gremlin.NET on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-dotnet) | [Create Graph using .NET](create-graph-dotnet.md) | 3.4.6 |
-| [Java](https://mvnrepository.com/artifact/com.tinkerpop.gremlin/gremlin-java) | [Gremlin JavaDoc](https://tinkerpop.apache.org/javadocs/current/full/) | [Create Graph using Java](create-graph-java.md) | 3.2.0+ |
-| [Node.js](https://www.npmjs.com/package/gremlin) | [Gremlin-JavaScript on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-javascript) | [Create Graph using Node.js](create-graph-nodejs.md) | 3.3.4+ |
-| [Python](https://tinkerpop.apache.org/docs/3.3.1/reference/#gremlin-python) | [Gremlin-Python on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-python) | [Create Graph using Python](create-graph-python.md) | 3.2.7 |
+| [.NET](https://tinkerpop.apache.org/docs/3.4.13/reference/#gremlin-DotNet) | [Gremlin.NET on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-dotnet) | [Create Graph using .NET](create-graph-dotnet.md) | 3.4.13 |
+| [Java](https://mvnrepository.com/artifact/com.tinkerpop.gremlin/gremlin-java) | [Gremlin JavaDoc](https://tinkerpop.apache.org/javadocs/current/full/) | [Create Graph using Java](create-graph-java.md) | 3.4.13 |
+| [Python](https://tinkerpop.apache.org/docs/3.4.13/reference/#gremlin-python) | [Gremlin-Python on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-python) | [Create Graph using Python](create-graph-python.md) | 3.4.13 |
+| [Gremlin console](https://tinkerpop.apache.org/download.html) | [TinkerPop docs](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) | [Create Graph using Gremlin Console](create-graph-console.md) | 3.4.13 |
+| [Node.js](https://www.npmjs.com/package/gremlin) | [Gremlin-JavaScript on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-javascript) | [Create Graph using Node.js](create-graph-nodejs.md) | 3.4.13 |
| [PHP](https://packagist.org/packages/brightzone/gremlin-php) | [Gremlin-PHP on GitHub](https://github.com/PommeVerte/gremlin-php) | [Create Graph using PHP](create-graph-php.md) | 3.1.0 | | [Go Lang](https://github.com/supplyon/gremcos/) | [Go Lang](https://github.com/supplyon/gremcos/) | | This library is built by external contributors. The Azure Cosmos DB team doesn't offer any support or maintain the library. |
-| [Gremlin console](https://tinkerpop.apache.org/download.html) | [TinkerPop docs](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) | [Create Graph using Gremlin Console](create-graph-console.md) | 3.2.0 + |
+
+> [!NOTE]
+> Gremlin client driver versions __3.5.*__ and __3.6.*__ have known compatibility issues, so we recommend using the latest supported 3.4.* driver versions listed above.
+> This table will be updated when compatibility issues have been addressed for these newer driver versions.
## Supported Graph Objects
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
Previously updated : 04/06/2022 Last updated : 05/26/2022 # Secure access to data in Azure Cosmos DB
Azure Cosmos DB provides three ways to control access to your data.
| Access control type | Characteristics | ||| | [Primary/secondary keys](#primary-keys) | Shared secret allowing any management or data operation. It comes in both read-write and read-only variants. |
-| [Role-based access control](#rbac) | Fine-grained, role-based permission model using Azure Active Directory (AAD) identities for authentication. |
+| [Role-based access control](#rbac) | Fine-grained, role-based permission model using Azure Active Directory (Azure AD) identities for authentication. |
| [Resource tokens](#resource-tokens)| Fine-grained permission model based on native Azure Cosmos DB users and permissions. | ## <a id="primary-keys"></a> Primary/secondary keys
CosmosClient client = new CosmosClient(endpointUrl, authorizationKey);
Azure Cosmos DB exposes a built-in role-based access control (RBAC) system that lets you: -- Authenticate your data requests with an Azure Active Directory (AAD) identity.
+- Authenticate your data requests with an Azure Active Directory identity.
- Authorize your data requests with a fine-grained, role-based permission model. Azure Cosmos DB RBAC is the ideal access control method in situations where:
For an example of a middle tier service used to generate or broker resource toke
Azure Cosmos DB users are associated with a Cosmos database. Each database can contain zero or more Cosmos DB users. The following code sample shows how to create a Cosmos DB user using the [Azure Cosmos DB .NET SDK v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement). ```csharp
-//Create a user.
-Database database = benchmark.client.GetDatabase("SalesDatabase");
-
+// Create a user.
+Database database = client.GetDatabase("SalesDatabase");
User user = await database.CreateUserAsync("User 1"); ```
A permission resource is associated with a user and assigned to a specific resou
If you enable the [diagnostic logs on data-plane requests](cosmosdb-monitor-resource-logs.md), the following two properties corresponding to the permission are logged:
-* **resourceTokenPermissionId** - This property indicates the resource token permission Id that you have specified.
+* **resourceTokenPermissionId** - This property indicates the resource token permission ID that you have specified.
* **resourceTokenPermissionMode** - This property indicates the permission mode that you have set when creating the resource token. The permission mode can have values such as "all" or "read".
The following code sample shows how to create a permission resource, read the re
```csharp // Create a permission on a container and specific partition key value Container container = client.GetContainer("SalesDatabase", "OrdersContainer");
-user.CreatePermissionAsync(
+await user.CreatePermissionAsync(
new PermissionProperties(
- id: "permissionUser1Orders",
- permissionMode: PermissionMode.All,
+ id: "permissionUser1Orders",
+ permissionMode: PermissionMode.All,
container: container, resourcePartitionKey: new PartitionKey("012345"))); ```
user.CreatePermissionAsync(
The following code snippet shows how to retrieve the permission associated with the user created above and instantiate a new CosmosClient on behalf of the user, scoped to a single partition key. ```csharp
-//Read a permission, create user client session.
-PermissionProperties permissionProperties = await user.GetPermission("permissionUser1Orders")
+// Read a permission, create user client session.
+PermissionResponse permissionResponse = await user.GetPermission("permissionUser1Orders").ReadAsync();
-CosmosClient client = new CosmosClient(accountEndpoint: "MyEndpoint", authKeyOrResourceToken: permissionProperties.Token);
+CosmosClient client = new CosmosClient(accountEndpoint: "MyEndpoint", authKeyOrResourceToken: permissionResponse.Resource.Token);
``` ## Differences between RBAC and resource tokens
CosmosClient client = new CosmosClient(accountEndpoint: "MyEndpoint", authKeyOrR
|--|--|--| | Authentication | With Azure Active Directory (Azure AD). | Based on the native Azure Cosmos DB users<br>Integrating resource tokens with Azure AD requires extra work to bridge Azure AD identities and Azure Cosmos DB users. | | Authorization | Role-based: role definitions map allowed actions and can be assigned to multiple identities. | Permission-based: for each Azure Cosmos DB user, you need to assign data access permissions. |
-| Token scope | An AAD token carries the identity of the requester. This identity is matched against all assigned role definitions to perform authorization. | A resource token carries the permission granted to a specific Azure Cosmos DB user on a specific Azure Cosmos DB resource. Authorization requests on different resources may requires different tokens. |
-| Token refresh | The AAD token is automatically refreshed by the Azure Cosmos DB SDKs when it expires. | Resource token refresh is not supported. When a resource token expires, a new one needs to be issued. |
+| Token scope | An Azure AD token carries the identity of the requester. This identity is matched against all assigned role definitions to perform authorization. | A resource token carries the permission granted to a specific Azure Cosmos DB user on a specific Azure Cosmos DB resource. Authorization requests on different resources may require different tokens. |
+| Token refresh | The Azure AD token is automatically refreshed by the Azure Cosmos DB SDKs when it expires. | Resource token refresh is not supported. When a resource token expires, a new one needs to be issued. |
## Add users and assign roles
cost-management-billing Cost Management Billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/cost-management-billing-overview.md
Title: Overview of Cost Management + Billing-+ description: You use Cost Management + Billing features to conduct billing administrative tasks and manage billing access to costs. You also use the features to monitor and control Azure spending and to optimize Azure resource use. keywords:
cost-management-billing Assign Access Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/assign-access-acm-data.md
Title: Assign access to Cost Management data-+ description: This article walks you though assigning permission to Cost Management data for various access scopes.
cost-management-billing Aws Integration Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-manage.md
Title: Manage AWS costs and usage in Cost Management-+ description: This article helps you understand how to use cost analysis and budgets in Cost Management to manage your AWS costs and usage.
cost-management-billing Aws Integration Set Up Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
Title: Set up AWS integration with Cost Management-+ description: This article walks you through setting up and configuring AWS Cost and Usage report integration with Cost Management.
cost-management-billing Cost Analysis Built In Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-built-in-views.md
Title: Use built-in views in Cost analysis-+ description: This article helps you understand when to use which view, how each one provides unique insights about your costs and recommended next steps to investigate further.
cost-management-billing Cost Analysis Common Uses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-common-uses.md
Title: Common cost analysis uses in Cost Management-+ description: This article explains how you can get results for common cost analysis tasks in Cost Management.
cost-management-billing Cost Management Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-management-error-codes.md
Title: Troubleshoot common Cost Management errors-+ description: This article describes common Cost Management errors and provides information about solutions.
cost-management-billing Cost Mgt Alerts Monitor Usage Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
Title: Monitor usage and spending with cost alerts in Cost Management-+ description: This article describes how cost alerts help you monitor usage and spending in Cost Management.
cost-management-billing Cost Mgt Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-best-practices.md
Title: Optimize your cloud investment with Cost Management-+ description: This article helps get the most value out of your cloud investments, reduce your costs, and evaluate where your money is being spent.
cost-management-billing Get Started Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/get-started-partners.md
Title: Get started with Cost Management for partners-+ description: This article explains how partners use Cost Management features and how they enable access for their customers.
cost-management-billing Group Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/group-filter.md
Title: Group and filter options in Cost Management-+ description: This article explains how to use group and filter options in Cost Management.
cost-management-billing Ingest Azure Usage At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/ingest-azure-usage-at-scale.md
Title: Retrieve large cost datasets recurringly with exports from Cost Management-+ description: This article helps you regularly export large amounts of data with exports from Cost Management.
cost-management-billing Reporting Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/reporting-get-started.md
For more information about credits, see [Track Microsoft Customer Agreement Azur
- [Explore and analyze costs with cost analysis](quick-acm-cost-analysis.md). - [Analyze Azure costs with the Power BI App](analyze-cost-data-azure-cost-management-power-bi-template-app.md).-- [Connect to Azure Cost Management data in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
+- [Connect to Microsoft Cost Management data in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
- [Create and manage exported data](tutorial-export-acm-data.md).
cost-management-billing Save Share Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/save-share-views.md
Title: Save and share customized views-+ description: This article explains how to save and share a customized view with others.
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
Title: Tutorial - Create and manage exported data from Cost Management-+ description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems.
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
Title: Understand Cost Management data-+ description: This article helps you better understand data that's included in Cost Management and how frequently it's processed, collected, shown, and closed.
cost-management-billing Understand Work Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-work-scopes.md
Title: Understand and work with Cost Management scopes-+ description: This article helps you understand billing and resource management scopes available in Azure and how to use the scopes in Cost Management and APIs.
cost-management-billing Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/elevate-access-global-admin.md
Title: Elevate access to manage billing accounts-+ description: Describes how to elevate access for a Global Administrator to manage billing accounts using the Azure portal or REST API.
cost-management-billing Reservation Amortization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-amortization.md
Title: View amortized reservation costs-+ description: This article helps you understand what amortized reservation costs are and how to view them in cost analysis.
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
Title: Identify anomalies and unexpected changes in cost-+ description: Learn how to identify anomalies and unexpected changes in cost.
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 04/13/2022 Last updated : 05/27/2022
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md
Previously updated : 09/29/2021 Last updated : 05/26/2022 # Copy data from and to Salesforce using Azure Data Factory or Azure Synapse Analytics
Specifically, this Salesforce connector supports:
- Salesforce Developer, Professional, Enterprise, or Unlimited editions. - Copying data from and to Salesforce production, sandbox, and custom domain.
-The Salesforce connector is built on top of the Salesforce REST/Bulk API. When copying data from Salesforce, the connector automatically chooses between REST and Bulk APIs based on the data size ΓÇô when the result set is large, Bulk API is used for better performance; You can explicitly set the API version used to read/write data via [`apiVersion` property](#linked-service-properties) in linked service.
+The Salesforce connector is built on top of the Salesforce REST/Bulk API. When copying data from Salesforce, the connector automatically chooses between the REST and Bulk APIs based on the data size: when the result set is large, Bulk API is used for better performance. You can explicitly set the API version used to read/write data via the [`apiVersion` property](#linked-service-properties) in the linked service. When copying data to Salesforce, the connector uses Bulk API v1.
>[!NOTE] >The connector no longer sets default version for Salesforce API. For backward compatibility, if a default API version was set before, it keeps working. The default value is 45.0 for source, and 40.0 for sink.
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Previously updated : 04/13/2022 Last updated : 05/27/2022 # Source transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| Connector | Format | Dataset/inline |
| --- | --- | --- |
| [Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓ |
-| [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓ |
+| [Asana (Preview)](connector-asana.md#mapping-data-flow-properties) | | -/✓ |
+| [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓ |
| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md#mapping-data-flow-properties) | | ✓/- |
| [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓ |
| [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Common Data Model](format-common-data-model.md#source-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>-/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓ |
devtest-labs How To Move Schedule To New Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-schedule-to-new-region.md
Title: How to move a schedule to another region
-description: This article explains how to move schedules to another Azure region.
+ Title: Move a schedule to another region
+description: This article explains how to move a top level schedule to another Azure region.
Last updated 05/09/2022
-# Move schedules to another region
+# Move a schedule to another region
-In this article, you'll learn how to move schedules by using an Azure Resource Manager (ARM) template.
+In this article, you'll learn how to move a schedule by using an Azure Resource Manager (ARM) template.
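As a rough sketch of the overall flow, assuming the Azure CLI and hypothetical resource group names, you export the existing resources as an ARM template, adjust region-specific values such as the location, and then deploy the template to the target region:

```azurecli
# Export the source resource group (including the schedule) as an ARM template
az group export --name <source-resource-group> > template.json

# After editing the template, deploy it to a resource group in the target region
az deployment group create --resource-group <target-resource-group> --template-file template.json
```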
DevTest Labs supports two types of schedules.
event-hubs Exceptions Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/exceptions-dotnet.md
try
{ // Read events using the consumer client }
-catch (EventHubsException ex) where
+catch (EventHubsException ex) when
(ex.Reason == EventHubsException.FailureReason.ConsumerDisconnected) { // Take action based on a consumer being disconnected
catch (EventHubsException ex) when
```

## Next steps
-There are other exceptions that are documented in the [legacy article](event-hubs-messaging-exceptions.md). Some of them apply only to the legacy Event Hubs .NET client library.
+There are other exceptions that are documented in the [legacy article](event-hubs-messaging-exceptions.md). Some of them apply only to the legacy Event Hubs .NET client library.
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), is the standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and the web browser remain private and encrypted.
-To meet your security or compliance requirements, Azure Front Door (AFD) supports end-to-end TLS encryption. Front Door TLS/SSL offload terminates the TLS connection, decrypts the traffic at the Azure Front Door, and re-encrypts the traffic before forwarding it to the backend. Since connections to the backend happen over the public IP. It's highly recommended you configure HTTPS as the forwarding protocol on your Azure Front Door to enforce end-to-end TLS encryption from the client to the backend.
+To meet your security or compliance requirements, Azure Front Door (AFD) supports end-to-end TLS encryption. Front Door TLS/SSL offload terminates the TLS connection, decrypts the traffic at the Azure Front Door, and re-encrypts the traffic before forwarding it to the backend. Since connections to the backend happen over the public IP, it is highly recommended you configure HTTPS as the forwarding protocol on your Azure Front Door to enforce end-to-end TLS encryption from the client to the backend. TLS/SSL offload is also supported if you deploy a private backend with AFD Premium using the [PrivateLink](private-link.md) feature.
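For example, on an Azure Front Door Standard/Premium profile you can enforce HTTPS to the origin by setting the route's forwarding protocol. A sketch with the Azure CLI and hypothetical resource names:

```azurecli
az afd route update \
    --resource-group myResourceGroup \
    --profile-name myFrontDoorProfile \
    --endpoint-name myEndpoint \
    --route-name myRoute \
    --forwarding-protocol HttpsOnly
```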
## End-to-end TLS encryption
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
After you create an Azure Front Door Standard/Premium profile, the default front
## Add a new custom domain > [!NOTE]
-> * When using Azure DNS, creating Apex domains isn't supported on Azure Front Door currently. There are other DNS providers that support CNAME flattening or DNS chasing that will allow APEX domains to be used for Azure Front Door Standard/Premium.
> * If a custom domain is validated in one of the Azure Front Door Standard, Premium, classic, or classic Microsoft CDN profiles, then it can't be added to another profile.

A custom domain is managed in the Domains section of the portal. A custom domain can be created and validated before it's associated with an endpoint. A custom domain and its subdomains can be associated with only a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Front Doors. You can also map custom domains with different subdomains to the same Front Door endpoint.
A custom domain is managed by Domains section in the portal. A custom domain can
| Internal error | If you see this error, retry by clicking the **Refresh** or **Regenerate** buttons. If you're still experiencing issues, raise a support request. |

> [!NOTE]
-> 1. If the **Regenerate** button doesn't work, delete and recreate the domain.
-> 2. If the domain state doesn't reflect as expected, select the **Refresh** button.
+> 1. The default TTL for a TXT record is 1 hour. When you need to regenerate the TXT record for revalidation, pay attention to the TTL of the previous TXT record. If the previous record hasn't expired, validation will fail until it does.
+> 2. If the **Regenerate** button doesn't work, delete and recreate the domain.
+> 3. If the domain state doesn't reflect as expected, select the **Refresh** button.
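If your domain is hosted in Azure DNS, adding the validation TXT record with the Azure CLI looks roughly like the following sketch. The zone, subdomain, and token values are hypothetical; Azure Front Door expects the record at `_dnsauth.<subdomain>`.

```azurecli
az network dns record-set txt add-record \
    --resource-group myResourceGroup \
    --zone-name contoso.com \
    --record-set-name _dnsauth.www \
    --value "<validation token from the portal>"
```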
## Associate the custom domain with your Front Door Endpoint
frontdoor Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/troubleshoot-issues.md
Responses to these requests might also contain an HTML error page in the respons
There are several possible causes for this symptom. The overall reason is that your HTTP request isn't fully RFC-compliant.
-An example of noncompliance is a `POST` request sent without either a **Content-Length** or a **Transfer-Encoding** header. An example would be using `curl -X POST https://example-front-door.domain.com`. This request doesn't meet the requirements set out in [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.3.2). Azure Front Door would block it with an HTTP 411 response.
+An example of noncompliance is a `POST` request sent without either a **Content-Length** or a **Transfer-Encoding** header. An example would be using `curl -X POST https://example-front-door.domain.com`. This request doesn't meet the requirements set out in [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.3.2). Azure Front Door would block it with an HTTP 411 response. Such requests will not be logged.
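For example, the same request becomes compliant once it declares a body length, even a zero-length one:

```http
POST / HTTP/1.1
Host: example-front-door.domain.com
Content-Length: 0
```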
This behavior is separate from the web application firewall (WAF) functionality of Azure Front Door. Currently, there's no way to disable this behavior. All HTTP requests must meet the requirements, even if the WAF functionality isn't in use.
governance Create Management Group Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-azure-cli.md
Title: "Quickstart: Create a management group with the Azure CLI"
description: In this quickstart, you use the Azure CLI to create a management group to organize your resources into a resource hierarchy. Last updated 08/17/2021 -
+ms.tool: azure-cli
# Quickstart: Create a management group with the Azure CLI
governance Assign Policy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-terraform.md
Title: "Quickstart: New policy assignment with Terraform"
description: In this quickstart, you use Terraform and HCL syntax to create a policy assignment to identify non-compliant resources. Last updated 08/17/2021
+ms.tool: terraform
# Quickstart: Create a policy assignment to identify non-compliant resources using Terraform
hdinsight Apache Hadoop On Premises Migration Best Practices Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-architecture.md
Previously updated : 12/06/2019 Last updated : 05/27/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - architecture best practices
Some HDInsight Hive metastore best practices are as follows:
Read the next article in this series: -- [Infrastructure best practices for on-premises to Azure HDInsight Hadoop migration](apache-hadoop-on-premises-migration-best-practices-infrastructure.md)
+- [Infrastructure best practices for on-premises to Azure HDInsight Hadoop migration](apache-hadoop-on-premises-migration-best-practices-infrastructure.md)
hdinsight Troubleshoot Lost Key Vault Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-lost-key-vault-access.md
Title: Azure HDInsight clusters with disk encryption lose Key Vault access
description: Troubleshooting steps and possible resolutions for Key Vault access issues when interacting with Azure HDInsight clusters. Previously updated : 01/30/2020 Last updated : 05/27/2022 # Scenario: Azure HDInsight clusters with disk encryption lose Key Vault access
hdinsight Hdinsight Authorize Users To Ambari https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-authorize-users-to-ambari.md
description: 'How to manage Ambari user and group permissions for HDInsight clus
Previously updated : 11/27/2019 Last updated : 05/27/2022 # Authorize users for Apache Ambari Views
We have assigned our Azure AD domain user "hiveuser2" to the *Cluster User* role
* [Manage ESP HDInsight clusters](./domain-joined/apache-domain-joined-manage.md) * [Use the Apache Hive View with Apache Hadoop in HDInsight](hadoop/apache-hadoop-use-hive-ambari-view.md) * [Synchronize Azure AD users to the cluster](hdinsight-sync-aad-users-to-cluster.md)
-* [Manage HDInsight clusters by using the Apache Ambari REST API](./hdinsight-hadoop-manage-ambari-rest-api.md)
+* [Manage HDInsight clusters by using the Apache Ambari REST API](./hdinsight-hadoop-manage-ambari-rest-api.md)
hdinsight Hdinsight Business Continuity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-business-continuity-architecture.md
description: This article discusses the different possible business continuity a
keywords: hadoop high availability Previously updated : 10/07/2020 Last updated : 05/27/2022 # Azure HDInsight business continuity architectures
To learn more about the items discussed in this article, see:
* [Azure HDInsight business continuity](./hdinsight-business-continuity.md) * [Azure HDInsight highly available solution architecture case study](./hdinsight-high-availability-case-study.md)
-* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)
+* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)
hdinsight Hdinsight Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-business-continuity.md
description: This article gives an overview of best practices, single region ava
keywords: hadoop high availability Previously updated : 10/08/2020 Last updated : 05/27/2022 # Azure HDInsight business continuity
hdinsight Hdinsight For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-for-vscode.md
Title: Azure HDInsight for Visual Studio Code
description: Learn how to use the Spark & Hive Tools (Azure HDInsight) for Visual Studio Code. Use the tools to create and submit queries and scripts. Previously updated : 10/20/2020 Last updated : 05/27/2022
From the menu bar, go to **View** > **Command Palette**, and then enter **Azure:
## Next steps
-For a video that demonstrates using Spark & Hive for Visual Studio Code, see [Spark & Hive for Visual Studio Code](https://go.microsoft.com/fwlink/?linkid=858706).
+For a video that demonstrates using Spark & Hive for Visual Studio Code, see [Spark & Hive for Visual Studio Code](https://go.microsoft.com/fwlink/?linkid=858706).
hdinsight Hdinsight Hadoop Create Linux Clusters Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-azure-powershell.md
Title: Create Apache Hadoop clusters using PowerShell - Azure HDInsight
description: Learn how to create Apache Hadoop, Apache HBase, Apache Storm, or Apache Spark clusters on Linux for HDInsight by using Azure PowerShell.
+ms.tool: azure-powershell
Last updated 12/18/2019
hdinsight Hdinsight High Availability Case Study https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-high-availability-case-study.md
description: This article is a fictional case study of a possible Azure HDInsigh
keywords: hadoop high availability Previously updated : 10/08/2020 Last updated : 05/27/2022 # Azure HDInsight highly available solution architecture case study
hdinsight Llap Schedule Based Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/llap-schedule-based-autoscale-best-practices.md
Title: HDInsight Interactive Query Autoscale(Schedule-Based) Guide and Best Practices
+ Title: HDInsight Interactive Query Autoscale (schedule-based) guide and best practices
description: LLAP Autoscale Guide and Best Practices
Last updated 05/25/2022
-# Azure HDInsight Interactive Query Cluster (Hive LLAP) Schedule Based Autoscale
+# Azure HDInsight Interactive Query cluster (Hive LLAP) schedule-based autoscale
This document provides the onboarding steps to enable schedule-based autoscale for Interactive Query (LLAP) Cluster type in Azure HDInsight. It includes some of the best practices to operate Autoscale in Hive-LLAP.
Disabling the WLM should be before the actual schedule of the scaling event and
Each time the Interactive Query cluster scales, the Autoscale smart probe performs a silent update of the number of LLAP daemons and the concurrency in Ambari, since these configurations are static. These configs are updated so that, if autoscale is later disabled or the LLAP service restarts for some reason, the service still utilizes all the worker nodes present after the resize. An explicit restart of services to handle these stale config changes isn't required.
-### **Next Steps**
+### **Next steps**
If the above guidelines didn't resolve your query, visit one of the following. * Get answers from Azure experts through [Azure Community Support](https://azure.microsoft.com/support/community/).
If the above guidelines didn't resolve your query, visit one of the following.
* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
-## **Other References:**
+## **Other references:**
* [Interactive Query in Azure HDInsight](./apache-interactive-query-get-started.md) * [Create a cluster with Schedule-based Autoscaling](./apache-interactive-query-get-started.md) * [Azure HDInsight Interactive Query Cluster (Hive LLAP) sizing guide](./hive-llap-sizing-guide.md)
- * [Hive Warehouse Connector in Azure HDInsight](./apache-hive-warehouse-connector.md)
+ * [Hive Warehouse Connector in Azure HDInsight](./apache-hive-warehouse-connector.md)
hdinsight Overview Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/overview-data-lake-storage-gen2.md
description: Overview of Data Lake Storage Gen2 in HDInsight.
Previously updated : 04/21/2020 Last updated : 05/27/2022 # Azure Data Lake Storage Gen2 overview in HDInsight
For more information, see [Use the Azure Data Lake Storage Gen2 URI](../storage/
* [Introduction to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) * [Introduction to Azure Storage](../storage/common/storage-introduction.md)
-* [Azure Data Lake Storage Gen1 overview](./overview-data-lake-storage-gen1.md)
+* [Azure Data Lake Storage Gen1 overview](./overview-data-lake-storage-gen1.md)
hdinsight Zookeeper Troubleshoot Quorum Fails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/zookeeper-troubleshoot-quorum-fails.md
Title: Apache ZooKeeper server fails to form a quorum in Azure HDInsight
description: Apache ZooKeeper server fails to form a quorum in Azure HDInsight Previously updated : 05/20/2020 Last updated : 05/28/2022 # Apache ZooKeeper server fails to form a quorum in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
- Get answers from Azure experts through [Azure Community Support](https://azure.microsoft.com/support/community/). - Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.-- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hpc-cache Troubleshoot Nas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/troubleshoot-nas.md
description: Tips to avoid and fix configuration errors and other problems that
Previously updated : 05/26/2022 Last updated : 05/27/2022
Check these settings both on the NAS itself and also on any firewalls between th
## Check root squash settings
-Root squash settings can disrupt file access if they are improperly configured. You should check that the settings on each storage export and on the matching HPC Cache client access policies are consistent.
+Root squash settings can disrupt file access if they are improperly configured. You should check that the settings on each storage export and on the matching HPC Cache client access policies are appropriate.
Root squash prevents requests sent by a local superuser root on the client from being sent to a back-end storage system as root. It reassigns requests from root to a non-privileged user ID (UID) like 'nobody'.
Root squash can be configured in an HPC Cache system in these places:
* At the storage export - You can configure your storage system to reassign incoming requests from root to a non-privileged user ID (UID).
-These two settings should match. That is, if a storage system export squashes root, you should change its HPC Cache client access rule to also squash root. If the settings don't match, you can have access problems when you try to read or write to the back-end storage system through the HPC Cache.
+If your storage system export squashes root, you should update the HPC Cache client access rule for that storage target to also squash root. If not, you can have access problems when you try to read or write to the back-end storage system through the HPC Cache.
-This table illustrates the behavior for different root squash scenarios when a client request is sent as UID 0 (root). The scenarios marked with * are ***not recommended*** because they can cause access problems.
+This table illustrates the behavior for different root squash scenarios when a client request is sent as UID 0 (root). The scenario marked with * is ***not recommended*** because it can cause access problems.
| Setting | UID sent from client | UID sent from HPC Cache | Effective UID on back-end storage |
|--|--|--|--|
| no root squash | 0 (root) | 0 (root) | 0 (root) |
-| *root squash at HPC Cache only | 0 (root) | 65534 (nobody) | 65534 (nobody) |
+| root squash at HPC Cache only | 0 (root) | 65534 (nobody) | 65534 (nobody) |
| *root squash at NAS storage only | 0 (root) | 0 (root) | 65534 (nobody) |
| root squash at HPC Cache and NAS | 0 (root) | 65534 (nobody) | 65534 (nobody) |
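For example, on a Linux-based NFS server the export-side squash setting typically lives in `/etc/exports`. The exact option names vary by NAS vendor, so treat the following entry as a hypothetical sketch; if the export squashes root like this, the matching HPC Cache client access policy should also squash root:

```
/exports/data   *(rw,sync,no_subtree_check,root_squash,anonuid=65534,anongid=65534)
```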
This table illustrates the behavior for different root squash scenarios when a c
## Check access on directory paths <!-- previously linked in prereqs article as allow-root-access-on-directory-paths -->
+<!-- check if this is still accurate - 05-2022 -->
For NAS systems that export hierarchical directories, check that Azure HPC Cache has appropriate access to each export level in the path to the files you are using.
iot-central Howto Monitor Devices Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-monitor-devices-azure-cli.md
Last updated 08/30/2021 --+
+ms.tool: azure-cli
+ # This topic applies to device developers and solution builders.
iot-central Howto Upload File Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-upload-file-rest-api.md
+
+ Title: Use the REST API to add an upload storage account configuration in Azure IoT Central
+description: How to use the IoT Central REST API to add an upload storage account configuration in an application
++ Last updated : 05/12/2022++++++
+# How to use the IoT Central REST API to upload a file
+
+IoT Central lets you upload media and other files from connected devices to cloud storage. You configure the file upload capability in your IoT Central application, and then implement file uploads in your device code. In this article, learn how to:
+
+* Use the REST API to configure the file upload capability in your IoT Central application.
+* Test the file upload by running some sample device code.
+
+The IoT Central REST API lets you:
+
+* Add a file upload storage account configuration
+* Update a file upload storage account configuration
+* Get the file upload storage account configuration
+* Delete the file upload storage configuration
+
+Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
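+For example, a call that uses an Azure AD bearer token passes it in the `Authorization` header; an IoT Central API token goes in the same header. The token value below is a placeholder:
+
+```http
+GET https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+Authorization: Bearer <your bearer token>
+```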
+
+For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
++
+## Prerequisites
+
+To test the file upload, install the following prerequisites in your local development environment:
+
+* [Node.js](https://nodejs.org/en/download/)
+* [Visual Studio Code](https://code.visualstudio.com/Download)
+
+## Add a file upload storage account configuration
+
+### Create a storage account
+
+To use the Azure Storage REST API, you need a bearer token for the `management.azure.com` resource. To get a bearer token, you can use the Azure CLI:
+
+```azurecli
+az account get-access-token --resource https://management.azure.com
+```
+
+If you don't have a storage account for your blobs, you can use the following request to create one in your subscription:
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}?api-version=2021-09-01
+
+```
+
+The request URI has the following parameters:
+
+* `subscriptionId` : The ID of the target subscription.
+* `resourceGroupName`: The name of the resource group in your subscription. The name is case insensitive.
+* `accountName` : The name of the storage account within the specified resource group. Storage account names must be between 3 and 24 characters in length and use numbers and lower-case letters only.
+
+The request body has the following required fields:
+
+* `kind` : Type of storage account
+* `location` : The geo-location where the resource lives
+* `sku`: The SKU name.
+
+```json
+{
+ "kind": "BlockBlobStorage",
+ "location": "West US",
+    "sku": {
+        "name": "Premium_LRS"
+    }
+}
+```
+
+### Create a container
+
+Use the following request to create a container called `fileuploads` in your storage account for your blobs:
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/blobServices/default/containers/fileuploads?api-version=2021-09-01
+```
+
+* `containerName` : Blob container names must be between 3 and 63 characters in length and use numbers, lower-case letters and dash (-) only. Every dash (-) character must be immediately preceded and followed by a letter or number.
+
+Send an empty request body with this request, as shown in the following example:
+
+```json
+{
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "/subscriptions/your-subscription-id/resourceGroups/yourResourceGroupName/providers/Microsoft.Storage/storageAccounts/yourAccountName/blobServices/default/containers/fileuploads",
+ "name": "fileuploads",
+ "type": "Microsoft.Storage/storageAccounts/blobServices/containers"
+}
+```
+
+### Get the storage account keys
+
+Use the following request to retrieve the storage account keys that you need when you configure the upload in IoT Central:
+
+```http
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/listKeys?api-version=2021-09-01
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "keys": [
+ {
+ "creationTime": "2022-05-19T19:22:40.9132287Z",
+ "keyName": "key1",
+ "value": "j3UTm**************==",
+ "permissions": "FULL"
+ },
+ {
+ "creationTime": "2022-05-19T19:22:40.9132287Z",
+ "keyName": "key2",
+ "value": "Nbs3W**************==",
+ "permissions": "FULL"
+ }
+ ]
+}
+```
+
+### Create the upload configuration
+
+Use the following request to create a file upload blob storage account configuration in your IoT Central application:
+
+```http
+PUT https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+```
+
+The request body has the following fields:
+
+* `account`: The name of the storage account to upload the file to.
+* `connectionString`: The connection string to connect to the storage account. Use one of the `value` values from the previous `listKeys` request as the `AccountKey` value.
+* `container`: The name of the container inside the storage account. The following example uses the name `fileuploads`.
+* `etag`: ETag to prevent conflict with multiple uploads
+* `sasTtl`: The amount of time the device's request to upload a file is valid before it expires, expressed as an ISO 8601 duration.
+
+```json
+{
+ "account": "yourAccountName",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;BlobEndpoint=https://yourAccountName.blob.core.windows.net/",
+ "container": "fileuploads",
+ "sasTtl": "PT1H"
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "account": "yourAccountName",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;BlobEndpoint=https://yourAccountName.blob.core.windows.net/",
+ "container": "fileuploads",
+ "sasTtl": "PT1H",
+ "state": "pending",
+ "etag": "\"7502ac89-0000-0300-0000-627eaf100000\""
+
+}
+
+```
+
+## Get the file upload storage account configuration
+
+Use the following request to retrieve details of a file upload blob storage account configuration in your IoT Central application:
++
+```http
+GET https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "account": "yourAccountName",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;BlobEndpoint=https://yourAccountName.blob.core.windows.net/",
+ "container": "yourContainerName",
+ "state": "succeeded",
+ "etag": "\"7502ac89-0000-0300-0000-627eaf100000\""
+
+}
+```
+
+## Update the file upload storage account configuration
+
+Use the following request to update a file upload blob storage account configuration in your IoT Central application:
+
+```http
+PATCH https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+```
+
+```json
+{
+ "account": "yourAccountName",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;BlobEndpoint=https://yourAccountName.blob.core.windows.net/",
+ "container": "yourContainerName2",
+ "sasTtl": "PT1H"
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+
+{
+ "account": "yourAccountName",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;BlobEndpoint=https://yourAccountName.blob.core.windows.net/",
+ "container": "yourContainerName2",
+ "sasTtl": "PT1H",
+ "state": "succeeded",
+ "etag": "\"7502ac89-0000-0300-0000-627eaf100000\""
+}
+```
+
+## Remove the file upload storage account configuration
+
+Use the following request to delete a storage account configuration:
+
+```http
+DELETE https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+```
+
+## Test file upload
+
+After you [configure file uploads](#add-a-file-upload-storage-account-configuration) in your IoT Central application, you can test it with the sample code. If you haven't already cloned the file upload sample repository, use the following commands to clone it to a suitable location on your local machine and install the dependent packages:
+
+```
+git clone https://github.com/azure-Samples/iot-central-file-upload-device
+cd iotc-file-upload-device
+npm i
+npm run build
+```
+
+### Create the device template and import the model
+
+To test the file upload, you run a sample device application. Create a device template for the sample device to use.
+
+1. Open your application in IoT Central UI.
+
+1. Navigate to the **Device Templates** tab in the left pane, select **+ New**:
+
+1. Choose **IoT device** as the template type.
+
+1. On the **Customize** page of the wizard, enter a name such as *File Upload Device Sample* for the device template.
+
+1. On the **Review** page, select **Create**.
+
+1. Select **Import a model** and upload the *FileUploadDeviceDcm.json* manifest file from the folder `iotc-file-upload-device\setup` in the repository you downloaded previously.
+
+1. Select **Publish** to publish the device template.
+
+### Add a device
+
+To add a device to your Azure IoT Central application:
+
+1. Choose **Devices** on the left pane.
+
+1. Select the *File Upload Device Sample* device template which you created earlier.
+
+1. Select + **New** and select **Create**.
+
+1. Select the device that you created, and then select **Connect**.
+
+Copy the values for `ID scope`, `Device ID`, and `Primary key`. You'll use these values in the device sample code.
+
+### Run the sample code
+
+Open the repository you cloned in VS Code. Create a *.env* file at the root of your project and add the values you copied earlier. The file should look like the following sample:
+
+```
+scopeId=<YOUR_SCOPE_ID>
+deviceId=<YOUR_DEVICE_ID>
+deviceKey=<YOUR_PRIMARY_KEY>
+modelId=dtmi:IoTCentral:IotCentralFileUploadDevice;1
+```
+
+Press F5 to run and debug the sample. In your terminal window, you see that the device is registered and connected to IoT Central:
+
+```
+
+Starting IoT Central device...
+ > Machine: Windows_NT, 8 core, freemem=6674mb, totalmem=16157mb
+Starting device registration...
+DPS registration succeeded
+Connecting the device...
+IoT Central successfully connected device: 7z1xo26yd8
+Sending telemetry: {
+ "TELEMETRY_SYSTEM_HEARTBEAT": 1
+}
+Sending telemetry: {
+ "TELEMETRY_SYSTEM_HEARTBEAT": 1
+}
+Sending telemetry: {
+ "TELEMETRY_SYSTEM_HEARTBEAT": 1
+}
+
+```
+
+The sample project comes with a sample file named *datafile.json*. This is the file that's uploaded when you use the **Upload File** command in your IoT Central application.
+
+To test this, open your application and select the device you created. Select the **Command** tab, and you see a button named **Run**. When you select that button, the IoT Central app calls a direct method on your device to upload the file. You can see this direct method, named *uploadFileCommand*, in the sample code in the /device.ts file.
+
+Select the **Raw data** tab to verify the file upload status.
++
+You can also make a [REST API](/rest/api/storageservices/list-blobs) call to verify the file upload status in the storage container.
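+Alternatively, if you have the Azure CLI and one of the storage account keys at hand, a quick sketch for listing the uploaded blobs (using the account and container names configured earlier) is:
+
+```azurecli
+az storage blob list --account-name yourAccountName --container-name fileuploads --account-key <storage account key> --output table
+```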
+
+## Next steps
+
+Now that you've learned how to configure file uploads with the REST API, a suggested next step is to learn [how to create device templates from the IoT Central GUI](howto-set-up-template.md#create-a-device-template).
iot-edge How To Publish Subscribe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-publish-subscribe.md
The following JSON snippet is an example of an authorization policy that explici
When writing your authorization policy, keep in mind: - It requires `$edgeHub` twin schema version 1.2.
+ > [!IMPORTANT]
+ > Once your IoT Edge device is deployed, it currently won't display correctly in the Azure portal with schema version 1.2 (version 1.1 will be fine). This is a known bug and will be fixed soon. However, this won't affect your device, as it's still connected in IoT Hub and can be communicated with at any time using the Azure CLI.
+ :::image type="content" source="./media/how-to-publish-subscribe/unsupported-1.2-schema.png" alt-text="Screenshot of Azure portal error on the IoT Edge device page.":::
- By default, all operations are denied. - Authorization statements are evaluated in the order that they appear in the JSON definition. It starts by looking at `identities` and then selects the first *allow* or *deny* statements that match the request. If there are conflicts between these statements, the *deny* statement wins. - Several variables (for example, substitutions) can be used in the authorization policy:
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
To find the latest version of Azure IoT Edge, see [Azure IoT Edge releases](http
## Update the security daemon
-The IoT Edge security daemon is a native component that needs to be updated using the package manager on the IoT Edge device.
+The IoT Edge security daemon is a native component that needs to be updated using the package manager on the IoT Edge device. View the [Update the security daemon](how-to-update-iot-edge.md#update-the-security-daemon) tutorial for a walk-through on Linux-based devices.
Check the version of the security daemon running on your device by using the command `iotedge version`. If you're using IoT Edge for Linux on Windows, you need to SSH into the Linux virtual machine to check the version.
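As a rough sketch, on an Ubuntu or Debian based device running IoT Edge version 1.2 or later, updating the security daemon through the package manager typically looks like the following; the package name differs for version 1.1, where it's `iotedge`:

```bash
sudo apt-get update
sudo apt-get install aziot-edge
```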
iot-edge How To Visual Studio Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-visual-studio-develop-module.md
Last updated 08/24/2021
-# Use Visual Studio 2019 to develop and debug modules for Azure IoT Edge
+# Use Visual Studio 2022 to develop and debug modules for Azure IoT Edge
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-This article shows you how to use Visual Studio 2019 to develop and debug Azure IoT Edge modules.
+This article shows you how to use Visual Studio 2022 to develop and debug Azure IoT Edge modules.
-The Azure IoT Edge Tools for Visual Studio extension provides the following benefits:
+The **Azure IoT Edge Tools for Visual Studio** extension provides the following benefits:
* Create, edit, build, run, and debug IoT Edge solutions and modules on your local development computer.
+* Code your Azure IoT modules in C or C# with the benefits of Visual Studio development.
* Deploy your IoT Edge solution to an IoT Edge device via Azure IoT Hub.
-* Code your Azure IoT modules in C or C# while having all of the benefits of Visual Studio development.
-* Manage IoT Edge devices and modules with UI.
+* Manage IoT Edge devices and modules with the UI.
-This article shows you how to use the Azure IoT Edge Tools for Visual Studio 2019 to develop your IoT Edge modules. You also learn how to deploy your project to an IoT Edge device. Currently, Visual Studio 2019 provides support for modules written in C and C#. The supported device architectures are Windows X64 and Linux X64 or ARM32. For more information about supported operating systems, languages, and architectures, see [Language and architecture support](module-development.md#language-and-architecture-support).
+Visual Studio 2022 provides support for modules written in C and C#. The supported device architectures are Windows x64 and Linux x64 or ARM32, while ARM64 is in preview. For more information about supported operating systems, languages, and architectures, see [Language and architecture support](module-development.md#language-and-architecture-support).
## Prerequisites
-This article assumes that you use a machine running Windows as your development machine. On Windows computers, you can develop either Windows or Linux modules.
+This article assumes that you use a machine running Windows as your development machine.
-* To develop modules with **Windows containers**, use a Windows computer running version 1809/build 17763 or newer.
-* To develop modules with **Linux containers**, use a Windows computer that meets the [requirements for Docker Desktop](https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install).
+* On Windows computers, you can develop either Windows or Linux modules.
-Install Visual Studio on your development machine. Make sure you include the **Azure development** and **Desktop development with C++** workloads in your Visual Studio 2019 installation. You can [Modify Visual Studio 2019](/visualstudio/install/modify-visual-studio?view=vs-2019&preserve-view=true) to add the required workloads.
+ * To develop modules with **Windows containers**, use a Windows computer running version 1809/build 17763 or newer.
+ * To develop modules with **Linux containers**, use a Windows computer that meets the [requirements for Docker Desktop](https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install).
-After your Visual Studio 2019 is ready, you also need the following tools and components:
+* Install Visual Studio on your development machine. Make sure you include the **Azure development** and **Desktop development with C++** workloads in your Visual Studio 2022 installation. Alternatively, you can [Modify Visual Studio 2022](/visualstudio/install/modify-visual-studio?view=vs-2022&preserve-view=true) to add the required workloads, if Visual Studio is already installed on your machine.
-* Download and install [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) from the Visual Studio marketplace to create an IoT Edge project in Visual Studio 2019.
+* Install the Azure IoT Edge Tools either from the Marketplace or from Visual Studio:
- > [!TIP]
- > If you are using Visual Studio 2017, download and install [Azure IoT Edge Tools for VS 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) from the Visual Studio marketplace
+ * Download and install [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs17iotedgetools) from the Visual Studio Marketplace.
+
+ > [!TIP]
+ > If you are using Visual Studio 2019, download and install [Azure IoT Edge Tools for VS 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) from the Visual Studio marketplace
+
+ * Or, in Visual Studio go to **Tools > Get Tools and Features**. The Visual Studio Installer will open. From the **Individual components** tab, select **Azure IoT Edge Tools for VS 2022**, then select **Install** in the lower right of the popup. Close the popup when finished.
+
+ If you only need to update your tools, go to the **Manage Extensions** window, expand **Updates > Visual Studio Marketplace**, select **Azure IoT Edge Tools** then select **Update**.
+
+ After the update is complete, select **Close** and restart Visual Studio.
-* Download and install [Docker Community Edition](https://docs.docker.com/install/) on your development machine to build and run your module images. You'll need to set Docker CE to run in either Linux container mode or Windows container mode, depending on the type of modules you are developing.
+* Download and install [Docker Community Edition](https://docs.docker.com/install/) on your development machine to build and run your module images. Set Docker CE to run in either Linux container mode or Windows container mode, depending on the type of modules you are developing.
-* Set up your local development environment to debug, run, and test your IoT Edge solution by installing the [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/). Install [Python (3.5/3.6/3.7/3.8) and Pip](https://www.python.org/) and then install the **iotedgehubdev** package by running the following command in your terminal. Make sure your Azure IoT EdgeHub Dev Tool version is greater than 0.3.0.
+* Set up your local development environment to debug, run, and test your IoT Edge solution by installing the [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/). Install [Python (3.5/3.6/3.7/3.8) and Pip](https://www.python.org/) and then install the **iotedgehubdev** package by running the following command in your terminal.
```cmd pip install --upgrade iotedgehubdev ```
+
+ > [!TIP]
+ >Make sure your Azure IoT EdgeHub Dev Tool version is greater than 0.3.0. You'll need to have a pre-existing IoT Edge device in the Azure portal and have your connection string ready during setup.
-* Install the Vcpkg library manager, and then install the **azure-iot-sdk-c package** for Windows.
+ You may need to restart Visual Studio to complete the installation.
+
+* Install the **Vcpkg** library manager
```cmd git clone https://github.com/Microsoft/vcpkg
After your Visual Studio 2019 is ready, you also need the following tools and co
bootstrap-vcpkg.bat ```
+ Install the **azure-iot-sdk-c** package for Windows
```cmd vcpkg.exe install azure-iot-sdk-c:x64-windows vcpkg.exe --triplet x64-windows integrate install
After your Visual Studio 2019 is ready, you also need the following tools and co
> [!TIP] > You can use a local Docker registry for prototype and testing purposes instead of a cloud registry.
-* To test your module on a device, you'll need an active IoT hub with at least one IoT Edge device. To quickly create an IoT Edge device for testing, follow the steps in the quickstart for [Linux](quickstart-linux.md) or [Windows](quickstart.md). If you are running IoT Edge daemon on your development machine, you might need to stop EdgeHub and EdgeAgent before you start development in Visual Studio.
-
-### Check your tools version
+* To test your module on a device, you'll need an active IoT Hub with at least one IoT Edge device. To create an IoT Edge device for testing you can create one in the Azure portal or with the CLI:
-1. From the **Extensions** menu, select **Manage Extensions**. Expand **Installed > Tools** and you can find **Azure IoT Edge Tools for Visual Studio** and **Cloud Explorer for Visual Studio**.
+ * Creating one in the [Azure portal](https://portal.azure.com/) is the quickest. From the Azure portal, go to your IoT Hub resource. Select **IoT Edge** from the menu on the left and then select **Add IoT Edge Device**.
-1. Note the installed version. You can compare this version with the latest version on Visual Studio Marketplace ([Cloud Explorer](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.CloudExplorerForVS2019), [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools))
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/create-new-iot-edge-device.png" alt-text="Screenshot of how to add a new I o T Edge device":::
+
+ A new popup called **Create a device** will appear. Add a name to your device (known as the Device ID), then select **Save** in the lower left.
+
+ Finally, confirm that your new device exists in your IoT Hub, from the **Device management > IoT Edge** menu. For more information on creating an IoT Edge device through the Azure portal, read [Create and provision an IoT Edge device on Linux using symmetric keys](how-to-provision-single-device-linux-symmetric.md).
-1. If your version is older than what's available on Visual Studio Marketplace, update your tools in Visual Studio as shown in the following section.
+ * To create an IoT Edge device with the CLI, follow the steps in the quickstart for [Linux](quickstart-linux.md#register-an-iot-edge-device) or [Windows](quickstart.md#register-an-iot-edge-device); registering the device identity in IoT Hub is what creates the IoT Edge device. A single-command sketch is shown below.
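A minimal sketch of that single command, assuming the Azure CLI with the azure-iot extension installed and hypothetical device and hub names:

```azurecli
az iot hub device-identity create --device-id myEdgeDevice --hub-name myIoTHub --edge-enabled
```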
-> [!NOTE]
-> If you are using Visual Studio 2022, [Cloud Explorer](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer?view=vs-2022&preserve-view=true) is retired. To deploy Azure IoT Edge modules, use [Azure CLI](how-to-deploy-modules-cli.md?view=iotedge-2020-11&preserve-view=true) or [Azure portal](how-to-deploy-modules-portal.md?view=iotedge-2020-11&preserve-view=true).
-
-### Update your tools
-
-1. In the **Manage Extensions** window, expand **Updates > Visual Studio Marketplace**, select **Azure IoT Edge Tools** or **Cloud Explorer for Visual Studio** and select **Update**.
-
-1. After the tools update is downloaded, close Visual Studio to trigger the tools update using the VSIX installer.
-
-1. In the installer, select **OK** to start and then **Modify** to update the tools.
-
-1. After the update is complete, select **Close** and restart Visual Studio.
+ If you are running the IoT Edge daemon on your development machine, you might need to stop EdgeHub and EdgeAgent before you start development in Visual Studio.
## Create an Azure IoT Edge project
-The IoT Edge project template in Visual Studio creates a solution that can be deployed to IoT Edge devices. First you create an Azure IoT Edge solution, and then you generate the first module in that solution. Each IoT Edge solution can contain more than one module.
+The IoT Edge project template in Visual Studio creates a solution that can be deployed to IoT Edge devices. In summary, first you'll create an Azure IoT Edge solution, and then you'll generate the first module in that solution. Each IoT Edge solution can contain more than one module.
+
+In all, we're going to build three projects in this solution: the main project, which contains EdgeAgent and EdgeHub in addition to the temperature sensor module, plus two more IoT Edge modules that you'll add.
> [!TIP]
-> The IoT Edge project structure created by Visual Studio is not the same as in Visual Studio Code.
+> The IoT Edge project structure created by Visual Studio is not the same as the one in Visual Studio Code.
1. In Visual Studio, create a new project.
-1. On the **Create a new project** page, search for **Azure IoT Edge**. Select the project that matches the platform and architecture for your IoT Edge device, and click **Next**.
+1. In the **Create a new project** window, search for **Azure IoT Edge**. Select the project that matches the platform and architecture for your IoT Edge device, and click **Next**.
:::image type="content" source="./media/how-to-visual-studio-develop-module/create-new-project.png" alt-text="Create New Project":::
-1. On the **Configure your new project** page, enter a name for your project and specify the location, then select **Create**.
+1. In the **Configure your new project** window, enter a name for your project and specify the location, then select **Create**.
-1. On the **Add Module** window, select the type of module you want to develop. You can also select **Existing module** to add an existing IoT Edge module to your deployment. Specify your module name and module image repository.
+1. In the **Add Module** window, select the type of module you want to develop. You can also select **Existing module** to add an existing IoT Edge module to your deployment. Specify your module name and module image repository.
- Visual Studio autopopulates the repository URL with **localhost:5000/<module name\>**. If you use a local Docker registry for testing, then **localhost** is fine. If you use Azure Container Registry, then replace **localhost:5000** with the login server from your registry's settings. The login server looks like **_\<registry name\>_.azurecr.io**.The final result should look like **\<*registry name*\>.azurecr.io/_\<module name\>_**.
+ Visual Studio autopopulates the repository URL with **localhost:5000/<module name\>**. If you use a local Docker registry for testing, then **localhost** is fine. If you use Azure Container Registry, then replace **localhost:5000** with the login server from your registry's settings.
+
+ The login server looks like **_\<registry name\>_.azurecr.io**. The final result should look like **\<*registry name*\>.azurecr.io/_\<module name\>_**, for example **my-registry-name.azurecr.io/my-module-name**.
Select **Add** to add your module to the project. ![Add Application and Module](./media/how-to-visual-studio-develop-csharp-module/add-module.png)
+ > [!NOTE]
+ >If you have an existing IoT Edge project, you can still change the repository URL by opening the **module.json** file. The repository URL is located in the 'repository' property of the JSON file.
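For orientation, a module.json file typically looks roughly like the following sketch; the registry and module names are hypothetical:

```json
{
  "$schema-version": "0.0.1",
  "description": "",
  "image": {
    "repository": "my-registry-name.azurecr.io/my-module-name",
    "tag": {
      "version": "0.0.1",
      "platforms": {
        "amd64": "./Dockerfile.amd64",
        "amd64.debug": "./Dockerfile.amd64.debug"
      }
    },
    "buildOptions": []
  },
  "language": "csharp"
}
```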
+ Now you have an IoT Edge project and an IoT Edge module in your Visual Studio solution.
-The module folder contains a file for your module code, named either `program.cs` or `main.c` depending on the language you chose. This folder also contains a file named `module.json` that describes the metadata of your module. Various Docker files provide the information needed to build your module as a Windows or Linux container.
+#### Project structure
+
+In your solution is a main project folder and a single module folder. Both are on the project level. The main project folder contains your deployment manifest.
-The project folder contains a list of all the modules included in that project. Right now it should show only one module, but you can add more. For more information about adding modules to a project, see the [Build and debug multiple modules](#build-and-debug-multiple-modules) section later in this article.
+The module project folder contains a file for your module code named either `program.cs` or `main.c` depending on the language you chose. This folder also contains a file named `module.json` that describes the metadata of your module. Various Docker files included here provide the information needed to build your module as a Windows or Linux container.
+#### Deployment manifest of your project
-The project folder also contains a file named `deployment.template.json`. This file is a template of an IoT Edge deployment manifest, which defines all the modules that will run on a device along with how they will communicate with each other. For more information about deployment manifests, see [Learn how to deploy modules and establish routes](module-composition.md). If you open this deployment template, you see that the two runtime modules, **edgeAgent** and **edgeHub** are included, along with the custom module that you created in this Visual Studio project. A fourth module named **SimulatedTemperatureSensor** is also included. This default module generates simulated data that you can use to test your modules, or delete if it's not necessary. To see how the simulated temperature sensor works, view the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).
+The deployment manifest you'll edit is called `deployment.debug.template.json`. This file is a template of an IoT Edge deployment manifest, which defines all the modules that run on a device along with how they communicate with each other. For more information about deployment manifests, see [Learn how to deploy modules and establish routes](module-composition.md).
+
+If you open this deployment template, you see that the two runtime modules, **edgeAgent** and **edgeHub** are included, along with the custom module that you created in this Visual Studio project. A fourth module named **SimulatedTemperatureSensor** is also included. This default module generates simulated data that you can use to test your modules, or delete if it's not necessary. To see how the simulated temperature sensor works, view the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).
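To give a sense of the shape, here's an abbreviated, hypothetical fragment of the `$edgeAgent` section of such a manifest, showing only the simulated temperature sensor module (real templates also include the runtime system modules and the `$edgeHub` routes):

```json
"$edgeAgent": {
  "properties.desired": {
    "modules": {
      "SimulatedTemperatureSensor": {
        "version": "1.0",
        "type": "docker",
        "status": "running",
        "restartPolicy": "always",
        "settings": {
          "image": "mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0",
          "createOptions": "{}"
        }
      }
    }
  }
}
```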
### Set IoT Edge runtime version The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets. Currently, the latest stable version is version 1.2. If you're developing modules for devices running the 1.1 long-term support version or the earlier 1.0 version, update the IoT Edge runtime version in Visual Studio to match.
-1. In the Solution Explorer, right-click the name of your project and select **Set IoT Edge runtime version**.
+1. In the Solution Explorer, right-click the name of your main project and select **Set IoT Edge runtime version**.
- :::image type="content" source="./media/how-to-visual-studio-develop-module/set-iot-edge-runtime-version.png" alt-text="Right-click your project name and select set IoT Edge runtime version.":::
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/set-iot-edge-runtime-version.png" alt-text="Screenshot of how to find and select the menu item named 'Set I o T Edge Runtime version'.":::
-1. Use the drop-down menu to choose the runtime version that your IoT Edge devices are running, then select **OK** to save your changes.
+1. Use the drop-down menu to choose the runtime version that your IoT Edge devices are running, then select **OK** to save your changes. If no change was made, select **Cancel** to exit.
-1. Re-generate your deployment manifest with the new runtime version. Right-click the name of your project and select **Generate deployment for IoT Edge**.
+1. If you changed the version, regenerate your deployment manifest by right-clicking the name of your project and selecting **Generate deployment for IoT Edge**. This generates a deployment manifest based on your deployment template; the manifest appears in the **config** folder of your Visual Studio project.
-## Develop your module
+## Module infrastructure & development options
When you add a new module, it comes with default code that is ready to be built and deployed to a device so that you can start testing without touching any code. The module code is located within the module folder in a file named `Program.cs` (for C#) or `main.c` (for C).
When you're ready to customize the module template with your own code, use the [
## Set up the iotedgehubdev testing tool
-The IoT edgeHub dev tool provides a local development and debug experience. The tool helps start IoT Edge modules without the IoT Edge runtime so that you can create, develop, test, run, and debug IoT Edge modules and solutions locally. You don't have to push images to a container registry and deploy them to a device for testing.
+The Azure IoT EdgeHub Dev Tool provides a local development and debug experience. The tool helps start IoT Edge modules without the IoT Edge runtime so that you can create, develop, test, run, and debug IoT Edge modules and solutions locally. You don't have to push images to a container registry and deploy them to a device for testing.
For more information, see [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/).
-To initialize the tool, provide an IoT Edge device connection string from IoT Hub.
+To initialize the tool in Visual Studio:
-1. Retrieve the connection string of an IoT Edge device from the Azure portal, the Azure CLI, or the Visual Studio Cloud Explorer.
+1. Retrieve the connection string of your IoT Edge device (found in your IoT Hub) from the [Azure portal](https://portal.azure.com/) or from the Azure CLI.
-1. From the **Tools** menu, select **Azure IoT Edge Tools** > **Setup IoT Edge Simulator**.
+ If using the CLI to retrieve your connection string, use this command, replacing "**[device_id]**" and "**[hub_name]**" with your own values:
+
+   ```azurecli
+ az iot hub device-identity connection-string show --device-id [device_id] --hub-name [hub_name]
+ ```
+
+1. From the **Tools** menu in Visual Studio, select **Azure IoT Edge Tools** > **Setup IoT Edge Simulator**.
1. Paste the connection string and click **OK**.
To initialize the tool, provide an IoT Edge device connection string from IoT Hu
Typically, you'll want to test and debug each module before running it within an entire solution with multiple modules. >[!TIP]
->Make sure you have switched over to the correct Docker container mode, either Linux container mode or Windows container mode, depending on the type of IoT Edge module you are developing. From the Docker Desktop menu, you can toggle between the two types of modes. Select **Switch to Windows containers** to use Windows containers, or select **Switch to Linux containers** to use Linux containers.
+>Depending on the type of IoT Edge module you're developing, you might need to switch Docker Desktop to the matching container mode: either Linux or Windows. From the Docker Desktop menu, select **Switch to Windows containers** or **Switch to Linux containers** to toggle between the two modes. For this tutorial, we use Linux.
+>
+>:::image type="content" source="./media/how-to-visual-studio-develop-module/system-tray.png" alt-text="Screenshot of how to find and select the menu item named 'Switch to Windows containers'.":::
-1. In **Solution Explorer**, right-click the module folder and select **Set as StartUp Project** from the menu.
+1. In **Solution Explorer**, right-click the module project folder and select **Set as StartUp Project** from the menu.
- ![Set Start-up Project](./media/how-to-visual-studio-develop-csharp-module/module-start-up-project.png)
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/module-start-up-project.png" alt-text="Screenshot of how to set project as startup project.":::
-1. Press **F5** or click the run button in the toolbar to run the module. It may take 10&ndash;20 seconds the first time you do so.
+1. Press **F5** or click the run button in the toolbar to run the module. It may take 10&ndash;20 seconds the first time you do so. Be sure you don't have other Docker containers running that might bind the port you need for this project.
- ![Run Module](./media/how-to-visual-studio-develop-csharp-module/run-module.png)
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/run-module.png" alt-text="Screenshot of how to run a module.":::
-1. You should see a .NET Core console app start if the module has been initialized successfully.
+1. You should see a .NET Core console app window appear if the module has been initialized successfully.
1. Set a breakpoint to inspect the module.
Typically, you'll want to test and debug each module before running it within an
curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}' http://localhost:53000/api/v1/messages ```
- ![Debug Single Module](./media/how-to-visual-studio-develop-csharp-module/debug-single-module.png)
+ :::image type="content" source="./media/how-to-visual-studio-develop-csharp-module/debug-single-module.png" alt-text="Screenshot of the output console, Visual Studio project, and Bash window." lightbox="./media/how-to-visual-studio-develop-csharp-module/debug-single-module.png":::
+
+   The breakpoint should be triggered. You can watch variables in the Visual Studio **Locals** window, which is available while the debugger is running. To open it, go to **Debug** > **Windows** > **Locals**.
- The breakpoint should be triggered. You can watch variables in the Visual Studio **Locals** window.
+ In your Bash or shell, you should see a `{"message":"accepted"}` confirmation.
+
+ In your .NET console you should see:
+
+ ```dotnetcli
+ IoT Hub module client initialized.
+ Received message: 1, Body: [hello world]
+ ```
> [!TIP] > You can also use [Postman](https://www.getpostman.com/) or other API tools to send messages instead of `curl`.
Typically, you'll want to test and debug each module before running it within an
After you're done developing a single module, you might want to run and debug an entire solution with multiple modules.
-1. In **Solution Explorer**, add a second module to the solution by right-clicking the project folder. On the menu, select **Add** > **New IoT Edge Module**.
+1. In **Solution Explorer**, add a second module to the solution by right-clicking the main project folder. On the menu, select **Add** > **New IoT Edge Module**.
+
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/add-new-module.png" alt-text="Screenshot of how to add a 'New I o T Edge Module' from the menu." lightbox="./media/how-to-visual-studio-develop-module/add-new-module.png":::
- ![Add a new module to an existing IoT Edge project](./media/how-to-visual-studio-develop-csharp-module/add-new-module.png)
+1. In the `Add module` window, give your new module a name and replace the `localhost:5000` portion of the repository URL with your Azure Container Registry login server, as you did before.
-1. Open the file `deployment.template.json` and you'll see that the new module has been added in the **modules** section. A new route was also added to the **routes** section to send messages from the new module to IoT Hub. If you want to send data from the simulated temperature sensor to the new module, add another route like the following example:
+1. Open the file `deployment.debug.template.json` to see that the new module has been added in the **modules** section. A new route was also added to the **routes** section in `EdgeHub` to send messages from the new module to IoT Hub. To send data from the simulated temperature sensor to the new module, add another route with the following line of `JSON`. Replace `<NewModuleName>` (in two places) with your own module name.
```json "sensorTo<NewModuleName>": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/<NewModuleName>/inputs/input1\")" ```
-1. Right-click the project folder and select **Set as StartUp Project** from the context menu.
+1. Right-click the main project (for example, `IoTEdgeProject`) and select **Set as StartUp Project**.
-1. Create your breakpoints and then press **F5** to run and debug multiple modules simultaneously. You should see multiple .NET Core console app windows, which each window representing a different module.
+1. Create breakpoints in each module and then press **F5** to run and debug multiple modules simultaneously. You should see multiple .NET Core console app windows, with each window representing a different module.
- ![Debug Multiple Modules](./media/how-to-visual-studio-develop-csharp-module/debug-multiple-modules.png)
+ :::image type="content" source="./media/how-to-visual-studio-develop-csharp-module/debug-multiple-modules.png" alt-text="Screenshot of Visual Studio with two output consoles.":::
1. Press **Ctrl + F5** or select the stop button to stop debugging. ## Build and push images
-1. Make sure the IoT Edge project is the start-up project, not one of the individual modules. Select either **Debug** or **Release** as the configuration to build for your module images.
+1. Make sure the main IoT Edge project is the start-up project, not one of the individual modules. Select either **Debug** or **Release** as the configuration to build for your module images.
> [!NOTE] > When choosing **Debug**, Visual Studio uses `Dockerfile.(amd64|windows-amd64).debug` to build Docker images. This includes the .NET Core command-line debugger VSDBG in your container image while building it. For production-ready IoT Edge modules, we recommend that you use the **Release** configuration, which uses `Dockerfile.(amd64|windows-amd64)` without VSDBG.
-1. If you're using a private registry like Azure Container Registry (ACR), use the following Docker command to sign in to it. You can get the username and password from the **Access keys** page of your registry in the Azure portal. If you're using local registry, you can [run a local registry](https://docs.docker.com/registry/deploying/#run-a-local-registry).
+1. If you're using a private registry like Azure Container Registry (ACR), use the following Docker command to sign in to it. You can get the username and password from the **Access keys** page of your registry in the Azure portal.
```cmd docker login -u <ACR username> -p <ACR password> <ACR login server> ```
-1. If you're using a private registry like Azure Container Registry, you need to add your registry login information to the runtime settings found in the file `deployment.template.json`. Replace the placeholders with your actual ACR admin username, password, and registry name.
+1. Let's add the Azure Container Registry login information to the runtime settings found in the file `deployment.debug.template.json`. There are two ways to do this: add your registry credentials to your `.env` file (more secure), or add them directly to your `deployment.debug.template.json` file.
+
+ **Add credentials to your `.env` file:**
+
+   In the Solution Explorer, select the **Show All Files** button. The `.env` file appears. Add your Azure Container Registry username and password to the `.env` file. You can find these credentials on the **Access keys** page of your Azure Container Registry in the Azure portal.
+
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/show-env-file.png" alt-text="Screenshot of button that will show all files in the Solution Explorer.":::
+
+ ```env
+ DEFAULT_RT_IMAGE=1.2
+ CONTAINER_REGISTRY_USERNAME_myregistry=<my-registry-name>
+ CONTAINER_REGISTRY_PASSWORD_myregistry=<my-registry-password>
+ ```
+
+ **Add credentials directly to `deployment.debug.template.json`:**
+
+ If you'd rather add your credentials directly to your deployment template, replace the placeholders with your actual ACR admin username, password, and registry name.
```json "settings": {
After you're done developing a single module, you might want to run and debug an
>[!NOTE] >This article uses admin login credentials for Azure Container Registry, which are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
-1. In **Solution Explorer**, right-click the project folder and select **Build and Push IoT Edge Modules** to build and push the Docker image for each module.
+1. If you're using a local registry, you can [run a local registry](https://docs.docker.com/registry/deploying/#run-a-local-registry).
+
+1. Finally, in the **Solution Explorer**, right-click the main project folder and select **Build and Push IoT Edge Modules** to build and push the Docker image for each module. This might take a minute. When you see `Finished Build and Push IoT Edge Modules.` in the Visual Studio **Output** console, you're done.
## Deploy the solution
-In the quickstart article that you used to set up your IoT Edge device, you deployed a module by using the Azure portal. You can also deploy modules using the Cloud Explorer for Visual Studio. You already have a deployment manifest prepared for your scenario, the `deployment.json` file and all you need to do is select a device to receive the deployment.
+In the quickstart article that you used to set up your IoT Edge device, you deployed a module by using the Azure portal. You can also deploy modules by using the Azure CLI. You already have a deployment manifest template that you've been working with throughout this tutorial. Let's generate a deployment manifest from that template, and then use an Azure CLI command to deploy your modules to your IoT Edge device in Azure.
-1. Open **Cloud Explorer** by clicking **View** > **Cloud Explorer**. Make sure you've logged in to Visual Studio 2019.
+1. Right-click on your main project in Visual Studio Solution Explorer and choose **Generate Deployment for IoT Edge**.
-1. In **Cloud Explorer**, expand your subscription, find your Azure IoT Hub and the Azure IoT Edge device you want to deploy.
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/generate-deployment.png" alt-text="Screenshot of location of the 'generate deployment' menu item.":::
-1. Right-click on the IoT Edge device to create a deployment for it. Navigate to the deployment manifest configured for your platform located in the **config** folder in your Visual Studio solution, such as `deployment.arm32v7.json`.
+1. Go to your local Visual Studio main project folder and look in the `config` folder. The file path might look like this: `C:\Users\<YOUR-USER-NAME>\source\repos\<YOUR-IOT-EDGE-PROJECT-NAME>\config`. Here you'll find the generated deployment manifest, such as `deployment.amd64.debug.json`.
-1. Click the refresh button to see the new modules running along with the **SimulatedTemperatureSensor** module and **$edgeAgent** and **$edgeHub**.
+1. Check your `deployment.amd64.debug.json` file to confirm the `edgeHub` schema version is set to 1.2.
-## View generated data
+ ```json
+ "$edgeHub": {
+ "properties.desired": {
+ "schemaVersion": "1.2",
+ "routes": {
+ "IotEdgeModule2022ToIoTHub": "FROM /messages/modules/IotEdgeModule2022/outputs/* INTO $upstream",
+ "sensorToIotEdgeModule2022": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/IotEdgeModule2022/inputs/input1\")",
+ "IotEdgeModule2022bToIoTHub": "FROM /messages/modules/IotEdgeModule2022b/outputs/* INTO $upstream"
+ },
+ "storeAndForwardConfiguration": {
+ "timeToLiveSecs": 7200
+ }
+ }
+ }
+ ```
+ > [!TIP]
+   > The deployment template for Visual Studio 2022 requires the 1.2 schema version. If you need schema version 1.1 or 1.0, don't change the version in `deployment.debug.template.json`. Instead, generate the deployment first, which creates a 1.2 schema by default, and then manually edit the generated manifest, `deployment.amd64.debug.json`, before deploying it to Azure.
+
+ > [!IMPORTANT]
+   > After you deploy to your IoT Edge device, the device currently doesn't display correctly in the Azure portal with schema version 1.2 (version 1.1 displays fine). This is a known bug that will be fixed soon. The bug doesn't affect your device, which remains connected to IoT Hub and can be communicated with at any time by using the Azure CLI.
+ >
+ >:::image type="content" source="./media/how-to-publish-subscribe/unsupported-1.2-schema.png" alt-text="Screenshot of Azure portal error on the I o T Edge device page.":::
+
+1. Now let's deploy our manifest with an Azure CLI command. Open the Visual Studio **Developer Command Prompt** and change to the **config** directory.
+
+ ```cmd
+ cd config
+ ```
+
+1. From your **config** folder, run the following deployment command. Replace `[device id]`, `[hub name]`, and `[file path]` with your values.
-1. To monitor the D2C message for a specific IoT Edge device, select it in your IoT hub in **Cloud Explorer** and then click **Start Monitoring Built-in Event Endpoint** in the **Action** window.
+ ```cmd
+ az iot edge set-modules --device-id [device id] --hub-name [hub name] --content [file path]
+ ```
+
+ For example, your command might look like this:
+
+ ```cmd
+ az iot edge set-modules --device-id my-device-name --hub-name my-iot-hub-name --content deployment.amd64.debug.json
+ ```
+
+1. After running the command, you'll see a confirmation of deployment printed in `JSON` in your command prompt.
+
+### Confirm the deployment to your device
+
+To check that your IoT Edge modules were deployed to Azure, sign in to your device (or virtual machine), for example through SSH or Azure Bastion, and run the `iotedge list` command.
+
+```bash
+iotedge list
+```
+
+You should see a list of your modules running on your device or virtual machine.
+
+```output
+ NAME STATUS DESCRIPTION CONFIG
+ SimulatedTemperatureSensor running Up a day mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0
+ edgeAgent running Up a day mcr.microsoft.com/azureiotedge-agent:1.2
+ edgeHub running Up a day mcr.microsoft.com/azureiotedge-hub:1.2
+ myIotEdgeModule running Up 2 hours myregistry.azurecr.io/myiotedgemodule:0.0.1-amd64.debug
+ myIotEdgeModule2 running Up 2 hours myregistry.azurecr.io/myiotedgemodule2:0.0.1-amd64.debug
+```
+
+## View generated data
-1. To stop monitoring data, select **Stop Monitoring Built-in Event Endpoint** in the **Action** window.
+To monitor the device-to-cloud (D2C) messages for a specific IoT Edge device, review the [Tutorial: Monitor IoT Edge devices](tutorial-monitor-with-workbooks.md) to get started.
## Next steps
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
Before you put any device in production you should know how you're going to mana
* IoT Edge * CA certificates
-For more information, see [Update the IoT Edge runtime](how-to-update-iot-edge.md). The current methods for updating IoT Edge require physical or SSH access to the IoT Edge device. If you have many devices to update, consider adding the update steps to a script or use an automation tool like Ansible.
+[Device Update for IoT Hub](../iot-hub-device-update/index.yml) (Preview) is a service that enables you to deploy over-the-air (OTA) updates to your IoT Edge devices.
+
+Alternative methods for updating IoT Edge require physical or SSH access to the IoT Edge device. For more information, see [Update the IoT Edge runtime](how-to-update-iot-edge.md). To update multiple devices, consider adding the update steps to a script or use an automation tool like Ansible.
### Use Moby as the container engine
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
The **Deploy-Eflow** command is the main deployment method. The deployment comma
| gpuName | GPU Device name | Name of GPU device to be used for passthrough. | | gpuPassthroughType | **DirectDeviceAssignment**, **ParaVirtualization**, or none (CPU only) | GPU Passthrough type | | gpuCount | Integer value between 1 and the number of the device's GPU cores | Number of GPU devices for the VM. <br><br>**Note**: If using ParaVirtualization, make sure to set gpuCount = 1 |
+| customSsh | None | Determines whether the user wants to use their own custom OpenSSH.Client installation. If present, ssh.exe must be available to the EFLOW PSM |
:::moniker-end <!-- end 1.1 -->
The **Deploy-Eflow** command is the main deployment method. The deployment comma
| gpuName | GPU Device name | Name of GPU device to be used for passthrough. | | gpuPassthroughType | **DirectDeviceAssignment**, **ParaVirtualization**, or none (CPU only) | GPU Passthrough type | | gpuCount | Integer value between 1 and the number of the device's GPU cores | Number of GPU devices for the VM. <br><br>**Note**: If using ParaVirtualization, make sure to set gpuCount = 1 |
+| customSsh | None | Determines whether the user wants to use their own custom OpenSSH.Client installation. If present, ssh.exe must be available to the EFLOW PSM |
:::moniker-end <!-- end 1.2 -->
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Date | Highlights | | | - | - | - |
-| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).
+| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).<br> Integration with Device Update. For more information, see [Update IoT Edge](how-to-update-iot-edge.md).
| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | [Long-term support plan and supported systems updates](support.md) | | [1.0.10](https://github.com/Azure/azure-iotedge/releases/tag/1.0.10) | Stable | October 2020 | [UploadSupportBundle direct method](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics)<br>[Upload runtime metrics](how-to-access-built-in-metrics.md)<br>[Route priority and time-to-live](module-composition.md#priority-and-time-to-live)<br>[Module startup order](module-composition.md#configure-modules)<br>[X.509 manual provisioning](how-to-provision-single-device-linux-x509.md) | | [1.0.9](https://github.com/Azure/azure-iotedge/releases/tag/1.0.9) | Stable | March 2020 | X.509 auto-provisioning with DPS<br>[RestartModule direct method](how-to-edgeagent-direct-method.md#restart-module)<br>[support-bundle command](troubleshoot.md#gather-debug-information-with-support-bundle-command) |
key-vault Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/built-in-roles.md
Managed HSM local RBAC has several built-in roles. You can assign these roles to
|/keys/deletedKeys/delete||<center>X</center>|||||<center>X</center>| |/keys/backup/action|||<center>X</center>|||<center>X</center>| |/keys/restore/action|||<center>X</center>||||
-|/keys/export/action||<center>X</center>|||||
|/keys/release/action|||<center>X</center>|||| |/keys/import/action|||<center>X</center>|||| |**Key cryptographic operations**|
lab-services Quick Create Lab Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-template.md
Get-AzLabServicesLab -Name $lab
Write-Host "Press [ENTER] to continue..." ```
-To verify educators can use the lab, navigate to the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). For more information about managing labs, see [View all labs](/azure/lab-services/how-to-manage-labs.md#)](how-to-manage-labs.md#view-all-labs).
+To verify educators can use the lab, navigate to the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). For more information about managing labs, see [View all labs](/azure/lab-services/how-to-manage-labs).
## Clean up resources
Alternately, an educator may delete a lab from the Azure Lab Services website: [
For a step-by-step tutorial that guides you through the process of creating a template, see: > [!div class="nextstepaction"]
-> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
logic-apps Logic Apps Exception Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-exception-handling.md
ms.suite: integration -+ Previously updated : 02/18/2021 Last updated : 05/26/2022 # Handle errors and exceptions in Azure Logic Apps
-The way that any integration architecture appropriately handles downtime or issues caused by dependent systems can pose a challenge. To help you create robust and resilient integrations that gracefully handle problems and failures, Logic Apps provides a first-class experience for handling errors and exceptions.
+Appropriately handling downtime or issues caused by dependent systems can pose a challenge for any integration architecture. To help you create robust and resilient integrations that gracefully handle problems and failures, Azure Logic Apps provides a first-class experience for handling errors and exceptions.
<a name="retry-policies"></a> ## Retry policies
-For the most basic exception and error handling, you can use a *retry policy* in any action or trigger where supported, for example, see [HTTP action](../logic-apps/logic-apps-workflow-actions-triggers.md#http-trigger). A retry policy specifies whether and how the action or trigger retries a request when the original request times out or fails, which is any request that results in a 408, 429, or 5xx response. If no other retry policy is used, the default policy is used.
+For the most basic exception and error handling, you can use the *retry policy* when supported on a trigger or action, such as the [HTTP action](logic-apps-workflow-actions-triggers.md#http-trigger). If the trigger or action's original request times out or fails, resulting in a 408, 429, or 5xx response, the retry policy specifies that the trigger or action resend the request per policy settings.
-Here are the retry policy types:
+### Retry policy types
-| Type | Description |
-||-|
-| **Default** | This policy sends up to four retries at *exponentially increasing* intervals, which scale by 7.5 seconds but are capped between 5 and 45 seconds. |
-| **Exponential interval** | This policy waits a random interval selected from an exponentially growing range before sending the next request. |
-| **Fixed interval** | This policy waits the specified interval before sending the next request. |
-| **None** | Don't resend the request. |
+By default, the retry policy is set to the **Default** type.
+
+| Retry policy | Description |
+|--|-|
+| **Default** | This policy sends up to 4 retries at *exponentially increasing* intervals, which scale by 7.5 seconds but are capped between 5 and 45 seconds. For more information, review the [Default](#default) policy type. |
+| **None** | Don't resend the request. For more information, review the [None](#none) policy type. |
+| **Exponential Interval** | This policy waits a random interval, which is selected from an exponentially growing range before sending the next request. For more information, review the [Exponential Interval](#exponential-interval) policy type. |
+| **Fixed Interval** | This policy waits the specified interval before sending the next request. For more information, review the [Fixed Interval](#fixed-interval) policy type. |
|||
-For information about retry policy limits, see [Logic Apps limits and configuration](../logic-apps/logic-apps-limits-and-config.md#http-limits).
+<a name="retry-policy-limits"></a>
-### Change retry policy
+### Retry policy limits
-To select a different retry policy, follow these steps:
+For more information about retry policies, settings, limits, and other options, review [Retry policy limits](logic-apps-limits-and-config.md#retry-policy-limits).
-1. Open your logic app in Logic App Designer.
+### Change retry policy type in the designer
-1. Open the **Settings** for an action or trigger.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. If the action or trigger supports retry policies, under **Retry Policy**, select the type you want.
+1. Based on your [logic app type](logic-apps-overview.md#resource-environment-differences), open the trigger or action's **Settings**.
-Or, you can manually specify the retry policy in the `inputs` section for an action or trigger that supports retry policies. If you don't specify a retry policy, the action uses the default policy.
+ * **Consumption**: On the action shape, open the ellipses menu (**...**), and select **Settings**.
-```json
-"<action-name>": {
- "type": "<action-type>",
+ * **Standard**: On the designer, select the action. On the details pane, select **Settings**.
+
+1. If the trigger or action supports retry policies, under **Retry Policy**, select the policy type that you want.
+
+### Change retry policy type in the code view editor
+
+1. If necessary, confirm whether the trigger or action supports retry policies by completing the earlier steps in the designer.
+
+1. Open your logic app workflow in the code view editor.
+
+1. In the trigger or action definition, add the `retryPolicy` JSON object to that trigger or action's `inputs` object. Otherwise, if no `retryPolicy` object exists, the trigger or action uses the `default` retry policy.
+
+ ```json
"inputs": {
- "<action-specific-inputs>",
+ <...>,
"retryPolicy": { "type": "<retry-policy-type>",
- "interval": "<retry-interval>",
+ // The following properties apply to specific retry policies.
"count": <retry-attempts>,
- "minimumInterval": "<minimum-interval>",
- "maximumInterval": "<maximum-interval>"
+ "interval": "<retry-interval>",
+ "maximumInterval": "<maximum-interval>",
+ "minimumInterval": "<minimum-interval>"
},
- "<other-action-specific-inputs>"
+ <...>
}, "runAfter": {}
-}
-```
+ ```
-*Required*
+ *Required*
-| Value | Type | Description |
-|-||-|
-| <*retry-policy-type*> | String | The retry policy type you want to use: `default`, `none`, `fixed`, or `exponential` |
-| <*retry-interval*> | String | The retry interval where the value must use [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). The default minimum interval is `PT5S` and the maximum interval is `PT1D`. When you use the exponential interval policy, you can specify different minimum and maximum values. |
-| <*retry-attempts*> | Integer | The number of retry attempts, which must be between 1 and 90 |
-||||
+ | Property | Value | Type | Description |
+ |-|-||-|
+ | `type` | <*retry-policy-type*> | String | The retry policy type to use: `default`, `none`, `fixed`, or `exponential` |
+ | `count` | <*retry-attempts*> | Integer | For `fixed` and `exponential` policy types, the number of retry attempts, which is a value from 1 - 90. For more information, review [Fixed Interval](#fixed-interval) and [Exponential Interval](#exponential-interval). |
+ | `interval`| <*retry-interval*> | String | For `fixed` and `exponential` policy types, the retry interval value in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). For the `exponential` policy, you can also specify [optional maximum and minimum intervals](#optional-max-min-intervals). For more information, review [Fixed Interval](#fixed-interval) and [Exponential Interval](#exponential-interval). <br><br>**Consumption**: 5 seconds (`PT5S`) to 1 day (`P1D`). <br>**Standard**: For stateful workflows, 5 seconds (`PT5S`) to 1 day (`P1D`). For stateless workflows, 1 second (`PT1S`) to 1 minute (`PT1M`). |
+ |||||
-*Optional*
+ <a name="optional-max-min-intervals"></a>
-| Value | Type | Description |
-|-||-|
-| <*minimum-interval*> | String | For the exponential interval policy, the smallest interval for the randomly selected interval in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations) |
-| <*maximum-interval*> | String | For the exponential interval policy, the largest interval for the randomly selected interval in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations) |
-||||
+ *Optional*
-Here is more information about the different policy types.
+ | Property | Value | Type | Description |
+ |-|-||-|
+ | `maximumInterval` | <*maximum-interval*> | String | For the `exponential` policy, the largest interval for the randomly selected interval in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). The default value is 1 day (`P1D`). For more information, review [Exponential Interval](#exponential-interval). |
+ | `minimumInterval` | <*minimum-interval*> | String | For the `exponential` policy, the smallest interval for the randomly selected interval in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). The default value is 5 seconds (`PT5S`). For more information, review [Exponential Interval](#exponential-interval). |
+ |||||
-<a name="default-retry"></a>
+<a name="default"></a>
-### Default
+#### Default retry policy
-If you don't specify a retry policy, the action uses the default policy, which is actually an [exponential interval policy](#exponential-interval) that sends up to four retries at exponentially increasing intervals that are scaled by 7.5 seconds. The interval is capped between 5 and 45 seconds.
+If you don't specify a retry policy, the action uses the default policy. The default is actually an [exponential interval policy](#exponential-interval) that sends up to four retries at exponentially increasing intervals, which scales by 7.5 seconds. The interval is capped between 5 and 45 seconds.
-Though not explicitly defined in your action or trigger, here is how the default policy behaves in an example HTTP action:
+Though not explicitly defined in your action or trigger, the following example shows how the default policy behaves in an example HTTP action:
```json "HTTP": {
Though not explicitly defined in your action or trigger, here is how the default
} ```
-### None
+<a name="none"></a>
+
+### None - No retry policy
To specify that the action or trigger doesn't retry failed requests, set the <*retry-policy-type*> to `none`.
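+
+For example, in a trigger or action's `inputs` object, a minimal sketch that turns off retries looks like the following:
+
+```json
+"retryPolicy": {
+    "type": "none"
+}
+```
+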
-### Fixed interval
+<a name="fixed-interval"></a>
+
+### Fixed interval retry policy
To specify that the action or trigger waits the specified interval before sending the next request, set the <*retry-policy-type*> to `fixed`.
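+
+For example, the following minimal sketch of an HTTP action (the URI here is only a placeholder) retries a failed request up to two more times, waiting 30 seconds between attempts:
+
+```json
+"HTTP": {
+    "type": "Http",
+    "inputs": {
+        "method": "GET",
+        "uri": "https://mynewsfeed.example.com/latest",
+        "retryPolicy": {
+            "type": "fixed",
+            "count": 2,
+            "interval": "PT30S"
+        }
+    },
+    "runAfter": {}
+}
+```
+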
This retry policy attempts to get the latest news two more times after the first
<a name="exponential-interval"></a>
-### Exponential interval
+### Exponential interval retry policy
+
+The exponential interval retry policy specifies that the trigger or action waits a random interval before sending the next request. This random interval is selected from an exponentially growing range. Optionally, you can override the default minimum and maximum intervals by specifying your own minimum and maximum intervals, based on whether you have a [Consumption or Standard logic app workflow](logic-apps-overview.md#resource-environment-differences).
-To specify that the action or trigger waits a random interval before sending the next request, set the <*retry-policy-type*> to `exponential`. The random interval is selected from an exponentially growing range. Optionally, you can also override the default minimum and maximum intervals by specifying your own minimum and maximum intervals.
+| Name | Consumption limit | Standard limit | Notes |
+||-|-|-|
+| Maximum delay | Default: 1 day | Default: 1 hour | To change the default limit in a Consumption logic app workflow, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in a Standard logic app workflow, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Minimum delay | Default: 5 sec | Default: 5 sec | To change the default limit in a Consumption logic app workflow, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in a Standard logic app workflow, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+|||||
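+
+For example, the following minimal sketch (the values are illustrative) uses an exponential policy and overrides the default minimum and maximum intervals:
+
+```json
+"retryPolicy": {
+    "type": "exponential",
+    "count": 4,
+    "interval": "PT7S",
+    "minimumInterval": "PT5S",
+    "maximumInterval": "PT1H"
+}
+```
+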
**Random variable ranges**
-This table shows how Logic Apps generates a uniform random variable in the specified range for each retry up to and including the number of retries:
+For the exponential interval retry policy, the following table shows the general algorithm that Azure Logic Apps uses to generate a uniform random variable in the specified range for each retry, up to and including the number of retries.
| Retry number | Minimum interval | Maximum interval | |--|||
This table shows how Logic Apps generates a uniform random variable in the speci
<a name="control-run-after-behavior"></a>
-## Catch and handle failures by changing "run after" behavior
+## Manage the "run after" behavior
+
+When you add actions in the workflow designer, you implicitly declare the order to use for running those actions. After an action finishes running, that action is marked with a status such as **Succeeded**, **Failed**, **Skipped**, or **TimedOut**. By default, an action that you add in the designer runs only after the predecessor completes with **Succeeded** status. In an action's underlying definition, the `runAfter` property specifies the predecessor action that must first finish and the statuses permitted for that predecessor before the successor action can run.
+
+When an action throws an unhandled error or exception, the action is marked **Failed**, and any successor action is marked **Skipped**. If this behavior happens for an action that has parallel branches, the Azure Logic Apps engine follows the other branches to determine their completion statuses. For example, if a branch ends with a **Skipped** action, that branch's completion status is based on that skipped action's predecessor status. After the workflow run completes, the engine determines the entire run's status by evaluating all the branch statuses. If any branch ends in failure, the entire workflow run is marked **Failed**.
+
+![Conceptual diagram with examples that show how run statuses are evaluated.](./media/logic-apps-exception-handling/status-evaluation-for-parallel-branches.png)
+
+To make sure that an action can still run despite its predecessor's status, you can change an action's "run after" behavior to handle the predecessor's unsuccessful statuses. That way, the action runs when the predecessor's status is **Succeeded**, **Failed**, **Skipped**, **TimedOut**, or any combination of these statuses.
+
+For example, to run the Office 365 Outlook **Send an email** action after the Excel Online **Add a row into a table** predecessor action is marked **Failed**, rather than **Succeeded**, change the "run after" behavior using either the designer or code view editor.
+
+> [!NOTE]
+>
+> In the designer, the "run after" setting doesn't apply to the action that immediately
+> follows the trigger, as the trigger must run successfully before the first action can run.
+
+<a name="change-run-after-designer"></a>
+
+### Change "run after" behavior in the designer
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open the logic app workflow in the designer.
+
+1. On the action shape, open the ellipses menu (**...**), and select **Configure run after**.
-When you add actions in the Logic App Designer, you implicitly declare the order to use for running those actions. After an action finishes running, that action is marked with a status such as `Succeeded`, `Failed`, `Skipped`, or `TimedOut`. In each action definition, the `runAfter` property specifies the predecessor action that must first finish and the statuses permitted for that predecessor before the successor action can run. By default, an action that you add in the designer runs only after the predecessor completes with `Succeeded` status.
+ ![Screenshot showing Consumption workflow designer and current action with ellipses and "Configure run after" selected.](./media/logic-apps-exception-handling/configure-run-after-consumption.png)
-When an action throws an unhandled error or exception, the action is marked `Failed`, and any successor action is marked `Skipped`. If this behavior happens for an action that has parallel branches, the Logic Apps engine follows the other branches to determine their completion statuses. For example, if a branch ends with a `Skipped` action, that branch's completion status is based on that skipped action's predecessor status. After the logic app run completes, the engine determines the entire run's status by evaluating all the branch statuses. If any branch ends in failure, the entire logic app run is marked `Failed`.
+ The action shape expands and shows the predecessor action for the currently selected action.
-![Examples that show how run statuses are evaluated](./media/logic-apps-exception-handling/status-evaluation-for-parallel-branches.png)
+ ![Screenshot showing Consumption workflow designer, current action, and "run after" status for predecessor action.](./media/logic-apps-exception-handling/predecessor-action-consumption.png)
-To make sure that an action can still run despite its predecessor's status, [customize an action's "run after" behavior](#customize-run-after) to handle the predecessor's unsuccessful statuses.
+1. Expand the predecessor action node to view all the "run after" statuses.
-<a name="customize-run-after"></a>
+ By default, the "run after" status is set to **is successful**. So, the predecessor action must run successfully before the currently selected action can run.
-### Customize "run after" behavior
+ ![Screenshot showing Consumption designer, current action, and default "run after" set to "is successful".](./media/logic-apps-exception-handling/default-run-after-status-consumption.png)
-You can customize an action's "run after" behavior so that the action runs when the predecessor's status is either `Succeeded`, `Failed`, `Skipped`, `TimedOut`, or any of these statuses. For example, to send an email after the Excel Online `Add_a_row_into_a_table` predecessor action is marked `Failed`, rather than `Succeeded`, change the "run after" behavior by following either step:
+1. Change the "run after" behavior to the status that you want. Make sure that you select a new option before you clear the default option. At least one option must always be selected.
-* In the design view, select the ellipses (**...**) button, and then select **Configure run after**.
+ The following example selects **has failed**.
- ![Configure "run after" behavior for an action](./media/logic-apps-exception-handling/configure-run-after-property-setting.png)
+ ![Screenshot showing Consumption designer, current action, and "run after" set to "has failed".](./media/logic-apps-exception-handling/failed-run-after-status-consumption.png)
- The action shape shows the default status that's required for the predecessor action, which is **Add a row into a table** in this example:
+1. To specify that the current action runs whether the predecessor action is marked as **Failed**, **Skipped**, or **TimedOut**, select the other statuses.
- ![Default "run after" behavior for an action](./media/logic-apps-exception-handling/change-run-after-property-status.png)
+ ![Screenshot showing Consumption designer, current action, and multiple "run after" statuses selected.](./media/logic-apps-exception-handling/run-after-multiple-statuses-consumption.png)
- Change the "run after" behavior to the status that you want, which is **has failed** in this example:
+1. When you're ready, select **Done**.
- ![Change "run after" behavior to "has failed"](./media/logic-apps-exception-handling/run-after-property-status-set-to-failed.png)
+### [Standard](#tab/standard)
- To specify that the action runs whether the predecessor action is marked as `Failed`, `Skipped` or `TimedOut`, select the other statuses:
+1. In the [Azure portal](https://portal.azure.com), open the logic app workflow in the designer.
- ![Change "run after" behavior to have any other status](./media/logic-apps-exception-handling/run-after-property-multiple-statuses.png)
+1. On the designer, select the action shape. On the details pane, select **Run After**.
-* In code view, in the action's JSON definition, edit the `runAfter` property, which follows this syntax:
+ ![Screenshot showing Standard workflow designer and current action details pane with "Run After" selected.](./media/logic-apps-exception-handling/configure-run-after-standard.png)
- ```json
- "<action-name>": {
- "inputs": {
- "<action-specific-inputs>"
- },
- "runAfter": {
- "<preceding-action>": [
- "Succeeded"
- ]
- },
- "type": "<action-type>"
- }
- ```
+ The **Run After** pane shows the predecessor action for the currently selected action.
- For this example, change the `runAfter` property from `Succeeded` to `Failed`:
+ ![Screenshot showing Standard designer, current action, and "run after" status for predecessor action.](./media/logic-apps-exception-handling/predecessor-action-standard.png)
- ```json
- "Send_an_email_(V2)": {
- "inputs": {
- "body": {
- "Body": "<p>Failed to&nbsp;add row to &nbsp;@{body('Add_a_row_into_a_table')?['Terms']}</p>",,
- "Subject": "Add row to table failed: @{body('Add_a_row_into_a_table')?['Terms']}",
- "To": "Sophia.Owen@fabrikam.com"
- },
- "host": {
- "connection": {
- "name": "@parameters('$connections')['office365']['connectionId']"
- }
- },
- "method": "post",
- "path": "/v2/Mail"
- },
- "runAfter": {
- "Add_a_row_into_a_table": [
- "Failed"
- ]
- },
- "type": "ApiConnection"
- }
- ```
+1. Expand the predecessor action node to view all the "run after" statuses.
- To specify that the action runs whether the predecessor action is marked as `Failed`, `Skipped` or `TimedOut`, add the other statuses:
+ By default, the "run after" status is set to **is successful**. So, the predecessor action must run successfully before the currently selected action can run.
- ```json
- "runAfter": {
- "Add_a_row_into_a_table": [
- "Failed", "Skipped", "TimedOut"
- ]
- },
- ```
+ ![Screenshot showing Standard designer, current action, and default "run after" set to "is successful".](./media/logic-apps-exception-handling/change-run-after-status-standard.png)
+
+1. Change the "run after" behavior to the status that you want. Make sure that you select a new option before you clear the default option. At least one option must always be selected.
+
+ The following example selects **has failed**.
+
+ ![Screenshot showing Standard designer, current action, and "run after" set to "has failed".](./media/logic-apps-exception-handling/failed-run-after-status-standard.png)
+
+1. To specify that the current action runs whether the predecessor action is marked as **Failed**, **Skipped**, or **TimedOut**, select the other statuses.
+
+ ![Screenshot showing Standard designer, current action, and multiple "run after" statuses selected.](./media/logic-apps-exception-handling/run-after-multiple-statuses-standard.png)
+
+1. To require that more than one predecessor action runs, each with its own "run after" statuses, expand the **Select actions** list. Select the predecessor actions that you want, and specify their required "run after" statuses.
+
+ ![Screenshot showing Standard designer, current action, and multiple predecessor actions available.](./media/logic-apps-exception-handling/multiple-predecessor-actions-standard.png)
+
+1. When you're ready, select **Done**.
+++
+### Change "run after" behavior in the code view editor
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the code view editor.
+
+1. In the action's JSON definition, edit the `runAfter` property, which has the following syntax:
+
+ ```json
+ "<action-name>": {
+ "inputs": {
+ "<action-specific-inputs>"
+ },
+ "runAfter": {
+ "<preceding-action>": [
+ "Succeeded"
+ ]
+ },
+ "type": "<action-type>"
+ }
+ ```
+
+1. For this example, change the `runAfter` property from `Succeeded` to `Failed`:
+
+ ```json
+ "Send_an_email_(V2)": {
+ "inputs": {
+ "body": {
+ "Body": "<p>Failed to add row to table: @{body('Add_a_row_into_a_table')?['Terms']}</p>",
+ "Subject": "Add row to table failed: @{body('Add_a_row_into_a_table')?['Terms']}",
+ "To": "Sophia.Owen@fabrikam.com"
+ },
+ "host": {
+ "connection": {
+ "name": "@parameters('$connections')['office365']['connectionId']"
+ }
+ },
+ "method": "post",
+ "path": "/v2/Mail"
+ },
+ "runAfter": {
+ "Add_a_row_into_a_table": [
+ "Failed"
+ ]
+ },
+ "type": "ApiConnection"
+ }
+ ```
+
+1. To specify that the action runs whether the predecessor action is marked as `Failed`, `Skipped` or `TimedOut`, add the other statuses:
+
+ ```json
+ "runAfter": {
+ "Add_a_row_into_a_table": [
+ "Failed", "Skipped", "TimedOut"
+ ]
+ },
+ ```
<a name="scopes"></a> ## Evaluate actions with scopes and their results
-Similar to running steps after individual actions with the `runAfter` property, you can group actions together inside a [scope](../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md). You can use scopes when you want to logically group actions together, assess the scope's aggregate status, and perform actions based on that status. After all the actions in a scope finish running, the scope itself gets its own status.
+Similar to running steps after individual actions with the "run after" setting, you can group actions together inside a [scope](logic-apps-control-flow-run-steps-group-scopes.md). You can use scopes when you want to logically group actions together, assess the scope's aggregate status, and perform actions based on that status. After all the actions in a scope finish running, the scope itself gets its own status.
-To check a scope's status, you can use the same criteria that you use to check a logic app's run status, such as `Succeeded`, `Failed`, and so on.
+To check a scope's status, you can use the same criteria that you use to check a workflow run status, such as **Succeeded**, **Failed**, and so on.
-By default, when all the scope's actions succeed, the scope's status is marked `Succeeded`. If the final action in a scope results as `Failed` or `Aborted`, the scope's status is marked `Failed`.
+By default, when all the scope's actions succeed, the scope's status is marked **Succeeded**. If the final action in a scope is marked **Failed** or **Aborted**, the scope's status is marked **Failed**.
-To catch exceptions in a `Failed` scope and run actions that handle those errors, you can use the `runAfter` property for that `Failed` scope. That way, if *any* actions in the scope fail, and you use the `runAfter` property for that scope, you can create a single action to catch failures.
+To catch exceptions in a **Failed** scope and run actions that handle those errors, you can use the "run after" setting for that **Failed** scope. That way, if *any* action in the scope fails, you can create a single action that catches the failures.
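+
+For example, the following minimal sketch (the scope and action names are placeholders) runs an error-handling scope only when a scope named `Try_Scope` ends with **Failed** or **TimedOut** status. You add your error-handling actions inside the `actions` object:
+
+```json
+"Catch_Scope": {
+    "type": "Scope",
+    "actions": {},
+    "runAfter": {
+        "Try_Scope": [
+            "Failed",
+            "TimedOut"
+        ]
+    }
+}
+```
+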
-For limits on scopes, see [Limits and config](../logic-apps/logic-apps-limits-and-config.md).
+For limits on scopes, see [Limits and config](logic-apps-limits-and-config.md).
<a name="get-results-from-failures"></a> ### Get context and results for failures
-Although catching failures from a scope is useful, you might also want context to help you understand exactly which actions failed plus any errors or status codes that were returned. The [`result()` function](../logic-apps/workflow-definition-language-functions-reference.md#result) returns the results from the top-level actions in a scoped action by accepting a single parameter, which is the scope's name, and returning an array that contains the results from those first-level actions. These action objects include the same attributes as those returned by the `actions()` function, such as the action's start time, end time, status, inputs, correlation IDs, and outputs.
+Although catching failures from a scope is useful, you might also want more context to help you understand exactly which actions failed, plus any errors or status codes. The [`result()` function](workflow-definition-language-functions-reference.md#result) returns the results from the top-level actions in a scoped action. This function accepts the scope's name as a single parameter, and returns an array with the results from those top-level actions. These action objects have the same attributes as the attributes returned by the `actions()` function, such as the action's start time, end time, status, inputs, correlation IDs, and outputs.
> [!NOTE]
-> The `result()` function returns the results from *only* the first-level actions and not from deeper nested actions such as switch or condition actions.
+>
+> The `result()` function returns the results *only* from the top-level actions
+> and not from deeper nested actions such as switch or condition actions.
-To get context about the actions that failed in a scope, you can use the `@result()` expression with the scope's name and the `runAfter` property. To filter down the returned array to actions that have `Failed` status, you can add the [**Filter Array** action](logic-apps-perform-data-operations.md#filter-array-action). To run an action for a returned failed action, take the returned filtered array and use a [**For each** loop](../logic-apps/logic-apps-control-flow-loops.md).
+To get context about the actions that failed in a scope, you can use the `@result()` expression with the scope's name and the "run after" setting. To filter down the returned array to actions that have **Failed** status, you can add the [**Filter Array** action](logic-apps-perform-data-operations.md#filter-array-action). To run an action for a returned failed action, take the returned filtered array and use a [**For each** loop](logic-apps-control-flow-loops.md).
-Here's an example, followed by a detailed explanation, that sends an HTTP POST request with the response body for any actions that failed within the scope action named "My_Scope":
+The following JSON example sends an HTTP POST request with the response body for any actions that failed within the scope action named **My_Scope**. A detailed explanation follows the example.
```json "Filter_array": {
Here's an example, followed by a detailed explanation, that sends an HTTP POST r
} ```
-Here's a detailed walkthrough that describes what happens in this example:
+The following steps describe what happens in this example:
-1. To get the result from all actions inside "My_Scope", the **Filter Array** action uses this filter expression: `@result('My_Scope')`
+1. To get the result from all actions inside **My_Scope**, the **Filter Array** action uses this filter expression: `@result('My_Scope')`
-1. The condition for **Filter Array** is any `@result()` item that has a status equal to `Failed`. This condition filters the array that has all the action results from "My_Scope" down to an array with only the failed action results.
+1. The condition for **Filter Array** is any `@result()` item that has a status equal to `Failed`. This condition filters the array that has all the action results from **My_Scope** down to an array with only the failed action results.
1. Perform a `For_each` loop action on the *filtered array* outputs. This step performs an action for each failed action result that was previously filtered.
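For reference, the following is a minimal sketch of how these **Filter Array**, **For each**, and HTTP action definitions might look in the underlying workflow definition. The scope name **My_Scope** matches the example above, while the **Send_failed_results** action name and the target URI are illustrative placeholders:

```json
"Filter_array": {
    "type": "Query",
    "inputs": {
        "from": "@result('My_Scope')",
        "where": "@equals(item()['status'], 'Failed')"
    },
    "runAfter": {
        "My_Scope": [ "Failed" ]
    }
},
"For_each": {
    "type": "Foreach",
    "foreach": "@body('Filter_array')",
    "actions": {
        "Send_failed_results": {
            "type": "Http",
            "inputs": {
                "method": "POST",
                "uri": "https://contoso.example/workflow-errors",
                "body": "@item()"
            }
        }
    },
    "runAfter": {
        "Filter_array": [ "Succeeded" ]
    }
}
```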
To perform different exception handling patterns, you can use the expressions pr
## Set up Azure Monitor logs
-The previous patterns are great way to handle errors and exceptions within a run, but you can also identify and respond to errors independent of the run itself. [Azure Monitor](../azure-monitor/overview.md) provides a simple way to send all workflow events, including all run and action statuses, to a [Log Analytics workspace](../azure-monitor/logs/data-platform-logs.md), [Azure storage account](../storage/blobs/storage-blobs-overview.md), or [Azure Event Hubs](../event-hubs/event-hubs-about.md).
+The previous patterns are useful ways to handle errors and exceptions that happen within a run. However, you can also identify and respond to errors that happen independently from the run. [Azure Monitor](../azure-monitor/overview.md) provides a streamlined way to send all workflow events, including all run and action statuses, to a destination. For example, you can send events to a [Log Analytics workspace](../azure-monitor/logs/data-platform-logs.md), [Azure storage account](../storage/blobs/storage-blobs-overview.md), or [Azure Event Hubs](../event-hubs/event-hubs-about.md).
To evaluate run statuses, you can monitor the logs and metrics, or publish them into any monitoring tool that you prefer. One potential option is to stream all the events through Event Hubs into [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/). In Stream Analytics, you can write live queries based on any anomalies, averages, or failures from the diagnostic logs. You can use Stream Analytics to send information to other data sources, such as queues, topics, SQL, Azure Cosmos DB, or Power BI. ## Next steps
-* [See how a customer builds error handling with Azure Logic Apps](../logic-apps/logic-apps-scenario-error-and-exception-handling.md)
-* [Find more Logic Apps examples and scenarios](../logic-apps/logic-apps-examples-and-scenarios.md)
+* [See how a customer builds error handling with Azure Logic Apps](logic-apps-scenario-error-and-exception-handling.md)
+* [Find more Azure Logic Apps examples and scenarios](logic-apps-examples-and-scenarios.md)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
The following table lists the values for a single workflow run:
| Name | Multi-tenant | Single-tenant | Integration service environment | Notes | ||--|||-|
-| Run history retention in storage | 90 days | 90 days <br>(Default) | 366 days | The amount of time to keep a workflow's run history in storage after a run starts. <p><p>**Note**: If the workflow's run duration exceeds the retention limit, that run is removed from the run history in storage. If a run isn't immediately removed after reaching the retention limit, the run is removed within 7 days. <p><p>Whether a run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <p><p>For more information, review [Change duration and run history retention in storage](#change-retention). |
+| Run history retention in storage | 90 days | 90 days <br>(Default) | 366 days | The amount of time to keep a workflow's run history in storage after a run starts. <p><p>**Note**: If the workflow's run duration exceeds the retention limit, this run is removed from the run history in storage. If a run isn't immediately removed after reaching the retention limit, the run is removed within 7 days. <p><p>Whether a run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <p><p>For more information, review [Change duration and run history retention in storage](#change-retention). |
| Run duration | 90 days | - Stateful workflow: 90 days <br>(Default) <p><p>- Stateless workflow: 5 min <br>(Default) | 366 days | The amount of time that a workflow can continue running before forcing a timeout. <p><p>The run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. <p>**Important**: Make sure the run duration value is always less than or equal to the run history retention in storage value. Otherwise, run histories might be deleted before the associated jobs are complete. <p><p>For more information, review [Change run duration and history retention in storage](#change-duration). | | Recurrence interval | - Min: 1 sec <p><p>- Max: 500 days | - Min: 1 sec <p><p>- Max: 500 days | - Min: 1 sec <p><p>- Max: 500 days || ||||||
For more information about your logic app resource definition, review [Overview:
Azure Logic Apps supports write operations, including inserts and updates, through the on-premises data gateway. However, these operations have [limits on their payload size](/data-integration/gateway/service-gateway-onprem#considerations).
+<a name="retry-policy-limits"></a>
+
+## Retry policy limits
+
+The following table lists the retry policy limits for a trigger or action, based on whether you have a [Consumption or Standard logic app workflow](logic-apps-overview.md#resource-environment-differences).
+
+| Name | Consumption limit | Standard limit | Notes |
+||-|-|-|
+| Retry attempts | - Default: 4 attempts <br> - Max: 90 attempts | - Default: 4 attempts | To change the default limit in Consumption logic app workflows, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in Standard logic app workflows, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Retry interval | None | Default: 7 sec | To change the default limit in Consumption logic app workflows, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in Standard logic app workflows, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+|||||
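For reference, a retry policy is defined inside a trigger or action's `inputs` in the underlying workflow definition. The following minimal sketch shows a fixed retry policy with illustrative values that stay within the limits above; the action name and URI are placeholders:

```json
"HTTP": {
    "type": "Http",
    "inputs": {
        "method": "GET",
        "uri": "https://contoso.example/api/status",
        "retryPolicy": {
            "type": "fixed",
            "count": 4,
            "interval": "PT7S"
        }
    },
    "runAfter": {}
}
```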
+ <a name="variables-action-limits"></a> ## Variables action limits
By default, the HTTP action and APIConnection actions follow the [standard async
| Request URL character limit | 16,384 characters | | ||||
-<a name="retry-policy-limits"></a>
-
-### Retry policy
-
-| Name | Multi-tenant limit | Single-tenant limit | Notes |
-||--||-|
-| Retry attempts | - Default: 4 attempts <br> - Max: 90 attempts | - Default: 4 attempts | To change the default limit in the multi-tenant service, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Retry interval | None | Default: 7 sec | To change the default limit in the multi-tenant service, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Retry max delay | Default: 1 day | Default: 1 hour | To change the default limit in the multi-tenant service, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Retry min delay | Default: 5 sec | Default: 5 sec | To change the default limit in the multi-tenant service, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-|||||
- <a name="authentication-limits"></a> ### Authentication limits
logic-apps Quickstart Logic Apps Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-powershell.md
ms.suite: integration -
+ms.tool: azure-powershell
+ Last updated 05/03/2022
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
convertFromUtc('<timestamp>', '<destinationTimeZone>', '<format>'?)
| Return value | Type | Description | | | - | -- |
-| <*converted-timestamp*> | String | The timestamp converted to the target time zone |
+| <*converted-timestamp*> | String | The timestamp converted to the target time zone without the timezone UTC offset. |
|||| *Example 1*
machine-learning How To Manage Workspace Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-terraform.md
Last updated 01/05/2022
+ms.tool: terraform
# Manage Azure Machine Learning workspaces using Terraform
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
First you'll install the v2 SDK on your compute instance:
1. Now on the terminal, run the command: ```
- git clone --depth 1 https://github.com/Azure/azureml-examples --branch sdk-preview
+ git clone --depth 1 https://github.com/Azure/azureml-examples
``` 1. On the left, select **Notebooks**.
Before creating the pipeline, you'll set up the resources the pipeline will use:
Before we dive in the code, you'll need to connect to your Azure ML workspace. The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. -
-```python
-# handle to the workspace
-from azure.ai.ml import MLClient
-
-# Authentication package
-from azure.identity import DefaultAzureCredential
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=import-mlclient)]
In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find your Subscription ID: 1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
In the next cell, enter your Subscription ID, Resource Group name and Workspace
:::image type="content" source="media/tutorial-pipeline-python-sdk/find-info.png" alt-text="Screenshot shows how to find values needed for your code.":::
-```python
-# get a handle to the workspace
-ml_client = MLClient(
- DefaultAzureCredential(),
- subscription_id="<SUBSCRIPTION_ID>",
- resource_group_name="<RESOURCE_GROUP>",
- workspace_name="<AML_WORKSPACE_NAME>",
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=ml_client)]
The result is a handler to the workspace that you'll use to manage other resources and jobs.
The data you use for training is usually in one of the locations below:
Azure ML uses a `Data` object to register a reusable definition of data, and consume data within a pipeline. In the section below, you'll consume some data from web url as one example. Data from other sources can be created as well.
-```python
-from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
-web_path = "https://archive.ics.uci.edu/ml/machine-learning-databases/00350/default%20of%20credit%20card%20clients.xls"
-
-credit_data = Data(
- name="creditcard_defaults",
- path=web_path,
- type=AssetTypes.URI_FILE,
- description="Dataset for credit card defaults",
- tags={"source_type": "web", "source": "UCI ML Repo"},
- version='1.0.0'
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=credit_data)]
This code just created a `Data` asset, ready to be consumed as an input by the pipeline that you'll define in the next sections. In addition, you can register the dataset to your workspace so it becomes reusable across pipelines.
Registering the dataset will enable you to:
Since this is the first time that you're making a call to the workspace, you may be asked to authenticate. Once the authentication is complete, you'll then see the dataset registration completion message. -
-```python
-credit_data = ml_client.data.create_or_update(credit_data)
-print(
- f"Dataset with name {credit_data.name} was registered to workspace, the dataset version is {credit_data.version}"
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=update-credit_data)]
In the future, you can fetch the same dataset from the workspace using `credit_dataset = ml_client.data.get("<DATA ASSET NAME>", version='<VERSION>')`.
+## Create a compute resource to run your pipeline
+
+Each step of an Azure ML pipeline can use a different compute resource for running the specific job of that step. The compute can be single-node or multi-node machines with a Linux or Windows OS, or a specific compute fabric like Spark.
+
+In this section, you'll provision a Linux compute cluster.
+
+For this tutorial, you only need a basic cluster, so we'll create an Azure ML compute cluster that uses the Standard_DS3_v2 size, which has 2 vCPU cores and 7 GB of RAM.
+
+> [!TIP]
+> If you already have a compute cluster, replace "cpu-cluster" in the code below with the name of your cluster. This will keep you from creating another one.
+
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=cpu_cluster)]
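The notebook cell referenced above contains the provisioning code. As a rough sketch only, creating such a cluster with the v2 SDK generally looks like the following; the cluster name, scale settings, and idle timeout here are assumptions, and `ml_client` is the workspace handle you created earlier:

```python
from azure.ai.ml.entities import AmlCompute

# Name of the compute cluster; replace with your existing cluster name if you have one.
cpu_compute_target = "cpu-cluster"

# Define a compute cluster of Standard_DS3_v2 nodes that scales between 0 and 4 instances.
cpu_cluster = AmlCompute(
    name=cpu_compute_target,
    size="Standard_DS3_v2",
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=180,  # seconds a node stays idle before scaling down
    tier="Dedicated",
)

# Submit the request to create (or update) the cluster in the workspace.
ml_client.begin_create_or_update(cpu_cluster)
```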
## Create a job environment for pipeline steps
So far, you've created a development environment on the compute instance, your d
In this example, you'll create a conda environment for your jobs, using a conda yaml file. First, create a directory to store the file in. -
-```python
-import os
-dependencies_dir = "./dependencies"
-os.makedirs(dependencies_dir, exist_ok=True)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=dependencies_dir)]
Now, create the file in the dependencies directory.
-```python
-%%writefile {dependencies_dir}/conda.yml
-name: model-env
-channels:
- - conda-forge
-dependencies:
- - python=3.8
- - numpy=1.21.2
- - pip=21.2.4
- - scikit-learn=0.24.2
- - scipy=1.7.1
- - pandas>=1.1,<1.2
- - pip:
- - azureml-defaults==1.38.0
- - azureml-mlflow==1.38.0
- - inference-schema[numpy-support]==1.3.0
- - joblib==1.0.1
- - xlrd==2.0.1
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=conda.yml)]
The specification contains some usual packages, that you'll use in your pipeline (numpy, pip), together with some Azure ML specific packages (azureml-defaults, azureml-mlflow).
The Azure ML packages aren't mandatory to run Azure ML jobs. However, adding the
Use the *yaml* file to create and register this custom environment in your workspace:
-```Python
-from azure.ai.ml.entities import Environment
-
-custom_env_name = "aml-scikit-learn"
-
-pipeline_job_env = Environment(
- name=custom_env_name,
- description="Custom environment for Credit Card Defaults pipeline",
- tags={"scikit-learn": "0.24.2", "azureml-defaults": "1.38.0"},
- conda_file=os.path.join(dependencies_dir, "conda.yml"),
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
- version="1.0.0"
-)
-pipeline_job_env = ml_client.environments.create_or_update(pipeline_job_env)
-
-print(
- f"Environment with name {pipeline_job_env.name} is registered to workspace, the environment version is {pipeline_job_env.version}"
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=custom_env_name)]
## Build the training pipeline
Let's start by creating the first component. This component handles the preproce
First create a source folder for the data_prep component:
-```python
-import os
-
-data_prep_src_dir = "./components/data_prep"
-os.makedirs(data_prep_src_dir, exist_ok=True)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=data_prep_src_dir)]
This script performs the simple task of splitting the data into train and test datasets. Azure ML mounts datasets as folders to the computes, therefore, we created an auxiliary `select_first_file` function to access the data file inside the mounted input folder. [MLFlow](https://mlflow.org/docs/latest/tracking.html) will be used to log the parameters and metrics during our pipeline run.
-```python
-%%writefile {data_prep_src_dir}/data_prep.py
-import os
-import argparse
-import pandas as pd
-from sklearn.model_selection import train_test_split
-import logging
-import mlflow
--
-def main():
- """Main function of the script."""
-
- # input and output arguments
- parser = argparse.ArgumentParser()
- parser.add_argument("--data", type=str, help="path to input data")
- parser.add_argument("--test_train_ratio", type=float, required=False, default=0.25)
- parser.add_argument("--train_data", type=str, help="path to train data")
- parser.add_argument("--test_data", type=str, help="path to test data")
- args = parser.parse_args()
-
- # Start Logging
- mlflow.start_run()
-
- print(" ".join(f"{k}={v}" for k, v in vars(args).items()))
-
- print("input data:", args.data)
-
- credit_df = pd.read_excel(args.data, header=1, index_col=0)
-
- mlflow.log_metric("num_samples", credit_df.shape[0])
- mlflow.log_metric("num_features", credit_df.shape[1] - 1)
-
- credit_train_df, credit_test_df = train_test_split(
- credit_df,
- test_size=args.test_train_ratio,
- )
-
- # output paths are mounted as folder, therefore, we are adding a filename to the path
- credit_train_df.to_csv(os.path.join(args.train_data, "data.csv"), index=False)
-
- credit_test_df.to_csv(os.path.join(args.test_data, "data.csv"), index=False)
-
- # Stop Logging
- mlflow.end_run()
--
-if __name__ == "__main__":
- main()
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=def-main)]
Now that you have a script that can perform the desired task, create an Azure ML Component from it. You'll use the general purpose **CommandComponent** that can run command line actions. This command line action can directly call system commands or run a script. The inputs/outputs are specified on the command line via the `${{ ... }}` notation.
-```python
-%%writefile {data_prep_src_dir}/data_prep.yml
-# <component>
-name: data_prep_credit_defaults
-display_name: Data preparation for training
-# version: 1 # Not specifying a version will automatically update the version
-type: command
-inputs:
- data:
- type: uri_folder
- test_train_ratio:
- type: number
-outputs:
- train_data:
- type: uri_folder
- test_data:
- type: uri_folder
-code: .
-environment:
- # for this step, we'll use an AzureML curate environment
- azureml:aml-scikit-learn:1.0.0
-command: >-
- python data_prep.py
- --data ${{inputs.data}} --test_train_ratio ${{inputs.test_train_ratio}}
- --train_data ${{outputs.train_data}} --test_data ${{outputs.test_data}}
-# </component>
-```
-
-Once the `yaml` file and the script are ready, you can create your component using `load_component()`.
-
-```python
-# importing the Component Package
-from azure.ai.ml.entities import load_component
-
-# Loading the component from the yml file
-data_prep_component = load_component(yaml_file=os.path.join(data_prep_src_dir, "data_prep.yml"))
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=data_prep_component)]
Optionally, register the component in the workspace for future re-use.
-```python
-data_prep_component = ml_client.create_or_update(data_prep_component)
-
-print(
- f"Component {data_prep_component.name} with Version {data_prep_component.version} is registered"
-)
-```
- ## Create component 2: training (using yaml definition) The second component that you'll create will consume the training and test data, train a tree based model and return the output model. You'll use Azure ML logging capabilities to record and visualize the learning progress.
You used the `CommandComponent` class to create your first component. This time
Create the directory for this component:
-```python
-import os
-train_src_dir = "./components/train"
-os.makedirs(train_src_dir, exist_ok=True)
-```
-
-Create the training script in the directory:
-
-```python
-%%writefile {train_src_dir}/train.py
-import argparse
-from sklearn.ensemble import GradientBoostingClassifier
-from sklearn.metrics import classification_report
-from azureml.core.model import Model
-from azureml.core import Run
-import os
-import pandas as pd
-import joblib
-import mlflow
--
-def select_first_file(path):
- """Selects first file in folder, use under assumption there is only one file in folder
- Args:
- path (str): path to directory or file to choose
- Returns:
- str: full path of selected file
- """
- files = os.listdir(path)
- return os.path.join(path, files[0])
--
-# Start Logging
-mlflow.start_run()
-
-# enable autologging
-mlflow.sklearn.autolog()
-
-# This line creates a handles to the current run. It is used for model registration
-run = Run.get_context()
-
-os.makedirs("./outputs", exist_ok=True)
--
-def main():
- """Main function of the script."""
-
- # input and output arguments
- parser = argparse.ArgumentParser()
- parser.add_argument("--train_data", type=str, help="path to train data")
- parser.add_argument("--test_data", type=str, help="path to test data")
- parser.add_argument("--n_estimators", required=False, default=100, type=int)
- parser.add_argument("--learning_rate", required=False, default=0.1, type=float)
- parser.add_argument("--registered_model_name", type=str, help="model name")
- parser.add_argument("--model", type=str, help="path to model file")
- args = parser.parse_args()
-
- # paths are mounted as folder, therefore, we are selecting the file from folder
- train_df = pd.read_csv(select_first_file(args.train_data))
-
- # Extracting the label column
- y_train = train_df.pop("default payment next month")
-
- # convert the dataframe values to array
- X_train = train_df.values
-
- # paths are mounted as folder, therefore, we are selecting the file from folder
- test_df = pd.read_csv(select_first_file(args.test_data))
-
- # Extracting the label column
- y_test = test_df.pop("default payment next month")
-
- # convert the dataframe values to array
- X_test = test_df.values
-
- print(f"Training with data of shape {X_train.shape}")
-
- clf = GradientBoostingClassifier(
- n_estimators=args.n_estimators, learning_rate=args.learning_rate
- )
- clf.fit(X_train, y_train)
-
- y_pred = clf.predict(X_test)
-
- print(classification_report(y_test, y_pred))
-
- # setting the full path of the model file
- model_file = os.path.join(args.model, "model.pkl")
- with open(model_file, "wb") as mf:
- joblib.dump(clf, mf)
-
- # Registering the model to the workspace
- model = Model.register(
- run.experiment.workspace,
- model_name=args.registered_model_name,
- model_path=model_file,
- tags={"type": "sklearn.GradientBoostingClassifier"},
- description="Model created in Azure ML on credit card defaults dataset",
- )
-
- # Stop Logging
- mlflow.end_run()
--
-if __name__ == "__main__":
- main()
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=train_src_dir)]
As you can see in this training script, once the model is trained, the model file is saved and registered to the workspace. Now you can use the registered model in inferencing endpoints.
For the environment of this step, you'll use one of the built-in (curated) Azure
First, create the *yaml* file describing the component:
-```python
-%%writefile {train_src_dir}/train.yml
-# <component>
-name: train_credit_defaults_model
-display_name: Train Credit Defaults Model
-# version: 1 # Not specifying a version will automatically update the version
-type: command
-inputs:
- train_data:
- type: uri_folder
- test_data:
- type: uri_folder
- learning_rate:
- type: number
- registered_model_name:
- type: string
-outputs:
- model:
- type: uri_folder
-code: .
-environment:
- # for this step, we'll use an AzureML curate environment
- azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:21
-command: >-
- python train.py
- --train_data ${{inputs.train_data}}
- --test_data ${{inputs.test_data}}
- --learning_rate ${{inputs.learning_rate}}
- --registered_model_name ${{inputs.registered_model_name}}
- --model ${{outputs.model}}
-# </component>
-
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=train.yml)]
Now create and register the component:
-```python
-# importing the Component Package
-from azure.ai.ml.entities import load_component
-
-# Loading the component from the yml file
-train_component = load_component(yaml_file=os.path.join(train_src_dir, "train.yml"))
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=train_component)]
-```python
-# Now we register the component to the workspace
-train_component = ml_client.create_or_update(train_component)
-
-# Create (register) the component in your workspace
-print(
- f"Component {train_component.name} with Version {train_component.version} is registered"
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=update-train_component)]
## Create the pipeline from components
To code the pipeline, you use a specific `@dsl.pipeline` decorator that identifi
Here, we used *input data*, *split ratio* and *registered model name* as input variables. We then call the components and connect them via their inputs/outputs identifiers. The outputs of each step can be accessed via the `.outputs` property.
-> [!IMPORTANT]
-> In the code below, replace `<CPU-CLUSTER-NAME>` with the name you used when you created a compute cluster in the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md).
-
-```python
-# the dsl decorator tells the sdk that we are defining an Azure ML pipeline
-from azure.ai.ml import dsl, Input, Output
-
-@dsl.pipeline(
- compute="<CPU-CLUSTER-NAME>",
- description="E2E data_perp-train pipeline",
-)
-def credit_defaults_pipeline(
- pipeline_job_data_input,
- pipeline_job_test_train_ratio,
- pipeline_job_learning_rate,
- pipeline_job_registered_model_name,
-):
- # using data_prep_function like a python call with its own inputs
- data_prep_job = data_prep_component(
- data=pipeline_job_data_input,
- test_train_ratio=pipeline_job_test_train_ratio,
- )
-
- # using train_func like a python call with its own inputs
- train_job = train_component(
- train_data=data_prep_job.outputs.train_data, # note: using outputs from previous step
- test_data=data_prep_job.outputs.test_data, # note: using outputs from previous step
- learning_rate=pipeline_job_learning_rate, # note: using a pipeline input as parameter
- registered_model_name=pipeline_job_registered_model_name,
- )
-
- # a pipeline returns a dict of outputs
- # keys will code for the pipeline output identifier
- return {
- "pipeline_job_train_data": data_prep_job.outputs.train_data,
- "pipeline_job_test_data": data_prep_job.outputs.test_data,
- }
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=pipeline)]
Now use your pipeline definition to instantiate a pipeline with your dataset, split rate of choice and the name you picked for your model.
-```python
-registered_model_name = "credit_defaults_model"
-
-# Let's instantiate the pipeline with the parameters of our choice
-pipeline = credit_defaults_pipeline(
- # pipeline_job_data_input=credit_data,
- pipeline_job_data_input=Input(type="uri_file", path=web_path),
- pipeline_job_test_train_ratio=0.2,
- pipeline_job_learning_rate=0.25,
- pipeline_job_registered_model_name=registered_model_name,
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=registered_model_name)]
## Submit the job
Here you'll also pass an experiment name. An experiment is a container for all t
Once completed, the pipeline will register a model in your workspace as a result of training.
-```python
-import webbrowser
-# submit the pipeline job
-returned_job = ml_client.jobs.create_or_update(
- pipeline,
-
- # Project's name
- experiment_name="e2e_registered_components",
-)
-# open the pipeline in web browser
-webbrowser.open(returned_job.services["Studio"].endpoint)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=returned_job)]
An output of "False" is expected from the above cell. You can track the progress of your pipeline, by using the link generated in the cell above.
Now deploy your machine learning model as a web service in the Azure cloud.
To deploy a machine learning service, you'll usually need: * The model assets (files, metadata) that you want to deploy. You've already registered these assets in your training component.
-* Some code to run as a service. The code executes the model on a given input request. This entry script receives data submitted to a deployed web service and passes it to the model, then returns the model's response to the client. The script is specific to your model. The entry script must understand the data that the model expects and returns.
-
-## Create an inference script
-
-The two things you need to accomplish in your inference script are:
-
-* Load your model (using a function called `init()`)
-* Run your model on input data (using a function called `run()`)
-
-In the following implementation the `init()` function loads the model, and the run function expects the data in `json` format with the input data stored under `data`.
-
-```python
-deploy_dir = "./deploy"
-os.makedirs(deploy_dir, exist_ok=True)
-```
-
-```python
-%%writefile {deploy_dir}/score.py
-import os
-import logging
-import json
-import numpy
-import joblib
--
-def init():
- """
- This function is called when the container is initialized/started, typically after create/update of the deployment.
- You can write the logic here to perform init operations like caching the model in memory
- """
- global model
- # AZUREML_MODEL_DIR is an environment variable created during deployment.
- # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
- model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")
- # deserialize the model file back into a sklearn model
- model = joblib.load(model_path)
- logging.info("Init complete")
--
-def run(raw_data):
- """
- This function is called for every invocation of the endpoint to perform the actual scoring/prediction.
- In the example we extract the data from the json input and call the scikit-learn model's predict()
- method and return the result back
- """
- logging.info("Request received")
- data = json.loads(raw_data)["data"]
- data = numpy.array(data)
- result = model.predict(data)
- logging.info("Request processed")
- return result.tolist()
-```
+* Some code to run as a service. The code executes the model on a given input request. This entry script receives data submitted to a deployed web service, passes it to the model, and then returns the model's response to the client. The script is specific to your model and must understand the data that the model expects and returns. When you use an MLflow model, as in this tutorial, this script is automatically created for you.
## Create a new online endpoint Now that you have a registered model and an inference script, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you'll create a unique name using [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier).
-```python
-import uuid
-
-# Creating a unique name for the endpoint
-online_endpoint_name = "credit-endpoint-" + str(uuid.uuid4())[:8]
-
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=online_endpoint_name)]
-```Python
-from azure.ai.ml.entities import (
- ManagedOnlineEndpoint,
- ManagedOnlineDeployment,
- CodeConfiguration,
- Model,
- Environment,
-)
-
-# create an online endpoint
-endpoint = ManagedOnlineEndpoint(
- name=online_endpoint_name,
- description="this is an online endpoint",
- auth_mode="key",
- tags={
- "training_dataset": "credit_defaults",
- "model_type": "sklearn.GradientBoostingClassifier",
- },
-)
-
-endpoint = ml_client.begin_create_or_update(endpoint)
-
-print(f"Endpint {endpoint.name} provisioning state: {endpoint.provisioning_state}")
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=endpoint)]
Once you've created an endpoint, you can retrieve it as below:
-```python
-endpoint = ml_client.online_endpoints.get(name = online_endpoint_name)
-
-print(f"Endpint \"{endpoint.name}\" with provisioning state \"{endpoint.provisioning_state}\" is retrieved")
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=update-endpoint)]
## Deploy the model to the endpoint
Once the endpoint is created, deploy the model with the entry script. Each endpo
You can check the *Models* page on the Azure ML studio, to identify the latest version of your registered model. Alternatively, the code below will retrieve the latest version number for you to use. -
-```python
-# Let's pick the latest version of the model
-latest_model_version = max(
- [int(m.version) for m in ml_client.models.list(name=registered_model_name)]
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=latest_model_version)]
Deploy the latest version of the model. > [!NOTE] > Expect this deployment to take approximately 6 to 8 minutes. -
-```python
-# picking the model to deploy. Here we use the latest version of our registered model
-model = ml_client.models.get(name=registered_model_name, version=latest_model_version)
--
-#create an online deployment.
-blue_deployment = ManagedOnlineDeployment(
- name='blue',
- endpoint_name=online_endpoint_name,
- model=model,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:21",
- code_configuration=CodeConfiguration(
- code=deploy_dir,
- scoring_script="score.py"),
- instance_type='Standard_DS3_v2',
- instance_count=1)
-
-blue_deployment = ml_client.begin_create_or_update(blue_deployment)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=model)]
### Test with a sample query
Now that the model is deployed to the endpoint, you can run inference with it.
Create a sample request file following the design expected in the run method in the score script.
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=sample-request.json)]
-```python
-%%writefile {deploy_dir}/sample-request.json
-{"data": [
- [20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0],
- [10,9,8,7,6,5,4,3,2,1, 10,9,8,7,6,5,4,3,2,1,10,9,8]
-]}
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=write-sample-request)]
-```python
-# test the blue deployment with some sample data
-ml_client.online_endpoints.invoke(
- endpoint_name=online_endpoint_name,
- request_file="./deploy/sample-request.json",
- deployment_name='blue'
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=ml_client.online_endpoints.invoke)]
## Clean up resources
If you're not going to use the endpoint, delete it to stop using the resource.
> [!NOTE] > Expect this step to take approximately 6 to 8 minutes.
-```python
-ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=ml_client.online_endpoints.begin_delete)]
## Next steps
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
Title: Plan a SaaS offer for the Microsoft commercial marketplace - Azure Marketplace
-description: Plan for a new software as a service (SaaS) offer for listing or selling in Microsoft AppSource, Azure Marketplace, or through the Cloud Solution Provider (CSP) program using the commercial marketplace program in Microsoft Partner Center.
+description: Plan a new software as a service (SaaS) offer for selling in Microsoft AppSource, Azure Marketplace, or through the Cloud Solution Provider (CSP) program using the commercial marketplace program in Microsoft Partner Center.
Previously updated : 10/26/2021 Last updated : 05/26/2022 # Plan a SaaS offer for the commercial marketplace
When you publish a SaaS offer, it will be listed in Microsoft AppSource, Azure M
If your SaaS offer is *both* an IT solution (Azure Marketplace) and a business solution (AppSource), select a category and a subcategory applicable to each online store. Offers published to both online stores should have a value proposition as an IT solution *and* a business solution. > [!IMPORTANT]
-> SaaS offers with [metered billing](partner-center-portal/saas-metered-billing.md) are available through Azure Marketplace and the Azure portal. SaaS offers with only private plans are available through the Azure portal and AppSource.
+> SaaS offers with [metered billing](partner-center-portal/saas-metered-billing.md) are available through Azure Marketplace and the Azure portal. SaaS offers with only private plans are available only through the Azure portal.
| Metered billing | Public plan | Private plan | Available in: |
|||||
| Yes | Yes | No | Azure Marketplace and Azure portal |
| Yes | Yes | Yes | Azure Marketplace and Azure portal* |
| Yes | No | Yes | Azure portal only |
-| No | No | Yes | Azure portal and AppSource |
+| No | No | Yes | Azure portal only |
-&#42; The private plan of the offer will only be available via the Azure portal and AppSource.
+&#42; The private plan of the offer will only be available via the Azure portal.
For example, an offer with metered billing and a private plan only (no public plan), will be purchased by customers in the Azure portal. Learn more about [Private offers in Microsoft commercial marketplace](private-offers.md).
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
Modifying the parameter `replicate_wild_ignore_table` used to create replication
- The source server version must be at least MySQL version 5.7. - Our recommendation is to have the same version for source and replica server versions. For example, both must be MySQL version 5.7 or both must be MySQL version 8.0.-- Our recommendation is to have a primary key in each table. If we have table without primary key, you might face slowness in replication.
+- Our recommendation is to have a primary key in each table. If a table doesn't have a primary key, you might see slower replication. To create primary keys for tables, you can use an [invisible column](https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html) if your MySQL version is 8.0.23 or later.
- The source server should use the MySQL InnoDB engine. - User must have permissions to configure binary logging and create new users on the source server. - Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL refer how to configure binlog_expire_logs_seconds for [Flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds)
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
Automatic backups, both snapshots and log backups, are performed on locally redu
>[!Note] >For both zone-redundant and same-zone HA:
->* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds.
+>* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds. To create primary keys for tables, you can use an [invisible column](https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html) if your MySQL version is 8.0.23 or later.
>* The standby server isn't available for read or write operations. It's a passive standby to enable fast failover. >* Always use a fully qualified domain name (FQDN) to connect to your primary server. Avoid using an IP address to connect. If there's a failover, after the primary and standby server roles are switched, a DNS A record might change. That change would prevent the application from connecting to the new primary server if an IP address is used in the connection string.
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-azure-cli.md
-+ Last updated 03/01/2021
+ms.tool: azure-cli
# Quickstart: Connect and query with Azure CLI with Azure Database for MySQL - Flexible Server
mysql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-certificate-rotation.md
On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www
#### Do I need to make any changes on my client to maintain connectivity?
-No change is required on client side. If you followed our previous recommendation below, you can continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.**
+> [!NOTE]
+> If you're using the PHP driver with [enableRedirect](./how-to-redirection.md), follow the steps under [Create a combined CA certificate](#create-a-combined-ca-certificate) to avoid connection failures.
+
+No change is required on the client side. If you followed the steps under [Create a combined CA certificate](#create-a-combined-ca-certificate) below, you can continue to connect as long as the **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.**
-###### Previous recommendation
+###### Create a combined CA certificate
To avoid interruption of your application's availability as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the following steps. The idea is to create a new *.pem* file, which combines the current cert and the new one and during the SSL cert validation, one of the allowed values will be used. Refer to the following steps:
To verify if you're using SSL connection to connect to the server refer [SSL ver
No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
+#### Why do I need to update my root certificate if I'm using the PHP driver with [enableRedirect](./how-to-redirection.md)?
+To address compliance requirements, the CA certificates of the host server were changed from BaltimoreCyberTrustRoot to DigiCertGlobalRootG2. With this update, clients that use the PHP driver with enableRedirect can no longer connect to the server, because those client devices are unaware of the certificate change and the new root CA details. Client devices that use PHP redirection drivers connect directly to the host server, bypassing the gateway. For more on the architecture of Azure Database for MySQL Single Server, see the [overview](single-server-overview.md#high-availability).
+ #### What if I have further questions? For questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforMySQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforMySQL@service.microsoft.com).
mysql How To Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-redirection.md
If you are using an older version of the mysqlnd_azure extension (version 1.0.0-
|`on` or `1`|- If the connection does not use SSL on the driver side, no connection will be made. The following error will be returned: *"mysqlnd_azure.enableRedirect is on, but SSL option is not set in connection string. Redirection is only possible with SSL."*<br>- If SSL is used on the driver side, but redirection is not supported on the server, the first connection is aborted and the following error is returned: *"Connection aborted because redirection is not enabled on the MySQL server or the network package doesn't meet redirection protocol."*<br>- If the MySQL server supports redirection, but the redirected connection failed for any reason, also abort the first proxy connection. Return the error of the redirected connection.| |`preferred` or `2`<br> (default value)|- mysqlnd_azure will use redirection if possible.<br>- If the connection does not use SSL on the driver side, the server does not support redirection, or the redirected connection fails to connect for any non-fatal reason while the proxy connection is still a valid one, it will fall back to the first proxy connection.|
+To connect successfully to Azure Database for MySQL Single Server with `mysqlnd_azure.enableRedirect`, you must follow the mandatory steps for combining your root certificates to meet the compliance requirements. For more information, see [Do I need to make any changes on my client to maintain connectivity?](./concepts-certificate-rotation.md#do-i-need-to-make-any-changes-on-my-client-to-maintain-connectivity).
+ The subsequent sections of the document will outline how to install the `mysqlnd_azure` extension using PECL and set the value of this parameter. ### Ubuntu Linux
openshift Howto Enable Fips Openshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-enable-fips-openshift.md
+
+ Title: Enable FIPS on an Azure Red Hat OpenShift cluster
+description: Learn how to enable FIPS on an Azure Red Hat OpenShift cluster.
++ Last updated : 5/5/2022++
+keywords: aro, openshift, az aro, red hat, cli, azure, FIPS
+#Customer intent: I need to understand how to enable FIPS on an Azure Red Hat OpenShift cluster.
++
+# Enable FIPS for an Azure Red Hat OpenShift cluster
+
+This article explains how to enable Federal Information Processing Standard (FIPS) for an Azure Red Hat OpenShift cluster.
+
+The Federal Information Processing Standard (FIPS) 140 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. Testing against the FIPS 140 standard is maintained by the Cryptographic Module Validation Program (CMVP), a joint effort between the US National Institute of Standards and Technology (NIST) and the Canadian Centre for Cyber Security, a branch of the Communications Security Establishment (CSE) of Canada.
+
+## Support for FIPS cryptography
+
+Starting with Release 4.10, you can deploy an Azure Red Hat OpenShift cluster in FIPS mode. FIPS mode ensures the control plane is using FIPS 140-2 cryptographic modules. All workloads and operators deployed on a cluster need to use FIPS 140-2 in order to be FIPS compliant.
+
+You can install an Azure Red Hat OpenShift cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture.
+
+> [!NOTE]
+> If you're using Azure File storage, you can't enable FIPS mode.
+
+## To enable FIPS on your Azure Red Hat OpenShift cluster
+
+To enable FIPS on your Azure Red Hat OpenShift cluster, define the `$RESOURCEGROUP` and `$CLUSTER` parameters as environment variables, and then create the cluster with the `--fips` flag:
+
+```azurecli-interactive
+az aro create \
+ --resource-group $RESOURCEGROUP \
+ --name $CLUSTER \
+ --vnet aro-vnet \
+ --master-subnet master-subnet \
+ --worker-subnet worker-subnet \
+ --fips
+```
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-azure-cli.md
-+
+ms.tool: azure-cli
Last updated 11/30/2021
postgresql Quickstart Create Postgresql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-azure-powershell.md
ms.devlang: azurepowershell-
+ms.tool: azure-powershell
+ Last updated 06/08/2020
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table for the mobile network site resour
|The region in which youΓÇÖre creating the mobile network site resource. We recommend that you use the East US region. |**Instance details: Region**| |The mobile network resource representing the private mobile network to which youΓÇÖre adding the site. |**Instance details: Mobile network**|
-## Collect custom location information
+## Collect packet core configuration values
-Identify the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. You commissioned the AKS-HCI cluster as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
+Collect all the values in the following table for the packet core instance that will run in the site.
+
+ |Value |Field name in Azure portal |
+ |||
+ |The core technology type the packet core instance should support (5G or 4G). |**Technology type**|
+ |The custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. You commissioned the AKS-HCI cluster as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).</br></br> If you're going to create your site using the Azure portal, collect the name of the custom location.</br></br> If you're going to create your site using an ARM template, collect the full resource ID of the custom location.|**Custom location**|
-- If you're going to create your site using the Azure portal, collect the name of the custom location.-- If you're going to create your site using an ARM template, collect the full resource ID of the custom location. ## Collect access network values
-Collect all the values in the following table to define the packet core instance's connection to the access network over the N2 and N3 interfaces.
+Collect all the values in the following table to define the packet core instance's connection to the access network over the control plane and user plane interfaces. The field name displayed in the Azure portal will depend on the value you have chosen for **Technology type**, as described in [Collect packet core configuration values](#collect-packet-core-configuration-values).
> [!IMPORTANT]
-> Where noted, you must use the same values you used when deploying the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device for this site. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
+> For all values in this table, you must use the same values you used when deploying the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device for this site. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
|Value |Field name in Azure portal | |||
- | The IP address for the packet core instance N2 signaling interface. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 address (signaling)**|
- | The IP address for the packet core instance N3 interface. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |N/A. You'll only need this value if you're using an ARM template to create the site.|
- | The network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 subnet** and **N3 subnet**|
- | The access subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 gateway** and **N3 gateway**|
+ | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface. |**N2 address (signaling)** (for 5G) or **S1-MME address** (for 4G).|
+ | The IP address for the user plane interface on the access network. For 5G, this interface is the N3 interface, whereas for 4G, it's the S1-U interface. |N/A. You'll only need this value if you're using an ARM template to create the site.|
+ | The network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. |**N2 subnet** and **N3 subnet** (for 5G), or **S1-MME subnet** and **S1-U subnet** (for 4G).|
+ | The access subnet default gateway. |**N2 gateway** and **N3 gateway** (for 5G), or **S1-MME gateway** and **S1-U gateway** (for 4G).|
## Collect data network values
-Collect all the values in the following table to define the packet core instance's connection to the data network over the N6 interface.
+Collect all the values in the following table to define the packet core instance's connection to the data network over the user plane interface.
> [!IMPORTANT] > Where noted, you must use the same values you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
Collect all the values in the following table to define the packet core instance
|Value |Field name in Azure portal | ||| |The name of the data network. |**Data network**|
- | The IP address for the packet core instance N6 interface. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |N/A. You'll only need this value if you're using an ARM template to create the site.|
- |The network address of the data subnet in CIDR notation. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6 subnet**|
- |The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6 gateway**|
- | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
- | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this if you don't want to support static IP address allocation for this site. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
+ | The IP address for the user plane interface on the data network. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface. You identified the IP address for this interface in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |N/A. You'll only need this value if you're using an ARM template to create the site.|
+ |The network address of the data subnet in CIDR notation. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6/SGi subnet**|
+ |The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6/SGi gateway**|
+ | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
+ | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses. |**NAPT**| ## Next steps
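As the table notes, the UE IP pools are separate from the access and data subnets. If you want a quick consistency check before creating the site, the sketch below uses the standard `ipaddress` module to test for overlaps; all addresses are placeholders, and the assumption that the dynamic and static pools should not overlap each other or the data subnet is made for this sketch.

```python
import ipaddress

# Placeholder values - substitute your own data subnet and UE IP pools.
data_subnet = ipaddress.ip_network("10.0.0.0/24")            # N6/SGi subnet
dynamic_ue_pool = ipaddress.ip_network("198.51.100.0/24")    # Dynamic UE IP pool prefix
static_ue_pool = ipaddress.ip_network("203.0.113.0/24")      # Static UE IP pool prefix

for name, pool in [("Dynamic UE IP pool", dynamic_ue_pool),
                   ("Static UE IP pool", static_ue_pool)]:
    if pool.overlaps(data_subnet):
        raise ValueError(f"{name} {pool} overlaps the data subnet {data_subnet}")

if dynamic_ue_pool.overlaps(static_ue_pool):
    raise ValueError("The dynamic and static UE IP pools overlap.")

print("UE IP pools are consistent.")
```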
private-5g-core Collect Required Information For Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-service.md
Collect each of the values in the table below for your service.
| The name of the service. This name must only contain alphanumeric characters, dashes, or underscores. You also must not use any of the following reserved strings: *default*; *requested*; *service*. | **Service name** |Yes| | A precedence value that the packet core instance must use to decide between services when identifying the QoS values to offer. This value must be an integer between 0 and 255 and must be unique among all services configured on the packet core instance. A lower value means a higher priority. | **Service precedence** |Yes| | The maximum bit rate (MBR) for uplink traffic (traveling away from user equipment (UEs)) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Uplink** | Yes|
-| The maximum bit rate (MBR) for downlink traffic (traveling towards UEs) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Downlink** | Yes|
-| The default QoS Flow Allocation and Retention Policy (ARP) priority level for this service. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). See 3GPP TS 23.501 for a full description of the ARP parameters. | **Allocation and Retention Priority level** |No. Defaults to 9.|
-| The default 5G QoS Indicator (5QI) value for this service. The 5QI value identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows. See 3GPP TS 23.501 for a full description of the 5QI parameter. </br></br>We recommend you choose a 5QI value that corresponds to a non-GBR QoS Flow (as described in 3GPP TS 23.501). Non-GBR QoS Flows are in the following ranges: 5-9; 69-70; 79-80.</br></br>You can also choose a non-standardized 5QI value.</p><p>Azure Private 5G Core doesn't support 5QI values corresponding GBR or delay-critical GBR QoS Flows. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** |No. Defaults to 9.|
-| The default QoS Flow preemption capability for QoS Flows for this service. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. You can choose from the following values: </br></br>- **May not preempt** </br>- **May preempt** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption capability** |No. Defaults to **May not preempt**.|
-| The default QoS Flow preemption vulnerability for QoS Flows for this service. The preemption vulnerability of a QoS Flow controls whether it can be preempted another QoS Flow with a higher priority level. You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption vulnerability** |No. Defaults to **Preemptable**.|
+| The MBR for downlink traffic (traveling towards UEs) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Downlink** | Yes|
+| The default Allocation and Retention Policy (ARP) priority level for this service. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). | **Allocation and Retention Priority level** |No. Defaults to 9.|
+| The default 5G QoS Indicator (5QI) or QoS class identifier (QCI) value for this service. The 5QI (for 5G networks) or QCI (for 4G networks) value identifies a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers. </br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value. </br></br>Azure Private 5G Core doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** |No. Defaults to 9.|
+| The default preemption capability for QoS flows or EPS bearers for this service. The preemption capability of a QoS flow or EPS bearer controls whether it can preempt another QoS flow or EPS bearer with a lower priority level. You can choose from the following values: </br></br>- **May not preempt** </br>- **May preempt** | **Preemption capability** |No. Defaults to **May not preempt**.|
+| The default preemption vulnerability for QoS flows or EPS bearers for this service. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** | **Preemption vulnerability** |No. Defaults to **Preemptable**.|
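The supported and unsupported 5QI/QCI ranges in the table above are easy to get wrong when choosing a value. The following sketch encodes those ranges so you can classify a candidate value before entering it in the portal; it's a convenience check only, not part of any Azure Private 5G Core tooling.

```python
# Ranges taken from the table above.
NON_GBR_RANGES = [range(5, 10), range(69, 71), range(79, 81)]                     # 5-9, 69-70, 79-80
UNSUPPORTED_RANGES = [range(1, 5), range(65, 68), range(71, 77), range(82, 86)]   # GBR / delay-critical GBR

def classify_qos_identifier(value: int) -> str:
    """Classify a candidate 5QI (5G) or QCI (4G) value against the ranges above."""
    if any(value in r for r in UNSUPPORTED_RANGES):
        return "unsupported - corresponds to a GBR or delay-critical GBR QoS flow/EPS bearer"
    if any(value in r for r in NON_GBR_RANGES):
        return "standardized non-GBR value (recommended)"
    return "non-standardized value (allowed)"

print(classify_qos_identifier(9))   # standardized non-GBR value (recommended)
print(classify_qos_identifier(65))  # unsupported - corresponds to a GBR or delay-critical GBR QoS flow/EPS bearer
```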
## Data flow policy rule(s)
private-5g-core Collect Required Information For Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-sim-policy.md
Collect each of the values in the table below for your SIM policy.
|--|--|--| | The name of the private mobile network for which you're configuring this SIM policy. | N/A | Yes | | The SIM policy name. The name must be unique across all SIM policies configured for the private mobile network. | **Policy name** |Yes|
-| The UE-AMBR for traffic traveling away from UEs across all non-GBR QoS Flows. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the UE-AMBR parameter. | **Total bandwidth allowed - Uplink** |Yes|
-| The UE-AMBR for traffic traveling towards UEs across all non-GBR QoS Flows. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the UE-AMBR parameter. | **Total bandwidth allowed - Downlink** |Yes|
+| The UE-AMBR for traffic traveling away from UEs across all non-GBR QoS flows or EPS bearers. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Total bandwidth allowed - Uplink** |Yes|
+| The UE-AMBR for traffic traveling towards UEs across all non-GBR QoS flows or EPS bearers. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Total bandwidth allowed - Downlink** |Yes|
| The interval between UE registrations for UEs using SIMs to which this SIM policy is assigned, given in seconds. Choose an integer that is 30 or greater. If you omit the interval when first creating the SIM policy, it will default to 3,240 seconds (54 minutes). | **Registration timer** |No. Defaults to 3,240 seconds.| | The subscriber profile ID for RAT/Frequency Priority ID (RFSP ID) for this SIM policy, as defined in TS 36.413. If you want to set an RFSP ID, you must specify an integer between 1 and 256. | **RFSP index** |No. Defaults to no value.| ## Collect information for the network scope
-Within each SIM policy, you'll have a *network scope*. The network scope represents the data network to which SIMs assigned to the SIM policy will have access. It allows you to define the QoS policy settings used for the default QoS Flow for PDU sessions involving these SIMs. These settings include the session aggregated maximum bit rate (Session-AMBR), 5G QoS Indicator (5QI) value, and Allocation and Retention Policy (ARP) priority level. You can also determine the services that will be offered to SIMs.
+Within each SIM policy, you'll have a *network scope*. The network scope represents the data network to which SIMs assigned to the SIM policy will have access. It allows you to define the QoS policy settings used for the default QoS flow for PDU sessions involving these SIMs. These settings include the session aggregated maximum bit rate (Session-AMBR), 5G QoS identifier (5QI) or QoS class identifier (QCI) value, and Allocation and Retention Policy (ARP) priority level. You can also determine the services that will be offered to SIMs.
Collect each of the values in the table below for the network scope.
|--|--|--| |The Data Network Name (DNN) of the data network. The DNN must match the one you used when creating the private mobile network. | **Data network** | Yes | |The names of the services permitted on the data network. You must have already configured your chosen services. For more information on services, see [Policy control](policy-control.md). | **Service configuration** | No. The SIM policy will only use the service you configure using the same template. |
-|The maximum bitrate for traffic traveling away from UEs across all non-GBR QoS Flows of a given PDU session. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the Session-AMBR parameter. | **Session aggregate maximum bit rate - Uplink** | Yes |
-|The maximum bitrate for traffic traveling towards UEs across all non-GBR QoS Flows of a given PDU session. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the Session-AMBR parameter. | **Session aggregate maximum bit rate - Downlink** | Yes |
-|The default 5G QoS Indicator (5QI) value for this data network. The 5QI identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows. See 3GPP TS 23.501 for a full description of the 5QI parameter. </br></br>Choose a 5QI value that corresponds to a non-GBR QoS Flow (as described in 3GPP TS 23.501). These values are in the following ranges: 5-9; 69-70; 79-80. </br></br>You can also choose a non-standardized 5QI value. </br></br>Azure Private 5G Core Preview doesn't support 5QI values corresponding to GBR or delay-critical GBR QoS Flows. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** | No. Defaults to 9. |
-|The default QoS Flow Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). See 3GPP TS 23.501 for a full description of the ARP parameters. | **Allocation and Retention Priority level** | No. Defaults to 1. |
-|The default QoS Flow preemption capability for QoS Flows on this data network. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. </br></br>You can choose from the following values: </br></br>- **May preempt** </br>- **May not preempt** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption capability** | No. Defaults to **May not preempt**.|
-|The default QoS Flow preemption vulnerability for QoS Flows on this data network. The preemption vulnerability of a QoS Flow controls whether it can be preempted another QoS Flow with a higher priority level. </br></br>You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption vulnerability** | No. Defaults to **Preemptable**.|
-|The default PDU session type for SIMs using this data network. Azure Private 5G Core will use this type by default if the SIM doesn't request a specific type. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Default session type** | No. Defaults to **IPv4**.|
-|An additional PDU session type that Azure Private 5G Core supports for this data network. This type must not match the default type mentioned above. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Additional allowed session types** |No. Defaults to no value.|
+|The maximum bitrate for traffic traveling away from UEs across all non-GBR QoS flows or EPS bearers of a given PDU session or PDN connection. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Session aggregate maximum bit rate - Uplink** | Yes |
+|The maximum bitrate for traffic traveling towards UEs across all non-GBR QoS flows or EPS bearers of a given PDU session or PDN connection. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Session aggregate maximum bit rate - Downlink** | Yes |
+|The default 5QI (for 5G) or QCI (for 4G) value for this data network. These values identify a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers.</br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value. </br></br>Azure Private 5G Core Preview doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** | No. Defaults to 9. |
+|The default Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). | **Allocation and Retention Priority level** | No. Defaults to 1. |
+|The default preemption capability for QoS flows or EPS bearers on this data network. The preemption capability of a QoS flow or EPS bearer controls whether it can preempt another QoS flow or EPS bearer with a lower priority level. </br></br>You can choose from the following values: </br></br>- **May preempt** </br>- **May not preempt** | **Preemption capability** | No. Defaults to **May not preempt**.|
+|The default preemption vulnerability for QoS flows or EPS bearers on this data network. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. </br></br>You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** | **Preemption vulnerability** | No. Defaults to **Preemptable**.|
+|The default PDU session or PDN connection type for SIMs using this data network. Azure Private 5G Core will use this type by default if the SIM doesn't request a specific type. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Default session type** | No. Defaults to **IPv4**.|
+|An additional PDU session or PDN connection type that Azure Private 5G Core supports for this data network. This type must not match the default type mentioned above. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Additional allowed session types** |No. Defaults to no value.|
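Several fields in the SIM policy and network scope tables (and in the service table earlier) use the same `<Quantity>` `<Unit>` bit rate format. A small helper like the one below can normalize those strings to bits per second if you want to compare or sanity-check them; the decimal unit multipliers are an assumption for this sketch, since the article only defines the string format.

```python
# Decimal multipliers assumed for this sketch.
UNIT_MULTIPLIERS = {"bps": 1, "Kbps": 10**3, "Mbps": 10**6, "Gbps": 10**9, "Tbps": 10**12}

def bit_rate_to_bps(value: str) -> int:
    """Convert a '<Quantity> <Unit>' string such as '10 Gbps' to bits per second."""
    quantity, unit = value.split()
    if unit not in UNIT_MULTIPLIERS:
        raise ValueError(f"Unknown unit: {unit!r}")
    return int(float(quantity) * UNIT_MULTIPLIERS[unit])

print(bit_rate_to_bps("10 Gbps"))   # 10000000000
print(bit_rate_to_bps("500 Kbps"))  # 500000
```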
## Next steps
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
Contact your trials engineer and ask them to register your Azure subscription fo
Once your trials engineer has confirmed your access, register the Mobile Network resource provider (Microsoft.MobileNetwork) for your subscription, as described in [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
+## Choose the core technology type (5G or 4G)
+
+Choose whether each site in the private mobile network should provide coverage for 5G or 4G user equipment (UEs). A single site cannot support 5G and 4G UEs simultaneously. If you're deploying multiple sites, you can choose to have some sites support 5G UEs and others support 4G UEs.
+ ## Allocate subnets and IP addresses Azure Private 5G Core requires a management network, access network, and data network. These networks can all be part of the same, larger network, or they can be separate. The approach you use depends on your traffic separation requirements.
For each of these networks, allocate a subnet and then identify the listed IP ad
- Network address in CIDR notation. - Default gateway. - One IP address for port 5 on the Azure Stack Edge Pro device. -- One IP address for the packet core instance's N2 signaling interface. -- One IP address for the packet core instance's N3 interface.
+- One IP address for the control plane interface. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface.
+- One IP address for the user plane interface. For 5G, this interface is the N3 interface, whereas for 4G, it's the S1-U interface.
### Data network - Network address in CIDR notation. - Default gateway. - One IP address for port 6 on the Azure Stack Edge Pro device.-- One IP address for the packet core instance's N6 interface.
+- One IP address for the user plane interface. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface.
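A worked example can make this allocation easier to record. The sketch below captures the access and data network values listed above as a simple dictionary; all addresses are placeholders from documentation and private ranges, and the management network needs an equivalent plan of its own.

```python
# Placeholder addresses only - substitute your own allocation.
ip_plan = {
    "access": {
        "subnet": "192.0.2.0/24",
        "default_gateway": "192.0.2.1",
        "ase_port_5": "192.0.2.5",        # Azure Stack Edge Pro port 5
        "control_plane": "192.0.2.10",    # N2 (5G) or S1-MME (4G)
        "user_plane": "192.0.2.11",       # N3 (5G) or S1-U (4G)
    },
    "data": {
        "subnet": "10.0.0.0/24",
        "default_gateway": "10.0.0.1",
        "ase_port_6": "10.0.0.6",         # Azure Stack Edge Pro port 6
        "user_plane": "10.0.0.10",        # N6 (5G) or SGi (4G)
    },
}
```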
## Allocate user equipment (UE) IP address pools
For each site you're deploying, do the following:
For each site you're deploying, do the following. - Ensure you have at least one network switch with at least three ports available. You'll connect each Azure Stack Edge Pro device to the switch(es) in the same site as part of the instructions in [Order and set up your Azure Stack Edge Pro device(s)](#order-and-set-up-your-azure-stack-edge-pro-devices).-- If you're not enabling NAPT as described in [Allocate user equipment (UE) IP address pools](#allocate-user-equipment-ue-ip-address-pools), configure the data network to route traffic destined for the UE IP address pools via the IP address you allocated for the packet core instance's N6 interface.
+- If you're not enabling NAPT as described in [Allocate user equipment (UE) IP address pools](#allocate-user-equipment-ue-ip-address-pools), configure the data network to route traffic destined for the UE IP address pools via the IP address you allocated to the packet core instance's user plane interface on the data network.
## Order and set up your Azure Stack Edge Pro device(s)
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
Azure Private 5G Core Preview private mobile networks include one or more *sites
## Prerequisites -- Complete the steps in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses), [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools), and [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices) for your new site.
+- Carry out the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md) for your new site.
- Collect all of the information in [Collect the required information for a site](collect-required-information-for-a-site.md). - Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
-## Create the Mobile Network Site resource
+## Create the mobile network site resource
-In this step, you'll create the **Mobile Network Site** resource representing the physical enterprise location of your Azure Stack Edge device, which will host the packet core instance.
+In this step, you'll create the mobile network site resource representing the physical enterprise location of your Azure Stack Edge device, which will host the packet core instance.
1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal). 1. Search for and select the **Mobile Network** resource representing the private mobile network to which you want to add a site.
- :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
+ :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a mobile network resource.":::
1. On the **Get started** tab, select **Create sites**.
In this step, you'll create the **Mobile Network Site** resource representing th
1. In the **Packet core** section, set the fields as follows:
- - Set **Technology type** to *5G*.
+ - Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) to fill out the **Technology type** and **Custom location** fields.
- Leave the **Version** field blank unless you've been instructed to do otherwise by your support representative.
- - Set **Custom location** to the custom location you collected in [Collect custom location information](collect-required-information-for-a-site.md#collect-custom-location-information).
1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section. Note the following:
- - Use the same value for both the **N2 subnet** and **N3 subnet** fields.
- - Use the same value for both the **N2 gateway** and **N3 gateway** fields.
+ - Use the same value for both the **N2 subnet** and **N3 subnet** fields (if this site will support 5G user equipment (UEs)).
+ - Use the same value for both the **N2 gateway** and **N3 gateway** fields (if this site will support 5G UEs).
+ - Use the same value for both the **S1-MME subnet** and **S1-U subnet** fields (if this site will support 4G UEs).
+ - Use the same value for both the **S1-MME gateway** and **S1-U gateway** fields (if this site will support 4G UEs).
1. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields in the **Attached data networks** section. Note that you can only connect the packet core instance to a single data network. 1. Select **Review + create**.
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites -- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.-- Complete the steps in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices) for your new site.
+- Carry out the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md) for your new site.
- Identify the names of the interfaces corresponding to ports 5 and 6 on your Azure Stack Edge Pro device. - Collect all of the information in [Collect the required information for a site](collect-required-information-for-a-site.md).
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
## Review the template
The template used in this how-to guide is from [Azure Quickstart Templates](http
Four Azure resources are defined in the template. - [**Microsoft.MobileNetwork/mobileNetworks/sites**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/sites): a resource representing your site as a whole.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks): a resource providing configuration for the packet core instance's connection to a data network, including the IP address for the N6 interface and data subnet configuration.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the N3 interface.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the N2 interface.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks): a resource providing configuration for the packet core instance's connection to a data network.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the user plane interface on the access network.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the control plane interface on the access network.
## Deploy the template
Four Azure resources are defined in the template.
| **Existing Data Network Name** | Enter the name of the data network to which your private mobile network connects. | | **Site Name** | Enter a name for your site. | | **Control Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
- | **Control Plane Access Ip Address** | Enter the IP address for the packet core instance's N2 signaling interface. |
- | **Data Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
- | **Data Plane Access Interface Ip Address** | Enter the IP address for the packet core instance's N3 interface. |
+ | **Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
+ | **User Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
+ | **User Plane Access Interface Ip Address** | Enter the IP address for the user plane interface on the access network. |
| **Access Subnet** | Enter the network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. | | **Access Gateway** | Enter the access subnet default gateway. | | **User Plane Data Interface Name** | Enter the name of the interface that corresponds to port 6 on your Azure Stack Edge Pro device. |
- | **User Plane Data Interface Ip Address** | Enter the IP address for the packet core instance's N6 interface. |
+ | **User Plane Data Interface Ip Address** | Enter the IP address for the user plane interface on the data network. |
| **User Plane Data Interface Subnet** | Enter the network address of the data subnet in CIDR notation. | | **User Plane Data Interface Gateway** | Enter the data subnet default gateway. | |**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. | |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
- | **Core Network Technology** | Leave this field unchanged. |
+ | **Core Network Technology** | Enter `5GC` for 5G, or `EPC` for 4G. |
| **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. | | **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. |
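If you're scripting the deployment, it can help to gather the values from the table above before you open the custom deployment blade. The sketch below is purely illustrative: the keys mirror the field labels shown in the table rather than the template's actual parameter identifiers, and every value is a placeholder.

```python
# Keys mirror the portal field labels above; they are not the template's real
# parameter names. All values are placeholders - substitute your own.
site_values = {
    "Site Name": "contoso-chicago-factory",
    "Control Plane Access Interface Name": "port5",      # hypothetical interface name
    "Control Plane Access Ip Address": "192.0.2.10",
    "User Plane Access Interface Name": "port5",
    "User Plane Access Interface Ip Address": "192.0.2.11",
    "Access Subnet": "192.0.2.0/24",
    "Access Gateway": "192.0.2.1",
    "User Plane Data Interface Name": "port6",
    "User Plane Data Interface Ip Address": "10.0.0.10",
    "User Plane Data Interface Subnet": "10.0.0.0/24",
    "User Plane Data Interface Gateway": "10.0.0.1",
    "User Equipment Address Pool Prefix": "198.51.100.0/24",
    "Core Network Technology": "5GC",                    # or "EPC" for 4G
    "Napt Enabled": True,
}
```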
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
The following Azure resources are defined in the template.
- [**Microsoft.MobileNetwork/mobileNetworks/services**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/services): a resource representing a service. - [**Microsoft.MobileNetwork/mobileNetworks/simPolicies**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/simPolicies): a resource representing a SIM policy. - [**Microsoft.MobileNetwork/mobileNetworks/sites**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/sites): a resource representing your site as a whole.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks): a resource providing configuration for the packet core instance's connection to a data network, including the IP address for the N6 interface and data subnet configuration.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the N3 interface.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the N2 interface.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks): a resource providing configuration for the packet core instance's connection to a data network.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the user plane interface on the access network.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the control plane interface on the access network.
- [**Microsoft.MobileNetwork/mobileNetworks**](/azure/templates/microsoft.mobilenetwork/mobilenetworks): a resource representing the private mobile network as a whole. - [**Microsoft.MobileNetwork/sims:**](/azure/templates/microsoft.mobilenetwork/sims) a resource representing a physical SIM or eSIM.
The following Azure resources are defined in the template.
|**Sim Policy Name** | Leave this field unchanged. | |**Slice Name** | Leave this field unchanged. | |**Control Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
- |**Control Plane Access Ip Address** | Enter the IP address for the packet core instance's N2 signaling interface. |
+ |**Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
|**User Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
- |**User Plane Access Interface Ip Address** | Enter the IP address for the packet core instance's N3 interface. |
+ |**User Plane Access Interface Ip Address** | Enter the IP address for the user plane interface on the access network. |
|**Access Subnet** | Enter the network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. | |**Access Gateway** | Enter the access subnet default gateway. | |**User Plane Data Interface Name** | Enter the name of the interface that corresponds to port 6 on your Azure Stack Edge Pro device. |
- |**User Plane Data Interface Ip Address** | Enter the IP address for the packet core instance's N6 interface. |
+ |**User Plane Data Interface Ip Address** | Enter the IP address for the user plane interface on the data network. |
|**User Plane Data Interface Subnet** | Enter the network address of the data subnet in CIDR notation. | |**User Plane Data Interface Gateway** | Enter the data subnet default gateway. | |**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. | |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
- |**Core Network Technology** | Leave this field unchanged. |
+ |**Core Network Technology** | Enter `5GC` for 5G, or `EPC` for 4G. |
|**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.| |**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.|
private-5g-core Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/distributed-tracing.md
The distributed tracing web GUI provides two search tabs to allow you to search
If you can't see the **Search** heading, select the **Search** button in the top-level menu. -- **SUPI** - Allows you to search for activity involving a particular subscriber using their Subscription Permanent Identifier (SUPI). This tab also provides an **Errors** panel, which allows you to filter the results by error condition. To search for activity for a particular subscriber, enter all of the initial digits of the subscriber's SUPI into the text box on the **SUPI search** panel.
+- **SUPI** - Allows you to search for activity involving a particular subscriber using their subscription permanent identifier (SUPI) or, in 4G networks, their international mobile subscriber identity (IMSI). This tab also provides an **Errors** panel, which allows you to filter the results by error condition. To search for activity for a particular subscriber, enter all of the initial digits of the subscriber's SUPI or IMSI into the text box on the **SUPI search** panel.
- **Errors** - Allows you to search for error condition occurrences across all subscribers. To search for occurrences of error conditions across all subscribers, select the **Errors** tab and then use the drop-down menus on the **Error** panel to select an error category and, optionally, a specific error. :::image type="content" source="media\distributed-tracing\distributed-tracing-search-display.png" alt-text="Screenshot of the Search display in the distributed tracing web G U I, showing the S U P I and Errors tabs.":::
You can select an entry in the search results to view detailed information for t
When you select a specific result, the display shows the following tabs containing different categories of information. > [!NOTE]
-> In addition to the tabs described below, the distributed tracing web GUI also includes a **User Experience** tab. This tab is not used by Azure Private 5G Core Preview and will not display any information.
+> In addition to the tabs described below, the distributed tracing web GUI also includes a **User Experience** tab. This tab is not used by Azure Private 5G Core and will not display any information.
### Summary view
private-5g-core Key Components Of A Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/key-components-of-a-private-mobile-network.md
This article introduces the key physical components of a private mobile network deployed through Azure Private 5G Core Preview. It also details the resources you'll use to manage the private mobile network through Azure.
-Each private mobile network contains one or more *sites*. A site is a physical enterprise location (for example, Contoso Corporation's Chicago Factory) that will provide coverage for 5G user equipment (UEs). The following diagram shows the main components of a single site.
+Each private mobile network contains one or more *sites*. A site is a physical enterprise location (for example, Contoso Corporation's Chicago Factory) that will provide coverage for user equipment (UEs). The following diagram shows the main components of a single site.
:::image type="content" source="media/key-components-of-a-private-mobile-network/site-physical-components.png" alt-text="Diagram displaying the main components of a site in a private mobile network":::
Each private mobile network contains one or more *sites*. A site is a physical e
When you add a site to your private mobile network, you'll create a *Kubernetes cluster* on the Azure Stack Edge device. This serves as the platform for the packet core instance. -- Each packet core instance connects to a radio access network (RAN) to provide coverage for 5G UEs. You'll source your RAN from a third party.
+- Each packet core instance connects to a radio access network (RAN) to provide coverage for UEs. You'll source your RAN from a third party.
## Azure Private 5G Core resources
private-5g-core Monitor Private 5G Core With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-with-log-analytics.md
Log Analytics is a tool in the Azure portal used to edit and run log queries with data in Azure Monitor Logs. You can write queries to retrieve records or visualize data in charts, allowing you to monitor and analyze activity in your private mobile network.
+> [!IMPORTANT]
+> Log Analytics currently can only be used to monitor private mobile networks that support 5G UEs. You can still monitor private mobile networks supporting 4G UEs from the local network using the [packet core dashboards](packet-core-dashboards.md).
+ ## Enable Log Analytics You'll need to carry out the steps in [Enabling Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md) before you can use Log Analytics with Azure Private 5G Core.
private-5g-core Packet Core Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/packet-core-dashboards.md
You can access the following packet core dashboards:
- The **Device and Session Statistics dashboard** provides information about the device and session procedures being processed by the packet core instance.
+ > [!IMPORTANT]
+ > The **Device and Session Statistics dashboard** only displays metrics for packet core instances that support 5G UEs. It does not currently display any metrics related to 4G activity.
+ :::image type="content" source="media/packet-core-dashboards/packet-core-device-session-stats-dashboard.png" alt-text="Screenshot of the Device and Session Statistics dashboard. It shows panels for device authentication, device registration, device context, and P D U session procedures." lightbox="media/packet-core-dashboards/packet-core-device-session-stats-dashboard.png"::: - The **Uplink and Downlink Statistics dashboard** provides detailed statistics on the user plane traffic being handled by the packet core instance.
private-5g-core Policy Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/policy-control.md
Azure Private 5G Core Preview provides flexible traffic handling. You can customize how your packet core instance applies quality of service (QoS) characteristics to traffic. You can also block or limit certain flows.
-## 5G quality of service (QoS) and QoS Flows
-The packet core instance is a key component in establishing *protocol data unit (PDU) sessions*, which are used to transport user plane traffic between a UE and the data network. Within each PDU session, there are one or more *service data flows (SDFs)*. Each SDF is a single IP flow or a set of aggregated IP flows of UE traffic that is used for a specific service.
+## 5G quality of service (QoS) and QoS flows
+
+In 5G networks, the packet core instance is a key component in establishing *protocol data unit (PDU) sessions*, which are used to transport user plane traffic between a UE and the data network. Within each PDU session, there are one or more *service data flows (SDFs)*. Each SDF is a single IP flow or a set of aggregated IP flows of UE traffic that is used for a specific service.
Each SDF may require a different set of QoS characteristics, including prioritization and bandwidth limits. For example, an SDF carrying traffic used for industrial automation will need to be handled differently to an SDF used for internet browsing.
-To ensure the correct QoS characteristics are applied, each SDF is bound to a *QoS Flow*. Each QoS Flow has a unique *QoS profile*, which identifies the QoS characteristics that should be applied to any SDFs bound to the QoS Flow. Multiple SDFs with the same QoS requirements can be bound to the same QoS Flow.
+To ensure the correct QoS characteristics are applied, each SDF is bound to a *QoS flow*. Each QoS flow has a unique *QoS profile*, which identifies the QoS characteristics that should be applied to any SDFs bound to the QoS flow. Multiple SDFs with the same QoS requirements can be bound to the same QoS flow.
A *QoS profile* has two main components. -- A *5G QoS identifier (5QI)*. The 5QI value corresponds to a set of QoS characteristics that should be used for the QoS Flow. These characteristics include guaranteed and maximum bitrates, priority levels, and limits on latency, jitter, and error rate. The 5QI is given as a scalar number.
+- A *5G QoS identifier (5QI)*. The 5QI value corresponds to a set of QoS characteristics that should be used for the QoS flow. These characteristics include guaranteed and maximum bitrates, priority levels, and limits on latency, jitter, and error rate. The 5QI is given as a scalar number.
- You can find more information on 5QI and each of the QoS characteristics in 3GPP TS 23.501. You can also find definitions for standardized (or non-dynamic) 5QI values.
+ You can find more information on 5QI values and each of the QoS characteristics in 3GPP TS 23.501. You can also find definitions for standardized (or non-dynamic) 5QI values.
The required parameters for each 5QI value are pre-configured in the Next Generation Node B (gNB). > [!NOTE]
-> Azure Private 5G Core does not support dynamically assigned 5QI, where specific QoS characteristics are signalled to the gNB during QoS Flow creation.
+> Azure Private 5G Core does not support dynamically assigned 5QI, where specific QoS characteristics are signaled to the gNB during QoS flow creation.
+
+- An *allocation and retention priority (ARP) value*. The ARP value defines a QoS flow's importance. It controls whether a particular QoS flow should be retained or preempted when there's resource constraint in the network, based on its priority compared to other QoS flows. The QoS profile may also define whether the QoS flow can preempt or be preempted by another QoS flow.
+
+Each unique QoS flow is assigned a unique *QoS flow ID (QFI)*, which is used by network elements to map SDFs to QoS flows.
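To make the relationship between SDFs, QoS flows, QoS profiles, and QFIs concrete, here's a small conceptual data model. It's an illustrative sketch only (the class and field names are invented for this example, not an Azure Private 5G Core API), and the same shape applies to EPS bearers and EBIs in the 4G case described next.

```python
from dataclasses import dataclass

@dataclass
class QosProfile:
    """Conceptual QoS profile: a 5QI (or QCI in 4G) plus ARP settings."""
    qos_identifier: int       # 5QI in 5G, QCI in 4G
    arp_priority_level: int   # 1 (highest priority) to 15 (lowest priority)
    may_preempt: bool         # preemption capability
    preemptable: bool         # preemption vulnerability

@dataclass
class QosFlow:
    """A QoS flow (5G) or EPS bearer (4G) and the SDFs bound to it."""
    flow_id: int              # QFI in 5G, EBI in 4G
    profile: QosProfile
    bound_sdfs: list[str]     # SDFs sharing the same QoS requirements

# Two SDFs with the same QoS requirements bound to a single non-GBR flow.
default_flow = QosFlow(
    flow_id=1,
    profile=QosProfile(qos_identifier=9, arp_priority_level=9,
                       may_preempt=False, preemptable=True),
    bound_sdfs=["internet-browsing", "software-updates"],
)
print(default_flow)
```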
+
+## 4G QoS and EPS bearers
+
+The packet core instance performs a role in 4G networks that is very similar to the one described in [5G quality of service (QoS) and QoS flows](#5g-quality-of-service-qos-and-qos-flows).
-- An *allocation and retention priority (ARP) value*. The ARP value defines a QoS Flow's importance. It controls whether a particular QoS Flow should be retained or preempted when there's resource constraint in the network, based on its priority compared to other QoS Flows. The QoS profile may also define whether the QoS Flow can preempt or be preempted by another QoS Flow.
+In 4G networks, the packet core instance helps to establish *packet data network (PDN) connections* to transport user plane traffic. PDN connections also contain one or more SDFs.
-Each unique QoS Flow is assigned a unique *QoS Flow ID (QFI)*, which is used by network elements to map SDFs to QoS Flows.
+The SDFs are bound to *Evolved Packet System (EPS) bearers*. EPS bearers are also assigned a QoS profile, which comprises two components.
+
+- A *QoS class identifier (QCI)*, which is the 4G equivalent of the 5QI used in 5G networks.
+
+  You can find more information on QCI values in 3GPP TS 23.203. Each standardized QCI value is mapped to a 5QI value.
+
+- An ARP value. This works in the same way as in 5G networks to define an EPS bearer's importance.
+
+Each EPS bearer is assigned an *EPS bearer ID (EBI)*, which is used by network elements to map SDFs to EPS bearers.
## Azure Private 5G Core policy control configuration
-Azure Private 5G Core provides configuration to allow you to determine the QoS Flows the packet core instance will create and bind to SDFs during PDU session establishment. You can configure two primary resource types - *services* and *SIM policies*.
+Azure Private 5G Core provides configuration to allow you to determine the QoS flows or EPS bearers the packet core instance will create and bind to SDFs when establishing PDU sessions or PDN connections. You can configure two primary resource types - *services* and *SIM policies*.
### Services
A *service* is a representation of a set of QoS characteristics that you want to
Each service includes: -- A set of QoS characteristics that should be applied on SDFs matching the service. The packet core instance will use these characteristics to create a QoS Flow to bind to matching SDFs. You can specify the following QoS settings on a service:
+- A set of QoS characteristics that should be applied on SDFs matching the service. The packet core instance will use these characteristics to create a QoS flow or EPS bearer to bind to matching SDFs. You can specify the following QoS settings on a service:
- The maximum bit rate (MBR) for uplink traffic (away from the UE) across all matching SDFs. - The MBR for downlink traffic (towards the UE) across all matching SDFs. - An ARP priority value.
- - A 5QI value.
- - A preemption capability setting. This setting determines whether the QoS Flow created for this service can preempt another QoS Flow with a lower ARP priority level.
- - A preemption vulnerability setting. This setting determines whether the QoS Flow created for this service can be preempted by another QoS Flow with a higher ARP priority level.
+ - A 5QI value. This is mapped to a QCI value when used in 4G networks.
+ - A preemption capability setting. This setting determines whether the QoS flow or EPS bearer created for this service can preempt another QoS flow or EPS bearer with a lower ARP priority level.
+ - A preemption vulnerability setting. This setting determines whether the QoS flow or EPS bearer created for this service can be preempted by another QoS flow or EPS bearer with a higher ARP priority level.
- One or more *data flow policy rules*, which identify the SDFs to which the service should be applied. You can configure each rule with the following to determine when it's applied and the effect it will have:
Each SIM policy includes:
- A *network scope*, which defines how SIMs assigned to this SIM policy will connect to the data network. You can use the network scope to determine the following settings: - The services (as described in [Services](#services)) offered to SIMs on this data network.
- - A set of QoS characteristics that will be used to form the default QoS Flow for PDU sessions involving assigned SIMs on this data network.
+ - A set of QoS characteristics that will be used to form the default QoS flow for PDU sessions (or EPS bearer for PDN connections in 4G networks).
You can create multiple SIM policies to offer different QoS policy settings to separate groups of SIMs on the same data network. For example, you may want to create SIM policies with differing sets of services.
-## Creating and assigning QoS Flows during PDU session establishment
-
-During PDU session establishment, the packet core instance takes the following steps:
-
-1. Identifies the SIM resource representing the UE involved in the PDU session and its associated SIM policy (as described in [SIM policies](#sim-policies)).
-1. Creates a default QoS Flow for the PDU session using the configured values on the SIM policy.
-1. Identifies whether the SIM policy has any associated services (as described in [Services](#services)). If it does, the packet core instance creates extra QoS Flows using the QoS characteristics defined on these services.
-1. Signals the QoS Flows and any non-default characteristics to the gNodeB.
-1. Sends a set of QoS rules (including SDF definitions taken from associated services) to the UE. The UE uses these rules to take the following steps:
-
- - Checks uplink packets against the SDFs.
- - Applies any necessary traffic control.
- - Identifies the QoS Flow to which each SDF should be bound.
- - Marks packets with the appropriate QFI. The QFI ensures packets receive the correct QoS handling between the UE and the packet core instance without further inspection.
-
-1. Inspects downlink packets to check their properties against the data flow templates of the associated services, and then takes the following steps based on this matching:
-
- - Applies any necessary traffic control.
- - Identifies the QoS Flow to which each SDF should be bound.
- - Applies any necessary QoS treatment.
- - Marks packets with the QFI corresponding to the correct QoS Flow. The QFI ensures the packets receive the correct QoS handling between the packet core instance and data network without further inspection.
- ## Designing your policy control configuration Azure Private 5G Core policy control configuration is flexible, allowing you to configure new services and SIM policies whenever you need, based on the changing requirements of your private mobile network.
When you first come to design the policy control configuration for your own priv
You can also use the example Azure Resource Manager template (ARM template) in [Configure a service and SIM policy using an ARM template](configure-service-sim-policy-arm-template.md) to quickly create a SIM policy with a single associated service.
+## QoS flow and EPS bearer creation and assignment
+
+This section describes how the packet core instance uses policy control configuration to create and assign QoS flows and EPS bearers. We describe the steps using 5G concepts for clarity, but the packet core instance takes the same steps in 4G networks. The table below gives the equivalent 4G concepts for reference.
+
+|5G |4G |
+|--|--|
+|PDU session | PDN connection |
+|QoS flow | EPS bearer |
+| gNodeB | eNodeB |
+
+During PDU session establishment, the packet core instance takes the following steps (a conceptual sketch follows this list):
+
+1. Identifies the SIM resource representing the UE involved in the PDU session and its associated SIM policy (as described in [SIM policies](#sim-policies)).
+1. Creates a default QoS flow for the PDU session using the configured values on the SIM policy.
+1. Identifies whether the SIM policy has any associated services (as described in [Services](#services)). If it does, the packet core instance creates extra QoS flows using the QoS characteristics defined on these services.
+1. Signals the QoS flows and any non-default characteristics to the gNodeB.
+1. Sends a set of QoS rules (including SDF definitions taken from associated services) to the UE. The UE uses these rules to take the following steps:
+
+ - Checks uplink packets against the SDFs.
+ - Applies any necessary traffic control.
+ - Identifies the QoS flow to which each SDF should be bound.
+ - In 5G networks only, the UE marks packets with the appropriate QFI. The QFI ensures packets receive the correct QoS handling between the UE and the packet core instance without further inspection.
+
+1. Inspects downlink packets to check their properties against the data flow templates of the associated services, and then takes the following steps based on this matching:
+
+ - Applies any necessary traffic control.
+ - Identifies the QoS flow to which each SDF should be bound.
+ - Applies any necessary QoS treatment.
+ - In 5G networks only, the packet core instance marks packets with the QFI corresponding to the correct QoS flow. The QFI ensures the packets receive the correct QoS handling between the packet core instance and data network without further inspection.
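The conceptual sketch referenced above is shown below. It is a simplified, illustrative model of the binding behavior described in these steps, not the packet core instance's actual implementation; the dictionary shapes follow the illustrative service and SIM policy sketch earlier in this article.

```python
# Conceptual sketch only: simplified stand-in for QoS flow binding and QFI marking.

def establish_pdu_session(sim_policy, services):
    """Create the default QoS flow plus one flow per associated service."""
    flows = [{"qfi": 1, "qos": sim_policy["networkScope"]["defaultSessionQos"], "sdfs": []}]
    for qfi, service in enumerate(services, start=2):
        flows.append({"qfi": qfi, "qos": service["qos"], "sdfs": service["dataFlowPolicyRules"]})
    return flows  # signalled to the gNodeB; QoS rules with SDFs are sent to the UE

def matches(packet, sdf):
    # Placeholder match on a destination port range; a real SDF template matches on
    # protocol, addresses, and ports in both directions.
    lo, hi = (int(x) for x in sdf["ports"][0].split("-"))
    return lo <= packet["dst_port"] <= hi

def classify_packet(packet, flows):
    """Bind a packet to the QoS flow whose SDF it matches; fall back to the default flow."""
    for flow in flows[1:]:
        if any(matches(packet, sdf) for sdf in flow["sdfs"]):
            return flow["qfi"]  # in 5G, the packet is marked with this QFI
    return flows[0]["qfi"]      # default QoS flow

flows = establish_pdu_session(
    {"networkScope": {"defaultSessionQos": {"fiveQi": 9}}},
    [{"qos": {"fiveQi": 7}, "dataFlowPolicyRules": [{"ports": ["5000-5100"]}]}],
)
print(classify_packet({"dst_port": 5004}, flows))  # 2 (bound to the service's flow)
print(classify_packet({"dst_port": 443}, flows))   # 1 (default QoS flow)
```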
+ ## Next steps - [Learn how to create an example set of policy control configuration](tutorial-create-example-set-of-policy-control-configuration.md)
private-5g-core Private 5G Core Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-5g-core-overview.md
Azure Private 5G Core instantiates a single private mobile network distributed a
You can also deploy packet core instances in 4G mode to support Private Long-Term Evolution (LTE) use cases. For example, you can use the 4G Citizens Broadband Radio Service (CBRS) spectrum. 4G mode uses the same cloud-native components as 5G mode (such as the UPF). This is in contrast to other solutions that need to revert to a legacy 4G stack.
-The following diagram shows the network functions supported by a packet core instance. It also shows the interfaces these network functions use to interoperate with third-party components. Note that when running in 4G mode, the Unified Data Repository (UDR) performs the role that would usually be performed by a Home Subscriber Store (HSS).
+The following diagram shows the network functions supported by a packet core instance. It also shows the interfaces these network functions use to interoperate with third-party components.
- Diagram displaying the packet core architecture. The packet core includes the following 5G network functions: the A M F, the S M F, the U P F, the U D R, the N R F, the P C F, the U D M, and the A U S F. The A M F communicates with 5G user equipment over the N1 interface. A G Node B provided by a Microsoft partner communicates with the A M F over the N2 interface and the U P F over the N3 interface. The U P F communicates with the data network over the N6 interface. When operating in 4G mode, the packet core includes S 11 I W F and M M E network functions. The S 11 I W F communicates with the M M E over the S 11 interface. An E Node B provided by a Microsoft partner communicates with the M M E over the S 1 C interface.
+ Diagram displaying the packet core architecture. The packet core includes the following 5G network functions: the A M F, the S M F, the U P F, the U D R, the N R F, the P C F, the U D M, and the A U S F. The A M F communicates with 5G user equipment over the N1 interface. A G Node B provided by a Microsoft partner communicates with the A M F over the N2 interface and the U P F over the N3 interface. The U P F communicates with the data network over the N6 interface. When operating in 4G mode, the packet core includes M M E Proxy and M M E network functions. The M M E Proxy communicates with the M M E over the S 11 interface. An E Node B provided by a Microsoft partner communicates with the M M E over the S 1 M M E interface.
:::image-end::: Each packet core instance is connected to the local RAN network to provide coverage for cellular wireless devices. You can choose to limit these devices to local connectivity. Alternatively, you can provide multiple routes to the cloud, internet, or other enterprise data centers running IoT and automation applications.
-## Support for 5GC features
+## Feature support
### Supported 5G network functions
Each packet core instance is connected to the local RAN network to provide cover
- Unified Data Repository (UDR) - Network Repository Function (NRF)
-### Supported 5G procedures
+### Supported 4G network functions
-For information on Azure Private 5G Core's support for standards-based 5G procedures, see [Statement of compliance - Azure Private 5G Core](statement-of-compliance.md).
+Azure Private 5G Core uses the following network functions when supporting 4G UEs, in addition to the 5G network functions listed above.
+
+- Mobility Management Entity (MME)
+- MME-Proxy - The MME-Proxy allows 4G UEs to be served by 5G network functions.
+
+The following 5G network functions perform specific roles when supporting 4G UEs.
+
+- The UDR operates as a Home Subscriber Server (HSS).
+- The UPF operates as a System Architecture Evolution Gateway (SAEGW-U).
+
+### Supported 5G and 4G procedures
+
+For information on Azure Private 5G Core's support for standards-based 5G and 4G procedures, see [Statement of compliance - Azure Private 5G Core](statement-of-compliance.md).
### User equipment (UE) authentication and security context management Azure Private 5G Core supports the following authentication methods: -- Authentication using Subscription Permanent Identifiers (SUPI) and 5G Globally Unique Temporary Identities (5G-GUTI).-- 5G Authentication and Key Agreement (5G-AKA) for mutual authentication between UEs and the network.
+- Authentication using Subscription Permanent Identifiers (SUPI) and 5G Globally Unique Temporary Identities (5G-GUTI) for 5G user equipment (UEs).
+- Authentication using International Mobile Subscriber Identities (IMSI) and Globally Unique Temporary Identities (GUTI) for 4G UEs.
+- 5G Authentication and Key Agreement (5G-AKA) for mutual authentication between 5G UEs and the network.
+- Evolved Packet System based Authentication and Key Agreement (EPS-AKA) for mutual authentication between 4G UEs and the network.
The packet core instance performs ciphering and integrity protection of 5G non-access stratum (NAS). During UE registration, the UE includes its security capabilities for 5G NAS with 128-bit keys.
private-5g-core Statement Of Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/statement-of-compliance.md
All packet core network functions are compliant with Release 15 of the 3GPP spec
- TS 23.401: General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access. - TS 29.272: Evolved Packet System (EPS); Mobility Management Entity (MME) and Serving GPRS Support Node (SGSN) related interfaces based on Diameter protocol. - TS 29.274: 3GPP Evolved Packet System (EPS); Evolved General Packet Radio Service (GPRS) Tunneling Protocol for Control plane (GTPv2-C); Stage 3.
+- TS 33.401: 3GPP System Architecture Evolution (SAE); Security architecture.
- TS 36.413: Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1 Application Protocol (S1AP). ### Policy and charging control (PCC) framework
The implementation of all of the 3GPP specifications given in [3GPP specificatio
- IETF RFC 768: User Datagram Protocol. - IETF RFC 791: Internet Protocol.-- IETF RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers.
+- IETF RFC 2279: UTF-8, a transformation format of ISO 10646.
- IETF RFC 2460: Internet Protocol, Version 6 (IPv6) Specification.
+- IETF RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers.
+- IETF RFC 3748: Extensible Authentication Protocol (EAP).
+- IETF RFC 3986: Uniform Resource Identifier (URI): Generic Syntax.
+- IETF RFC 4187: Extensible Authentication Protocol Method for 3rd Generation Authentication and Key Agreement (EAP-AKA).
- IETF RFC 4291: IP Version 6 Addressing Architecture. - IETF RFC 4960: Stream Control Transmission Protocol.-- IETF RFC 2279: UTF-8, a transformation format of ISO 10646.-- IETF RFC 3986: Uniform Resource Identifier (URI): Generic Syntax.
+- IETF RFC 5448: Improved Extensible Authentication Protocol Method for 3rd Generation Authentication and Key Agreement (EAP-AKA').
- IETF RFC 5789: PATCH Method for HTTP.
+- IETF RFC 6458: Sockets API Extensions for the Stream Control Transmission Protocol (SCTP).
+- IETF RFC 6733: Diameter Base Protocol.
+- IETF RFC 6749: The OAuth 2.0 Authorization Framework.
- IETF RFC 6902: JavaScript Object Notation (JSON) Patch. - IETF RFC 7396: JSON Merge Patch. - IETF RFC 7540: Hypertext Transfer Protocol Version 2 (HTTP/2). - IETF RFC 7807: Problem Details for HTTP APIs. - IETF RFC 8259: The JavaScript Object Notation (JSON) Data Interchange Format.-- IETF RFC 3748: Extensible Authentication Protocol (EAP).-- IETF RFC 4187: Extensible Authentication Protocol Method for 3rd Generation Authentication and Key Agreement (EAP-AKA).-- IETF RFC 5448: Improved Extensible Authentication Protocol Method for 3rd Generation Authentication and Key Agreement (EAP-AKA').-- IETF RFC 6749: The OAuth 2.0 Authorization Framework. ## ITU-T Recommendations
purview Azure Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/azure-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
|| [Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)| [Yes](register-scan-azure-synapse-analytics.md#register) | [Yes](register-scan-azure-synapse-analytics.md#scan)| No* | No | || [Azure Files](register-scan-azure-files-storage-source.md)|[Yes](register-scan-azure-files-storage-source.md#register) | [Yes](register-scan-azure-files-storage-source.md#scan) | Limited* | No | || [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register) |[Yes](register-scan-azure-sql-database.md#scan)| [Yes (Preview)](register-scan-azure-sql-database.md#lineagepreview) | [Yes (Preview)](how-to-data-owner-policies-azure-sql-db.md) |
-|| [Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md)| [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | No* | No |
+|| [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md)| [Yes](register-scan-azure-sql-managed-instance.md#scan) | [Yes](register-scan-azure-sql-managed-instance.md#scan) | No* | No |
|| [Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)| [Yes](register-scan-synapse-workspace.md#register) | [Yes](register-scan-synapse-workspace.md#scan)| [Yes - Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No| |Database| [Amazon RDS](register-scan-amazon-rds.md) | [Yes](register-scan-amazon-rds.md#register-an-amazon-rds-data-source) | [Yes](register-scan-amazon-rds.md#scan-an-amazon-rds-database) | No | No | || [Cassandra](register-scan-cassandra-source.md)|[Yes](register-scan-cassandra-source.md#register) | No | [Yes](register-scan-cassandra-source.md#lineage)| No|
The following file types are supported for scanning, for schema extraction, and
Currently, nested data is only supported for JSON content.
-For all [system supported file types](#file-types-supported-for-scanning), if there is nested JSON content in a column, then the scanner parses the nested JSON data and surfaces it within the schema tab of the asset.
+For all [system supported file types](#file-types-supported-for-scanning), if there's nested JSON content in a column, then the scanner parses the nested JSON data and surfaces it within the schema tab of the asset.
-Nested data, or nested schema parsing, is not supported in SQL. A column with nested data will be reported and classified as is, and subdata will not be parsed.
+Nested data, or nested schema parsing, isn't supported in SQL. A column with nested data will be reported and classified as is, and subdata won't be parsed.
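As a purely illustrative aside, the following Python sketch shows one way nested JSON keys in a column could be surfaced as dotted schema paths. It is not the scanner's actual implementation.

```python
import json

def flatten_schema(value, prefix=""):
    """Flatten nested JSON keys into dotted paths, similar in spirit to how nested
    content can be surfaced as schema fields. Illustrative only."""
    paths = []
    if isinstance(value, dict):
        for key, child in value.items():
            paths.extend(flatten_schema(child, f"{prefix}{key}."))
    else:
        paths.append((prefix.rstrip("."), type(value).__name__))
    return paths

doc = json.loads('{"customer": {"id": 42, "address": {"city": "Redmond"}}}')
print(flatten_schema(doc))
# [('customer.id', 'int'), ('customer.address.city', 'str')]
```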
## Sampling within a file
For all structured file formats, Microsoft Purview scanner samples files in the
- For structured file types, it samples the top 128 rows in each column or the first 1 MB, whichever is lower. - For document file formats, it samples the first 20 MB of each file.
- - If a document file is larger than 20 MB, then it is not subject to a deep scan (subject to classification). In that case, Microsoft Purview captures only basic meta data like file name and fully qualified name.
+ - If a document file is larger than 20 MB, then it isn't subject to a deep scan (subject to classification). In that case, Microsoft Purview captures only basic metadata, such as the file name and fully qualified name.
- For **tabular data sources (SQL)**, it samples the top 128 rows. - For **Azure Cosmos DB (SQL API)**, up to 300 distinct properties from the first 10 documents in a container will be collected for the schema and for each property, values from up to 128 documents or the first 1 MB will be sampled.
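The sampling limits above can be restated as a small Python helper for quick reference. The categories and return strings below simply echo the documented limits; they aren't configurable behavior.

```python
MB = 1024 * 1024

def sample_plan(source_kind, file_size_bytes=None):
    """Return a rough description of what the scanner samples, per the limits above."""
    if source_kind == "structured_file":
        return "top 128 rows per column or first 1 MB, whichever is lower"
    if source_kind == "document_file":
        if file_size_bytes is not None and file_size_bytes > 20 * MB:
            return "no deep scan; only basic metadata (file name, fully qualified name)"
        return "first 20 MB of the file"
    if source_kind == "sql_table":
        return "top 128 rows"
    if source_kind == "cosmos_db_container":
        return ("up to 300 distinct properties from the first 10 documents; "
                "values sampled from up to 128 documents or the first 1 MB")
    raise ValueError(f"unknown source kind: {source_kind}")

print(sample_plan("document_file", file_size_bytes=25 * MB))
```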
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md
Currently, the following data sources are supported to have a managed private en
- Azure Blob Storage - Azure Data Lake Storage Gen 2 - Azure SQL Database -- Azure SQL Database Managed Instance
+- Azure SQL Managed Instance
- Azure Cosmos DB - Azure Synapse Analytics - Azure Files
Additionally, you can deploy managed private endpoints for your Azure Key Vault
### Managed Virtual Network
-A Managed Virtual Network in Microsoft Purview is a virtual network which is deployed and managed by Azure inside the same region as Microsoft Purview account to allow scanning Azure data sources inside a managed network, without having to deploy and manage any self-hosted integration runtime virtual machines by the customer in Azure.
+A Managed Virtual Network in Microsoft Purview is a virtual network that is deployed and managed by Azure in the same region as the Microsoft Purview account. It allows you to scan Azure data sources inside a managed network without having to deploy and manage any self-hosted integration runtime virtual machines in Azure.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-vnet-architecture.png" alt-text="Microsoft Purview Managed Virtual Network architecture":::
-You can deploy an Azure Managed Integration Runtime within a Microsoft Purview Managed Virtual Network. From there, the Managed VNet Runtime will leverage private endpoints to securely connect to and scan supported data sources.
+You can deploy an Azure Managed Integration Runtime within a Microsoft Purview Managed Virtual Network. From there, the Managed VNet Runtime will use private endpoints to securely connect to and scan supported data sources.
Creating a Managed VNet Runtime within Managed Virtual Network ensures that data integration process is isolated and secure.
Only a Managed private endpoint in an approved state can send traffic to a given
### Interactive authoring
-Interactive authoring capabilities is used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when creating or editing an Azure Integration Runtime which is in Purview-Managed Virtual Network. The backend service will pre-allocate compute for interactive authoring functionalities. Otherwise, the compute will be allocated every time any interactive operation is performed which will take more time. The Time To Live (TTL) for interactive authoring is 60 minutes, which means it will automatically become disabled after 60 minutes of the last interactive authoring operation.
+Interactive authoring capabilities are used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when creating or editing an Azure Integration Runtime that is in a Purview-Managed Virtual Network. The backend service pre-allocates compute for interactive authoring functionalities; otherwise, the compute is allocated every time an interactive operation is performed, which takes more time. The Time To Live (TTL) for interactive authoring is 60 minutes, which means it's automatically disabled 60 minutes after the last interactive authoring operation.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-interactive-authoring.png" alt-text="Interactive authoring":::
Interactive authoring capabilities is used for functionalities like test connect
Before deploying a Managed VNet and Managed VNet Runtime for a Microsoft Purview account, ensure you meet the following prerequisites:
-1. An Microsoft Purview account deployed in one of the [supported regions](#supported-regions).
+1. A Microsoft Purview account deployed in one of the [supported regions](#supported-regions).
2. From Microsoft Purview roles, you must be a data curator at root collection level in your Microsoft Purview account. 3. From Azure RBAC roles, you must be contributor on the Microsoft Purview account and data source to approve private links.
Before deploying a Managed VNet and Managed VNet Runtime for a Microsoft Purview
:::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-region.png" alt-text="Screenshot that shows to create a Managed VNet Runtime":::
-5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in the Microsoft Purview governance portal for creating managed private endpoints for Microsoft Purview and its Managed Storage Account. Click on each workflow to approve the private endpoint for the corresponding Azure resource.
+5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in the Microsoft Purview governance portal for creating managed private endpoints for Microsoft Purview and its Managed Storage Account. Select each workflow to approve the private endpoint for the corresponding Azure resource.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-workflows.png" alt-text="Screenshot that shows deployment of a Managed VNet Runtime":::
-6. In Azure portal, from your Microsoft Purview account resource blade, approve the managed private endpoint. From Managed storage account blade approve the managed private endpoints for blob and queue
+6. In the Azure portal, from your Microsoft Purview account resource window, approve the managed private endpoint. From the Managed storage account page, approve the managed private endpoints for blob and queue.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Microsoft Purview":::
To scan any data sources using Managed VNet Runtime, you need to deploy and appr
2. Select **+ New**.
-3. From the list of supported data sources, select the type that corresponds to the data source you are planning to scan using Managed VNet Runtime.
+3. From the list of supported data sources, select the type that corresponds to the data source you're planning to scan using Managed VNet Runtime.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source.png" alt-text="Screenshot that shows how to create a managed private endpoint for data sources":::
-4. Provide a name for the managed private endpoint, select the Azure subscription and the data source from the drop down lists. Select **create**.
+4. Provide a name for the managed private endpoint, select the Azure subscription and the data source from the drop-down lists. Select **create**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-pe.png" alt-text="Screenshot that shows how to select data source for setting managed private endpoint":::
-5. From the list of managed private endpoints, click on the newly created managed private endpoint for your data source and then click on **Manage approvals in the Azure portal**, to approve the private endpoint in Azure portal.
+5. From the list of managed private endpoints, select the newly created managed private endpoint for your data source, and then select **Manage approvals in the Azure portal** to approve the private endpoint in the Azure portal.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-approval.png" alt-text="Screenshot that shows the approval for managed private endpoint for data sources":::
+6. Selecting the link redirects you to the Azure portal. Under private endpoint connections, select the newly created private endpoint and select **approve**.
+6. By clicking on the link, you're redirected to Azure portal. Under private endpoints connection, select the newly created private endpoint and select **approve**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-pe-azure.png" alt-text="Screenshot that shows how to approve a private endpoint for data sources in Azure portal":::
To scan any data sources using Managed VNet Runtime, you need to deploy and appr
### Register and scan a data source using Managed VNet Runtime #### Register data source
-It is important to register the data source in Microsoft Purview prior to setting up a scan for the data source. Follow these steps to register data source if you haven't yet registered it.
+It's important to register the data source in Microsoft Purview prior to setting up a scan for the data source. Follow these steps to register data source if you haven't yet registered it.
1. Go to your Microsoft Purview account. 1. Select **Data Map** on the left menu.
To set up a scan using Account Key or SQL Authentication follow these steps:
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault":::
-6. Provide a name for the managed private endpoint, select the Azure subscription and the Azure Key Vault from the drop down lists. Select **create**.
+6. Provide a name for the managed private endpoint, select the Azure subscription and the Azure Key Vault from the drop-down lists. Select **create**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-create.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault in the Microsoft Purview governance portal":::
-7. From the list of managed private endpoints, click on the newly created managed private endpoint for your Azure Key Vault and then click on **Manage approvals in the Azure portal**, to approve the private endpoint in Azure portal.
+7. From the list of managed private endpoints, select the newly created managed private endpoint for your Azure Key Vault, and then select **Manage approvals in the Azure portal** to approve the private endpoint in the Azure portal.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-approve.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Azure Key Vault":::
-8. By clicking on the link, you are redirected to Azure portal. Under private endpoints connection, select the newly created private endpoint for your Azure Key Vault and select **approve**.
+8. Selecting the link redirects you to the Azure portal. Under private endpoint connections, select the newly created private endpoint for your Azure Key Vault and select **approve**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-az-approve.png" alt-text="Screenshot that shows how to approve a private endpoint for an Azure Key Vault in Azure portal":::
To set up a scan using Account Key or SQL Authentication follow these steps:
14. Under **Connect via integration runtime**, select the newly created Managed VNet Runtime.
-15. For **Credential** Select the credential you have registered earlier, choose the appropriate collection for the scan, and select **Test connection**. On a successful connection, select **Continue**.
+15. For **Credential**, select the credential you registered earlier, choose the appropriate collection for the scan, and select **Test connection**. On a successful connection, select **Continue**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-scan.png" alt-text="Screenshot that shows how to create a new scan using Managed VNet and a SPN":::
purview Concept Asset Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-asset-normalization.md
When ingesting assets into the Microsoft Purview data map, different sources updating the same data asset may send similar, but slightly different qualified names. While these qualified names represent the same asset, slight differences such as an extra character or different capitalization may cause these assets on the surface to appear different. To avoid storing duplicate entries and causing confusion when consuming the data catalog, Microsoft Purview applies normalization during ingestion to ensure all fully qualified names of the same entity type are in the same format.
-For example, you scan in an Azure Blob with the qualified name `https://myaccount.file.core.windows.net/myshare/folderA/folderB/my-file.parquet`. This blob is also consumed by an Azure Data Factory pipeline which will then add lineage information to the asset. The ADF pipeline may be configured to read the file as `https://myAccount.file.core.windows.net//myshare/folderA/folderB/my-file.parquet`. While the qualified name is different, this ADF pipeline is consuming the same piece of data. Normalization ensures that all the metadata from both Azure Blob Storage and Azure Data Factory is visible on a single asset, `https://myaccount.file.core.windows.net/myshare/folderA/folderB/my-file.parquet`.
+For example, you scan in an Azure Blob with the qualified name `https://myaccount.file.core.windows.net/myshare/folderA/folderB/my-file.parquet`. This blob is also consumed by an Azure Data Factory pipeline that will then add lineage information to the asset. The ADF pipeline may be configured to read the file as `https://myAccount.file.core.windows.net//myshare/folderA/folderB/my-file.parquet`. While the qualified name is different, this ADF pipeline is consuming the same piece of data. Normalization ensures that all the metadata from both Azure Blob Storage and Azure Data Factory is visible on a single asset, `https://myaccount.file.core.windows.net/myshare/folderA/folderB/my-file.parquet`.
## Normalization rules
Before: `https://myaccount.file.core.windows.net/myshare/{folderA}/folder{B/`
After: `https://myaccount.file.core.windows.net/myshare/%7BfolderA%7D/folder%7BB/` ### Trim section spaces
-Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Data Factory, Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
+Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Data Factory, Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
Before: `https://myaccount.file.core.windows.net/myshare/ folder A/folderB /` After: `https://myaccount.file.core.windows.net/myshare/folder A/folderB/` ### Remove hostname spaces
-Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
+Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
Before: `https://myaccount .file. core.win dows. net/myshare/folderA/folderB/` After: `https://myaccount.file.core.windows.net/myshare/folderA/folderB/` ### Remove square brackets
-Applies to: Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool
+Applies to: Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool
Before: `mssql://foo.database.windows.net/[bar]/dbo/[foo bar]`
After: `mssql://foo.database.windows.net/bar/dbo/foo%20bar`
> Spaces between two square brackets will be encoded ### Lowercase scheme
-Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Amazon S3
+Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Amazon S3
Before: `HTTPS://myaccount.file.core.windows.net/myshare/folderA/folderB/` After: `https://myaccount.file.core.windows.net/myshare/folderA/folderB/` ### Lowercase hostname
-Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Amazon S3
+Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Amazon S3
Before: `https://myAccount.file.Core.Windows.net/myshare/folderA/folderB/`
Before: `https://myAccount.file.core.windows.net/myshare/folderA/data.TXT`
After: `https://myaccount.file.core.windows.net/myshare/folderA/data.txt` ### Remove duplicate slash
-Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Data Factory, Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
+Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Data Factory, Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
Before: `https://myAccount.file.core.windows.net//myshare/folderA////folderB/`
Before: `https://mystore.azuredatalakestore.net/folderA/folderB/abc.csv`
After: `adl://mystore.azuredatalakestore.net/folderA/folderB/abc.csv` ### Remove Trailing Slash
-Remove the trailing slash from higher level assets for Azure Blob, ADLS Gen1,and ADLS Gen2
+Remove the trailing slash from higher level assets for Azure Blob, ADLS Gen1, and ADLS Gen2
Applies to: Azure Blob, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2
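As an illustration, the following Python sketch applies several of the rules above (lowercase scheme and hostname, remove hostname spaces, trim section spaces, remove duplicate slashes, and encode curly brackets) to a qualified name. It approximates the documented rules for demonstration purposes and may differ from Microsoft Purview's actual normalization in edge cases.

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_qualified_name(qualified_name: str) -> str:
    """Approximate a subset of the Microsoft Purview normalization rules."""
    scheme, netloc, path, query, fragment = urlsplit(qualified_name)
    scheme = scheme.lower()                              # lowercase scheme
    netloc = netloc.replace(" ", "").lower()             # remove hostname spaces, lowercase hostname
    had_trailing_slash = path.endswith("/")
    sections = [s.strip() for s in path.split("/") if s.strip()]  # trim section spaces
    path = "/" + "/".join(sections) + ("/" if had_trailing_slash else "")  # remove duplicate slashes
    path = path.replace("{", "%7B").replace("}", "%7D")  # encode curly brackets
    return urlunsplit((scheme, netloc, path, query, fragment))

print(normalize_qualified_name(
    "HTTPS://myAccount .file. core.windows.net//myshare/ folder A/folder{B}/"))
# https://myaccount.file.core.windows.net/myshare/folder A/folder%7BB%7D/
```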
purview Create Sensitivity Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-sensitivity-label.md
Sensitivity labels are supported in the Microsoft Purview Data Map for the follo
|Data type |Sources | ||| |Automatic labeling for files | - Azure Blob Storage</br>- Azure Files</br>- Azure Data Lake Storage Gen 1 and Gen 2</br>- Amazon S3|
-|Automatic labeling for schematized data assets | - SQL server</br>- Azure SQL database</br>- Azure SQL Database Managed Instance</br>- Azure Synapse Analytics workspaces</br>- Azure Cosmos Database (SQL API)</br> - Azure database for MySQL</br> - Azure database for PostgreSQL</br> - Azure Data Explorer</br> |
+|Automatic labeling for schematized data assets | - SQL server</br>- Azure SQL database</br>- Azure SQL Managed Instance</br>- Azure Synapse Analytics workspaces</br>- Azure Cosmos Database (SQL API)</br> - Azure database for MySQL</br> - Azure database for PostgreSQL</br> - Azure Data Explorer</br> |
| | | ## Labeling for SQL databases
purview How To Automatically Label Your Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-automatically-label-your-content.md
For more information on how to set up scans on various assets in the Microsoft P
|Source |Reference | ||| |**Files within Storage** | [Register and Scan Azure Blob Storage](register-scan-azure-blob-storage-source.md) </br> [Register and scan Azure Files](register-scan-azure-files-storage-source.md) [Register and scan Azure Data Lake Storage Gen1](register-scan-adls-gen1.md) </br>[Register and scan Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)</br>[Register and scan Amazon S3](register-scan-amazon-s3.md) |
-|**database columns** | [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md) </br>[Register and scan an Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md) </br> [Register and scan Dedicated SQL pools](register-scan-azure-synapse-analytics.md)</br> [Register and scan Azure Synapse Analytics workspaces](register-scan-azure-synapse-analytics.md) </br> [Register and scan Azure Cosmos Database (SQL API)](register-scan-azure-cosmos-database.md) </br> [Register and scan an Azure MySQL database](register-scan-azure-mysql-database.md) </br> [Register and scan an Azure database for PostgreSQL](register-scan-azure-postgresql.md) |
+|**database columns** | [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md) </br>[Register and scan an Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md) </br> [Register and scan Dedicated SQL pools](register-scan-azure-synapse-analytics.md)</br> [Register and scan Azure Synapse Analytics workspaces](register-scan-azure-synapse-analytics.md) </br> [Register and scan Azure Cosmos Database (SQL API)](register-scan-azure-cosmos-database.md) </br> [Register and scan an Azure MySQL database](register-scan-azure-mysql-database.md) </br> [Register and scan an Azure database for PostgreSQL](register-scan-azure-postgresql.md) |
| | | ## View labels on assets in the catalog
purview How To Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-resource-group.md
Title: Resource group and subscription access provisioning by data owner
+ Title: Resource group and subscription access provisioning by data owner (preview)
description: Step-by-step guide showing how a data owner can create access policies to resource groups or subscriptions. Previously updated : 05/10/2022 Last updated : 05/27/2022
-# Resource group and subscription access provisioning by data owner (preview)
+# Resource group and subscription access provisioning by data owner (Preview)
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] [Access policies](concept-data-owner-policies.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
Title: Access provisioning by data owner to Azure Storage datasets
+ Title: Access provisioning by data owner to Azure Storage datasets (preview)
description: Step-by-step guide showing how data owners can create access policies to datasets in Azure Storage
Previously updated : 05/12/2022 Last updated : 05/27/2022
-# Access provisioning by data owner to Azure Storage datasets (preview)
+# Access provisioning by data owner to Azure Storage datasets (Preview)
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policy-authoring-generic.md
Title: Authoring and publishing data owner access policies
+ Title: Authoring and publishing data owner access policies (preview)
description: Step-by-step guide on how a data owner can author and publish access policies in Microsoft Purview
Previously updated : 4/18/2022 Last updated : 05/27/2022 # Authoring and publishing data owner access policies (Preview)
purview How To Integrate With Azure Security Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-integrate-with-azure-security-products.md
This document explains the steps required for connecting a Microsoft Purview acc
Microsoft Purview provides rich insights into the sensitivity of your data. This makes it valuable to security teams using Microsoft Defender for Cloud to manage the organization's security posture and protect against threats to their workloads. Data resources remain a popular target for malicious actors, making it crucial for security teams to identify, prioritize, and secure sensitive data resources across their cloud environments. The integration with Microsoft Purview expands visibility into the data layer, enabling security teams to prioritize resources that contain sensitive data.
-To take advantage of this [enrichment in Microsoft Defender for Cloud](../security-center/information-protection.md), no additional steps are needed in Microsoft Purview. Start exploring the security enrichments with Microsoft Defender for Cloud's [Inventory page](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/25) where you can see the list of data sources with classifications and sensitivity labels.
+To take advantage of this [enrichment in Microsoft Defender for Cloud](../security-center/information-protection.md), no further steps are needed in Microsoft Purview. Start exploring the security enrichments with Microsoft Defender for Cloud's [Inventory page](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/25) where you can see the list of data sources with classifications and sensitivity labels.
### Supported data sources The integration supports data sources in Azure and AWS; sensitive data discovered in these resources is shared with Microsoft Defender for Cloud:
The integration supports data sources in Azure and AWS; sensitive data discovere
- [Azure Files](./register-scan-azure-files-storage-source.md) - [Azure Database for MySQL](./register-scan-azure-mysql-database.md) - [Azure Database for PostgreSQL](./register-scan-azure-postgresql.md)-- [Azure SQL Managed Instance](./register-scan-azure-sql-database-managed-instance.md)
+- [Azure SQL Managed Instance](./register-scan-azure-sql-managed-instance.md)
- [Azure Dedicated SQL pool (formerly SQL DW)](./register-scan-azure-synapse-analytics.md) - [Azure SQL Database](./register-scan-azure-sql-database.md) - [Azure Synapse Analytics (Workspace)](./register-scan-synapse-workspace.md)
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
This article describes how you can create credentials in Microsoft Purview. Thes
A credential is authentication information that Microsoft Purview can use to authenticate to your registered data sources. A credential object can be created for various types of authentication scenarios, such as Basic Authentication requiring username/password. Credentials capture the specific information required to authenticate, based on the chosen type of authentication method. Credentials use your existing Azure Key Vault secrets for retrieving sensitive authentication information during the credential creation process.
-In Microsoft Purview, there are few options to use as authentication method to scan data sources such as the following options. Learn from each [data source article](azure-purview-connector-overview.md) for the its supported authentication.
+In Microsoft Purview, there are a few options you can use as the authentication method to scan data sources, such as the following. See each [data source article](azure-purview-connector-overview.md) for its supported authentication methods.
- [Microsoft Purview system-assigned managed identity](#use-microsoft-purview-system-assigned-managed-identity-to-set-up-scans) - [User-assigned managed identity](#create-a-user-assigned-managed-identity) (preview)
If you're using the Microsoft Purview system-assigned managed identity (SAMI) to
- [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md#authentication-for-a-scan) - [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#authentication-for-a-scan) - [Azure SQL Database](register-scan-azure-sql-database.md)-- [Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md#authentication-for-registration)
+- [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md#authentication-for-registration)
- [Azure Synapse Workspace](register-scan-synapse-workspace.md#authentication-for-registration) - [Azure Synapse dedicated SQL pools (formerly SQL DW)](register-scan-azure-synapse-analytics.md#authentication-for-registration)
The following steps will show you how to create a UAMI for Microsoft Purview to
* [Azure Data Lake Gen 1](register-scan-adls-gen1.md) * [Azure Data Lake Gen 2](register-scan-adls-gen2.md) * [Azure SQL Database](register-scan-azure-sql-database.md)
-* [Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md)
+* [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md)
* [Azure SQL Dedicated SQL pools](register-scan-azure-synapse-analytics.md) * [Azure Blob Storage](register-scan-azure-blob-storage-source.md)
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
Previously updated : 04/14/2022 Last updated : 05/27/2022 # Microsoft Purview product glossary
Information that is associated with data assets in Microsoft Purview, for exampl
## Approved The state given to any request that has been accepted as satisfactory by the designated individual or group who has authority to change the state of the request. ## Asset
-Any single object that is stored within a Microsoft Purview data catalog.
+Any single object that is stored within a Microsoft Purview Data Catalog.
> [!NOTE] > A single object in the catalog could potentially represent many objects in storage, for example, a resource set is an asset but it's made up of many partition files in storage. ## Azure Information Protection
An individual who is associated with an entity in the data catalog.
An operation that manages resources in your subscription, such as role-based access control and Azure policy, that are sent to the Azure Resource Manager end point. Control plane operations can also apply to resources outside of Azure across on-premises, multicloud, and SaaS sources. ## Credential A verification of identity or tool used in an access control system. Credentials can be used to authenticate an individual or group to grant access to a data asset.
-## Data catalog
-Microsoft Purview features that enable customers to view and manage the metadata for assets in your data estate.
+## Data Catalog
+A searchable inventory of assets and their associated metadata that allows users to find and curate data across a data estate. The Data Catalog also includes a business glossary where subject matter experts can provide terms and definitions to add a business context to an asset.
## Data curator A role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets. ## Data map
An asset that has been scanned, classified (when applicable), and added to the M
## Insight reader A role that provides read-only access to insights reports for collections where the insights reader also has the **Data reader** role. ## Data Estate Insights
-An area within Microsoft Purview where you can view reports that summarize information about your data.
+An area of the Microsoft Purview governance portal that provides up-to-date reports and actionable insights about the data estate.
## Integration runtime The compute infrastructure used to scan in a data source. ## Lineage
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
This article outlines how to register multiple Azure sources and how to authenti
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
To learn how to add permissions on each resource type within a subscription or r
- [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md#authentication-for-a-scan) - [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#authentication-for-a-scan) - [Azure SQL Database](register-scan-azure-sql-database.md#authentication-for-a-scan)-- [Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md#authentication-for-registration)
+- [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md#authentication-for-registration)
- [Azure Synapse Analytics](register-scan-azure-synapse-analytics.md#authentication-for-registration) ### Steps to register
To create and run a new scan, do the following:
Each credential will be considered as the method of authentication for all the resources under a particular type. You must set the chosen credential on the resources in order to successfully scan them, as described [earlier in this article](#authentication-for-registration). 1. Within each type, you can select to either scan all the resources or scan a subset of them by name: - If you leave the option as **All**, then future resources of that type will also be scanned in future scan runs.
- - If you select specific storage accounts or SQL databases, then future resources of that type created within this subscription or resource group will not be included for scans, unless the scan is explicitly edited in the future.
+ - If you select specific storage accounts or SQL databases, then future resources of that type created within this subscription or resource group won't be included for scans, unless the scan is explicitly edited in the future.
1. Select **Test connection**. This will first test access to check if you've applied the Microsoft Purview MSI file as a reader on the subscription or resource group. If you get an error message, follow [these instructions](#prerequisites-for-registration) to resolve it. Then it will test your authentication and connection to each of your selected sources and generate a report. The number of sources selected will impact the time it takes to generate this report. If failed on some resources, hovering over the **X** icon will display the detailed error message.
- :::image type="content" source="media/register-scan-azure-multiple-sources/test-connection.png" alt-text="Screenshot showing the scan set up slider, with the Test Connection button highlighted.":::
+ :::image type="content" source="media/register-scan-azure-multiple-sources/test-connection.png" alt-text="Screenshot showing the scan setup slider, with the Test Connection button highlighted.":::
:::image type="content" source="media/register-scan-azure-multiple-sources/test-connection-report.png" alt-text="Screenshot showing an example test connection report, with some connections passing and some failing. Hovering over one of the failed connections shows a detailed error report.":::
-1. After you test connection has passed, select **Continue** to proceed.
+1. After your test connection has passed, select **Continue** to proceed.
1. Select scan rule sets for each resource type that you chose in the previous step. You can also create scan rule sets inline.
To manage a scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data.
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-managed-instance.md
+
+ Title: 'Connect to and manage Azure SQL Managed Instance'
+description: This guide describes how to connect to Azure SQL Managed Instance in Microsoft Purview, and use Microsoft Purview's features to scan and manage your Azure SQL Managed Instance source.
+++++ Last updated : 11/02/2021+++
+# Connect to and manage an Azure SQL Managed Instance in Microsoft Purview
+
+This article outlines how to register an Azure SQL Managed Instance, as well as how to authenticate and interact with the Azure SQL Managed Instance in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md).
+
+## Supported capabilities
+
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
+||||||||
+| [Yes](#register) | [Yes](#scan)| [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | No | No** |
+
+\** Lineage is supported if the dataset is used as a source/sink in a [Data Factory Copy activity](how-to-link-azure-data-factory.md)
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* An active [Microsoft Purview account](create-catalog-portal.md).
+
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+
+* [Configure public endpoint in Azure SQL Managed Instance](/azure/azure-sql/managed-instance/public-endpoint-configure)
+
+ > [!Note]
+ > We now support scanning Azure SQL Managed Instances over the private connection using Microsoft Purview ingestion private endpoints and a self-hosted integration runtime VM.
+ > For more information related to prerequisites, see [Connect to your Microsoft Purview and scan data sources privately and securely](./catalog-private-link-end-to-end.md)
+
+## Register
+
+This section describes how to register an Azure SQL Managed Instance in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
+
+### Authentication for registration
+
+If you need to set up new authentication, you need to [authorize database access to Azure SQL Managed Instance](/azure/azure-sql/database/logins-create-manage). There are three authentication methods that Microsoft Purview supports today:
+
+- [System or user assigned managed identity](#system-or-user-assigned-managed-identity-to-register)
+- [Service Principal](#service-principal-to-register)
+- [SQL authentication](#sql-authentication-to-register)
+
+#### System or user assigned managed identity to register
+
+You can use either your Microsoft Purview system-assigned managed identity (SAMI), or a [user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity) (UAMI) to authenticate. Both options allow you to assign authentication directly to Microsoft Purview, like you would for any other user, group, or service principal. The Microsoft Purview system-assigned managed identity is created automatically when the account is created and has the same name as your Microsoft Purview account. A user-assigned managed identity is a resource that can be created independently. To create one, you can follow our [user-assigned managed identity guide](manage-credentials.md#create-a-user-assigned-managed-identity).
+
+You can find your managed identity Object ID in the Azure portal by following these steps (a scripted alternative is sketched after the portal steps):
+
+For the Microsoft Purview account's system-assigned managed identity:
+1. Open the Azure portal, and navigate to your Microsoft Purview account.
+1. Select the **Properties** tab on the left side menu.
+1. Select the **Managed identity object ID** value and copy it.
+
+For user-assigned managed identity (preview):
+1. Open the Azure portal, and navigate to your Microsoft Purview account.
+1. Select the **Managed identities** tab on the left side menu.
+1. Select the user-assigned managed identities, and then select the intended identity to view the details.
+1. The object (principal) ID is displayed in the **Essentials** section of the overview page.
+
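If you prefer scripting to the portal steps above, the following sketch reads the system-assigned managed identity's object (principal) ID from the Azure Resource Manager REST API using the Azure Identity library for Python. The subscription, resource group, and account names are placeholders, and the API version shown is an assumption; substitute the current Microsoft.Purview API version.

```python
# Minimal sketch: read the system-assigned managed identity's principal ID from ARM.
# Requires: pip install azure-identity requests
# Subscription, resource group, account name, and api-version below are placeholders/assumptions.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
purview_account = "<purview-account-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Purview"
    f"/accounts/{purview_account}?api-version=2021-07-01"  # assumed API version
)
response = requests.get(url, headers={"Authorization": f"Bearer {token.token}"})
response.raise_for_status()
identity = response.json().get("identity", {})
print("System-assigned managed identity object ID:", identity.get("principalId"))
```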
+Either managed identity will need permission to get metadata for the database, schemas, and tables, and to query the tables for classification (a T-SQL sketch of these steps follows the list below).
+- Create an Azure AD user in Azure SQL Managed Instance by following the prerequisites and tutorial on [Create contained users mapped to Azure AD identities](/azure/azure-sql/database/authentication-aad-configure?tabs=azure-powershell#create-contained-users-mapped-to-azure-ad-identities)
+- Assign `db_datareader` permission to the identity.
+
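The two bullet points above can also be scripted. The following sketch runs the corresponding T-SQL through pyodbc; it assumes you connect over the public endpoint as an Azure AD admin of the managed instance, that the ODBC Driver 18 for SQL Server is installed, and that `<purview-account-name>` is the display name of your Microsoft Purview account or user-assigned managed identity.

```python
# Sketch only: create a contained Azure AD user for the managed identity and grant db_datareader.
# Assumes: ODBC Driver 18 for SQL Server, an Azure AD admin login on the managed instance,
# and that "<purview-account-name>" is the managed identity's display name.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-managed-instance>.public.<dns-zone>.database.windows.net,3342;"
    "Database=<your-database>;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;",
    autocommit=True,
)
cursor = conn.cursor()
cursor.execute("CREATE USER [<purview-account-name>] FROM EXTERNAL PROVIDER;")
cursor.execute("ALTER ROLE db_datareader ADD MEMBER [<purview-account-name>];")
conn.close()
```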
+#### Service Principal to register
+
+There are several steps to allow Microsoft Purview to use a service principal to scan your Azure SQL Managed Instance.
+
+#### Create or use an existing service principal
+
+To use a service principal, you can use an existing one or create a new one. If you're going to use an existing service principal, skip to the next step.
+If you have to create a new service principal, follow these steps (a scripted alternative using Microsoft Graph is sketched after the list):
+
+ 1. Navigate to the [Azure portal](https://portal.azure.com).
+ 1. Select **Azure Active Directory** from the left-hand side menu.
+ 1. Select **App registrations**.
+ 1. Select **+ New application registration**.
+ 1. Enter a name for the **application** (the service principal name).
+ 1. Select **Accounts in this organizational directory only**.
+ 1. For Redirect URI, select **Web** and enter any URL you want; it doesn't have to be real or work.
+ 1. Then select **Register**.
+
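As a scripted alternative to the portal steps above, the following sketch creates an app registration and its service principal with the Microsoft Graph REST API. The display name is a placeholder, and the caller is assumed to have permission to create applications (for example, Application.ReadWrite.All).

```python
# Sketch: create an app registration and its service principal with Microsoft Graph.
# Requires: pip install azure-identity requests
# Assumes the caller can create applications (for example, Application.ReadWrite.All).
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# Create the app registration (the display name is a hypothetical placeholder).
app = requests.post(
    "https://graph.microsoft.com/v1.0/applications",
    headers=headers,
    json={"displayName": "purview-scan-sp"},
)
app.raise_for_status()
app_id = app.json()["appId"]

# Create the service principal for the new application.
sp = requests.post(
    "https://graph.microsoft.com/v1.0/servicePrincipals",
    headers=headers,
    json={"appId": app_id},
)
sp.raise_for_status()
print("Service principal object ID:", sp.json()["id"])
```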