Updates from: 05/28/2022 01:11:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md
This table shows support for authenticating Azure Active Directory (Azure AD) an
| | USB | NFC | BLE | USB | NFC | BLE | USB | NFC | BLE | USB | NFC | BLE |
| **Windows** | ![Chrome supports USB on Windows for Azure AD accounts.][y] | ![Chrome supports NFC on Windows for Azure AD accounts.][y] | ![Chrome supports BLE on Windows for Azure AD accounts.][y] | ![Edge supports USB on Windows for Azure AD accounts.][y] | ![Edge supports NFC on Windows for Azure AD accounts.][y] | ![Edge supports BLE on Windows for Azure AD accounts.][y] | ![Firefox supports USB on Windows for Azure AD accounts.][y] | ![Firefox supports NFC on Windows for Azure AD accounts.][y] | ![Firefox supports BLE on Windows for Azure AD accounts.][y] | ![Safari supports USB on Windows for Azure AD accounts.][n] | ![Safari supports NFC on Windows for Azure AD accounts.][n] | ![Safari supports BLE on Windows for Azure AD accounts.][n] |
| **macOS** | ![Chrome supports USB on macOS for Azure AD accounts.][y] | ![Chrome supports NFC on macOS for Azure AD accounts.][n] | ![Chrome supports BLE on macOS for Azure AD accounts.][n] | ![Edge supports USB on macOS for Azure AD accounts.][y] | ![Edge supports NFC on macOS for Azure AD accounts.][n] | ![Edge supports BLE on macOS for Azure AD accounts.][n] | ![Firefox supports USB on macOS for Azure AD accounts.][n] | ![Firefox supports NFC on macOS for Azure AD accounts.][n] | ![Firefox supports BLE on macOS for Azure AD accounts.][n] | ![Safari supports USB on macOS for Azure AD accounts.][n] | ![Safari supports NFC on macOS for Azure AD accounts.][n] | ![Safari supports BLE on macOS for Azure AD accounts.][n] |
-| **ChromeOS** | ![Chrome supports USB on ChromeOS for Azure AD accounts.][y] | ![Chrome supports NFC on ChromeOS for Azure AD accounts.][n] | ![Chrome supports BLE on ChromeOS for Azure AD accounts.][n] | ![Edge supports USB on ChromeOS for Azure AD accounts.][n] | ![Edge supports NFC on ChromeOS for Azure AD accounts.][n] | ![Edge supports BLE on ChromeOS for Azure AD accounts.][n] | ![Firefox supports USB on ChromeOS for Azure AD accounts.][n] | ![Firefox supports NFC on ChromeOS for Azure AD accounts.][n] | ![Firefox supports BLE on ChromeOS for Azure AD accounts.][n] | ![Safari supports USB on ChromeOS for Azure AD accounts.][n] | ![Safari supports NFC on ChromeOS for Azure AD accounts.][n] | ![Safari supports BLE on ChromeOS for Azure AD accounts.][n] |
+| **ChromeOS** | ![Chrome supports USB on ChromeOS for Azure AD accounts.][y]* | ![Chrome supports NFC on ChromeOS for Azure AD accounts.][n] | ![Chrome supports BLE on ChromeOS for Azure AD accounts.][n] | ![Edge supports USB on ChromeOS for Azure AD accounts.][n] | ![Edge supports NFC on ChromeOS for Azure AD accounts.][n] | ![Edge supports BLE on ChromeOS for Azure AD accounts.][n] | ![Firefox supports USB on ChromeOS for Azure AD accounts.][n] | ![Firefox supports NFC on ChromeOS for Azure AD accounts.][n] | ![Firefox supports BLE on ChromeOS for Azure AD accounts.][n] | ![Safari supports USB on ChromeOS for Azure AD accounts.][n] | ![Safari supports NFC on ChromeOS for Azure AD accounts.][n] | ![Safari supports BLE on ChromeOS for Azure AD accounts.][n] |
| **Linux** | ![Chrome supports USB on Linux for Azure AD accounts.][y] | ![Chrome supports NFC on Linux for Azure AD accounts.][n] | ![Chrome supports BLE on Linux for Azure AD accounts.][n] | ![Edge supports USB on Linux for Azure AD accounts.][n] | ![Edge supports NFC on Linux for Azure AD accounts.][n] | ![Edge supports BLE on Linux for Azure AD accounts.][n] | ![Firefox supports USB on Linux for Azure AD accounts.][n] | ![Firefox supports NFC on Linux for Azure AD accounts.][n] | ![Firefox supports BLE on Linux for Azure AD accounts.][n] | ![Safari supports USB on Linux for Azure AD accounts.][n] | ![Safari supports NFC on Linux for Azure AD accounts.][n] | ![Safari supports BLE on Linux for Azure AD accounts.][n] |
| **iOS** | ![Chrome supports USB on iOS for Azure AD accounts.][n] | ![Chrome supports NFC on iOS for Azure AD accounts.][n] | ![Chrome supports BLE on iOS for Azure AD accounts.][n] | ![Edge supports USB on iOS for Azure AD accounts.][n] | ![Edge supports NFC on iOS for Azure AD accounts.][n] | ![Edge supports BLE on iOS for Azure AD accounts.][n] | ![Firefox supports USB on iOS for Azure AD accounts.][n] | ![Firefox supports NFC on iOS for Azure AD accounts.][n] | ![Firefox supports BLE on iOS for Azure AD accounts.][n] | ![Safari supports USB on iOS for Azure AD accounts.][n] | ![Safari supports NFC on iOS for Azure AD accounts.][n] | ![Safari supports BLE on iOS for Azure AD accounts.][n] |
| **Android** | ![Chrome supports USB on Android for Azure AD accounts.][n] | ![Chrome supports NFC on Android for Azure AD accounts.][n] | ![Chrome supports BLE on Android for Azure AD accounts.][n] | ![Edge supports USB on Android for Azure AD accounts.][n] | ![Edge supports NFC on Android for Azure AD accounts.][n] | ![Edge supports BLE on Android for Azure AD accounts.][n] | ![Firefox supports USB on Android for Azure AD accounts.][n] | ![Firefox supports NFC on Android for Azure AD accounts.][n] | ![Firefox supports BLE on Android for Azure AD accounts.][n] | ![Safari supports USB on Android for Azure AD accounts.][n] | ![Safari supports NFC on Android for Azure AD accounts.][n] | ![Safari supports BLE on Android for Azure AD accounts.][n] |
-
+*Key Registration is currently not supported with ChromeOS/Chrome Browser.
## Unsupported browsers
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Title: Terms of use - Azure Active Directory | Microsoft Docs
+ Title: Terms of use in Azure Active Directory
description: Get started using Azure Active Directory terms of use to present information to employees or guests before getting access. Previously updated : 01/12/2022 Last updated : 05/26/2022 -+
Azure AD terms of use policies have the following capabilities:
To use and configure Azure AD terms of use policies, you must have: -- Azure AD Premium P1, P2, EMS E3, or EMS E5 subscription.
+- Azure AD Premium P1, P2, EMS E3, or EMS E5 licenses.
- If you don't have one of these subscriptions, you can [get Azure AD Premium](../fundamentals/active-directory-get-started-premium.md) or [enable Azure AD Premium trial](https://azure.microsoft.com/trial/get-started-active-directory/). - One of the following administrator accounts for the directory you want to configure: - Global Administrator
Azure AD terms of use policies use the PDF format to present content. The PDF fi
Once you've completed your terms of use policy document, use the following procedure to add it.
-1. Sign in to Azure as a Global Administrator, Security Administrator, or Conditional Access Administrator.
-1. Navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
-
- ![Conditional Access - Terms of use blade](./media/terms-of-use/tou-blade.png)
-
-1. Click **New terms**.
-
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Select **New terms**.
+
![New term of use pane to specify your terms of use settings](./media/terms-of-use/new-tou.png)

1. In the **Name** box, enter a name for the terms of use policy that will be used in the Azure portal.
-1. In the **Display name** box, enter a title that users see when they sign in.
1. For **Terms of use document**, browse to your finalized terms of use policy PDF and select it.
1. Select the language for your terms of use policy document. The language option allows you to upload multiple terms of use policies, each with a different language. The version of the terms of use policy that an end user will see will be based on their browser preferences.
+1. In the **Display name** box, enter a title that users see when they sign in.
1. To require end users to view the terms of use policy before accepting them, set **Require users to expand the terms of use** to **On**.
1. To require end users to accept your terms of use policy on every device they're accessing from, set **Require users to consent on every device** to **On**. Users may be required to install other applications if this option is enabled. For more information, see [Per-device terms of use](#per-device-terms-of-use).
1. If you want to expire terms of use policy consents on a schedule, set **Expire consents** to **On**. When set to On, two more schedule settings are displayed.
Once you've completed your terms of use policy document, use the following proce
| Alice | Jan 1 | Jan 31 | Mar 2 | Apr 1 |
| Bob | Jan 15 | Feb 14 | Mar 16 | Apr 15 |
- It is possible to use the **Expire consents** and **Duration before re-acceptance required (days)** settings together, but typically you use one or the other.
+ It's possible to use the **Expire consents** and **Duration before re-acceptance required (days)** settings together, but typically you use one or the other.
1. Under **Conditional Access**, use the **Enforce with Conditional Access policy template** list to select the template to enforce the terms of use policy.
- ![Conditional Access drop-down list to select a policy template](./media/terms-of-use/conditional-access-templates.png)
- | Template | Description | | | |
- | **Access to cloud apps for all guests** | A Conditional Access policy will be created for all guests and all cloud apps. This policy impacts the Azure portal. Once this is created, you might be required to sign out and sign in. |
- | **Access to cloud apps for all users** | A Conditional Access policy will be created for all users and all cloud apps. This policy impacts the Azure portal. Once this is created, you'll be required to sign out and sign in. |
| **Custom policy** | Select the users, groups, and apps that this terms of use policy will be applied to. |
| **Create Conditional Access policy later** | This terms of use policy will appear in the grant control list when creating a Conditional Access policy. |
- >[!IMPORTANT]
- >Conditional Access policy controls (including terms of use policies) do not support enforcement on service accounts. We recommend excluding all service accounts from the Conditional Access policy.
+ > [!IMPORTANT]
+ > Conditional Access policy controls (including terms of use policies) do not support enforcement on service accounts. We recommend excluding all service accounts from the Conditional Access policy.
Custom Conditional Access policies enable granular terms of use policies, down to a specific cloud application or group of users. For more information, see [Quickstart: Require terms of use to be accepted before accessing cloud apps](require-tou.md).
-1. Click **Create**.
+1. Select **Create**.
If you selected a custom Conditional Access template, then a new screen appears that allows you to create the custom Conditional Access policy.
Once you've completed your terms of use policy document, use the following proce
You should now see your new terms of use policies.
- ![New terms of use listed in the terms of use blade](./media/terms-of-use/create-tou.png)
-
## View report of who has accepted and declined

The Terms of use blade shows a count of the users who have accepted and declined. These counts and who accepted/declined are stored for the life of the terms of use policy.
The Terms of use blade shows a count of the users who have accepted and declined
![Terms of use blade listing the number of users who have accepted and declined](./media/terms-of-use/view-tou.png)
-1. For a terms of use policy, click the numbers under **Accepted** or **Declined** to view the current state for users.
+1. For a terms of use policy, select the numbers under **Accepted** or **Declined** to view the current state for users.
![Terms of use consents pane listing the users that have accepted](./media/terms-of-use/accepted-tou.png)
-1. To view the history for an individual user, click the ellipsis (**...**) and then **View History**.
+1. To view the history for an individual user, select the ellipsis (**...**) and then **View History**.
![View History context menu for a user](./media/terms-of-use/view-history-menu.png)
If you want to view more activity, Azure AD terms of use policies include audit
To get started with Azure AD audit logs, use the following procedure:
-1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
1. Select a terms of use policy.
-1. Click **View audit logs**.
-
- ![Terms of use blade with the View audit logs option highlighted](./media/terms-of-use/audit-tou.png)
-
+1. Select **View audit logs**.
1. On the Azure AD audit logs screen, you can filter the information using the provided lists to target specific audit log information.
- You can also click **Download** to download the information in a .csv file for use locally.
+ You can also select **Download** to download the information in a .csv file for use locally, or query the same data through Microsoft Graph, as sketched at the end of this section.
![Azure AD audit logs screen listing date, target policy, initiated by, and activity](./media/terms-of-use/audit-logs-tou.png)
- If you click a log, a pane appears with more activity details.
+ If you select a log, a pane appears with more activity details.
![Activity details for a log showing activity, activity status, initiated by, target policy](./media/terms-of-use/audit-log-activity-details.png)
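The same data is also available programmatically through the Microsoft Graph audit log API. The following is a rough sketch only; the endpoint and `$filter`/`$top` parameters are part of Microsoft Graph, but the permission placeholder and the terms-of-use activity strings are assumptions you should confirm against what the portal shows for your tenant:

```python
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits"
ACCESS_TOKEN = "<access token with audit log read permissions>"  # placeholder

response = requests.get(
    GRAPH_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$filter": "activityDateTime ge 2022-05-01T00:00:00Z", "$top": 50},
    timeout=30,
)
response.raise_for_status()

for event in response.json().get("value", []):
    # The string match below is illustrative; check the activity names the
    # portal shows for terms of use accept/decline events in your tenant.
    if "terms of use" in event.get("activityDisplayName", "").lower():
        print(event["activityDateTime"], event["activityDisplayName"])
```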
Users can review and see the terms of use policies that they've accepted by usin
You can edit some details of terms of use policies, but you can't modify an existing document. The following procedure describes how to edit the details.
-1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy you want to edit.
-1. Click **Edit terms**.
-1. In the Edit terms of use pane, you can change the following:
- - **Name** ΓÇô this is the internal name of the ToU that isn't shared with end users
- - **Display name** ΓÇô this is the name that end users can see when viewing the ToU
- - **Require users to expand the terms of use** ΓÇô Setting this to **On** will force the end user to expand the terms of use policy document before accepting it.
+1. Select **Edit terms**.
+1. In the Edit terms of use pane, you can change the following options:
+ - **Name** – the internal name of the ToU that isn't shared with end users
+ - **Display name** – the name that end users can see when viewing the ToU
+ - **Require users to expand the terms of use** – Setting this option to **On** will force the end user to expand the terms of use policy document before accepting it.
- (Preview) You can **update an existing terms of use** document - You can add a language to an existing ToU
You can edit some details of terms of use policies, but you can't modify an exis
![Edit showing different language options ](./media/terms-of-use/edit-terms-use.png)
-1. Once you're done, click **Save** to save your changes.
+1. Once you're done, select **Save** to save your changes.
## Update the version or pdf of an existing terms of use
-1. Sign in to Azure and navigate to [Terms of use](https://aka.ms/catou)
-2. Select the terms of use policy you want to edit.
-3. Click **Edit terms**.
-4. For the language that you would like to update a new version, click **Update** under the action column
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Select the terms of use policy you want to edit.
+1. Select **Edit terms**.
+1. For the language that you would like to update with a new version, select **Update** under the action column.
![Edit terms of use pane showing name and expand options](./media/terms-of-use/edit-terms-use.png)
-5. In the pane on the right, upload the pdf for the new version
-6. There's also a toggle option here **Require reaccept** if you want to require your users to accept this new version the next time they sign in. If you require your users to reaccept, next time they try to access the resource defined in your conditional access policy they'll be prompted to accept this new version. If you donΓÇÖt require your users to reaccept, their previous consent will stay current and only new users who haven't consented before or whose consent expires will see the new version. Until the session expires, **Require reaccept** not require users to accept the new TOU. If you want to ensure reaccept, delete and recreate or create a new TOU for this case.
+1. In the pane on the right, upload the PDF for the new version.
+1. There's also a toggle option here, **Require reaccept**, if you want to require your users to accept this new version the next time they sign in. If you require your users to reaccept, the next time they try to access the resource defined in your Conditional Access policy they'll be prompted to accept this new version. If you don't require your users to reaccept, their previous consent stays current, and only new users who haven't consented before or whose consent expires will see the new version. Until the session expires, **Require reaccept** doesn't require users to accept the new ToU. If you want to ensure users reaccept immediately, delete and re-create the ToU, or create a new ToU for this case.
![Edit terms of use re-accept option highlighted](./media/terms-of-use/re-accept.png)
-7. Once you've uploaded your new pdf and decided on reaccept, click Add at the bottom of the pane.
-8. You'll now see the most recent version under the Document column.
+1. Once you've uploaded your new PDF and decided on reaccept, select **Add** at the bottom of the pane.
+1. You'll now see the most recent version under the Document column.
## View previous versions of a ToU
-1. Sign in to Azure and navigate to **Terms of use** at https://aka.ms/catou.
-2. Select the terms of use policy for which you want to view a version history.
-3. Click on **Languages and version history**
-4. Click on **See previous versions.**
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Select the terms of use policy for which you want to view a version history.
+1. Select **Languages and version history**.
+1. Select **See previous versions.**
![document details including language versions](./media/terms-of-use/document-details.png)
-5. You can click on the name of the document to download that version
+1. You can select the name of the document to download that version.
## See who has accepted each version
-1. Sign in to Azure and navigate to **Terms of use** at https://aka.ms/catou.
-2. To see who has currently accepted the ToU, click on the number under the **Accepted** column for the ToU you want.
-3. By default, the next page will show you the current state of each users acceptance to the ToU
-4. If you would like to see the previous consent events, you can select **All** from the **Current State** drop-down. Now you can see each users events in details about each version and what happened.
-5. Alternatively, you can select a specific version from the **Version** drop-down to see who has accepted that specific version.
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. To see who has currently accepted the ToU, select the number under the **Accepted** column for the ToU you want.
+1. By default, the next page will show you the current state of each user's acceptance of the ToU.
+1. If you would like to see the previous consent events, you can select **All** from the **Current State** drop-down. Now you can see each user's events, with details about each version and what happened.
+1. Alternatively, you can select a specific version from the **Version** drop-down to see who has accepted that specific version.
## Add a ToU language

The following procedure describes how to add a ToU language.
-1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy you want to edit.
-1. Click **Edit Terms**
-1. Click **Add language** at the bottom of the page.
+1. Select **Edit Terms**.
+1. Select **Add language** at the bottom of the page.
1. In the Add terms of use language pane, upload your localized PDF, and select the language.

   ![Terms of use selected and showing the Languages tab in the details pane](./media/terms-of-use/select-language.png)
-1. Click **Add language**.
-1. Click **Save**
+1. Select **Add language**.
+1. Select **Save**.
-1. Click **Add** to add the language.
+1. Select **Add** to add the language.
## Per-device terms of use
If a user is using browser that isn't supported, they'll be asked to use a diffe
You can delete old terms of use policies using the following procedure.
-1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy you want to remove.
-1. Click **Delete terms**.
-1. In the message that appears asking if you want to continue, click **Yes**.
+1. Select **Delete terms**.
+1. In the message that appears asking if you want to continue, select **Yes**.
![Message asking for confirmation to delete terms of use](./media/terms-of-use/delete-tou.png)
You can configure a Conditional Access policy for the Microsoft Intune Enrollmen
A: Terms of use can only be accepted when authenticating interactively. **Q: How do I see when/if a user has accepted a terms of use?**<br />
-A: On the Terms of use blade, click the number under **Accepted**. You can also view or search the accept activity in the Azure AD audit logs. For more information, see View report of who has accepted and declined and [View Azure AD audit logs](#view-azure-ad-audit-logs).
+A: On the Terms of use blade, select the number under **Accepted**. You can also view or search the accept activity in the Azure AD audit logs. For more information, see View report of who has accepted and declined and [View Azure AD audit logs](#view-azure-ad-audit-logs).
**Q: How long is information stored?**<br /> A: The user counts in the terms of use report and who accepted/declined are stored for the life of the terms of use. The Azure AD audit logs are stored for 30 days.
A: The user counts in the terms of use report and who accepted/declined are stor
A: The terms of use report is stored for the lifetime of that terms of use policy, while the Azure AD audit logs are stored for 30 days. Also, the terms of use report only displays the users current consent state. For example, if a user declines and then accepts, the terms of use report will only show that user's accept. If you need to see the history, you can use the Azure AD audit logs. **Q: If hyperlinks are in the terms of use policy PDF document, will end users be able to click them?**<br />
-A: Yes, end users are able to select hyperlinks to other pages but links to sections within the document are not supported. Also, hyperlinks in terms of use policy PDFs do not work when accessed from the Azure AD MyApps/MyAccount portal.
+A: Yes, end users are able to select hyperlinks to other pages but links to sections within the document aren't supported. Also, hyperlinks in terms of use policy PDFs don't work when accessed from the Azure AD MyApps/MyAccount portal.
**Q: Can a terms of use policy support multiple languages?**<br /> A: Yes. Currently there are 108 different languages an administrator can configure for a single terms of use policy. An administrator can upload multiple PDF documents and tag those documents with a corresponding language (up to 108). When end users sign in, we look at their browser language preference and display the matching document. If there's no match, we display the default document, which is the first document that is uploaded.
A: You can [review previously accepted terms of use policies](#how-users-can-rev
A: If you've configured both Azure AD terms of use and [Intune terms and conditions](/intune/terms-and-conditions-create), the user will be required to accept both. For more information, see the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409). **Q: What endpoints does the terms of use service use for authentication?**<br />
-A: Terms of use utilize the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com and https://account.activedirectory.windowsazure.com. If your organization has an allowlist of URLs for enrollment, you will need to add these endpoints to your allowlist, along with the Azure AD endpoints for sign-in.
+A: Terms of use utilize the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com and https://account.activedirectory.windowsazure.com. If your organization has an allowlist of URLs for enrollment, you'll need to add these endpoints to your allowlist, along with the Azure AD endpoints for sign-in.
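If you maintain such an allowlist, a quick reachability check from an affected client can confirm whether a proxy or firewall is blocking these hosts. This is an illustrative sketch, not an official validation tool; any HTTP response at all (even an error status) means the host is reachable:

```python
import requests

ENDPOINTS = [
    "https://tokenprovider.termsofuse.identitygovernance.azure.com",
    "https://account.activedirectory.windowsazure.com",
]

for url in ENDPOINTS:
    try:
        # Any status code proves the host is reachable through your network;
        # a connection or timeout error suggests the allowlist is missing it.
        status = requests.get(url, timeout=10).status_code
        print(f"{url} -> reachable (HTTP {status})")
    except requests.exceptions.RequestException as error:
        print(f"{url} -> not reachable: {error}")
```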
## Next steps
active-directory Mobile App Quickstart Portal Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
-> [!div renderon="portal" class="sxs-lookup display-on-portal"]
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
> # Quickstart: Sign in users and call the Microsoft Graph API from an Android app > > In this quickstart, you download and run a code sample that demonstrates how an Android application can sign in users and get an access token to call the Microsoft Graph API.
> ### Step 1: Configure your application in the Azure portal > For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker. >
-> <button id="makechanges" class="nextstepaction" class="configure-app-button"> Make this change for me </button>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
> > > [!div id="appconfigured" class="alert alert-info"] > > ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
> ### Step 2: Download the project > > Run the project using Android Studio.
-> <a href='https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip'><button id="downloadsample" class="download-sample-button">Download the code sample</button></a>
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
> > > ### Step 3: Your app is configured and ready to run
> Move on to the Android tutorial in which you build an Android app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API. > > > [!div class="nextstepaction"]
-> > [Tutorial: Sign in users and call the Microsoft Graph from an Android application](tutorial-v2-android.md)
+> > [Tutorial: Sign in users and call the Microsoft Graph from an Android application](tutorial-v2-android.md)
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
-> [!div renderon="portal" class="sxs-lookup display-on-portal"]
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
> # Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app > > In this quickstart, you download and run a code sample that demonstrates how a native iOS or macOS application can sign in users and get an access token to call the Microsoft Graph API.
> #### Step 1: Configure your application > For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker. >
-> <button id="makechanges" class="nextstepaction" class="configure-app-button"> Make this change for me </button>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
> > > [!div id="appconfigured" class="alert alert-info"] > > ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes > > #### Step 2: Download the sample project >
-> <a href='https://github.com/Azure-Samples/active-directory-ios-swift-native-v2/archive/master.zip'><button id="downloadsample" class="downloadsample_ios">Download the code sample for iOS</button></a>
->
-> <a href='https://github.com/Azure-Samples/active-directory-macOS-swift-native-v2/archive/master.zip'><button id="downloadsample" class="downloadsample_ios">Download the code sample for macOS</button></a>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample_ios" class="download-sample-button">Download the code sample for iOS</button>
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample_macos" class="download-sample-button">Download the code sample for macOS</button>
> > #### Step 3: Install dependencies >
> Move on to the step-by-step tutorial in which you build an iOS or macOS app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API. > > > [!div class="nextstepaction"]
-> > [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md)
+> > [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md)
active-directory Refresh Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/refresh-tokens.md
Before reading through this article, it's recommended that you go through the fo
## Refresh token lifetime
-Refresh tokens have a longer lifetime than access tokens. The default lifetime for the tokens is 90 days and they replace themselves with a fresh token upon every use. As such, whenever a refresh token is used to acquire a new access token, a new refresh token is also issued. The Microsoft identity platform doesn't revoke old refresh tokens when used to fetch new access tokens. Securely delete the old refresh token after acquiring a new one. Refresh tokens need to be stored safely like access tokens or application credentials.
+Refresh tokens have a longer lifetime than access tokens. The default lifetime for the refresh tokens is 24 hours for [single page apps](reference-third-party-cookies-spas.md) and 90 days for all other scenarios. Refresh tokens replace themselves with a fresh token upon every use. The Microsoft identity platform doesn't revoke old refresh tokens when used to fetch new access tokens. Securely delete the old refresh token after acquiring a new one. Refresh tokens need to be stored safely like access tokens or application credentials.
+
+>[!IMPORTANT]
+> Refresh tokens sent to a redirect URI registered as `spa` expire after 24 hours. Additional refresh tokens acquired using the initial refresh token carry over that expiration time, so apps must be prepared to rerun the authorization code flow using an interactive authentication to get a new refresh token every 24 hours. Users do not have to enter their credentials and usually don't even see any related user experience, just a reload of your application. The browser must visit the log-in page in a top-level frame to show the login session. This is due to [privacy features in browsers that block third party cookies](reference-third-party-cookies-spas.md).
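To make the rotation behavior concrete, here's a minimal sketch of the OAuth 2.0 refresh-token grant against the v2.0 token endpoint. The tenant, client ID, and scope values are placeholders, and in practice the MSAL libraries handle this exchange and the token cache for you:

```python
import requests

# Placeholders - substitute your tenant and public client (app) registration.
TENANT = "common"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"
TOKEN_URL = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token"

def redeem_refresh_token(old_refresh_token: str) -> tuple[str, str]:
    """Exchange a refresh token for a new access token and refresh token."""
    response = requests.post(
        TOKEN_URL,
        data={
            "client_id": CLIENT_ID,
            "grant_type": "refresh_token",
            "refresh_token": old_refresh_token,
            "scope": "https://graph.microsoft.com/User.Read offline_access",
        },
        timeout=30,
    )
    response.raise_for_status()
    tokens = response.json()
    # The platform rotates refresh tokens on every use: store the new value as
    # securely as the old one, then securely delete the old token yourself,
    # because it isn't revoked automatically.
    return tokens["access_token"], tokens.get("refresh_token", old_refresh_token)
```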
## Refresh token expiration
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Because subdomains inherit the authentication type of the root domain by default
Use the following command to promote the subdomain: ```http
-POST https://graph.microsoft.com/v1.0/domains/foo.contoso.com/promote
+POST https://graph.windows.net/{tenant-id}/domains/foo.contoso.com/promote
``` ### Promote command error conditions
Invoking API with a federated verified subdomain with user references | POST | 4
- [Add custom domain names](../fundamentals/add-custom-domain.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) - [Manage domain names](domains-manage.md)-- [ForceDelete a custom domain name with Microsoft Graph API](/graph/api/domain-forcedelete)
+- [ForceDelete a custom domain name with Microsoft Graph API](/graph/api/domain-forcedelete)
active-directory Silverfort Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
-# Tutorial: Configure Silverfort with Azure Active Directory for secure hybrid access
+# Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort
-In this tutorial, learn how to integrate Silverfort with Azure Active Directory (Azure AD). [Silverfort](https://www.silverfort.com/) uses innovative agent-less and proxy-less technology to connect all your assets on-premises and in the cloud to Azure AD. This solution enables organizations to apply identity protection, visibility, and user experience across all environments in Azure AD. It enables universal risk-based monitoring and assessment of authentication activity for on-premises and cloud environments, and proactively prevents threats.
+[Silverfort](https://www.silverfort.com/) uses innovative agent-less and proxy-less technology to connect all your assets on-premises and in the cloud to Azure AD. This solution enables organizations to apply identity protection, visibility, and user experience across all environments in Azure AD. It enables universal risk-based monitoring and assessment of authentication activity for on-premises and cloud environments, and proactively prevents threats.
-Silverfort can seamlessly connect any type of asset into Azure AD, as if it was a modern web application. For example:
+In this tutorial, learn how to integrate your existing on-premises Silverfort implementation with Azure Active Directory (Azure AD) for [hybrid access](../devices/concept-azure-ad-join-hybrid.md).
+
+Silverfort seamlessly connects assets with Azure AD. These **bridged** assets appear as regular applications in Azure AD and can be protected with Conditional Access, single sign-on (SSO), multifactor authentication, auditing, and more. Use Silverfort to connect assets including:
- Legacy and homegrown applications
Silverfort can seamlessly connect any type of asset into Azure AD, as if it was
- Infrastructure and industrial systems
-These **bridged** assets appear as regular applications in Azure AD and can be protected with Conditional Access, single-sign-on (SSO), multifactor authentication, auditing and more.
-
-This solution combines all corporate assets and third-party Identity and Access Management (IAM) platforms. For example, Active Directory, Active Directory Federation Services (ADFS), and Remote Authentication Dial-In User Service (RADIUS) on Azure AD, including hybrid and multi-cloud environments.
+Silverfort integrates your corporate assets and third-party Identity and Access Management (IAM) platforms. These include Active Directory, Active Directory Federation Services (ADFS), and Remote Authentication Dial-In User Service (RADIUS) on Azure AD, in hybrid and multi-cloud environments.
-## Scenario description
+Follow the steps in this tutorial to configure and test the Silverfort Azure AD bridge in your Azure AD tenant to communicate with your existing Silverfort implementation. Once configured, you can create Silverfort authentication policies that bridge authentication requests from various identity sources to Azure AD for SSO. After an application is bridged, it can be managed in Azure AD.
-In this guide, you'll configure and test the Silverfort Azure AD bridge in your Azure AD tenant.
+## Silverfort with Azure AD authentication architecture
-Once configured, you can create Silverfort authentication policies that bridge authentication requests from various identity sources to Azure AD for SSO. Once an application is bridged, it can be managed in Azure AD.
-
-The following diagram shows the components included in the solution and sequence of authentication orchestrated by Silverfort.
+The following diagram describes the authentication architecture orchestrated by Silverfort in a hybrid environment.
![image shows the architecture diagram](./media/silverfort-azure-ad-integration/silverfort-architecture-diagram.png)
The following diagram shows the components included in the solution and sequence
## Prerequisites
-To set up SSO for an application that you added to your Azure AD tenant, you'll need:
+You must already have Silverfort deployed in your tenant or infrastructure to perform this tutorial. To deploy Silverfort, [contact Silverfort](https://www.silverfort.com/). You'll also need to install the Silverfort Desktop app on relevant workstations.
+
+This tutorial requires you to set up the Silverfort Azure AD Adapter in your Azure AD tenant. You'll need:
- An Azure account with an active subscription. You can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - One of the following roles in your Azure account - Global administrator, Cloud application administrator, Application administrator, or Owner of the service principal. -- An application that supports SSO and that was already pre-configured and added to the Azure AD gallery. The Silverfort application in the Azure AD gallery is already pre-configured. You'll need to add it as an Enterprise application from the gallery.-
-## Onboard with Silverfort
-
-To deploy Silverfort in your tenant or infrastructure, [contact Silverfort](https://www.silverfort.com/). Install Silverfort Desktop app on relevant workstations.
+- The Silverfort Azure AD Adapter application in the Azure AD gallery is pre-configured to support SSO. You'll need to add Silverfort Azure AD Adapter to your tenant as an Enterprise application from the gallery.
## Configure Silverfort and create a policy

1. From a browser, log in to the **Silverfort admin console**.
-2. In the main menu, navigate to **Settings**, and then scroll to
+2. In the main menu, navigate to **Settings** and then scroll to
**Azure AD Bridge Connector** in the General section. Confirm your tenant ID, and then select **Authorize**. ![image shows azure ad bridge connector](./media/silverfort-azure-ad-integration/azure-ad-bridge-connector.png)
To deploy Silverfort in your tenant or infrastructure, [contact Silverfort](http
![image shows enterprise application](./media/silverfort-azure-ad-integration/enterprise-application.png)
-5. In the Silverfot admin console, navigate to the **Policies** page, and select **Create Policy**.
+5. In the Silverfort admin console, navigate to the **Policies** page and select **Create Policy**.
-6. The **New Policy** dialog will appear. Enter a **Policy Name**, that would indicate the application name that will be created in Azure. For example, if you're adding multiple servers or applications under this policy, name it to reflect the resources covered by the policy. In the example, we'll create a policy for the *SL-APP1* server.
+6. The **New Policy** dialog will appear. Enter a **Policy Name** that indicates the application name that will be created in Azure. For example, if you're adding multiple servers or applications under this policy, name it to reflect the resources covered by the policy. In the example, we'll create a policy for the *SL-APP1* server.
![image shows define policy](./media/silverfort-azure-ad-integration/define-policy.png)
To deploy Silverfort in your tenant or infrastructure, [contact Silverfort](http
![image shows add policy](./media/silverfort-azure-ad-integration/add-policy.png)
-14. Return to the Azure AD console, and navigate to **Enterprise applications**. The new Silverfort application should now appear. This application can now be included in [CA policies](../authentication/tutorial-enable-azure-mfa.md?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json%23create-a-conditional-access-policy).
+14. Return to the Azure AD console, and navigate to **Enterprise applications**. The new Silverfort application should now appear. This application can now be included in [Conditional Access policies](../authentication/tutorial-enable-azure-mfa.md?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json%23create-a-conditional-access-policy).
## Next steps - [Silverfort Azure AD adapter](https://azuremarketplace.microsoft.com/marketplace/apps/aad.silverfortazureadadapter?tab=overview) - [Silverfort resources](https://www.silverfort.com/resources/)+
+- [Contact Silverfort](https://www.silverfort.com/company/contact/)
active-directory How To Use Vm Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md
Last updated 01/11/2022 -
+ms.tool: azure-cli, azure-powershell
ms.devlang: azurecli
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Last updated 01/11/2022 -+
+ms.tool: azure-cli, azure-powershell
ms.devlang: azurecli #Customer intent: As an administrator, I want to know how to access Cosmos DB from a virtual machine using a managed identity
active-directory Pim Resource Roles Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md
na Previously updated : 05/24/2022 Last updated : 10/07/2021
Select an alert to see a report that lists the users or roles that triggered the
## Alerts
-Alert | Severity | Trigger | Recommendation
- | | |
-**Too many owners assigned to a resource** |Medium |Too many users have the owner role. |Review the users in the list and reassign some to less privileged roles.
-**Too many permanent owners assigned to a resource** |Medium |Too many users are permanently assigned to a role. |Review the users in the list and re-assign some to require activation for role use.
-**Duplicate role created** |Medium |Multiple roles have the same criteria. |Use only one of these roles.
-**Roles are being assigned outside of Privileged Identity Management (Preview)** | High | A role is managed directly through the Azure IAM resource blade or the Azure Resource Manager API | Review the users in the list and remove them from privileged roles assigned outside of Privilege Identity Management.
-
-> [!Note]
-> During the public preview of the **Roles are being assigned outside of Privileged Identity Management (Preview)** alert, Microsoft supports only permissions that are assigned at the subscription level.
+| Alert | Severity | Trigger | Recommendation |
+| | | | |
+| **Too many owners assigned to a resource** |Medium |Too many users have the owner role. |Review the users in the list and reassign some to less privileged roles. |
+| **Too many permanent owners assigned to a resource** |Medium |Too many users are permanently assigned to a role. |Review the users in the list and re-assign some to require activation for role use. |
+| **Duplicate role created** |Medium |Multiple roles have the same criteria. |Use only one of these roles. |
### Severity
active-directory Concept Usage Insights Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md
na Previously updated : 05/13/2019 Last updated : 05/27/2022 -+ # Usage and insights report in the Azure Active Directory portal
To access the data from the usage and insights report, you need:
## Use the report
-The usage and insights report shows the list of applications with one or more sign-in attempts, and allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate.
+The usage and insights report shows the list of applications with one or more sign-in attempts, and allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate. The sign-in graph per application only counts interactive user sign-ins.
Clicking **Load more** at the bottom of the list allows you to view additional applications on the page. You can select the date range to view all applications that have been used within the range.
active-directory Timeclock 365 Saml Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timeclock-365-saml-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Timeclock 365 SAML | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Timeclock 365 SAML'
description: Learn how to configure single sign-on between Azure Active Directory and Timeclock 365 SAML.
Previously updated : 09/02/2021 Last updated : 05/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Timeclock 365 SAML
+# Tutorial: Azure AD SSO integration with Timeclock 365 SAML
In this tutorial, you'll learn how to integrate Timeclock 365 SAML with Azure Active Directory (Azure AD). When you integrate Timeclock 365 SAML with Azure AD, you can:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Timeclock 365 SAML supports **SP** initiated SSO.
-* Timeclock 365 SAML supports [Automated user provisioning](timeclock-365-provisioning-tutorial.md).
+* Timeclock 365 SAML supports [Automated user provisioning](timeclock-365-saml-provisioning-tutorial.md).
## Adding Timeclock 365 SAML from the gallery
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Click on **Create** button to create the test user. > [!NOTE]
-> Timeclock 365 SAML also supports automatic user provisioning, you can find more details [here](./timeclock-365-provisioning-tutorial.md) on how to configure automatic user provisioning.
+> Timeclock 365 SAML also supports automatic user provisioning, you can find more details [here](./timeclock-365-saml-provisioning-tutorial.md) on how to configure automatic user provisioning.
## Test SSO
active-directory Whimsical Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whimsical-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Whimsical for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Whimsical.
++
+writer: twimmers
+
+ms.assetid: 4457a724-ed81-4f7b-bb3e-70beea80cb51
++++ Last updated : 05/11/2022+++
+# Tutorial: Configure Whimsical for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Whimsical and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Whimsical](https://whimsical.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Whimsical
+> * Remove users in Whimsical when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Whimsical
+> * [Single sign-on](benq-iam-tutorial.md) to Whimsical (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* To use SCIM, SAML has to be enabled and correctly configured.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Whimsical](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Whimsical to support provisioning with Azure AD
+1. To enable SCIM, you must first set up SAML SSO with Azure AD.
+1. Go to "Workspace Settings", which you'll find under your workspace name in the top left.
+1. Enable SCIM provisioning and click "Reveal" to retrieve the token.
+1. In the "Provisioning" tab in Azure AD, set "Provisioning Mode" to "Automatic", and paste "https://whimsical.com/public-api/scim-v2/?aadOptscim062020" into "Tenant URL". (A quick way to verify the endpoint and token is sketched after this list.)
+
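Before you turn provisioning on, you can optionally sanity-check the endpoint and token from any machine. This sketch assumes the endpoint follows standard SCIM 2.0 semantics; the token value is a placeholder:

```python
import requests

# Values from the steps above - the token shown here is a placeholder.
TENANT_URL = "https://whimsical.com/public-api/scim-v2/?aadOptscim062020"
SCIM_TOKEN = "<secret token revealed in Whimsical workspace settings>"

# Drop the Azure AD compatibility flag so a SCIM resource path can be appended.
base_url = TENANT_URL.split("?")[0].rstrip("/")

response = requests.get(
    f"{base_url}/Users",
    headers={"Authorization": f"Bearer {SCIM_TOKEN}"},
    params={"count": 1},  # standard SCIM 2.0 pagination parameter
    timeout=30,
)
response.raise_for_status()
print("SCIM endpoint OK, totalResults =", response.json().get("totalResults"))
```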
+## Step 3. Add Whimsical from the Azure AD application gallery
+
+Add Whimsical from the Azure AD application gallery to start managing provisioning to Whimsical. If you have previously set up Whimsical for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Whimsical, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+## Step 5. Configure automatic user provisioning to Whimsical
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Whimsical based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Whimsical in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Whimsical**.
+
+ ![The Whimsical link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provision tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Whimsical Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Whimsical. If the connection fails, ensure your Whimsical account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Whimsical**.
+
+9. Review the user attributes that are synchronized from Azure AD to Whimsical in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Whimsical for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Whimsical API supports filtering users based on that attribute. Select the **Save** button to commit any changes. (An example SCIM payload built from these attributes is sketched after this procedure.)
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |userName|String|&check;
+ |externalId|String|
+ |active|Boolean|
+ |displayName|String|
+
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for Whimsical, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users and/or groups that you would like to provision to Whimsical by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
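For reference, the attribute mappings reviewed in step 9 translate into SCIM requests roughly like the sketch below. The values are invented, and the actual JSON the Azure AD provisioning service sends depends on your mapping configuration:

```python
import json

# Illustrative only - a SCIM 2.0 user resource built from the mapped attributes
# (userName, externalId, active, displayName). The Azure AD provisioning
# service sends these requests for you; this is just to show the shape.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "b.simon@contoso.com",
    "externalId": "b.simon",
    "displayName": "B. Simon",
    "active": True,
}

print(json.dumps(scim_user, indent=2))
```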
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md
Last updated 01/03/2022
Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+## May 2022
+
+### Unlimited number of subscriptions
+It is now easier to get an overview of optimization opportunities available to your organization: there's no need to spend time and effort applying filters and processing subscriptions in batches.
+
+To learn more, visit [Get started with Azure Advisor](advisor-get-started.md).
+
+### Tag filtering
+
+You can now get Advisor recommendations scoped to a business unit, workload, or team. Filter recommendations and calculate scores using tags you have already assigned to Azure resources, resource groups and subscriptions. Apply tag filters to:
+
+* Identify cost saving opportunities by business units
+* Compare scores for workloads to optimize critical ones first
+
+To learn more, visit [How to filter Advisor recommendations using tags](advisor-tag-filtering.md).
+ ## January 2022 [**Shutdown/Resize your virtual machines**](advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances) recommendation was enhanced to increase the quality, robustness, and applicability.
advisor Advisor Tag Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-tag-filtering.md
+
+ Title: Review optimization opportunities by workload, environment or team
+description: Review optimization opportunities by workload, environment or team
++ Last updated : 05/25/2022++
+# Review optimization opportunities by workload, environment or team
+
+You can now get Advisor recommendations and scores scoped to a workload, environment, or team using resource tag filters. Filter recommendations and calculate scores using tags you have already assigned to Azure resources, resource groups and subscriptions. Use tag filters to:
+
+* Identify cost saving opportunities by team
+* Compare scores for workloads to optimize the critical ones first
+
+> [!TIP]
+> For more information on how to use resource tags to organize and govern your Azure resources, please see the [Cloud Adoption Framework's guidance](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-tagging) and [Build a cloud governance strategy on Azure](/learn/modules/build-cloud-governance-strategy-azure/).
+
+## How to filter recommendations using tags
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Search for and select [Advisor](https://aka.ms/azureadvisordashboard) from any page.
+1. On the Advisor dashboard, click on the **Add Filter** button.
+1. In the **Filter** field, select the tag and then select one or more values.
+1. Click **Apply**. Summary tiles will be updated to reflect the filter.
+1. Click on any of the categories to review recommendations.
+
+ [ ![Screenshot of the Azure Advisor dashboard that shows count of recommendations after tag filter is applied.](./media/tags/overview-tag-filters.png) ](./media/tags/overview-tag-filters.png#lightbox)
+
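The portal filter above can be approximated from the command line, although the Advisor CLI has no tag parameter. The following sketch lists the IDs of resources carrying a tag and then lists recommendations for client-side matching; the `resourceMetadata.resourceId` and `shortDescription.problem` paths are assumed from the Advisor REST API.

```azurecli
# IDs of the resources that carry the tag used in the portal filter
az resource list --tag dept=IT --query "[].id" --output tsv

# All recommendations, with the impacted resource ID so they can be matched to the tagged resources
az advisor recommendation list --query "[].{resource:resourceMetadata.resourceId, category:category, problem:shortDescription.problem}" --output table
```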
+
+## How to calculate scores using resource tags
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Search for and select [Advisor](https://aka.ms/azureadvisordashboard) from any page.
+1. Select **Advisor score (preview)** from the navigation menu on the left.
+1. Click on the **Add Filter** button.
+1. In the **Filter** field, select the tag and then select one or more values.
+1. Click **Apply**. Advisor score will be updated to only include resources impacted by the filter.
+1. Click on any of the categories to review recommendations.
+
+ [ ![Screenshot of the Azure Advisor score dashboard that shows score and recommendations after tag filter is applied.](./media/tags/score-tag-filters.png) ](./media/tags/score-tag-filters.png#lightbox)
+
+> [!NOTE]
+> Not all capabilities are available when tag filters are used. For example, tag filters are not supported for security score and score history.
+
+## Next steps
+
+To learn more about tagging, see:
+- [Define your tagging strategy - Cloud Adoption Framework](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-tagging)
+- [Tag resources, resource groups, and subscriptions for logical organization - Azure Resource Manager](/azure/azure-resource-manager/management/tag-resources?tabs=json)
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md
Last updated 06/10/2021-
+ms.tool: azure-cli, azure-powershell
ms.devlang: azurecli
aks Howto Deploy Java Liberty App With Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app-with-postgresql.md
The steps in this section guide you through creating an Azure Database for Postg
Use the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command to create the DB server. The following example creates a DB server named *youruniquedbname*. Make sure *youruniquedbname* is unique within Azure. > [!TIP]
- > To help ensure a globally unique name, prepend a disambiguation string such as your intitials and the MMDD of today's date.
+ > To help ensure a globally unique name, prepend a disambiguation string such as your initials and the MMDD of today's date.
```bash
In directory *liberty/config*, the *server.xml* is used to configure the DB conn
After the offer is successfully deployed, an AKS cluster will be generated automatically. The AKS cluster is configured to connect to the ACR. Before we get started with the application, we need to extract the namespace configured for the AKS.
-1. Run following command to print the current deployment file, using the `appDeploymentTemplateYamlEncoded` you saved above. The output contains all the variables we need.
+1. Run the following command to print the current deployment file, using the `appDeploymentTemplateYamlEncoded` you saved above. The output contains all the variables we need.
```bash echo <appDeploymentTemplateYamlEncoded> | base64 -d
aks Use Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-tags.md
Title: Use Azure tags in Azure Kubernetes Service (AKS)
description: Learn how to use Azure provider tags to track resources in Azure Kubernetes Service (AKS). Previously updated : 02/08/2022 Last updated : 05/26/2022 # Use Azure tags in Azure Kubernetes Service (AKS)
When you create or update an AKS cluster with the `--tags` parameter, the follow
* The AKS cluster * The route table that's associated with the cluster * The public IP that's associated with the cluster
+* The load balancer that's associated with the cluster
* The network security group that's associated with the cluster * The virtual network that's associated with the cluster
+* The AKS-managed kubelet MSI that's associated with the cluster
+* The AKS-managed add-on MSI that's associated with the cluster
+* The private DNS zone associated with the private cluster
+* The private endpoint associated with the private cluster
+
+> [!NOTE]
+> Azure Private DNS only supports 15 tags. For more information, see [Tag resources](../azure-resource-manager/management/tag-resources.md).
To create a cluster and assign Azure tags, run `az aks create` with the `--tags` parameter, as shown in the following command. Running the command creates a *myAKSCluster* in the *myResourceGroup* with the tags *dept=IT* and *costcenter=9999*.
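A minimal sketch of such a command is shown below; the `--generate-ssh-keys` flag is added only to keep the example self-contained and may differ from the article's full example.

```azurecli
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --tags dept=IT costcenter=9999 \
  --generate-ssh-keys
```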
parameters:
> > Any updates that you make to tags through Kubernetes will retain the value that's set through Kubernetes. For example, if your disk has tags *dept=IT* and *costcenter=5555* set by Kubernetes, and you use the portal to set the tags *team=beta* and *costcenter=3333*, the new list of tags would be *dept=IT*, *team=beta*, and *costcenter=5555*. If you then remove the disk through Kubernetes, the disk would have the tag *team=beta*.
-[install-azure-cli]: /cli/azure/install-azure-cli
+[install-azure-cli]: /cli/azure/install-azure-cli
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
The Web Application Routing solution makes it easy to access applications that a
The add-on deploys four components: an [nginx ingress controller][nginx], [Secrets Store CSI Driver][csi-driver], [Open Service Mesh (OSM)][osm], and [External-DNS][external-dns] controller. - **Nginx ingress Controller**: The ingress controller exposed to the internet.-- **External-dns**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone.
+- **External-DNS controller**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone.
- **CSI driver**: Connector used to communicate with keyvault to retrieve SSL certificates for ingress controller. - **OSM**: A lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.-- **External-DNS controller**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone. ## Prerequisites - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli).
+- An Azure Key Vault containing any application certificates.
+- A DNS solution.
### Install the `aks-preview` Azure CLI extension
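Installing the extension uses the standard Azure CLI extension commands, for example:

```azurecli
# Install the aks-preview extension
az extension add --name aks-preview

# Update to the latest version if it's already installed
az extension update --name aks-preview
```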
You can also enable Web Application Routing on an existing AKS cluster using the
az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons web_application_routing ```
-After the cluster is deployed or updated, use the [az aks show][az-aks-show] command to retrieve the DNS zone name.
- ## Connect to your AKS cluster To connect to the Kubernetes cluster from your local computer, you use [kubectl][kubectl], the Kubernetes command-line client.
If you use the Azure Cloud Shell, `kubectl` is already installed. You can also i
az aks install-cli ```
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. The following example gets credentials for the AKS cluster named *MyAKSCluster* in the *MyResourceGroup*:
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. The following example gets credentials for the AKS cluster named *myAKSCluster* in *myResourceGroup*:
```azurecli
-az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
+az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
``` ## Create the application namespace
Copy the identity's object ID:
### Grant access to Azure Key Vault
+Obtain the vault URI for your Azure Key Vault:
+
+```azurecli
+az keyvault show --resource-group myResourceGroup --name myapp-contoso
+```
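If you only need the vault URI value itself, you can add a `--query` filter; the `properties.vaultUri` path is assumed from the standard `az keyvault show` output.

```azurecli
az keyvault show --resource-group myResourceGroup --name myapp-contoso --query properties.vaultUri --output tsv
```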
+ Grant `GET` permissions for Web Application Routing to retrieve certificates from Azure Key Vault: ```azurecli
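# A sketch of the permission grant; the object ID placeholder is the Web Application Routing
# identity's object ID copied earlier, and the flags follow `az keyvault set-policy`.
az keyvault set-policy --name myapp-contoso --object-id <WEB_APP_ROUTING_IDENTITY_OBJECT_ID> --certificate-permissions get --secret-permissions get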
annotations:
These annotations in the service manifest would direct Web Application Routing to create an ingress servicing `myapp.contoso.com` connected to the keyvault `myapp-contoso`.
-Create a file named **samples-web-app-routing.yaml** and copy in the following YAML. On line 29-31, update `<MY_HOSTNAME>` and `<MY_KEYVAULT_URI>` with the DNS zone name collected in the previous step of this article.
+Create a file named **samples-web-app-routing.yaml** and copy in the following YAML. On line 29-31, update `<MY_HOSTNAME>` with your DNS host name and `<MY_KEYVAULT_URI>` with the vault URI collected in the previous step of this article.
```yaml apiVersion: apps/v1
Use the [kubectl apply][kubectl-apply] command to create the resources.
kubectl apply -f samples-web-app-routing.yaml -n hello-web-app-routing ```
-The following example shows the created resources:
+The following example output shows the created resources:
```bash
-$ kubectl apply -f samples-web-app-routing.yaml -n hello-web-app-routing
- deployment.apps/aks-helloworld created service/aks-helloworld created ```
service/aks-helloworld created
## Verify the managed ingress was created ```bash
-$ kubectl get ingress -n hello-web-app-routing -n hello-web-app-routing
+$ kubectl get ingress -n hello-web-app-routing
``` Open a web browser to *<MY_HOSTNAME>*, for example *myapp.contoso.com* and verify you see the demo application. The application may take a few minutes to appear.
az aks disable-addons --addons web_application_routing --name myAKSCluster --re
When the Web Application Routing add-on is disabled, some Kubernetes resources may remain in the cluster. These resources include *configMaps* and *secrets*, and are created in the *app-routing-system* namespace. To maintain a clean cluster, you may want to remove these resources.
-Look for *addon-web-application-routing* resources using the following [kubectl get][kubectl-get] commands:
- ## Clean up Remove the associated Kubernetes objects created in this article using `kubectl delete`.
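For example, a minimal cleanup sketch that uses the manifest and namespace from this article:

```bash
kubectl delete -f samples-web-app-routing.yaml -n hello-web-app-routing
```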
service "aks-helloworld" deleted
[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete [kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs [ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
-[ingress-resource]: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
+[ingress-resource]: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
app-service Configure Vnet Integration Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-vnet-integration-enable.md
Last updated 10/20/2021
+ms.tool: azure-cli, azure-powershell
# Enable virtual network integration in Azure App Service
app-service Provision Resource Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/provision-resource-terraform.md
Last updated 8/26/2021
+ms.tool: terraform
application-gateway Application Gateway Websocket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-websocket.md
To establish a WebSocket connection, a specific HTTP-based handshake is exchange
![Diagram compares a client interacting with a web server, connecting twice to get two replies, with a WebSocket interaction, where a client connects to a server once to get multiple replies.](./media/application-gateway-websocket/websocket.png)
+> [!NOTE]
+> As described, the HTTP protocol is used only to perform a handshake when establishing a WebSocket connection. Once the handshake is completed, a WebSocket connection gets opened for transmitting the data, and the Web Application Firewall (WAF) cannot parse any contents. Therefore, WAF does not perform any inspections on such data.
+ ### Listener configuration element An existing HTTP listener can be used to support WebSocket traffic. The following is a snippet of an httpListeners element from a sample template file. You would need both HTTP and HTTPS listeners to support WebSocket and secure WebSocket traffic. Similarly you can use the portal or Azure PowerShell to create an application gateway with listeners on port 80/443 to support WebSocket traffic.
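For orientation, an illustrative `httpListeners` element might look like the following. The property names follow the `Microsoft.Network/applicationGateways` ARM schema; the resource IDs are placeholders rather than the article's sample values.

```json
"httpListeners": [
  {
    "name": "appGatewayHttpListener",
    "properties": {
      "frontendIPConfiguration": { "id": "<frontend-ip-configuration-resource-id>" },
      "frontendPort": { "id": "<frontend-port-80-resource-id>" },
      "protocol": "Http"
    }
  },
  {
    "name": "appGatewayHttpsListener",
    "properties": {
      "frontendIPConfiguration": { "id": "<frontend-ip-configuration-resource-id>" },
      "frontendPort": { "id": "<frontend-port-443-resource-id>" },
      "protocol": "Https",
      "sslCertificate": { "id": "<ssl-certificate-resource-id>" }
    }
  }
]
```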
Another reason for this is that application gateway backend health probe support
## Next steps
-After learning about WebSocket support, go to [create an application gateway](quick-create-powershell.md) to get started with a WebSocket enabled web application.
+After learning about WebSocket support, go to [create an application gateway](quick-create-powershell.md) to get started with a WebSocket enabled web application.
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
As of March 15, 2021, Key Vault recognizes Application Gateway as a trusted serv
When you're using a restricted Key Vault, use the following steps to configure Application Gateway to use firewalls and virtual networks: > [!TIP]
-> The following steps are not required if your Key Vault has a Private Endpoint enabled. The application gateway can access the Key Vault using the private IP address.
+> Steps 1-3 are not required if your Key Vault has a Private Endpoint enabled. The application gateway can access the Key Vault using the private IP address.
1. In the Azure portal, in your Key Vault, select **Networking**. 1. On the **Firewalls and virtual networks** tab, select **Selected networks**.
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-overview.md
description: Learn about regions and availability zones and how they work to hel
Previously updated : 03/30/2022 Last updated : 05/30/2022
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
description: Learn what services are supported by availability zones and underst
Previously updated : 03/25/2022 Last updated : 05/30/2022
azure-arc Active Directory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-introduction.md
# Azure Arc-enabled SQL Managed Instance with Active Directory authentication + Azure Arc-enabled data services support Active Directory (AD) for Identity and Access Management (IAM). The Arc-enabled SQL Managed Instance uses an existing on-premises Active Directory (AD) domain for authentication. + This article describes how to enable Azure Arc-enabled SQL Managed Instance with Active Directory (AD) Authentication. The article demonstrates two possible AD integration modes: - Customer-managed keytab (CMK) - System-managed keytab (SMK)
azure-arc Active Directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-prerequisites.md
This document explains how to prepare to deploy Azure Arc-enabled data services with Active Directory (AD) authentication. Specifically the article describes Active Directory objects you need to configure before the deployment of Kubernetes resources. + [The introduction](active-directory-introduction.md#compare-ad-integration-modes) describes two different integration modes: - *System-managed keytab* mode allows the system to create and manage the AD accounts for each SQL Managed Instance. - *Customer-managed keytab* mode allows you to create and manage the AD accounts for each SQL Managed Instance.
azure-arc Configure Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-managed-instance.md
Previously updated : 02/22/2022 Last updated : 05/27/2022
To view the changes made to the Azure Arc-enabled SQL managed instance, you can
az sql mi-arc show -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s ```
+## Configure readable secondaries
+
+When you deploy Azure Arc-enabled SQL Managed Instance in the `BusinessCritical` service tier with two or more replicas, one secondary replica is automatically configured as `readableSecondary` by default. You can change this setting to add or remove readable secondaries as follows:
+
+```azurecli
+az sql mi-arc update --name <sqlmi name> --readable-secondaries <value> --k8s-namespace <namespace> --use-k8s
+```
+
+For example, the following command resets the number of readable secondaries to 0.
+
+```azurecli
+az sql mi-arc update --name sqlmi1 --readable-secondaries 0 --k8s-namespace mynamespace --use-k8s
+```
+## Configure replicas
+
+You can also scale up or down the number of replicas deployed in the `BusinessCritical` service tier as follows:
+
+```azurecli
+az sql mi-arc update --name <sqlmi name> --replicas <value> --k8s-namespace <namespace> --use-k8s
+```
+
+For example, the following command scales down the number of replicas from 3 to 2.
+
+```azurecli
+az sql mi-arc update --name sqlmi1 --replicas 2 --k8s-namespace mynamespace --use-k8s
+```
+
+> [!NOTE]
+> If you scale down from 2 replicas to 1 replica, you may run into a conflict with the pre-configured `--readable-secondaries` setting. First set `--readable-secondaries` to a compatible value, and then scale down the replicas, as shown in the sequence below.
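For example, a sketch of that two-step sequence, reusing the sample names from the commands above:

```azurecli
# Remove the readable secondary first so it doesn't conflict with a single-replica deployment
az sql mi-arc update --name sqlmi1 --readable-secondaries 0 --k8s-namespace mynamespace --use-k8s

# Then scale the deployment down to one replica
az sql mi-arc update --name sqlmi1 --replicas 1 --k8s-namespace mynamespace --use-k8s
```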
++ ## Configure Server options You can configure server configuration settings for Azure Arc-enabled SQL managed instance after creation time. This article describes how to configure settings like enabling or disabling mssql Agent, enable specific trace flags for troubleshooting scenarios.
azure-arc Configure Transparent Data Encryption Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-manually.md
# Enable transparent data encryption on Azure Arc-enabled SQL Managed Instance
-This article describes how to enable transparent data encryption on a database created in an Azure Arc-enabled SQL Managed Instance.
+This article describes how to enable transparent data encryption on a database created in an Azure Arc-enabled SQL Managed Instance. In this article, the term *managed instance* refers to a deployment of Azure Arc-enabled SQL Managed Instance.
## Prerequisites
-Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created and have connected to it.
+Before you proceed with this article, you must have an Azure Arc-enabled SQL Managed Instance resource created and be connected to it.
- [An Azure Arc-enabled SQL Managed Instance created](./create-sql-managed-instance.md) - [Connect to Azure Arc-enabled SQL Managed Instance](./connect-managed-instance.md)
-## Turn on transparent data encryption on a database in Azure Arc-enabled SQL Managed Instance
+## Turn on transparent data encryption on a database in the managed instance
-Turning on transparent data encryption in Azure Arc-enabled SQL Managed Instance follows the same steps as SQL Server on-premises. Follow the steps described in [SQL Server's transparent data encryption guide](/sql/relational-databases/security/encryption/transparent-data-encryption#enable-tde).
+Turning on transparent data encryption in the managed instance follows the same steps as SQL Server on-premises. Follow the steps described in [SQL Server's transparent data encryption guide](/sql/relational-databases/security/encryption/transparent-data-encryption#enable-tde).
-After creating the necessary credentials, it's highly recommended to back up any newly created credentials.
+After you create the necessary credentials, back up any newly created credentials.
-## Back up a transparent data encryption credential from Azure Arc-enabled SQL Managed Instance
+## Back up a transparent data encryption credential
-When backing up from Azure Arc-enabled SQL Managed Instance, the credentials will be stored within the container. It isn't necessary to store the credentials on a persistent volume, but you may use the mount path for the data volume within the container if you'd like: `/var/opt/mssql/data`. Otherwise, the credentials will be stored in-memory in the container. Below is an example of backing up a certificate from Azure Arc-enabled SQL Managed Instance.
+When you back up credentials from the managed instance, the credentials are stored within the container. To store credentials on a persistent volume, specify the mount path in the container. For example, `var/opt/mssql/data`. The following example backs up a certificate from the managed instance:
> [!NOTE]
-> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. `kubectl` can mistake the drive in the path as a pod name. For example, `kubectl` might mistake `C` to be a pod name in `C:\folder`. Users can avoid this issue by using relative paths or removing the `C:` from the provided path while in the `C:` drive. This issue also applies to environment variables on Windows like `$HOME`.
+> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below.
1. Back up the certificate from the container to `/var/opt/mssql/data`.
When backing up from Azure Arc-enabled SQL Managed Instance, the credentials wil
2. Copy the certificate from the container to your file system.
+### [Windows](#tab/windows)
+
+ ```console
+ kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-certificate-path> > <local-certificate-path>
+ ```
+
+ Example:
+
+ ```console
+ kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.crt > $HOME\sqlcerts\servercert.crt
+ ```
+
+### [Linux](#tab/linux)
```console kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-certificate-path> <local-certificate-path> ```
When backing up from Azure Arc-enabled SQL Managed Instance, the credentials wil
Example: ```console
- kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.crt ./sqlcerts/servercert.crt
+ kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.crt $HOME/sqlcerts/servercert.crt
``` ++ 3. Copy the private key from the container to your file system.
+### [Windows](#tab/windows)
+ ```console
+ kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-private-key-path> > <local-private-key-path>
+ ```
+
+ Example:
+
+ ```console
+ kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.key > $HOME\sqlcerts\servercert.key
+ ```
+
+### [Linux](#tab/linux)
```console kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-private-key-path> <local-private-key-path> ```
When backing up from Azure Arc-enabled SQL Managed Instance, the credentials wil
Example: ```console
- kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.key ./sqlcerts/servercert.key
+ kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.key $HOME/sqlcerts/servercert.key
``` ++ 4. Delete the certificate and private key from the container. ```console
When backing up from Azure Arc-enabled SQL Managed Instance, the credentials wil
kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" ```
-## Restore a transparent data encryption credential to Azure Arc-enabled SQL Managed Instance
+## Restore a transparent data encryption credential to a managed instance
-Similar to above, restore the credentials by copying them into the container and running the corresponding T-SQL afterwards.
+Similar to above, to restore the credentials, copy them into the container and run the corresponding T-SQL afterwards.
> [!NOTE]
-> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. `kubectl` can mistake the drive in the path as a pod name. For example, `kubectl` might mistake `C` to be a pod name in `C:\folder`. Users can avoid this issue by using relative paths or removing the `C:` from the provided path while in the `C:` drive. This issue also applies to environment variables on Windows like `$HOME`.
+> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below.
1. Copy the certificate from your file system to the container.
+### [Windows](#tab/windows)
+ ```console
+ type <local-certificate-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-certificate-path>
+ ```
+
+ Example:
+ ```console
+ type $HOME\sqlcerts\servercert.crt | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.crt
+ ```
+
+### [Linux](#tab/linux)
```console kubectl cp --namespace <namespace> --container arc-sqlmi <local-certificate-path> <pod-name>:<pod-certificate-path> ```
Similar to above, restore the credentials by copying them into the container and
Example: ```console
- kubectl cp --namespace arc-ns --container arc-sqlmi ./sqlcerts/servercert.crt sql-0:/var/opt/mssql/data/servercert.crt
+ kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.crt sql-0:/var/opt/mssql/data/servercert.crt
``` ++ 2. Copy the private key from your file system to the container.
+### [Windows](#tab/windows)
+ ```console
+ type <local-private-key-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-private-key-path>
+ ```
+
+ Example:
+ ```console
+ type $HOME\sqlcerts\servercert.key | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.key
+ ```
+
+### [Linux](#tab/linux)
```console kubectl cp --namespace <namespace> --container arc-sqlmi <local-private-key-path> <pod-name>:<pod-private-key-path> ```
Similar to above, restore the credentials by copying them into the container and
Example: ```console
- kubectl cp --namespace arc-ns --container arc-sqlmi ./sqlcerts/servercert.key sql-0:/var/opt/mssql/data/servercert.key
+ kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.key sql-0:/var/opt/mssql/data/servercert.key
``` ++ 3. Create the certificate using file paths from `/var/opt/mssql/data`. ```sql
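-- A sketch of creating the certificate from the copied files; the certificate name and password are placeholders.
CREATE CERTIFICATE MyServerCert
    FROM FILE = '/var/opt/mssql/data/servercert.crt'
    WITH PRIVATE KEY (
        FILE = '/var/opt/mssql/data/servercert.key',
        DECRYPTION BY PASSWORD = '<password used to protect the private key>'
    );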
azure-arc Connect Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-active-directory-sql-managed-instance.md
This article describes how to connect to SQL Managed Instance endpoint using Active Directory (AD) authentication. Before you proceed, make sure you have an AD-integrated Azure Arc-enabled SQL Managed Instance deployed already. + See [Tutorial ΓÇô Deploy AD-integrated SQL Managed Instance](deploy-active-directory-sql-managed-instance.md) to deploy Azure Arc-enabled SQL Managed Instance with Active Directory authentication enabled. > [!NOTE]
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
Previously updated : 03/24/2022 Last updated : 05/27/2022
Optionally, you can specify certificates for logs and metrics UI dashboards. See
After the extension and custom location are created, proceed to deploy the Azure Arc data controller as follows. ```azurecli
-az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --profile-name <profile name> --auto-upload-logs true --auto-upload-metrics true --custom-location <name of custom location> --storage-class <storageclass>
+az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --profile-name <profile name> --auto-upload-metrics true --custom-location <name of custom location> --storage-class <storageclass>
# Example
-az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-logs true --auto-upload-metrics true --custom-location mycustomlocation --storage-class mystorageclass
+az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-metrics true --custom-location mycustomlocation --storage-class mystorageclass
``` If you want to create the Azure Arc data controller using a custom configuration template, follow the steps described in [Create custom configuration profile](create-custom-configuration-template.md) and provide the path to the file as follows: ```azurecli
-az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --path ./azure-arc-custom --auto-upload-logs true --auto-upload-metrics true --custom-location <name of custom location>
+az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --custom-location <name of custom location>
# Example
-az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --path ./azure-arc-custom --auto-upload-logs true --auto-upload-metrics true --custom-location mycustomlocation
+az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --custom-location mycustomlocation
``` ## Monitor the status of Azure Arc data controller deployment
azure-arc Deploy Active Directory Connector Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-cli.md
This article explains how to deploy an Active Directory (AD) connector using Azure CLI. The AD connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance. + ## Prerequisites ### Install tools
azure-arc Deploy Active Directory Connector Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-portal.md
Active Directory (AD) connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instances. + This article explains how to deploy, manage, and delete an Active Directory (AD) connector in directly connected mode from the Azure portal. ## Prerequisites
azure-arc Deploy Active Directory Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance-cli.md
This article explains how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory (AD) authentication using Azure CLI. + See these articles for specific instructions: - [Tutorial ΓÇô Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md)
azure-arc Deploy Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md
This article explains how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory (AD) authentication. + Before you proceed, complete the steps explained in [Customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md) or [Deploy a system-managed keytab AD connector](deploy-system-managed-keytab-active-directory-connector.md) ## Prerequisites
azure-arc Deploy Customer Managed Keytab Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-customer-managed-keytab-active-directory-connector.md
This article explains how to deploy Active Directory (AD) connector in customer-managed keytab mode. The connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance. + ## Active Directory connector in customer-managed keytab mode In customer-managed keytab mode, an Active Directory connector deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of the two upstream DNS
azure-arc Deploy System Managed Keytab Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-system-managed-keytab-active-directory-connector.md
This article explains how to deploy Active Directory connector in system-managed keytab mode. It is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance. + ## Active Directory connector in system-managed keytab mode In System-Managed Keytab mode, an Active Directory connector deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of the two upstream DNS
azure-arc Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-guide.md
description: Introduction to troubleshooting resources
--++ Previously updated : 07/30/2021 Last updated : 05/27/2022
This article identifies troubleshooting resources for Azure Arc-enabled data services.
+## Logs Upload related errors
+
+If you deployed Azure Arc data controller in the `direct` connectivity mode using `kubectl`, and have not created a secret for the Log Analytics workspace credentials, you may see the following error messages in the Data Controller CR (Custom Resource):
+
+```
+"status": {
+ "azure": {
+ "uploadStatus": {
+ "logs": {
+ "lastUploadTime": "YYYY-MM-HHTMM:SS:MS.SSSSSSZ",
+ "message": "spec.settings.azure.autoUploadLogs is true, but failed to get log-workspace-secret secret."
+ },
+
+```
+
+To resolve the above error, create a secret with the Log Analytics Workspace credentials containing the `WorkspaceID` and the `SharedAccessKey` as follows:
+
+```
+apiVersion: v1
+data:
+ primaryKey: <base64 encoding of Azure Log Analytics workspace primary key>
+ workspaceId: <base64 encoding of Azure Log Analytics workspace Id>
+kind: Secret
+metadata:
+ name: log-workspace-secret
+ namespace: <your datacontroller namespace>
+type: Opaque
+
+```
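One way to create this secret is to save the YAML above to a file and apply it with `kubectl`; the base64 values can be produced with the standard `base64` utility. This is a sketch, so adjust the file name and namespace to your environment.

```console
# Base64-encode the workspace ID and primary key (Linux/macOS)
echo -n '<Log Analytics workspace ID>' | base64
echo -n '<Log Analytics workspace primary key>' | base64

# Create the secret in the data controller namespace
kubectl apply --namespace <your datacontroller namespace> -f log-workspace-secret.yaml
```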
+
+## Metrics upload related errors in direct connected mode
+
+If you configured automatic upload of metrics in the direct connected mode, and the permissions needed for the MSI have not been properly granted (as described in [Upload metrics](upload-metrics.md)), you might see an error in your logs as follows:
+
+```output
+'Metric upload response: {"error":{"code":"AuthorizationFailed","message":"Check Access Denied Authorization for AD object XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX over scope /subscriptions/XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX/resourcegroups/my-resource-group/providers/microsoft.azurearcdata/sqlmanagedinstances/arc-dc, User Tenant Id: XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX. Microsoft.Insights/Metrics/write was not allowed, Microsoft.Insights/Telemetry/write was notallowed. Warning: Principal will be blocklisted if the service principal is not granted proper access while it hits the GIG endpoint continuously."}}
+```
+
+To resolve the above error, retrieve the MSI for the Azure Arc data controller extension and grant the required roles as described in [Upload metrics](upload-metrics.md).
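As a sketch, granting that access with the Azure CLI might look like the following. The **Monitoring Metrics Publisher** role and the exact scope to use are described in [Upload metrics](upload-metrics.md); the IDs below are placeholders.

```azurecli
az role assignment create \
  --assignee <object id of the Azure Arc data controller extension MSI> \
  --role "Monitoring Metrics Publisher" \
  --scope /subscriptions/<subscription id>/resourceGroups/<resource group>
```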
++
+## Usage upload related errors in direct connected mode
+
+If you deployed your Azure Arc data controller in the direct connected mode, the permissions needed to upload your usage information are automatically granted for the Azure Arc data controller extension MSI. If the automatic upload process runs into permissions-related issues, you might see an error in your logs as follows:
+
+```
+identified that your data controller stopped uploading usage data to Azure. The error was:
+
+{"lastUploadTime":"2022-05-05T20:10:47.6746860Z","message":"Data controller upload response: {\"error\":{\"code\":\"AuthorizationFailed\",\"message\":\"The client 'XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX' with object id 'XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX' does not have authorization to perform action 'microsoft.azurearcdata/datacontrollers/write' over scope '/subscriptions/XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX/resourcegroups/my-resource-group/providers/microsoft.azurearcdata/datacontrollers/arc-dc' or the scope is invalid. If access was recently granted, please refresh your credentials.\"}}"}
+```
+
+To resolve the permissions issue, retrieve the MSI and grant the required roles as described in [Upload metrics](upload-metrics.md).
++ ## Resources by type [Scenario: Troubleshooting PostgreSQL Hyperscale server groups](troubleshoot-postgresql-hyperscale-server-group.md)
azure-arc Upload Usage Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-usage-data.md
description: Upload usage Azure Arc-enabled data services data to Azure
--++ Previously updated : 11/03/2021 Last updated : 05/27/2022 # Upload usage data to Azure in **indirect** mode
-Periodically, you can export out usage information. The export and upload of this information creates and updates the data controller, SQL managed instance, and PostgreSQL Hyperscale server group resources in Azure.
+Periodically, you can export out usage information. The export and upload of this information creates and updates the data controller, SQL managed instance, and PostgreSQL resources in Azure.
> [!NOTE] > Usage information is automatically uploaded for Azure Arc data controller deployed in **direct** connectivity mode. The instructions in this article only apply to uploading usage information for Azure Arc data controller deployed in **indirect** connectivity mode..
Usage information such as inventory and resource usage can be uploaded to Azure
az arcdata dc export --type usage --path usage.json --k8s-namespace <namespace> --use-k8s ```
- This command creates a `usage.json` file with all the Azure Arc-enabled data resources such as SQL managed instances and PostgreSQL Hyperscale instances etc. that are created on the data controller.
+ This command creates a `usage.json` file with all the Azure Arc-enabled data resources such as SQL managed instances and PostgreSQL instances etc. that are created on the data controller.
For now, the file is not encrypted so that you can see the contents. Feel free to open in a text editor and see what the contents look like.
-You will notice that there are two sets of data: `resources` and `data`. The `resources` are the data controller, PostgreSQL Hyperscale server groups, and SQL Managed Instances. The `resources` records in the data capture the pertinent events in the history of a resource - when it was created, when it was updated, and when it was deleted. The `data` records capture how many cores were available to be used by a given instance for every hour.
+You will notice that there are two sets of data: `resources` and `data`. The `resources` are the data controller, PostgreSQL, and SQL Managed Instances. The `resources` records in the data capture the pertinent events in the history of a resource - when it was created, when it was updated, and when it was deleted. The `data` records capture how many cores were available to be used by a given instance for every hour.
Example of a `resource` entry:
Example of a `data` entry:
az arcdata dc upload --path usage.json ```
+## Upload frequency
+
+In the **indirect** mode, usage information needs to be uploaded to Azure at least once every 30 days. Uploading more frequently, such as daily or weekly, is highly recommended. If usage information is not uploaded for more than 32 days, you will see some degradation in the service, such as being unable to provision any new resources.
+
+There are two types of notifications for delayed usage uploads: a warning phase and a degraded phase. In the warning phase, the message is similar to `Billing data for the Azure Arc data controller has not been uploaded in {0} hours. Please upload billing data as soon as possible.`.
+
+In the degraded phase, the message will look like `Billing data for the Azure Arc data controller has not been uploaded in {0} hours. Some functionality will not be available until the billing data is uploaded.`.
+
+The Azure portal Overview page for the data controller and the custom resource status of the data controller in your Kubernetes cluster both indicate the last upload date and the status messages.
+++ ## Automating uploads (optional) If you want to upload metrics and logs on a scheduled basis, you can create a script and run it on a timer every few minutes. Below is an example of automating the uploads using a Linux shell script.
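A minimal sketch of such a script, built from the export and upload commands shown earlier; the 24-hour interval is only an example, and any schedule well within the 30-day window works.

```bash
#!/bin/bash
# Export usage data from the data controller and upload it to Azure once a day.
while true
do
  rm -f usage.json
  az arcdata dc export --type usage --path usage.json --k8s-namespace <namespace> --use-k8s
  az arcdata dc upload --path usage.json
  sleep 86400   # 24 hours
done
```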
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-configurations.md
Title: "Configurations and GitOps - Azure Arc-enabled Kubernetes"
+ Title: "GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes"
Last updated 05/24/2022
description: "This article provides a conceptual overview of GitOps and configur
keywords: "Kubernetes, Arc, Azure, containers, configuration, GitOps"
-# Configurations and GitOps with Azure Arc-enabled Kubernetes
+# GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes
> [!NOTE] > This document is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about GitOps with Flux v2](./conceptual-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible. In relation to Kubernetes, GitOps is the practice of declaring the desired state of Kubernetes cluster configurations (deployments, namespaces, etc.) in a Git repository. This declaration is followed by a polling and pull-based deployment of these cluster configurations using an operator. The Git repository can contain:+ * YAML-format manifests describing any valid Kubernetes resources, including Namespaces, ConfigMaps, Deployments, DaemonSets, etc. * Helm charts for deploying applications.
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
Title: "Conceptual overview Azure Kubernetes Configuration Management (GitOps)"
+ Title: "GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes"
description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 5/3/2022 Last updated : 5/26/2022
-# GitOps in Azure
+# GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes
Azure provides configuration management capability using GitOps in Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters. You can easily enable and use GitOps in these clusters.
azure-functions Azure Functions Az Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/azure-functions-az-redundancy.md
Title: Azure Functions availability zone support on Elastic Premium plans
description: Learn how to use availability zone redundancy with Azure Functions for high-availability function applications on Elastic Premium plans. Previously updated : 09/07/2021 Last updated : 03/24/2022 # Goal: Introduce AZ Redundancy in Azure Functions elastic premium plans to customers + a tutorial on how to get started with ARM templates # Azure Functions support for availability zone redundancy
-Availability zone (AZ) support for Azure Functions is now available on Elastic Premium and Dedicated (App Service) plans. A Zone Redundant Azure Function application will automatically balance its instances between availability zones for higher availability. This document focuses on zone redundancy support for Elastic Premium Function plans. For zone redundancy on Dedicated plans, refer [here](../app-service/how-to-zone-redundancy.md).
+Availability zone (AZ) support for Azure Functions is now available on Premium (Elastic Premium) and Dedicated (App Service) plans. A zone-redundant Functions application automatically balances its instances between availability zones for higher availability. This article focuses on zone redundancy support for Premium plans. For zone redundancy on Dedicated plans, refer [here](../app-service/how-to-zone-redundancy.md).
+ ## Overview
-An [availability zone](../availability-zones/az-overview.md#availability-zones) is a high-availability offering that protects your applications and data from datacenter failures. Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there&#39;s a minimum of three separate zones in all enabled regions. You can build high availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating in other zones.
+An [availability zone](../availability-zones/az-overview.md#availability-zones) is a high-availability offering that protects your applications and data from datacenter failures. Availability zones are unique physical locations within an Azure region. Each zone comprises one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. You can build high-availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating into other zones.
-A zone redundant function app will automatically distribute load the instances that your app runs on between the availability zones in the region. For Zone Redundant Elastic Premium apps, even as the app scales in and out, the instances the app is running on are still evenly distributed between availability zones.
+A zone redundant function app automatically distributes the instances your app runs on between the availability zones in the region. For apps running in a zone-redundant Premium plan, even as the app scales in and out, the instances the app is running on are still evenly distributed between availability zones.
## Requirements
-> [!IMPORTANT]
-> When selecting a [storage account](storage-considerations.md#storage-account-requirements) for your function app, be sure to use a [zone redundant storage account (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). Otherwise, in the case of a zonal outage, Functions may show unexpected behavior due to its dependency on Storage.
+When hosting in a zone-redundant Premium plan, the following requirements must be met.
+- You must use a [zone redundant storage account (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) for your function app's [storage account](storage-considerations.md#storage-account-requirements); an example command follows this list. If you use a different type of storage account, Functions may show unexpected behavior during a zonal outage.
- Both Windows and Linux are supported.-- Must be hosted on an [Elastic Premium](functions-premium-plan.md) or Dedicated hosting plan. Instructions on zone redundancy with Dedicated (App Service) hosting plan can be found [here](../app-service/how-to-zone-redundancy.md).
+- Must be hosted on an [Elastic Premium](functions-premium-plan.md) or Dedicated hosting plan. Instructions on zone redundancy with Dedicated (App Service) hosting plan can be found [in this article](../app-service/how-to-zone-redundancy.md).
- Availability zone (AZ) support isn't currently available for function apps on [Consumption](consumption-plan.md) plans.-- Zone redundant plans must specify a minimum instance count of 3.-- Function apps on an Elastic Premium plan additionally must have a minimum [always ready instances](functions-premium-plan.md#always-ready-instances) count of 3.-- Can be enabled in any of the following regions:
+- Zone redundant plans must specify a minimum instance count of three.
+- Function apps hosted on a Premium plan must also have a minimum [always ready instances](functions-premium-plan.md#always-ready-instances) count of three.
+
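As a sketch, a zone-redundant storage account for the function app can be created with the `Standard_ZRS` SKU; the names below are placeholders.

```azurecli
az storage account create \
  --name <storage account name> \
  --resource-group <resource group> \
  --location westus2 \
  --sku Standard_ZRS
```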
+Zone-redundant Premium plans can currently be enabled in any of the following regions:
- West US 2 - West US 3 - Central US
+ - South Central US
- East US - East US 2 - Canada Central
A zone redundant function app will automatically distribute load the instances t
- Japan East - Southeast Asia - Australia East-- At this time, must be created through [ARM template](../azure-resource-manager/templates/index.yml). ## How to deploy a function app on a zone redundant Premium plan
-For initial creation of a zone redundant Elastic Premium Functions plan, you need to deploy via [ARM templates](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md). Then, once successfully created, you can view and interact with the Function Plan via the Azure portal and CLI tooling. An ARM template is only needed for the initial creation of the Function Plan. A guide to hosting Functions on Premium plans can be found [here](functions-infrastructure-as-code.md#deploy-on-premium-plan). Once the zone redundant plan is created and deployed, any function app hosted on your new plan will now be zone redundant.
+There are currently two ways to deploy a zone-redundant premium plan and function app. You can use either the [Azure portal](https://portal.azure.com) or an ARM template.
+
+# [Azure portal](#tab/azure-portal)
+
+1. Open the Azure portal and navigate to the **Create Function App** page. Information on creating a function app in the portal can be found [here](functions-create-function-app-portal.md#create-a-function-app).
+
+1. In the **Basics** page, fill out the fields for your function app. Pay special attention to the fields in the table below (also highlighted in the screenshot below), which have specific requirements for zone redundancy.
+
+ | Setting | Suggested value | Notes for Zone Redundancy |
+ | | - | -- |
+ | **Region** | Preferred region | The region in which the new function app is created. You must pick a region that is AZ enabled from the [list above](#requirements). |
+
+ ![Screenshot of Basics tab of function app create page.](./media/functions-az-redundancy/azure-functions-basics-az.png)
+
+1. In the **Hosting** page, fill out the fields for your function app hosting plan. Pay special attention to the fields in the table below (also highlighted in the screenshot below), which have specific requirements for zone redundancy.
+
+ | Setting | Suggested value | Notes for Zone Redundancy |
+ | | - | -- |
+ | **Storage Account** | A [zone-redundant storage account](storage-considerations.md#storage-account-requirements) | As mentioned above in the [requirements](#requirements) section, we strongly recommend using a zone-redundant storage account for your zone redundant function app. |
+ | **Plan Type** | Functions Premium | This article details how to create a zone redundant app in a Premium plan. Zone redundancy isn't currently available in Consumption plans. Information on zone redundancy on app service plans can be found [in this article](../app-service/how-to-zone-redundancy.md). |
+ | **Zone Redundancy** | Enabled | This field populates the flag that determines if your app is zone redundant or not. You won't be able to select `Enabled` unless you have chosen a region supporting zone redundancy, as mentioned in step 2. |
+
+ ![Screenshot of Hosting tab of function app create page.](./media/functions-az-redundancy/azure-functions-hosting-az.png)
-The only properties to be aware of while creating a zone redundant Function plan are the new **zoneRedundant** property and the Function Plan instance count (**capacity**) fields. The **zoneRedundant** property must be set to **true** and the **capacity** property should be set based on the workload requirement, but no less than 3. Choosing the right capacity varies based on several factors and high availability/fault tolerance strategies. A good rule of thumb is to ensure sufficient instances for the application such that losing one zone of instances leaves sufficient capacity to handle expected load.
+1. For the rest of the function app creation process, create your function app as normal. There are no fields in the rest of the creation process that affect zone redundancy.
+
+# [ARM template](#tab/arm-template)
+
+You can use an [ARM template](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md) to deploy to a zone-redundant Premium plan. A guide to hosting Functions on Premium plans can be found [here](functions-infrastructure-as-code.md#deploy-on-premium-plan).
+
+The only properties to be aware of while creating a zone-redundant hosting plan are the new `zoneRedundant` property and the plan's instance count (`capacity`) fields. The `zoneRedundant` property must be set to `true` and the `capacity` property should be set based on the workload requirement, but not less than `3`. Choosing the right capacity varies based on several factors and high availability/fault tolerance strategies. A good rule of thumb is to ensure sufficient instances for the application such that losing one zone of instances leaves sufficient capacity to handle expected load.
> [!IMPORTANT]
-> Azure function Apps hosted on an elastic premium, zone redundant Function plan must have a minimum [always ready instance](functions-premium-plan.md#always-ready-instances) count of 3. This is to enforce that a zone redundant function app always has enough instances to satisfy at least one worker per zone.
+> Azure Functions apps hosted on an Elastic Premium, zone-redundant plan must have a minimum [always ready instance](functions-premium-plan.md#always-ready-instances) count of 3. This requirement makes sure that a zone-redundant function app always has enough instances to satisfy at least one worker per zone.
-Below is an ARM template snippet for a zone redundant, Premium Function Plan, showing the new **zoneRedundant** field and the **capacity** specification.
+Below is an ARM template snippet for a zone-redundant Premium plan, showing the `zoneRedundant` field and the `capacity` specification.
-```
- "resources": [
- {
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-01-15",
- "name": "your_plan_name_here",
- "location": "Central US",
- "sku": {
- "name": "EP3",
- "tier": "ElasticPremium",
- "size": "EP3",
- "family": "EP",
- "capacity": 3
- },
- "kind": "elastic",
- "properties": {
- "perSiteScaling": false,
- "elasticScaleEnabled": true,
- "maximumElasticWorkerCount": 20,
- "isSpot": false,
- "reserved": false,
- "isXenon": false,
- "hyperV": false,
- "targetWorkerCount": 0,
- "targetWorkerSizeId": 0,
- "zoneRedundant": true
- }
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2021-01-15",
+ "name": "your_plan_name_here",
+ "location": "Central US",
+ "sku": {
+ "name": "EP3",
+ "tier": "ElasticPremium",
+ "size": "EP3",
+ "family": "EP",
+ "capacity": 3
+ },
+ "kind": "elastic",
+ "properties": {
+ "perSiteScaling": false,
+ "elasticScaleEnabled": true,
+ "maximumElasticWorkerCount": 20,
+ "isSpot": false,
+ "reserved": false,
+ "isXenon": false,
+ "hyperV": false,
+ "targetWorkerCount": 0,
+ "targetWorkerSizeId": 0,
+ "zoneRedundant": true
}
- ]
+ }
+]
```
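To satisfy the always ready instance requirement noted above, the function app itself also needs its always ready instance count set to at least 3. The snippet below is a minimal, illustrative sketch rather than a complete template: the resource and plan names are placeholders, required application settings such as `AzureWebJobsStorage` and `FUNCTIONS_EXTENSION_VERSION` are omitted for brevity, and it assumes the `minimumElasticInstanceCount` site configuration property is what controls always ready instances for the app.

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2021-01-15",
  "name": "your_app_name_here",
  "location": "Central US",
  "kind": "functionapp",
  "dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms', 'your_plan_name_here')]"
  ],
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'your_plan_name_here')]",
    "siteConfig": {
      "minimumElasticInstanceCount": 3
    }
  }
}
```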
-To learn more, see [Automate resource deployment for your function app in Azure Functions](functions-infrastructure-as-code.md).
+To learn more about these templates, see [Automate resource deployment in Azure Functions](functions-infrastructure-as-code.md).
+++
+After the zone-redundant plan is created and deployed, any function app hosted on your new plan is considered zone-redundant.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Improve the performance and reliability of Azure Functions](performance-reliability.md)
++
azure-functions Durable Functions Http Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-http-api.md
GET /admin/extensions/DurableTaskExtension/instances
&createdTimeFrom={timestamp} &createdTimeTo={timestamp} &runtimeStatus={runtimeStatus1,runtimeStatus2,...}
+ &instanceIdPrefix={prefix}
&showInput=[true|false] &top={integer} ```
GET /runtime/webhooks/durableTask/instances?
&createdTimeFrom={timestamp} &createdTimeTo={timestamp} &runtimeStatus={runtimeStatus1,runtimeStatus2,...}
+ &instanceIdPrefix={prefix}
&showInput=[true|false] &top={integer} ```
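For example, assuming an application that prefixes its orchestration instance IDs with a known string (the `order-` prefix and the other values here are purely illustrative, and authentication parameters such as the `code` system key are omitted), a filtered query might look like the following:

```
GET /runtime/webhooks/durableTask/instances?instanceIdPrefix=order-&runtimeStatus=Running&top=50
```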
Request parameters for this API include the default set mentioned previously as
| **`createdTimeFrom`** | Query string | Optional parameter. When specified, filters the list of returned instances that were created at or after the given ISO8601 timestamp.| | **`createdTimeTo`** | Query string | Optional parameter. When specified, filters the list of returned instances that were created at or before the given ISO8601 timestamp.| | **`runtimeStatus`** | Query string | Optional parameter. When specified, filters the list of returned instances based on their runtime status. To see the list of possible runtime status values, see the [Querying instances](durable-functions-instance-management.md) article. |
+| **`instanceIdPrefix`** | Query string | Optional parameter. When specified, filters the list of returned instances to include only instances whose instance ID starts with the specified prefix string. Available starting with [version 2.7.2](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask/2.7.2) of the extension. |
| **`top`** | Query string | Optional parameter. When specified, limits the number of instances returned by the query. | ### Response
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The above sample value of `1800` sets a timeout of 30 minutes. To learn more, se
## WEBSITE\_CONTENTAZUREFILECONNECTIONSTRING
-Connection string for storage account where the function app code and configuration are stored in event-driven scaling plans running on Windows. For more information, see [Create a function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
+Connection string for storage account where the function app code and configuration are stored in event-driven scaling plans. For more information, see [Create a function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
|Key|Sample value| ||| |WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|`DefaultEndpointsProtocol=https;AccountName=...`|
-Only used when deploying to a Windows or Linux Premium plan or to a Windows Consumption plan. Not supported for Linux Consumption plans or Windows or Linux Dedicated plans. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+This setting is used for Consumption and Premium plan apps on both Windows and Linux. It's not used for Dedicated plan apps, which aren't dynamically scaled by Functions.
+
+Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
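As an illustration only, this setting typically appears together with the related `WEBSITE_CONTENTSHARE` setting, for example in the `appSettings` fragment of an ARM template's site resource. The storage account name, account key, and file share name below are placeholders:

```json
"appSettings": [
  {
    "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
    "value": "DefaultEndpointsProtocol=https;AccountName=<storage-account-name>;AccountKey=<storage-account-key>;EndpointSuffix=core.windows.net"
  },
  {
    "name": "WEBSITE_CONTENTSHARE",
    "value": "<file-share-name>"
  }
]
```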
## WEBSITE\_CONTENTOVERVNET
azure-functions Functions Create First Function Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-function-bicep.md
+
+ Title: Create your function app resources in Azure using Bicep
+description: Create and deploy to Azure a simple HTTP triggered serverless function using Bicep.
++ Last updated : 05/12/2022+++++
+# Quickstart: Create and deploy Azure Functions resources using Bicep
+
+In this article, you use Bicep to create a function that responds to HTTP requests.
+
+Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
++
+## Prerequisites
+
+### Azure account
+
+Before you begin, you must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/function-app-create-dynamic/).
++
+The following four Azure resources are created by this Bicep file:
+++ [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage account, which is required by Functions.++ [**Microsoft.Web/serverfarms**](/azure/templates/microsoft.web/serverfarms): create a serverless Consumption hosting plan for the function app.++ [**Microsoft.Web/sites**](/azure/templates/microsoft.web/sites): create a function app.++ [**microsoft.insights/components**](/azure/templates/microsoft.insights/components): create an Application Insights instance for monitoring.+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters appInsightsLocation=<app-location>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -appInsightsLocation "<app-location>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<app-location\>** with the region for Application Insights, which is usually the same as the resource group.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use Azure CLI or Azure PowerShell to validate the deployment.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Visit function app welcome page
+
+1. Use the output from the previous validation step to retrieve the unique name created for your function app.
+1. Open a browser and enter the following URL: `https://<appName>.azurewebsites.net`. Make sure to replace `<appName>` with the unique name created for your function app.
+
+When you visit the URL, you should see a page like this:
++
+## Clean up resources
+
+If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place as you'll build on what you've already done.
+
+Otherwise, if you no longer need the resources, use Azure CLI, PowerShell, or Azure portal to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+Now that you've published your first function, learn more by adding an output binding to your function.
+
+# [Visual Studio Code](#tab/visual-studio-code)
+
+> [!div class="nextstepaction"]
+> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md)
+
+# [Visual Studio](#tab/visual-studio)
+
+> [!div class="nextstepaction"]
+> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs.md)
+
+# [Command line](#tab/command-line)
+
+> [!div class="nextstepaction"]
+> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md)
++
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
Title: Develop and run Azure Functions locally
description: Learn how to code and test Azure Functions on your local computer before you run them on Azure Functions. Previously updated : 09/04/2018 Last updated : 05/19/2022 # Code and test Azure Functions locally
-While you're able to develop and test Azure Functions in the [Azure portal], many developers prefer a local development experience. Functions makes it easy to use your favorite code editor and development tools to create and test functions on your local computer. Your local functions can connect to live Azure services, and you can debug them on your local computer using the full Functions runtime.
+While you're able to develop and test Azure Functions in the [Azure portal], many developers prefer a local development experience. Functions makes it easier to use your favorite code editor and development tools to create and test functions on your local computer. Your local functions can connect to live Azure services, and you can debug them on your local computer using the full Functions runtime.
This article provides links to specific development environments for your preferred language. It also provides some shared guidance for local development, such as working with the [local.settings.json file](#local-settings-file).
The way in which you develop functions on your local computer depends on your [l
|[Visual Studio Code](functions-develop-vs-code.md)| [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). | | [Command prompt or terminal](functions-run-local.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. | | [Visual Studio 2019](functions-develop-vs.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio 2019](https://www.visualstudio.com/vs/) and later versions. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
-| [Maven](./create-first-function-cli-java.md) (various) | [Java](functions-reference-java.md) | Maven archetype supports Core Tools to enable development of Java functions. Version 2.x supports development on Linux, macOS, and Windows. To learn more, see [Create your first function with Java and Maven](./create-first-function-cli-java.md). Also supports development using [Eclipse](functions-create-maven-eclipse.md) and [IntelliJ IDEA](functions-create-maven-intellij.md) |
+| [Maven](./create-first-function-cli-java.md) (various) | [Java](functions-reference-java.md) | Maven archetype supports Core Tools to enable development of Java functions. Version 2.x supports development on Linux, macOS, and Windows. To learn more, see [Create your first function with Java and Maven](./create-first-function-cli-java.md). Also supports development using [Eclipse](functions-create-maven-eclipse.md) and [IntelliJ IDEA](functions-create-maven-intellij.md). |
[!INCLUDE [Don't mix development environments](../../includes/functions-mixed-dev-environments.md)]
-Each of these local development environments lets you create function app projects and use predefined Functions templates to create new functions. Each uses the Core Tools so that you can test and debug your functions against the real Functions runtime on your own machine just as you would any other app. You can also publish your function app project from any of these environments to Azure.
+Each of these local development environments lets you create function app projects and use predefined function templates to create new functions. Each uses the Core Tools so that you can test and debug your functions against the real Functions runtime on your own machine just as you would any other app. You can also publish your function app project from any of these environments to Azure.
## Local settings file
These settings are supported when you run projects locally:
| Setting | Description | | | -- | | **`IsEncrypted`** | When this setting is set to `true`, all values are encrypted with a local machine key. Used with `func settings` commands. Default value is `false`. You might want to encrypt the local.settings.json file on your local computer when it contains secrets, such as service connection strings. The host automatically decrypts settings when it runs. Use the `func settings decrypt` command before trying to read locally encrypted settings. |
-| **`Values`** | Collection of application settings used when a project is running locally. These key-value (string-string) pairs correspond to application settings in your function app in Azure, like [`AzureWebJobsStorage`]. Many triggers and bindings have a property that refers to a connection string app setting, like `Connection` for the [Blob storage trigger](functions-bindings-storage-blob-trigger.md#configuration). For these properties, you need an application setting defined in the `Values` array. See the subsequent table for a list of commonly used settings. <br/>Values must be strings and not JSON objects or arrays. Setting names can't include a double underline (`__`) and should not include a colon (`:`). Double underline characters are reserved by the runtime, and the colon is reserved to support [dependency injection](functions-dotnet-dependency-injection.md#working-with-options-and-settings). |
+| **`Values`** | Collection of application settings used when a project is running locally. These key-value (string-string) pairs correspond to application settings in your function app in Azure, like [`AzureWebJobsStorage`]. Many triggers and bindings have a property that refers to a connection string app setting, like `Connection` for the [Blob storage trigger](functions-bindings-storage-blob-trigger.md#configuration). For these properties, you need an application setting defined in the `Values` array. See the subsequent table for a list of commonly used settings. <br/>Values must be strings and not JSON objects or arrays. Setting names can't include a double underline (`__`) and shouldn't include a colon (`:`). Double underline characters are reserved by the runtime, and the colon is reserved to support [dependency injection](functions-dotnet-dependency-injection.md#working-with-options-and-settings). |
| **`Host`** | Settings in this section customize the Functions host process when you run projects locally. These settings are separate from the host.json settings, which also apply when you run projects in Azure. | | **`LocalHttpPort`** | Sets the default port used when running the local Functions host (`func host start` and `func run`). The `--port` command-line option takes precedence over this setting. For example, when running in Visual Studio IDE, you may change the port number by navigating to the "Project Properties -> Debug" window and explicitly specifying the port number in a `host start --port <your-port-number>` command that can be supplied in the "Application Arguments" field. | | **`CORS`** | Defines the origins allowed for [cross-origin resource sharing (CORS)](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing). Origins are supplied as a comma-separated list with no spaces. The wildcard value (\*) is supported, which allows requests from any origin. |
The following application settings can be included in the **`Values`** array whe
| Setting | Values | Description | |--|--|--| |**`AzureWebJobsStorage`**| Storage account connection string, or<br/>`UseDevelopmentStorage=true`| Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the [`AzureWebJobsStorage`] reference.<br/>When you have the [Azurite Emulator](../storage/common/storage-use-azurite.md) installed locally and you set [`AzureWebJobsStorage`] to `UseDevelopmentStorage=true`, Core Tools uses the emulator. The emulator is useful during development, but you should test with an actual storage connection before deployment.|
-|**`AzureWebJobs.<FUNCTION_NAME>.Disabled`**| `true`\|`false` | To disable a function when running locally, add `"AzureWebJobs.<FUNCTION_NAME>.Disabled": "true"` to the collection, where `<FUNCTION_NAME>` is the name of the function. To learn more, see [How to disable functions in Azure Functions](disable-function.md#localsettingsjson) |
+|**`AzureWebJobs.<FUNCTION_NAME>.Disabled`**| `true`\|`false` | To disable a function when running locally, add `"AzureWebJobs.<FUNCTION_NAME>.Disabled": "true"` to the collection, where `<FUNCTION_NAME>` is the name of the function. To learn more, see [How to disable functions in Azure Functions](disable-function.md#localsettingsjson). |
|**`FUNCTIONS_WORKER_RUNTIME`** | `dotnet`<br/>`node`<br/>`java`<br/>`powershell`<br/>`python`| Indicates the targeted language of the Functions runtime. Required for version 2.x and higher of the Functions runtime. This setting is generated for your project by Core Tools. To learn more, see the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) reference.|
-| **`FUNCTIONS_WORKER_RUNTIME_VERSION`** | `~7` |Indicates that PowerShell 7 be used when running locally. If not set, then PowerShell Core 6 is used. This setting is only used when running locally. When running in Azure, the PowerShell runtime version is determined by the `powerShellVersion` site configuration setting, which can be [set in the portal](functions-reference-powershell.md#changing-the-powershell-version). |
+| **`FUNCTIONS_WORKER_RUNTIME_VERSION`** | `~7` | Indicates that PowerShell 7 is used when running locally. If not set, PowerShell Core 6 is used. This setting is only used when running locally. When the app runs in Azure, the PowerShell runtime version is determined by the `powerShellVersion` site configuration setting, which can be [set in the portal](functions-reference-powershell.md#changing-the-powershell-version). |
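Putting several of these settings together, a minimal local.settings.json file might look like the following sketch. The worker runtime value, the port, and the `AzureWebJobs.HttpExample.Disabled` entry (which disables a hypothetical `HttpExample` function and is included only to show the pattern) are illustrative; use the values that match your project.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobs.HttpExample.Disabled": "true"
  },
  "Host": {
    "LocalHttpPort": 7071,
    "CORS": "*"
  }
}
```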
## Next steps
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
Title: Develop Azure Functions by using Visual Studio Code
description: Learn how to develop and test Azure Functions by using the Azure Functions extension for Visual Studio Code. ms.devlang: csharp, java, javascript, powershell, python- Previously updated : 02/21/2021+ Last updated : 05/19/2022 #Customer intent: As an Azure Functions developer, I want to understand how Visual Studio Code supports Azure Functions so that I can more efficiently create, publish, and maintain my Functions projects.
Before you install and run the [Azure Functions extension][Azure Functions exten
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-Other resources that you need, like an Azure storage account, are created in your subscription when you [publish by using Visual Studio Code](#publish-to-azure).
+Other resources that you need, like an Azure storage account, are created in your subscription when you [publish by using Visual Studio Code](#publish-to-azure).
### Run local requirements
These prerequisites are only required to [run and debug your functions locally](
# [C\#](#tab/csharp)
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-+ The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
+* The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
-+ [.NET Core CLI tools](/dotnet/core/tools/?tabs=netcore2x).
+* [.NET Core CLI tools](/dotnet/core/tools/?tabs=netcore2x).
# [Java](#tab/java)
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-+ [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug).
+* [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug).
-+ [Java 8](/azure/developer/jav#java-versions).
+* [Java 8](/azure/developer/jav#java-versions).
-+ [Maven 3 or later](https://maven.apache.org/)
+* [Maven 3 or later](https://maven.apache.org/).
# [JavaScript](#tab/nodejs)
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-+ [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.
+* [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.
# [PowerShell](#tab/powershell)
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-+ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
+* [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
-+ Both [.NET Core 3.1 runtime](https://dotnet.microsoft.com/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet/2.1)
+* Both [.NET Core 3.1 runtime](https://dotnet.microsoft.com/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet/2.1).
-+ The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
+* The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
# [Python](#tab/python)
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-+ [Python 3.x](https://www.python.org/downloads/). For version information, see [Python versions](functions-reference-python.md#python-version) by the Azure Functions runtime.
+* [Python 3.x](https://www.python.org/downloads/). For version information, see [Python versions](functions-reference-python.md#python-version) by the Azure Functions runtime.
-+ [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
+* [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
The Functions extension lets you create a function app project, along with your
1. From **Azure: Functions**, select the **Create Function** icon:
- ![Create a function](./media/functions-develop-vs-code/create-function.png)
+ :::image type="content" source="./media/functions-develop-vs-code/create-function.png" alt-text=" Screenshot for Create Function.":::
1. Select the folder for your function app project, and then **Select a language for your function project**. 1. Select the **HTTP trigger** function template, or you can select **Skip for now** to create a project without a function. You can always [add a function to your project](#add-a-function-to-your-project) later.
- ![Choose the HTTP trigger template](./media/functions-develop-vs-code/create-function-choose-template.png)
+ :::image type="content" source="./media/functions-develop-vs-code/select-http-trigger.png" alt-text="Screenshot for selecting H T T P trigger.":::
1. Type **HttpExample** for the function name and select Enter, and then select **Function** authorization. This authorization level requires you to provide a [function key](functions-bindings-http-webhook-trigger.md#authorization-keys) when you call the function endpoint.
- ![Select Function authorization](./media/functions-develop-vs-code/create-function-auth.png)
+ :::image type="content" source="./media/functions-develop-vs-code/create-function-auth.png" alt-text="Screenshot for creating function authorization.":::
- A function is created in your chosen language and in the template for an HTTP-triggered function.
+1. From the dropdown list, select **Add to workspace**.
- ![HTTP-triggered function template in Visual Studio Code](./media/functions-develop-vs-code/new-function-full.png)
+    :::image type="content" source="./media/functions-develop-vs-code/add-to-workplace.png" alt-text="Screenshot for selecting Add to workspace.":::
+
+1. In the **Do you trust the authors of the files in this folder?** window, select **Yes**.
+
+ :::image type="content" source="./media/functions-develop-vs-code/select-author-file.png" alt-text="Screenshot to confirm trust in authors of the files.":::
+
+1. A function is created in your chosen language, from the template for an HTTP-triggered function.
+
+ :::image type="content" source="./media/functions-develop-vs-code/new-function-created.png" alt-text="Screenshot for H T T P-triggered function template in Visual Studio Code.":::
### Generated project files
Depending on your language, these other files are created:
# [Java](#tab/java)
-+ A pom.xml file in the root folder that defines the project and deployment parameters, including project dependencies and the [Java version](functions-reference-java.md#java-versions). The pom.xml also contains information about the Azure resources that are created during a deployment.
+* A pom.xml file in the root folder that defines the project and deployment parameters, including project dependencies and the [Java version](functions-reference-java.md#java-versions). The pom.xml also contains information about the Azure resources that are created during a deployment.
-+ A [Functions.java file](functions-reference-java.md#triggers-and-annotations) in your src path that implements the function.
+* A [Functions.java file](functions-reference-java.md#triggers-and-annotations) in your src path that implements the function.
# [JavaScript](#tab/nodejs)
Depending on your language, these other files are created:
# [PowerShell](#tab/powershell) * An HttpExample folder that contains the [function.json definition file](functions-reference-powershell.md#folder-structure) and the run.ps1 file, which contains the function code.
-
+ # [Python](#tab/python)
-
+ * A project-level requirements.txt file that lists packages required by Functions.
-
+ * An HttpExample folder that contains the [function.json definition file](functions-reference-python.md#folder-structure) and the \_\_init\_\_.py file, which contains the function code.
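For reference, the function.json generated for an HTTP-triggered function typically looks similar to the following sketch. The exact binding names, methods, and `authLevel` value depend on the choices you made when creating the function, and the `scriptFile` entry shown here applies to the Python template:

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get", "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```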
-At this point, you can [add input and output bindings](#add-input-and-output-bindings) to your function.
+At this point, you can [add input and output bindings](#add-input-and-output-bindings) to your function.
You can also [add a new function to your project](#add-a-function-to-your-project). ## Install binding extensions
Replace `<TARGET_VERSION>` in the example with a specific version of the package
## Add a function to your project
-You can add a new function to an existing project by using one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
+You can add a new function to an existing project by using one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
The results of this action depend on your project's language:
The `msg` parameter is an `ICollector<T>` type, which represents a collection of
Messages are sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=csharp) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=csharp) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
# [Java](#tab/java)
Update the function method to add the following parameter to the `Run` method de
:::code language="java" source="~/functions-quickstart-java/functions-add-output-binding-storage-queue/src/main/java/com/function/Function.java" range="20-21":::
-The `msg` parameter is an `OutputBinding<T>` type, where is `T` is a string that is written to an output binding when the function completes. The following code sets the message in the output binding:
+The `msg` parameter is an `OutputBinding<T>` type, where `T` is a string that is written to an output binding when the function completes. The following code sets the message in the output binding:
:::code language="java" source="~/functions-quickstart-java/functions-add-output-binding-storage-queue/src/main/java/com/function/Function.java" range="33-34"::: This message is sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=java) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=java).
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=java) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=java).
# [JavaScript](#tab/nodejs)
In your function code, the `msg` binding is accessed from the `context`, as in t
This message is sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=javascript) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=javascript) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
# [PowerShell](#tab/powershell)
To learn more, see the [Queue storage output binding reference article](function
This message is sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=powershell) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=powershell).
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=powershell) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=powershell).
# [Python](#tab/python)
The following code adds string data from the request to the output queue:
This message is sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=python) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=python).
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=python) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=python).
To learn more, see the [Queue storage output binding reference article](function
Visual Studio Code lets you publish your Functions project directly to Azure. In the process, you create a function app and related resources in your Azure subscription. The function app provides an execution context for your functions. The project is packaged and deployed to the new function app in your Azure subscription.
-When you publish from Visual Studio Code to a new function app in Azure, you can choose either a quick function app create path using defaults or an advanced path, where you have more control over the remote resources created.
+When you publish from Visual Studio Code to a new function app in Azure, you can choose either a quick function app create path that uses defaults or an advanced path, which gives you more control over the remote resources that are created.
-When you publish from Visual Studio Code, you take advantage of the [Zip deploy](functions-deployment-technologies.md#zip-deploy) technology.
+When you publish from Visual Studio Code, you take advantage of the [Zip deploy](functions-deployment-technologies.md#zip-deploy) technology.
### Quick function app create
The following steps publish your project to a new function app created with adva
1. If you're not signed in, you're prompted to **Sign in to Azure**. You can also **Create a free Azure account**. After signing in from the browser, go back to Visual Studio Code.
-1. If you have multiple subscriptions, **Select a subscription** for the function app, and then select **+ Create New Function App in Azure... _Advanced_**. This _Advanced_ option gives you more control over the resources you create in Azure.
+1. If you have multiple subscriptions, **Select a subscription** for the function app, and then select **+ Create New Function App in Azure... _Advanced_**. This _Advanced_ option gives you more control over the resources you create in Azure.
1. Following the prompts, provide this information:
To call an HTTP-triggered function from a client, you need the URL of the functi
The function URL is copied to the clipboard, along with any required keys passed by the `code` query parameter. Use an HTTP tool to submit POST requests, or a browser for GET requests to the remote function.
-When getting the URL of functions in Azure, the extension uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
+When the extension gets the URL of functions in Azure, it uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
## Republish project files
When you set up [continuous deployment](functions-continuous-deployment.md), you
## Run functions
-The Azure Functions extension lets you run individual functions, either in your project on your local development computer or in your Azure subscription.
+The Azure Functions extension lets you run individual functions. You can run functions either in your project on your local development computer or in your Azure subscription.
For HTTP trigger functions, the extension calls the HTTP endpoint. For other kinds of triggers, it calls administrator APIs to start the function. The message body of the request sent to the function depends on the type of trigger. When a trigger requires test data, you're prompted to enter data in a specific JSON format.
-### Run functions in Azure
+### Run functions in Azure
-To execute a function in Azure from Visual Studio Code.
+To execute a function in Azure from Visual Studio Code:
-1. In the command pallet, enter **Azure Functions: Execute function now** and choose your Azure subscription.
+1. In the command palette, enter **Azure Functions: Execute function now** and choose your Azure subscription.
-1. Choose your function app in Azure from the list. If you don't see your function app, make sure you're signed in to the correct subscription.
+1. Choose your function app in Azure from the list. If you don't see your function app, make sure you're signed in to the correct subscription.
-1. Choose the function you want to run from the list and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** should indicate the format of the body. If your function app has no functions, a notification error is shown with this error.
+1. Choose the function you want to run from the list and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** should indicate the format of the body (see the example body after these steps). If your function app has no functions, an error notification is shown instead.
1. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
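For example, for the **HttpExample** function created earlier in this article, which reads a `name` value from the request, a request body like the following illustrative JSON would work:

```json
{
  "name": "Azure"
}
```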
-
+ You can also run your function from the **Azure: Functions** area by right-clicking (Ctrl-clicking on Mac) the function you want to run from your function app in your Azure subscription and choosing **Execute Function Now...**.
-When running functions in Azure, the extension uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
+When you run your functions in Azure from Visual Studio Code, the extension uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
### Run functions locally
-The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the [local.settings.json file](#local-settings). To run your Functions project locally, you must meet [additional requirements](#run-local-requirements).
+The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the [local.settings.json file](#local-settings). To run your Functions project locally, you must meet [more requirements](#run-local-requirements).
#### Configure the project to run locally
For more information, see [Local settings file](#local-settings).
#### <a name="debugging-functions-locally"></a>Debug functions locally
-To debug your functions, select F5. If you haven't already downloaded [Core Tools][Azure Functions Core Tools], you're prompted to do so. When Core Tools is installed and running, output is shown in the Terminal. This is the same as running the `func host start` Core Tools command from the Terminal, but with extra build tasks and an attached debugger.
+To debug your functions, select F5. If you haven't already downloaded [Core Tools][Azure Functions Core Tools], you're prompted to do so. When Core Tools is installed and running, output is shown in the Terminal. This step is the same as running the `func host start` Core Tools command from the Terminal, but with extra build tasks and an attached debugger.
-When the project is running, you can use the **Execute Function Now...** feature of the extension to trigger your functions as you would when the project is deployed to Azure. With the project running in debug mode, breakpoints are hit in Visual Studio Code as you would expect.
+When the project is running, you can use the **Execute Function Now...** feature of the extension to trigger your functions as you would when the project is deployed to Azure. With the project running in debug mode, breakpoints are hit in Visual Studio Code as you would expect.
+1. In the command palette, enter **Azure Functions: Execute function now** and choose **Local project**.
+1. In the command pallet, enter **Azure Functions: Execute function now** and choose **Local project**.
-1. Choose the function you want to run in your project and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** should indicate the format of the body. If your function app has no functions, a notification error is shown with this error.
+1. Choose the function you want to run in your project and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** should indicate the format of the body. If your function app has no functions, an error notification is shown instead.
1. When the function runs locally and after the response is received, a notification is raised in Visual Studio Code. Information about the function execution is shown in **Terminal** panel.
-Running functions locally doesn't require using keys.
+Running functions locally doesn't require using keys.
[!INCLUDE [functions-local-settings-file](../../includes/functions-local-settings-file.md)]
The settings in the local.settings.json file in your project should be the same
The easiest way to publish the required settings to your function app in Azure is to use the **Upload settings** link that appears after you publish your project:
-![Upload application settings](./media/functions-develop-vs-code/upload-app-settings.png)
You can also publish settings by using the **Azure Functions: Upload Local Setting** command in the command palette. You can add individual settings to application settings in Azure by using the **Azure Functions: Add New Setting** command.
If the local file is encrypted, it's decrypted, published, and encrypted again.
View existing app settings in the **Azure: Functions** area by expanding your subscription, your function app, and **Application Settings**.
-![View function app settings in Visual Studio Code](./media/functions-develop-vs-code/view-app-settings.png)
### Download settings from Azure
When you [run functions locally](#run-functions-locally), log data is streamed t
When you're developing an application, it's often useful to see logging information in near-real time. You can view a stream of log files being generated by your functions. This output is an example of streaming logs for a request to an HTTP-triggered function:
-![Streaming logs output for HTTP trigger](media/functions-develop-vs-code/streaming-logs-vscode-console.png)
To learn more, see [Streaming logs](functions-monitoring.md#streaming-logs).
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
Title: Develop Azure Functions using Visual Studio description: Learn how to develop and test Azure Functions by using Azure Functions Tools for Visual Studio 2019. ms.devlang: csharp-+ Previously updated : 12/10/2020 Last updated : 05/19/2022 # Develop Azure Functions using Visual Studio
Unless otherwise noted, procedures and examples shown are for Visual Studio 2019
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] > [!NOTE]
-> In Visual Studio 2017, the Azure development workload installs Azure Functions Tools as a separate extension. When you update your Visual Studio 2017 installation, make sure that you're using the [most recent version](#check-your-tools-version) of the Azure Functions tools. The following sections show you how to check and (if needed) update your Azure Functions Tools extension in Visual Studio 2017.
+> In Visual Studio 2017, the Azure development workload installs Azure Functions Tools as a separate extension. When you update your Visual Studio 2017 installation, make sure that you're using the [most recent version](#check-your-tools-version) of the Azure Functions Tools. The following sections show you how to check and (if needed) update your Azure Functions Tools extension in Visual Studio 2017.
> > Skip these sections if you're using Visual Studio 2019.
For a full list of the bindings supported by Functions, see [Supported bindings]
## Run functions locally
-Azure Functions Core Tools lets you run Azure Functions project on your local development computer. When you press F5 to debug a Functions project the local Functions host (func.exe) is started listening on a local port (usually 7071). Any callable function endpoints are written to the output, and you can use these for testing your functions. For more information, see [Work with Azure Functions Core Tools](functions-run-local.md). You're prompted to install these tools the first time you start a function from Visual Studio.
+Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. When you press F5 to debug a Functions project, the local Functions host (func.exe) starts to listen on a local port (usually 7071). Any callable function endpoints are written to the output, and you can use these endpoints to test your functions. For more information, see [Work with Azure Functions Core Tools](functions-run-local.md). You're prompted to install these tools the first time you start a function from Visual Studio.
To start your function in Visual Studio in debug mode:
For a more detailed testing scenario using Visual Studio, see [Testing functions
## Publish to Azure
-When you publish from Visual Studio, it uses one of two deployment methods:
+When you publish from Visual Studio, it uses one of the two deployment methods:
* [Web Deploy](functions-deployment-technologies.md#web-deploy-msdeploy): Packages and deploys Windows apps to any IIS server. * [Zip Deploy with run-From-package enabled](functions-deployment-technologies.md#zip-deploy): Recommended for Azure Functions deployments.
Use the following steps to publish your project to a function app in Azure.
## Function app settings
-Because Visual Studio doesn't upload these settings automatically when you publish the project, any settings you add in the local.settings.json you must also add to the function app in Azure.
+Visual Studio doesn't upload these settings automatically when you publish the project. Any settings that you add in the local.settings.json file must also be added to the function app in Azure.
The easiest way to upload the required settings to your function app in Azure is to select the **Manage Azure App Service settings** link that appears after you successfully publish your project.
To learn more about monitoring using Application Insights, see [Monitor Azure Fu
## Testing functions
-This section describes how to create a C# function app project in Visual Studio and run and tests with [xUnit](https://github.com/xunit/xunit).
+This section describes how to create a C# function app project in Visual Studio and how to run and test it with [xUnit](https://github.com/xunit/xunit).
![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
Now that the projects are created, you can create the classes used to run the au
Each function takes an instance of [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) to handle message logging. Some tests either don't log messages or have no concern for how logging is implemented. Other tests need to evaluate messages logged to determine whether a test is passing.
-You'll create a new class named `ListLogger` which holds an internal list of messages to evaluate during a testing. To implement the required `ILogger` interface, the class needs a scope. The following class mocks a scope for the test cases to pass to the `ListLogger` class.
+You'll create a new class named `ListLogger`, which holds an internal list of messages to evaluate during testing. To implement the required `ILogger` interface, the class needs a scope. The following class mocks a scope for the test cases to pass to the `ListLogger` class.
Create a new class in *Functions.Tests* project named **NullScope.cs** and enter the following code:
The members implemented in this class are:
- **Http_trigger_should_return_string_from_member_data**: This test uses xUnit attributes to provide sample data to the HTTP function. -- **Timer_should_log_message**: This test creates an instance of `ListLogger` and passes it to a timer functions. Once the function is run, then the log is checked to ensure the expected message is present.
+- **Timer_should_log_message**: This test creates an instance of `ListLogger` and passes it to a timer function. Once the function runs, the log is checked to ensure the expected message is present.
If you want to access application settings in your tests, you can [inject](functions-dotnet-dependency-injection.md) an `IConfiguration` instance with mocked environment variable values into your function. ### Run tests
-To run the tests, navigate to the **Test Explorer** and click **Run all**.
+To run the tests, navigate to the **Test Explorer** and select **Run all**.
![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png) ### Debug tests
-To debug the tests, set a breakpoint on a test, navigate to the **Test Explorer** and click **Run > Debug Last Run**.
+To debug the tests, set a breakpoint on a test, navigate to the **Test Explorer** and select **Run > Debug Last Run**.
## Next steps For more information about the Azure Functions Core Tools, see [Work with Azure Functions Core Tools](functions-run-local.md).
-For more information about developing functions as .NET class libraries, see [Azure Functions C# developer reference](functions-dotnet-class-library.md). This article also links to examples of how to use attributes to declare the various types of bindings supported by Azure Functions.
+For more information about developing functions as .NET class libraries, see [Azure Functions C# developer reference](functions-dotnet-class-library.md). This article also links to examples of how to use attributes to declare the various types of bindings supported by Azure Functions.
azure-functions Functions Event Hub Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-hub-cosmos-db.md
az functionapp create \
--storage-account $STORAGE_ACCOUNT \ --consumption-plan-location $LOCATION \ --runtime java \
- --functions-version 2
+ --functions-version 3
``` # [Cmd](#tab/cmd)
az functionapp create ^
--storage-account %STORAGE_ACCOUNT% ^ --consumption-plan-location %LOCATION% ^ --runtime java ^
- --functions-version 2
+ --functions-version 3
```
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
The Azure Functions Elastic Premium plan is a dynamic scale hosting option for function apps. For other hosting plan options, see the [hosting plan article](functions-scale.md).
->[!IMPORTANT]
->Azure Functions runs on the Azure App Service platform. In the App Service platform, plans that host Premium plan function apps are referred to as *Elastic* Premium plans, with SKU names like `EP1`. If you choose to run your function app on a Premium plan, make sure to create a plan with an SKU name that starts with "E", such as `EP1`. App Service plan SKU names that start with "P", such as `P1V2` (Premium V2 Small plan), are actually [Dedicated hosting plans](dedicated-plan.md). Because they are Dedicated and not Elastic Premium, plans with SKU names starting with "P" won't scale dynamically and may increase your costs.
Premium plan hosting provides the following benefits to your functions:
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Title: Python developer reference for Azure Functions description: Understand how to develop functions with Python Previously updated : 11/4/2020 Last updated : 05/19/2022 ms.devlang: python-+ # Azure Functions Python developer guide
def main(req):
return f'Hello, {user}!' ```
-You can also explicitly declare the attribute types and return type in the function using Python type annotations. This helps you use the intellisense and autocomplete features provided by many Python code editors.
+You can also explicitly declare the attribute types and return type in the function using Python type annotations. Doing so helps you use the IntelliSense and autocomplete features provided by many Python code editors.
```python import azure.functions
The main project folder (<project_root>) can contain the following files:
Each function has its own code file and binding configuration file (function.json).
-When deploying your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself, which means `host.json` should be in the package root. We recommend that you maintain your tests in a folder along with other functions, in this example `tests/`. For more information, see [Unit Testing](#unit-testing).
+When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself, which means `host.json` should be in the package root. We recommend that you maintain your tests in a folder along with other functions, in this example `tests/`. For more information, see [Unit Testing](#unit-testing).
## Import behavior
from . import example #(relative)
> [!NOTE] > The *shared_code/* folder needs to contain an \_\_init\_\_.py file to mark it as a Python package when using absolute import syntax.
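As a hedged illustration of that absolute import form, the following sketch uses placeholder folder, module, and function names that are not taken from the article:

```python
# <project_root>/shared_code/my_helper.py (illustrative)
def build_greeting(name: str) -> str:
    return f'Hello, {name}!'


# <project_root>/HttpTrigger/__init__.py (illustrative)
import azure.functions as func
from shared_code import my_helper  # absolute import; shared_code/ needs an __init__.py


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Reuse logic that lives in the shared_code package.
    return func.HttpResponse(my_helper.build_greeting(req.params.get('name', 'world')))
```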
-The following \_\_app\_\_ import and beyond top-level relative import are deprecated, since it is not supported by static type checker and not supported by Python test frameworks:
+The following \_\_app\_\_ import and beyond top-level relative import are deprecated, since they aren't supported by static type checkers or by Python test frameworks:
```python from __app__.shared_code import my_first_helper_function #(deprecated __app__ import)
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.
### Log custom telemetry
-By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure), which sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
+By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). These extensions send custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
>[!NOTE] >To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1`. You also need to switch to using the Application Insights connection string by adding the [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string) setting to your [application settings](functions-how-to-use-azure-function-app-settings.md#settings), if it's not already there.
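To see the general shape of custom dependency telemetry in code, here's a minimal sketch that uses the related `opencensus-ext-azure` exporter directly rather than the worker extension described above; the package in your requirements.txt and the span name are assumptions for illustration:

```python
import logging

import azure.functions as func
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

# AzureExporter picks up APPLICATIONINSIGHTS_CONNECTION_STRING from app settings.
tracer = Tracer(exporter=AzureExporter(), sampler=ProbabilitySampler(1.0))


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Each span is exported to Application Insights as custom dependency telemetry.
    with tracer.span(name='call-downstream-service'):
        logging.info('Doing work inside a custom span.')
    return func.HttpResponse('OK')
```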
Likewise, you can set the `status_code` and `headers` for the response message i
## Web frameworks
-You can leverage WSGI and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
+You can use WSGI- and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
First, the function.json file must be updated to include a `route` in the HTTP trigger, as shown in the following example:
The host.json file must also be updated to include an HTTP `routePrefix`, as sho
} ```
-Update the Python code file `init.py`, depending on the interface used by your framework. The following example shows either an ASGI hander approach or a WSGI wrapper approach for Flask:
+Update the Python code file `__init__.py`, depending on the interface used by your framework. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask:
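As a hedged sketch of the WSGI wrapper variant for Flask (the `FlaskApp` module name and folder layout are assumptions; the article's own tabbed ASGI and WSGI samples follow):

```python
# <project_root>/HttpTrigger/__init__.py (illustrative)
import azure.functions as func
from FlaskApp import app  # assumes your Flask app object is defined in FlaskApp/__init__.py


def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    # Hand the incoming request to Flask through the WSGI wrapper.
    return func.WsgiMiddleware(app.wsgi_app).handle(req, context)
```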
# [ASGI](#tab/asgi)
Name of the function.
ID of the current function invocation. `trace_context`
-Context for distributed tracing. Please refer to [`Trace Context`](https://www.w3.org/TR/trace-context/) for more information..
+Context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/).
`retry_context`
-Context for retries to the function. Please refer to [`retry-policies`](./functions-bindings-errors.md#retry-policies-preview) for more information.
+Context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies-preview).
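To receive these values, add a `context` parameter to your function signature. A minimal sketch (the trigger type and log messages are illustrative):

```python
import logging

import azure.functions as func


def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    # The context object exposes metadata about the current execution.
    logging.info('Function name: %s', context.function_name)
    logging.info('Invocation ID: %s', context.invocation_id)
    return func.HttpResponse(f'Handled invocation {context.invocation_id}')
```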
## Global variables
-It is not guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. In order to cache the results of an expensive computation, declare it as a global variable.
+It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. In order to cache the results of an expensive computation, declare it as a global variable.
```python CACHED_DATA = None
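# A hedged continuation sketch (not the article's full sample): compute an
# expensive value once per worker process and reuse it on later invocations.
def expensive_computation():
    # Placeholder for a costly operation, for example loading a large model.
    return {"loaded": True}

def get_cached_data():
    global CACHED_DATA
    if CACHED_DATA is None:
        CACHED_DATA = expensive_computation()
    return CACHED_DATA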
Azure Functions supports the following Python versions:
| 3.x | 3.9<br/> 3.8<br/>3.7<br/>3.6 | | 2.x | 3.7<br/>3.6 |
-<sup>*</sup>Official CPython distributions
+<sup>*</sup>Official Python distributions
To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The Functions runtime version is set by the `--functions-version` option. The Python version is set when the function app is created and can't be changed.
-When running locally, the runtime uses the available Python version.
+When you run your functions locally, the runtime uses the available Python version.
### Changing Python version
-To set a Python function app to a specific language version, you need to specify the language as well as the version of the language in `LinuxFxVersion` field in site config. For example, to change Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
-
-To learn more about Azure Functions runtime support policy, please refer to this [article](./language-support-policy.md)
-
-To see the full list of supported Python versions functions apps, please refer to this [article](./supported-languages.md)
+To set a Python function app to a specific language version, you need to specify the language and the version of the language in the `linuxFxVersion` field in the site config. For example, to change a Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
+To learn more about the Azure Functions runtime support policy, see the [language support policy article](./language-support-policy.md).
+To see the full list of Python versions supported by function apps, see the [supported languages article](./supported-languages.md).
# [Azure CLI](#tab/azurecli-linux)
az functionapp config set --name <FUNCTION_APP> \
--linux-fx-version <LINUX_FX_VERSION> ```
-Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<LINUX_FX_VERSION>` with the Python version you want to use, prefixed by `python|` e.g. `python|3.9`
+Replace `<FUNCTION_APP>` with the name of your function app, `<RESOURCE_GROUP>` with the name of the resource group for your function app, and `<LINUX_FX_VERSION>` with the Python version you want to use, prefixed by `python|`, for example `python|3.9`.
You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az-login) to sign in.
pip install -r requirements.txt
## Publishing to Azure
-When you're ready to publish, make sure that all your publicly available dependencies are listed in the requirements.txt file, which is located at the root of your project directory.
+When you're ready to publish, make sure that all your publicly available dependencies are listed in the requirements.txt file. You can locate this file at the root of your project directory.
-Project files and folders that are excluded from publishing, including the virtual environment folder, are listed in the .funcignore file.
+Project files and folders that are excluded from publishing, including the virtual environment folder, are listed in the .funcignore file in the root directory of your project.
There are three build actions supported for publishing your Python project to Azure: remote build, local build, and builds using custom dependencies.
You can also use Azure Pipelines to build your dependencies and publish using co
### Remote build
-When using remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use remote build when developing Python apps on Windows. If your project has custom dependencies, you can [use remote build with extra index URL](#remote-build-with-extra-index-url).
+When you use remote build, dependencies are restored on the server, and native dependencies match the production environment. This results in a smaller deployment package to upload. Use remote build when developing Python apps on Windows. If your project has custom dependencies, you can [use remote build with extra index URL](#remote-build-with-extra-index-url).
Dependencies are obtained remotely based on the contents of the requirements.txt file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, the Azure Functions Core Tools requests a remote build when you use the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish your Python project to Azure.
func azure functionapp publish <APP_NAME> --build local
Remember to replace `<APP_NAME>` with the name of your function app in Azure.
-Using the `--build local` option, project dependencies are read from the requirements.txt file and those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in a larger deployment package being uploaded to Azure. If for some reason, dependencies in your requirements.txt file can't be acquired by Core Tools, you must use the custom dependencies option for publishing.
+When you use the `--build local` option, project dependencies are read from the requirements.txt file and those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in a larger deployment package being uploaded to Azure. If, for some reason, Core Tools can't acquire the dependencies in your requirements.txt file, you must use the custom dependencies option for publishing.
We don't recommend using local builds when developing locally on Windows.
If your project uses packages not publicly available to our tools, you can make
pip install --target="<PROJECT_DIR>/.python_packages/lib/site-packages" -r requirements.txt ```
-When using custom dependencies, you should use the `--no-build` publishing option, since you have already installed the dependencies into the project folder.
+When using custom dependencies, you should use the `--no-build` publishing option, since you've already installed the dependencies into the project folder.
```command func azure functionapp publish <APP_NAME> --no-build
Remember to replace `<APP_NAME>` with the name of your function app in Azure.
## Unit Testing
-Functions written in Python can be tested like other Python code using standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the `azure.functions` package. Since the [`azure.functions`](https://pypi.org/project/azure-functions/) package is not immediately available, be sure to install it via your `requirements.txt` file as described in the [package management](#package-management) section above.
+Functions written in Python can be tested like other Python code using standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the `azure.functions` package. Since the [`azure.functions`](https://pypi.org/project/azure-functions/) package isn't immediately available, be sure to install it via your `requirements.txt` file as described in the [package management](#package-management) section above.
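For orientation, here's a hedged, generic sketch of such a test; the module, function, and parameter names are placeholders rather than the article's sample (the *my_second_function* example is introduced next):

```python
import unittest

import azure.functions as func

from HttpTrigger import main  # the function under test; folder name is illustrative


class TestHttpTrigger(unittest.TestCase):
    def test_http_trigger_returns_200(self):
        # Build a mock HTTP request from the azure.functions classes.
        req = func.HttpRequest(
            method='GET',
            body=None,
            url='/api/HttpTrigger',
            params={'name': 'Test'},
        )
        resp = main(req)
        self.assertEqual(resp.status_code, 200)
```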
Take *my_second_function* as an example, following is a mock test of an HTTP triggered function:
from os import listdir
filesDirListInTemp = listdir(tempFilePath) ```
-We recommend that you maintain your tests in a folder separate from the project folder. This keeps you from deploying test code with your app.
+We recommend that you maintain your tests in a folder separate from the project folder. This action keeps you from deploying test code with your app.
## Preinstalled libraries
-There are a few libraries come with the Python Functions runtime.
+There are a few libraries that come with the Python Functions runtime.
### Python Standard Library
-The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On the Unix-based systems, they are provided by package collections.
+The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On the Unix-based systems, they're provided by package collections.
To view the full details of the list of these libraries, see the links below:
Extensions are imported in your function code much like a standard Python librar
Review the information for a given extension to learn more about the scope in which the extension runs.
-Extensions implement a Python worker extension interface that lets the Python worker process call into the extension code during the function execution lifecycle. To learn more, see [Creating extensions](#creating-extensions).
+Extensions implement a Python worker extension interface. This interface lets the Python worker process call into the extension code during the function execution lifecycle. To learn more, see [Creating extensions](#creating-extensions).
### Using extensions
You can use a Python worker extension library in your Python functions by follow
1. Add the extension package in the requirements.txt file for your project. 1. Install the library into your app. 1. Add the application setting `PYTHON_ENABLE_WORKER_EXTENSIONS`:
- + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file)
+ + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file).
+ Azure: add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings). 1. Import the extension module into your function trigger. 1. Configure the extension instance, if needed. Configuration requirements should be called-out in the extension's documentation.
function-level-extension==1.0.0
``` ```python+ # <project_root>/Trigger/__init__.py from function_level_extension import FuncExtension
def main(req, context):
### Creating extensions
-Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
+Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
To learn how to create, package, publish, and consume a Python worker extension package, see [Develop Python worker extensions for Azure Functions](develop-python-worker-extensions.md).
By default, a host instance for Python can process only one function invocation
## <a name="shared-memory"></a>Shared memory (preview)
-To improve throughput, Functions lets your out-of-process Python language worker share memory with the Functions host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
+To improve throughput, Functions lets your out-of-process Python language worker share memory with the Functions host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
For example, you might enable shared memory to reduce bottlenecks when using Blob storage bindings to transfer payloads larger than 1 MB. This functionality is available only for function apps running in Premium and Dedicated (App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory). -
+
## Known issues and FAQ Following is a list of troubleshooting guides for common issues: * [ModuleNotFoundError and ImportError](recover-python-functions.md#troubleshoot-modulenotfounderror)
-* [Cannot import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc)
+* [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc)
All known issues and feature requests are tracked using [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
azure-monitor Profiler Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-containers.md
ms.contributor: charles.weininger Previously updated : 04/25/2022 Last updated : 05/26/2022 # Profile live Azure containers with Application Insights
In this article, you'll learn the various ways you can:
} ```
+1. Enable Application Insights and Profiler in `Startup.cs`:
+
+ ```csharp
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddApplicationInsightsTelemetry(); // Add this line of code to enable Application Insights.
+ services.AddServiceProfiler(); // Add this line of code to enable Profiler.
+ services.AddControllersWithViews();
+ }
+ ```
+ ## Pull the latest ASP.NET Core build/runtime images 1. Navigate to the .NET Core 6.0 example directory.
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-overview.md
ms.contributor: charles.weininger Previously updated : 05/11/2022 Last updated : 05/26/2022
For these metrics, you can get a value of greater than 100% by consuming multipl
## Limitations
-The default data retention period is five days. The maximum data ingested per day is 10 GB.
+The default data retention period is five days.
There are no charges for using the Profiler service. To use it, your web app must be hosted in the basic tier of the Web Apps feature of Azure App Service, at minimum.
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler.md
To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps
:::image type="content" source="./media/profiler/enable-profiler.png" alt-text="Screenshot of enabling Profiler on your app.":::
-## Enable Profiler manually
+## Enable Profiler using app settings
If your Application Insights resource is in a different subscription from your App Service, you'll need to enable Profiler manually by creating app settings for your Azure App Service. You can automate the creation of these settings using a template or other means. The settings needed to enable the profiler:
azure-monitor Container Insights Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-analyze.md
For information about how to enable Container insights, see [Onboard Container i
Azure Monitor provides a multi-cluster view that shows the health status of all monitored Kubernetes clusters running Linux and Windows Server 2019 deployed across resource groups in your subscriptions. It shows clusters discovered across all environments that aren't monitored by the solution. You can immediately understand cluster health, and from here, you can drill down to the node and controller performance page or navigate to see performance charts for the cluster. For AKS clusters that were discovered and identified as unmonitored, you can enable monitoring for them at any time.
-The main differences in monitoring a Windows Server cluster with Container insights compared to a Linux cluster are described [here](container-insights-overview.md#what-does-container-insights-provide) in the overview article.
+The main differences in monitoring a Windows Server cluster with Container insights compared to a Linux cluster are described in [Features of Container insights](container-insights-overview.md#features-of-container-insights) in the overview article.
-## Sign in to the Azure portal
-
-Sign in to the [Azure portal](https://portal.azure.com).
## Multi-cluster view from Azure Monitor
azure-monitor Container Insights Azure Redhat Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-azure-redhat-setup.md
- Title: Configure Azure Red Hat OpenShift v3.x with Container insights | Microsoft Docs
-description: This article describes how to configure monitoring of a Kubernetes cluster with Azure Monitor hosted on Azure Red Hat OpenShift version 3 and higher.
- Previously updated : 06/30/2020--
-# Configure Azure Red Hat OpenShift v3 with Container insights
-
->[!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired June 2022.
->
-> As of October 2020 you will no longer be able to create new 3.11 clusters.
-> Existing 3.11 clusters will continue to operate until June 2022 but will no be longer supported after that date.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](../../openshift/tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:aro-feedback@microsoft.com).
-
-Container insights provides rich monitoring experience for the Azure Kubernetes Service (AKS) and AKS Engine clusters. This article describes how to enable monitoring of Kubernetes clusters hosted on [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) version 3 and latest supported version of version 3, to achieve a similar monitoring experience.
-
->[!NOTE]
->Support for Azure Red Hat OpenShift is a feature in public preview at this time.
->
-
-Container insights can be enabled for new, or one or more existing deployments of Azure Red Hat OpenShift using the following supported methods:
--- For an existing cluster from the Azure portal or using Azure Resource Manager template.-- For a new cluster using Azure Resource Manager template, or while creating a new cluster using the [Azure CLI](/cli/azure/openshift#az-openshift-create).-
-## Supported and unsupported features
-
-Container insights supports monitoring Azure Red Hat OpenShift as described in the [Overview](container-insights-overview.md) article, except for the following features:
--- Live Data (preview)-- [Collect metrics](container-insights-update-metrics.md) from cluster nodes and pods and storing them in the Azure Monitor metrics database-
-## Prerequisites
--- A [Log Analytics workspace](../logs/workspace-design.md).-
- Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). To create your own workspace, it can be created through [Azure Resource Manager](../logs/resource-manager-workspace.md), through [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or in the [Azure portal](../logs/quick-create-workspace.md).
--- To enable and access the features in Container insights, at a minimum you need to be a member of the Azure *Contributor* role in the Azure subscription, and a member of the [*Log Analytics Contributor*](../logs/manage-access.md#azure-rbac) role of the Log Analytics workspace configured with Container insights.--- To view the monitoring data, you are a member of the [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role permission with the Log Analytics workspace configured with Container insights.-
-## Identify your Log Analytics workspace ID
-
- To integrate with an existing Log Analytics workspace, start by identifying the full resource ID of your Log Analytics workspace. The resource ID of the workspace is required for the parameter `workspaceResourceId` when you enable monitoring using the Azure Resource Manager template method.
-
-1. List all the subscriptions that you have access to by running the following command:
-
- ```azurecli
- az account list --all -o table
- ```
-
- The output will look like the following:
-
- ```azurecli
- Name CloudName SubscriptionId State IsDefault
- -- - --
- Microsoft Azure AzureCloud 0fb60ef2-03cc-4290-b595-e71108e8f4ce Enabled True
- ```
-
-1. Copy the value for **SubscriptionId**.
-
-1. Switch to the subscription that hosts the Log Analytics workspace by running the following command:
-
- ```azurecli
- az account set -s <subscriptionId of the workspace>
- ```
-
-1. Display the list of workspaces in your subscriptions in the default JSON format by running the following command:
-
- ```
- az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json
- ```
-
-1. In the output, find the workspace name, and then copy the full resource ID of that Log Analytics workspace under the field **ID**.
-
-## Enable for a new cluster using an Azure Resource Manager template
-
-Perform the following steps to deploy an Azure Red Hat OpenShift cluster with monitoring enabled. Before proceeding, review the tutorial [Create an Azure Red Hat OpenShift cluster](../../openshift/tutorial-create-cluster.md) to understand the dependencies that you need to configure so your environment is set up correctly.
-
-This method includes two JSON templates. One template specifies the configuration to deploy the cluster with monitoring enabled, and the other contains parameter values that you configure to specify the following:
--- The Azure Red Hat OpenShift cluster resource ID.--- The resource group the cluster is deployed in.--- [Azure Active Directory tenant ID](../../openshift/howto-create-tenant.md#create-a-new-azure-ad-tenant) noted after performing the steps to create one or one already created.--- [Azure Active Directory client application ID](../../openshift/howto-aad-app-configuration.md#create-an-azure-ad-app-registration) noted after performing the steps to create one or one already created.--- [Azure Active Directory Client secret](../../openshift/howto-aad-app-configuration.md#create-a-client-secret) noted after performing the steps to create one or one already created.--- [Azure AD security group](../../openshift/howto-aad-app-configuration.md#create-an-azure-ad-security-group) noted after performing the steps to create one or one already created.--- Resource ID of an existing Log Analytics workspace. See [Identify your Log Analytics workspace ID](#identify-your-log-analytics-workspace-id) to learn how to get this information.--- The number of master nodes to create in the cluster.--- The number of compute nodes in the agent pool profile.--- The number of infrastructure nodes in the agent pool profile.-
-If you are unfamiliar with the concept of deploying resources by using a template, see:
--- [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)--- [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)-
-If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.65 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-1. Download and save to a local folder, the Azure Resource Manager template and parameter file, to create a cluster with the monitoring add-on using the following commands:
-
- `curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/scripts/onboarding/aro/enable_monitoring_to_new_cluster/newClusterWithMonitoring.json`
-
- `curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/scripts/onboarding/aro/enable_monitoring_to_new_cluster/newClusterWithMonitoringParam.json`
-
-2. Sign in to Azure
-
- ```azurecli
- az login
- ```
-
- If you have access to multiple subscriptions, run `az account set -s {subscription ID}` replacing `{subscription ID}` with the subscription you want to use.
-
-3. Create a resource group for your cluster if you don't already have one. For a list of Azure regions that supports OpenShift on Azure, see [Supported Regions](../../openshift/supported-resources.md#azure-regions).
-
- ```azurecli
- az group create -g <clusterResourceGroup> -l <location>
- ```
-
-4. Edit the JSON parameter file **newClusterWithMonitoringParam.json** and update the following values:
-
- - *location*
- - *clusterName*
- - *aadTenantId*
- - *aadClientId*
- - *aadClientSecret*
- - *aadCustomerAdminGroupId*
- - *workspaceResourceId*
- - *masterNodeCount*
- - *computeNodeCount*
- - *infraNodeCount*
-
-5. The following step deploys the cluster with monitoring enabled by using the Azure CLI.
-
- ```azurecli
- az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./newClusterWithMonitoring.json --parameters @./newClusterWithMonitoringParam.json
- ```
-
- The output resembles the following:
-
- ```output
- provisioningState : Succeeded
- ```
-
-## Enable for an existing cluster
-
-Perform the following steps to enable monitoring of an Azure Red Hat OpenShift cluster deployed in Azure. You can accomplish this from the Azure portal or using the provided templates.
-
-### From the Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. On the Azure portal menu or from the Home page, select **Azure Monitor**. Under the **Insights** section, select **Containers**.
-
-3. On the **Monitor - containers** page, select **Non-monitored clusters**.
-
-4. From the list of non-monitored clusters, find the cluster in the list and click **Enable**. You can identify the results in the list by looking for the value **ARO** under the column **CLUSTER TYPE**.
-
-5. On the **Onboarding to Container insights** page, if you have an existing Log Analytics workspace in the same subscription as the cluster, select it from the drop-down list.
- The list preselects the default workspace and location that the cluster is deployed to in the subscription.
-
- ![Enable monitoring for non-monitored clusters](./media/container-insights-onboard/kubernetes-onboard-brownfield-01.png)
-
- >[!NOTE]
- >If you want to create a new Log Analytics workspace for storing the monitoring data from the cluster, follow the instructions in [Create a Log Analytics workspace](../logs/quick-create-workspace.md). Be sure to create the workspace in the same subscription that the RedHat OpenShift cluster is deployed to.
-
-After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
-
-### Enable using an Azure Resource Manager template
-
-This method includes two JSON templates. One template specifies the configuration to enable monitoring, and the other contains parameter values that you configure to specify the following:
--- The Azure RedHat OpenShift cluster resource ID.--- The resource group the cluster is deployed in.--- A Log Analytics workspace. See [Identify your Log Analytics workspace ID](#identify-your-log-analytics-workspace-id) to learn how to get this information.-
-If you are unfamiliar with the concept of deploying resources by using a template, see:
--- [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)--- [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)-
-If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.65 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-1. Download the template and parameter file to update your cluster with the monitoring add-on using the following commands:
-
- `curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/scripts/onboarding/aro/enable_monitoring_to_existing_cluster/existingClusterOnboarding.json`
-
- `curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/scripts/onboarding/aro/enable_monitoring_to_existing_cluster/existingClusterParam.json`
-
-2. Sign in to Azure
-
- ```azurecli
- az login
- ```
-
- If you have access to multiple subscriptions, run `az account set -s {subscription ID}` replacing `{subscription ID}` with the subscription you want to use.
-
-3. Specify the subscription of the Azure RedHat OpenShift cluster.
-
- ```azurecli
- az account set --subscription "Subscription Name"
- ```
-
-4. Run the following command to identify the cluster location and resource ID:
-
- ```azurecli
- az openshift show -g <clusterResourceGroup> -n <clusterName>
- ```
-
-5. Edit the JSON parameter file **existingClusterParam.json** and update the values *aroResourceId* and *aroResourceLocation*. The value for **workspaceResourceId** is the full resource ID of your Log Analytics workspace, which includes the workspace name.
-
-6. To deploy with Azure CLI, run the following commands:
-
- ```azurecli
- az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./ExistingClusterOnboarding.json --parameters @./existingClusterParam.json
- ```
-
- The output resembles the following:
-
- ```output
- provisioningState : Succeeded
- ```
-
-## Next steps
--- With monitoring enabled to collect health and resource utilization of your RedHat OpenShift cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.--- By default, the containerized agent collects the stdout/ stderr container logs of all the containers running in all the namespaces except kube-system. To configure container log collection specific to particular namespace or namespaces, review [Container Insights agent configuration](container-insights-agent-config.md) to configure desired data collection settings to your ConfigMap configurations file.--- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md)--- To learn how to stop monitoring your cluster with Container insights, see [How to Stop Monitoring Your Azure Red Hat OpenShift cluster](./container-insights-optout-openshift-v3.md).
azure-monitor Container Insights Azure Redhat4 Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-azure-redhat4-setup.md
- Title: Configure Azure Red Hat OpenShift v4.x with Container insights | Microsoft Docs
-description: This article describes how to configure monitoring for a Kubernetes cluster with Azure Monitor that's hosted on Azure Red Hat OpenShift version 4 or later.
- Previously updated : 03/05/2021--
-# Configure Azure Red Hat OpenShift v4.x with Container insights
-
-Container insights provides a rich monitoring experience for Azure Kubernetes Service (AKS) and AKS engine clusters. This article describes how to achieve a similar monitoring experience by enabling monitoring for Kubernetes clusters that are hosted on [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) version 4.x.
-
->[!NOTE]
-> We are phasing out Container Insights support for Azure Red Hat OpenShift v4.x by May 2022. We recommend customers to migrate Container Insights on Azure Arc enabled Kubernetes, which offers an upgraded experience and 1-click onboarding. For more information, please visit our [documentation](./container-insights-enable-arc-enabled-clusters.md)
->
--
->[!NOTE]
->Support for Azure Red Hat OpenShift is a feature in public preview at this time.
->
-
-You can enable Container insights for one or more existing deployments of Azure Red Hat OpenShift v4.x by using the supported methods described in this article.
-
-For an existing cluster, run this [Bash script in the Azure CLI](/cli/azure/openshift#az-openshift-create&preserve-view=true).
-
-## Supported and unsupported features
-
-Container insights supports monitoring Azure Red Hat OpenShift v4.x as described in [Container insights overview](container-insights-overview.md), except for the following features:
--- Live Data (preview)-- [Collecting metrics](container-insights-update-metrics.md) from cluster nodes and pods and storing them in the Azure Monitor metrics database-
-## Prerequisites
--- The Azure CLI version 2.0.72 or later --- The [Helm 3](https://helm.sh/docs/intro/install/) CLI tool--- Latest version of [OpenShift CLI](https://docs.openshift.com/container-platform/4.7/cli_reference/openshift_cli/getting-started-cli.html)--- [Bash version 4](https://www.gnu.org/software/bash/)--- The [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool--- A [Log Analytics workspace](../logs/workspace-design.md).-
- Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). To create your own workspace, it can be created through [Azure Resource Manager](../logs/resource-manager-workspace.md), through [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or in the [Azure portal](../logs/quick-create-workspace.md).
--- To enable and access the features in Container insights, you need to have, at minimum, an Azure *Contributor* role in the Azure subscription and a [*Log Analytics Contributor*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.--- To view the monitoring data, you need to have [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.-
-## Enable monitoring for an existing cluster
-
-To enable monitoring for an Azure Red Hat OpenShift version 4 or later cluster that's deployed in Azure by using the provided Bash script, do the following:
-
-1. Sign in to Azure by running the following command:
-
- ```azurecli
- az login
- ```
-
-1. Download and save to a local folder the script that configures your cluster with the monitoring add-in by running the following command:
-
- `curl -o enable-monitoring.sh -L https://aka.ms/enable-monitoring-bash-script`
-
-1. Connect to ARO v4 cluster using the instructions in [Tutorial: Connect to an Azure Red Hat OpenShift 4 cluster](../../openshift/tutorial-connect-cluster.md).
--
-### Integrate with an existing workspace
-
-In this section, you enable monitoring of your cluster using the Bash script you downloaded earlier. To integrate with an existing Log Analytics workspace, start by identifying the full resource ID of your Log Analytics workspace that's required for the `logAnalyticsWorkspaceResourceId` parameter, and then run the command to enable the monitoring add-in against the specified workspace.
-
-If you don't have a workspace to specify, you can skip to the [Integrate with the default workspace](#integrate-with-the-default-workspace) section and let the script create a new workspace for you.
-
-1. List all the subscriptions that you have access to by running the following command:
-
- ```azurecli
- az account list --all -o table
- ```
-
- The output will look like the following:
-
- ```azurecli
- Name CloudName SubscriptionId State IsDefault
- -- - --
- Microsoft Azure AzureCloud 0fb60ef2-03cc-4290-b595-e71108e8f4ce Enabled True
- ```
-
-1. Copy the value for **SubscriptionId**.
-
-1. Switch to the subscription that hosts the Log Analytics workspace by running the following command:
-
- ```azurecli
- az account set -s <subscriptionId of the workspace>
- ```
-
-1. Display the list of workspaces in your subscriptions in the default JSON format by running the following command:
-
- ```
- az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json
- ```
-
-1. In the output, find the workspace name, and then copy the full resource ID of that Log Analytics workspace under the field **ID**.
-
-1. To enable monitoring, run the following command. Replace the values for the `azureAroV4ClusterResourceId` and `logAnalyticsWorkspaceResourceId` parameters.
-
- ```bash
- export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>"
- export logAnalyticsWorkspaceResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>"
- ```
-
- Here is the command you must run once you have populated the variables with Export commands:
-
- `bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId --workspace-id $logAnalyticsWorkspaceResourceId`
-
-After you've enabled monitoring, it might take about 15 minutes before you can view the health metrics for the cluster.
-
-### Integrate with the default workspace
-
-In this section, you enable monitoring for your Azure Red Hat OpenShift v4.x cluster by using the Bash script that you downloaded.
-
-In this example, you're not required to pre-create or specify an existing workspace. This command simplifies the process for you by creating a default workspace in the default resource group of the cluster subscription, if one doesn't already exist in the region.
-
-The default workspace that's created is in the format of *DefaultWorkspace-\<GUID>-\<Region>*.
-
-Replace the value for the `azureAroV4ClusterResourceId` parameter.
-
-```bash
-export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>"
-```
-
-For example:
-
-`bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId
-
-After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
-
-### Enable monitoring from the Azure portal
-
-The multi-cluster view in Container insights highlights your Azure Red Hat OpenShift clusters that don't have monitoring enabled under the **Unmonitored clusters** tab. The **Enable** option next to your cluster doesn't initiate onboarding of monitoring from the portal. You're redirected to this article to enable monitoring manually by following the steps that were outlined earlier in this article.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. On the left pane or from the home page, select **Azure Monitor**.
-
-1. In the **Insights** section, select **Containers**.
-
-1. On the **Monitor - containers** page, select **Unmonitored clusters**.
-
-1. In the list of non-monitored clusters, select the cluster, and then select **Enable**.
-
- You can identify the results in the list by looking for the **ARO** value in the **Cluster Type** column. After you select **Enable**, you're redirected to this article.
-
-## Next steps
--- Now that you've enabled monitoring to collect health and resource utilization of your RedHat OpenShift version 4.x cluster and the workloads that are running on them, learn [how to use](container-insights-analyze.md) Container insights.--- By default, the containerized agent collects the *stdout* and *stderr* container logs of all the containers that are running in all the namespaces except kube-system. To configure a container log collection that's specific to a particular namespace or namespaces, review [Container Insights agent configuration](container-insights-agent-config.md) to configure the data collection settings you want for your *ConfigMap* configuration file.--- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md).--- To learn how to stop monitoring your cluster by using Container insights, see [How to stop monitoring your Azure Red Hat OpenShift cluster](./container-insights-optout-openshift-v3.md).
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
Last updated 05/29/2020
This article provides pricing guidance for Container insights to help you understand the following:
-* How to estimate costs up-front before you enable this Insight
-
+* How to estimate costs up-front before you enable Container Insights.
* How to measure costs after Container insights has been enabled for one or more containers- * How to control the collection of data and make cost reductions Azure Monitor Logs collects, indexes, and stores data generated by your Kubernetes cluster.
The Azure Monitor pricing model is primarily based on the amount of data ingeste
The following is a summary of what types of data are collected from a Kubernetes cluster with Container insights that influences cost and can be customized based on your usage: - Stdout, stderr container logs from every monitored container in every Kubernetes namespace in the cluster- - Container environment variables from every monitored container in the cluster- - Completed Kubernetes jobs/pods in the cluster that does not require monitoring- - Active scraping of Prometheus metrics- - [Diagnostic log collection](../../aks/monitor-aks.md#configure-monitoring) of Kubernetes master node logs in your AKS cluster to analyze log data generated by master components such as the *kube-apiserver* and *kube-controller-manager*. ## What is collected from Kubernetes clusters
azure-monitor Container Insights Enable Aks Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks-policy.md
Last updated 02/04/2021
# Enable AKS monitoring addon using Azure Policy
-This article describes how to enable AKS Monitoring Addon using Azure Custom Policy. Monitoring Addon Custom Policy can be assigned either at subscription or resource group scope. If Azure Log Analytics workspace and AKS cluster are in different subscriptions then the managed identity used by the policy assignment has to have the required role permissions on both the subscriptions or least on the resource of the Log Analytics workspace. Similarly, if the policy is scoped to the resource group, then the managed identity should have the required role permissions on the Log Analytics workspace if the workspace not in the selected resource group scope.
+This article describes how to enable AKS Monitoring Addon using Azure Custom Policy.
+## Permissions required
The Monitoring Addon requires the following roles on the managed identity used by Azure Policy: - [azure-kubernetes-service-contributor-role](../../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role) - [log-analytics-contributor](../../role-based-access-control/built-in-roles.md#log-analytics-contributor)
+Monitoring Addon Custom Policy can be assigned at either the subscription or resource group scope. If the Log Analytics workspace and AKS cluster are in different subscriptions, then the managed identity used by the policy assignment must have the required role permissions on both subscriptions, or at least on the Log Analytics workspace resource. Similarly, if the policy is scoped to the resource group, then the managed identity should have the required role permissions on the Log Analytics workspace if the workspace is not in the selected resource group scope.
++ ## Create and assign policy definition using Azure portal ### Create policy definition
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
Title: "Monitor Azure Arc-enabled Kubernetes clusters" Previously updated : 04/05/2021
+ Title: Monitor Azure Arc-enabled Kubernetes clusters
Last updated : 05/24/2022
-description: "Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor"
+description: Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor.
# Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters
description: "Collect metrics and logs of Azure Arc-enabled Kubernetes clusters
## Prerequisites -- You've met the pre-requisites listed under the [generic cluster extensions documentation](../../azure-arc/kubernetes/extensions.md#prerequisites).-- A Log Analytics workspace: Azure Monitor Container Insights supports a Log Analytics workspace in the regions listed under Azure [products by region page](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md), or [Azure portal](../logs/quick-create-workspace.md).-- You need to have [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then [Log Analytics Contributor](../logs/manage-access.md#azure-rbac) role assignment is needed on the Log Analytics workspace.
+- Pre-requisites listed under the [generic cluster extensions documentation](../../azure-arc/kubernetes/extensions.md#prerequisites).
+- Log Analytics workspace. Azure Monitor Container Insights supports a Log Analytics workspace in the regions listed under Azure [products by region page](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace using [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md), or [Azure portal](../logs/quick-create-workspace.md).
+- [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then [Log Analytics Contributor](../logs/manage-access.md#azure-rbac) role assignment is needed on the Log Analytics workspace.
- To view the monitoring data, you need to have [Log Analytics Reader](../logs/manage-access.md#azure-rbac) role assignment on the Log Analytics workspace. - The following endpoints need to be enabled for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
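As a sketch with placeholder names, you can create the workspace and grant the cross-subscription role assignment with the Azure CLI:

```azurecli
# Create a Log Analytics workspace in a supported region (names and region are placeholders).
az monitor log-analytics workspace create \
  --resource-group MyWorkspaceRG \
  --workspace-name MyWorkspace \
  --location eastus

# If the workspace is in a different subscription than the cluster, grant
# Log Analytics Contributor on the workspace to the identity enabling monitoring.
az role assignment create \
  --assignee <user-or-principal-id> \
  --role "Log Analytics Contributor" \
  --scope $(az monitor log-analytics workspace show \
      --resource-group MyWorkspaceRG --workspace-name MyWorkspace --query id -o tsv)
```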
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
Title: Monitor an Azure Kubernetes Service (AKS) cluster deployed | Microsoft Docs description: Learn how to enable monitoring of an Azure Kubernetes Service (AKS) cluster with Container insights already deployed in your subscription. Previously updated : 09/12/2019 Last updated : 05/24/2022 # Enable monitoring of Azure Kubernetes Service (AKS) cluster already deployed- This article describes how to set up Container insights to monitor managed Kubernetes cluster hosted on [Azure Kubernetes Service](../../aks/index.yml) that have already been deployed in your subscription.
-You can enable monitoring of an AKS cluster that's already deployed using one of the supported methods:
-
-* Azure CLI
-* [Terraform](#enable-using-terraform)
-* [From Azure Monitor](#enable-from-azure-monitor-in-the-portal) or [directly from the AKS cluster](#enable-directly-from-aks-cluster-in-the-portal) in the Azure portal
-* With the [provided Azure Resource Manager template](#enable-using-an-azure-resource-manager-template) by using the Azure PowerShell cmdlet `New-AzResourceGroupDeployment` or with Azure CLI.
- If you're connecting an existing AKS cluster to an Azure Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription in which the Log Analytics workspace was created. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
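As a sketch with a placeholder subscription ID, the resource provider can be registered with the Azure CLI:

```azurecli
# Register the Microsoft.ContainerService resource provider in the subscription
# that contains the Log Analytics workspace.
az account set --subscription <workspace-subscription-id>
az provider register --namespace Microsoft.ContainerService

# Optionally check the registration state.
az provider show --namespace Microsoft.ContainerService --query registrationState -o tsv
```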
-## Sign in to the Azure portal
-
-Sign in to the [Azure portal](https://portal.azure.com).
- ## Enable using Azure CLI The following step enables monitoring of your AKS cluster using Azure CLI. In this example, you are not required to pre-create or specify an existing workspace. This command simplifies the process for you by creating a default workspace in the default resource group of the AKS cluster subscription if one does not already exist in the region. The default workspace created resembles the format of *DefaultWorkspace-\<GUID>-\<Region>*.
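A minimal sketch of that command, with placeholder cluster and resource group names:

```azurecli
# Enable the monitoring addon on an existing AKS cluster; a default Log Analytics
# workspace is created if one doesn't already exist in the region.
az aks enable-addons --addons monitoring --name MyAKSCluster --resource-group MyResourceGroup
```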
azure-monitor Container Insights Enable New Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-new-cluster.md
Title: Monitor a new Azure Kubernetes Service (AKS) cluster | Microsoft Docs description: Learn how to enable monitoring for a new Azure Kubernetes Service (AKS) cluster with Container insights subscription. Previously updated : 04/25/2019 Last updated : 05/24/2022 ms.devlang: azurecli
ms.devlang: azurecli
This article describes how to set up Container insights to monitor managed Kubernetes cluster hosted on [Azure Kubernetes Service](../../aks/index.yml) that you are preparing to deploy in your subscription.
-You can enable monitoring of an AKS cluster using one of the supported methods:
-
-* Azure CLI
-* Terraform
## Enable using Azure CLI
To enable monitoring of a new AKS cluster created with Azure CLI, follow the ste
## Enable using Terraform
-If you are [deploying a new AKS cluster using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://www.terraform.io/docs/providers/azurerm/r/log_analytics_workspace.html) if you do not chose to specify an existing one.
+If you are [deploying a new AKS cluster using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you do not choose to specify an existing one.
>[!NOTE] >If you choose to use Terraform, you must be running the Terraform Azure RM Provider version 1.17.0 or above.
-To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://www.terraform.io/docs/providers/azurerm/r/log_analytics_solution.html) and complete the profile by including the [**addon_profile**](https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html#addon_profile) and specify **oms_agent**.
+To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) and complete the profile by including the [**addon_profile**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specify **oms_agent**.
After you've enabled monitoring and all configuration tasks are completed successfully, you can monitor the performance of your cluster in either of two ways:
azure-monitor Container Insights Gpu Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-gpu-monitoring.md
Title: Configure GPU monitoring with Container insights | Microsoft Docs
+ Title: Configure GPU monitoring with Container insights
description: This article describes how you can configure monitoring Kubernetes clusters with NVIDIA and AMD GPU enabled nodes with Container insights. Previously updated : 03/27/2020 Last updated : 05/24/2022 # Configure GPU monitoring with Container insights
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
The following configurations are officially supported with Container insights. I
Before you start, make sure that you have the following: -- A [Log Analytics workspace](../logs/workspace-design.md).-
- Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). To create your own workspace, it can be created through [Azure Resource Manager](../logs/resource-manager-workspace.md), through [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or in the [Azure portal](../logs/quick-create-workspace.md).
+- [Log Analytics workspace](../logs/design-logs-deployment.md). Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or the [Azure portal](../logs/quick-create-workspace.md).
>[!NOTE] >Enabling monitoring of multiple clusters with the same cluster name to the same Log Analytics workspace is not supported. Cluster names must be unique.
azure-monitor Container Insights Livedata Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-metrics.md
Title: View metrics in real-time with Container insights | Microsoft Docs
+ Title: View metrics in real-time with Container insights
description: This article describes the real-time view of metrics without using kubectl with Container insights. Previously updated : 10/15/2019 Last updated : 05/24/2022
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
Title: View Live Data with Container insights | Microsoft Docs
+ Title: View Live Data with Container insights
description: This article describes the real-time view of Kubernetes logs, events, and pod metrics without using kubectl in Container insights. Previously updated : 03/04/2021 Last updated : 05/24/2022
Container insights includes the Live Data feature, which is an advanced diagnost
This article provides a detailed overview and helps you understand how to use this feature.
-For help setting up or troubleshooting the Live Data feature, review our [setup guide](container-insights-livedata-setup.md). This feature directly access the Kubernetes API, and additional information about the authentication model can be found [here](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
+For help setting up or troubleshooting the Live Data feature, review our [setup guide](container-insights-livedata-setup.md). This feature directly accesses the Kubernetes API, and additional information about the authentication model can be found [here](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
## View AKS resource live logs Use the following procedure to view the live logs for pods, deployments, and replica sets with or without Container insights from the AKS resource view.
The pane title shows the name of the Pod the container is grouped with.
### Filter events
-While viewing events, you can additionally limit the results using the **Filter** pill found to the right of the search bar. Depending on what resource you have selected, the pill lists a Pod, Namespace, or cluster to chose from.
+While viewing events, you can additionally limit the results using the **Filter** pill found to the right of the search bar. Depending on what resource you have selected, the pill lists a Pod, Namespace, or cluster to choose from.
## View metrics
The Live Data feature includes search functionality. In the **Search** field, yo
### Scroll Lock and Pause
-To suspend autoscroll and control the behavior of the pane, allowing you to manually scroll through the new data read, you can use the **Scroll** option. To re-enable autoscroll, simply select the **Scroll** option again. You can also pause retrieval of log or event data by selecting the the **Pause** option, and when you are ready to resume, simply select **Play**.
+To suspend autoscroll and control the behavior of the pane, allowing you to manually scroll through the new data read, you can use the **Scroll** option. To re-enable autoscroll, simply select the **Scroll** option again. You can also pause retrieval of log or event data by selecting the **Pause** option, and when you are ready to resume, simply select **Play**.
![Live Data console pane pause live view](./media/container-insights-livedata-overview/livedata-pane-scroll-pause-example.png)
azure-monitor Container Insights Livedata Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-setup.md
Title: Set up Container insights Live Data (preview) | Microsoft Docs
+ Title: Configure live data in Container insights
description: This article describes how to set up the real-time view of container logs (stdout/stderr) and events without using kubectl with Container insights. Previously updated : 01/08/2020 Last updated : 05/24/2022
-# How to set up the Live Data (preview) feature
+# How to configure Live Data in Container insights
-To view Live Data (preview) with Container insights from Azure Kubernetes Service (AKS) clusters, you need to configure authentication to grant permission to access to your Kubernetes data. This security configuration allows real-time access to your data through the Kubernetes API directly in the Azure portal.
+To view Live Data with Container insights from Azure Kubernetes Service (AKS) clusters, you need to configure authentication to grant permission to access to your Kubernetes data. This security configuration allows real-time access to your data through the Kubernetes API directly in the Azure portal.
This feature supports the following methods to control access to the logs, events, and metrics:
This feature supports the following methods to control access to the logs, event
These instructions require both administrative access to your Kubernetes cluster, and if configuring to use Azure Active Directory (AD) for user authentication, administrative access to Azure AD.
-This article explains how to configure authentication to control access to the Live Data (preview) feature from the cluster:
+This article explains how to configure authentication to control access to the Live Data feature from the cluster:
- Kubernetes role-based access control (Kubernetes RBAC) enabled AKS cluster - Azure Active Directory integrated AKS cluster.
This article explains how to configure authentication to control access to the L
## Authentication model
-The Live Data (preview) features utilizes the Kubernetes API, identical to the `kubectl` command-line tool. The Kubernetes API endpoints utilize a self-signed certificate, which your browser will be unable to validate. This feature utilizes an internal proxy to validate the certificate with the AKS service, ensuring the traffic is trusted.
+The Live Data feature utilizes the Kubernetes API, identical to the `kubectl` command-line tool. The Kubernetes API endpoints utilize a self-signed certificate, which your browser will be unable to validate. This feature utilizes an internal proxy to validate the certificate with the AKS service, ensuring the traffic is trusted.
The Azure portal prompts you to validate your sign-in credentials for an Azure Active Directory cluster and redirects you to the client registration setup during cluster creation (which is re-configured in this article). This behavior is similar to the authentication process required by `kubectl`.
The Azure portal prompts you to validate your login credentials for an Azure Act
## Using clusterMonitoringUser with Kubernetes RBAC-enabled clusters
-To eliminate the need to apply additional configuration changes to allow the Kubernetes user role binding **clusterUser** access to the Live Data (preview) feature after [enabling Kubernetes RBAC](#configure-kubernetes-rbac-authorization) authorization, AKS has added a new Kubernetes cluster role binding called **clusterMonitoringUser**. This cluster role binding has all the necessary permissions out-of-the-box to access the Kubernetes API and the endpoints for utilizing the Live Data (preview) feature.
+To eliminate the need to apply additional configuration changes to allow the Kubernetes user role binding **clusterUser** access to the Live Data feature after [enabling Kubernetes RBAC](#configure-kubernetes-rbac-authorization) authorization, AKS has added a new Kubernetes cluster role binding called **clusterMonitoringUser**. This cluster role binding has all the necessary permissions out-of-the-box to access the Kubernetes API and the endpoints for utilizing the Live Data feature.
-In order to utilize the Live Data (preview) feature with this new user, you need to be a member of the [Azure Kubernetes Service Cluster User](../../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role on the AKS cluster resource. Container insights, when enabled, is configured to authenticate using the clusterMonitoringUser by default. If the clusterMonitoringUser role binding does not exist on a cluster, **clusterUser** is used for authentication instead. Contributor gives you access to the clusterMonitoringUser (if it exists) and Azure Kuberenetes Service Cluster User gives you access to the clusterUser. Any of these two roles give sufficient access to use this feature.
+In order to utilize the Live Data feature with this new user, you need to be a member of the [Azure Kubernetes Service Cluster User](../../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role on the AKS cluster resource. Container insights, when enabled, is configured to authenticate using the clusterMonitoringUser by default. If the clusterMonitoringUser role binding does not exist on a cluster, **clusterUser** is used for authentication instead. Contributor gives you access to the clusterMonitoringUser (if it exists) and Azure Kubernetes Service Cluster User gives you access to the clusterUser. Either of these two roles gives sufficient access to use this feature.
AKS released this new role binding in January 2020, so clusters created before January 2020 do not have it. If you have a cluster that was created before January 2020, the new **clusterMonitoringUser** can be added to an existing cluster by performing a PUT operation on the cluster, or by performing any other operation that results in a PUT operation on the cluster, such as updating the cluster version.
For more information on advanced security setup in Kubernetes, review the [Kuber
## Grant permission
-Each Azure AD account must be granted permission to the appropriate APIs in Kubernetes in order to access the Live Data (preview) feature. The steps to grant the Azure Active Directory account are similar to the steps described in the [Kubernetes RBAC authentication](#configure-kubernetes-rbac-authorization) section. Before applying the yaml configuration template to your cluster, replace **clusterUser** under **ClusterRoleBinding** with the desired user.
+Each Azure AD account must be granted permission to the appropriate APIs in Kubernetes in order to access the Live Data feature. The steps to grant the Azure Active Directory account are similar to the steps described in the [Kubernetes RBAC authentication](#configure-kubernetes-rbac-authorization) section. Before applying the yaml configuration template to your cluster, replace **clusterUser** under **ClusterRoleBinding** with the desired user.
>[!IMPORTANT] >If the user you grant the Kubernetes RBAC binding for is in the same Azure AD tenant, assign permissions based on the userPrincipalName. If the user is in a different Azure AD tenant, query for and use the objectId property.
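The following is an illustrative sketch only; the ClusterRole name, binding name, and user shown are placeholders, and the actual names come from the yaml template referenced in this article:

```bash
# Bind an illustrative ClusterRole to an Azure AD user instead of clusterUser.
# "live-data-reader" and "live-data-reader-binding" are placeholder names.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: live-data-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: live-data-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user@contoso.com   # userPrincipalName (same tenant) or objectId (different tenant)
EOF
```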
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Title: Metric alerts from Container insights description: This article reviews the recommended metric alerts available from Container insights in public preview. Previously updated : 10/28/2020 Last updated : 05/24/2022
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Title: Enable Container insights | Microsoft Docs
+ Title: Enable Container insights
description: This article describes how to enable and configure Container insights so that you can understand how your container is performing and what performance-related issues have been identified. Previously updated : 06/30/2020- Last updated : 05/24/2022 # Enable Container insights
+This article provides an overview of the requirements and options that are available for configuring Container insights to monitor the performance of workloads that are deployed to Kubernetes environments. You can enable Container insights for a new deployment or for one or more existing deployments of Kubernetes by using a number of supported methods.
-This article provides an overview of the options that are available for setting up Container insights to monitor the performance of workloads that are deployed to Kubernetes environments and hosted on:
+## Supported configurations
+Container insights supports the following environments:
- [Azure Kubernetes Service (AKS)](../../aks/index.yml) - [Azure Arc-enabled Kubernetes cluster](../../azure-arc/kubernetes/overview.md)
This article provides an overview of the options that are available for setting
- [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) version 4.x - [Red Hat OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/https://docsupdatetracker.net/index.html) version 4.x
-You can enable Container insights for a new deployment or for one or more existing deployments of Kubernetes by using any of the following supported methods:
--- The Azure portal-- Azure PowerShell-- The Azure CLI-- [Terraform and AKS](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks)-
-For any non-AKS kubernetes cluster, you will need to first connect your cluster to [Azure Arc](../../azure-arc/kubernetes/overview.md) before enabling monitoring.
+## Supported Kubernetes versions
+The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).
## Prerequisites- Before you start, make sure that you've met the following requirements:
-> [!IMPORTANT]
-> Log Analytics Containerized Linux Agent (replicaset pod) makes API calls to all the Windows nodes on Kubelet Secure Port (10250) within the cluster to collect Node and Container Performance related Metrics.
-Kubelet secure port (:10250) should be opened in the cluster's virtual network for both inbound and outbound for Windows Node and container performance related metrics collection to work.
->
-> If you have a Kubernetes cluster with Windows nodes, then please review and configure the Network Security Group and Network Policies to make sure the Kubelet secure port (:10250) is opened for both inbound and outbound in cluster's virtual network.
-
+**Log Analytics workspace**
+Container insights supports a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) in the regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). For a list of the supported mapping pairs to use for the default workspace, see [Region mappings supported by Container insights](container-insights-region-mapping.md).
-- You have a Log Analytics workspace.
+You can let the onboarding experience create a default workspace in the default resource group of the AKS cluster subscription. If you already have a workspace, though, you will most likely want to use that one. See [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) for details.
- Container insights supports a Log Analytics workspace in the regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor).
+An AKS cluster can be attached to a Log Analytics workspace in a different Azure subscription in the same Azure AD tenant. This cannot currently be done with the Azure portal, but it can be done with the Azure CLI or a Resource Manager template.
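For example, as a sketch with placeholder names, the Azure CLI accepts the full workspace resource ID, which can point to a workspace in another subscription:

```azurecli
# Attach an existing AKS cluster to a workspace in a different subscription
# by passing the workspace's full resource ID (all values are placeholders).
az aks enable-addons --addons monitoring \
  --name MyAKSCluster --resource-group MyClusterRG \
  --workspace-resource-id "/subscriptions/<other-subscription-id>/resourceGroups/<workspace-rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```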
- You can create a workspace when you enable monitoring for your new AKS cluster, or you can let the onboarding experience create a default workspace in the default resource group of the AKS cluster subscription.
-
- If you choose to create the workspace yourself, you can create it through:
- - [Azure Resource Manager](../logs/resource-manager-workspace.md)
- - [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json)
- - [The Azure portal](../logs/quick-create-workspace.md)
-
- For a list of the supported mapping pairs to use for the default workspace, see [Region mapping for Container insights](container-insights-region-mapping.md).
-- You are a member of the *Log Analytics contributor* group for enabling container monitoring. For more information about how to control access to a Log Analytics workspace, see [Manage workspaces](../logs/manage-access.md).
+**Permissions**
+To enable container monitoring, you require the following permissions:
-- You are a member of the [*Owner* group](../../role-based-access-control/built-in-roles.md#owner) on the AKS cluster resource.
+- Member of the [Log Analytics contributor](../logs/manage-access.md#azure-rbac) role.
+- Member of the [*Owner* group](../../role-based-access-control/built-in-roles.md#owner) on any AKS cluster resources.
- [!INCLUDE [log-analytics-agent-note](../../../includes/log-analytics-agent-note.md)]
+To view the monitoring data, you require the following permissions:
-- To view the monitoring data, you need to have [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.
+- Member of [Log Analytics reader](../logs/manage-access.md#azure-rbac) role if you aren't already a member of [Log Analytics contributor](../logs/manage-access.md#azure-rbac).
-- Prometheus metrics aren't collected by default. Before you [configure the agent](container-insights-prometheus-integration.md) to collect the metrics, it's important to review the [Prometheus documentation](https://prometheus.io/) to understand what data can be scraped and what methods are supported.-- An AKS cluster can be attached to a Log Analytics workspace in a different Azure subscription in the same Azure AD Tenant. This cannot currently be done with the Azure Portal, but can be done with Azure CLI or Resource Manager template.
+**Prometheus**
+Prometheus metrics aren't collected by default. Before you [configure the agent](container-insights-prometheus-integration.md) to collect the metrics, it's important to review the [Prometheus documentation](https://prometheus.io/) to understand what data can be scraped and what methods are supported.
-## Supported configurations
+**Kubelet secure port**
+The Log Analytics containerized Linux agent (replicaset pod) makes API calls to all the Windows nodes on the Kubelet secure port (10250) within the cluster to collect node and container performance-related metrics. The Kubelet secure port (:10250) should be opened in the cluster's virtual network for both inbound and outbound traffic for Windows node and container performance-related metrics collection to work.
-Container insights officially supports the following configurations:
+If you have a Kubernetes cluster with Windows nodes, review and configure the network security group and network policies to make sure the Kubelet secure port (:10250) is open for both inbound and outbound traffic in the cluster's virtual network.
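As a hedged sketch (the NSG name, resource group, and priority are placeholders), an inbound rule that opens the Kubelet secure port in a network security group might look like the following; mirror the rule for outbound traffic as needed:

```azurecli
# Allow traffic to the Kubelet secure port (10250) within the cluster's virtual network.
az network nsg rule create \
  --resource-group MyClusterNodeRG \
  --nsg-name MyClusterNSG \
  --name AllowKubeletSecurePort \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes VirtualNetwork \
  --destination-port-ranges 10250
```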
-- Environments: Azure Red Hat OpenShift, Kubernetes on-premises, and the AKS engine on Azure and Azure Stack. For more information, see [the AKS engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).-- The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).-- We recommend connecting your cluster to [Azure Arc](../../azure-arc/kubernetes/overview.md) and enabling monitoring through Container Insights via Azure Arc.
-> [!IMPORTANT]
-> Please note that the monitoring add-on is not currently supported for AKS clusters configured with the [HTTP Proxy (preview)](../../aks/http-proxy.md)
## Network firewall requirements
The following table lists the proxy and firewall configuration information for A
| `*.oms.opinsights.azure.us` | 443 | OMS onboarding | | `dc.services.visualstudio.com` | 443 | For agent telemetry that uses Azure Public Cloud Application Insights |
-## Components
+## Agent
+Container insights relies on a containerized Log Analytics agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
-Your ability to monitor performance relies on a containerized Log Analytics agent for Linux that's specifically developed for Container insights. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
+The agent version is *microsoft/oms:ciprod04202018* or later, and it's represented by a date in the following format: *mmddyyyy*. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS). To track which versions are released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
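To check which agent version a cluster is currently running, here is a sketch that assumes the default `omsagent` daemonset in the `kube-system` namespace:

```bash
# Print the image (and therefore the version tag) used by the Container insights agent daemonset.
kubectl get ds omsagent --namespace=kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```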
-The agent version is microsoft/oms:ciprod04202018 or later, and it's represented by a date in the following format: *mmddyyyy*.
>[!NOTE] >With the general availability of Windows Server support for AKS, an AKS cluster with Windows Server nodes has a preview agent installed as a daemonset pod on each individual Windows server node to collect logs and forward it to Log Analytics. For performance metrics, a Linux node that's automatically deployed in the cluster as part of the standard deployment collects and forwards the data to Azure Monitor on behalf all Windows nodes in the cluster.
-When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS). To track which versions are released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
> [!NOTE]
-> If you've already deployed an AKS cluster, you've enabled monitoring by using either the Azure CLI or a provided Azure Resource Manager template, as demonstrated later in this article. You can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent.
->
-> The template needs to be deployed in the same resource group as the cluster.
+> If you've already deployed an AKS cluster and enabled monitoring using either the Azure CLI or an Azure Resource Manager template, you can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent. The template needs to be deployed in the same resource group as the cluster.
+## Installation options
To enable Container insights, use one of the methods that's described in the following table:
-| Deployment state | Method | Description |
-||--|-|
-| New Kubernetes cluster | [Create an AKS cluster by using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md)| You can enable monitoring for a new AKS cluster that you create by using the Azure CLI. |
-| | [Create an AKS cluster by using Terraform](container-insights-enable-new-cluster.md#enable-using-terraform)| You can enable monitoring for a new AKS cluster that you create by using the open-source tool Terraform. |
-| | [Create an OpenShift cluster by using an Azure Resource Manager template](container-insights-azure-redhat-setup.md#enable-for-a-new-cluster-using-an-azure-resource-manager-template) | You can enable monitoring for a new OpenShift cluster that you create by using a preconfigured Azure Resource Manager template. |
-| | [Create an OpenShift cluster by using the Azure CLI](/cli/azure/openshift#az-openshift-create) | You can enable monitoring when you deploy a new OpenShift cluster by using the Azure CLI. |
-| Existing AKS cluster | [Enable monitoring of an AKS cluster by using the Azure CLI](container-insights-enable-existing-clusters.md#enable-using-azure-cli) | You can enable monitoring for an AKS cluster that's already deployed by using the Azure CLI. |
-| |[Enable for AKS cluster using Terraform](container-insights-enable-existing-clusters.md#enable-using-terraform) | You can enable monitoring for an AKS cluster that's already deployed by using the open-source tool Terraform. |
-| | [Enable for AKS cluster from Azure Monitor](container-insights-enable-existing-clusters.md#enable-from-azure-monitor-in-the-portal)| You can enable monitoring for one or more AKS clusters that are already deployed from the multi-cluster page in Azure Monitor. |
-| | [Enable from AKS cluster](container-insights-enable-existing-clusters.md#enable-directly-from-aks-cluster-in-the-portal)| You can enable monitoring directly from an AKS cluster in the Azure portal. |
-| | [Enable for AKS cluster using an Azure Resource Manager template](container-insights-enable-existing-clusters.md#enable-using-an-azure-resource-manager-template)| You can enable monitoring for an AKS cluster by using a preconfigured Azure Resource Manager template. |
-| Existing non-AKS Kubernetes cluster | [Enable for non-AKS Kubernetes cluster by using the Azure CLI](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-cli). | You can enable monitoring for your Kubernetes clusters that are hosted outside of Azure and enabled with Azure Arc, this includes hybrid, OpenShift, and multi-cloud using Azure CLI. |
-| | [Enable for non-AKS Kubernetes cluster using an Azure Resource Manager template](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-resource-manager) | You can enable monitoring for your clusters enabled with Arc by using a preconfigured Azure Resource Manager template. |
-| | [Enable for non-AKS Kubernetes cluster from Azure Monitor](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-portal) | You can enable monitoring for one or more clusters enabled with Arc that are already deployed from the multicluster page in Azure Monitor. |
+| Deployment state | Method |
+||--|
+| New Kubernetes cluster | [Enable monitoring for a new AKS cluster using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md)|
+| | [Enable for a new AKS cluster by using the open-source tool Terraform](container-insights-enable-new-cluster.md#enable-using-terraform)|
+| | [Enable for a new OpenShift cluster by using an Azure Resource Manager template](container-insights-azure-redhat-setup.md#enable-for-a-new-cluster-using-an-azure-resource-manager-template) |
+| | [Enable for a new OpenShift cluster by using the Azure CLI](/cli/azure/openshift#az-openshift-create) |
+| Existing AKS cluster | [Enable monitoring for an existing AKS cluster using the Azure CLI](container-insights-enable-existing-clusters.md#enable-using-azure-cli) |
+| |[Enable for an existing AKS cluster using Terraform](container-insights-enable-existing-clusters.md#enable-using-terraform) |
+| | [Enable for an existing AKS cluster from Azure Monitor](container-insights-enable-existing-clusters.md#enable-from-azure-monitor-in-the-portal)|
+| | [Enable directly from an AKS cluster in the Azure portal](container-insights-enable-existing-clusters.md#enable-directly-from-aks-cluster-in-the-portal)|
+| | [Enable for AKS cluster using an Azure Resource Manager template](container-insights-enable-existing-clusters.md#enable-using-an-azure-resource-manager-template)|
+| Existing non-AKS Kubernetes cluster | [Enable for non-AKS Kubernetes cluster hosted outside of Azure and enabled with Azure Arc using the Azure CLI](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-cli). |
+| | [Enable for non-AKS Kubernetes cluster hosted outside of Azure and enabled with Azure Arc using a preconfigured Azure Resource Manager template](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-resource-manager) |
+| | [Enable for non-AKS Kubernetes cluster hosted outside of Azure and enabled with Azure Arc from the multicluster page in Azure Monitor](container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-portal) |
## Next steps
+Once you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment. To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
-Now that you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment. To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
azure-monitor Container Insights Optout Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-hybrid.md
Title: How to stop monitoring your hybrid Kubernetes cluster | Microsoft Docs description: This article describes how you can stop monitoring of your hybrid Kubernetes cluster with Container insights. Previously updated : 06/16/2020 Last updated : 05/24/2022
azure-monitor Container Insights Optout Openshift V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v3.md
Title: How to stop monitoring your Azure Red Hat OpenShift v3 cluster | Microsoft Docs description: This article describes how you can stop monitoring of your Azure Red Hat OpenShift cluster with Container insights. Previously updated : 04/24/2020 Last updated : 05/24/2022
azure-monitor Container Insights Optout Openshift V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v4.md
Title: How to stop monitoring your Azure and Red Hat OpenShift v4 cluster | Microsoft Docs description: This article describes how you can stop monitoring of your Azure Red Hat OpenShift and Red Hat OpenShift version 4 cluster with Container insights. Previously updated : 04/24/2020 Last updated : 05/24/2022
azure-monitor Container Insights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout.md
Title: How to Stop Monitoring Your Azure Kubernetes Service cluster | Microsoft Docs description: This article describes how you can discontinue monitoring of your Azure AKS cluster with Container insights. Previously updated : 08/19/2019 Last updated : 05/24/2022 ms.devlang: azurecli
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Title: Overview of Container insights | Microsoft Docs description: This article describes Container insights that monitors AKS Container Insights solution and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure. + Last updated 09/08/2020
Container insights is a feature designed to monitor the performance of container
- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine) - [Azure Container Instances](../../container-instances/container-instances-overview.md) - Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises-- [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) - [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) (preview) Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Docker, Moby, and any CRI compatible runtime such as CRI-O and ContainerD. Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications.
-Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are written to the metrics store and log data is written to the logs store associated with your [Log Analytics](../logs/log-query-overview.md) workspace.
+Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md), and log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
-![Container insights architecture](./media/container-insights-overview/azmon-containers-architecture-01.png)
-## What does Container insights provide?
+## Features of Container insights
-Container insights delivers a comprehensive monitoring experience using different features of Azure Monitor. These features enable you to understand the performance and health of your Kubernetes cluster running Linux and Windows Server 2019 operating system, and the container workloads. With Container insights you can:
+Container insights delivers a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads.
-* Identify AKS containers that are running on the node and their average processor and memory utilization. This knowledge can help you identify resource bottlenecks.
-* Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances.
-* Identify where the container resides in a controller or a pod. This knowledge can help you view the controller's or pod's overall performance.
-* Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod.
-* Understand the behavior of the cluster under average and heaviest loads. This knowledge can help you identify capacity needs and determine the maximum load that the cluster can sustain.
-* Configure alerts to proactively notify you or record it when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.
-* Integrate with [Prometheus](https://prometheus.io/docs/introduction/overview/) to view application and workload metrics it collects from nodes and Kubernetes using [queries](container-insights-log-query.md) to create custom alerts, dashboards, and perform detailed analysis.
-* Monitor container workloads [deployed to AKS Engine](https://github.com/Azure/aks-engine) on-premises and [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).
-* Monitor container workloads [deployed to Azure Red Hat OpenShift](../../openshift/intro-openshift.md).
+- Identify resource bottlenecks by identifying AKS containers running on the node and their average processor and memory utilization.
+- Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances.
+- View the controller's or pod's overall performance by identifying where the container resides in a controller or a pod.
+- Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod.
+- Identify capacity needs and determine the maximum load that the cluster can sustain by understanding the behavior of the cluster under average and heaviest loads.
+- Configure alerts to proactively notify you or record when CPU and memory utilization on nodes or containers exceeds your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.
+- Integrate with [Prometheus](https://prometheus.io/docs/introduction/overview/) to view application and workload metrics it collects from nodes and Kubernetes using [queries](container-insights-log-query.md) to create custom alerts, dashboards, and perform detailed analysis.
+- Monitor container workloads [deployed to AKS Engine](https://github.com/Azure/aks-engine) on-premises and [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).
+- Monitor container workloads [deployed to Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
- >[!NOTE]
- >Support for Azure Red Hat OpenShift is a feature in public preview at this time.
- >
-* Monitor container workloads [deployed to Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
-The main differences in monitoring a Windows Server cluster compared to a Linux cluster are the following:
+Check out the following video, which provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. Note that the video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.
-- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows node and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.-- Disk storage capacity information isn't available for Windows nodes.-- Only pod environments are monitored, not Docker environments.-- With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.-
-Check out the following video providing an intermediate level deep dive to help you learn about monitoring your AKS cluster with Container insights.
+[!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
-> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
-## How do I access this feature?
-You can access Container insights two ways, from Azure Monitor or directly from the selected AKS cluster. From Azure Monitor, you have a global perspective of all the containers deployed, which are monitored and which are not, allowing you to search and filter across your subscriptions and resource groups, and then drill into Container insights from the selected container. Otherwise, you can access the feature directly from a selected AKS container from the AKS page.
+## How to access Container insights
+Access Container insights in the Azure portal from Azure Monitor or directly from the selected AKS cluster. The Azure Monitor menu gives you the global perspective of all the containers deployed and which are monitored, allowing you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
![Overview of methods to access Container insights](./media/container-insights-overview/azmon-containers-experience.png) +
+## Differences between Windows and Linux clusters
+The main differences in monitoring a Windows Server cluster compared to a Linux cluster include the following:
+
+- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows node and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
+- Disk storage capacity information isn't available for Windows nodes.
+- Only pod environments are monitored, not Docker environments.
+- With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.
+ ## Next steps To begin monitoring your Kubernetes cluster, review [How to enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-persistent-volumes.md
Title: Configure PV monitoring with Container insights | Microsoft Docs description: This article describes how you can configure monitoring Kubernetes clusters with persistent volumes with Container insights. Previously updated : 03/03/2021 Last updated : 05/24/2022 # Configure PV monitoring with Container insights
azure-monitor Container Insights Region Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-region-mapping.md
When you enable Container insights, only certain regions are supported for linking a Log Analytics workspace and an AKS cluster, and for collecting custom metrics submitted to Azure Monitor. ## Log Analytics workspace supported mappings- Supported AKS regions are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service). The Log Analytics workspace must be in the same region except for the regions listed in the following table. Watch [AKS release notes](https://github.com/Azure/AKS/releases) for updates.
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
Title: Reports in Container insights description: Describes reports available to analyze data collected by Container insights. Previously updated : 03/02/2021 Last updated : 05/24/2022 # Reports in Container insights
azure-monitor Container Insights Transition Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-solution.md
Title: "Transition from the Container Monitoring Solution to using Container Insights"
+ Title: Transition from the Container Monitoring Solution to using Container Insights
Last updated 1/18/2022
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
Title: How to Troubleshoot Container insights | Microsoft Docs description: This article describes how you can troubleshoot and resolve issues with Container insights. Previously updated : 03/25/2021 Last updated : 05/24/2022
You can also manually grant this role from the Azure portal by performing the fo
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). ## Container insights is enabled but not reporting any information-
-If Container insights is successfully enabled and configured, but you cannot view status information or no results are returned from a log query, you diagnose the problem by following these steps:
+Use the following steps to diagnose the problem if you can't view status information or no results are returned from a log query:
1. Check the status of the agent by running the command:
If Container insights is successfully enabled and configured, but you cannot vie
omsagent 1 1 1 1 3h ```
-4. Check the status of the pod to verify that it is running using the command: `kubectl get pods --namespace=kube-system`
+4. Check the status of the pod to verify that it's running using the command: `kubectl get pods --namespace=kube-system`
The output should resemble the following example with a status of *Running* for the omsagent:
The table below summarizes known errors you may encounter while using Container
| Error messages | Action | | - | | | Error Message `No data for selected filters` | It may take some time to establish monitoring data flow for newly created clusters. Allow at least 10 to 15 minutes for data to appear for your cluster. |
-| Error Message `Error retrieving data` | While Azure Kubernetes Service cluster is setting up for health and performance monitoring, a connection is established between the cluster and Azure Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error may occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted and if it was, you will need to re-enable monitoring of your cluster with Container insights and specify an existing or create a new workspace. To re-enable, you will need to [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. |
-| `Error retrieving data` after adding Container insights through az aks cli | When enable monitoring using `az aks cli`, Container insights may not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Solutions** from the pane on the left-hand side. To resolve this issue, you will need to redeploy the solution by following the instructions on [how to deploy Container insights](container-insights-onboard.md) |
+| Error Message `Error retrieving data` | While Azure Kubernetes Service cluster is setting up for health and performance monitoring, a connection is established between the cluster and Azure Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error may occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted. If it was, you'll need to re-enable monitoring of your cluster with Container insights and either specify an existing workspace or create a new one. To re-enable, you'll need to [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. |
+| `Error retrieving data` after adding Container insights through az aks cli | When you enable monitoring using `az aks cli`, Container insights may not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Solutions** from the pane on the left-hand side. To resolve this issue, you'll need to redeploy the solution by following the instructions on [how to deploy Container insights](container-insights-onboard.md). |
-To help diagnose the problem, we have provided a [troubleshooting script](https://github.com/microsoft/Docker-Provider/tree/ci_dev/scripts/troubleshoot).
+To help diagnose the problem, we've provided a [troubleshooting script](https://github.com/microsoft/Docker-Provider/tree/ci_dev/scripts/troubleshoot).
-## Container insights agent ReplicaSet Pods are not scheduled on non-Azure Kubernetes cluster
+## Container insights agent ReplicaSet Pods aren't scheduled on non-Azure Kubernetes cluster
Container insights agent ReplicaSet Pods have a dependency on the following node selectors on the worker (or agent) nodes for scheduling:
If your worker nodes don't have node labels attached, then agent ReplicaSet Po
## Performance charts don't show CPU or memory of nodes and containers on a non-Azure cluster
-Container insights agent Pods uses the cAdvisor endpoint on the node agent to gather the performance metrics. Verify the containerized agent on the node is configured to allow `cAdvisor port: 10255` to be opened on all nodes in the cluster to collect performance metrics.
+Container insights agent pods use the cAdvisor endpoint on the node agent to gather the performance metrics. Verify the containerized agent on the node is configured to allow `cAdvisor port: 10255` to be opened on all nodes in the cluster to collect performance metrics.
-## Non-Azure Kubernetes cluster are not showing in Container insights
+## Non-Azure Kubernetes clusters aren't showing in Container insights
To view the non-Azure Kubernetes cluster in Container insights, Read access is required on the Log Analytics workspace supporting this Insight and on the Container Insights solution resource **ContainerInsights (*workspace*)**.
To view the non-Azure Kubernetes cluster in Container insights, Read access is r
``` azurecli az role assignment list --assignee "SP/UserassignedMSI for omsagent" --scope "/subscriptions/<subid>/resourcegroups/<RG>/providers/Microsoft.ContainerService/managedClusters/<clustername>" --role "Monitoring Metrics Publisher" ```
- For clusters with MSI, the user assigned client id for omsagent changes every time monitoring is enabled and disabled, so the role assignment should exist on the current msi client id.
+ For clusters with MSI, the user-assigned client ID for omsagent changes every time monitoring is enabled and disabled, so the role assignment should exist on the current MSI client ID.
3. For clusters with Azure Active Directory pod identity enabled and using MSI:
To view the non-Azure Kubernetes cluster in Container insights, Read access is r
``` ## Installation of Azure Monitor Containers Extension fails with an error containing "manifests contain a resource that already exists" on Azure Arc-enabled Kubernetes cluster
-The error _manifests contain a resource that already exists_ indicates that resources of the Container Insights agent already exist on the Azure Arc Enabled Kubernetes cluster. This indicates that the container insights agent is already installed either through azuremonitor-containers HELM chart or Monitoring Addon if it is AKS Cluster which is connected Azure Arc. The solution to this issue, is to clean up the existing resources of container insights agent if it exists and then enable Azure Monitor Containers Extension.
+The error _manifests contain a resource that already exists_ indicates that resources of the Container insights agent already exist on the Azure Arc-enabled Kubernetes cluster, which means the agent is already installed, either through the azuremonitor-containers Helm chart or through the monitoring add-on if it's an AKS cluster that's connected to Azure Arc. To resolve this issue, clean up the existing Container insights agent resources if they exist, and then enable the Azure Monitor Containers Extension.
### For non-AKS clusters
-1. Against the K8s cluster which is connected to Azure Arc, run below command to verify whether the azmon-containers-release-1 helm chart release exists or not:
+1. Against the K8s cluster that's connected to Azure Arc, run the following command to verify whether the azmon-containers-release-1 Helm chart release exists:
`helm list -A`
The error _manifests contain a resource that already exists_ indicates that reso
`helm del azmon-containers-release-1`

### For AKS clusters
-1. Run below commands and look for omsagent addon profile to verify the AKS monitoring addon enabled or not:
+1. Run the following commands and look for the omsagent addon profile to verify whether the AKS monitoring addon is enabled:
```
az account set -s <clusterSubscriptionId>
az aks show -g <clusterResourceGroup> -n <clusterName>
```
-2. If there is omsagent addon profile config with log analytics workspace resource Id in the output of the above command indicates that, AKS Monitoring addon enabled and which needs to be disabled:
+2. If the output includes an omsagent addon profile config with a Log Analytics workspace resource ID, the AKS monitoring addon is enabled and needs to be disabled:
`az aks disable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName>`
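After the addon is disabled and any leftover agent resources are cleaned up, one way to enable the Azure Monitor Containers Extension on an Arc-connected cluster is the `az k8s-extension` command. Treat this as a hedged sketch with placeholder names; it assumes the `k8s-extension` CLI extension is installed.

```azurecli
az k8s-extension create --name azuremonitor-containers --cluster-name <clusterName> \
    --resource-group <clusterResourceGroup> --cluster-type connectedClusters \
    --extension-type Microsoft.AzureMonitor.Containers
```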
-If above steps didn't resolve the installation of Azure Monitor Containers Extension issues, please create a ticket to Microsoft for further investigation.
+If the above steps didn't resolve the Azure Monitor Containers Extension installation issues, create a support ticket with Microsoft for further investigation.
## Next steps
azure-monitor Azure Cli Application Insights Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-cli-application-insights-component.md
Last updated 09/10/2012--
+ms.tool: azure-cli
# Manage Application Insights components by using Azure CLI
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups
**Example**
-This example configures the `ContainerLog` table for Basic Logs.
+This example configures the `ContainerLogV2` table for Basic Logs.
+
+Container Insights uses ContainerLog by default. To switch to ContainerLogV2, follow these [instructions](../containers/container-insights-logging-v2.md) before you convert the table to Basic Logs.
**Sample request** ```http
-PATCH https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLog?api-version=2021-12-01-preview
+PATCH https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLogV2?api-version=2021-12-01-preview
``` Use this request body to change to Basic Logs:
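The plan is set through the table's properties. As a hedged end-to-end sketch of the same call using `az rest`, assuming the request body takes the same `Basic`/`Analytics` plan values used by the Azure CLI commands later in this article:

```azurecli
# Body content is an assumption based on the --plan values shown later in this article.
az rest --method patch \
    --url "https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLogV2?api-version=2021-12-01-preview" \
    --body '{"properties": {"plan": "Basic"}}'
```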
Status code: 200
"schema": {...} }, "id": "subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace",
- "name": "ContainerLog"
+ "name": "ContainerLogV2"
} ```
For example:
- To set Basic Logs: ```azurecli
- az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLog --plan Basic
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Basic
``` - To set Analytics Logs: ```azurecli
- az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLog --plan Analytics
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Analytics
```
GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{
**Sample Request** ```http
-GET https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLog?api-version=2021-12-01-preview
+GET https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLogV2?api-version=2021-12-01-preview
```
Status code: 200
"provisioningState": "Succeeded" }, "id": "subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace",
- "name": "ContainerLog"
+ "name": "ContainerLogV2"
} ```
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
na Previously updated : 03/15/2022 Last updated : 05/27/2022 # Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes
Azure NetApp Files supports fetching of extended groups from the LDAP name servi
When it's determined that LDAP will be used for operations such as name lookup and fetching extended groups, the following process occurs: 1. Azure NetApp Files uses an LDAP client configuration to make a connection attempt to the ADDS/AADDS LDAP server that is specified in the [Azure NetApp Files AD configuration](create-active-directory-connections.md).
-1. If the TCP connection over the defined ADDS/AADDS LDAP service port is successful, then the Azure NetApp Files LDAP client attempts to "bind" (log in) to the ADDS/AADDS LDAP server (domain controller) by using the defined credentials in the LDAP client configuration.
+1. If the TCP connection over the defined ADDS/AADDS LDAP service port is successful, then the Azure NetApp Files LDAP client attempts to "bind" (sign in) to the ADDS/AADDS LDAP server (domain controller) by using the defined credentials in the LDAP client configuration.
1. If the bind is successful, then the Azure NetApp Files LDAP client uses the RFC 2307bis LDAP schema to make an LDAP search query to the ADDS/AADDS LDAP server (domain controller). The following information is passed to the server in the query: * [Base/user DN](configure-ldap-extended-groups.md#ldap-search-scope) (to narrow search scope)
The following information is passed to the server in the query:
![Screenshot that shows Create a Volume page with LDAP option.](../media/azure-netapp-files/create-nfs-ldap.png) 7. Optional - You can enable local NFS client users not present on the Windows LDAP server to access an NFS volume that has LDAP with extended groups enabled. To do so, enable the **Allow local NFS users with LDAP** option as follows:
- 1. Click **Active Directory connections**. On an existing Active Directory connection, click the context menu (the three dots `…`), and select **Edit**.
+ 1. Select **Active Directory connections**. On an existing Active Directory connection, select the context menu (the three dots `…`), and select **Edit**.
2. On the **Edit Active Directory settings** window that appears, select the **Allow local NFS users with LDAP** option. ![Screenshot that shows the Allow local NFS users with LDAP option](../media/azure-netapp-files/allow-local-nfs-users-with-ldap.png)
The following information is passed to the server in the query:
* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) * [Create and manage Active Directory connections](create-active-directory-connections.md) * [Configure NFSv4.1 domain](azure-netapp-files-configure-nfsv41-domain.md#configure-nfsv41-domain)
+* [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md)
* [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md) * [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md)
azure-netapp-files Configure Nfs Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-nfs-clients.md
na Previously updated : 09/22/2021 Last updated : 05/27/2022 # Configure an NFS client for Azure NetApp Files
-The NFS client configuration described in this article is part of the setup when you [configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) or [create a dual-protocol volume](create-volumes-dual-protocol.md). A wide variety of Linux distributions are available to use with Azure NetApp Files. This article describes configurations for two of the more commonly used environments: RHEL 8 and Ubuntu 18.04.
+The NFS client configuration described in this article is part of the setup when you [configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md), [create a dual-protocol volume](create-volumes-dual-protocol.md), or [configure NFSv3/NFSv4.1 with LDAP](configure-ldap-extended-groups.md). A wide variety of Linux distributions are available to use with Azure NetApp Files. This article describes configurations for two of the more commonly used environments: RHEL 8 and Ubuntu 18.04.
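As a minimal starting sketch for both distributions, install the standard NFS client packages before you follow the configuration steps; the package names are the distributions' standard NFS utilities, not anything specific to Azure NetApp Files.

```
# Ubuntu 18.04
sudo apt update && sudo apt install -y nfs-common

# RHEL 8
sudo dnf install -y nfs-utils
```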
## Requirements and considerations
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Azure enables you to create and manage support requests, also known as support t
> The Azure portal URL is specific to the Azure cloud where your organization is deployed. > >- Azure portal for commercial use is: [https://portal.azure.com](https://portal.azure.com)
->- Azure portal for Germany is: [https://portal.microsoftazure.de](https://portal.microsoftazure.de)
>- Azure portal for the United States government is: [https://portal.azure.us](https://portal.azure.us) Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. For technical support, you need a support plan. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans).
Follow these links to learn more:
* [Azure support ticket REST API](/rest/api/support) * Engage with us on [Twitter](https://twitter.com/azuresupport) * Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure)
-* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
+* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization description: Shows how to apply tags to organize Azure resources for billing and managing. Previously updated : 05/03/2022 Last updated : 05/25/2022 # Use tags to organize your Azure resources and management hierarchy
-You apply tags to your Azure resources, resource groups, and subscriptions to logically organize them by values that make sense for your organization. Each tag consists of a name and a value pair. For example, you can apply the name _Environment_ and the value _Production_ to all the resources in production.
+Tags are metadata elements that you apply to your Azure resources. They're key-value pairs that help you identify resources based on settings that are relevant to your organization. If you want to track the deployment environment for your resources, add a key named Environment. To identify the resources deployed to production, give them a value of Production. Fully formed, the key-value pair becomes Environment = Production.
+
+You can apply tags to your Azure resources, resource groups, and subscriptions.
For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
Resource tags support all cost-accruing services. To ensure that cost-accruing s
> Tags are stored as plain text. Never add sensitive values to tags. Sensitive values could be exposed through many methods, including cost reports, commands that return existing tag definitions, deployment histories, exported templates, and monitoring logs. > [!IMPORTANT]
-> Tag names are case-insensitive for operations. A tag with a tag name, regardless of casing, is updated or retrieved. However, the resource provider might keep the casing you provide for the tag name. You'll see that casing in cost reports.
+> Tag names are case-insensitive for operations. A tag with a tag name, regardless of the casing, is updated or retrieved. However, the resource provider might keep the casing you provide for the tag name. You'll see that casing in cost reports.
> > Tag values are case-sensitive.
Resource tags support all cost-accruing services. To ensure that cost-accruing s
There are two ways to get the required access to tag resources. -- You can have write access to the `Microsoft.Resources/tags` resource type. This access lets you tag any resource, even if you don't have access to the resource itself. The [Tag Contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) role grants this access. Currently, the tag contributor role can't apply tags to resources or resource groups through the portal. It can apply tags to subscriptions through the portal. It supports all tag operations through PowerShell and REST API.
+- You can have write access to the `Microsoft.Resources/tags` resource type. This access lets you tag any resource, even if you don't have access to the resource itself. The [Tag Contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) role grants this access. The Tag Contributor role can't apply tags to resources or resource groups through the portal. It can, however, apply tags to subscriptions through the portal, and it supports all tag operations through Azure PowerShell and REST API.
-- You can have write access to the resource itself. The [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role grants the required access to apply tags to any entity. To apply tags to only one resource type, use the contributor role for that resource. For example, to apply tags to virtual machines, use the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
+- You can have write access to the resource itself. The [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role grants the required access to apply tags to any entity. To apply tags to only one resource type, use the contributor role for that resource. To apply tags to virtual machines, for example, use the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
## PowerShell ### Apply tags
-Azure PowerShell offers two commands for applying tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You must have the `Az.Resources` module 1.12.0 or later. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) 3.6.1 or later.
+Azure PowerShell offers two commands to apply tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You need version 1.12.0 or later of the `Az.Resources` module. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) version 3.6.1 or later.
-The `New-AzTag` replaces all tags on the resource, resource group, or subscription. When calling the command, pass in the resource ID of the entity you wish to tag.
+The `New-AzTag` command replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
The following example applies a set of tags to a storage account:
Properties :
Status Normal ```
-If you run the command again but this time with different tags, notice that the earlier tags are removed.
+If you run the command again, but this time with different tags, notice that the earlier tags disappear.
```azurepowershell-interactive $tags = @{"Team"="Compliance"; "Environment"="Production"}
$tags = @{"Dept"="Finance"; "Status"="Normal"}
Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Merge ```
-Notice that the two new tags were added to the two existing tags.
+Notice that the set of tags grows with the addition of the two new tags.
```output Properties :
Properties :
Environment Production ```
-Each tag name can have only one value. If you provide a new value for a tag, the old value is replaced even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
+Each tag name can have only one value. If you provide a new value for a tag, it replaces the old value even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
```azurepowershell-interactive $tags = @{"Status"="Green"}
Properties :
Environment Production ```
-When you set the `-Operation` parameter to `Replace`, the existing tags are replaced by the new set of tags.
+When you set the `-Operation` parameter to `Replace`, the new set of tags replaces the existing tags.
```azurepowershell-interactive $tags = @{"Project"="ECommerce"; "CostCenter"="00123"; "Team"="Web"}
Properties :
Project ECommerce ```
-The same commands also work with resource groups or subscriptions. You pass in the identifier for the resource group or subscription you want to tag.
+The same commands also work with resource groups or subscriptions. Pass in the identifier of the resource group or subscription you want to tag.
To add a new set of tags to a resource group, use:
$resource | ForEach-Object { Update-AzTag -Tag @{ "Dept"="IT"; "Environment"="Te
### List tags
-To get the tags for a resource, resource group, or subscription, use the [Get-AzTag](/powershell/module/az.resources/get-aztag) command and pass in the resource ID for the entity.
+To get the tags for a resource, resource group, or subscription, use the [Get-AzTag](/powershell/module/az.resources/get-aztag) command and pass the resource ID of the entity.
To see the tags for a resource, use:
To get resource groups that have a specific tag name and value, use:
### Remove tags
-To remove specific tags, use `Update-AzTag` and set `-Operation` to `Delete`. Pass in the tags you want to delete.
+To remove specific tags, use `Update-AzTag` and set `-Operation` to `Delete`. Pass in the tags you want to delete.
```azurepowershell-interactive $removeTags = @{"Project"="ECommerce"; "Team"="Web"}
Remove-AzTag -ResourceId "/subscriptions/$subscription"
### Apply tags
-Azure CLI offers two commands for applying tags: [az tag create](/cli/azure/tag#az-tag-create) and [az tag update](/cli/azure/tag#az-tag-update). You must have Azure CLI 2.10.0 or later. You can check your version with `az version`. To update or install, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+Azure CLI offers two commands to apply tags: [az tag create](/cli/azure/tag#az-tag-create) and [az tag update](/cli/azure/tag#az-tag-update). You need Azure CLI version 2.10.0 or later. You can check your version with `az version`. To update or install it, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-The `az tag create` replaces all tags on the resource, resource group, or subscription. When calling the command, pass in the resource ID of the entity you wish to tag.
+The `az tag create` command replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
The following example applies a set of tags to a storage account:
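A hedged sketch of what such a command can look like; the resource group and storage account names are placeholders, and the resource ID is retrieved first with `az resource show`:

```azurecli-interactive
# Get the resource ID of the storage account (placeholder names).
resource=$(az resource show --resource-group demoGroup --name demostorage --resource-type Microsoft.Storage/storageAccounts --query "id" --output tsv)

# Replace all tags on the resource with this set.
az tag create --resource-id $resource --tags Dept=Finance Status=Normal
```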
When the command completes, notice that the resource has two tags.
}, ```
-If you run the command again but this time with different tags, notice that the earlier tags are removed.
+If you run the command again, but this time with different tags, notice that the earlier tags disappear.
```azurecli-interactive az tag create --resource-id $resource --tags Team=Compliance Environment=Production
To add tags to a resource that already has tags, use `az tag update`. Set the `-
az tag update --resource-id $resource --operation Merge --tags Dept=Finance Status=Normal ```
-Notice that the two new tags were added to the two existing tags.
+Notice that the set of tags grows with the addition of the two new tags.
```output "properties": {
Notice that the two new tags were added to the two existing tags.
}, ```
-Each tag name can have only one value. If you provide a new value for a tag, the old value is replaced even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
+Each tag name can have only one value. If you provide a new value for a tag, the new value replaces the old value even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
```azurecli-interactive az tag update --resource-id $resource --operation Merge --tags Status=Green
az tag update --resource-id $resource --operation Merge --tags Status=Green
}, ```
-When you set the `--operation` parameter to `Replace`, the existing tags are replaced by the new set of tags.
+When you set the `--operation` parameter to `Replace`, the new set of tags replaces the existing tags.
```azurecli-interactive az tag update --resource-id $resource --operation Replace --tags Project=ECommerce CostCenter=00123 Team=Web
Only the new tags remain on the resource.
}, ```
-The same commands also work with resource groups or subscriptions. You pass in the identifier for the resource group or subscription you want to tag.
+The same commands also work with resource groups or subscriptions. Pass in the identifier of the resource group or subscription you want to tag.
To add a new set of tags to a resource group, use:
az tag update --resource-id /subscriptions/$sub --operation Merge --tags Team="W
### List tags
-To get the tags for a resource, resource group, or subscription, use the [az tag list](/cli/azure/tag#az-tag-list) command and pass in the resource ID for the entity.
+To get the tags for a resource, resource group, or subscription, use the [az tag list](/cli/azure/tag#az-tag-list) command and pass the resource ID of the entity.
To see the tags for a resource, use:
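For example, a minimal sketch that reuses the `$resource` ID variable from earlier in this article:

```azurecli-interactive
az tag list --resource-id $resource
```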
az group list --tag Dept=Finance
### Remove tags
-To remove specific tags, use `az tag update` and set `--operation` to `Delete`. Pass in the tags you want to delete.
+To remove specific tags, use `az tag update` and set `--operation` to `Delete`. Pass in the tags you want to delete.
```azurecli-interactive az tag update --resource-id $resource --operation Delete --tags Project=ECommerce Team=Web ```
-The specified tags are removed.
+You've removed the specified tags.
```output "properties": {
az tag delete --resource-id $resource
### Handling spaces
-If your tag names or values include spaces, enclose them in double quotes.
+If your tag names or values include spaces, enclose them in quotation marks.
```azurecli-interactive az tag update --resource-id $group --operation Merge --tags "Cost Center"=Finance-1222 Location="West US"
az tag update --resource-id $group --operation Merge --tags "Cost Center"=Financ
## ARM templates
-You can tag resources, resource groups, and subscriptions during deployment with an Azure Resource Manager template (ARM template).
+You can tag resources, resource groups, and subscriptions during deployment with an ARM template.
> [!NOTE] > The tags you apply through an ARM template or Bicep file overwrite any existing tags.
resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
### Apply an object
-You can define an object parameter that stores several tags, and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that is applied to the tag element.
+You can define an object parameter that stores several tags and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that's applied to the tag element.
# [JSON](#tab/json)
resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
### Apply tags from resource group
-To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When getting the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
+To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When you get the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
# [JSON](#tab/json)
resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
### Apply tags to resource groups or subscriptions
-You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. The tags are applied to the target resource group or subscription for the deployment. Each time you deploy the template you replace any tags there were previously applied.
+You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. The tags apply to the target resource group or subscription of the deployment. Each time you deploy the template, you replace any previous tags.
# [JSON](#tab/json)
resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
-To apply the tags to a resource group, use either PowerShell or Azure CLI. Deploy to the resource group that you want to tag.
+To apply the tags to a resource group, use either Azure PowerShell or Azure CLI. Deploy to the resource group that you want to tag.
```azurepowershell-interactive New-AzResourceGroupDeployment -ResourceGroupName exampleGroup -TemplateFile https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
To work with tags through the Azure REST API, use:
## SDKs
-For samples of applying tags with SDKs, see:
+For examples of applying tags with SDKs, see:
* [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/resourcemanager/Azure.ResourceManager/samples/Sample2_ManagingResourceGroups.md) * [Java](https://github.com/Azure-Samples/resources-java-manage-resource-group/blob/master/src/main/java/com/azure/resourcemanager/resources/samples/ManageResourceGroup.java)
For samples of applying tags with SDKs, see:
## Inherit tags
-Tags applied to the resource group or subscription aren't inherited by the resources. To apply tags from a subscription or resource group to the resources, see [Azure Policies - tags](tag-policies.md).
+Resources don't inherit the tags you apply to a resource group or a subscription. To apply tags from a subscription or resource group to the resources, see [Azure Policies - tags](tag-policies.md).
## Tags and billing
-You can use tags to group your billing data. For example, if you're running multiple VMs for different organizations, use the tags to group usage by cost center. You can also use tags to categorize costs by runtime environment, such as the billing usage for VMs running in the production environment.
+You can use tags to group your billing data. If you're running multiple VMs for different organizations, for example, use the tags to group usage by cost center. You can also use tags to categorize costs by runtime environment, such as the billing usage for VMs running in the production environment.
-You can retrieve information about tags by downloading the usage file, a comma-separated values (CSV) file available from the Azure portal. For more information, see [Download or view your Azure billing invoice and daily usage data](../../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md). For services that support tags with billing, the tags appear in the **Tags** column.
+You can retrieve information about tags by downloading the usage file available from the Azure portal. For more information, see [Download or view your Azure billing invoice and daily usage data](../../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md). For services that support tags with billing, the tags appear in the **Tags** column.
For REST API operations, see [Azure Billing REST API Reference](/rest/api/billing/).
For REST API operations, see [Azure Billing REST API Reference](/rest/api/billin
The following limitations apply to tags: * Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
-* Each resource, resource group, and subscription can have a maximum of 50 tag name/value pairs. If you need to apply more tags than the maximum allowed number, use a JSON string for the tag value. The JSON string can contain many values that are applied to a single tag name. A resource group or subscription can contain many resources that each have 50 tag name/value pairs.
-* The tag name is limited to 512 characters, and the tag value is limited to 256 characters. For storage accounts, the tag name is limited to 128 characters, and the tag value is limited to 256 characters.
-* Tags can't be applied to classic resources such as Cloud Services.
-* Azure IP Groups and Azure Firewall Policies don't support PATCH operations, which means they don't support updating tags through the portal. Instead, use the update commands for those resources. For example, you can update tags for an IP group with the [az network ip-group update](/cli/azure/network/ip-group#az-network-ip-group-update) command.
+* Each resource, resource group, and subscription can have a maximum of 50 tag name-value pairs. If you need to apply more tags than the maximum allowed number, use a JSON string for the tag value. The JSON string can contain many values that you apply to a single tag name. A resource group or subscription can contain many resources that each have 50 tag name-value pairs.
+* The tag name has a limit of 512 characters and the tag value has a limit of 256 characters. For storage accounts, the tag name has a limit of 128 characters and the tag value has a limit of 256 characters.
+* Classic resources such as Cloud Services don't support tags.
+* Azure IP Groups and Azure Firewall Policies don't support PATCH operations, so you can't update their tags through the portal. Instead, use the update commands for those resources. You can update tags for an IP group, for example, with the [az network ip-group update](/cli/azure/network/ip-group#az-network-ip-group-update) command.
* Tag names can't contain these characters: `<`, `>`, `%`, `&`, `\`, `?`, `/` > [!NOTE]
- > * Azure DNS zones don't support the use of spaces in the tag or a tag that starts with a number. Azure DNS tag names do not support special and unicode characters. The value can contain all characters.
+ > * Azure Domain Name System (DNS) zones don't support the use of spaces in the tag or a tag that starts with a number. Azure DNS tag names don't support special and unicode characters. The value can contain all characters.
> > * Traffic Manager doesn't support the use of spaces, `#` or `:` in the tag name. The tag name can't start with a number. >
The following limitations apply to tags:
> > * The following Azure resources only support 15 tags: > * Azure Automation
- > * Azure CDN
+ > * Azure Content Delivery Network (CDN)
> * Azure DNS (Zone and A records) > * Azure Private DNS (Zone, A records, and virtual network link)
azure-signalr Signalr Quickstart Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-dotnet-core.md
The code for this tutorial is available for download in the [AzureSignalR-sample
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note-dotnet.md)]
+Ready to start?
+
+> [!div class="nextstepaction"]
+> [Step by step build](#prerequisites)
+
+> [!div class="nextstepaction"]
+> [Try chat demo now](https://asrs-simplechat-live-demo.azurewebsites.net/)
+ ## Prerequisites * Install the [.NET Core SDK](https://dotnet.microsoft.com/download).
azure-signalr Signalr Tutorial Build Blazor Server Chat App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-build-blazor-server-chat-app.md
description: In this tutorial, you learn how to build and modify a Blazor Server
Previously updated : 09/09/2020 Last updated : 05/22/2022 ms.devlang: csharp
This tutorial shows you how to build and modify a Blazor Server app. You'll lear
> * Quick-deploy to Azure App Service in Visual Studio. > * Migrate from local SignalR to Azure SignalR Service.
+Ready to start?
+
+> [!div class="nextstepaction"]
+> [Step by step build](#prerequisites)
+
+> [!div class="nextstepaction"]
+> [Try Blazor demo now](https://asrs-blazorchat-live-demo.azurewebsites.net/chatroom)
+ ## Prerequisites * Install [.NET Core 3.0 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.0) (Version >= 3.0.100)
azure-web-pubsub Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/overview.md
There are many different ways to program with Azure Web PubSub service, as some
- **Use provided SDKs to manage the WebSocket connections in self-hosted app servers** - Azure Web PubSub service provides SDKs in C#, JavaScript, Java, and Python to manage the WebSocket connections easily, including broadcasting messages to the connections, adding connections to groups, or closing the connections. - **Send messages from server to clients via REST API** - Azure Web PubSub service provides a REST API to enable applications to post messages to connected clients, from any REST-capable programming language.
+## Quick start
+
+> [!div class="nextstepaction"]
+> [Play with chat demo](https://azure.github.io/azure-webpubsub/demos/chat)
+
+> [!div class="nextstepaction"]
+> [Build a chat app](tutorial-build-chat.md)
+ ## Next steps [!INCLUDE [next step](includes/include-next-step.md)]
azure-web-pubsub Quickstart Use Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-use-sdk.md
Now let's use Azure Web PubSub SDK to publish a message to the connected client.
console.log('Usage: node publish <message>'); return 1; }
- const hub = "pubsub";
+ const hub = "myHub1";
let service = new WebPubSubServiceClient(process.env.WebPubSubConnectionString, hub);
// by default it uses `application/json`, specify contentType as `text/plain` if you want plain-text
service.sendToAll(process.argv[2], { contentType: "text/plain" });
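A usage sketch, assuming the connection string is exported as the `WebPubSubConnectionString` environment variable the script reads and the script is invoked as `node publish`, as in the usage message above:

```
export WebPubSubConnectionString="<your-connection-string>"
node publish "Hello World"
```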
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md
Title: Troubleshoot the Azure Backup agent description: In this article, learn how to troubleshoot the installation and registration of the Azure Backup agent. Previously updated : 04/05/2022 Last updated : 05/31/2022
We recommend that you check the following before you start troubleshooting Micro
- You can use [Add Exclusion rules to existing policy](./backup-azure-manage-mars.md#add-exclusion-rules-to-existing-policy) to exclude unsupported, missing, or deleted files from your backup policy to ensure successful backups. -- Avoid deleting and recreating protected folders with the same names in the top-level folder. Doing so could result in the backup completing with warnings with the error *A critical inconsistency was detected, therefore changes cannot be replicated.* If you need to delete and recreate folders, then consider doing so in subfolders under the protected top-level folder.
+- Avoid deleting and recreating protected folders with the same names in the top-level folder. Doing so could result in the backup completing with warnings and the error: *A critical inconsistency was detected, therefore changes cannot be replicated.* If you need to delete and recreate folders, then consider doing so in subfolders under the protected top-level folder.
## Failed to set the encryption key for secure backups
We recommend that you check the following before you start troubleshooting Micro
| Error | Possible causes | Recommended actions | ||||
-| <br />Error 34506. The encryption passphrase stored on this computer is not correctly configured. | <li> The scratch folder is located on a volume that doesn't have enough space. <li> The scratch folder has been incorrectly moved. <li> The OnlineBackup.KEK file is missing. | <li>Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS Agent.<li>Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](./backup-azure-file-folder-backup-faq.yml).<li> Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. |
+| <br />Error 34506. The encryption passphrase stored on this computer is not correctly configured. | <li> The scratch folder is located on a volume that doesn't have enough space. <li> The scratch folder has been incorrectly moved. <li> The OnlineBackup.KEK file is missing. | <li>Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS Agent.<li>Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](./backup-azure-file-folder-backup-faq.yml).<li> Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. <li> If you've recently moved your scratch folder, ensure that the path of your scratch folder location matches the values of the registry key entries shown below: <br><br> **Registry path**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config` <br> **Registry Key**: ScratchLocation <br> **Value**: *New cache folder location* <br><br>**Registry path**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config\CloudBackupProvider` <br> **Registry Key**: ScratchLocation <br> **Value**: *New cache folder location* |
## Backups don't run according to schedule
cdn Cdn Azure Cli Create Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/scripts/cli/cdn-azure-cli-create-endpoint.md
Last updated 03/09/2021 -
+ms.devlang: azurecli
+ms.tool: azure-cli
# Create an Azure CDN profile and endpoint using the Azure CLI
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 5/11/2022 Last updated : 5/26/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
->[!NOTE]
-
->The May Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the May Guest OS. This list is subject to change.
## May 2022 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 22-05 | [5013941] | Latest Cumulative Update(LCU) | 6.44 | May 10, 2022 |
-| Rel 22-05 | [5011486] | IE Cumulative Updates | 2.123, 3.110, 4.103 | Mar 8, 2022 |
-| Rel 22-05 | [5013944] | Latest Cumulative Update(LCU) | 7.12 | May 10, 2022 |
-| Rel 22-05 | [5013952] | Latest Cumulative Update(LCU) | 5.68 | May 10, 2022 |
-| Rel 22-05 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | 2.123 | May 10, 2022 |
-| Rel 22-05 | [5012141] | .NET Framework 4.5.2 Security and Quality Rollup | 2.123 | Apr 12, 2022 |
-| Rel 22-05 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | 4.103 | May 10, 2022 |
-| Rel 22-05 | [5012142] | .NET Framework 4.5.2 Security and Quality Rollup | 4.103 | Apr 12, 2022 |
-| Rel 22-05 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | 3.110 | May 10, 2022 |
-| Rel 22-05 | [5012140] | . NET Framework 4.5.2 Security and Quality Rollup | 3.110 | Apr 12, 2022 |
-| Rel 22-05 | [5013641] | . NET Framework 3.5 and 4.7.2 Cumulative Update | 6.44 | May 10, 2022 |
-| Rel 22-05 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | 7.12 | May 10, 2022 |
-| Rel 22-05 | [5014012] | Monthly Rollup | 2.123 | May 10, 2022 |
-| Rel 22-05 | [5014017] | Monthly Rollup | 3.110 | May 10, 2022 |
-| Rel 22-05 | [5014011] | Monthly Rollup | 4.103 | May 10, 2022 |
-| Rel 22-05 | [5014027] | Servicing Stack update | 3.110 | May 10, 2022 |
-| Rel 22-05 | [5014025] | Servicing Stack update | 4.103 | May 10, 2022 |
-| Rel 22-05 | [4578013] | Standalone Security Update | 4.103 | Aug 19, 2020 |
-| Rel 22-05 | [5014026] | Servicing Stack update | 5.68 | May 10, 2022 |
-| Rel 22-05 | [5011649] | Servicing Stack update | 2.123 | Mar 8, 2022 |
-| Rel 22-05 | [4494175] | Microcode | 5.68 | Sep 1, 2020 |
-| Rel 22-05 | [4494174] | Microcode | 6.44 | Sep 1, 2020 |
+| Rel 22-05 | [5013941] | Latest Cumulative Update(LCU) | [6.44] | May 10, 2022 |
+| Rel 22-05 | [5011486] | IE Cumulative Updates | [2.123], [3.110], [4.103] | Mar 8, 2022 |
+| Rel 22-05 | [5013944] | Latest Cumulative Update(LCU) | [7.12] | May 10, 2022 |
+| Rel 22-05 | [5013952] | Latest Cumulative Update(LCU) | [5.68] | May 10, 2022 |
+| Rel 22-05 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | [2.123] | May 10, 2022 |
+| Rel 22-05 | [5012141] | .NET Framework 4.5.2 Security and Quality Rollup | [2.123] | Apr 12, 2022 |
+| Rel 22-05 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | [4.103] | May 10, 2022 |
+| Rel 22-05 | [5012142] | .NET Framework 4.5.2 Security and Quality Rollup | [4.103] | Apr 12, 2022 |
+| Rel 22-05 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | [3.110] | May 10, 2022 |
+| Rel 22-05 | [5012140] | .NET Framework 4.5.2 Security and Quality Rollup | [3.110] | Apr 12, 2022 |
+| Rel 22-05 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.44] | May 10, 2022 |
+| Rel 22-05 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | [7.12] | May 10, 2022 |
+| Rel 22-05 | [5014012] | Monthly Rollup | [2.123] | May 10, 2022 |
+| Rel 22-05 | [5014017] | Monthly Rollup | [3.110] | May 10, 2022 |
+| Rel 22-05 | [5014011] | Monthly Rollup | [4.103] | May 10, 2022 |
+| Rel 22-05 | [5014027] | Servicing Stack update | [3.110] | May 10, 2022 |
+| Rel 22-05 | [5014025] | Servicing Stack update | [4.103] | May 10, 2022 |
+| Rel 22-05 | [4578013] | Standalone Security Update | [4.103] | Aug 19, 2020 |
+| Rel 22-05 | [5014026] | Servicing Stack update | [5.68] | May 10, 2022 |
+| Rel 22-05 | [5011649] | Servicing Stack update | [2.123] | Mar 8, 2022 |
+| Rel 22-05 | [4494175] | Microcode | [5.68] | Sep 1, 2020 |
+| Rel 22-05 | [4494174] | Microcode | [6.44] | Sep 1, 2020 |
[5013941]: https://support.microsoft.com/kb/5013941 [5011486]: https://support.microsoft.com/kb/5011486
The following tables show the Microsoft Security Response Center (MSRC) updates
[5011649]: https://support.microsoft.com/kb/5011649 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174
+[2.123]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.110]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.103]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.68]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.44]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.12]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## April 2022 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 4/30/2022 Last updated : 5/26/2022 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **May 26, 2022**
+The May Guest OS has released.
+ ###### **April 30, 2022** The April Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.12_202205-01 | May 26, 2022 | Post 7.14 |
| WA-GUEST-OS-7.11_202204-01 | April 30, 2022 | Post 7.13 |
-| WA-GUEST-OS-7.10_202203-01 | March 19, 2022 | Post 7.12 |
+|~~WA-GUEST-OS-7.10_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-7.9_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-7.8_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-7.6_202112-01~~| January 10, 2022 | March 2, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.44_202205-01 | May 26, 2022 | Post 6.46 |
| WA-GUEST-OS-6.43_202204-01 | April 30, 2022 | Post 6.45 |
-| WA-GUEST-OS-6.42_202203-01 | March 19, 2022 | Post 6.44 |
+|~~WA-GUEST-OS-6.42_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-6.41_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-6.40_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-6.38_202112-01~~| January 10, 2022 | March 2, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.68_202205-01 | May 26, 2022 | Post 5.70 |
| WA-GUEST-OS-5.67_202204-01 | April 30, 2022 | Post 5.69 |
-| WA-GUEST-OS-5.66_202203-01 | March 19, 2022 | Post 5.68 |
+|~~WA-GUEST-OS-5.66_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-5.65_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-5.64_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-5.62_202112-01~~| January 10, 2022 | March 2, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.103_202205-01 | May 26, 2022 | Post 4.105 |
| WA-GUEST-OS-4.102_202204-01 | April 30, 2022 | Post 4.104 |
-| WA-GUEST-OS-4.101_202203-01 | March 19, 2022 | Post 4.103 |
+|~~WA-GUEST-OS-4.101_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-4.100_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-4.99_202201-02~~| February 11 , 2022 | March 19, 2022 | |~~WA-GUEST-OS-4.97_202112-01~~| January 10 , 2022 | March 2, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.110_202205-01 | May 26, 2022 | Post 3.112 |
| WA-GUEST-OS-3.109_202204-01 | April 30, 2022 | Post 3.111 |
-| WA-GUEST-OS-3.108_202203-01 | March 19, 2022 | Post 3.110 |
+|~~WA-GUEST-OS-3.108_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-3.107_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-3.106_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-3.104_202112-01~~| January 10, 2022 | March 2, 2022|
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.123_202205-01 | May 26, 2022 | Post 2.125 |
| WA-GUEST-OS-2.122_202204-01 | April 30, 2022 | Post 2.124 |
-| WA-GUEST-OS-2.121_202203-01 | March 19, 2022 | Post 2.123 |
+|~~WA-GUEST-OS-2.121_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-2.120_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-2.119_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-2.117_202112-01~~| January 10, 2022 | March 2, 2022 |
cloud-shell Example Terraform Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/example-terraform-bash.md
vm-linux
Last updated 11/15/2017
+ms.tool: terraform
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Next, the phoneme sequence goes into the neural acoustic model to predict acoust
Neural text-to-speech voice models are trained by using deep neural networks based on the recording samples of human voices. For more information, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911). To learn more about how a neural vocoder is trained, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
-You can adapt the neural text-to-speech engine to fit your needs. To create a custom neural voice, use [Speech Studio](https://speech.microsoft.com/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Custom Neural Voice can use text provided by the user to convert text into speech in real time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [web portal](https://speech.microsoft.com/audiocontentcreation).
+You can adapt the neural text-to-speech engine to fit your needs. To create a custom neural voice, use [Speech Studio](https://aka.ms/speechstudio/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Custom Neural Voice can use text provided by the user to convert text into speech in real time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [web portal](https://speech.microsoft.com/audiocontentcreation).
## Custom Neural Voice project types
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
Here's more information about the sequence of steps shown in the previous diagra
1. [Choose a model](how-to-custom-speech-choose-model.md) and create a Custom Speech project. Use a <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. 1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the Microsoft speech-to-text offering for your applications, tools, and products.
-1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://speech.microsoft.com/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data.
+1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data.
1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech-to-text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required. 1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended. 1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint.
cognitive-services How To Custom Commands Integrate Remote Skills https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-integrate-remote-skills.md
In this article, you will learn how to export a Custom Commands application as a remote skill.
+> [!NOTE]
+> Exporting a Custom Commands application as a remote skill is a limited preview feature.
+ ## Prerequisites > [!div class="checklist"] > * [Understanding of Bot Framework Skill](/azure/bot-service/skills-conceptual)
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
In this article, you'll learn how to deploy an endpoint for a Custom Speech mode
To create a custom endpoint, follow these steps:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Deploy models**. If this is your first endpoint, you'll notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
An endpoint can be updated to use another model that was created by the same Spe
To use a new model and redeploy the custom endpoint:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Deploy models**. 1. Select the link to an endpoint by name, and then select **Change model**. 1. Select the new model that you want the endpoint to use.
Logging data is available for export if you configured it while creating the end
To download the endpoint logs:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Deploy models**. 1. Select the link by endpoint name. 1. Under **Content logging**, select **Download log**.
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
You can test the accuracy of your custom model by creating a test. A test requir
Follow these steps to create a test:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Test models**. 1. Select **Create new test**. 1. Select **Evaluate accuracy** > **Next**.
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
If you plan to train a model with audio data, use a Speech resource in a [region
After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.md), follow these instructions to start training your model:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Train custom models**. 1. Select **Train a new model**. 1. On the **Select a baseline model** page, select a base model, and then select **Next**. If you aren't sure, select the most recent model from the top of the list.
cognitive-services How To Custom Speech Transcription Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-transcription-editor.md
Datasets in the **Training and testing dataset** tab can't be updated. You can i
To import a dataset to the Editor, follow these steps:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**. 1. Select **Import data** 1. Select datasets. You can select audio data only, audio + human-labeled data, or both. For audio-only data, you can use the default models to automatically generate machine transcription after importing to the editor.
Once a dataset has been imported to the Editor, you can start editing the datase
To edit a dataset's transcription in the Editor, follow these steps:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**. 1. Select the link to a dataset by name. 1. From the **Audio + text files** table, select the link to an audio file by name.
Datasets in the Editor can be exported to the **Training and testing dataset** t
To export datasets from the Editor, follow these steps:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**. 1. Select the link to a dataset by name. 1. Select one or more rows from the **Audio + text files** table.
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
You need audio or text data for testing the accuracy of Microsoft speech recogni
To upload your own datasets in Speech Studio, follow these steps:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Speech datasets** > **Upload data**. 1. Select the **Training data** or **Testing data** tab. 1. Select a dataset type, and then select **Next**.
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
A Speech service subscription is required before you can use Custom Neural Voice
Once you've created an Azure account and a Speech service subscription, you'll need to sign in to Speech Studio and connect your subscription. 1. Get your Speech service subscription key from the Azure portal.
-1. Sign in to [Speech Studio](https://speech.microsoft.com), and then select **Custom Voice**.
+1. Sign in to [Speech Studio](https://aka.ms/speechstudio/customvoice), and then select **Custom Voice**.
1. Select your subscription and create a speech project. 1. If you want to switch to another Speech subscription, select the **cog** icon at the top.
Content like data, models, tests, and endpoints are organized into projects in S
To create a custom voice project:
-1. Sign in to [Speech Studio](https://speech.microsoft.com).
+1. Sign in to [Speech Studio](https://aka.ms/speechstudio/customvoice).
1. Select **Text-to-Speech** > **Custom Voice** > **Create project**. See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects.
After the recordings are ready, follow [Prepare training data](how-to-custom-voi
### Training
-After you've prepared the training data, go to [Speech Studio](https://aka.ms/custom-voice) to create your custom neural voice. Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
+After you've prepared the training data, go to [Speech Studio](https://aka.ms/speechstudio/customvoice) to create your custom neural voice. Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
### Testing
cognitive-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-custom-neural-voice.md
Before you can migrate to custom neural voice, your [application](https://aka.ms
> Even without an Azure account, you can listen to voice samples in [Speech Studio](https://aka.ms/customvoice) and determine the right voice for your business needs. 1. Learn more about our [limited access policy](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and then [apply here](https://aka.ms/customneural).
-2. Once your application is approved, you will be provided with the access to the "neural" training feature. Make sure you log in to [Speech Studio](https://speech.microsoft.com) using the same Azure subscription that you provide in your application.
+2. Once your application is approved, you will be provided with the access to the "neural" training feature. Make sure you log in to [Speech Studio](https://aka.ms/speechstudio/customvoice) using the same Azure subscription that you provide in your application.
> [!IMPORTANT] > To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence. You can find the statement in multiple languages [here](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to the Speech Studio as shown below to create a voice talent profile, which is used to verify against your training data when you create a voice model. Read more about the [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here.
cognitive-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/improve-accuracy-phrase-list.md
Now try Speech Studio to see how phrase list can improve recognition accuracy.
> [!NOTE] > You may be prompted to select your Azure subscription and Speech resource, and then acknowledge billing for your region.
-1. Sign in to [Speech Studio](https://speech.microsoft.com/).
-1. Select **Real-time Speech-to-text**.
+1. Go to **Real-time Speech-to-text** in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool).
1. You test speech recognition by uploading an audio file or recording audio with a microphone. For example, select **record audio with a microphone** and then say "Hi Rehaan, this is Jessie from Contoso bank. " Then select the red button to stop recording. 1. You should see the transcription result in the **Test results** text box. If "Rehaan", "Jessie", or "Contoso" were recognized incorrectly, you can add the terms to a phrase list in the next step. 1. Select **Show advanced options** and turn on **Phrase list**.
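The same phrase-list boost can be applied from code with the Speech SDK. Here's a minimal C# sketch using the `PhraseListGrammar` helper; the key, region, and phrases are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class PhraseListExample
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        using var recognizer = new SpeechRecognizer(config);

        // Bias recognition toward domain terms that would otherwise be misrecognized.
        var phraseList = PhraseListGrammar.FromRecognizer(recognizer);
        phraseList.AddPhrase("Rehaan");
        phraseList.AddPhrase("Jessie");
        phraseList.AddPhrase("Contoso");

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}
```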
cognitive-services Quickstart Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstart-custom-commands-application.md
At this time, Custom Commands supports speech subscriptions created in regions t
## Go to the Speech Studio for Custom Commands
-1. In a web browser, go to [Speech Studio](https://speech.microsoft.com/).
+1. In a web browser, go to [Speech Studio](https://aka.ms/speechstudio/customcommands).
1. Enter your credentials to sign in to the portal. The default view is your list of Speech subscriptions.
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
# Speech service supported regions
-The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs. You can perform custom configurations to your speech experience, for all regions, at the [Speech Studio](https://speech.microsoft.com).
+The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs. You can perform custom configurations to your speech experience, for all regions, at the [Speech Studio](https://aka.ms/speechstudio/).
Keep in mind the following points:
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
Datasets for customer-created data assets, such as customized speech models, cus
While some customers use our default endpoints to transcribe audio or standard voices for speech synthesis, other customers create assets for customization.
-These assets are backed up regularly and automatically by the repositories themselves, so **no data loss will occur** if a region becomes unavailable. However, you must take steps to ensure service continuity in the event of a region outage.
+These assets are backed up regularly and automatically by the repositories themselves, so **no data loss will occur** if a region becomes unavailable. However, you must take steps to ensure service continuity if there's a region outage.
## How to monitor service availability
-If you use our default endpoints, you should configure your client code to monitor for errors, and if errors persist, be prepared to re-direct to another region of your choice where you have a service subscription.
+If you use the default endpoints, you should configure your client code to monitor for errors. If errors persist, be prepared to redirect to another region where you have a service subscription.
Follow these steps to configure your client to monitor for errors:
3. From Azure portal, create Speech Service resources for each region. - If you have set a specific quota, you may also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md).
-4. Note that each region has its own STS token service. For the primary region and any backup regions your client configuration file needs to know the:
+4. Each region has its own STS token service. For the primary region and any backup regions your client configuration file needs to know the:
- Regional Speech service endpoints - [Regional subscription key and the region code](./rest-speech-to-text.md)
-5. Configure your code to monitor for connectivity errors (typically connection timeouts and service unavailability errors). Here is sample code in C#: [GitHub: Adding Sample for showing a possible candidate for switching regions](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/fa6428a0837779cbeae172688e0286625e340942/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L965).
+5. Configure your code to monitor for connectivity errors (typically connection timeouts and service unavailability errors). Here's sample code in C#: [GitHub: Adding Sample for showing a possible candidate for switching regions](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/fa6428a0837779cbeae172688e0286625e340942/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L965).
1. Since networks experience transient errors, retry on isolated connectivity issues. 2. If errors persist, redirect traffic to the new STS token service and Speech service endpoint. (For Text-to-Speech, reference sample code: [GitHub: TTS public voice switching region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L880).)
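As a rough illustration of the retry-then-redirect pattern described in these steps (not a substitute for the linked sample), the following C# sketch tries the primary region, retries once on failure, and then falls back to a backup region. Keys and region names are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class RegionFailoverExample
{
    // Primary region first, then backups; keys and regions are placeholders.
    static readonly (string Key, string Region)[] Regions =
    {
        ("PrimaryKey", "westus2"),
        ("SecondaryKey", "eastus"),
    };

    static async Task<string> RecognizeWithFailoverAsync()
    {
        foreach (var (key, region) in Regions)
        {
            // Retry once per region for transient errors; persistent failure moves on.
            for (int attempt = 1; attempt <= 2; attempt++)
            {
                var config = SpeechConfig.FromSubscription(key, region);
                using var recognizer = new SpeechRecognizer(config);
                var result = await recognizer.RecognizeOnceAsync();

                if (result.Reason == ResultReason.RecognizedSpeech)
                {
                    return result.Text;
                }
                if (result.Reason == ResultReason.Canceled)
                {
                    var details = CancellationDetails.FromResult(result);
                    Console.WriteLine($"{region}, attempt {attempt}: {details.Reason}");
                }
            }
        }
        throw new InvalidOperationException("Recognition failed in all configured regions.");
    }

    static async Task Main() => Console.WriteLine(await RecognizeWithFailoverAsync());
}
```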
-The recovery from regional failures for this usage type can be instantaneous and at a very low cost. All that is required is the development of this functionality on the client side. The data loss that will incur assuming no backup of the audio stream will be minimal.
+The recovery from regional failures for this usage type can be instantaneous and at a low cost. All that's required is developing this functionality on the client side. Assuming no backup of the audio stream, the data loss incurred will be minimal.
## Custom endpoint recovery
-Data assets, models or deployments in one region cannot be made visible or accessible in any other region.
+Data assets, models or deployments in one region can't be made visible or accessible in any other region.
You should create Speech Service resources in both a main and a secondary region by following the same steps as used for default endpoints. ### Custom Speech
-Custom Speech Service does not support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
+Custom Speech Service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
1. Create your custom model in one main region (Primary). 2. Run the [Model Copy API](https://eastus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to replicate the custom model to all prepared regions (Secondary).
Custom Speech Service does not support automatic failover. We suggest the follow
- If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md). 4. Configure your client to fail over on persistent errors as with the default endpoints usage.
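For step 2 above, the replication is a single REST call against the primary region. The C# sketch below is only an illustration: the `/speechtotext/v3.0/models/{id}/copyto` path and the `targetSubscriptionKey` payload are assumptions based on the linked v3.0 operation, so verify them against that reference before use.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CopyModelExample
{
    static async Task Main()
    {
        // Placeholders. The URL path and JSON body are assumptions -- confirm against
        // the Speech to Text v3.0 CopyModelToSubscription reference linked above.
        var primaryRegion = "westus2";
        var modelId = "YourCustomModelId";
        var primaryKey = "PrimarySubscriptionKey";
        var secondaryKey = "SecondarySubscriptionKey";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", primaryKey);

        var url = $"https://{primaryRegion}.api.cognitive.microsoft.com/speechtotext/v3.0/models/{modelId}/copyto";
        var body = new StringContent(
            $"{{ \"targetSubscriptionKey\": \"{secondaryKey}\" }}",
            Encoding.UTF8,
            "application/json");

        var response = await client.PostAsync(url, body);
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}
```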
-Your client code can monitor availability of your deployed models in your primary region, and redirect their audio traffic to the secondary region when the primary fails. If you do not require real-time failover, you can still follow these steps to prepare for a manual failover.
+Your client code can monitor availability of your deployed models in your primary region, and redirect their audio traffic to the secondary region when the primary fails. If you don't require real-time failover, you can still follow these steps to prepare for a manual failover.
#### Offline failover
-If you do not require real-time failover you can decide to import your data, create and deploy your models in the secondary region at a later time with the understanding that these tasks will take time to complete.
+If you don't require real-time failover, you can import your data and create and deploy your models in the secondary region at a later time, with the understanding that these tasks will take time to complete.
#### Failover time requirements
This section provides general guidance about timing. The times were recorded to
- Model copy API call: **10 mins** - Client code reconfiguration and deployment: **Depending on the client system**
-It is nonetheless advisable to create keys for a primary and secondary region for production models with real-time requirements.
+It's nonetheless advisable to create keys for a primary and secondary region for production models with real-time requirements.
### Custom Voice
-Custom Voice does not support automatic failover. Handle real-time synthesis failures with these two options.
+Custom Voice doesn't support automatic failover. Handle real-time synthesis failures with these two options.
**Option 1: Fail over to public voice in the same region.**
Check the [public voices available](./language-support.md#prebuilt-neural-voices
**Option 2: Fail over to custom voice on another region.** 1. Create and deploy your custom voice in one main region (primary).
-2. Copy your custom voice model to another region (the secondary region) in [Speech Studio](https://speech.microsoft.com).
+2. Copy your custom voice model to another region (the secondary region) in [Speech Studio](https://aka.ms/speechstudio/).
3. Go to Speech Studio and switch to the Speech resource in the secondary region. Load the copied model and create a new endpoint. - Voice model deployment usually finishes **in 3 minutes**.
- - Note: additional endpoint is subjective to additional charges. [Check the pricing for model hosting here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+ - Each endpoint is subject to extra charges. [Check the pricing for model hosting here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
4. Configure your client to fail over to the secondary region. See sample code in C#: [GitHub: custom voice failover to secondary region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L920). ### Speaker Recognition
-Speaker Recognition uses [Azure paired regions](../../availability-zones/cross-region-replication-azure.md) to automatically failover operations. Speaker enrollments and voice signatures are backed up regularly to prevent data loss and to be used in case of an outage.
+Speaker Recognition uses [Azure paired regions](../../availability-zones/cross-region-replication-azure.md) to automatically fail over operations. Speaker enrollments and voice signatures are backed up regularly to prevent data loss and to be used if there's an outage.
-During an outage, Speaker Recognition service will automatically failover to a paired region and use the backed up data to continue processing requests until the main region is back online.
+During an outage, Speaker Recognition service will automatically fail over to a paired region and use the backed-up data to continue processing requests until the main region is back online.
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
If you have multiple phrases to add, call `.addPhrase()` for each phrase to add
# [Custom speech-to-text](#tab/cstt)
-The custom speech-to-text container relies on a Custom Speech model. The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://speech.microsoft.com/customspeech).
+The custom speech-to-text container relies on a Custom Speech model. The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://aka.ms/speechstudio/customspeech).
The custom speech **Model ID** is required to run the container. For more information about how to get the model ID, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
How to get information for the base model:
How to get information for the custom model:
-1. Go to the [Speech Studio](https://speech.microsoft.com/) portal.
+1. Go to the [Speech Studio](https://aka.ms/speechstudio/customspeech) portal.
1. Sign in if necessary, and go to **Custom Speech**. 1. Select your project, and go to **Deployment**. 1. Select the required endpoint.
You aren't able to see the existing value of the concurrent request limit parame
To create an increase request, you provide your deployment region and the custom endpoint ID. To get it, perform the following actions:
-1. Go to the [Speech Studio](https://speech.microsoft.com/) portal.
+1. Go to the [Speech Studio](https://aka.ms/speechstudio/customvoice) portal.
1. Sign in if necessary, and go to **Custom Voice**. 1. Select your project, and go to **Deployment**. 1. Select the required endpoint.
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
# What is Speech Studio?
-[Speech Studio](https://speech.microsoft.com) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
+[Speech Studio](https://aka.ms/speechstudio/) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
## Speech Studio features In Speech Studio, the following Speech service features are available as project types:
-* **Real-time speech-to-text**: Quickly test speech-to-text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech-to-text works on your audio samples. To explore the full functionality, see [What is speech-to-text?](speech-to-text.md).
+* [Real-time speech-to-text](https://aka.ms/speechstudio/speechtotexttool): Quickly test speech-to-text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech-to-text works on your audio samples. To explore the full functionality, see [What is speech-to-text?](speech-to-text.md).
-* **Custom Speech**: Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md).
+* [Custom Speech](https://aka.ms/speechstudio/customspeech): Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md).
-* **Pronunciation assessment**: Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
+* [Pronunciation assessment](https://aka.ms/speechstudio/pronunciationassessment): Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
-* **Voice Gallery**: Build apps and services that speak naturally. Choose from more than 170 voices in over 70 languages and variants. Bring your scenarios to life with highly expressive and human-like neural voices.
+* [Voice Gallery](https://aka.ms/speechstudio/voicegallery): Build apps and services that speak naturally. Choose from more than 170 voices in over 70 languages and variants. Bring your scenarios to life with highly expressive and human-like neural voices.
-* **Custom Voice**: Create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
+* [Custom Voice](https://aka.ms/speechstudio/customvoice): Create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
-* **Audio Content Creation**: Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots, with the easy-to-use [Audio Content Creation](how-to-audio-content-creation.md) tool. With Speech Studio, you can export these audio files to use in your applications.
+* [Audio Content Creation](https://aka.ms/speechstudio/audiocontentcreation): Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots, with the easy-to-use [Audio Content Creation](how-to-audio-content-creation.md) tool. With Speech Studio, you can export these audio files to use in your applications.
-* **Custom Keyword**: A custom keyword is a word or short phrase that you can use to voice-activate a product. You create a custom keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
+* [Custom Keyword](https://aka.ms/speechstudio/customkeyword): A custom keyword is a word or short phrase that you can use to voice-activate a product. You create a custom keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
-* **Custom Commands**: Easily build rich, voice-command apps that are optimized for voice-first interaction experiences. Custom Commands provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively lower complexity. The feature helps you focus on building the best solution for your voice-command scenarios. For more information, see the [Develop Custom Commands applications](how-to-develop-custom-commands-application.md) guide. Also see [Integrate with a client application by using the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
+* [Custom Commands](https://aka.ms/speechstudio/customcommands): Easily build rich, voice-command apps that are optimized for voice-first interaction experiences. Custom Commands provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively lower complexity. The feature helps you focus on building the best solution for your voice-command scenarios. For more information, see the [Develop Custom Commands applications](how-to-develop-custom-commands-application.md) guide. Also see [Integrate with a client application by using the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
## Next steps
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/voice-assistants.md
The first step in creating a voice assistant is to decide what you want it to do
| If you want... | Consider using... | Examples | |-||-| |Open-ended conversation with robust skills integration and full deployment control | Azure Bot Service bot with [Direct Line Speech](direct-line-speech.md) channel | <ul><li>"I need to go to Seattle"</li><li>"What kind of pizza can I order?"</li></ul>
-|Voice-command or simple task-oriented conversations with simplified authoring and hosting | [Custom Commands](custom-commands.md) | <ul><li>"Turn on the overhead light"</li><li>"Make it 5 degrees warmer"</li><li>More examples at [Speech Studio](https://speech.microsoft.com/customcommands)</li></ul>
+|Voice-command or simple task-oriented conversations with simplified authoring and hosting | [Custom Commands](custom-commands.md) | <ul><li>"Turn on the overhead light"</li><li>"Make it 5 degrees warmer"</li><li>More examples at [Speech Studio](https://aka.ms/speechstudio/customcommands)</li></ul>
If you aren't yet sure what you want your assistant to do, we recommend [Direct Line Speech](direct-line-speech.md) as the best option. It offers integration with a rich set of tools and authoring aids, such as the [Virtual Assistant solution and enterprise template](/azure/bot-service/bot-builder-enterprise-template-overview) and the [QnA Maker service](../qnamaker/overview/overview.md), to build on common patterns and use your existing knowledge sources.
cognitive-services Training And Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/training-and-model.md
Title: "Legacy: What are trainings and models? - Custom Translator"
+ Title: "Legacy: What are trainings and modeling? - Custom Translator"
description: A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive data sets are required: training dataset, tuning dataset, and testing dataset.
#Customer intent: As a Custom Translator user, I want to understand the concept of a model and training, so that I can efficiently use the training, tuning, and testing datasets that help me build a translation model.
-# What are trainings and models?
+# What are training and modeling?
A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive document types are required: training, tuning, and testing. A dictionary document type can also be provided. For more information, _see_ [Sentence alignment](./sentence-alignment.md#suggested-minimum-number-of-sentences).
The test data should include parallel documents where the target language senten
You don't need more than 2,500 sentences as the testing data. When you let the system choose the testing set automatically, it will use a random subset of sentences from your bilingual training documents, and exclude these sentences from the training material itself.
-You can view the custom translations of the testing set, and compare them to the translations provided in your testing set, by navigating to the test tab within a model.
+You can view the custom translations of the testing set, and compare them to the translations provided in your testing set, by navigating to the test tab within a model.
cognitive-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/use-asynchronously.md
Previously updated : 12/03/2021 Last updated : 05/27/2022
The Language service enables you to send API requests asynchronously, using eith
Currently, the following features are available to be used asynchronously: * Entity linking
-* Extractive summarization
+* Document summarization
+* Conversation summarization
* Key phrase extraction * Language detection * Named Entity Recognition (NER)
-* Personally Identifiable Information (PII) detection
+* Customer content detection
* Sentiment analysis and opinion mining * Text Analytics for health
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Document summarization supports the following features:
This documentation contains the following article types: * [**Quickstarts**](quickstart.md?pivots=rest-api&tabs=conversation-summarization) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to/document-summarization.md) contain instructions for using the service in more specific or customized ways.
+* [**How-to guides**](how-to/conversation-summarization.md) contain instructions for using the service in more specific or customized ways.
Conversation summarization is a broad topic, consisting of several approaches to represent relevant information in text. The conversation summarization feature described in this documentation enables you to use abstractive text summarization to produce a summary of issues and resolutions in transcripts of web chats and service call transcripts between customer-service agents, and your customers.
cognitive-services Concept Active Inactive Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-active-inactive-events.md
Title: Active and inactive events - Personalizer description: This article discusses the use of active and inactive events within the Personalizer service.--++ ms.
cognitive-services Concept Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-active-learning.md
Title: Learning policy - Personalizer description: Learning settings determine the *hyperparameters* of the model training. Two models of the same data that are trained on different learning settings will end up different.--++ ms.
cognitive-services Concept Apprentice Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-apprentice-mode.md
Title: Apprentice mode - Personalizer description: Learn how to use apprentice mode to gain confidence in a model without changing any code.--++ ms.
Last updated 05/01/2020
# Use Apprentice mode to train Personalizer without affecting your existing application
-Due to the nature of **real-world** Reinforcement Learning, a Personalizer model can only be trained in a production environment. When deploying a new use case, the Personalizer model is not performing efficiently because it takes time for the model to be sufficiently trained. **Apprentice mode** is a learning behavior that eases this situation and allows you to gain confidence in the model, without the developer changing any code.
+Due to the nature of **real-world** Reinforcement Learning, a Personalizer model can only be trained in a production environment. When deploying a new use case, the Personalizer model isn't performing efficiently because it takes time for the model to be sufficiently trained. **Apprentice mode** is a learning behavior that eases this situation and allows you to gain confidence in the model, without the developer changing any code.
[!INCLUDE [Important Blue Box - Apprentice mode pricing tier](./includes/important-apprentice-mode.md)]
Apprentice mode gives you trust in the Personalizer service and its machine lear
The two main reasons to use Apprentice mode are:
-* Mitigating **Cold Starts**: Apprentice mode helps manage and assess the cost of a "new" model's learning time - when it is not returning the best action and not achieved a satisfactory level of effectiveness of around 60-80%.
+* Mitigating **Cold Starts**: Apprentice mode helps manage and assess the cost of a "new" model's learning time, when it isn't yet returning the best action and hasn't achieved a satisfactory level of effectiveness of around 60-80%.
* **Validating Action and Context Features**: Features sent in actions and context may be inadequate or inaccurate - too little, too much, incorrect, or too specific to train Personalizer to attain the ideal effectiveness rate. Use [feature evaluations](concept-feature-evaluation.md) to find and fix issues with features. ## When should you use Apprentice mode? Use Apprentice mode to train Personalizer to improve its effectiveness through the following scenarios while leaving the experience of your users unaffected by Personalizer:
-* You are implementing Personalizer in a new use case.
-* You have significantly changed the features you send in Context or Actions.
-* You have significantly changed when and how you calculate rewards.
+* You're implementing Personalizer in a new use case.
+* You've significantly changed the features you send in Context or Actions.
+* You've significantly changed when and how you calculate rewards.
-Apprentice mode is not an effective way of measuring the impact Personalizer is having on reward scores. To measure how effective Personalizer is at choosing the best possible action for each Rank call, use [Offline evaluations](concepts-offline-evaluation.md).
+Apprentice mode isn't an effective way of measuring the impact Personalizer is having on reward scores. To measure how effective Personalizer is at choosing the best possible action for each Rank call, use [Offline evaluations](concepts-offline-evaluation.md).
## Who should use Apprentice mode?
Apprentice mode is useful for developers, data scientists and business decision
* **Data scientists** can use Apprentice mode to validate that the features are effective for training the Personalizer models, and that the reward wait times aren't too long or too short.
-* **Business Decision Makers** can use Apprentice mode to assess the potential of Personalizer to improve results (i.e. rewards) compared to existing business logic. This allows them to make a informed decision impacting user experience, where real revenue and user satisfaction are at stake.
+* **Business Decision Makers** can use Apprentice mode to assess the potential of Personalizer to improve results (i.e. rewards) compared to existing business logic. This allows them to make an informed decision impacting user experience, where real revenue and user satisfaction are at stake.
## Comparing Behaviors - Apprentice mode and Online mode
Learning when in Apprentice mode differs from Online mode in the following ways.
|--|--|--| |Impact on User Experience|You can use existing user behavior to train Personalizer by letting it observe (not affect) what your **default action** would have been and the reward it obtained. This means your users' experience and the business results from them won't be impacted.|Display top action returned from Rank call to affect user behavior.| |Learning speed|Personalizer will learn more slowly when in Apprentice mode than when learning in Online mode. Apprentice mode can only learn by observing the rewards obtained by your **default action**, which limits the speed of learning, as no exploration can be performed.|Learns faster because it can both exploit the current model and explore for new trends.|
-|Learning effectiveness "Ceiling"|Personalizer can approximate, very rarely match, and never exceed the performance of your base business logic (the reward total achieved by the **default action** of each Rank call). This approximation cieling is reduced by exploration. For example, with exploration at 20% it is very unlikely apprentice mode performance will exceed 80%, and 60% is a reasonable target at which to graduate to online mode.|Personalizer should exceed applications baseline, and over time where it stalls you should conduct on offline evaluation and feature evaluation to continue to get improvements to the model. |
-|Rank API value for rewardActionId|The users' experience doesn't get impacted, as _rewardActionId_ is always the first action you send in the Rank request. In other words, the Rank API does nothing visible for your application during Apprentice mode. Reward APIs in your application should not change how it uses the Reward API between one mode and another.|Users' experience will be changed by the _rewardActionId_ that Personalizer chooses for your application. |
+|Learning effectiveness "Ceiling"|Personalizer can approximate, very rarely match, and never exceed the performance of your base business logic (the reward total achieved by the **default action** of each Rank call). This approximation ceiling is reduced by exploration. For example, with exploration at 20% it's very unlikely apprentice mode performance will exceed 80%, and 60% is a reasonable target at which to graduate to online mode.|Personalizer should exceed the application's baseline, and over time, where it stalls, you should conduct an offline evaluation and feature evaluation to continue to get improvements to the model. |
+|Rank API value for rewardActionId|The users' experience doesn't get impacted, as _rewardActionId_ is always the first action you send in the Rank request. In other words, the Rank API does nothing visible for your application during Apprentice mode. Reward APIs in your application shouldn't change how it uses the Reward API between one mode and another.|Users' experience will be changed by the _rewardActionId_ that Personalizer chooses for your application. |
|Evaluations|Personalizer keeps a comparison of the reward totals that your default business logic is getting, and the reward totals Personalizer would be getting if in Online mode at that point. A comparison is available in the Azure portal for that resource|Evaluate Personalizer's effectiveness by running [Offline evaluations](concepts-offline-evaluation.md), which let you compare the total rewards Personalizer has achieved against the potential rewards of the application's baseline.| A note about apprentice mode's effectiveness:
Apprentice Mode attempts to train the Personalizer model by attempting to imitat
### Scenarios where Apprentice Mode May Not be Appropriate: #### Editorially chosen Content:
-In some scenarios such as news or entertainment, the baseline item could be manually assigned by an editorial team. This means humans are using their knowledge about the broader world, and understanding of what may be appealing content, to choose specific articles or media out of a pool, and flagging them as "preferred" or "hero" articles. Because these editors are not an algorithm, and the factors considered by editors can be nuanced and not included as features of the context and actions, Apprentice mode is unlikely to be able to predict the next baseline action. In these situations you can:
+In some scenarios such as news or entertainment, the baseline item could be manually assigned by an editorial team. This means humans are using their knowledge about the broader world, and understanding of what may be appealing content, to choose specific articles or media out of a pool, and flag them as "preferred" or "hero" articles. Because these editors aren't an algorithm, and the factors considered by editors can be nuanced and not included as features of the context and actions, Apprentice mode is unlikely to be able to predict the next baseline action. In these situations you can:
-* Test Personalizer in Online Mode: Apprentice mode not predicting baselines does not imply Personalizer can't achieve as-good or even better results. Consider putting Personalizer in Online Mode for a period of time or in an A/B test if you have the infrastructure, and then run an Offline Evaluation to assess the difference.
+* Test Personalizer in Online Mode: Apprentice mode not predicting baselines doesn't imply Personalizer can't achieve as-good or even better results. Consider putting Personalizer in Online Mode for a period of time or in an A/B test if you have the infrastructure, and then run an Offline Evaluation to assess the difference.
* Add editorial considerations and recommendations as features: Ask your editors what factors influence their choices, and see if you can add those as features in your context and action. For example, editors in a media company may highlight content while a certain celebrity is in the news: This knowledge could be added as a Context feature. ### Factors that will improve and accelerate Apprentice Mode
-If apprentice mode is learning and attaining Matched rewards above zero but seems to be growing slowly (not getting to 60%..80% matched rewards within 2 weeks), it is possible that the challenge is having too little data. Taking the following steps could accelerate the learning.
+If apprentice mode is learning and attaining Matched rewards above zero but seems to be growing slowly (not getting to 60% to 80% matched rewards within two weeks), it's possible that the challenge is having too little data. Taking the following steps could accelerate the learning.
1. Adding more events with positive rewards over time: Apprentice mode will perform better in use cases where your application gets more than 100 positive rewards per day. For example, if a website rewarding a click has a 2% clickthrough rate, it should have at least 5,000 visits per day to have noticeable learning. 2. Try a reward score that is simpler and happens more frequently. For example, going from "Did users finish reading the article" to "Did users start reading the article". 3. Adding differentiating features: You can do a visual inspection of the actions in a Rank call and their features. Does the baseline action have features that are differentiated from other actions? If they look mostly the same, add more features that will make them less similar.
-4. Reducing Actions per Event: Personalizer will use the Explore % setting to discover preferences and trends. When a Rank call has more actions, the chance of an Action being chosen for exploration becomes lower. Reduce the number of actions sent in each Rank call to a smaller number, to less than 10. This can be a temporary adjustement to show that Apprentice Mode has the right data to match rewards.
+4. Reducing Actions per Event: Personalizer will use the Explore % setting to discover preferences and trends. When a Rank call has more actions, the chance of an Action being chosen for exploration becomes lower. Reduce the number of actions sent in each Rank call to a smaller number, to less than 10. This can be a temporary adjustment to show that Apprentice Mode has the right data to match rewards.
## Using Apprentice mode to train with historical data If you have a significant amount of historical data, youΓÇÖd like to use to train Personalizer, you can use Apprentice mode to replay the data through Personalizer.
-Set up the Personalizer in Apprentice Mode and create a script that calls Rank with the actions and context features from the historical data. Call the Reward API based on your calculations of the records in this data. You will need approximately 50,000 historical events to see some results but 500,000 is recommended for higher confidence in the results.
+Set up the Personalizer in Apprentice Mode and create a script that calls Rank with the actions and context features from the historical data. Call the Reward API based on your calculations of the records in this data. You'll need approximately 50,000 historical events to see some results but 500,000 is recommended for higher confidence in the results.
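A sketch of such a replay script in C# follows. The `/personalizer/v1.0/rank` and `/personalizer/v1.0/events/{eventId}/reward` paths and request shapes are assumptions based on the Personalizer REST API, and the event loader is a hypothetical stand-in for your own logs; verify both against the API reference before relying on them.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class ApprenticeReplayExample
{
    // Placeholders; the REST paths below are assumptions to verify against the Personalizer reference.
    const string Endpoint = "https://your-resource.cognitiveservices.azure.com";
    const string ApiKey = "YourPersonalizerKey";

    record HistoricalEvent(string EventId, object ContextFeatures, List<object> Actions, double Reward);

    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", ApiKey);

        foreach (var evt in LoadHistoricalEvents())
        {
            // 1. Replay the historical context and actions through Rank. The action your
            //    existing logic chose (the default action) should be listed first.
            var rankBody = JsonSerializer.Serialize(new
            {
                eventId = evt.EventId,
                contextFeatures = new[] { evt.ContextFeatures },
                actions = evt.Actions,
            });
            await client.PostAsync($"{Endpoint}/personalizer/v1.0/rank",
                new StringContent(rankBody, Encoding.UTF8, "application/json"));

            // 2. Send the reward you calculated from the historical outcome.
            var rewardBody = JsonSerializer.Serialize(new { value = evt.Reward });
            await client.PostAsync($"{Endpoint}/personalizer/v1.0/events/{evt.EventId}/reward",
                new StringContent(rewardBody, Encoding.UTF8, "application/json"));
        }
    }

    // Hypothetical loader: replace with code that reads your own historical logs.
    static IEnumerable<HistoricalEvent> LoadHistoricalEvents()
    {
        yield return new HistoricalEvent(
            EventId: Guid.NewGuid().ToString(),
            ContextFeatures: new { timeOfDay = "morning", device = "mobile" },
            Actions: new List<object>
            {
                new { id = "article-a", features = new[] { new { topic = "sports" } } },
                new { id = "article-b", features = new[] { new { topic = "finance" } } },
            },
            Reward: 1.0);
    }
}
```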
-When training from historical data, it is recommended that the data sent in (features for context and actions, their layout in the JSON used for Rank requests, and the calculation of reward in this training data set), matches the data (features and calculation of reward) available from the existing application.
+When training from historical data, it's recommended that the data sent in (features for context and actions, their layout in the JSON used for Rank requests, and the calculation of reward in this training data set), matches the data (features and calculation of reward) available from the existing application.
Offline and post-facto data tends to be more incomplete and noisier and differs in format. While training from historical data is possible, the results from doing so may be inconclusive and not a good predictor of how well Personalizer will learn, especially if the features vary between past data and the existing application.
Typically for Personalizer, when compared to training with historical data, chan
## Using Apprentice Mode versus A/B Tests
-It is only useful to do A/B tests of Personalizer treatments once it has been validated and is learning in Online mode. In Apprentice mode, only the **default action** is used, which means all users would effectively see the control experience.
+It's only useful to do A/B tests of Personalizer treatments once it has been validated and is learning in Online mode. In Apprentice mode, only the **default action** is used, which means all users would effectively see the control experience.
Even if Personalizer is just the _treatment_, the same challenge is present when validating the data is good for training Personalizer. Apprentice mode could be used instead, with 100% of traffic, and with all users getting the control (unaffected) experience.
cognitive-services Concept Auto Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-auto-optimization.md
Title: Auto-optimize - Personalizer description: This article provides a conceptual overview of the auto-optimize feature for Azure Personalizer service.--++ ms.
cognitive-services Concept Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-feature-evaluation.md
Title: Feature evaluation - Personalizer description: When you run an Evaluation in your Personalizer resource from the Azure portal, Personalizer provides information about what features of context and actions are influencing the model. --++ ms.
When you run an Evaluation in your Personalizer resource from the [Azure portal]
This is useful in order to: * Imagine additional features you could use, getting inspiration from what features are more important in the model.
-* See what features are not important, and potentially remove them or further analyze what may be affecting usage.
+* See what features aren't important, and potentially remove them or further analyze what may be affecting usage.
* Provide guidance to editorial or curation teams about new content or products worth bringing into the catalog. * Troubleshoot common problems and mistakes that happen when sending features to Personalizer.
To see feature importance results, you must run an evaluation. The evaluation cr
The resulting information about feature importance represents the current Personalizer online model. The evaluation analyzes feature importance of the model saved at the end date of the evaluation period, after undergoing all the training done during the evaluation, with the current online learning policy.
-The feature importance results do not represent other policies and models tested or created during the evaluation. The evaluation will not include features sent to Personalizer after the end of the evaluation period.
+The feature importance results don't represent other policies and models tested or created during the evaluation. The evaluation won't include features sent to Personalizer after the end of the evaluation period.
## How to interpret the feature importance evaluation
Personalizer evaluates features by creating "groups" of features that have simil
Information about each Feature includes:
-* Whether the feature comes from Context or Actions.
-* Feature Key and Value.
+* Whether the feature comes from Context or Actions
+* Feature Key and Value
-For example, an ice cream shop ordering app may see "Context.Weather:Hot" as a very important feature.
+For example, an ice cream shop ordering app may see `Context.Weather:Hot` as a very important feature.
Personalizer displays correlations of features that, when taken into account together, produce higher rewards.
-For example, you may see "Context.Weather:Hot *with* Action.MenuItem:IceCream" as well as "Context.Weather:Cold *with* Action.MenuItem:WarmTea:
+For example, you may see `Context.Weather:Hot` *with* `Action.MenuItem:IceCream` as well as `Context.Weather:Cold` *with* `Action.MenuItem:WarmTea`.
## Actions you can take based on feature evaluation
For example, you may see "Context.Weather:Hot *with* Action.MenuItem:IceCream" a
Get inspiration from the more important features in the model. For example, if you see "Context.MobileBattery:Low" in a video mobile app, you may think that connection type may also make customers choose to see one video clip over another; you could then add features about connectivity type and bandwidth into your app.
-### See what features are not important
+### See what features aren't important
-Potentially remove unimportant features or further analyze what may affect usage. Features may rank low for many reasons. One could be that genuinely the feature doesn't affect user behavior. But it could also mean that the feature is not apparent to the user.
+Potentially remove unimportant features or further analyze what may affect usage. Features may rank low for many reasons. One could be that genuinely the feature doesn't affect user behavior. But it could also mean that the feature isn't apparent to the user.
For example, a video site could see that "Action.VideoResolution=4k" is a low-importance feature, contradicting user research. The cause could be that the application doesn't even mention or show the video resolution, so users wouldn't change their behavior based on it. ### Provide guidance to editorial or curation teams
-Provide guidance about new content or products worth bringing into the catalog. Personalizer is designed to be a tool that augments human insight and teams. One way it does this is by providing information to editorial groups on what is it about products, articles or content that drives behavior. For example, the video application scenario may show that there is an important feature called "Action.VideoEntities.Cat:true", prompting the editorial team to bring in more cat videos.
+Provide guidance about new content or products worth bringing into the catalog. Personalizer is designed to be a tool that augments human insight and teams. One way it does this is by providing information to editorial groups on what it is about products, articles, or content that drives behavior. For example, the video application scenario may show that there's an important feature called "Action.VideoEntities.Cat:true", prompting the editorial team to bring in more cat videos.
### Troubleshoot common problems and mistakes
Common problems and mistakes can be fixed by changing your application code so i
Common mistakes when sending features include the following:
-* Sending personally identifiable information (PII). PII specific to one individual (such as name, phone number, credit card numbers, IP Addresses) should not be used with Personalizer. If your application needs to track users, use a non-identifying UUID or some other UserID number. In most scenarios this is also problematic.
-* With large numbers of users, it is unlikely that each user's interaction will weigh more than all the population's interaction, so sending user IDs (even if non-PII) will probably add more noise than value to the model.
-* Sending date-time fields as precise timestamps instead of featurized time values. Having features such as Context.TimeStamp.Day=Monday or "Context.TimeStamp.Hour"="13" is more useful. There will be at most 7 or 24 feature values for each. But "Context.TimeStamp":"1985-04-12T23:20:50.52Z" is so precise that there will be no way to learn from it because it will never happen again.
+* Sending personally identifiable information (PII). PII specific to one individual (such as name, phone number, credit card numbers, IP addresses) shouldn't be used with Personalizer. If your application needs to track users, use a non-identifying UUID or some other user ID; even then, per-user identifiers are problematic in most scenarios, as the next point explains.
+* With large numbers of users, it's unlikely that each user's interaction will weigh more than all the population's interaction, so sending user IDs (even if non-PII) will probably add more noise than value to the model.
+* Sending date-time fields as precise timestamps instead of featurized time values (see the sketch after this list). Having features such as `"Context.TimeStamp.Day":"Monday"` or `"Context.TimeStamp.Hour":"13"` is more useful. There will be at most 7 or 24 feature values for each. But `"Context.TimeStamp":"1985-04-12T23:20:50.52Z"` is so precise that there will be no way to learn from it because it will never happen again.
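To make the distinction concrete, here's a small C# sketch (plain .NET, no SDK calls; the method and key names are illustrative) that turns a precise timestamp into the coarse day and hour features described above:

```csharp
using System;
using System.Collections.Generic;

public static class TimeFeaturizer
{
    // Convert a precise timestamp into coarse, learnable features.
    public static IDictionary<string, string> Featurize(DateTimeOffset timestamp)
    {
        return new Dictionary<string, string>
        {
            ["Context.TimeStamp.Day"] = timestamp.DayOfWeek.ToString(), // at most 7 values
            ["Context.TimeStamp.Hour"] = timestamp.Hour.ToString()      // at most 24 values
        };
    }
}

// Example: Featurize(DateTimeOffset.Parse("1985-04-12T23:20:50.52Z"))
// yields { "Context.TimeStamp.Day": "Friday", "Context.TimeStamp.Hour": "23" }.
```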
## Next steps
cognitive-services Concept Multi Slot Personalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-multi-slot-personalization.md
Title: Multi-slot personalization description: Learn where and when to use single-slot and multi-slot personalization with the Personalizer Rank and Reward APIs. --++
cognitive-services Concept Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-rewards.md
Title: Reward score - Personalizer description: The reward score indicates how well the personalization choice, RewardActionID, resulted for the user. The value of the reward score is determined by your business logic, based on observations of user behavior. Personalizer trains its machine learning models by evaluating the rewards.--++ ms.
cognitive-services Concepts Exploration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-exploration.md
Title: Exploration - Personalizer description: With exploration, Personalizer is able to continue delivering good results, even as user behavior changes. Choosing an exploration setting is a business decision about the proportion of user interactions to explore with, in order to improve the model.--++ ms.
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
Title: "Features: Action and context - Personalizer" description: Personalizer uses features, information about actions and context, to make better ranking suggestions. Features can be very generic, or specific to an item.--++ ms.
cognitive-services Concepts Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-offline-evaluation.md
Title: Use the Offline Evaluation method - Personalizer description: This article will explain how to use offline evaluation to measure effectiveness of your app and analyze your learning loop.--++ ms.
cognitive-services Concepts Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-reinforcement-learning.md
Title: Reinforcement Learning - Personalizer description: Personalizer uses information about actions and current context to make better ranking suggestions. The information about these actions and context are attributes or properties that are referred to as features.--++ ms.
While there are many subtypes and styles of reinforcement learning, this is how
* Your application provides information about each alternative and the context of the user.
* Your application computes a _reward score_.
-Unlike some approaches to reinforcement learning, Personalizer does not require a simulation to work in. Its learning algorithms are designed to react to an outside world (versus control it) and learn from each data point with an understanding that it is a unique opportunity that cost time and money to create, and that there is a non-zero regret (loss of possible reward) if suboptimal performance happens.
+Unlike some approaches to reinforcement learning, Personalizer doesn't require a simulation to work in. Its learning algorithms are designed to react to an outside world (versus control it) and learn from each data point with an understanding that it's a unique opportunity that cost time and money to create, and that there's a non-zero regret (loss of possible reward) if suboptimal performance happens.
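As a rough illustration of that loop, here's a hedged C# sketch based on the Microsoft.Azure.CognitiveServices.Personalizer client library. The endpoint, key, action IDs, and feature values are placeholders, and the calls approximate the published quickstart pattern rather than serving as a definitive reference:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.CognitiveServices.Personalizer;
using Microsoft.Azure.CognitiveServices.Personalizer.Models;

class RankAndRewardLoop
{
    static void Main()
    {
        var client = new PersonalizerClient(
            new ApiKeyServiceClientCredentials("<your-key>")) { Endpoint = "<your-endpoint>" };

        // Alternatives (actions) with their features.
        var actions = new List<RankableAction>
        {
            new RankableAction("iceCream", new List<object> { new { temperature = "served-cold" } }),
            new RankableAction("warmTea", new List<object> { new { temperature = "served-hot" } })
        };

        // Context of the user for this interaction.
        var context = new List<object> { new { weather = "hot", timeOfDay = "afternoon" } };

        // Rank: Personalizer picks the action to show.
        RankResponse response = client.Rank(new RankRequest(actions, context));
        Console.WriteLine($"Show: {response.RewardActionId}");

        // Later, your business logic computes a reward score and sends it back.
        client.Reward(response.EventId, new RewardRequest(1.0));
    }
}
```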
## What type of reinforcement learning algorithms does Personalizer use?
The explore/exploit traffic allocation is made randomly following the percentage
John Langford coined the name Contextual Bandits (Langford and Zhang [2007]) to describe a tractable subset of reinforcement learning and has worked on a half-dozen papers improving our understanding of how to learn in this paradigm: * Beygelzimer et al. [2011]
-* Dudík et al. [2011a,b]
+* Dudík et al. [2011a, b]
* Agarwal et al. [2014, 2012]
* Beygelzimer and Langford [2009]
* Li et al. [2010]
cognitive-services Concepts Scalability Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-scalability-performance.md
Title: Scalability and Performance - Personalizer description: "High-performance and high-traffic websites and applications have two main factors to consider with Personalizer for scalability and performance: latency and training throughput."--++ ms.
Some applications require low latencies when returning a rank. Low latencies are
Personalizer works by updating a model that is retrained based on messages sent asynchronously by Personalizer after the Rank and Reward API calls. These messages are sent using an Azure Event Hub for the application.
- It is unlikely most applications will reach the maximum joining and training throughput of Personalizer. While reaching this maximum will not slow down the application, it would imply Event Hub queues are getting filled internally faster than they can be cleaned up.
+ It's unlikely most applications will reach the maximum joining and training throughput of Personalizer. While reaching this maximum won't slow down the application, it would imply event hub queues are getting filled internally faster than they can be cleaned up.
## How to estimate your throughput requirements

* Estimate the average number of bytes per ranking event by adding the lengths of the context and action JSON documents.
* Divide 20 MB/sec by this estimated average number of bytes.
-For example, if your average payload has 500 features and each is an estimated 20 characters, then each event is approximately 10kb. With these estimates, 20,000,000 / 10,000 = 2,000 events/sec, which is about 173 million events/day.
+For example, if your average payload has 500 features and each is an estimated 20 characters, then each event is approximately 10 KB. With these estimates, 20,000,000 / 10,000 = 2,000 events/sec, which is about 173 million events/day.
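That arithmetic is easy to fold into a capacity-planning script. The following self-contained C# sketch reuses the 20 MB/sec figure and the payload shape from the example above; the constants are illustrative, not service limits I'm asserting beyond what the text states:

```csharp
using System;

class ThroughputEstimate
{
    static void Main()
    {
        const double maxBytesPerSecond = 20_000_000; // 20 MB/sec from the guidance above
        const int featuresPerEvent = 500;            // illustrative payload shape
        const int charsPerFeature = 20;

        double bytesPerEvent = featuresPerEvent * charsPerFeature;  // ~10,000 bytes
        double eventsPerSecond = maxBytesPerSecond / bytesPerEvent; // ~2,000 events/sec
        double eventsPerDay = eventsPerSecond * 60 * 60 * 24;       // ~173 million/day

        Console.WriteLine($"~{eventsPerSecond:N0} events/sec, ~{eventsPerDay:N0} events/day");
    }
}
```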
-If you are reaching these limits, please contact our support team for architecture advice.
+If you're reaching these limits, please contact our support team for architecture advice.
## Next steps
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/encrypt-data-at-rest.md
Title: Personalizer service encryption of data at rest description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Personalizer, and how to enable and manage CMK. -+ Last updated 08/28/2020-+ #Customer intent: As a user of the Personalizer service, I want to learn how encryption at rest works.
cognitive-services Ethics Responsible Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/ethics-responsible-use.md
Title: Ethics and responsible use - Personalizer description: These guidelines are aimed at helping you to implement personalization in a way that helps you build trust in your company and service. Be sure to pause to research, learn and deliberate on the impact of the personalization on people's lives. When in doubt, seek guidance.--++ ms.
cognitive-services How Personalizer Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-personalizer-works.md
Title: How Personalizer Works - Personalizer description: The Personalizer _loop_ uses machine learning to build the model that predicts the top action for your content. The model is trained exclusively on your data that you sent to it with the Rank and Reward calls.--++ ms.
cognitive-services How To Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-create-resource.md
Title: Create Personalizer resource description: In this article, learn how to create a personalizer resource in the Azure portal for each feedback loop. --++ ms.
cognitive-services How To Learning Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-learning-behavior.md
Title: Configure learning behavior description: Apprentice mode gives you confidence in the Personalizer service and its machine learning capabilities, and provides metrics showing that the service is being sent information it can learn from, without risking online traffic.--++ ms.
cognitive-services How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-manage-model.md
Title: Manage model and learning settings - Personalizer description: The machine-learned model and learning settings can be exported for backup in your own source control system.--++ ms.
From the Resource management's section for **Model and learning settings**, revi
## Clear data for your learning loop

1. In the Azure portal, for your Personalizer resource, on the **Model and learning settings** page, select **Clear data**.
-1. In order to clear all data, and reset the learning loop to the original state, select all 3 check boxes.
+1. In order to clear all data, and reset the learning loop to the original state, select all three check boxes.
![In Azure portal, clear data from Personalizer resource.](./media/settings/clear-data-from-personalizer-resource.png)

|Value|Purpose|
|--|--|
- |Logged personalization and reward data.|This logging data is used in offline evaluations. Clear the data if you are resetting your resource.|
+ |Logged personalization and reward data.|This logging data is used in offline evaluations. Clear the data if you're resetting your resource.|
|Reset the Personalizer model.|This model changes on every retraining. This frequency of training is specified in **upload model frequency** on the **Configuration** page. |
- |Set the learning policy to default.|If you have changed the learning policy as part of an offline evaluation, this resets to the original learning policy.|
+ |Set the learning policy to default.|If you've changed the learning policy as part of an offline evaluation, this resets to the original learning policy.|
1. Select **Clear selected data** to begin the clearing process. Status is reported in Azure notifications, in the top-right navigation.
cognitive-services How To Multi Slot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-multi-slot.md
Title: How to use multi-slot with Personalizer description: Learn how to use multi-slot with Personalizer to improve content recommendations provided by the service. --++
cognitive-services How To Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-offline-evaluation.md
Title: How to perform offline evaluation - Personalizer description: This article will show you how to use offline evaluation to measure effectiveness of your app and analyze your learning loop.--++ ms.
cognitive-services How To Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-settings.md
Title: Configure Personalizer description: Service configuration includes how the service treats rewards, how often the service explores, how often the model is retrained, and how much data is stored.--++ ms.
cognitive-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/quickstart-personalizer-sdk.md
Title: "Quickstart: Create and use learning loop with SDK - Personalizer" description: This quickstart shows you how to create and manage your knowledge base using the Personalizer client library.--++ ms.
cognitive-services Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/terminology.md
Title: Terminology - Personalizer description: Personalizer uses terminology from reinforcement learning. These terms are used in the Azure portal and the APIs.--++ ms.
cognitive-services Tutorial Use Azure Notebook Generate Loop Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-azure-notebook-generate-loop-data.md
Title: "Tutorial: Azure Notebook - Personalizer" description: This tutorial simulates a Personalizer loop _system in an Azure Notebook, which suggests which type of coffee a customer should order. The users and their preferences are stored in a user dataset. Information about the coffee is also available and stored in a coffee dataset.--++ ms.
cognitive-services Tutorial Use Personalizer Chat Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-personalizer-chat-bot.md
Title: Use Personalizer in chat bot - Personalizer description: Customize a C# .NET chat bot with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.--++ ms.
cognitive-services Tutorial Use Personalizer Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-personalizer-web-app.md
Title: Use web app - Personalizer description: Customize a C# .NET web app with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.--++ ms.
cognitive-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/what-is-personalizer.md
Title: What is Personalizer? description: Personalizer is a cloud-based service that allows you to choose the best experience to show to your users, learning from their real-time behavior.--++ ms.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/whats-new.md
Title: What's new - Personalizer description: This article contains news about Personalizer.--++ ms.
cognitive-services Where Can You Use Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/where-can-you-use-personalizer.md
Title: Where and how to use - Personalizer description: Personalizer can be applied in any situation where your application can select the right item, action, or product to display - in order to make the experience better, achieve better business results, or improve productivity.--++ ms.
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
Title: Azure direct routing provisioning and configuration - Azure Communication Services
-description: Learn how to add a Session Border Controller and configure voice routing for Azure Communication Services direct routing
+ Title: Use direct routing to connect existing telephony service
+description: Learn how to add a Session Border Controller and configure voice routing for Azure Communication Services direct routing.
Previously updated : 06/30/2021 Last updated : 05/26/2022 +
-# Session Border Controllers and voice routing
+# Use direct routing to connect to existing telephony service
Azure Communication Services direct routing enables you to connect your existing telephony infrastructure to Azure. The article lists the high-level steps required for connecting a supported Session Border Controller (SBC) to direct routing and how voice routing works for the enabled Communication resource. [!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)]
For information about whether Azure Communication Services direct routing is the
### Configure using Azure portal

1. In the left navigation, select Direct routing under Voice Calling - PSTN and then select Configure from the Session Border Controller tab.
-1. Enter a fully qualified domain name and signaling port for the SBC.
-
-- SBC certificate must match the name; wildcard certificates are supported.-- The *.onmicrosoft.com domain canΓÇÖt be used for the FQDN of the SBC.
-For the full list of requirements, refer to [Azure direct routing infrastructure requirements](./direct-routing-infrastructure.md).
- :::image type="content" source="../media/direct-routing-provisioning/add-session-border-controller.png" alt-text="Adding Session Border Controller.":::
-- When you're done, select Next.
-If everything set up correctly, you should see exchange of OPTIONS messages between Microsoft and your Session Border Controller, user your SBC monitoring/logs to validate the connection.
+2. Enter a fully qualified domain name and signaling port for the SBC.
+ - SBC certificate must match the name; wildcard certificates are supported.
+ - The *.onmicrosoft.com domain can't be used for the FQDN of the SBC.
+
+ For the full list of requirements, refer to [Azure direct routing infrastructure requirements](./direct-routing-infrastructure.md).
+
+ :::image type="content" source="../media/direct-routing-provisioning/add-session-border-controller.png" alt-text="Screenshot of Adding Session Border Controller.":::
+
+3. When you're done, select Next.
+
+ If everything is set up correctly, you should see an exchange of OPTIONS messages between Microsoft and your Session Border Controller. Use your SBC monitoring/logs to validate the connection.
## Voice routing considerations
-Azure Communication Services direct routing has a routing mechanism that allows a call to be sent to a specific Session Border Controller (SBC) based on the called number pattern.
-When you add a direct routing configuration to a resource, all calls made from this resourceΓÇÖs instances (identities) will try a direct routing trunk first. The routing is based on a dialed number and a match in voice routes configured for the resource. If there's a match, the call goes through the direct routing trunk. If there's no match, the next step is to process the `alternateCallerId` parameter of the `callAgent.startCall` method. If the resource is enabled for Voice Calling (PSTN) and has at least one number purchased from Microsoft, the `alternateCallerId` is checked. If the `alternateCallerId` matches one of a purchased number for the resource, the call is routed through the Voice Calling (PSTN) using Microsoft infrastructure. If `alternateCallerId` parameter doesn't match any of the purchased numbers, the call will fail. The diagram below demonstrates the Azure Communication Services voice routing logic.
+Azure Communication Services direct routing has a routing mechanism that allows a call to be sent to a specific SBC based on the called number pattern.
+
+When you add a direct routing configuration to a resource, all calls made from this resource's instances (identities) will try a direct routing trunk first. The routing is based on a dialed number and a match in voice routes configured for the resource.
+
+- If there's a match, the call goes through the direct routing trunk.
+- If there's no match, the next step is to process the `alternateCallerId` parameter of the `callAgent.startCall` method.
+- If the resource is enabled for Voice Calling (PSTN) and has at least one number purchased from Microsoft, the `alternateCallerId` is checked.
+- If the `alternateCallerId` matches a purchased number for the resource, the call is routed through the Voice Calling (PSTN) using Microsoft infrastructure.
+- If `alternateCallerId` parameter doesn't match any of the purchased numbers, the call will fail.
+
+The diagram below demonstrates the Azure Communication Services voice routing logic.
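To make that decision flow concrete, here's a short conceptual C# sketch. It is not the Communication Services SDK; the types and the route/number collections are illustrative assumptions that mirror the steps listed above:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

enum CallPath { DirectRoutingTrunk, MicrosoftPstn, Failed }

static class VoiceRoutingSketch
{
    // voiceRoutePatterns: regex patterns configured for the resource (illustrative).
    // purchasedNumbers: numbers acquired from Microsoft for the resource (illustrative).
    public static CallPath Route(string dialedNumber, string alternateCallerId,
        IEnumerable<string> voiceRoutePatterns, IEnumerable<string> purchasedNumbers)
    {
        // 1. A dialed number that matches a configured voice route goes to the direct routing trunk.
        if (voiceRoutePatterns.Any(pattern => Regex.IsMatch(dialedNumber, pattern)))
            return CallPath.DirectRoutingTrunk;

        // 2. Otherwise, alternateCallerId is checked against the numbers purchased from Microsoft.
        if (purchasedNumbers.Contains(alternateCallerId))
            return CallPath.MicrosoftPstn;

        // 3. No match anywhere: the call fails.
        return CallPath.Failed;
    }
}
```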
## Voice routing examples

The following examples display voice routing in a call flow.
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
### Configure using Azure portal
-Give your Voice Route a name, specify the number pattern using regular expressions, and select SBC for that pattern.
+Give your voice route a name, specify the number pattern using regular expressions, and select SBC for that pattern.
Here are some examples of basic regular expressions:

- `^\+\d+$` - matches a telephone number with one or more digits that start with a plus
- `^\+1(\d{10})$` - matches a telephone number with ten digits after a `+1`
You can select multiple SBCs for a single pattern. In such a case, the routing a
### Delete using Azure portal
-#### To delete a Voice Route:
+#### To delete a voice route:
1. In the left navigation, go to Direct routing under Voice Calling - PSTN and then select the Voice Routes tab.
1. Select route or routes you want to delete using a checkbox.
1. Select Remove.
communication-services Virtual Visits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md
Previously updated : 01/10/2022 Last updated : 05/24/2022
Azure and Teams are interoperable. This interoperability gives organizations cho
- **Microsoft 365 + Azure hybrid.** Combine Microsoft 365 Teams and Bookings with a custom Azure application for the consumer experience. Organizations take advantage of Microsoft 365's employee familiarity but customize and embed the consumer visit experience in their own application.
- **Azure custom.** Build the entire solution on Azure primitives: the business experience, the consumer experience, and scheduling systems.
-![Diagram of virtual visit implementation options](./media/sample-builder/virtual-visit-options.svg)
+![Diagram of virtual visit implementation options](./media/virtual-visits/virtual-visit-options.svg)
These three **implementation options** are columns in the table below, while each row provides a **use case** and the **enabling technologies**.
There are other ways to customize and combine Microsoft tools to deliver a virtu
## Extend Microsoft 365 with Azure

The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. These hybrid configurations are popular because they combine employee familiarity with Microsoft 365 and the ability to customize the consumer experience. They're also a good launching point for understanding more complex and customized architectures. The diagram below shows user steps for a virtual visit:
-![High-level architecture of a hybrid virtual visits solution](./media/sample-builder/virtual-visit-arch.svg)
+![High-level architecture of a hybrid virtual visits solution](./media/virtual-visits/virtual-visit-arch.svg)
1. Consumer schedules the visit using Microsoft 365 Bookings.
2. Consumer gets a visit reminder through SMS and Email.
3. Provider joins the visit using Microsoft Teams.
In this section we're going to use a Sample Builder tool to deploy a Microsoft
This sample takes advantage of the Microsoft 365 Bookings app to power the consumer scheduling experience and create meetings for providers. Thus, the first step is creating a Bookings calendar and getting the Booking page URL from https://outlook.office.com/bookings/calendar.
-![Booking configuration experience](./media/sample-builder/bookings-url.png)
+![Screenshot of Booking configuration experience](./media/virtual-visits/bookings-url.png)
+
+Make sure online meeting is enabled for the calendar by going to https://outlook.office.com/bookings/services.
+
+![Screenshot of Booking services configuration experience](./media/virtual-visits/bookings-services.png)
+
+Then make sure "Add online meeting" is enabled.
+
+![Screenshot of Booking services online meeting configuration experience](./media/virtual-visits/bookings-services-online-meeting.png)
+ ### Step 2 - Sample Builder
+
+Use the Sample Builder to customize the consumer experience. You can reach the Sample Builder using this [link](https://aka.ms/acs-sample-builder), or by navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard and configure whether Chat or Screen Sharing should be enabled. Change themes and text to match your application. You can preview your configuration live from the page in both Desktop and Mobile browser form-factors.
-[ ![Sample builder start page](./media/sample-builder/sample-builder-start.png)](./media/sample-builder/sample-builder-start.png#lightbox)
+[ ![Screenshot of Sample builder start page](./media/virtual-visits/sample-builder-start.png)](./media/virtual-visits/sample-builder-start.png#lightbox)
### Step 3 - Deploy

At the end of the Sample Builder wizard, you can **Deploy to Azure** or download the code as a zip. The sample builder code is publicly available on [GitHub](https://github.com/Azure-Samples/communication-services-virtual-visits-js).
-[ ![Sample builder deployment page](./media/sample-builder/sample-builder-landing.png)](./media/sample-builder/sample-builder-landing.png#lightbox)
+[ ![Screenshot of Sample builder deployment page](./media/virtual-visits/sample-builder-landing.png)](./media/virtual-visits/sample-builder-landing.png#lightbox)
The deployment launches an Azure Resource Manager (ARM) template that deploys the themed application you configured.
-![Sample builder arm template](./media/sample-builder/sample-builder-arm.png)
+![Screenshot of Sample builder arm template](./media/virtual-visits/sample-builder-arm.png)
After walking through the ARM template you can **Go to resource group**
-![Screenshot of a completed Azure Resource Manager Template](./media/sample-builder/azure-complete-deployment.png)
+![Screenshot of a completed Azure Resource Manager Template](./media/virtual-visits/azure-complete-deployment.png)
### Step 4 - Test

The Sample Builder creates three resources in the selected Azure subscriptions. The **App Service** is the consumer front end, powered by Azure Communication Services.
-![produced azure resources in azure portal](./media/sample-builder/azure-resources.png)
+![Screenshot of produced azure resources in azure portal](./media/virtual-visits/azure-resources.png)
+
+Opening the App Service's URL and navigating to `https://<YOUR URL>/VISIT` allows you to try out the consumer experience and join a Teams meeting. `https://<YOUR URL>/BOOK` embeds the Booking experience for consumer scheduling.
+
+![Screenshot of final view of azure app service](./media/virtual-visits/azure-resource-final.png)
+
+### Step 5 - Set deployed app URL in Bookings
-Opening the App ServiceΓÇÖs URL and navigating to `https://<YOUR URL>/VISITS` allows you to try out the consumer experience and join a Teams meeting. `https://<YOUR URL>/BOOK` embeds the Booking experience for consumer scheduling.
+Copy your application URL into your calendar's Business information setting by going to https://outlook.office.com/bookings/businessinformation.
-![final view of azure app service](./media/sample-builder/azure-resource-final.png)
+![Screenshot of final view of bookings business information](./media/virtual-visits/bookings-acs-app-integration-url.png)
## Going to production

The Sample Builder gives you the basics of a Microsoft 365 and Azure virtual visit: consumer scheduling via Bookings, consumer joins via custom app, and the provider joins via Teams. However, there are several things to consider as you take this scenario to production.

### Launching patterns
-Consumers want to jump directly to the virtual visit from the scheduling reminders they receive from Bookings. In Bookings, you can provide a URL prefix that will be used in reminders. If your prefix is `https://<YOUR URL>/VISITS`, Bookings will point users to `https://<YOUR URL>/VISITS?=<TEAMID>.`
+Consumers want to jump directly to the virtual visit from the scheduling reminders they receive from Bookings. In Bookings, you can provide a URL prefix that will be used in reminders. If your prefix is `https://<YOUR URL>/VISIT`, Bookings will point users to `https://<YOUR URL>/VISIT?MEETINGURL=<MEETING URL>.`
### Integrate into your existing app

The app service generated by the Sample Builder is a stand-alone artifact, designed for desktop and mobile browsers. However, you may already have a website or mobile application and need to migrate these experiences to that existing codebase. The code generated by the Sample Builder should help, but you can also use:
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers.md
Get started with a sample Redis Cache and Python Custom Application [here](https
[Gramine](https://grapheneproject.io/) is a lightweight guest OS, designed to run a single Linux application with minimal host requirements. Gramine can run applications in an isolated environment. There's tooling support for converting existing Docker container applications to Gramine Shielded Containers (GSCs).
-For more information, see the Gramine's [sample application and deployment on AKS](https://graphene.readthedocs.io/en/latest/cloud-deployment.html#azure-kubernetes-service-aks)
+For more information, see the Gramine's [sample application and deployment on AKS](https://github.com/gramineproject/contrib/tree/master/Examples/aks-attestation)
### Occlum
Do you have questions about your implementation? Do you want to become an enable
- [Deploy AKS cluster with Intel SGX Confidential VM Nodes](./confidential-enclave-nodes-aks-get-started.md) - [Microsoft Azure Attestation](../attestation/overview.md) - [Intel SGX Confidential Virtual Machines](virtual-machine-solutions-sgx.md)-- [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)
+- [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
properties:
periodSeconds: 3 - type: readiness tcpSocket:
- - port: 8081
+ port: 8081
initialDelaySeconds: 10 periodSeconds: 3 - type: startup
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
You can use your existing MongoDB apps with API for MongoDB by just changing the
This API stores data in a column-oriented schema. Apache Cassandra offers a highly distributed, horizontally scaling approach to storing large volumes of data while offering a flexible approach to a column-oriented schema. Cassandra API in Azure Cosmos DB aligns with this philosophy of approaching distributed NoSQL databases. Cassandra API is wire protocol compatible with Apache Cassandra. You should consider Cassandra API if you want to benefit from the elasticity and fully managed nature of Azure Cosmos DB and still use most of the native Apache Cassandra features, tools, and ecosystem. This means that on Cassandra API, you don't need to manage the OS, Java VM, garbage collector, read/write performance, nodes, clusters, etc.
-You can use Apache Cassandra client drivers to connect to the Cassandra API. The Cassandra API enables you to interact with data using the Cassandra Query Language (CQL), and tools like CQL shell, Cassandra client drivers that you're already familiar with. Cassandra API currently only supports OLTP scenarios. Using Cassandra API, you can also use the unique features of Azure Cosmos DB such as change feed. To learn more, see [Cassandra API](cassandra-introduction.md) article.
+You can use Apache Cassandra client drivers to connect to the Cassandra API. The Cassandra API enables you to interact with data using the Cassandra Query Language (CQL), and tools like the CQL shell and the Cassandra client drivers that you're already familiar with. Cassandra API currently only supports OLTP scenarios. Using Cassandra API, you can also use the unique features of Azure Cosmos DB such as [change feed](cassandra-change-feed.md). To learn more, see the [Cassandra API](cassandra-introduction.md) article. If you're already familiar with Apache Cassandra, but new to Azure Cosmos DB, we recommend our article on [how to adapt to the Cassandra API if you are coming from Apache Cassandra](./cassandr).
## Gremlin API
cosmos-db Create Graph Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-console.md
You need to have an Azure subscription to create an Azure Cosmos DB account for
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-You also need to install the [Gremlin Console](https://tinkerpop.apache.org/download.html). The **recommended version is v3.4.3** or earlier. (To use Gremlin Console on Windows, you need to install [Java Runtime](https://www.oracle.com/technetwork/java/javase/overview/https://docsupdatetracker.net/index.html), minimum requires Java 8 but it is preferable to use Java 11).
+You also need to install the [Gremlin Console](https://tinkerpop.apache.org/download.html). The **recommended version is v3.4.13**. (To use Gremlin Console on Windows, you need to install the [Java Runtime](https://www.oracle.com/technetwork/java/javase/overview/index.html); Java 8 is the minimum requirement, but Java 11 is preferable.)
## Create a database account
cosmos-db Create Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-dotnet.md
Now let's clone a Gremlin API app from GitHub, set the connection string, and ru
5. Restore the NuGet packages in the project. The restore operation should include the Gremlin.Net driver, and the Newtonsoft.Json package.
-6. You can also install the Gremlin.Net@v3.4.6 driver manually using the NuGet package manager, or the [NuGet command-line utility](/nuget/install-nuget-client-tools):
+6. You can also install the Gremlin.Net@v3.4.13 driver manually using the NuGet package manager, or the [NuGet command-line utility](/nuget/install-nuget-client-tools):
```bash
- nuget install Gremlin.NET -Version 3.4.6
+ nuget install Gremlin.NET -Version 3.4.13
``` > [!NOTE]
-> The Gremlin API currently only [supports Gremlin.Net up to v3.4.6](gremlin-support.md#compatible-client-libraries). If you install the latest version, you'll receive errors when using the service.
+> The supported Gremlin.NET driver version for Gremlin API is listed [here](gremlin-support.md#compatible-client-libraries). The latest released versions of Gremlin.NET may have incompatibilities, so check the linked table for compatibility updates.
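For orientation, here's a compact C# sketch of how a Gremlin.Net 3.4.x client is typically created against a Cosmos DB Gremlin endpoint. The host, database, graph, and key values are placeholders; treat this as a sketch of the quickstart pattern rather than a verbatim excerpt from the sample:

```csharp
using System;
using System.Threading.Tasks;
using Gremlin.Net.Driver;
using Gremlin.Net.Structure.IO.GraphSON;

class GremlinQuickstartSketch
{
    static async Task Main()
    {
        var server = new GremlinServer(
            hostname: "<your-account>.gremlin.cosmos.azure.com",
            port: 443,
            enableSsl: true,
            username: "/dbs/<database>/colls/<graph>",
            password: "<primary-key>");

        // GraphSON2 is required by the Cosmos DB Gremlin endpoint.
        using var client = new GremlinClient(
            server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType);

        // Submit a simple Gremlin traversal and print the raw result.
        var result = await client.SubmitAsync<dynamic>("g.V().count()");
        foreach (var item in result)
            Console.WriteLine(item);
    }
}
```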
## Review the code
cosmos-db Create Graph Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-java.md
In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API
- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). - [Git](https://www.git-scm.com/downloads). -- [Gremlin-driver 3.4.0](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver/3.4.0), this dependency is mentioned in the quickstart sample's pom.xml
+- [Gremlin-driver 3.4.13](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver/3.4.13). This dependency is mentioned in the quickstart sample's pom.xml.
## Create a database account
cosmos-db Gremlin Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/gremlin-support.md
Azure Cosmos DB Graph engine closely follows [Apache TinkerPop](https://tinkerpo
The following table shows popular Gremlin drivers that you can use against Azure Cosmos DB:
-| Download | Source | Getting Started | Supported connector version |
+| Download | Source | Getting Started | Supported/Recommended connector version |
| | | | |
-| [.NET](https://tinkerpop.apache.org/docs/3.4.6/reference/#gremlin-DotNet) | [Gremlin.NET on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-dotnet) | [Create Graph using .NET](create-graph-dotnet.md) | 3.4.6 |
-| [Java](https://mvnrepository.com/artifact/com.tinkerpop.gremlin/gremlin-java) | [Gremlin JavaDoc](https://tinkerpop.apache.org/javadocs/current/full/) | [Create Graph using Java](create-graph-java.md) | 3.2.0+ |
-| [Node.js](https://www.npmjs.com/package/gremlin) | [Gremlin-JavaScript on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-javascript) | [Create Graph using Node.js](create-graph-nodejs.md) | 3.3.4+ |
-| [Python](https://tinkerpop.apache.org/docs/3.3.1/reference/#gremlin-python) | [Gremlin-Python on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-python) | [Create Graph using Python](create-graph-python.md) | 3.2.7 |
+| [.NET](https://tinkerpop.apache.org/docs/3.4.13/reference/#gremlin-DotNet) | [Gremlin.NET on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-dotnet) | [Create Graph using .NET](create-graph-dotnet.md) | 3.4.13 |
+| [Java](https://mvnrepository.com/artifact/com.tinkerpop.gremlin/gremlin-java) | [Gremlin JavaDoc](https://tinkerpop.apache.org/javadocs/current/full/) | [Create Graph using Java](create-graph-java.md) | 3.4.13 |
+| [Python](https://tinkerpop.apache.org/docs/3.4.13/reference/#gremlin-python) | [Gremlin-Python on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-python) | [Create Graph using Python](create-graph-python.md) | 3.4.13 |
+| [Gremlin console](https://tinkerpop.apache.org/download.html) | [TinkerPop docs](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) | [Create Graph using Gremlin Console](create-graph-console.md) | 3.4.13 |
+| [Node.js](https://www.npmjs.com/package/gremlin) | [Gremlin-JavaScript on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-javascript) | [Create Graph using Node.js](create-graph-nodejs.md) | 3.4.13 |
| [PHP](https://packagist.org/packages/brightzone/gremlin-php) | [Gremlin-PHP on GitHub](https://github.com/PommeVerte/gremlin-php) | [Create Graph using PHP](create-graph-php.md) | 3.1.0 | | [Go Lang](https://github.com/supplyon/gremcos/) | [Go Lang](https://github.com/supplyon/gremcos/) | | This library is built by external contributors. The Azure Cosmos DB team doesn't offer any support or maintain the library. |
-| [Gremlin console](https://tinkerpop.apache.org/download.html) | [TinkerPop docs](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) | [Create Graph using Gremlin Console](create-graph-console.md) | 3.2.0 + |
+
+> [!NOTE]
+> Gremlin client driver versions __3.5.*__ and __3.6.*__ have known compatibility issues, so we recommend using the latest supported 3.4.* driver versions listed above.
+> This table will be updated when compatibility issues have been addressed for these newer driver versions.
## Supported Graph Objects
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
Previously updated : 04/06/2022 Last updated : 05/26/2022 # Secure access to data in Azure Cosmos DB
Azure Cosmos DB provides three ways to control access to your data.
| Access control type | Characteristics | ||| | [Primary/secondary keys](#primary-keys) | Shared secret allowing any management or data operation. It comes in both read-write and read-only variants. |
-| [Role-based access control](#rbac) | Fine-grained, role-based permission model using Azure Active Directory (AAD) identities for authentication. |
+| [Role-based access control](#rbac) | Fine-grained, role-based permission model using Azure Active Directory (Azure AD) identities for authentication. |
| [Resource tokens](#resource-tokens)| Fine-grained permission model based on native Azure Cosmos DB users and permissions. |

## <a id="primary-keys"></a> Primary/secondary keys
CosmosClient client = new CosmosClient(endpointUrl, authorizationKey);
Azure Cosmos DB exposes a built-in role-based access control (RBAC) system that lets you: -- Authenticate your data requests with an Azure Active Directory (AAD) identity.
+- Authenticate your data requests with an Azure Active Directory identity.
- Authorize your data requests with a fine-grained, role-based permission model. Azure Cosmos DB RBAC is the ideal access control method in situations where:
For an example of a middle tier service used to generate or broker resource toke
Azure Cosmos DB users are associated with a Cosmos database. Each database can contain zero or more Cosmos DB users. The following code sample shows how to create a Cosmos DB user using the [Azure Cosmos DB .NET SDK v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement). ```csharp
-//Create a user.
-Database database = benchmark.client.GetDatabase("SalesDatabase");
-
+// Create a user.
+Database database = client.GetDatabase("SalesDatabase");
User user = await database.CreateUserAsync("User 1"); ```
A permission resource is associated with a user and assigned to a specific resou
If you enable the [diagnostic logs on data-plane requests](cosmosdb-monitor-resource-logs.md), the following two properties corresponding to the permission are logged:
-* **resourceTokenPermissionId** - This property indicates the resource token permission Id that you have specified.
+* **resourceTokenPermissionId** - This property indicates the resource token permission ID that you have specified.
* **resourceTokenPermissionMode** - This property indicates the permission mode that you have set when creating the resource token. The permission mode can have values such as "all" or "read".
The following code sample shows how to create a permission resource, read the re
```csharp // Create a permission on a container and specific partition key value Container container = client.GetContainer("SalesDatabase", "OrdersContainer");
-user.CreatePermissionAsync(
+await user.CreatePermissionAsync(
new PermissionProperties(
- id: "permissionUser1Orders",
- permissionMode: PermissionMode.All,
+ id: "permissionUser1Orders",
+ permissionMode: PermissionMode.All,
container: container, resourcePartitionKey: new PartitionKey("012345"))); ```
user.CreatePermissionAsync(
The following code snippet shows how to retrieve the permission associated with the user created above and instantiate a new CosmosClient on behalf of the user, scoped to a single partition key. ```csharp
-//Read a permission, create user client session.
-PermissionProperties permissionProperties = await user.GetPermission("permissionUser1Orders")
+// Read a permission, create user client session.
+Permission permission = await user.GetPermission("permissionUser1Orders").ReadAsync();
-CosmosClient client = new CosmosClient(accountEndpoint: "MyEndpoint", authKeyOrResourceToken: permissionProperties.Token);
+CosmosClient client = new CosmosClient(accountEndpoint: "MyEndpoint", authKeyOrResourceToken: permission.Resource.Token);
``` ## Differences between RBAC and resource tokens
CosmosClient client = new CosmosClient(accountEndpoint: "MyEndpoint", authKeyOrR
|--|--|--| | Authentication | With Azure Active Directory (Azure AD). | Based on the native Azure Cosmos DB users<br>Integrating resource tokens with Azure AD requires extra work to bridge Azure AD identities and Azure Cosmos DB users. | | Authorization | Role-based: role definitions map allowed actions and can be assigned to multiple identities. | Permission-based: for each Azure Cosmos DB user, you need to assign data access permissions. |
-| Token scope | An AAD token carries the identity of the requester. This identity is matched against all assigned role definitions to perform authorization. | A resource token carries the permission granted to a specific Azure Cosmos DB user on a specific Azure Cosmos DB resource. Authorization requests on different resources may requires different tokens. |
-| Token refresh | The AAD token is automatically refreshed by the Azure Cosmos DB SDKs when it expires. | Resource token refresh is not supported. When a resource token expires, a new one needs to be issued. |
+| Token scope | An Azure AD token carries the identity of the requester. This identity is matched against all assigned role definitions to perform authorization. | A resource token carries the permission granted to a specific Azure Cosmos DB user on a specific Azure Cosmos DB resource. Authorization requests on different resources may require different tokens. |
+| Token refresh | The Azure AD token is automatically refreshed by the Azure Cosmos DB SDKs when it expires. | Resource token refresh is not supported. When a resource token expires, a new one needs to be issued. |
## Add users and assign roles
cost-management-billing Cost Management Billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/cost-management-billing-overview.md
Title: Overview of Cost Management + Billing-+ description: You use Cost Management + Billing features to conduct billing administrative tasks and manage billing access to costs. You also use the features to monitor and control Azure spending and to optimize Azure resource use. keywords:
cost-management-billing Assign Access Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/assign-access-acm-data.md
Title: Assign access to Cost Management data-+ description: This article walks you though assigning permission to Cost Management data for various access scopes.
cost-management-billing Aws Integration Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-manage.md
Title: Manage AWS costs and usage in Cost Management-+ description: This article helps you understand how to use cost analysis and budgets in Cost Management to manage your AWS costs and usage.
cost-management-billing Aws Integration Set Up Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
Title: Set up AWS integration with Cost Management-+ description: This article walks you through setting up and configuring AWS Cost and Usage report integration with Cost Management.
cost-management-billing Cost Analysis Built In Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-built-in-views.md
Title: Use built-in views in Cost analysis-+ description: This article helps you understand when to use which view, how each one provides unique insights about your costs and recommended next steps to investigate further.
cost-management-billing Cost Analysis Common Uses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-common-uses.md
Title: Common cost analysis uses in Cost Management-+ description: This article explains how you can get results for common cost analysis tasks in Cost Management.
cost-management-billing Cost Management Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-management-error-codes.md
Title: Troubleshoot common Cost Management errors-+ description: This article describes common Cost Management errors and provides information about solutions.
cost-management-billing Cost Mgt Alerts Monitor Usage Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
Title: Monitor usage and spending with cost alerts in Cost Management-+ description: This article describes how cost alerts help you monitor usage and spending in Cost Management.
cost-management-billing Cost Mgt Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-best-practices.md
Title: Optimize your cloud investment with Cost Management-+ description: This article helps get the most value out of your cloud investments, reduce your costs, and evaluate where your money is being spent.
cost-management-billing Get Started Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/get-started-partners.md
Title: Get started with Cost Management for partners-+ description: This article explains how partners use Cost Management features and how they enable access for their customers.
cost-management-billing Group Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/group-filter.md
Title: Group and filter options in Cost Management-+ description: This article explains how to use group and filter options in Cost Management.
cost-management-billing Ingest Azure Usage At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/ingest-azure-usage-at-scale.md
Title: Retrieve large cost datasets recurringly with exports from Cost Management-+ description: This article helps you regularly export large amounts of data with exports from Cost Management.
cost-management-billing Reporting Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/reporting-get-started.md
For more information about credits, see [Track Microsoft Customer Agreement Azur
- [Explore and analyze costs with cost analysis](quick-acm-cost-analysis.md). - [Analyze Azure costs with the Power BI App](analyze-cost-data-azure-cost-management-power-bi-template-app.md).-- [Connect to Azure Cost Management data in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
+- [Connect to Microsoft Cost Management data in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
- [Create and manage exported data](tutorial-export-acm-data.md).
cost-management-billing Save Share Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/save-share-views.md
Title: Save and share customized views-+ description: This article explains how to save and share a customized view with others.
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
Title: Tutorial - Create and manage exported data from Cost Management-+ description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems.
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
Title: Understand Cost Management data-+ description: This article helps you better understand data that's included in Cost Management and how frequently it's processed, collected, shown, and closed.
cost-management-billing Understand Work Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-work-scopes.md
Title: Understand and work with Cost Management scopes-+ description: This article helps you understand billing and resource management scopes available in Azure and how to use the scopes in Cost Management and APIs.
cost-management-billing Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/elevate-access-global-admin.md
Title: Elevate access to manage billing accounts-+ description: Describes how to elevate access for a Global Administrator to manage billing accounts using the Azure portal or REST API.
cost-management-billing Reservation Amortization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-amortization.md
Title: View amortized reservation costs-+ description: This article helps you understand what amortized reservation costs are and how to view them in cost analysis.
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
Title: Identify anomalies and unexpected changes in cost-+ description: Learn how to identify anomalies and unexpected changes in cost.
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 04/13/2022 Last updated : 05/27/2022
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md
Previously updated : 09/29/2021 Last updated : 05/26/2022 # Copy data from and to Salesforce using Azure Data Factory or Azure Synapse Analytics
Specifically, this Salesforce connector supports:
- Salesforce Developer, Professional, Enterprise, or Unlimited editions. - Copying data from and to Salesforce production, sandbox, and custom domain.
-The Salesforce connector is built on top of the Salesforce REST/Bulk API. When copying data from Salesforce, the connector automatically chooses between REST and Bulk APIs based on the data size ΓÇô when the result set is large, Bulk API is used for better performance; You can explicitly set the API version used to read/write data via [`apiVersion` property](#linked-service-properties) in linked service.
+The Salesforce connector is built on top of the Salesforce REST/Bulk API. When copying data from Salesforce, the connector automatically chooses between the REST and Bulk APIs based on the data size: when the result set is large, the Bulk API is used for better performance. You can explicitly set the API version used to read/write data via the [`apiVersion` property](#linked-service-properties) in the linked service. When copying data to Salesforce, the connector uses BULK API v1.
>[!NOTE] >The connector no longer sets default version for Salesforce API. For backward compatibility, if a default API version was set before, it keeps working. The default value is 45.0 for source, and 40.0 for sink.
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Previously updated : 04/13/2022 Last updated : 05/27/2022 # Source transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| Connector | Format | Dataset/inline | | | | -- | |[Amazon S3](connector-amazon-simple-storage-service.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties) <br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br/>✓/✓<br>✓/✓<br/>✓/✓<br>✓/✓ |
-[Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties) <br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br/>✓/✓<br>✓/✓<br/>✓/✓<br>✓/✓ |
+|[Asana (Preview)](connector-asana.md#mapping-data-flow-properties) | | -/✓ |
+|[Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties) <br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br/>✓/✓<br>✓/✓<br/>✓/✓<br>✓/✓ |
| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md#mapping-data-flow-properties) | | ✓/- | | [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br/>✓/✓<br>✓/✓<br/>✓/✓<br>✓/✓ | | [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Common Data Model](format-common-data-model.md#source-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br/>-/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/✓<br/>✓/✓<br/>✓/✓<br>✓/✓ |
devtest-labs How To Move Schedule To New Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-schedule-to-new-region.md
Title: How to move a schedule to another region
-description: This article explains how to move schedules to another Azure region.
+ Title: Move a schedule to another region
+description: This article explains how to move a top-level schedule to another Azure region.
Last updated 05/09/2022
-# Move schedules to another region
+# Move a schedule to another region
-In this article, you'll learn how to move schedules by using an Azure Resource Manager (ARM) template.
+In this article, you'll learn how to move a schedule by using an Azure Resource Manager (ARM) template.
DevTest Labs supports two types of schedules.
event-hubs Exceptions Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/exceptions-dotnet.md
try
{
    // Read events using the consumer client
}
-catch (EventHubsException ex) where
+catch (EventHubsException ex) when
(ex.Reason == EventHubsException.FailureReason.ConsumerDisconnected) { // Take action based on a consumer being disconnected
catch (EventHubsException ex) where
```

## Next steps
-There are other exceptions that are documented in the [legacy article](event-hubs-messaging-exceptions.md). Some of them apply only to the legacy Event Hubs .NET client library.
+There are other exceptions that are documented in the [legacy article](event-hubs-messaging-exceptions.md). Some of them apply only to the legacy Event Hubs .NET client library.
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), is the standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and the web browser remain private and encrypted.
-To meet your security or compliance requirements, Azure Front Door (AFD) supports end-to-end TLS encryption. Front Door TLS/SSL offload terminates the TLS connection, decrypts the traffic at the Azure Front Door, and re-encrypts the traffic before forwarding it to the backend. Since connections to the backend happen over the public IP. It's highly recommended you configure HTTPS as the forwarding protocol on your Azure Front Door to enforce end-to-end TLS encryption from the client to the backend.
+To meet your security or compliance requirements, Azure Front Door (AFD) supports end-to-end TLS encryption. Front Door TLS/SSL offload terminates the TLS connection, decrypts the traffic at the Azure Front Door, and re-encrypts the traffic before forwarding it to the backend. Since connections to the backend happen over the public IP, it is highly recommended you configure HTTPS as the forwarding protocol on your Azure Front Door to enforce end-to-end TLS encryption from the client to the backend. TLS/SSL offload is also supported if you deploy a private backend with AFD Premium using the [PrivateLink](private-link.md) feature.
## End-to-end TLS encryption
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
After you create an Azure Front Door Standard/Premium profile, the default front
## Add a new custom domain > [!NOTE]
-> * When using Azure DNS, creating Apex domains isn't supported on Azure Front Door currently. There are other DNS providers that support CNAME flattening or DNS chasing that will allow APEX domains to be used for Azure Front Door Standard/Premium.
> * If a custom domain is validated in one of the Azure Front Door Standard, Premium, classic, or classic Microsoft CDN profiles, then it can't be added to another profile. A custom domain is managed in the **Domains** section of the portal. A custom domain can be created and validated before it's associated with an endpoint. A custom domain and its subdomains can be associated with only a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Front Doors. You can also map custom domains with different subdomains to the same Front Door endpoint.
A custom domain is managed by Domains section in the portal. A custom domain can
| Internal error | If you see this error, retry by clicking the **Refresh** or **Regenerate** buttons. If you're still experiencing issues, raise a support request. | > [!NOTE]
-> 1. If the **Regenerate** button doesn't work, delete and recreate the domain.
-> 2. If the domain state doesn't reflect as expected, select the **Refresh** button.
+> 1. The default TTL for the TXT record is 1 hour. When you need to regenerate the TXT record for revalidation, pay attention to the TTL of the previous TXT record. If it hasn't expired, validation will fail until the previous TXT record expires.
+> 2. If the **Regenerate** button doesn't work, delete and recreate the domain.
+> 3. If the domain state doesn't reflect as expected, select the **Refresh** button.
## Associate the custom domain with your Front Door Endpoint
frontdoor Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/troubleshoot-issues.md
Responses to these requests might also contain an HTML error page in the respons
There are several possible causes for this symptom. The overall reason is that your HTTP request isn't fully RFC-compliant.
-An example of noncompliance is a `POST` request sent without either a **Content-Length** or a **Transfer-Encoding** header. An example would be using `curl -X POST https://example-front-door.domain.com`. This request doesn't meet the requirements set out in [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.3.2). Azure Front Door would block it with an HTTP 411 response.
+An example of noncompliance is a `POST` request sent without either a **Content-Length** or a **Transfer-Encoding** header. An example would be using `curl -X POST https://example-front-door.domain.com`. This request doesn't meet the requirements set out in [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.3.2). Azure Front Door would block it with an HTTP 411 response. Such requests will not be logged.
This behavior is separate from the web application firewall (WAF) functionality of Azure Front Door. Currently, there's no way to disable this behavior. All HTTP requests must meet the requirements, even if the WAF functionality isn't in use.
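As a quick illustration (using the placeholder hostname from the example above), most HTTP client libraries produce a compliant request automatically. The sketch below assumes Python's `requests` package, which adds a `Content-Length` header even for an empty `POST` body:

```python
import requests

# requests computes a Content-Length header for the (empty) body, so this POST
# satisfies RFC 7230 section 3.3.2 and isn't rejected with HTTP 411.
response = requests.post("https://example-front-door.domain.com", data=b"")
print(response.status_code)
```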
governance Create Management Group Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-azure-cli.md
Title: "Quickstart: Create a management group with the Azure CLI"
description: In this quickstart, you use the Azure CLI to create a management group to organize your resources into a resource hierarchy. Last updated 08/17/2021 -
+ms.tool: azure-cli
# Quickstart: Create a management group with the Azure CLI
governance Assign Policy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-terraform.md
Title: "Quickstart: New policy assignment with Terraform"
description: In this quickstart, you use Terraform and HCL syntax to create a policy assignment to identify non-compliant resources. Last updated 08/17/2021
+ms.tool: terraform
# Quickstart: Create a policy assignment to identify non-compliant resources using Terraform
hdinsight Apache Hadoop On Premises Migration Best Practices Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-architecture.md
Previously updated : 12/06/2019 Last updated : 05/27/2019 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - architecture best practices
Some HDInsight Hive metastore best practices are as follows:
Read the next article in this series: -- [Infrastructure best practices for on-premises to Azure HDInsight Hadoop migration](apache-hadoop-on-premises-migration-best-practices-infrastructure.md)
+- [Infrastructure best practices for on-premises to Azure HDInsight Hadoop migration](apache-hadoop-on-premises-migration-best-practices-infrastructure.md)
hdinsight Troubleshoot Lost Key Vault Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-lost-key-vault-access.md
Title: Azure HDInsight clusters with disk encryption lose Key Vault access
description: Troubleshooting steps and possible resolutions for Key Vault access issues when interacting with Azure HDInsight clusters. Previously updated : 01/30/2020 Last updated : 05/27/2022 # Scenario: Azure HDInsight clusters with disk encryption lose Key Vault access
hdinsight Hdinsight Authorize Users To Ambari https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-authorize-users-to-ambari.md
description: 'How to manage Ambari user and group permissions for HDInsight clus
Previously updated : 11/27/2019 Last updated : 05/27/2022 # Authorize users for Apache Ambari Views
We have assigned our Azure AD domain user "hiveuser2" to the *Cluster User* role
* [Manage ESP HDInsight clusters](./domain-joined/apache-domain-joined-manage.md) * [Use the Apache Hive View with Apache Hadoop in HDInsight](hadoop/apache-hadoop-use-hive-ambari-view.md) * [Synchronize Azure AD users to the cluster](hdinsight-sync-aad-users-to-cluster.md)
-* [Manage HDInsight clusters by using the Apache Ambari REST API](./hdinsight-hadoop-manage-ambari-rest-api.md)
+* [Manage HDInsight clusters by using the Apache Ambari REST API](./hdinsight-hadoop-manage-ambari-rest-api.md)
hdinsight Hdinsight Business Continuity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-business-continuity-architecture.md
description: This article discusses the different possible business continuity a
keywords: hadoop high availability Previously updated : 10/07/2020 Last updated : 05/27/2022 # Azure HDInsight business continuity architectures
To learn more about the items discussed in this article, see:
* [Azure HDInsight business continuity](./hdinsight-business-continuity.md) * [Azure HDInsight highly available solution architecture case study](./hdinsight-high-availability-case-study.md)
-* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)
+* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)
hdinsight Hdinsight Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-business-continuity.md
description: This article gives an overview of best practices, single region ava
keywords: hadoop high availability Previously updated : 10/08/2020 Last updated : 05/27/2022 # Azure HDInsight business continuity
hdinsight Hdinsight For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-for-vscode.md
Title: Azure HDInsight for Visual Studio Code
description: Learn how to use the Spark & Hive Tools (Azure HDInsight) for Visual Studio Code. Use the tools to create and submit queries and scripts. Previously updated : 10/20/2020 Last updated : 05/27/2022
From the menu bar, go to **View** > **Command Palette**, and then enter **Azure:
## Next steps
-For a video that demonstrates using Spark & Hive for Visual Studio Code, see [Spark & Hive for Visual Studio Code](https://go.microsoft.com/fwlink/?linkid=858706).
+For a video that demonstrates using Spark & Hive for Visual Studio Code, see [Spark & Hive for Visual Studio Code](https://go.microsoft.com/fwlink/?linkid=858706).
hdinsight Hdinsight Hadoop Create Linux Clusters Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-azure-powershell.md
Title: Create Apache Hadoop clusters using PowerShell - Azure HDInsight
description: Learn how to create Apache Hadoop, Apache HBase, Apache Storm, or Apache Spark clusters on Linux for HDInsight by using Azure PowerShell.
+ms.tool: azure-powershell
Last updated 12/18/2019
hdinsight Hdinsight High Availability Case Study https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-high-availability-case-study.md
description: This article is a fictional case study of a possible Azure HDInsigh
keywords: hadoop high availability Previously updated : 10/08/2020 Last updated : 05/27/2022 # Azure HDInsight highly available solution architecture case study
hdinsight Llap Schedule Based Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/llap-schedule-based-autoscale-best-practices.md
Title: HDInsight Interactive Query Autoscale(Schedule-Based) Guide and Best Practices
+ Title: HDInsight Interactive Query Autoscale (schedule-based) guide and best practices
description: LLAP Autoscale Guide and Best Practices
Last updated 05/25/2022
-# Azure HDInsight Interactive Query Cluster (Hive LLAP) Schedule Based Autoscale
+# Azure HDInsight interactive query cluster (Hive LLAP) schedule based autoscale
This document provides the onboarding steps to enable schedule-based autoscale for Interactive Query (LLAP) Cluster type in Azure HDInsight. It includes some of the best practices to operate Autoscale in Hive-LLAP.
Disabling the WLM should be before the actual schedule of the scaling event and
Each time the Interactive Query cluster scales, the Autoscale smart probe performs a silent update of the number of LLAP daemons and the concurrency in Ambari, since these configurations are static. These configs are updated so that, if autoscale is disabled or the LLAP service restarts for some reason, the cluster still utilizes all the worker nodes it was resized to at that time. An explicit restart of services to handle these stale config changes isn't required.
-### **Next Steps**
+### **Next steps**
If the above guidelines didn't resolve your query, visit one of the following. * Get answers from Azure experts through [Azure Community Support](https://azure.microsoft.com/support/community/).
If the above guidelines didn't resolve your query, visit one of the following.
* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
-## **Other References:**
+## **Other references:**
* [Interactive Query in Azure HDInsight](./apache-interactive-query-get-started.md) * [Create a cluster with Schedule-based Autoscaling](./apache-interactive-query-get-started.md) * [Azure HDInsight Interactive Query Cluster (Hive LLAP) sizing guide](./hive-llap-sizing-guide.md)
- * [Hive Warehouse Connector in Azure HDInsight](./apache-hive-warehouse-connector.md)
+ * [Hive Warehouse Connector in Azure HDInsight](./apache-hive-warehouse-connector.md)
hdinsight Overview Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/overview-data-lake-storage-gen2.md
description: Overview of Data Lake Storage Gen2 in HDInsight.
Previously updated : 04/21/2020 Last updated : 05/27/2022 # Azure Data Lake Storage Gen2 overview in HDInsight
For more information, see [Use the Azure Data Lake Storage Gen2 URI](../storage/
* [Introduction to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) * [Introduction to Azure Storage](../storage/common/storage-introduction.md)
-* [Azure Data Lake Storage Gen1 overview](./overview-data-lake-storage-gen1.md)
+* [Azure Data Lake Storage Gen1 overview](./overview-data-lake-storage-gen1.md)
hdinsight Zookeeper Troubleshoot Quorum Fails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/zookeeper-troubleshoot-quorum-fails.md
Title: Apache ZooKeeper server fails to form a quorum in Azure HDInsight
description: Apache ZooKeeper server fails to form a quorum in Azure HDInsight Previously updated : 05/20/2020 Last updated : 05/28/2022 # Apache ZooKeeper server fails to form a quorum in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
- Get answers from Azure experts through [Azure Community Support](https://azure.microsoft.com/support/community/). - Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.-- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hpc-cache Troubleshoot Nas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/troubleshoot-nas.md
description: Tips to avoid and fix configuration errors and other problems that
Previously updated : 05/26/2022 Last updated : 05/27/2022
Check these settings both on the NAS itself and also on any firewalls between th
## Check root squash settings
-Root squash settings can disrupt file access if they are improperly configured. You should check that the settings on each storage export and on the matching HPC Cache client access policies are consistent.
+Root squash settings can disrupt file access if they are improperly configured. You should check that the settings on each storage export and on the matching HPC Cache client access policies are appropriate.
Root squash prevents requests sent by a local superuser root on the client from being sent to a back-end storage system as root. It reassigns requests from root to a non-privileged user ID (UID) like 'nobody'.
Root squash can be configured in an HPC Cache system in these places:
* At the storage export - You can configure your storage system to reassign incoming requests from root to a non-privileged user ID (UID).
-These two settings should match. That is, if a storage system export squashes root, you should change its HPC Cache client access rule to also squash root. If the settings don't match, you can have access problems when you try to read or write to the back-end storage system through the HPC Cache.
+If your storage system export squashes root, you should update the HPC Cache client access rule for that storage target to also squash root. If not, you can have access problems when you try to read or write to the back-end storage system through the HPC Cache.
-This table illustrates the behavior for different root squash scenarios when a client request is sent as UID 0 (root). The scenarios marked with * are ***not recommended*** because they can cause access problems.
+This table illustrates the behavior for different root squash scenarios when a client request is sent as UID 0 (root). The scenario marked with * is ***not recommended*** because it can cause access problems.
| Setting | UID sent from client | UID sent from HPC Cache | Effective UID on back-end storage | |--|--|--|--| | no root squash | 0 (root) | 0 (root) | 0 (root) |
-| *root squash at HPC Cache only | 0 (root) | 65534 (nobody) | 65534 (nobody) |
+| root squash at HPC Cache only | 0 (root) | 65534 (nobody) | 65534 (nobody) |
| *root squash at NAS storage only | 0 (root) | 0 (root) | 65534 (nobody) | | root squash at HPC Cache and NAS | 0 (root) | 65534 (nobody) | 65534 (nobody) |
This table illustrates the behavior for different root squash scenarios when a c
## Check access on directory paths <!-- previously linked in prereqs article as allow-root-access-on-directory-paths -->
+<!-- check if this is still accurate - 05-2022 -->
For NAS systems that export hierarchical directories, check that Azure HPC Cache has appropriate access to each export level in the path to the files you are using.
iot-central Howto Monitor Devices Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-monitor-devices-azure-cli.md
Last updated 08/30/2021 --+
+ms.tool: azure-cli
+ # This topic applies to device developers and solution builders.
iot-central Howto Upload File Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-upload-file-rest-api.md
+
+ Title: Use the REST API to add upload storage account configuration in Azure IoT Central
+description: How to use the IoT Central REST API to add upload storage account configuration in an application
++ Last updated : 05/12/2022++++++
+# How to use the IoT Central REST API to upload a file
+
+IoT Central lets you upload media and other files from connected devices to cloud storage. You configure the file upload capability in your IoT Central application, and then implement file uploads in your device code. In this article, learn how to:
+
+* Use the REST API to configure the file upload capability in your IoT Central application.
+* Test the file upload by running some sample device code.
+
+The IoT Central REST API lets you:
+
+* Add a file upload storage account configuration
+* Update a file upload storage account configuration
+* Get the file upload storage account configuration
+* Delete the file upload storage account configuration
+
+Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
+
+For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
++
+## Prerequisites
+
+To test the file upload, install the following prerequisites in your local development environment:
+
+* [Node.js](https://nodejs.org/en/download/)
+* [Visual Studio Code](https://code.visualstudio.com/Download)
+
+## Add a file upload storage account configuration
+
+### Create a storage account
+
+To use the Azure Storage REST API, you need a bearer token for the `management.azure.com` resource. To get a bearer token, you can use the Azure CLI:
+
+```azurecli
+az account get-access-token --resource https://management.azure.com
+```
+
+If you don't have a storage account for your blobs, you can use the following request to create one in your subscription:
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}?api-version=2021-09-01
+
+```
+
+The request URI includes the following parameters:
+
+* `subscriptionId` : The ID of the target subscription.
+* `resourceGroupName`: The name of the resource group in your subscription. The name is case insensitive.
+* `accountName` : The name of the storage account within the specified resource group. Storage account names must be between 3 and 24 characters in length and use numbers and lower-case letters only.
+
+The request body has the following required fields:
+
+* `kind` : Type of storage account
+* `location` : The geo-location where the resource lives
+* `sku`: The SKU. Its `name` field (for example, `Premium_LRS`) is required.
+
+```json
+{
+ "kind": "BlockBlobStorage",
+ "location": "West US",
+ "sku": {
+   "name": "Premium_LRS"
+ }
+}
+```
+
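If you prefer to script this step, the following minimal sketch sends the same `PUT` request with Python's `requests` package. The bearer token, subscription, resource group, and account names are placeholders you'd replace with your own values.

```python
import requests

# Placeholder values - replace with your own.
bearer_token = "eyJ0eXAi..."          # from `az account get-access-token`
subscription_id = "your-subscription-id"
resource_group = "yourResourceGroupName"
account_name = "youraccountname"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Storage"
    f"/storageAccounts/{account_name}?api-version=2021-09-01"
)

# Same body as the example above: kind, location, and sku are required.
body = {
    "kind": "BlockBlobStorage",
    "location": "West US",
    "sku": {"name": "Premium_LRS"},
}

response = requests.put(url, json=body,
                        headers={"Authorization": f"Bearer {bearer_token}"})
print(response.status_code, response.text)
```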
+### Create a container
+
+Use the following request to create a container called `fileuploads` in your storage account for your blobs:
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/blobServices/default/containers/fileuploads?api-version=2021-09-01
+```
+
+* `containerName` : Blob container names must be between 3 and 63 characters in length and use numbers, lower-case letters and dash (-) only. Every dash (-) character must be immediately preceded and followed by a letter or number.
+
+Send an empty request body with this request, as shown in the following example:
+
+```json
+{
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "/subscriptions/your-subscription-id/resourceGroups/yourResourceGroupName/providers/Microsoft.Storage/storageAccounts/yourAccountName/blobServices/default/containers/fileuploads",
+ "name": "fileuploads",
+ "type": "Microsoft.Storage/storageAccounts/blobServices/containers"
+}
+```
+
+### Get the storage account keys
+
+Use the following request to retrieve the storage account keys that you need when you configure the upload in IoT Central:
+
+```http
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/listKeys?api-version=2021-09-01
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "keys": [
+ {
+ "creationTime": "2022-05-19T19:22:40.9132287Z",
+ "keyName": "key1",
+ "value": "j3UTm**************==",
+ "permissions": "FULL"
+ },
+ {
+ "creationTime": "2022-05-19T19:22:40.9132287Z",
+ "keyName": "key2",
+ "value": "Nbs3W**************==",
+ "permissions": "FULL"
+ }
+ ]
+}
+```
+
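As an optional illustration, the sketch below calls `listKeys` with Python's `requests` package and assembles the blob connection string used in the next step. The token and resource names are placeholders.

```python
import requests

# Placeholders - reuse the bearer token and names from the previous steps.
bearer_token = "eyJ0eXAi..."
subscription_id = "your-subscription-id"
resource_group = "yourResourceGroupName"
account_name = "youraccountname"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Storage"
    f"/storageAccounts/{account_name}/listKeys?api-version=2021-09-01"
)

keys = requests.post(url, headers={"Authorization": f"Bearer {bearer_token}"}).json()["keys"]

# Build the connection string that the IoT Central file upload configuration expects.
connection_string = (
    "DefaultEndpointsProtocol=https;"
    f"AccountName={account_name};"
    f"AccountKey={keys[0]['value']};"
    f"BlobEndpoint=https://{account_name}.blob.core.windows.net/"
)
print(connection_string)
```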
+### Create the upload configuration
+
+Use the following request to create a file upload blob storage account configuration in your IoT Central application:
+
+```http
+PUT https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+```
+
+The request body has the following fields:
+
+* `account`: The name of the storage account to upload files to.
+* `connectionString`: The connection string to connect to the storage account. Use one of the key `value` fields from the previous `listKeys` response as the `AccountKey` value.
+* `container`: The name of the container inside the storage account. The following example uses the name `fileuploads`.
+* `etag`: ETag to prevent conflict with multiple uploads.
+* `sasTtl`: The amount of time the device's request to upload a file is valid before it expires, expressed as an ISO 8601 duration.
+
+```json
+{
+ "account": "yourAccountName",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;BlobEndpoint=https://yourAccountName.blob.core.windows.net/",
+ "container": "fileuploads",
+ "sasTtl": "PT1H"
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "account": "yourAccountName",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;BlobEndpoint=https://yourAccountName.blob.core.windows.net/",
+ "container": "fileuploads",
+ "sasTtl": "PT1H",
+ "state": "pending",
+ "etag": "\"7502ac89-0000-0300-0000-627eaf100000\""
+
+}
+
+```
+
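For reference, here's a hedged Python sketch of the same `PUT` call against the IoT Central `fileUploads` endpoint. The application subdomain, API token, and connection string are placeholders; see the authorization article linked earlier for how to obtain a token.

```python
import requests

# Placeholders - replace with your own values.
app_subdomain = "your-app-subdomain"
api_token = "SharedAccessSignature sr=..."  # an IoT Central API token (see the authorization article)
connection_string = (
    "DefaultEndpointsProtocol=https;AccountName=yourAccountName;"
    "AccountKey=...;BlobEndpoint=https://yourAccountName.blob.core.windows.net/"
)

url = f"https://{app_subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview"

body = {
    "account": "yourAccountName",
    "connectionString": connection_string,
    "container": "fileuploads",
    "sasTtl": "PT1H",
}

response = requests.put(url, json=body, headers={"Authorization": api_token})
print(response.status_code, response.json())
```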
+## Get the file upload storage account configuration
+
+Use the following request to retrieve details of a file upload blob storage account configuration in your IoT Central application:
++
+```http
+GET https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "account": "yourAccountName",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;BlobEndpoint=https://yourAccountName.blob.core.windows.net/",
+ "container": "yourContainerName",
+ "state": "succeeded",
+ "etag": "\"7502ac89-0000-0300-0000-627eaf100000\""
+
+}
+```
+
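Because the configuration starts in the `pending` state, you may want to poll this `GET` endpoint until it reports `succeeded`. The following minimal sketch (placeholder subdomain and token) shows one way to do that with Python's `requests` package:

```python
import time
import requests

# Placeholders - replace with your own values.
app_subdomain = "your-app-subdomain"
api_token = "SharedAccessSignature sr=..."

url = f"https://{app_subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview"

# Poll the configuration until it leaves the 'pending' state.
while True:
    config = requests.get(url, headers={"Authorization": api_token}).json()
    if config.get("state") != "pending":
        break
    time.sleep(5)

print(config["state"], config.get("container"))
```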
+## Update the file upload storage account configuration
+
+Use the following request to update a file upload blob storage account configuration in your IoT Central application:
+
+```http
+PATCH https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+```
+
+```json
+{
+ "account": "yourAccountName",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;BlobEndpoint=https://yourAccountName.blob.core.windows.net/",
+ "container": "yourContainerName2",
+ "sasTtl": "PT1H"
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+
+{
+ "account": "yourAccountName",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;BlobEndpoint=https://yourAccountName.blob.core.windows.net/",
+ "container": "yourContainerName2",
+ "sasTtl": "PT1H",
+ "state": "succeeded",
+ "etag": "\"7502ac89-0000-0300-0000-627eaf100000\""
+}
+```
+
+## Remove the file upload storage account configuration
+
+Use the following request to delete a storage account configuration:
+
+```http
+DELETE https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+```
+
+## Test file upload
+
+After you [configure file uploads](#add-a-file-upload-storage-account-configuration) in your IoT Central application, you can test it with the sample code. If you haven't already cloned the file upload sample repository, use the following commands to clone it to a suitable location on your local machine and install the dependent packages:
+
+```
+git clone https://github.com/azure-Samples/iot-central-file-upload-device
+cd iotc-file-upload-device
+npm i
+npm build
+```
+
+### Create the device template and import the model
+
+To test the file upload, you run a sample device application. Create a device template for the sample device to use.
+
+1. Open your application in IoT Central UI.
+
+1. Navigate to the **Device Templates** tab in the left pane, select **+ New**:
+
+1. Choose **IoT device** as the template type.
+
+1. On the **Customize** page of the wizard, enter a name such as *File Upload Device Sample* for the device template.
+
+1. On the **Review** page, select **Create**.
+
+1. Select **Import a model** and upload the *FileUploadDeviceDcm.json* manifest file from the folder `iotc-file-upload-device\setup` in the repository you downloaded previously.
+
+1. Select **Publish** to publish the device template.
+
+### Add a device
+
+To add a device to your Azure IoT Central application:
+
+1. Choose **Devices** on the left pane.
+
+1. Select the *File Upload Device Sample* device template which you created earlier.
+
+1. Select + **New** and select **Create**.
+
+1. Select the device that you created, and then select **Connect**.
+
+Copy the values for `ID scope`, `Device ID`, and `Primary key`. You'll use these values in the device sample code.
+
+### Run the sample code
+
+Open the git repository you downloaded in VS Code. Create an *.env* file at the root of your project and add the values you copied previously. The file should look like the following sample:
+
+```
+scopeId=<YOUR_SCOPE_ID>
+deviceId=<YOUR_DEVICE_ID>
+deviceKey=<YOUR_PRIMARY_KEY>
+modelId=dtmi:IoTCentral:IotCentralFileUploadDevice;1
+```
+
+Press F5 to run and debug the sample. In your terminal window, you see that the device is registered and connected to IoT Central:
+
+```
+
+Starting IoT Central device...
+ > Machine: Windows_NT, 8 core, freemem=6674mb, totalmem=16157mb
+Starting device registration...
+DPS registration succeeded
+Connecting the device...
+IoT Central successfully connected device: 7z1xo26yd8
+Sending telemetry: {
+ "TELEMETRY_SYSTEM_HEARTBEAT": 1
+}
+Sending telemetry: {
+ "TELEMETRY_SYSTEM_HEARTBEAT": 1
+}
+Sending telemetry: {
+ "TELEMETRY_SYSTEM_HEARTBEAT": 1
+}
+
+```
+
+The sample project comes with a sample file named *datafile.json*. This is the file that's uploaded when you use the **Upload File** command in your IoT Central application.
+
+To test this, open your application and select the device you created. Select the **Command** tab, and you see a button named **Run**. When you select that button, the IoT Central app calls a direct method on your device to upload the file. You can see this direct method, named *uploadFileCommand*, in the sample code in the */device.ts* file.
+
+Select the **Raw data** tab to verify the file upload status.
++
+You can also make a [REST API](/rest/api/storageservices/list-blobs) call to verify the file upload status in the storage container.
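For example, the following sketch assumes the `azure-storage-blob` Python package and the connection string and container name configured earlier (both placeholders here) to list the uploaded blobs:

```python
from azure.storage.blob import ContainerClient

# Placeholder connection string and container name from the configuration steps above.
connection_string = (
    "DefaultEndpointsProtocol=https;AccountName=yourAccountName;"
    "AccountKey=...;BlobEndpoint=https://yourAccountName.blob.core.windows.net/"
)

container = ContainerClient.from_connection_string(connection_string, "fileuploads")
for blob in container.list_blobs():
    print(blob.name, blob.size)
```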
+
+## Next steps
+
+Now that you've learned how to configure file uploads with the REST API, a suggested next step is to learn [how to create device templates from the IoT Central GUI](howto-set-up-template.md#create-a-device-template).
iot-edge How To Publish Subscribe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-publish-subscribe.md
The following JSON snippet is an example of an authorization policy that explici
When writing your authorization policy, keep in mind: - It requires `$edgeHub` twin schema version 1.2.
+ > [!IMPORTANT]
+ > Once your IoT Edge device is deployed, it currently won't display correctly in the Azure portal with schema version 1.2 (version 1.1 is fine). This is a known bug that will be fixed soon. However, this won't affect your device: it's still connected to IoT Hub, and you can communicate with it at any time by using the Azure CLI.
+ :::image type="content" source="./media/how-to-publish-subscribe/unsupported-1.2-schema.png" alt-text="Screenshot of Azure portal error on the IoT Edge device page.":::
- By default, all operations are denied. - Authorization statements are evaluated in the order that they appear in the JSON definition. It starts by looking at `identities` and then selects the first *allow* or *deny* statements that match the request. If there are conflicts between these statements, the *deny* statement wins. - Several variables (for example, substitutions) can be used in the authorization policy:
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
To find the latest version of Azure IoT Edge, see [Azure IoT Edge releases](http
## Update the security daemon
-The IoT Edge security daemon is a native component that needs to be updated using the package manager on the IoT Edge device.
+The IoT Edge security daemon is a native component that needs to be updated using the package manager on the IoT Edge device. View the [Update the security daemon](how-to-update-iot-edge.md#update-the-security-daemon) tutorial for a walk-through on Linux-based devices.
Check the version of the security daemon running on your device by using the command `iotedge version`. If you're using IoT Edge for Linux on Windows, you need to SSH into the Linux virtual machine to check the version.
iot-edge How To Visual Studio Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-visual-studio-develop-module.md
Last updated 08/24/2021
-# Use Visual Studio 2019 to develop and debug modules for Azure IoT Edge
+# Use Visual Studio 2022 to develop and debug modules for Azure IoT Edge
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-This article shows you how to use Visual Studio 2019 to develop and debug Azure IoT Edge modules.
+This article shows you how to use Visual Studio 2022 to develop and debug Azure IoT Edge modules.
-The Azure IoT Edge Tools for Visual Studio extension provides the following benefits:
+The **Azure IoT Edge Tools for Visual Studio** extension provides the following benefits:
* Create, edit, build, run, and debug IoT Edge solutions and modules on your local development computer.
+* Code your Azure IoT modules in C or C# with the benefits of Visual Studio development.
* Deploy your IoT Edge solution to an IoT Edge device via Azure IoT Hub.
-* Code your Azure IoT modules in C or C# while having all of the benefits of Visual Studio development.
-* Manage IoT Edge devices and modules with UI.
+* Manage IoT Edge devices and modules with the UI.
-This article shows you how to use the Azure IoT Edge Tools for Visual Studio 2019 to develop your IoT Edge modules. You also learn how to deploy your project to an IoT Edge device. Currently, Visual Studio 2019 provides support for modules written in C and C#. The supported device architectures are Windows X64 and Linux X64 or ARM32. For more information about supported operating systems, languages, and architectures, see [Language and architecture support](module-development.md#language-and-architecture-support).
+Visual Studio 2022 provides support for modules written in C and C#. The supported device architectures are Windows x64 and Linux x64 or ARM32, while ARM64 is in preview. For more information about supported operating systems, languages, and architectures, see [Language and architecture support](module-development.md#language-and-architecture-support).
## Prerequisites
-This article assumes that you use a machine running Windows as your development machine. On Windows computers, you can develop either Windows or Linux modules.
+This article assumes that you use a machine running Windows as your development machine.
-* To develop modules with **Windows containers**, use a Windows computer running version 1809/build 17763 or newer.
-* To develop modules with **Linux containers**, use a Windows computer that meets the [requirements for Docker Desktop](https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install).
+* On Windows computers, you can develop either Windows or Linux modules.
-Install Visual Studio on your development machine. Make sure you include the **Azure development** and **Desktop development with C++** workloads in your Visual Studio 2019 installation. You can [Modify Visual Studio 2019](/visualstudio/install/modify-visual-studio?view=vs-2019&preserve-view=true) to add the required workloads.
+ * To develop modules with **Windows containers**, use a Windows computer running version 1809/build 17763 or newer.
+ * To develop modules with **Linux containers**, use a Windows computer that meets the [requirements for Docker Desktop](https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install).
-After your Visual Studio 2019 is ready, you also need the following tools and components:
+* Install Visual Studio on your development machine. Make sure you include the **Azure development** and **Desktop development with C++** workloads in your Visual Studio 2022 installation. Alternatively, you can [Modify Visual Studio 2022](/visualstudio/install/modify-visual-studio?view=vs-2022&preserve-view=true) to add the required workloads, if Visual Studio is already installed on your machine.
-* Download and install [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) from the Visual Studio marketplace to create an IoT Edge project in Visual Studio 2019.
+* Install the Azure IoT Edge Tools either from the Marketplace or from Visual Studio:
- > [!TIP]
- > If you are using Visual Studio 2017, download and install [Azure IoT Edge Tools for VS 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) from the Visual Studio marketplace
+ * Download and install [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs17iotedgetools) from the Visual Studio Marketplace.
+
+ > [!TIP]
+ > If you are using Visual Studio 2019, download and install [Azure IoT Edge Tools for VS 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) from the Visual Studio marketplace
+
+ * Or, in Visual Studio go to **Tools > Get Tools and Features**. The Visual Studio Installer will open. From the **Individual components** tab, select **Azure IoT Edge Tools for VS 2022**, then select **Install** in the lower right of the popup. Close the popup when finished.
+
+ If you only need to update your tools, go to the **Manage Extensions** window, expand **Updates > Visual Studio Marketplace**, select **Azure IoT Edge Tools** then select **Update**.
+
+ After the update is complete, select **Close** and restart Visual Studio.
-* Download and install [Docker Community Edition](https://docs.docker.com/install/) on your development machine to build and run your module images. You'll need to set Docker CE to run in either Linux container mode or Windows container mode, depending on the type of modules you are developing.
+* Download and install [Docker Community Edition](https://docs.docker.com/install/) on your development machine to build and run your module images. Set Docker CE to run in either Linux container mode or Windows container mode, depending on the type of modules you are developing.
-* Set up your local development environment to debug, run, and test your IoT Edge solution by installing the [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/). Install [Python (3.5/3.6/3.7/3.8) and Pip](https://www.python.org/) and then install the **iotedgehubdev** package by running the following command in your terminal. Make sure your Azure IoT EdgeHub Dev Tool version is greater than 0.3.0.
+* Set up your local development environment to debug, run, and test your IoT Edge solution by installing the [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/). Install [Python (3.5/3.6/3.7/3.8) and Pip](https://www.python.org/) and then install the **iotedgehubdev** package by running the following command in your terminal.
```cmd pip install --upgrade iotedgehubdev ```
+
+ > [!TIP]
+ >Make sure your Azure IoT EdgeHub Dev Tool version is greater than 0.3.0. You'll need to have a pre-existing IoT Edge device in the Azure portal and have your connection string ready during setup.
-* Install the Vcpkg library manager, and then install the **azure-iot-sdk-c package** for Windows.
+ You may need to restart Visual Studio to complete the installation.
+
+* Install the **Vcpkg** library manager
```cmd git clone https://github.com/Microsoft/vcpkg
After your Visual Studio 2019 is ready, you also need the following tools and co
bootstrap-vcpkg.bat ```
+ Install the **azure-iot-sdk-c** package for Windows
```cmd vcpkg.exe install azure-iot-sdk-c:x64-windows vcpkg.exe --triplet x64-windows integrate install
After your Visual Studio 2019 is ready, you also need the following tools and co
> [!TIP] > You can use a local Docker registry for prototype and testing purposes instead of a cloud registry.
-* To test your module on a device, you'll need an active IoT hub with at least one IoT Edge device. To quickly create an IoT Edge device for testing, follow the steps in the quickstart for [Linux](quickstart-linux.md) or [Windows](quickstart.md). If you are running IoT Edge daemon on your development machine, you might need to stop EdgeHub and EdgeAgent before you start development in Visual Studio.
-
-### Check your tools version
+* To test your module on a device, you'll need an active IoT Hub with at least one IoT Edge device. You can create an IoT Edge device for testing in the Azure portal or with the CLI:
-1. From the **Extensions** menu, select **Manage Extensions**. Expand **Installed > Tools** and you can find **Azure IoT Edge Tools for Visual Studio** and **Cloud Explorer for Visual Studio**.
+ * Creating one in the [Azure portal](https://portal.azure.com/) is the quickest. From the Azure portal, go to your IoT Hub resource. Select **IoT Edge** from the menu on the left and then select **Add IoT Edge Device**.
-1. Note the installed version. You can compare this version with the latest version on Visual Studio Marketplace ([Cloud Explorer](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.CloudExplorerForVS2019), [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools))
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/create-new-iot-edge-device.png" alt-text="Screenshot of how to add a new I o T Edge device":::
+
+ A new popup called **Create a device** will appear. Add a name to your device (known as the Device ID), then select **Save** in the lower left.
+
+ Finally, confirm that your new device exists in your IoT Hub, from the **Device management > IoT Edge** menu. For more information on creating an IoT Edge device through the Azure portal, read [Create and provision an IoT Edge device on Linux using symmetric keys](how-to-provision-single-device-linux-symmetric.md).
-1. If your version is older than what's available on Visual Studio Marketplace, update your tools in Visual Studio as shown in the following section.
+ * To create an IoT Edge device with the CLI, follow the steps in the quickstart for [Linux](quickstart-linux.md#register-an-iot-edge-device) or [Windows](quickstart.md#register-an-iot-edge-device); registering the device there creates the IoT Edge device.
-> [!NOTE]
-> If you are using Visual Studio 2022, [Cloud Explorer](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer?view=vs-2022&preserve-view=true) is retired. To deploy Azure IoT Edge modules, use [Azure CLI](how-to-deploy-modules-cli.md?view=iotedge-2020-11&preserve-view=true) or [Azure portal](how-to-deploy-modules-portal.md?view=iotedge-2020-11&preserve-view=true).
-
-### Update your tools
-
-1. In the **Manage Extensions** window, expand **Updates > Visual Studio Marketplace**, select **Azure IoT Edge Tools** or **Cloud Explorer for Visual Studio** and select **Update**.
-
-1. After the tools update is downloaded, close Visual Studio to trigger the tools update using the VSIX installer.
-
-1. In the installer, select **OK** to start and then **Modify** to update the tools.
-
-1. After the update is complete, select **Close** and restart Visual Studio.
+ If you are running the IoT Edge daemon on your development machine, you might need to stop EdgeHub and EdgeAgent before you start development in Visual Studio.
## Create an Azure IoT Edge project
-The IoT Edge project template in Visual Studio creates a solution that can be deployed to IoT Edge devices. First you create an Azure IoT Edge solution, and then you generate the first module in that solution. Each IoT Edge solution can contain more than one module.
+The IoT Edge project template in Visual Studio creates a solution that can be deployed to IoT Edge devices. In summary, first you'll create an Azure IoT Edge solution, and then you'll generate the first module in that solution. Each IoT Edge solution can contain more than one module.
+
+In all, we're going to build three projects in our solution: the main project, which contains EdgeAgent and EdgeHub in addition to the temperature sensor module, and then two more IoT Edge modules that you'll add.
> [!TIP]
-> The IoT Edge project structure created by Visual Studio is not the same as in Visual Studio Code.
+> The IoT Edge project structure created by Visual Studio is not the same as the one in Visual Studio Code.
1. In Visual Studio, create a new project.
-1. On the **Create a new project** page, search for **Azure IoT Edge**. Select the project that matches the platform and architecture for your IoT Edge device, and click **Next**.
+1. In the **Create a new project** window, search for **Azure IoT Edge**. Select the project that matches the platform and architecture for your IoT Edge device, and click **Next**.
:::image type="content" source="./media/how-to-visual-studio-develop-module/create-new-project.png" alt-text="Create New Project":::
-1. On the **Configure your new project** page, enter a name for your project and specify the location, then select **Create**.
+1. In the **Configure your new project** window, enter a name for your project and specify the location, then select **Create**.
-1. On the **Add Module** window, select the type of module you want to develop. You can also select **Existing module** to add an existing IoT Edge module to your deployment. Specify your module name and module image repository.
+1. In the **Add Module** window, select the type of module you want to develop. You can also select **Existing module** to add an existing IoT Edge module to your deployment. Specify your module name and module image repository.
- Visual Studio autopopulates the repository URL with **localhost:5000/<module name\>**. If you use a local Docker registry for testing, then **localhost** is fine. If you use Azure Container Registry, then replace **localhost:5000** with the login server from your registry's settings. The login server looks like **_\<registry name\>_.azurecr.io**.The final result should look like **\<*registry name*\>.azurecr.io/_\<module name\>_**.
+ Visual Studio autopopulates the repository URL with **localhost:5000/<module name\>**. If you use a local Docker registry for testing, then **localhost** is fine. If you use Azure Container Registry, then replace **localhost:5000** with the login server from your registry's settings.
+
+ The login server looks like **_\<registry name\>_.azurecr.io**. The final result should look like **\<*registry name*\>.azurecr.io/_\<module name\>_**, for example **my-registry-name.azurecr.io/my-module-name**.
Select **Add** to add your module to the project. ![Add Application and Module](./media/how-to-visual-studio-develop-csharp-module/add-module.png)
+ > [!NOTE]
+ >If you have an existing IoT Edge project, you can still change the repository URL by opening the **module.json** file. The repository URL is located in the 'repository' property of the JSON file.
+ Now you have an IoT Edge project and an IoT Edge module in your Visual Studio solution.
-The module folder contains a file for your module code, named either `program.cs` or `main.c` depending on the language you chose. This folder also contains a file named `module.json` that describes the metadata of your module. Various Docker files provide the information needed to build your module as a Windows or Linux container.
+#### Project structure
+
+Your solution contains a main project folder and a single module folder, both at the project level. The main project folder contains your deployment manifest.
-The project folder contains a list of all the modules included in that project. Right now it should show only one module, but you can add more. For more information about adding modules to a project, see the [Build and debug multiple modules](#build-and-debug-multiple-modules) section later in this article.
+The module project folder contains a file for your module code named either `program.cs` or `main.c` depending on the language you chose. This folder also contains a file named `module.json` that describes the metadata of your module. Various Docker files included here provide the information needed to build your module as a Windows or Linux container.
+#### Deployment manifest of your project
-The project folder also contains a file named `deployment.template.json`. This file is a template of an IoT Edge deployment manifest, which defines all the modules that will run on a device along with how they will communicate with each other. For more information about deployment manifests, see [Learn how to deploy modules and establish routes](module-composition.md). If you open this deployment template, you see that the two runtime modules, **edgeAgent** and **edgeHub** are included, along with the custom module that you created in this Visual Studio project. A fourth module named **SimulatedTemperatureSensor** is also included. This default module generates simulated data that you can use to test your modules, or delete if it's not necessary. To see how the simulated temperature sensor works, view the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).
+The deployment manifest you'll edit is called `deployment.debug.template.json`. This file is a template of an IoT Edge deployment manifest, which defines all the modules that run on a device along with how they communicate with each other. For more information about deployment manifests, see [Learn how to deploy modules and establish routes](module-composition.md).
+
+If you open this deployment template, you see that the two runtime modules, **edgeAgent** and **edgeHub** are included, along with the custom module that you created in this Visual Studio project. A fourth module named **SimulatedTemperatureSensor** is also included. This default module generates simulated data that you can use to test your modules, or delete if it's not necessary. To see how the simulated temperature sensor works, view the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).
### Set IoT Edge runtime version The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets. Currently, the latest stable version is version 1.2. If you're developing modules for devices running the 1.1 long-term support version or the earlier 1.0 version, update the IoT Edge runtime version in Visual Studio to match.
-1. In the Solution Explorer, right-click the name of your project and select **Set IoT Edge runtime version**.
+1. In the Solution Explorer, right-click the name of your main project and select **Set IoT Edge runtime version**.
- :::image type="content" source="./media/how-to-visual-studio-develop-module/set-iot-edge-runtime-version.png" alt-text="Right-click your project name and select set IoT Edge runtime version.":::
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/set-iot-edge-runtime-version.png" alt-text="Screenshot of how to find and select the menu item named 'Set I o T Edge Runtime version'.":::
-1. Use the drop-down menu to choose the runtime version that your IoT Edge devices are running, then select **OK** to save your changes.
+1. Use the drop-down menu to choose the runtime version that your IoT Edge devices are running, then select **OK** to save your changes. If no change was made, select **Cancel** to exit.
-1. Re-generate your deployment manifest with the new runtime version. Right-click the name of your project and select **Generate deployment for IoT Edge**.
+1. If you changed the version, re-generate your deployment manifest by right-clicking the name of your project and select **Generate deployment for IoT Edge**. This will generate a deployment manifest based on your deployment template and will appear in the **config** folder of your Visual Studio project.
-## Develop your module
+## Module infrastructure & development options
When you add a new module, it comes with default code that is ready to be built and deployed to a device so that you can start testing without touching any code. The module code is located within the module folder in a file named `Program.cs` (for C#) or `main.c` (for C).
When you're ready to customize the module template with your own code, use the [
## Set up the iotedgehubdev testing tool
-The IoT edgeHub dev tool provides a local development and debug experience. The tool helps start IoT Edge modules without the IoT Edge runtime so that you can create, develop, test, run, and debug IoT Edge modules and solutions locally. You don't have to push images to a container registry and deploy them to a device for testing.
+The Azure IoT EdgeHub Dev Tool provides a local development and debug experience. The tool helps start IoT Edge modules without the IoT Edge runtime so that you can create, develop, test, run, and debug IoT Edge modules and solutions locally. You don't have to push images to a container registry and deploy them to a device for testing.
For more information, see [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/).
-To initialize the tool, provide an IoT Edge device connection string from IoT Hub.
+To initialize the tool in Visual Studio:
-1. Retrieve the connection string of an IoT Edge device from the Azure portal, the Azure CLI, or the Visual Studio Cloud Explorer.
+1. Retrieve the connection string of your IoT Edge device (found in your IoT Hub) from the [Azure portal](https://portal.azure.com/) or from the Azure CLI.
-1. From the **Tools** menu, select **Azure IoT Edge Tools** > **Setup IoT Edge Simulator**.
+ If using the CLI to retrieve your connection string, use this command, replacing "**[device_id]**" and "**[hub_name]**" with your own values:
+
+    ```azurecli
+ az iot hub device-identity connection-string show --device-id [device_id] --hub-name [hub_name]
+ ```
+
+1. From the **Tools** menu in Visual Studio, select **Azure IoT Edge Tools** > **Setup IoT Edge Simulator**.
1. Paste the connection string and click **OK**.
To initialize the tool, provide an IoT Edge device connection string from IoT Hu
Typically, you'll want to test and debug each module before running it within an entire solution with multiple modules. >[!TIP]
->Make sure you have switched over to the correct Docker container mode, either Linux container mode or Windows container mode, depending on the type of IoT Edge module you are developing. From the Docker Desktop menu, you can toggle between the two types of modes. Select **Switch to Windows containers** to use Windows containers, or select **Switch to Linux containers** to use Linux containers.
+>Depending on the type of IoT Edge module you are developing, you may need to enable the correct Docker container mode: either Linux or Windows. From the Docker Desktop menu, you can toggle between the two modes. Select **Switch to Windows containers** or **Switch to Linux containers**. This article uses Linux containers.
+>
+>:::image type="content" source="./media/how-to-visual-studio-develop-module/system-tray.png" alt-text="Screenshot of how to find and select the menu item named 'Switch to Windows containers'.":::
-1. In **Solution Explorer**, right-click the module folder and select **Set as StartUp Project** from the menu.
+1. In **Solution Explorer**, right-click the module project folder and select **Set as StartUp Project** from the menu.
- ![Set Start-up Project](./media/how-to-visual-studio-develop-csharp-module/module-start-up-project.png)
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/module-start-up-project.png" alt-text="Screenshot of how to set project as startup project.":::
-1. Press **F5** or click the run button in the toolbar to run the module. It may take 10&ndash;20 seconds the first time you do so.
+1. Press **F5** or click the run button in the toolbar to run the module. It may take 10&ndash;20 seconds the first time you do so. Be sure you don't have other Docker containers running that might bind the port you need for this project.
- ![Run Module](./media/how-to-visual-studio-develop-csharp-module/run-module.png)
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/run-module.png" alt-text="Screenshot of how to run a module.":::
-1. You should see a .NET Core console app start if the module has been initialized successfully.
+1. You should see a .NET Core console app window appear if the module has been initialized successfully.
1. Set a breakpoint to inspect the module.
Typically, you'll want to test and debug each module before running it within an
curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}' http://localhost:53000/api/v1/messages ```
- ![Debug Single Module](./media/how-to-visual-studio-develop-csharp-module/debug-single-module.png)
+ :::image type="content" source="./media/how-to-visual-studio-develop-csharp-module/debug-single-module.png" alt-text="Screenshot of the output console, Visual Studio project, and Bash window." lightbox="./media/how-to-visual-studio-develop-csharp-module/debug-single-module.png":::
+
+    The breakpoint should be triggered. You can watch variables in the Visual Studio **Locals** window, which is available while the debugger is running. To open it, go to **Debug** > **Windows** > **Locals**.
- The breakpoint should be triggered. You can watch variables in the Visual Studio **Locals** window.
+    In your Bash or shell window, you should see a `{"message":"accepted"}` confirmation.
+
+ In your .NET console you should see:
+
+ ```dotnetcli
+ IoT Hub module client initialized.
+ Received message: 1, Body: [hello world]
+ ```
> [!TIP] > You can also use [PostMan](https://www.getpostman.com/) or other API tools to send messages instead of `curl`.
Typically, you'll want to test and debug each module before running it within an
After you're done developing a single module, you might want to run and debug an entire solution with multiple modules.
-1. In **Solution Explorer**, add a second module to the solution by right-clicking the project folder. On the menu, select **Add** > **New IoT Edge Module**.
+1. In **Solution Explorer**, add a second module to the solution by right-clicking the main project folder. On the menu, select **Add** > **New IoT Edge Module**.
+
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/add-new-module.png" alt-text="Screenshot of how to add a 'New I o T Edge Module' from the menu." lightbox="./media/how-to-visual-studio-develop-module/add-new-module.png":::
- ![Add a new module to an existing IoT Edge project](./media/how-to-visual-studio-develop-csharp-module/add-new-module.png)
+1. In the **Add module** window, give your new module a name and replace the `localhost:5000` portion of the repository URL with your Azure Container Registry login server, like you did before.
-1. Open the file `deployment.template.json` and you'll see that the new module has been added in the **modules** section. A new route was also added to the **routes** section to send messages from the new module to IoT Hub. If you want to send data from the simulated temperature sensor to the new module, add another route like the following example:
+1. Open the file `deployment.debug.template.json` to see that the new module has been added in the **modules** section. A new route was also added to the **routes** section in `EdgeHub` to send messages from the new module to IoT Hub. To send data from the simulated temperature sensor to the new module, add another route with the following line of `JSON`. Replace `<NewModuleName>` (in two places) with your own module name.
```json "sensorTo<NewModuleName>": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/<NewModuleName>/inputs/input1\")" ```
-1. Right-click the project folder and select **Set as StartUp Project** from the context menu.
+1. Right-click the main project (for example, `IoTEdgeProject`) and select **Set as StartUp Project**.
-1. Create your breakpoints and then press **F5** to run and debug multiple modules simultaneously. You should see multiple .NET Core console app windows, which each window representing a different module.
+1. Create breakpoints in each module and then press **F5** to run and debug multiple modules simultaneously. You should see multiple .NET Core console app windows, with each window representing a different module.
- ![Debug Multiple Modules](./media/how-to-visual-studio-develop-csharp-module/debug-multiple-modules.png)
+ :::image type="content" source="./media/how-to-visual-studio-develop-csharp-module/debug-multiple-modules.png" alt-text="Screenshot of Visual Studio with two output consoles.":::
1. Press **Ctrl + F5** or select the stop button to stop debugging.

## Build and push images
-1. Make sure the IoT Edge project is the start-up project, not one of the individual modules. Select either **Debug** or **Release** as the configuration to build for your module images.
+1. Make sure the main IoT Edge project is the start-up project, not one of the individual modules. Select either **Debug** or **Release** as the configuration to build for your module images.
> [!NOTE] > When choosing **Debug**, Visual Studio uses `Dockerfile.(amd64|windows-amd64).debug` to build Docker images. This includes the .NET Core command-line debugger VSDBG in your container image while building it. For production-ready IoT Edge modules, we recommend that you use the **Release** configuration, which uses `Dockerfile.(amd64|windows-amd64)` without VSDBG.
-1. If you're using a private registry like Azure Container Registry (ACR), use the following Docker command to sign in to it. You can get the username and password from the **Access keys** page of your registry in the Azure portal. If you're using local registry, you can [run a local registry](https://docs.docker.com/registry/deploying/#run-a-local-registry).
+1. If you're using a private registry like Azure Container Registry (ACR), use the following Docker command to sign in to it. You can get the username and password from the **Access keys** page of your registry in the Azure portal.
```cmd docker login -u <ACR username> -p <ACR password> <ACR login server> ```
-1. If you're using a private registry like Azure Container Registry, you need to add your registry login information to the runtime settings found in the file `deployment.template.json`. Replace the placeholders with your actual ACR admin username, password, and registry name.
+1. Let's add the Azure Container Registry login information to the runtime settings found in the file `deployment.debug.template.json`. There are two ways to do this. You can either add your registry credentials to your `.env` file (most secure) or add them directly to your `deployment.debug.template.json` file.
+
+ **Add credentials to your `.env` file:**
+
+    In the Solution Explorer, select the **Show All Files** button. The `.env` file appears. Add your Azure Container Registry username and password to your `.env` file. You can find these credentials on the **Access keys** page of your Azure Container Registry in the Azure portal. The deployment template reads these values through its registry credentials settings, as shown in the sketch after these steps.
+
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/show-env-file.png" alt-text="Screenshot of button that will show all files in the Solution Explorer.":::
+
+ ```env
+ DEFAULT_RT_IMAGE=1.2
+ CONTAINER_REGISTRY_USERNAME_myregistry=<my-registry-name>
+ CONTAINER_REGISTRY_PASSWORD_myregistry=<my-registry-password>
+ ```
+
+ **Add credentials directly to `deployment.debug.template.json`:**
+
+ If you'd rather add your credentials directly to your deployment template, replace the placeholders with your actual ACR admin username, password, and registry name.
```json "settings": {
After you're done developing a single module, you might want to run and debug an
>[!NOTE] >This article uses admin login credentials for Azure Container Registry, which are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
-1. In **Solution Explorer**, right-click the project folder and select **Build and Push IoT Edge Modules** to build and push the Docker image for each module.
+1. If you're using a local registry, you can [run a local registry](https://docs.docker.com/registry/deploying/#run-a-local-registry).
+
+1. Finally, in the **Solution Explorer**, right-click the main project folder and select **Build and Push IoT Edge Modules** to build and push the Docker image for each module. This might take a minute. When you see `Finished Build and Push IoT Edge Modules.` in the Visual Studio **Output** console, you're done.
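For reference, the deployment template consumes the registry credentials from the `.env` file through the **edgeAgent** runtime settings. The following is a minimal sketch of that mapping; the `myregistry` suffix and the address shown here are examples and must match the variable names in your `.env` file and your registry's login server.

```json
"registryCredentials": {
  "myregistry": {
    "username": "$CONTAINER_REGISTRY_USERNAME_myregistry",
    "password": "$CONTAINER_REGISTRY_PASSWORD_myregistry",
    "address": "myregistry.azurecr.io"
  }
}
```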
## Deploy the solution
-In the quickstart article that you used to set up your IoT Edge device, you deployed a module by using the Azure portal. You can also deploy modules using the Cloud Explorer for Visual Studio. You already have a deployment manifest prepared for your scenario, the `deployment.json` file and all you need to do is select a device to receive the deployment.
+In the quickstart article that you used to set up your IoT Edge device, you deployed a module by using the Azure portal. You can also deploy modules by using the Azure CLI. You already have the deployment manifest template that you've been working with throughout this article. Let's generate a deployment manifest from it, then use an Azure CLI command to deploy your modules to your IoT Edge device in Azure.
-1. Open **Cloud Explorer** by clicking **View** > **Cloud Explorer**. Make sure you've logged in to Visual Studio 2019.
+1. Right-click on your main project in Visual Studio Solution Explorer and choose **Generate Deployment for IoT Edge**.
-1. In **Cloud Explorer**, expand your subscription, find your Azure IoT Hub and the Azure IoT Edge device you want to deploy.
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/generate-deployment.png" alt-text="Screenshot of location of the 'generate deployment' menu item.":::
-1. Right-click on the IoT Edge device to create a deployment for it. Navigate to the deployment manifest configured for your platform located in the **config** folder in your Visual Studio solution, such as `deployment.arm32v7.json`.
+1. Go to your local Visual Studio main project folder and look in the `config` folder. The file path might look like this: `C:\Users\<YOUR-USER-NAME>\source\repos\<YOUR-IOT-EDGE-PROJECT-NAME>\config`. Here you'll find the generated deployment manifest, such as `deployment.amd64.debug.json`.
-1. Click the refresh button to see the new modules running along with the **SimulatedTemperatureSensor** module and **$edgeAgent** and **$edgeHub**.
+1. Check your `deployment.amd64.debug.json` file to confirm the `edgeHub` schema version is set to 1.2.
-## View generated data
+ ```json
+ "$edgeHub": {
+ "properties.desired": {
+ "schemaVersion": "1.2",
+ "routes": {
+ "IotEdgeModule2022ToIoTHub": "FROM /messages/modules/IotEdgeModule2022/outputs/* INTO $upstream",
+ "sensorToIotEdgeModule2022": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/IotEdgeModule2022/inputs/input1\")",
+ "IotEdgeModule2022bToIoTHub": "FROM /messages/modules/IotEdgeModule2022b/outputs/* INTO $upstream"
+ },
+ "storeAndForwardConfiguration": {
+ "timeToLiveSecs": 7200
+ }
+ }
+ }
+ ```
+ > [!TIP]
+    > The deployment template for Visual Studio 2022 requires the 1.2 schema version. If you need schema version 1.1 or 1.0, don't change the version in `deployment.debug.template.json`. Instead, generate the deployment, which uses the 1.2 schema by default, and then manually edit the generated manifest, `deployment.amd64.debug.json`, before deploying it to Azure.
+
+ > [!IMPORTANT]
+    > After you deploy with schema version 1.2, your IoT Edge device currently won't display correctly in the Azure portal (version 1.1 displays fine). This is a known bug that will be fixed soon. It doesn't affect your device, which remains connected to IoT Hub and can be reached at any time by using the Azure CLI.
+ >
+ >:::image type="content" source="./media/how-to-publish-subscribe/unsupported-1.2-schema.png" alt-text="Screenshot of Azure portal error on the I o T Edge device page.":::
+
+1. Now let's deploy our manifest with an Azure CLI command. Open the Visual Studio **Developer Command Prompt** and change to the **config** directory.
+
+ ```cmd
+ cd config
+ ```
+
+1. From your **config** folder, run the following deployment command. Replace `[device id]`, `[hub name]`, and `[file path]` with your own values.
-1. To monitor the D2C message for a specific IoT Edge device, select it in your IoT hub in **Cloud Explorer** and then click **Start Monitoring Built-in Event Endpoint** in the **Action** window.
+ ```cmd
+ az iot edge set-modules --device-id [device id] --hub-name [hub name] --content [file path]
+ ```
+
+ For example, your command might look like this:
+
+ ```cmd
+ az iot edge set-modules --device-id my-device-name --hub-name my-iot-hub-name --content deployment.amd64.debug.json
+ ```
+
+1. After running the command, you'll see a confirmation of deployment printed in `JSON` in your command prompt.
+
+### Confirm the deployment to your device
+
+To check that your IoT Edge modules were deployed to Azure, sign in to your device (or virtual machine), for example through SSH or Azure Bastion, and run the IoT Edge list command.
+
+```bash
+ iotedge list
+```
+
+You should see a list of your modules running on your device or virtual machine.
+
+```output
+ NAME STATUS DESCRIPTION CONFIG
+ SimulatedTemperatureSensor running Up a day mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0
+ edgeAgent running Up a day mcr.microsoft.com/azureiotedge-agent:1.2
+ edgeHub running Up a day mcr.microsoft.com/azureiotedge-hub:1.2
+ myIotEdgeModule running Up 2 hours myregistry.azurecr.io/myiotedgemodule:0.0.1-amd64.debug
+ myIotEdgeModule2 running Up 2 hours myregistry.azurecr.io/myiotedgemodule2:0.0.1-amd64.debug
+```
+
+## View generated data
-1. To stop monitoring data, select **Stop Monitoring Built-in Event Endpoint** in the **Action** window.
+To monitor the device-to-cloud (D2C) messages for a specific IoT Edge device, review the [Tutorial: Monitor IoT Edge devices](tutorial-monitor-with-workbooks.md) to get started.
## Next steps
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
Before you put any device in production you should know how you're going to mana
* IoT Edge * CA certificates
-For more information, see [Update the IoT Edge runtime](how-to-update-iot-edge.md). The current methods for updating IoT Edge require physical or SSH access to the IoT Edge device. If you have many devices to update, consider adding the update steps to a script or use an automation tool like Ansible.
+[Device Update for IoT Hub](../iot-hub-device-update/index.yml) (Preview) is a service that enables you to deploy over-the-air updates (OTA) for your IoT Edge devices.
+
+Alternative methods for updating IoT Edge require physical or SSH access to the IoT Edge device. For more information, see [Update the IoT Edge runtime](how-to-update-iot-edge.md). To update multiple devices, consider adding the update steps to a script or use an automation tool like Ansible.
### Use Moby as the container engine
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
The **Deploy-Eflow** command is the main deployment method. The deployment comma
| gpuName | GPU Device name | Name of GPU device to be used for passthrough. | | gpuPassthroughType | **DirectDeviceAssignment**, **ParaVirtualization**, or none (CPU only) | GPU Passthrough type | | gpuCount | Integer value between 1 and the number of the device's GPU cores | Number of GPU devices for the VM. <br><br>**Note**: If using ParaVirtualization, make sure to set gpuCount = 1 |
+| customSsh | None | Determines whether the user wants to use a custom OpenSSH.Client installation. If present, ssh.exe must be available to the EFLOW PSM |
:::moniker-end <!-- end 1.1 -->
The **Deploy-Eflow** command is the main deployment method. The deployment comma
| gpuName | GPU Device name | Name of GPU device to be used for passthrough. | | gpuPassthroughType | **DirectDeviceAssignment**, **ParaVirtualization**, or none (CPU only) | GPU Passthrough type | | gpuCount | Integer value between 1 and the number of the device's GPU cores | Number of GPU devices for the VM. <br><br>**Note**: If using ParaVirtualization, make sure to set gpuCount = 1 |
+| customSsh | None | Determines whether the user wants to use a custom OpenSSH.Client installation. If present, ssh.exe must be available to the EFLOW PSM |
:::moniker-end <!-- end 1.2 -->
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Date | Highlights | | | - | - | - |
-| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).
+| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).<br> Integration with Device Update. For more information, see [Update IoT Edge](how-to-update-iot-edge.md).
| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | [Long-term support plan and supported systems updates](support.md) | | [1.0.10](https://github.com/Azure/azure-iotedge/releases/tag/1.0.10) | Stable | October 2020 | [UploadSupportBundle direct method](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics)<br>[Upload runtime metrics](how-to-access-built-in-metrics.md)<br>[Route priority and time-to-live](module-composition.md#priority-and-time-to-live)<br>[Module startup order](module-composition.md#configure-modules)<br>[X.509 manual provisioning](how-to-provision-single-device-linux-x509.md) | | [1.0.9](https://github.com/Azure/azure-iotedge/releases/tag/1.0.9) | Stable | March 2020 | X.509 auto-provisioning with DPS<br>[RestartModule direct method](how-to-edgeagent-direct-method.md#restart-module)<br>[support-bundle command](troubleshoot.md#gather-debug-information-with-support-bundle-command) |
key-vault Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/built-in-roles.md
Managed HSM local RBAC has several built-in roles. You can assign these roles to
|/keys/deletedKeys/delete||<center>X</center>|||||<center>X</center>| |/keys/backup/action|||<center>X</center>|||<center>X</center>| |/keys/restore/action|||<center>X</center>||||
-|/keys/export/action||<center>X</center>|||||
|/keys/release/action|||<center>X</center>|||| |/keys/import/action|||<center>X</center>|||| |**Key cryptographic operations**|
lab-services Quick Create Lab Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-template.md
Get-AzLabServicesLab -Name $lab
Write-Host "Press [ENTER] to continue..." ```
-To verify educators can use the lab, navigate to the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). For more information about managing labs, see [View all labs](/azure/lab-services/how-to-manage-labs.md#)](how-to-manage-labs.md#view-all-labs).
+To verify educators can use the lab, navigate to the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). For more information about managing labs, see [View all labs](/azure/lab-services/how-to-manage-labs).
## Clean up resources
Alternately, an educator may delete a lab from the Azure Lab Services website: [
For a step-by-step tutorial that guides you through the process of creating a template, see: > [!div class="nextstepaction"]
-> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
logic-apps Logic Apps Exception Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-exception-handling.md
ms.suite: integration -+ Previously updated : 02/18/2021 Last updated : 05/26/2022 # Handle errors and exceptions in Azure Logic Apps
-The way that any integration architecture appropriately handles downtime or issues caused by dependent systems can pose a challenge. To help you create robust and resilient integrations that gracefully handle problems and failures, Logic Apps provides a first-class experience for handling errors and exceptions.
+Appropriately handling downtime or issues caused by dependent systems can pose a challenge for any integration architecture. To help you create robust and resilient integrations that gracefully handle problems and failures, Azure Logic Apps provides a first-class experience for handling errors and exceptions.
<a name="retry-policies"></a> ## Retry policies
-For the most basic exception and error handling, you can use a *retry policy* in any action or trigger where supported, for example, see [HTTP action](../logic-apps/logic-apps-workflow-actions-triggers.md#http-trigger). A retry policy specifies whether and how the action or trigger retries a request when the original request times out or fails, which is any request that results in a 408, 429, or 5xx response. If no other retry policy is used, the default policy is used.
+For the most basic exception and error handling, you can use the *retry policy* when supported on a trigger or action, such as the [HTTP action](logic-apps-workflow-actions-triggers.md#http-trigger). If the trigger or action's original request times out or fails, resulting in a 408, 429, or 5xx response, the retry policy specifies that the trigger or action resend the request per policy settings.
-Here are the retry policy types:
+### Retry policy types
-| Type | Description |
-||-|
-| **Default** | This policy sends up to four retries at *exponentially increasing* intervals, which scale by 7.5 seconds but are capped between 5 and 45 seconds. |
-| **Exponential interval** | This policy waits a random interval selected from an exponentially growing range before sending the next request. |
-| **Fixed interval** | This policy waits the specified interval before sending the next request. |
-| **None** | Don't resend the request. |
+By default, the retry policy is set to the **Default** type.
+
+| Retry policy | Description |
+|--|-|
+| **Default** | This policy sends up to 4 retries at *exponentially increasing* intervals, which scale by 7.5 seconds but are capped between 5 and 45 seconds. For more information, review the [Default](#default) policy type. |
+| **None** | Don't resend the request. For more information, review the [None](#none) policy type. |
+| **Exponential Interval** | This policy waits a random interval, which is selected from an exponentially growing range before sending the next request. For more information, review the [Exponential Interval](#exponential-interval) policy type. |
+| **Fixed Interval** | This policy waits the specified interval before sending the next request. For more information, review the [Fixed Interval](#fixed-interval) policy type. |
|||
-For information about retry policy limits, see [Logic Apps limits and configuration](../logic-apps/logic-apps-limits-and-config.md#http-limits).
+<a name="retry-policy-limits"></a>
-### Change retry policy
+### Retry policy limits
-To select a different retry policy, follow these steps:
+For more information about retry policies, settings, limits, and other options, review [Retry policy limits](logic-apps-limits-and-config.md#retry-policy-limits).
-1. Open your logic app in Logic App Designer.
+### Change retry policy type in the designer
-1. Open the **Settings** for an action or trigger.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. If the action or trigger supports retry policies, under **Retry Policy**, select the type you want.
+1. Based on your [logic app type](logic-apps-overview.md#resource-environment-differences), open the trigger or action's **Settings**.
-Or, you can manually specify the retry policy in the `inputs` section for an action or trigger that supports retry policies. If you don't specify a retry policy, the action uses the default policy.
+ * **Consumption**: On the action shape, open the ellipses menu (**...**), and select **Settings**.
-```json
-"<action-name>": {
- "type": "<action-type>",
+ * **Standard**: On the designer, select the action. On the details pane, select **Settings**.
+
+1. If the trigger or action supports retry policies, under **Retry Policy**, select the policy type that you want.
+
+### Change retry policy type in the code view editor
+
+1. If necessary, confirm whether the trigger or action supports retry policies by completing the earlier steps in the designer.
+
+1. Open your logic app workflow in the code view editor.
+
+1. In the trigger or action definition, add the `retryPolicy` JSON object to that trigger or action's `inputs` object. Otherwise, if no `retryPolicy` object exists, the trigger or action uses the `default` retry policy.
+
+ ```json
"inputs": {
- "<action-specific-inputs>",
+ <...>,
"retryPolicy": { "type": "<retry-policy-type>",
- "interval": "<retry-interval>",
+ // The following properties apply to specific retry policies.
"count": <retry-attempts>,
- "minimumInterval": "<minimum-interval>",
- "maximumInterval": "<maximum-interval>"
+ "interval": "<retry-interval>",
+ "maximumInterval": "<maximum-interval>",
+ "minimumInterval": "<minimum-interval>"
},
- "<other-action-specific-inputs>"
+ <...>
}, "runAfter": {}
-}
-```
+ ```
-*Required*
+ *Required*
-| Value | Type | Description |
-|-||-|
-| <*retry-policy-type*> | String | The retry policy type you want to use: `default`, `none`, `fixed`, or `exponential` |
-| <*retry-interval*> | String | The retry interval where the value must use [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). The default minimum interval is `PT5S` and the maximum interval is `PT1D`. When you use the exponential interval policy, you can specify different minimum and maximum values. |
-| <*retry-attempts*> | Integer | The number of retry attempts, which must be between 1 and 90 |
-||||
+ | Property | Value | Type | Description |
+ |-|-||-|
+ | `type` | <*retry-policy-type*> | String | The retry policy type to use: `default`, `none`, `fixed`, or `exponential` |
+ | `count` | <*retry-attempts*> | Integer | For `fixed` and `exponential` policy types, the number of retry attempts, which is a value from 1 - 90. For more information, review [Fixed Interval](#fixed-interval) and [Exponential Interval](#exponential-interval). |
+ | `interval`| <*retry-interval*> | String | For `fixed` and `exponential` policy types, the retry interval value in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). For the `exponential` policy, you can also specify [optional maximum and minimum intervals](#optional-max-min-intervals). For more information, review [Fixed Interval](#fixed-interval) and [Exponential Interval](#exponential-interval). <br><br>**Consumption**: 5 seconds (`PT5S`) to 1 day (`P1D`). <br>**Standard**: For stateful workflows, 5 seconds (`PT5S`) to 1 day (`P1D`). For stateless workflows, 1 second (`PT1S`) to 1 minute (`PT1M`). |
+ |||||
-*Optional*
+ <a name="optional-max-min-intervals"></a>
-| Value | Type | Description |
-|-||-|
-| <*minimum-interval*> | String | For the exponential interval policy, the smallest interval for the randomly selected interval in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations) |
-| <*maximum-interval*> | String | For the exponential interval policy, the largest interval for the randomly selected interval in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations) |
-||||
+ *Optional*
-Here is more information about the different policy types.
+ | Property | Value | Type | Description |
+ |-|-||-|
+ | `maximumInterval` | <*maximum-interval*> | String | For the `exponential` policy, the largest interval for the randomly selected interval in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). The default value is 1 day (`P1D`). For more information, review [Exponential Interval](#exponential-interval). |
+ | `minimumInterval` | <*minimum-interval*> | String | For the `exponential` policy, the smallest interval for the randomly selected interval in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). The default value is 5 seconds (`PT5S`). For more information, review [Exponential Interval](#exponential-interval). |
+ |||||
-<a name="default-retry"></a>
+<a name="default"></a>
-### Default
+#### Default retry policy
-If you don't specify a retry policy, the action uses the default policy, which is actually an [exponential interval policy](#exponential-interval) that sends up to four retries at exponentially increasing intervals that are scaled by 7.5 seconds. The interval is capped between 5 and 45 seconds.
+If you don't specify a retry policy, the action uses the default policy. The default is actually an [exponential interval policy](#exponential-interval) that sends up to four retries at exponentially increasing intervals, which scales by 7.5 seconds. The interval is capped between 5 and 45 seconds.
-Though not explicitly defined in your action or trigger, here is how the default policy behaves in an example HTTP action:
+Though not explicitly defined in your action or trigger, the following example shows how the default policy behaves in an example HTTP action:
```json "HTTP": {
Though not explicitly defined in your action or trigger, here is how the default
} ```
-### None
+<a name="none"></a>
+
+### None - No retry policy
To specify that the action or trigger doesn't retry failed requests, set the <*retry-policy-type*> to `none`.
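For example, here's a minimal sketch that turns off retries for an HTTP action (the endpoint URI is a placeholder):

```json
"HTTP": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://mynews.example.com/latest",
    "retryPolicy": {
      "type": "none"
    }
  },
  "runAfter": {}
}
```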
-### Fixed interval
+<a name="fixed-interval"></a>
+
+### Fixed interval retry policy
To specify that the action or trigger waits the specified interval before sending the next request, set the <*retry-policy-type*> to `fixed`.
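The following is a minimal sketch that assumes a hypothetical news endpoint and retries a failed GET request two more times, waiting 30 seconds between attempts:

```json
"HTTP": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://mynews.example.com/latest",
    "retryPolicy": {
      "type": "fixed",
      "interval": "PT30S",
      "count": 2
    }
  },
  "runAfter": {}
}
```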
This retry policy attempts to get the latest news two more times after the first
<a name="exponential-interval"></a>
-### Exponential interval
+### Exponential interval retry policy
+
+The exponential interval retry policy specifies that the trigger or action waits a random interval before sending the next request. This random interval is selected from an exponentially growing range. Optionally, you can override the default minimum and maximum intervals by specifying your own minimum and maximum intervals, based on whether you have a [Consumption or Standard logic app workflow](logic-apps-overview.md#resource-environment-differences).
-To specify that the action or trigger waits a random interval before sending the next request, set the <*retry-policy-type*> to `exponential`. The random interval is selected from an exponentially growing range. Optionally, you can also override the default minimum and maximum intervals by specifying your own minimum and maximum intervals.
+| Name | Consumption limit | Standard limit | Notes |
+||-|-|-|
+| Maximum delay | Default: 1 day | Default: 1 hour | To change the default limit in a Consumption logic app workflow, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in a Standard logic app workflow, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Minimum delay | Default: 5 sec | Default: 5 sec | To change the default limit in a Consumption logic app workflow, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in a Standard logic app workflow, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+|||||
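For example, the following is a minimal sketch of an exponential policy that overrides the default minimum and maximum intervals; the endpoint URI and interval values are illustrative only:

```json
"HTTP": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://mynews.example.com/latest",
    "retryPolicy": {
      "type": "exponential",
      "interval": "PT10S",
      "count": 4,
      "minimumInterval": "PT10S",
      "maximumInterval": "PT1H"
    }
  },
  "runAfter": {}
}
```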
**Random variable ranges**
-This table shows how Logic Apps generates a uniform random variable in the specified range for each retry up to and including the number of retries:
+For the exponential interval retry policy, the following table shows the general algorithm that Azure Logic Apps uses to generate a uniform random variable in the specified range for each retry, up to and including the number of retries.
| Retry number | Minimum interval | Maximum interval | |--|||
This table shows how Logic Apps generates a uniform random variable in the speci
<a name="control-run-after-behavior"></a>
-## Catch and handle failures by changing "run after" behavior
+## Manage the "run after" behavior
+
+When you add actions in the workflow designer, you implicitly declare the order to use for running those actions. After an action finishes running, that action is marked with a status such as **Succeeded**, **Failed**, **Skipped**, or **TimedOut**. By default, an action that you add in the designer runs only after the predecessor completes with **Succeeded** status. In an action's underlying definition, the `runAfter` property specifies the predecessor action that must first finish and the statuses permitted for that predecessor before the successor action can run.
+
+When an action throws an unhandled error or exception, the action is marked **Failed**, and any successor action is marked **Skipped**. If this behavior happens for an action that has parallel branches, the Azure Logic Apps engine follows the other branches to determine their completion statuses. For example, if a branch ends with a **Skipped** action, that branch's completion status is based on that skipped action's predecessor status. After the workflow run completes, the engine determines the entire run's status by evaluating all the branch statuses. If any branch ends in failure, the entire workflow run is marked **Failed**.
+
+![Conceptual diagram with examples that show how run statuses are evaluated.](./media/logic-apps-exception-handling/status-evaluation-for-parallel-branches.png)
+
+To make sure that an action can still run despite its predecessor's status, you can change an action's "run after" behavior to handle the predecessor's unsuccessful statuses. That way, the action runs when the predecessor's status is **Succeeded**, **Failed**, **Skipped**, **TimedOut**, or any combination of these statuses.
+
+For example, to run the Office 365 Outlook **Send an email** action after the Excel Online **Add a row into a table** predecessor action is marked **Failed**, rather than **Succeeded**, change the "run after" behavior using either the designer or code view editor.
+
+> [!NOTE]
+>
+> In the designer, the "run after" setting doesn't apply to the action that immediately
+> follows the trigger, because the trigger must run successfully before the first action can run.
+
+<a name="change-run-after-designer"></a>
+
+### Change "run after" behavior in the designer
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open the logic app workflow in the designer.
+
+1. On the action shape, open the ellipses menu (**...**), and select **Configure run after**.
-When you add actions in the Logic App Designer, you implicitly declare the order to use for running those actions. After an action finishes running, that action is marked with a status such as `Succeeded`, `Failed`, `Skipped`, or `TimedOut`. In each action definition, the `runAfter` property specifies the predecessor action that must first finish and the statuses permitted for that predecessor before the successor action can run. By default, an action that you add in the designer runs only after the predecessor completes with `Succeeded` status.
+ ![Screenshot showing Consumption workflow designer and current action with ellipses and "Configure run after" selected.](./media/logic-apps-exception-handling/configure-run-after-consumption.png)
-When an action throws an unhandled error or exception, the action is marked `Failed`, and any successor action is marked `Skipped`. If this behavior happens for an action that has parallel branches, the Logic Apps engine follows the other branches to determine their completion statuses. For example, if a branch ends with a `Skipped` action, that branch's completion status is based on that skipped action's predecessor status. After the logic app run completes, the engine determines the entire run's status by evaluating all the branch statuses. If any branch ends in failure, the entire logic app run is marked `Failed`.
+ The action shape expands and shows the predecessor action for the currently selected action.
-![Examples that show how run statuses are evaluated](./media/logic-apps-exception-handling/status-evaluation-for-parallel-branches.png)
+ ![Screenshot showing Consumption workflow designer, current action, and "run after" status for predecessor action.](./media/logic-apps-exception-handling/predecessor-action-consumption.png)
-To make sure that an action can still run despite its predecessor's status, [customize an action's "run after" behavior](#customize-run-after) to handle the predecessor's unsuccessful statuses.
+1. Expand the predecessor action node to view all the "run after" statuses.
-<a name="customize-run-after"></a>
+ By default, the "run after" status is set to **is successful**. So, the predecessor action must run successfully before the currently selected action can run.
-### Customize "run after" behavior
+ ![Screenshot showing Consumption designer, current action, and default "run after" set to "is successful".](./media/logic-apps-exception-handling/default-run-after-status-consumption.png)
-You can customize an action's "run after" behavior so that the action runs when the predecessor's status is either `Succeeded`, `Failed`, `Skipped`, `TimedOut`, or any of these statuses. For example, to send an email after the Excel Online `Add_a_row_into_a_table` predecessor action is marked `Failed`, rather than `Succeeded`, change the "run after" behavior by following either step:
+1. Change the "run after" behavior to the status that you want. Make sure that you select a new option before you clear the default option. You must always have at least one option selected.
-* In the design view, select the ellipses (**...**) button, and then select **Configure run after**.
+ The following example selects **has failed**.
- ![Configure "run after" behavior for an action](./media/logic-apps-exception-handling/configure-run-after-property-setting.png)
+ ![Screenshot showing Consumption designer, current action, and "run after" set to "has failed".](./media/logic-apps-exception-handling/failed-run-after-status-consumption.png)
- The action shape shows the default status that's required for the predecessor action, which is **Add a row into a table** in this example:
+1. To specify that the current action runs whether the predecessor action is marked as **Failed**, **Skipped**, or **TimedOut**, select the other statuses.
- ![Default "run after" behavior for an action](./media/logic-apps-exception-handling/change-run-after-property-status.png)
+ ![Screenshot showing Consumption designer, current action, and multiple "run after" statuses selected.](./media/logic-apps-exception-handling/run-after-multiple-statuses-consumption.png)
- Change the "run after" behavior to the status that you want, which is **has failed** in this example:
+1. When you're ready, select **Done**.
- ![Change "run after" behavior to "has failed"](./media/logic-apps-exception-handling/run-after-property-status-set-to-failed.png)
+### [Standard](#tab/standard)
- To specify that the action runs whether the predecessor action is marked as `Failed`, `Skipped` or `TimedOut`, select the other statuses:
+1. In the [Azure portal](https://portal.azure.com), open the logic app workflow in the designer.
- ![Change "run after" behavior to have any other status](./media/logic-apps-exception-handling/run-after-property-multiple-statuses.png)
+1. On the designer, select the action shape. On the details pane, select **Run After**.
-* In code view, in the action's JSON definition, edit the `runAfter` property, which follows this syntax:
+ ![Screenshot showing Standard workflow designer and current action details pane with "Run After" selected.](./media/logic-apps-exception-handling/configure-run-after-standard.png)
- ```json
- "<action-name>": {
- "inputs": {
- "<action-specific-inputs>"
- },
- "runAfter": {
- "<preceding-action>": [
- "Succeeded"
- ]
- },
- "type": "<action-type>"
- }
- ```
+ The **Run After** pane shows the predecessor action for the currently selected action.
- For this example, change the `runAfter` property from `Succeeded` to `Failed`:
+ ![Screenshot showing Standard designer, current action, and "run after" status for predecessor action.](./media/logic-apps-exception-handling/predecessor-action-standard.png)
- ```json
- "Send_an_email_(V2)": {
- "inputs": {
- "body": {
- "Body": "<p>Failed to&nbsp;add row to &nbsp;@{body('Add_a_row_into_a_table')?['Terms']}</p>",,
- "Subject": "Add row to table failed: @{body('Add_a_row_into_a_table')?['Terms']}",
- "To": "Sophia.Owen@fabrikam.com"
- },
- "host": {
- "connection": {
- "name": "@parameters('$connections')['office365']['connectionId']"
- }
- },
- "method": "post",
- "path": "/v2/Mail"
- },
- "runAfter": {
- "Add_a_row_into_a_table": [
- "Failed"
- ]
- },
- "type": "ApiConnection"
- }
- ```
+1. Expand the predecessor action node to view all the "run after" statuses.
- To specify that the action runs whether the predecessor action is marked as `Failed`, `Skipped` or `TimedOut`, add the other statuses:
+ By default, the "run after" status is set to **is successful**. So, the predecessor action must run successfully before the currently selected action can run.
- ```json
- "runAfter": {
- "Add_a_row_into_a_table": [
- "Failed", "Skipped", "TimedOut"
- ]
- },
- ```
+ ![Screenshot showing Standard designer, current action, and default "run after" set to "is successful".](./media/logic-apps-exception-handling/change-run-after-status-standard.png)
+
+1. Change the "run after" behavior to the status that you want. Make sure that you select a new option before you clear the default option. You must always have at least one option selected.
+
+ The following example selects **has failed**.
+
+ ![Screenshot showing Standard designer, current action, and "run after" set to "has failed".](./media/logic-apps-exception-handling/failed-run-after-status-standard.png)
+
+1. To specify that the current action runs whether the predecessor action is marked as **Failed**, **Skipped**, or **TimedOut**, select the other statuses.
+
+ ![Screenshot showing Standard designer, current action, and multiple "run after" statuses selected.](./media/logic-apps-exception-handling/run-after-multiple-statuses-standard.png)
+
+1. To require that more than one predecessor action runs, each with their own "run after" statuses, expand the **Select actions** list. Select the predecessor actions that you want, and specify their required "run after" statuses.
+
+ ![Screenshot showing Standard designer, current action, and multiple predecessor actions available.](./media/logic-apps-exception-handling/multiple-predecessor-actions-standard.png)
+
+1. When you're ready, select **Done**.
+++
+### Change "run after" behavior in the code view editor
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the code view editor.
+
+1. In the action's JSON definition, edit the `runAfter` property, which has the following syntax:
+
+ ```json
+ "<action-name>": {
+ "inputs": {
+ "<action-specific-inputs>"
+ },
+ "runAfter": {
+ "<preceding-action>": [
+ "Succeeded"
+ ]
+ },
+ "type": "<action-type>"
+ }
+ ```
+
+1. For this example, change the `runAfter` property from `Succeeded` to `Failed`:
+
+ ```json
+ "Send_an_email_(V2)": {
+ "inputs": {
+ "body": {
+ "Body": "<p>Failed to add row to table: @{body('Add_a_row_into_a_table')?['Terms']}</p>",
+ "Subject": "Add row to table failed: @{body('Add_a_row_into_a_table')?['Terms']}",
+ "To": "Sophia.Owen@fabrikam.com"
+ },
+ "host": {
+ "connection": {
+ "name": "@parameters('$connections')['office365']['connectionId']"
+ }
+ },
+ "method": "post",
+ "path": "/v2/Mail"
+ },
+ "runAfter": {
+ "Add_a_row_into_a_table": [
+ "Failed"
+ ]
+ },
+ "type": "ApiConnection"
+ }
+ ```
+
+1. To specify that the action runs whether the predecessor action is marked as `Failed`, `Skipped`, or `TimedOut`, add the other statuses:
+
+ ```json
+ "runAfter": {
+ "Add_a_row_into_a_table": [
+ "Failed", "Skipped", "TimedOut"
+ ]
+ },
+ ```
<a name="scopes"></a> ## Evaluate actions with scopes and their results
-Similar to running steps after individual actions with the `runAfter` property, you can group actions together inside a [scope](../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md). You can use scopes when you want to logically group actions together, assess the scope's aggregate status, and perform actions based on that status. After all the actions in a scope finish running, the scope itself gets its own status.
+Similar to running steps after individual actions with the "run after" setting, you can group actions together inside a [scope](logic-apps-control-flow-run-steps-group-scopes.md). You can use scopes when you want to logically group actions together, assess the scope's aggregate status, and perform actions based on that status. After all the actions in a scope finish running, the scope itself gets its own status.
-To check a scope's status, you can use the same criteria that you use to check a logic app's run status, such as `Succeeded`, `Failed`, and so on.
+To check a scope's status, you can use the same criteria that you use to check a workflow run status, such as **Succeeded**, **Failed**, and so on.
-By default, when all the scope's actions succeed, the scope's status is marked `Succeeded`. If the final action in a scope results as `Failed` or `Aborted`, the scope's status is marked `Failed`.
+By default, when all the scope's actions succeed, the scope's status is marked **Succeeded**. If the final action in a scope is marked **Failed** or **Aborted**, the scope's status is marked **Failed**.
-To catch exceptions in a `Failed` scope and run actions that handle those errors, you can use the `runAfter` property for that `Failed` scope. That way, if *any* actions in the scope fail, and you use the `runAfter` property for that scope, you can create a single action to catch failures.
+To catch exceptions in a **Failed** scope and run actions that handle those errors, you can use the "run after" setting for that **Failed** scope. That way, if *any* action in the scope fails, you can create a single action after the scope to catch the failures.
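For example, the following sketch shows a hypothetical handler action that runs only when a scope named **My_Scope** fails, and captures the scope's results for later processing:

```json
"Get_scope_failures": {
  "type": "Compose",
  "inputs": "@result('My_Scope')",
  "runAfter": {
    "My_Scope": [
      "Failed"
    ]
  }
}
```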
-For limits on scopes, see [Limits and config](../logic-apps/logic-apps-limits-and-config.md).
+For limits on scopes, see [Limits and config](logic-apps-limits-and-config.md).
<a name="get-results-from-failures"></a> ### Get context and results for failures
-Although catching failures from a scope is useful, you might also want context to help you understand exactly which actions failed plus any errors or status codes that were returned. The [`result()` function](../logic-apps/workflow-definition-language-functions-reference.md#result) returns the results from the top-level actions in a scoped action by accepting a single parameter, which is the scope's name, and returning an array that contains the results from those first-level actions. These action objects include the same attributes as those returned by the `actions()` function, such as the action's start time, end time, status, inputs, correlation IDs, and outputs.
+Although catching failures from a scope is useful, you might also want more context to help you identify exactly which actions failed, plus any errors or status codes that were returned. The [`result()` function](workflow-definition-language-functions-reference.md#result) returns the results from the top-level actions in a scoped action. This function accepts the scope's name as a single parameter, and returns an array with the results from those top-level actions. These action objects have the same attributes as those returned by the `actions()` function, such as the action's start time, end time, status, inputs, correlation IDs, and outputs.
> [!NOTE]
-> The `result()` function returns the results from *only* the first-level actions and not from deeper nested actions such as switch or condition actions.
+>
+> The `result()` function returns the results *only* from the top-level actions
+> and not from deeper nested actions such as switch or condition actions.
-To get context about the actions that failed in a scope, you can use the `@result()` expression with the scope's name and the `runAfter` property. To filter down the returned array to actions that have `Failed` status, you can add the [**Filter Array** action](logic-apps-perform-data-operations.md#filter-array-action). To run an action for a returned failed action, take the returned filtered array and use a [**For each** loop](../logic-apps/logic-apps-control-flow-loops.md).
+To get context about the actions that failed in a scope, you can use the `@result()` expression with the scope's name and the "run after" setting. To filter down the returned array to actions that have **Failed** status, you can add the [**Filter Array** action](logic-apps-perform-data-operations.md#filter-array-action). To run an action for a returned failed action, take the returned filtered array and use a [**For each** loop](logic-apps-control-flow-loops.md).
-Here's an example, followed by a detailed explanation, that sends an HTTP POST request with the response body for any actions that failed within the scope action named "My_Scope":
+The following JSON example sends an HTTP POST request with the response body for any actions that failed within the scope action named **My_Scope**. A detailed explanation follows the example.
```json "Filter_array": {
Here's an example, followed by a detailed explanation, that sends an HTTP POST r
} ```
-Here's a detailed walkthrough that describes what happens in this example:
+The following steps describe what happens in this example:
-1. To get the result from all actions inside "My_Scope", the **Filter Array** action uses this filter expression: `@result('My_Scope')`
+1. To get the result from all actions inside **My_Scope**, the **Filter Array** action uses this filter expression: `@result('My_Scope')`
-1. The condition for **Filter Array** is any `@result()` item that has a status equal to `Failed`. This condition filters the array that has all the action results from "My_Scope" down to an array with only the failed action results.
+1. The condition for **Filter Array** is any `@result()` item that has a status equal to `Failed`. This condition filters the array that has all the action results from **My_Scope** down to an array with only the failed action results.
1. Perform a `For_each` loop action on the *filtered array* outputs. This step performs an action for each failed action result that was previously filtered.
To perform different exception handling patterns, you can use the expressions pr
## Set up Azure Monitor logs
-The previous patterns are great way to handle errors and exceptions within a run, but you can also identify and respond to errors independent of the run itself. [Azure Monitor](../azure-monitor/overview.md) provides a simple way to send all workflow events, including all run and action statuses, to a [Log Analytics workspace](../azure-monitor/logs/data-platform-logs.md), [Azure storage account](../storage/blobs/storage-blobs-overview.md), or [Azure Event Hubs](../event-hubs/event-hubs-about.md).
+The previous patterns are useful ways to handle errors and exceptions that happen within a run. However, you can also identify and respond to errors that happen independently from the run. [Azure Monitor](../azure-monitor/overview.md) provides a streamlined way to send all workflow events, including all run and action statuses, to a destination. For example, you can send events to a [Log Analytics workspace](../azure-monitor/logs/data-platform-logs.md), [Azure storage account](../storage/blobs/storage-blobs-overview.md), or [Azure Event Hubs](../event-hubs/event-hubs-about.md).
To evaluate run statuses, you can monitor the logs and metrics, or publish them into any monitoring tool that you prefer. One potential option is to stream all the events through Event Hubs into [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/). In Stream Analytics, you can write live queries based on any anomalies, averages, or failures from the diagnostic logs. You can use Stream Analytics to send information to other data sources, such as queues, topics, SQL, Azure Cosmos DB, or Power BI. ## Next steps
-* [See how a customer builds error handling with Azure Logic Apps](../logic-apps/logic-apps-scenario-error-and-exception-handling.md)
-* [Find more Logic Apps examples and scenarios](../logic-apps/logic-apps-examples-and-scenarios.md)
+* [See how a customer builds error handling with Azure Logic Apps](logic-apps-scenario-error-and-exception-handling.md)
+* [Find more Azure Logic Apps examples and scenarios](logic-apps-examples-and-scenarios.md)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
The following table lists the values for a single workflow run:
| Name | Multi-tenant | Single-tenant | Integration service environment | Notes | ||--|||-|
-| Run history retention in storage | 90 days | 90 days <br>(Default) | 366 days | The amount of time to keep a workflow's run history in storage after a run starts. <p><p>**Note**: If the workflow's run duration exceeds the retention limit, that run is removed from the run history in storage. If a run isn't immediately removed after reaching the retention limit, the run is removed within 7 days. <p><p>Whether a run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <p><p>For more information, review [Change duration and run history retention in storage](#change-retention). |
+| Run history retention in storage | 90 days | 90 days <br>(Default) | 366 days | The amount of time to keep a workflow's run history in storage after a run starts. <p><p>**Note**: If the workflow's run duration exceeds the retention limit, this run is removed from the run history in storage. If a run isn't immediately removed after reaching the retention limit, the run is removed within 7 days. <p><p>Whether a run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <p><p>For more information, review [Change duration and run history retention in storage](#change-retention). |
| Run duration | 90 days | - Stateful workflow: 90 days <br>(Default) <p><p>- Stateless workflow: 5 min <br>(Default) | 366 days | The amount of time that a workflow can continue running before forcing a timeout. <p><p>The run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. <p>**Important**: Make sure the run duration value is always less than or equal to the run history retention in storage value. Otherwise, run histories might be deleted before the associated jobs are complete. <p><p>For more information, review [Change run duration and history retention in storage](#change-duration). | | Recurrence interval | - Min: 1 sec <p><p>- Max: 500 days | - Min: 1 sec <p><p>- Max: 500 days | - Min: 1 sec <p><p>- Max: 500 days || ||||||
For more information about your logic app resource definition, review [Overview:
Azure Logic Apps supports write operations, including inserts and updates, through the on-premises data gateway. However, these operations have [limits on their payload size](/data-integration/gateway/service-gateway-onprem#considerations).
+<a name="retry-policy-limits"></a>
+
+## Retry policy limits
+
+The following table lists the retry policy limits for a trigger or action, based on whether you have a [Consumption or Standard logic app workflow](logic-apps-overview.md#resource-environment-differences).
+
+| Name | Consumption limit | Standard limit | Notes |
+||-|-|-|
+| Retry attempts | - Default: 4 attempts <br> - Max: 90 attempts | - Default: 4 attempts | To change the default limit in Consumption logic app workflows, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in Standard logic app workflows, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Retry interval | None | Default: 7 sec | To change the default limit in Consumption logic app workflows, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in Standard logic app workflows, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+|||||
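To show where these limits apply, here's a minimal, illustrative sketch, assuming a hypothetical HTTP action in a Consumption workflow definition, of a retry policy set on the action's inputs. The `count` and `interval` values are governed by the limits in the preceding table:

```json
"HTTP_call_example": {
   "type": "Http",
   "inputs": {
      "method": "GET",
      "uri": "<endpoint-URL>",
      "retryPolicy": {
         "type": "fixed",
         "count": 4,
         "interval": "PT7S"
      }
   },
   "runAfter": {}
}
```

The `interval` value uses ISO 8601 duration format, so `PT7S` means seven seconds.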
+ <a name="variables-action-limits"></a> ## Variables action limits
By default, the HTTP action and APIConnection actions follow the [standard async
| Request URL character limit | 16,384 characters | | ||||
-<a name="retry-policy-limits"></a>
-
-### Retry policy
-
-| Name | Multi-tenant limit | Single-tenant limit | Notes |
-||--||-|
-| Retry attempts | - Default: 4 attempts <br> - Max: 90 attempts | - Default: 4 attempts | To change the default limit in the multi-tenant service, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Retry interval | None | Default: 7 sec | To change the default limit in the multi-tenant service, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Retry max delay | Default: 1 day | Default: 1 hour | To change the default limit in the multi-tenant service, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Retry min delay | Default: 5 sec | Default: 5 sec | To change the default limit in the multi-tenant service, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-|||||
- <a name="authentication-limits"></a> ### Authentication limits
logic-apps Quickstart Logic Apps Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-powershell.md
ms.suite: integration -
+ms.tool: azure-powershell
+ Last updated 05/03/2022
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
convertFromUtc('<timestamp>', '<destinationTimeZone>', '<format>'?)
| Return value | Type | Description | | | - | -- |
-| <*converted-timestamp*> | String | The timestamp converted to the target time zone |
+| <*converted-timestamp*> | String | The timestamp converted to the target time zone without the timezone UTC offset. |
|||| *Example 1*
machine-learning How To Manage Workspace Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-terraform.md
Last updated 01/05/2022
+ms.tool: terraform
# Manage Azure Machine Learning workspaces using Terraform
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
First you'll install the v2 SDK on your compute instance:
1. Now on the terminal, run the command: ```
- git clone --depth 1 https://github.com/Azure/azureml-examples --branch sdk-preview
+ git clone --depth 1 https://github.com/Azure/azureml-examples
``` 1. On the left, select **Notebooks**.
Before creating the pipeline, you'll set up the resources the pipeline will use:
Before we dive in the code, you'll need to connect to your Azure ML workspace. The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. -
-```python
-# handle to the workspace
-from azure.ai.ml import MLClient
-
-# Authentication package
-from azure.identity import DefaultAzureCredential
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=import-mlclient)]
In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find your Subscription ID: 1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
In the next cell, enter your Subscription ID, Resource Group name and Workspace
:::image type="content" source="media/tutorial-pipeline-python-sdk/find-info.png" alt-text="Screenshot shows how to find values needed for your code.":::
-```python
-# get a handle to the workspace
-ml_client = MLClient(
- DefaultAzureCredential(),
- subscription_id="<SUBSCRIPTION_ID>",
- resource_group_name="<RESOURCE_GROUP>",
- workspace_name="<AML_WORKSPACE_NAME>",
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=ml_client)]
The result is a handle to the workspace that you'll use to manage other resources and jobs.
The data you use for training is usually in one of the locations below:
Azure ML uses a `Data` object to register a reusable definition of data, and consume data within a pipeline. In the section below, you'll consume some data from a web URL as one example. Data from other sources can be created as well.
-```python
-from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
-web_path = "https://archive.ics.uci.edu/ml/machine-learning-databases/00350/default%20of%20credit%20card%20clients.xls"
-
-credit_data = Data(
- name="creditcard_defaults",
- path=web_path,
- type=AssetTypes.URI_FILE,
- description="Dataset for credit card defaults",
- tags={"source_type": "web", "source": "UCI ML Repo"},
- version='1.0.0'
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=credit_data)]
This code just created a `Data` asset, ready to be consumed as an input by the pipeline that you'll define in the next sections. In addition, you can register the dataset to your workspace so it becomes reusable across pipelines.
Registering the dataset will enable you to:
Since this is the first time that you're making a call to the workspace, you may be asked to authenticate. Once the authentication is complete, you'll then see the dataset registration completion message. -
-```python
-credit_data = ml_client.data.create_or_update(credit_data)
-print(
- f"Dataset with name {credit_data.name} was registered to workspace, the dataset version is {credit_data.version}"
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=update-credit_data)]
In the future, you can fetch the same dataset from the workspace using `credit_dataset = ml_client.data.get("<DATA ASSET NAME>", version='<VERSION>')`.
+## Create a compute resource to run your pipeline
+
+Each step of an Azure ML pipeline can use a different compute resource for running the specific job of that step. The compute can be a single- or multi-node machine with a Linux or Windows OS, or a specific compute fabric like Spark.
+
+In this section, you'll provision a Linux compute cluster.
+
+For this tutorial, you only need a basic cluster, so we'll create an Azure ML compute cluster that uses the Standard_DS3_v2 VM size with 4 vCPU cores and 14 GB of RAM.
+
+> [!TIP]
+> If you already have a compute cluster, replace "cpu-cluster" in the code below with the name of your cluster. This will keep you from creating another one.
+
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=cpu_cluster)]
## Create a job environment for pipeline steps
So far, you've created a development environment on the compute instance, your d
In this example, you'll create a conda environment for your jobs, using a conda yaml file. First, create a directory to store the file in. -
-```python
-import os
-dependencies_dir = "./dependencies"
-os.makedirs(dependencies_dir, exist_ok=True)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=dependencies_dir)]
Now, create the file in the dependencies directory.
-```python
-%%writefile {dependencies_dir}/conda.yml
-name: model-env
-channels:
- - conda-forge
-dependencies:
- - python=3.8
- - numpy=1.21.2
- - pip=21.2.4
- - scikit-learn=0.24.2
- - scipy=1.7.1
- - pandas>=1.1,<1.2
- - pip:
- - azureml-defaults==1.38.0
- - azureml-mlflow==1.38.0
- - inference-schema[numpy-support]==1.3.0
- - joblib==1.0.1
- - xlrd==2.0.1
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=conda.yml)]
The specification contains some usual packages that you'll use in your pipeline (numpy, pip), together with some Azure ML-specific packages (azureml-defaults, azureml-mlflow).
The Azure ML packages aren't mandatory to run Azure ML jobs. However, adding the
Use the *yaml* file to create and register this custom environment in your workspace:
-```Python
-from azure.ai.ml.entities import Environment
-
-custom_env_name = "aml-scikit-learn"
-
-pipeline_job_env = Environment(
- name=custom_env_name,
- description="Custom environment for Credit Card Defaults pipeline",
- tags={"scikit-learn": "0.24.2", "azureml-defaults": "1.38.0"},
- conda_file=os.path.join(dependencies_dir, "conda.yml"),
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
- version="1.0.0"
-)
-pipeline_job_env = ml_client.environments.create_or_update(pipeline_job_env)
-
-print(
- f"Environment with name {pipeline_job_env.name} is registered to workspace, the environment version is {pipeline_job_env.version}"
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=custom_env_name)]
## Build the training pipeline
Let's start by creating the first component. This component handles the preproce
First create a source folder for the data_prep component:
-```python
-import os
-
-data_prep_src_dir = "./components/data_prep"
-os.makedirs(data_prep_src_dir, exist_ok=True)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=data_prep_src_dir)]
This script performs the simple task of splitting the data into train and test datasets. Azure ML mounts datasets as folders on the compute targets, so we created an auxiliary `select_first_file` function to access the data file inside the mounted input folder. [MLflow](https://mlflow.org/docs/latest/tracking.html) will be used to log the parameters and metrics during our pipeline run.
-```python
-%%writefile {data_prep_src_dir}/data_prep.py
-import os
-import argparse
-import pandas as pd
-from sklearn.model_selection import train_test_split
-import logging
-import mlflow
--
-def main():
- """Main function of the script."""
-
- # input and output arguments
- parser = argparse.ArgumentParser()
- parser.add_argument("--data", type=str, help="path to input data")
- parser.add_argument("--test_train_ratio", type=float, required=False, default=0.25)
- parser.add_argument("--train_data", type=str, help="path to train data")
- parser.add_argument("--test_data", type=str, help="path to test data")
- args = parser.parse_args()
-
- # Start Logging
- mlflow.start_run()
-
- print(" ".join(f"{k}={v}" for k, v in vars(args).items()))
-
- print("input data:", args.data)
-
- credit_df = pd.read_excel(args.data, header=1, index_col=0)
-
- mlflow.log_metric("num_samples", credit_df.shape[0])
- mlflow.log_metric("num_features", credit_df.shape[1] - 1)
-
- credit_train_df, credit_test_df = train_test_split(
- credit_df,
- test_size=args.test_train_ratio,
- )
-
- # output paths are mounted as folder, therefore, we are adding a filename to the path
- credit_train_df.to_csv(os.path.join(args.train_data, "data.csv"), index=False)
-
- credit_test_df.to_csv(os.path.join(args.test_data, "data.csv"), index=False)
-
- # Stop Logging
- mlflow.end_run()
--
-if __name__ == "__main__":
- main()
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=def-main)]
Now that you have a script that can perform the desired task, create an Azure ML Component from it. You'll use the general purpose **CommandComponent** that can run command line actions. This command line action can directly call system commands or run a script. The inputs/outputs are specified on the command line via the `${{ ... }}` notation.
-```python
-%%writefile {data_prep_src_dir}/data_prep.yml
-# <component>
-name: data_prep_credit_defaults
-display_name: Data preparation for training
-# version: 1 # Not specifying a version will automatically update the version
-type: command
-inputs:
- data:
- type: uri_folder
- test_train_ratio:
- type: number
-outputs:
- train_data:
- type: uri_folder
- test_data:
- type: uri_folder
-code: .
-environment:
- # for this step, we'll use an AzureML curate environment
- azureml:aml-scikit-learn:1.0.0
-command: >-
- python data_prep.py
- --data ${{inputs.data}} --test_train_ratio ${{inputs.test_train_ratio}}
- --train_data ${{outputs.train_data}} --test_data ${{outputs.test_data}}
-# </component>
-```
-
-Once the `yaml` file and the script are ready, you can create your component using `load_component()`.
-
-```python
-# importing the Component Package
-from azure.ai.ml.entities import load_component
-
-# Loading the component from the yml file
-data_prep_component = load_component(yaml_file=os.path.join(data_prep_src_dir, "data_prep.yml"))
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=data_prep_component)]
Optionally, register the component in the workspace for future re-use.
-```python
-data_prep_component = ml_client.create_or_update(data_prep_component)
-
-print(
- f"Component {data_prep_component.name} with Version {data_prep_component.version} is registered"
-)
-```
- ## Create component 2: training (using yaml definition) The second component that you'll create will consume the training and test data, train a tree based model and return the output model. You'll use Azure ML logging capabilities to record and visualize the learning progress.
You used the `CommandComponent` class to create your first component. This time
Create the directory for this component:
-```python
-import os
-train_src_dir = "./components/train"
-os.makedirs(train_src_dir, exist_ok=True)
-```
-
-Create the training script in the directory:
-
-```python
-%%writefile {train_src_dir}/train.py
-import argparse
-from sklearn.ensemble import GradientBoostingClassifier
-from sklearn.metrics import classification_report
-from azureml.core.model import Model
-from azureml.core import Run
-import os
-import pandas as pd
-import joblib
-import mlflow
--
-def select_first_file(path):
- """Selects first file in folder, use under assumption there is only one file in folder
- Args:
- path (str): path to directory or file to choose
- Returns:
- str: full path of selected file
- """
- files = os.listdir(path)
- return os.path.join(path, files[0])
--
-# Start Logging
-mlflow.start_run()
-
-# enable autologging
-mlflow.sklearn.autolog()
-
-# This line creates a handles to the current run. It is used for model registration
-run = Run.get_context()
-
-os.makedirs("./outputs", exist_ok=True)
--
-def main():
- """Main function of the script."""
-
- # input and output arguments
- parser = argparse.ArgumentParser()
- parser.add_argument("--train_data", type=str, help="path to train data")
- parser.add_argument("--test_data", type=str, help="path to test data")
- parser.add_argument("--n_estimators", required=False, default=100, type=int)
- parser.add_argument("--learning_rate", required=False, default=0.1, type=float)
- parser.add_argument("--registered_model_name", type=str, help="model name")
- parser.add_argument("--model", type=str, help="path to model file")
- args = parser.parse_args()
-
- # paths are mounted as folder, therefore, we are selecting the file from folder
- train_df = pd.read_csv(select_first_file(args.train_data))
-
- # Extracting the label column
- y_train = train_df.pop("default payment next month")
-
- # convert the dataframe values to array
- X_train = train_df.values
-
- # paths are mounted as folder, therefore, we are selecting the file from folder
- test_df = pd.read_csv(select_first_file(args.test_data))
-
- # Extracting the label column
- y_test = test_df.pop("default payment next month")
-
- # convert the dataframe values to array
- X_test = test_df.values
-
- print(f"Training with data of shape {X_train.shape}")
-
- clf = GradientBoostingClassifier(
- n_estimators=args.n_estimators, learning_rate=args.learning_rate
- )
- clf.fit(X_train, y_train)
-
- y_pred = clf.predict(X_test)
-
- print(classification_report(y_test, y_pred))
-
- # setting the full path of the model file
- model_file = os.path.join(args.model, "model.pkl")
- with open(model_file, "wb") as mf:
- joblib.dump(clf, mf)
-
- # Registering the model to the workspace
- model = Model.register(
- run.experiment.workspace,
- model_name=args.registered_model_name,
- model_path=model_file,
- tags={"type": "sklearn.GradientBoostingClassifier"},
- description="Model created in Azure ML on credit card defaults dataset",
- )
-
- # Stop Logging
- mlflow.end_run()
--
-if __name__ == "__main__":
- main()
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=train_src_dir)]
As you can see in this training script, once the model is trained, the model file is saved and registered to the workspace. Now you can use the registered model in inferencing endpoints.
For the environment of this step, you'll use one of the built-in (curated) Azure
First, create the *yaml* file describing the component:
-```python
-%%writefile {train_src_dir}/train.yml
-# <component>
-name: train_credit_defaults_model
-display_name: Train Credit Defaults Model
-# version: 1 # Not specifying a version will automatically update the version
-type: command
-inputs:
- train_data:
- type: uri_folder
- test_data:
- type: uri_folder
- learning_rate:
- type: number
- registered_model_name:
- type: string
-outputs:
- model:
- type: uri_folder
-code: .
-environment:
- # for this step, we'll use an AzureML curate environment
- azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:21
-command: >-
- python train.py
- --train_data ${{inputs.train_data}}
- --test_data ${{inputs.test_data}}
- --learning_rate ${{inputs.learning_rate}}
- --registered_model_name ${{inputs.registered_model_name}}
- --model ${{outputs.model}}
-# </component>
-
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=train.yml)]
Now create and register the component:
-```python
-# importing the Component Package
-from azure.ai.ml.entities import load_component
-
-# Loading the component from the yml file
-train_component = load_component(yaml_file=os.path.join(train_src_dir, "train.yml"))
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=train_component)]
-```python
-# Now we register the component to the workspace
-train_component = ml_client.create_or_update(train_component)
-
-# Create (register) the component in your workspace
-print(
- f"Component {train_component.name} with Version {train_component.version} is registered"
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=update-train_component)]
## Create the pipeline from components
To code the pipeline, you use a specific `@dsl.pipeline` decorator that identifi
Here, we used *input data*, *split ratio* and *registered model name* as input variables. We then call the components and connect them via their inputs/outputs identifiers. The outputs of each step can be accessed via the `.outputs` property.
-> [!IMPORTANT]
-> In the code below, replace `<CPU-CLUSTER-NAME>` with the name you used when you created a compute cluster in the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md).
-
-```python
-# the dsl decorator tells the sdk that we are defining an Azure ML pipeline
-from azure.ai.ml import dsl, Input, Output
-
-@dsl.pipeline(
- compute="<CPU-CLUSTER-NAME>",
- description="E2E data_perp-train pipeline",
-)
-def credit_defaults_pipeline(
- pipeline_job_data_input,
- pipeline_job_test_train_ratio,
- pipeline_job_learning_rate,
- pipeline_job_registered_model_name,
-):
- # using data_prep_function like a python call with its own inputs
- data_prep_job = data_prep_component(
- data=pipeline_job_data_input,
- test_train_ratio=pipeline_job_test_train_ratio,
- )
-
- # using train_func like a python call with its own inputs
- train_job = train_component(
- train_data=data_prep_job.outputs.train_data, # note: using outputs from previous step
- test_data=data_prep_job.outputs.test_data, # note: using outputs from previous step
- learning_rate=pipeline_job_learning_rate, # note: using a pipeline input as parameter
- registered_model_name=pipeline_job_registered_model_name,
- )
-
- # a pipeline returns a dict of outputs
- # keys will code for the pipeline output identifier
- return {
- "pipeline_job_train_data": data_prep_job.outputs.train_data,
- "pipeline_job_test_data": data_prep_job.outputs.test_data,
- }
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=pipeline)]
Now use your pipeline definition to instantiate a pipeline with your dataset, split rate of choice and the name you picked for your model.
-```python
-registered_model_name = "credit_defaults_model"
-
-# Let's instantiate the pipeline with the parameters of our choice
-pipeline = credit_defaults_pipeline(
- # pipeline_job_data_input=credit_data,
- pipeline_job_data_input=Input(type="uri_file", path=web_path),
- pipeline_job_test_train_ratio=0.2,
- pipeline_job_learning_rate=0.25,
- pipeline_job_registered_model_name=registered_model_name,
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=registered_model_name)]
## Submit the job
Here you'll also pass an experiment name. An experiment is a container for all t
Once completed, the pipeline will register a model in your workspace as a result of training.
-```python
-import webbrowser
-# submit the pipeline job
-returned_job = ml_client.jobs.create_or_update(
- pipeline,
-
- # Project's name
- experiment_name="e2e_registered_components",
-)
-# open the pipeline in web browser
-webbrowser.open(returned_job.services["Studio"].endpoint)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=returned_job)]
An output of "False" is expected from the above cell. You can track the progress of your pipeline by using the link generated in the cell above.
Now deploy your machine learning model as a web service in the Azure cloud.
To deploy a machine learning service, you'll usually need: * The model assets (files, metadata) that you want to deploy. You've already registered these assets in your training component.
-* Some code to run as a service. The code executes the model on a given input request. This entry script receives data submitted to a deployed web service and passes it to the model, then returns the model's response to the client. The script is specific to your model. The entry script must understand the data that the model expects and returns.
-
-## Create an inference script
-
-The two things you need to accomplish in your inference script are:
-
-* Load your model (using a function called `init()`)
-* Run your model on input data (using a function called `run()`)
-
-In the following implementation the `init()` function loads the model, and the run function expects the data in `json` format with the input data stored under `data`.
-
-```python
-deploy_dir = "./deploy"
-os.makedirs(deploy_dir, exist_ok=True)
-```
-
-```python
-%%writefile {deploy_dir}/score.py
-import os
-import logging
-import json
-import numpy
-import joblib
--
-def init():
- """
- This function is called when the container is initialized/started, typically after create/update of the deployment.
- You can write the logic here to perform init operations like caching the model in memory
- """
- global model
- # AZUREML_MODEL_DIR is an environment variable created during deployment.
- # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
- model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")
- # deserialize the model file back into a sklearn model
- model = joblib.load(model_path)
- logging.info("Init complete")
--
-def run(raw_data):
- """
- This function is called for every invocation of the endpoint to perform the actual scoring/prediction.
- In the example we extract the data from the json input and call the scikit-learn model's predict()
- method and return the result back
- """
- logging.info("Request received")
- data = json.loads(raw_data)["data"]
- data = numpy.array(data)
- result = model.predict(data)
- logging.info("Request processed")
- return result.tolist()
-```
+* Some code to run as a service. The code executes the model on a given input request. This entry script receives data submitted to a deployed web service and passes it to the model, then returns the model's response to the client. The script is specific to your model and must understand the data that the model expects and returns. When you use an MLflow model, as in this tutorial, this script is automatically created for you.
## Create a new online endpoint Now that you have a registered model and an inference script, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you'll create a unique name using [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier).
-```python
-import uuid
-
-# Creating a unique name for the endpoint
-online_endpoint_name = "credit-endpoint-" + str(uuid.uuid4())[:8]
-
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=online_endpoint_name)]
-```Python
-from azure.ai.ml.entities import (
- ManagedOnlineEndpoint,
- ManagedOnlineDeployment,
- CodeConfiguration,
- Model,
- Environment,
-)
-
-# create an online endpoint
-endpoint = ManagedOnlineEndpoint(
- name=online_endpoint_name,
- description="this is an online endpoint",
- auth_mode="key",
- tags={
- "training_dataset": "credit_defaults",
- "model_type": "sklearn.GradientBoostingClassifier",
- },
-)
-
-endpoint = ml_client.begin_create_or_update(endpoint)
-
-print(f"Endpint {endpoint.name} provisioning state: {endpoint.provisioning_state}")
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=endpoint)]
Once you've created an endpoint, you can retrieve it as below:
-```python
-endpoint = ml_client.online_endpoints.get(name = online_endpoint_name)
-
-print(f"Endpint \"{endpoint.name}\" with provisioning state \"{endpoint.provisioning_state}\" is retrieved")
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=update-endpoint)]
## Deploy the model to the endpoint
Once the endpoint is created, deploy the model with the entry script. Each endpo
You can check the *Models* page on the Azure ML studio, to identify the latest version of your registered model. Alternatively, the code below will retrieve the latest version number for you to use. -
-```python
-# Let's pick the latest version of the model
-latest_model_version = max(
- [int(m.version) for m in ml_client.models.list(name=registered_model_name)]
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=latest_model_version)]
Deploy the latest version of the model. > [!NOTE] > Expect this deployment to take approximately 6 to 8 minutes. -
-```python
-# picking the model to deploy. Here we use the latest version of our registered model
-model = ml_client.models.get(name=registered_model_name, version=latest_model_version)
--
-#create an online deployment.
-blue_deployment = ManagedOnlineDeployment(
- name='blue',
- endpoint_name=online_endpoint_name,
- model=model,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:21",
- code_configuration=CodeConfiguration(
- code=deploy_dir,
- scoring_script="score.py"),
- instance_type='Standard_DS3_v2',
- instance_count=1)
-
-blue_deployment = ml_client.begin_create_or_update(blue_deployment)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=model)]
### Test with a sample query
Now that the model is deployed to the endpoint, you can run inference with it.
Create a sample request file following the design expected in the run method in the score script.
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=sample-request.json)]
-```python
-%%writefile {deploy_dir}/sample-request.json
-{"data": [
- [20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0],
- [10,9,8,7,6,5,4,3,2,1, 10,9,8,7,6,5,4,3,2,1,10,9,8]
-]}
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=write-sample-request)]
-```python
-# test the blue deployment with some sample data
-ml_client.online_endpoints.invoke(
- endpoint_name=online_endpoint_name,
- request_file="./deploy/sample-request.json",
- deployment_name='blue'
-)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=ml_client.online_endpoints.invoke)]
## Clean up resources
If you're not going to use the endpoint, delete it to stop using the resource.
> [!NOTE] > Expect this step to take approximately 6 to 8 minutes.
-```python
-ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
-```
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=ml_client.online_endpoints.begin_delete)]
## Next steps
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
Title: Plan a SaaS offer for the Microsoft commercial marketplace - Azure Marketplace
-description: Plan for a new software as a service (SaaS) offer for listing or selling in Microsoft AppSource, Azure Marketplace, or through the Cloud Solution Provider (CSP) program using the commercial marketplace program in Microsoft Partner Center.
+description: Plan a new software as a service (SaaS) offer for selling in Microsoft AppSource, Azure Marketplace, or through the Cloud Solution Provider (CSP) program using the commercial marketplace program in Microsoft Partner Center.
Previously updated : 10/26/2021 Last updated : 05/26/2022 # Plan a SaaS offer for the commercial marketplace
When you publish a SaaS offer, it will be listed in Microsoft AppSource, Azure M
If your SaaS offer is *both* an IT solution (Azure Marketplace) and a business solution (AppSource), select a category and a subcategory applicable to each online store. Offers published to both online stores should have a value proposition as an IT solution *and* a business solution. > [!IMPORTANT]
-> SaaS offers with [metered billing](partner-center-portal/saas-metered-billing.md) are available through Azure Marketplace and the Azure portal. SaaS offers with only private plans are available through the Azure portal and AppSource.
+> SaaS offers with [metered billing](partner-center-portal/saas-metered-billing.md) are available through Azure Marketplace and the Azure portal. SaaS offers with only private plans are only available through the Azure portal.
| Metered billing | Public plan | Private plan | Available in: | ||||| | Yes | Yes | No | Azure Marketplace and Azure portal | | Yes | Yes | Yes | Azure Marketplace and Azure portal* | | Yes | No | Yes | Azure portal only |
-| No | No | Yes | Azure portal and AppSource |
+| No | No | Yes | Azure portal only |
-&#42; The private plan of the offer will only be available via the Azure portal and AppSource.
+&#42; The private plan of the offer will only be available via the Azure portal.
For example, an offer with metered billing and a private plan only (no public plan), will be purchased by customers in the Azure portal. Learn more about [Private offers in Microsoft commercial marketplace](private-offers.md).
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
Modifying the parameter `replicate_wild_ignore_table` used to create replication
- The source server version must be at least MySQL version 5.7. - Our recommendation is to have the same version for source and replica server versions. For example, both must be MySQL version 5.7 or both must be MySQL version 8.0.-- Our recommendation is to have a primary key in each table. If we have table without primary key, you might face slowness in replication.
+- Our recommendation is to have a primary key in each table. If a table doesn't have a primary key, you might see slower replication. To create primary keys for tables, you can use an [invisible column](https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html) if your MySQL version is 8.0.23 or later.
- The source server should use the MySQL InnoDB engine. - User must have permissions to configure binary logging and create new users on the source server. - Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL refer how to configure binlog_expire_logs_seconds for [Flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds)
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
Automatic backups, both snapshots and log backups, are performed on locally redu
>[!Note] >For both zone-redundant and same-zone HA:
->* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds.
+>* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds. To create primary keys for tables, you can use an [invisible column](https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html) if your MySQL version is 8.0.23 or later.
>* The standby server isn't available for read or write operations. It's a passive standby to enable fast failover. >* Always use a fully qualified domain name (FQDN) to connect to your primary server. Avoid using an IP address to connect. If there's a failover, after the primary and standby server roles are switched, a DNS A record might change. That change would prevent the application from connecting to the new primary server if an IP address is used in the connection string.
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-azure-cli.md
-+ Last updated 03/01/2021
+ms.tool: azure-cli
# Quickstart: Connect and query with Azure CLI with Azure Database for MySQL - Flexible Server
mysql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-certificate-rotation.md
On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www
#### Do I need to make any changes on my client to maintain connectivity?
-No change is required on client side. If you followed our previous recommendation below, you can continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.**
+> [!NOTE]
+> If you are using the PHP driver with [enableRedirect](./how-to-redirection.md), follow the steps under [Create a combined CA certificate](#create-a-combined-ca-certificate) to avoid connection failures.
+
+No change is required on the client side. If you followed the steps under [Create a combined CA certificate](#create-a-combined-ca-certificate) below, you can continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.**
-###### Previous recommendation
+###### Create a combined CA certificate
To avoid interruption of your application's availability as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the following steps. The idea is to create a new *.pem* file, which combines the current cert and the new one and during the SSL cert validation, one of the allowed values will be used. Refer to the following steps:
To verify if you're using SSL connection to connect to the server refer [SSL ver
No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
+#### Why do I need to update my root certificate if I am using the PHP driver with [enableRedirect](./how-to-redirection.md)?
+To address compliance requirements, the CA certificates of the host server were changed from BaltimoreCyberTrustRoot to DigiCertGlobalRootG2. With this update, database connections that use the PHP client driver with enableRedirect can no longer connect to the server, because the client devices are unaware of the certificate change and the new root CA details. Client devices that use PHP redirection drivers connect directly to the host server, bypassing the gateway. For more information about the architecture of Azure Database for MySQL Single Server, see [the service overview](single-server-overview.md#high-availability).
+ #### What if I have further questions? For questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforMySQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforMySQL@service.microsoft.com).
mysql How To Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-redirection.md
If you are using an older version of the mysqlnd_azure extension (version 1.0.0-
|`on` or `1`|- If the connection does not use SSL on the driver side, no connection will be made. The following error will be returned: *"mysqlnd_azure.enableRedirect is on, but SSL option is not set in connection string. Redirection is only possible with SSL."*<br>- If SSL is used on the driver side, but redirection is not supported on the server, the first connection is aborted and the following error is returned: *"Connection aborted because redirection is not enabled on the MySQL server or the network package doesn't meet redirection protocol."*<br>- If the MySQL server supports redirection, but the redirected connection failed for any reason, also abort the first proxy connection. Return the error of the redirected connection.| |`preferred` or `2`<br> (default value)|- mysqlnd_azure will use redirection if possible.<br>- If the connection does not use SSL on the driver side, the server does not support redirection, or the redirected connection fails to connect for any non-fatal reason while the proxy connection is still a valid one, it will fall back to the first proxy connection.|
+To connect successfully to Azure Database for MySQL Single Server using `mysqlnd_azure.enableRedirect`, you must follow the mandatory steps to combine your root certificates as required for compliance. For more information, see [Do I need to make any changes on my client to maintain connectivity?](./concepts-certificate-rotation.md#do-i-need-to-make-any-changes-on-my-client-to-maintain-connectivity).
+ The subsequent sections of the document will outline how to install the `mysqlnd_azure` extension using PECL and set the value of this parameter. ### Ubuntu Linux
openshift Howto Enable Fips Openshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-enable-fips-openshift.md
+
+ Title: Enable FIPS on an Azure Red Hat OpenShift cluster
+description: Learn how to enable FIPS on an Azure Red Hat OpenShift cluster.
++ Last updated : 5/5/2022++
+keywords: aro, openshift, az aro, red hat, cli, azure, FIPS
+#Customer intent: I need to understand how to enable FIPS on an Azure Red Hat OpenShift cluster.
++
+# Enable FIPS for an Azure Red Hat OpenShift cluster
+
+This article explains how to enable Federal Information Processing Standard (FIPS) for an Azure Red Hat OpenShift cluster.
+
+The Federal Information Processing Standard (FIPS) 140 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. Testing against the FIPS 140 standard is maintained by the Cryptographic Module Validation Program (CMVP), a joint effort between the US National Institute of Standards and Technology (NIST) and the Canadian Centre for Cyber Security, a branch of the Communications Security Establishment (CSE) of Canada.
+
+## Support for FIPS cryptography
+
+Starting with Release 4.10, you can deploy an Azure Red Hat OpenShift cluster in FIPS mode. FIPS mode ensures the control plane is using FIPS 140-2 cryptographic modules. All workloads and operators deployed on a cluster need to use FIPS 140-2 in order to be FIPS compliant.
+
+You can install an Azure Red Hat OpenShift cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture.
+
+> [!NOTE]
+> If you're using Azure File storage, you can't enable FIPS mode.
+
+## To enable FIPS on your Azure Red Hat OpenShift cluster
+
+To enable FIPS on your Azure Red Hat OpenShift cluster, create the cluster with the `--fips` flag. In the following command, `$RESOURCEGROUP` and `$CLUSTER` are environment variables that hold your resource group and cluster names:
+
+```azurecli-interactive
+az aro create \
+ --resource-group $RESOURCEGROUP \
+ --name $CLUSTER \
+ --vnet aro-vnet \
+ --master-subnet master-subnet \
+ --worker-subnet worker-subnet \
+ --fips
+```
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-azure-cli.md
-+
+ms.tool: azure-cli
Last updated 11/30/2021
postgresql Quickstart Create Postgresql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-azure-powershell.md
ms.devlang: azurepowershell-
+ms.tool: azure-powershell
+ Last updated 06/08/2020
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table for the mobile network site resour
|The region in which youΓÇÖre creating the mobile network site resource. We recommend that you use the East US region. |**Instance details: Region**| |The mobile network resource representing the private mobile network to which youΓÇÖre adding the site. |**Instance details: Mobile network**|
-## Collect custom location information
+## Collect packet core configuration values
-Identify the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. You commissioned the AKS-HCI cluster as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
+Collect all the values in the following table for the packet core instance that will run in the site.
+
+ |Value |Field name in Azure portal |
+ |||
+ |The core technology type the packet core instance should support (5G or 4G). |**Technology type**|
+ |The custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. You commissioned the AKS-HCI cluster as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).</br></br> If you're going to create your site using the Azure portal, collect the name of the custom location.</br></br> If you're going to create your site using an ARM template, collect the full resource ID of the custom location.|**Custom location**|
-- If you're going to create your site using the Azure portal, collect the name of the custom location.-- If you're going to create your site using an ARM template, collect the full resource ID of the custom location. ## Collect access network values
-Collect all the values in the following table to define the packet core instance's connection to the access network over the N2 and N3 interfaces.
+Collect all the values in the following table to define the packet core instance's connection to the access network over the control plane and user plane interfaces. The field name displayed in the Azure portal will depend on the value you have chosen for **Technology type**, as described in [Collect packet core configuration values](#collect-packet-core-configuration-values).
> [!IMPORTANT]
-> Where noted, you must use the same values you used when deploying the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device for this site. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
+> For all values in this table, you must use the same values you used when deploying the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device for this site. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
|Value |Field name in Azure portal | |||
- | The IP address for the packet core instance N2 signaling interface. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 address (signaling)**|
- | The IP address for the packet core instance N3 interface. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |N/A. You'll only need this value if you're using an ARM template to create the site.|
- | The network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 subnet** and **N3 subnet**|
- | The access subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 gateway** and **N3 gateway**|
+ | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface. |**N2 address (signaling)** (for 5G) or **S1-MME address** (for 4G).|
+ | The IP address for the user plane interface on the access network. For 5G, this interface is the N3 interface, whereas for 4G, it's the S1-U interface. |N/A. You'll only need this value if you're using an ARM template to create the site.|
+ | The network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. |**N2 subnet** and **N3 subnet** (for 5G), or **S1-MME subnet** and **S1-U subnet** (for 4G).|
+ | The access subnet default gateway. |**N2 gateway** and **N3 gateway** (for 5G), or **S1-MME gateway** and **S1-U gateway** (for 4G).|
## Collect data network values
-Collect all the values in the following table to define the packet core instance's connection to the data network over the N6 interface.
+Collect all the values in the following table to define the packet core instance's connection to the data network over the user plane interface.
> [!IMPORTANT] > Where noted, you must use the same values you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
Collect all the values in the following table to define the packet core instance
|Value |Field name in Azure portal | ||| |The name of the data network. |**Data network**|
- | The IP address for the packet core instance N6 interface. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |N/A. You'll only need this value if you're using an ARM template to create the site.|
- |The network address of the data subnet in CIDR notation. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6 subnet**|
- |The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6 gateway**|
- | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
- | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this if you don't want to support static IP address allocation for this site. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
+ | The IP address for the user plane interface on the data network. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface. You identified the IP address for this interface in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |N/A. You'll only need this value if you're using an ARM template to create the site.|
+ |The network address of the data subnet in CIDR notation. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6/SGi subnet**|
+ |The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6/SGi gateway**|
+ | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
+ | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses. |**NAPT**| ## Next steps
private-5g-core Collect Required Information For Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-service.md
Collect each of the values in the table below for your service.
| The name of the service. This name must only contain alphanumeric characters, dashes, or underscores. You also must not use any of the following reserved strings: *default*; *requested*; *service*. | **Service name** |Yes| | A precedence value that the packet core instance must use to decide between services when identifying the QoS values to offer. This value must be an integer between 0 and 255 and must be unique among all services configured on the packet core instance. A lower value means a higher priority. | **Service precedence** |Yes| | The maximum bit rate (MBR) for uplink traffic (traveling away from user equipment (UEs)) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Uplink** | Yes|
-| The maximum bit rate (MBR) for downlink traffic (traveling towards UEs) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Downlink** | Yes|
-| The default QoS Flow Allocation and Retention Policy (ARP) priority level for this service. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). See 3GPP TS 23.501 for a full description of the ARP parameters. | **Allocation and Retention Priority level** |No. Defaults to 9.|
-| The default 5G QoS Indicator (5QI) value for this service. The 5QI value identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows. See 3GPP TS 23.501 for a full description of the 5QI parameter. </br></br>We recommend you choose a 5QI value that corresponds to a non-GBR QoS Flow (as described in 3GPP TS 23.501). Non-GBR QoS Flows are in the following ranges: 5-9; 69-70; 79-80.</br></br>You can also choose a non-standardized 5QI value.</p><p>Azure Private 5G Core doesn't support 5QI values corresponding GBR or delay-critical GBR QoS Flows. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** |No. Defaults to 9.|
-| The default QoS Flow preemption capability for QoS Flows for this service. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. You can choose from the following values: </br></br>- **May not preempt** </br>- **May preempt** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption capability** |No. Defaults to **May not preempt**.|
-| The default QoS Flow preemption vulnerability for QoS Flows for this service. The preemption vulnerability of a QoS Flow controls whether it can be preempted another QoS Flow with a higher priority level. You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption vulnerability** |No. Defaults to **Preemptable**.|
+| The MBR for downlink traffic (traveling towards UEs) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Downlink** | Yes|
+| The default Allocation and Retention Policy (ARP) priority level for this service. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). | **Allocation and Retention Priority level** |No. Defaults to 9.|
+| The default 5G QoS Indicator (5QI) or QoS class identifier (QCI) value for this service. The 5QI (for 5G networks) or QCI (for 4G networks) value identifies a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers. </br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value.</br></br>Azure Private 5G Core doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** |No. Defaults to 9.|
+| The default preemption capability for QoS flows or EPS bearers for this service. The preemption capability of a QoS flow or EPS bearer controls whether it can preempt another QoS flow or EPS bearer with a lower priority level. You can choose from the following values: </br></br>- **May not preempt** </br>- **May preempt** | **Preemption capability** |No. Defaults to **May not preempt**.|
+| The default preemption vulnerability for QoS flows or EPS bearers for this service. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** | **Preemption vulnerability** |No. Defaults to **Preemptable**.|
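The MBR fields above all use the `<Quantity>` `<Unit>` format. As a quick, illustrative aid (not part of the Azure portal or any Azure API), the following Python sketch converts a value such as `10 Mbps` into bits per second so you can sanity-check the uplink and downlink rates you plan to enter:

```python
# Illustrative only: converts the "<Quantity> <Unit>" bit rate format used in
# the table above (for example "10 Mbps") into bits per second.
UNIT_MULTIPLIERS = {
    "bps": 1,
    "Kbps": 10**3,
    "Mbps": 10**6,
    "Gbps": 10**9,
    "Tbps": 10**12,
}

def bit_rate_to_bps(value: str) -> int:
    """Parse a bit rate such as '10 Mbps' and return bits per second."""
    quantity, unit = value.split()
    if unit not in UNIT_MULTIPLIERS:
        raise ValueError(f"Unit must be one of {sorted(UNIT_MULTIPLIERS)}")
    return int(float(quantity) * UNIT_MULTIPLIERS[unit])

print(bit_rate_to_bps("10 Mbps"))  # 10000000
```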
## Data flow policy rule(s)
private-5g-core Collect Required Information For Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-sim-policy.md
Collect each of the values in the table below for your SIM policy.
|--|--|--| | The name of the private mobile network for which you're configuring this SIM policy. | N/A | Yes | | The SIM policy name. The name must be unique across all SIM policies configured for the private mobile network. | **Policy name** |Yes|
-| The UE-AMBR for traffic traveling away from UEs across all non-GBR QoS Flows. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the UE-AMBR parameter. | **Total bandwidth allowed - Uplink** |Yes|
-| The UE-AMBR for traffic traveling towards UEs across all non-GBR QoS Flows. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the UE-AMBR parameter. | **Total bandwidth allowed - Downlink** |Yes|
+| The UE-AMBR for traffic traveling away from UEs across all non-GBR QoS flows or EPS bearers. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Total bandwidth allowed - Uplink** |Yes|
+| The UE-AMBR for traffic traveling towards UEs across all non-GBR QoS flows or EPS bearers. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Total bandwidth allowed - Downlink** |Yes|
| The interval between UE registrations for UEs using SIMs to which this SIM policy is assigned, given in seconds. Choose an integer that is 30 or greater. If you omit the interval when first creating the SIM policy, it will default to 3,240 seconds (54 minutes). | **Registration timer** |No. Defaults to 3,240 seconds.| | The subscriber profile ID for RAT/Frequency Priority ID (RFSP ID) for this SIM policy, as defined in TS 36.413. If you want to set an RFSP ID, you must specify an integer between 1 and 256. | **RFSP index** |No. Defaults to no value.| ## Collect information for the network scope
-Within each SIM policy, you'll have a *network scope*. The network scope represents the data network to which SIMs assigned to the SIM policy will have access. It allows you to define the QoS policy settings used for the default QoS Flow for PDU sessions involving these SIMs. These settings include the session aggregated maximum bit rate (Session-AMBR), 5G QoS Indicator (5QI) value, and Allocation and Retention Policy (ARP) priority level. You can also determine the services that will be offered to SIMs.
+Within each SIM policy, you'll have a *network scope*. The network scope represents the data network to which SIMs assigned to the SIM policy will have access. It allows you to define the QoS policy settings used for the default QoS flow for PDU sessions involving these SIMs. These settings include the session aggregated maximum bit rate (Session-AMBR), 5G QoS identifier (5QI) or QoS class identifier (QCI) value, and Allocation and Retention Policy (ARP) priority level. You can also determine the services that will be offered to SIMs.
Collect each of the values in the table below for the network scope.
Collect each of the values in the table below for the network scope.
|--|--|--| |The Data Network Name (DNN) of the data network. The DNN must match the one you used when creating the private mobile network. | **Data network** | Yes | |The names of the services permitted on the data network. You must have already configured your chosen services. For more information on services, see [Policy control](policy-control.md). | **Service configuration** | No. The SIM policy will only use the service you configure using the same template. |
-|The maximum bitrate for traffic traveling away from UEs across all non-GBR QoS Flows of a given PDU session. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the Session-AMBR parameter. | **Session aggregate maximum bit rate - Uplink** | Yes |
-|The maximum bitrate for traffic traveling towards UEs across all non-GBR QoS Flows of a given PDU session. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the Session-AMBR parameter. | **Session aggregate maximum bit rate - Downlink** | Yes |
-|The default 5G QoS Indicator (5QI) value for this data network. The 5QI identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows. See 3GPP TS 23.501 for a full description of the 5QI parameter. </br></br>Choose a 5QI value that corresponds to a non-GBR QoS Flow (as described in 3GPP TS 23.501). These values are in the following ranges: 5-9; 69-70; 79-80. </br></br>You can also choose a non-standardized 5QI value. </br></br>Azure Private 5G Core Preview doesn't support 5QI values corresponding to GBR or delay-critical GBR QoS Flows. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** | No. Defaults to 9. |
-|The default QoS Flow Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). See 3GPP TS 23.501 for a full description of the ARP parameters. | **Allocation and Retention Priority level** | No. Defaults to 1. |
-|The default QoS Flow preemption capability for QoS Flows on this data network. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. </br></br>You can choose from the following values: </br></br>- **May preempt** </br>- **May not preempt** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption capability** | No. Defaults to **May not preempt**.|
-|The default QoS Flow preemption vulnerability for QoS Flows on this data network. The preemption vulnerability of a QoS Flow controls whether it can be preempted another QoS Flow with a higher priority level. </br></br>You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption vulnerability** | No. Defaults to **Preemptable**.|
-|The default PDU session type for SIMs using this data network. Azure Private 5G Core will use this type by default if the SIM doesn't request a specific type. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Default session type** | No. Defaults to **IPv4**.|
-|An additional PDU session type that Azure Private 5G Core supports for this data network. This type must not match the default type mentioned above. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Additional allowed session types** |No. Defaults to no value.|
+|The maximum bitrate for traffic traveling away from UEs across all non-GBR QoS flows or EPS bearers of a given PDU session or PDN connection. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Session aggregate maximum bit rate - Uplink** | Yes |
+|The maximum bitrate for traffic traveling towards UEs across all non-GBR QoS flows or EPS bearers of a given PDU session or PDN connection. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Session aggregate maximum bit rate - Downlink** | Yes |
+|The default 5QI (for 5G) or QCI (for 4G) value for this data network. These values identify a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers.</br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value. </br></br>Azure Private 5G Core Preview doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** | No. Defaults to 9. |
+|The default Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). | **Allocation and Retention Priority level** | No. Defaults to 1. |
+|The default preemption capability for QoS flows or EPS bearers on this data network. The preemption capability of a QoS flow or EPS bearer controls whether it can preempt another QoS flow or EPS bearer with a lower priority level. </br></br>You can choose from the following values: </br></br>- **May preempt** </br>- **May not preempt** | **Preemption capability** | No. Defaults to **May not preempt**.|
+|The default preemption vulnerability for QoS flows or EPS bearers on this data network. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. </br></br>You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** | **Preemption vulnerability** | No. Defaults to **Preemptable**.|
+|The default PDU session or PDN connection type for SIMs using this data network. Azure Private 5G Core will use this type by default if the SIM doesn't request a specific type. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Default session type** | No. Defaults to **IPv4**.|
+|An additional PDU session or PDN connection type that Azure Private 5G Core supports for this data network. This type must not match the default type mentioned above. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Additional allowed session types** |No. Defaults to no value.|
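The 5QI/QCI guidance in this table and in the service table can also be checked programmatically. The following sketch is illustrative only (it simply encodes the ranges listed above) and isn't part of Azure Private 5G Core:

```python
# Classify a 5QI (5G) or QCI (4G) value against the ranges listed above.
NON_GBR = set(range(5, 10)) | {69, 70, 79, 80}             # standardized non-GBR values
UNSUPPORTED = (set(range(1, 5)) | {65, 66, 67}
               | set(range(71, 77)) | set(range(82, 86)))  # GBR / delay-critical GBR

def classify_qi(value: int) -> str:
    if value in UNSUPPORTED:
        return "unsupported (GBR or delay-critical GBR)"
    if value in NON_GBR:
        return "standardized non-GBR (recommended)"
    return "non-standardized (allowed)"

print(classify_qi(9))  # standardized non-GBR (recommended)
print(classify_qi(1))  # unsupported (GBR or delay-critical GBR)
```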
## Next steps
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
Contact your trials engineer and ask them to register your Azure subscription fo
Once your trials engineer has confirmed your access, register the Mobile Network resource provider (Microsoft.MobileNetwork) for your subscription, as described in [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
+## Choose the core technology type (5G or 4G)
+
+Choose whether each site in the private mobile network should provide coverage for 5G or 4G user equipment (UEs). A single site cannot support 5G and 4G UEs simultaneously. If you're deploying multiple sites, you can choose to have some sites support 5G UEs and others support 4G UEs.
+ ## Allocate subnets and IP addresses Azure Private 5G Core requires a management network, access network, and data network. These networks can all be part of the same, larger network, or they can be separate. The approach you use depends on your traffic separation requirements.
For each of these networks, allocate a subnet and then identify the listed IP ad
- Network address in CIDR notation. - Default gateway. - One IP address for port 5 on the Azure Stack Edge Pro device. -- One IP address for the packet core instance's N2 signaling interface. -- One IP address for the packet core instance's N3 interface.
+- One IP address for the control plane interface. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface.
+- One IP address for the user plane interface. For 5G, this interface is the N3 interface, whereas for 4G, it's the S1-U interface.
### Data network - Network address in CIDR notation. - Default gateway. - One IP address for port 6 on the Azure Stack Edge Pro device.-- One IP address for the packet core instance's N6 interface.
+- One IP address for the user plane interface. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface.
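As a quick sanity check while you allocate these values, you can confirm that each interface IP address and default gateway sits inside the corresponding subnet. The following sketch uses only Python's standard `ipaddress` module, and the addresses shown are hypothetical examples rather than recommended values:

```python
import ipaddress

# Hypothetical example allocations for one site's access network; substitute your own values.
access_subnet = ipaddress.ip_network("192.0.2.0/24")
access_values = {
    "default gateway": "192.0.2.1",
    "port 5 on the Azure Stack Edge Pro device": "192.0.2.10",
    "control plane interface (N2 or S1-MME)": "192.0.2.20",
    "user plane interface (N3 or S1-U)": "192.0.2.21",
}

for name, address in access_values.items():
    inside = ipaddress.ip_address(address) in access_subnet
    print(f"{name}: {address} {'OK' if inside else 'NOT in ' + str(access_subnet)}")
```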
## Allocate user equipment (UE) IP address pools
For each site you're deploying, do the following:
For each site you're deploying, do the following. - Ensure you have at least one network switch with at least three ports available. You'll connect each Azure Stack Edge Pro device to the switch(es) in the same site as part of the instructions in [Order and set up your Azure Stack Edge Pro device(s)](#order-and-set-up-your-azure-stack-edge-pro-devices).-- If you're not enabling NAPT as described in [Allocate user equipment (UE) IP address pools](#allocate-user-equipment-ue-ip-address-pools), configure the data network to route traffic destined for the UE IP address pools via the IP address you allocated for the packet core instance's N6 interface.
+- If you're not enabling NAPT as described in [Allocate user equipment (UE) IP address pools](#allocate-user-equipment-ue-ip-address-pools), configure the data network to route traffic destined for the UE IP address pools via the IP address you allocated to the packet core instance's user plane interface on the data network.
## Order and set up your Azure Stack Edge Pro device(s)
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
Azure Private 5G Core Preview private mobile networks include one or more *sites
## Prerequisites -- Complete the steps in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses), [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools), and [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices) for your new site.
+- Carry out the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md) for your new site.
- Collect all of the information in [Collect the required information for a site](collect-required-information-for-a-site.md). - Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
-## Create the Mobile Network Site resource
+## Create the mobile network site resource
-In this step, you'll create the **Mobile Network Site** resource representing the physical enterprise location of your Azure Stack Edge device, which will host the packet core instance.
+In this step, you'll create the mobile network site resource representing the physical enterprise location of your Azure Stack Edge device, which will host the packet core instance.
1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal). 1. Search for and select the **Mobile Network** resource representing the private mobile network to which you want to add a site.
- :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
+ :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a mobile network resource.":::
1. On the **Get started** tab, select **Create sites**.
In this step, you'll create the **Mobile Network Site** resource representing th
1. In the **Packet core** section, set the fields as follows:
- - Set **Technology type** to *5G*.
+ - Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) to fill out the **Technology type** and **Custom location** fields.
- Leave the **Version** field blank unless you've been instructed to do otherwise by your support representative.
- - Set **Custom location** to the custom location you collected in [Collect custom location information](collect-required-information-for-a-site.md#collect-custom-location-information).
1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section. Note the following:
- - Use the same value for both the **N2 subnet** and **N3 subnet** fields.
- - Use the same value for both the **N2 gateway** and **N3 gateway** fields.
+ - Use the same value for both the **N2 subnet** and **N3 subnet** fields (if this site will support 5G user equipment (UEs)).
+ - Use the same value for both the **N2 gateway** and **N3 gateway** fields (if this site will support 5G UEs).
+ - Use the same value for both the **S1-MME subnet** and **S1-U subnet** fields (if this site will support 4G UEs).
+ - Use the same value for both the **S1-MME gateway** and **S1-U gateway** fields (if this site will support 4G UEs).
1. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields in the **Attached data networks** section. Note that you can only connect the packet core instance to a single data network. 1. Select **Review + create**.
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites -- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.-- Complete the steps in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices) for your new site.
+- Carry out the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md) for your new site.
- Identify the names of the interfaces corresponding to ports 5 and 6 on your Azure Stack Edge Pro device. - Collect all of the information in [Collect the required information for a site](collect-required-information-for-a-site.md).
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
## Review the template
The template used in this how-to guide is from [Azure Quickstart Templates](http
Four Azure resources are defined in the template. - [**Microsoft.MobileNetwork/mobileNetworks/sites**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/sites): a resource representing your site as a whole.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks): a resource providing configuration for the packet core instance's connection to a data network, including the IP address for the N6 interface and data subnet configuration.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the N3 interface.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the N2 interface.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks): a resource providing configuration for the packet core instance's connection to a data network.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the user plane interface on the access network.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the control plane interface on the access network.
## Deploy the template
Four Azure resources are defined in the template.
| **Existing Data Network Name** | Enter the name of the data network to which your private mobile network connects. | | **Site Name** | Enter a name for your site. | | **Control Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
- | **Control Plane Access Ip Address** | Enter the IP address for the packet core instance's N2 signaling interface. |
- | **Data Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
- | **Data Plane Access Interface Ip Address** | Enter the IP address for the packet core instance's N3 interface. |
+ | **Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
+ | **User Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
+ | **User Plane Access Interface Ip Address** | Enter the IP address for the user plane interface on the access network. |
| **Access Subnet** | Enter the network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. | | **Access Gateway** | Enter the access subnet default gateway. | | **User Plane Data Interface Name** | Enter the name of the interface that corresponds to port 6 on your Azure Stack Edge Pro device. |
- | **User Plane Data Interface Ip Address** | Enter the IP address for the packet core instance's N6 interface. |
+ | **User Plane Data Interface Ip Address** | Enter the IP address for the user plane interface on the data network. |
| **User Plane Data Interface Subnet** | Enter the network address of the data subnet in CIDR notation. | | **User Plane Data Interface Gateway** | Enter the data subnet default gateway. | |**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. | |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
- | **Core Network Technology** | Leave this field unchanged. |
+ | **Core Network Technology** | Enter `5GC` for 5G, or `EPC` for 4G. |
| **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. | | **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. |
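If you want to keep the values you've collected in a reusable ARM parameters file instead of entering them one by one, a sketch like the following may help. The parameter names used here are assumptions derived from the display names in the table (check the quickstart template for the authoritative names), and all values are placeholders:

```python
import json

# Hypothetical parameter values for a site deployment. The parameter names are
# guesses based on the display names above; confirm them against the template.
parameters = {
    "siteName": {"value": "contoso-chicago"},
    "controlPlaneAccessIpAddress": {"value": "192.0.2.20"},
    "userPlaneAccessInterfaceIpAddress": {"value": "192.0.2.21"},
    "accessSubnet": {"value": "192.0.2.0/24"},
    "accessGateway": {"value": "192.0.2.1"},
    "userPlaneDataInterfaceIpAddress": {"value": "203.0.113.20"},
    "userPlaneDataInterfaceSubnet": {"value": "203.0.113.0/24"},
    "userPlaneDataInterfaceGateway": {"value": "203.0.113.1"},
    "userEquipmentAddressPoolPrefix": {"value": "198.51.100.0/24"},
    "coreNetworkTechnology": {"value": "5GC"},  # or "EPC" for a 4G site
    "naptEnabled": {"value": "Enabled"},        # the expected value format may differ
}

parameters_file = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": parameters,
}

with open("site.parameters.json", "w") as f:
    json.dump(parameters_file, f, indent=2)
```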
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
The following Azure resources are defined in the template.
- [**Microsoft.MobileNetwork/mobileNetworks/services**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/services): a resource representing a service. - [**Microsoft.MobileNetwork/mobileNetworks/simPolicies**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/simPolicies): a resource representing a SIM policy. - [**Microsoft.MobileNetwork/mobileNetworks/sites**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/sites): a resource representing your site as a whole.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks): a resource providing configuration for the packet core instance's connection to a data network, including the IP address for the N6 interface and data subnet configuration.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the N3 interface.-- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the N2 interface.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks): a resource providing configuration for the packet core instance's connection to a data network.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the user plane interface on the access network.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the control plane interface on the access network.
- [**Microsoft.MobileNetwork/mobileNetworks**](/azure/templates/microsoft.mobilenetwork/mobilenetworks): a resource representing the private mobile network as a whole. - [**Microsoft.MobileNetwork/sims:**](/azure/templates/microsoft.mobilenetwork/sims) a resource representing a physical SIM or eSIM.
The following Azure resources are defined in the template.
|**Sim Policy Name** | Leave this field unchanged. | |**Slice Name** | Leave this field unchanged. | |**Control Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
- |**Control Plane Access Ip Address** | Enter the IP address for the packet core instance's N2 signaling interface. |
+ |**Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
|**User Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
- |**User Plane Access Interface Ip Address** | Enter the IP address for the packet core instance's N3 interface. |
+ |**User Plane Access Interface Ip Address** | Enter the IP address for the user plane interface on the access network. |
|**Access Subnet** | Enter the network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. | |**Access Gateway** | Enter the access subnet default gateway. | |**User Plane Data Interface Name** | Enter the name of the interface that corresponds to port 6 on your Azure Stack Edge Pro device. |
- |**User Plane Data Interface Ip Address** | Enter the IP address for the packet core instance's N6 interface. |
+ |**User Plane Data Interface Ip Address** | Enter the IP address for the user plane interface on the data network. |
|**User Plane Data Interface Subnet** | Enter the network address of the data subnet in CIDR notation. | |**User Plane Data Interface Gateway** | Enter the data subnet default gateway. | |**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. | |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
- |**Core Network Technology** | Leave this field unchanged. |
+ |**Core Network Technology** | Enter `5GC` for 5G, or `EPC` for 4G. |
|**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.| |**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.|
private-5g-core Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/distributed-tracing.md
The distributed tracing web GUI provides two search tabs to allow you to search
If you can't see the **Search** heading, select the **Search** button in the top-level menu. -- **SUPI** - Allows you to search for activity involving a particular subscriber using their Subscription Permanent Identifier (SUPI). This tab also provides an **Errors** panel, which allows you to filter the results by error condition. To search for activity for a particular subscriber, enter all of the initial digits of the subscriber's SUPI into the text box on the **SUPI search** panel.
+- **SUPI** - Allows you to search for activity involving a particular subscriber using their subscription permanent identifier (SUPI) or, in 4G networks, their international mobile subscriber identity (IMSI). This tab also provides an **Errors** panel, which allows you to filter the results by error condition. To search for activity for a particular subscriber, enter all of the initial digits of the subscriber's SUPI or IMSI into the text box on the **SUPI search** panel.
- **Errors** - Allows you to search for error condition occurrences across all subscribers. To search for occurrences of error conditions across all subscribers, select the **Errors** tab and then use the drop-down menus on the **Error** panel to select an error category and, optionally, a specific error. :::image type="content" source="media\distributed-tracing\distributed-tracing-search-display.png" alt-text="Screenshot of the Search display in the distributed tracing web G U I, showing the S U P I and Errors tabs.":::
You can select an entry in the search results to view detailed information for t
When you select a specific result, the display shows the following tabs containing different categories of information. > [!NOTE]
-> In addition to the tabs described below, the distributed tracing web GUI also includes a **User Experience** tab. This tab is not used by Azure Private 5G Core Preview and will not display any information.
+> In addition to the tabs described below, the distributed tracing web GUI also includes a **User Experience** tab. This tab is not used by Azure Private 5G Core and will not display any information.
### Summary view
private-5g-core Key Components Of A Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/key-components-of-a-private-mobile-network.md
This article introduces the key physical components of a private mobile network deployed through Azure Private 5G Core Preview. It also details the resources you'll use to manage the private mobile network through Azure.
-Each private mobile network contains one or more *sites*. A site is a physical enterprise location (for example, Contoso Corporation's Chicago Factory) that will provide coverage for 5G user equipment (UEs). The following diagram shows the main components of a single site.
+Each private mobile network contains one or more *sites*. A site is a physical enterprise location (for example, Contoso Corporation's Chicago Factory) that will provide coverage for user equipment (UEs). The following diagram shows the main components of a single site.
:::image type="content" source="media/key-components-of-a-private-mobile-network/site-physical-components.png" alt-text="Diagram displaying the main components of a site in a private mobile network":::
Each private mobile network contains one or more *sites*. A site is a physical e
When you add a site to your private mobile network, you'll create a *Kubernetes cluster* on the Azure Stack Edge device. This serves as the platform for the packet core instance. -- Each packet core instance connects to a radio access network (RAN) to provide coverage for 5G UEs. You'll source your RAN from a third party.
+- Each packet core instance connects to a radio access network (RAN) to provide coverage for UEs. You'll source your RAN from a third party.
## Azure Private 5G Core resources
private-5g-core Monitor Private 5G Core With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-with-log-analytics.md
Log Analytics is a tool in the Azure portal used to edit and run log queries with data in Azure Monitor Logs. You can write queries to retrieve records or visualize data in charts, allowing you to monitor and analyze activity in your private mobile network.
+> [!IMPORTANT]
+> Log Analytics currently can only be used to monitor private mobile networks that support 5G UEs. You can still monitor private mobile networks supporting 4G UEs from the local network using the [packet core dashboards](packet-core-dashboards.md).
+ ## Enable Log Analytics You'll need to carry out the steps in [Enabling Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md) before you can use Log Analytics with Azure Private 5G Core.
private-5g-core Packet Core Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/packet-core-dashboards.md
You can access the following packet core dashboards:
- The **Device and Session Statistics dashboard** provides information about the device and session procedures being processed by the packet core instance.
+ > [!IMPORTANT]
+ > The **Device and Session Statistics dashboard** only displays metrics for packet core instances that support 5G UEs. It does not currently display any metrics related to 4G activity.
+ :::image type="content" source="media/packet-core-dashboards/packet-core-device-session-stats-dashboard.png" alt-text="Screenshot of the Device and Session Statistics dashboard. It shows panels for device authentication, device registration, device context, and P D U session procedures." lightbox="media/packet-core-dashboards/packet-core-device-session-stats-dashboard.png"::: - The **Uplink and Downlink Statistics dashboard** provides detailed statistics on the user plane traffic being handled by the packet core instance.
private-5g-core Policy Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/policy-control.md
Azure Private 5G Core Preview provides flexible traffic handling. You can customize how your packet core instance applies quality of service (QoS) characteristics to traffic. You can also block or limit certain flows.
-## 5G quality of service (QoS) and QoS Flows
-The packet core instance is a key component in establishing *protocol data unit (PDU) sessions*, which are used to transport user plane traffic between a UE and the data network. Within each PDU session, there are one or more *service data flows (SDFs)*. Each SDF is a single IP flow or a set of aggregated IP flows of UE traffic that is used for a specific service.
+## 5G quality of service (QoS) and QoS flows
+
+In 5G networks, the packet core instance is a key component in establishing *protocol data unit (PDU)* sessions, which are used to transport user plane traffic between a UE and the data network. Within each PDU session, there are one or more *service data flows (SDFs)*. Each SDF is a single IP flow or a set of aggregated IP flows of UE traffic that is used for a specific service.
Each SDF may require a different set of QoS characteristics, including prioritization and bandwidth limits. For example, an SDF carrying traffic used for industrial automation will need to be handled differently to an SDF used for internet browsing.
-To ensure the correct QoS characteristics are applied, each SDF is bound to a *QoS Flow*. Each QoS Flow has a unique *QoS profile*, which identifies the QoS characteristics that should be applied to any SDFs bound to the QoS Flow. Multiple SDFs with the same QoS requirements can be bound to the same QoS Flow.
+To ensure the correct QoS characteristics are applied, each SDF is bound to a *QoS flow*. Each QoS flow has a unique *QoS profile*, which identifies the QoS characteristics that should be applied to any SDFs bound to the QoS flow. Multiple SDFs with the same QoS requirements can be bound to the same QoS flow.
A *QoS profile* has two main components. -- A *5G QoS identifier (5QI)*. The 5QI value corresponds to a set of QoS characteristics that should be used for the QoS Flow. These characteristics include guaranteed and maximum bitrates, priority levels, and limits on latency, jitter, and error rate. The 5QI is given as a scalar number.
+- A *5G QoS identifier (5QI)*. The 5QI value corresponds to a set of QoS characteristics that should be used for the QoS flow. These characteristics include guaranteed and maximum bitrates, priority levels, and limits on latency, jitter, and error rate. The 5QI is given as a scalar number.
- You can find more information on 5QI and each of the QoS characteristics in 3GPP TS 23.501. You can also find definitions for standardized (or non-dynamic) 5QI values.
+ You can find more information on 5QI values and each of the QoS characteristics in 3GPP TS 23.501. You can also find definitions for standardized (or non-dynamic) 5QI values.
The required parameters for each 5QI value are pre-configured in the Next Generation Node B (gNB). > [!NOTE]
-> Azure Private 5G Core does not support dynamically assigned 5QI, where specific QoS characteristics are signalled to the gNB during QoS Flow creation.
+> Azure Private 5G Core does not support dynamically assigned 5QI, where specific QoS characteristics are signalled to the gNB during QoS flow creation.
+
+- An *allocation and retention priority (ARP) value*. The ARP value defines a QoS flow's importance. It controls whether a particular QoS flow should be retained or preempted when there's resource constraint in the network, based on its priority compared to other QoS flows. The QoS profile may also define whether the QoS flow can preempt or be preempted by another QoS flow.
+
+Each unique QoS flow is assigned a unique *QoS flow ID (QFI)*, which is used by network elements to map SDFs to QoS flows.
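The relationships described above (a QoS profile made up of a 5QI and ARP settings, a QoS flow identified by a QFI, and SDFs bound to flows) can be summarized in a small conceptual model. The following sketch is descriptive only and doesn't reflect the packet core instance's actual data model:

```python
from dataclasses import dataclass, field
from typing import List

# Conceptual model of the 5G QoS concepts described above. In 4G networks, the
# analogues are QCI, EPS bearer, and EBI (see the next section).
@dataclass
class QosProfile:
    five_qi: int                    # standardized or non-standardized 5QI value
    arp_priority_level: int         # 1 (highest priority) to 15 (lowest priority)
    preemption_capability: str      # "May preempt" or "May not preempt"
    preemption_vulnerability: str   # "Preemptable" or "Not preemptable"

@dataclass
class QosFlow:
    qfi: int                        # QoS flow ID used to mark packets
    profile: QosProfile
    bound_sdfs: List[str] = field(default_factory=list)  # service data flows

# Multiple SDFs with the same QoS requirements can be bound to the same QoS flow.
browsing = QosFlow(qfi=1, profile=QosProfile(9, 9, "May not preempt", "Preemptable"),
                   bound_sdfs=["internet-browsing"])
automation = QosFlow(qfi=2, profile=QosProfile(7, 2, "May preempt", "Not preemptable"),
                     bound_sdfs=["plc-control", "telemetry"])
```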
+
+## 4G QoS and EPS bearers
+
+In 4G networks, the packet core instance performs a role very similar to the one described in [5G quality of service (QoS) and QoS flows](#5g-quality-of-service-qos-and-qos-flows).
-- An *allocation and retention priority (ARP) value*. The ARP value defines a QoS Flow's importance. It controls whether a particular QoS Flow should be retained or preempted when there's resource constraint in the network, based on its priority compared to other QoS Flows. The QoS profile may also define whether the QoS Flow can preempt or be preempted by another QoS Flow.
+In 4G networks, the packet core instance helps to establish *packet data network (PDN) connections* to transport user plane traffic. PDN connections also contain one or more SDFs.
-Each unique QoS Flow is assigned a unique *QoS Flow ID (QFI)*, which is used by network elements to map SDFs to QoS Flows.
+The SDFs are bound to *Evolved Packet System (EPS) bearers*. EPS bearers are also assigned a QoS profile, which comprises two components.
+
+- A *QoS class identifier (QCI)*, which is the equivalent of a 5QI in 5G networks.
+
+  You can find more information on QCI values in 3GPP TS 23.203. Each standardized QCI value is mapped to a 5QI value.
+
+- An ARP value. This works in the same way as in 5G networks to define an EPS bearer's importance.
+
+Each EPS bearer is assigned an *EPS bearer ID (EBI)*, which is used by network elements to map SDFs to EPS bearers.
## Azure Private 5G Core policy control configuration
-Azure Private 5G Core provides configuration to allow you to determine the QoS Flows the packet core instance will create and bind to SDFs during PDU session establishment. You can configure two primary resource types - *services* and *SIM policies*.
+Azure Private 5G Core provides configuration to allow you to determine the QoS flows or EPS bearers the packet core instance will create and bind to SDFs when establishing PDU sessions or PDN connections. You can configure two primary resource types - *services* and *SIM policies*.
### Services
A *service* is a representation of a set of QoS characteristics that you want to
Each service includes: -- A set of QoS characteristics that should be applied on SDFs matching the service. The packet core instance will use these characteristics to create a QoS Flow to bind to matching SDFs. You can specify the following QoS settings on a service:
+- A set of QoS characteristics that should be applied on SDFs matching the service. The packet core instance will use these characteristics to create a QoS flow or EPS bearer to bind to matching SDFs. You can specify the following QoS settings on a service:
- The maximum bit rate (MBR) for uplink traffic (away from the UE) across all matching SDFs. - The MBR for downlink traffic (towards the UE) across all matching SDFs. - An ARP priority value.
- - A 5QI value.
- - A preemption capability setting. This setting determines whether the QoS Flow created for this service can preempt another QoS Flow with a lower ARP priority level.
- - A preemption vulnerability setting. This setting determines whether the QoS Flow created for this service can be preempted by another QoS Flow with a higher ARP priority level.
+ - A 5QI value. This is mapped to a QCI value when used in 4G networks.
+ - A preemption capability setting. This setting determines whether the QoS flow or EPS bearer created for this service can preempt another QoS flow or EPS bearer with a lower ARP priority level.
+ - A preemption vulnerability setting. This setting determines whether the QoS flow or EPS bearer created for this service can be preempted by another QoS flow or EPS bearer with a higher ARP priority level.
- One or more *data flow policy rules*, which identify the SDFs to which the service should be applied. You can configure each rule with the following to determine when it's applied and the effect it will have:
Each SIM policy includes:
- A *network scope*, which defines how SIMs assigned to this SIM policy will connect to the data network. You can use the network scope to determine the following settings: - The services (as described in [Services](#services)) offered to SIMs on this data network.
- - A set of QoS characteristics that will be used to form the default QoS Flow for PDU sessions involving assigned SIMs on this data network.
+ - A set of QoS characteristics that will be used to form the default QoS flow for PDU sessions (or EPS bearer for PDN connections in 4G networks).
You can create multiple SIM policies to offer different QoS policy settings to separate groups of SIMs on the same data network. For example, you may want to create SIM policies with differing sets of services.
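To see how these pieces fit together, the following sketch shows a SIM policy whose network scope offers two services with different QoS characteristics. The structure and names are purely illustrative and don't represent the Azure resource schema:

```python
# Illustrative composition of services and a SIM policy; not the resource schema.
control_service = {"name": "factory-control", "precedence": 10,
                   "mbr_uplink": "2 Mbps", "mbr_downlink": "2 Mbps", "five_qi": 5}
video_service = {"name": "video-streaming", "precedence": 100,
                 "mbr_uplink": "5 Mbps", "mbr_downlink": "50 Mbps", "five_qi": 7}

sim_policy = {
    "name": "factory-floor-sims",
    "ue_ambr_uplink": "10 Gbps",
    "ue_ambr_downlink": "10 Gbps",
    "network_scope": {
        "data_network": "internet",
        "services": [control_service, video_service],  # offered to assigned SIMs
        "default_five_qi": 9,                           # used for the default QoS flow
        "default_arp_priority_level": 1,
    },
}
```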
-## Creating and assigning QoS Flows during PDU session establishment
-
-During PDU session establishment, the packet core instance takes the following steps:
-
-1. Identifies the SIM resource representing the UE involved in the PDU session and its associated SIM policy (as described in [SIM policies](#sim-policies)).
-1. Creates a default QoS Flow for the PDU session using the configured values on the SIM policy.
-1. Identifies whether the SIM policy has any associated services (as described in [Services](#services)). If it does, the packet core instance creates extra QoS Flows using the QoS characteristics defined on these services.
-1. Signals the QoS Flows and any non-default characteristics to the gNodeB.
-1. Sends a set of QoS rules (including SDF definitions taken from associated services) to the UE. The UE uses these rules to take the following steps:
-
- - Checks uplink packets against the SDFs.
- - Applies any necessary traffic control.
- - Identifies the QoS Flow to which each SDF should be bound.
- - Marks packets with the appropriate QFI. The QFI ensures packets receive the correct QoS handling between the UE and the packet core instance without further inspection.
-
-1. Inspects downlink packets to check their properties against the data flow templates of the associated services, and then takes the following steps based on this matching:
-
- - Applies any necessary traffic control.
- - Identifies the QoS Flow to which each SDF should be bound.
- - Applies any necessary QoS treatment.
- - Marks packets with the QFI corresponding to the correct QoS Flow. The QFI ensures the packets receive the correct QoS handling between the packet core instance and data network without further inspection.
- ## Designing your policy control configuration Azure Private 5G Core policy control configuration is flexible, allowing you to configure new services and SIM policies whenever you need, based on the changing requirements of your private mobile network.
When you first come to design the policy control configuration for your own priv
You can also use the example Azure Resource Manager template (ARM template) in [Configure a service and SIM policy using an ARM template](configure-service-sim-policy-arm-template.md) to quickly create a SIM policy with a single associated service.
+## QoS flow and EPS bearer creation and assignment
+
+This section describes how the packet core instance uses policy control configuration to create and assign QoS flows and EPS bearers. We describe the steps using 5G concepts for clarity, but the packet core instance takes the same steps in 4G networks. The table below gives the equivalent 4G concepts for reference.
+
+|5G |4G |
+|---|---|
+|PDU session | PDN connection |
+|QoS flow | EPS bearer |
+| gNodeB | eNodeB |
+
+During PDU session establishment, the packet core instance takes the following steps:
+
+1. Identifies the SIM resource representing the UE involved in the PDU session and its associated SIM policy (as described in [SIM policies](#sim-policies)).
+1. Creates a default QoS flow for the PDU session using the configured values on the SIM policy.
+1. Identifies whether the SIM policy has any associated services (as described in [Services](#services)). If it does, the packet core instance creates extra QoS flows using the QoS characteristics defined on these services.
+1. Signals the QoS flows and any non-default characteristics to the gNodeB.
+1. Sends a set of QoS rules (including SDF definitions taken from associated services) to the UE. The UE uses these rules to take the following steps:
+
+ - Checks uplink packets against the SDFs.
+ - Applies any necessary traffic control.
+ - Identifies the QoS flow to which each SDF should be bound.
+ - In 5G networks only, the UE marks packets with the appropriate QFI. The QFI ensures packets receive the correct QoS handling between the UE and the packet core instance without further inspection.
+
+1. Inspects downlink packets to check their properties against the data flow templates of the associated services, and then takes the following steps based on this matching:
+
+ - Applies any necessary traffic control.
+ - Identifies the QoS flow to which each SDF should be bound.
+ - Applies any necessary QoS treatment.
+ - In 5G networks only, the packet core instance marks packets with the QFI corresponding to the correct QoS flow. The QFI ensures the packets receive the correct QoS handling between the packet core instance and data network without further inspection.
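As a rough illustration of the matching and marking described in the last two steps, the sketch below shows how a downlink packet might be checked against SDF templates, bound to a QoS flow, and tagged with that flow's QFI. This is a simplified conceptual model in plain Python, not the packet core implementation; the packet filters are reduced to destination-port matching for brevity.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class QosFlow:
    qfi: int                 # QoS Flow Identifier signalled to the gNodeB and UE
    five_qi: int

@dataclass
class SdfTemplate:
    dest_ports: List[int]    # simplified packet filter: match on destination port only
    bound_flow: QosFlow      # the QoS flow this SDF is bound to

DEFAULT_FLOW = QosFlow(qfi=1, five_qi=9)

def classify_downlink(dest_port: int, templates: List[SdfTemplate]) -> QosFlow:
    """Return the QoS flow a downlink packet is bound to (default flow if no SDF matches)."""
    for template in templates:
        if dest_port in template.dest_ports:
            return template.bound_flow
    return DEFAULT_FLOW

def mark_packet(packet: dict, templates: List[SdfTemplate]) -> dict:
    """Tag the packet with the QFI so later hops apply the right QoS without re-inspecting it."""
    flow = classify_downlink(packet["dest_port"], templates)
    return {**packet, "qfi": flow.qfi}

# Example: traffic to port 5004 gets a dedicated flow; everything else uses the default flow.
video_flow = QosFlow(qfi=2, five_qi=7)
templates = [SdfTemplate(dest_ports=[5004], bound_flow=video_flow)]

print(mark_packet({"dest_port": 5004, "payload": b"..."}, templates))  # qfi == 2
print(mark_packet({"dest_port": 443, "payload": b"..."}, templates))   # qfi == 1 (default)
```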
+ ## Next steps - [Learn how to create an example set of policy control configuration](tutorial-create-example-set-of-policy-control-configuration.md)
private-5g-core Private 5G Core Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-5g-core-overview.md
Azure Private 5G Core instantiates a single private mobile network distributed a
You can also deploy packet core instances in 4G mode to support Private Long-Term Evolution (LTE) use cases. For example, you can use the 4G Citizens Broadband Radio Service (CBRS) spectrum. 4G mode uses the same cloud-native components as 5G mode (such as the UPF). This is in contrast to other solutions that need to revert to a legacy 4G stack.
-The following diagram shows the network functions supported by a packet core instance. It also shows the interfaces these network functions use to interoperate with third-party components. Note that when running in 4G mode, the Unified Data Repository (UDR) performs the role that would usually be performed by a Home Subscriber Store (HSS).
+The following diagram shows the network functions supported by a packet core instance. It also shows the interfaces these network functions use to interoperate with third-party components.
- Diagram displaying the packet core architecture. The packet core includes the following 5G network functions: the A M F, the S M F, the U P F, the U D R, the N R F, the P C F, the U D M, and the A U S F. The A M F communicates with 5G user equipment over the N1 interface. A G Node B provided by a Microsoft partner communicates with the A M F over the N2 interface and the U P F over the N3 interface. The U P F communicates with the data network over the N6 interface. When operating in 4G mode, the packet core includes S 11 I W F and M M E network functions. The S 11 I W F communicates with the M M E over the S 11 interface. An E Node B provided by a Microsoft partner communicates with the M M E over the S 1 C interface.
+ Diagram displaying the packet core architecture. The packet core includes the following 5G network functions: the A M F, the S M F, the U P F, the U D R, the N R F, the P C F, the U D M, and the A U S F. The A M F communicates with 5G user equipment over the N1 interface. A G Node B provided by a Microsoft partner communicates with the A M F over the N2 interface and the U P F over the N3 interface. The U P F communicates with the data network over the N6 interface. When operating in 4G mode, the packet core includes M M E Proxy and M M E network functions. The M M E Proxy communicates with the M M E over the S 11 interface. An E Node B provided by a Microsoft partner communicates with the M M E over the S 1 M M E interface.
:::image-end::: Each packet core instance is connected to the local RAN network to provide coverage for cellular wireless devices. You can choose to limit these devices to local connectivity. Alternatively, you can provide multiple routes to the cloud, internet, or other enterprise data centers running IoT and automation applications.
-## Support for 5GC features
+## Feature support
### Supported 5G network functions
Each packet core instance is connected to the local RAN network to provide cover
- Unified Data Repository (UDR) - Network Repository Function (NRF)
-### Supported 5G procedures
+### Supported 4G network functions
-For information on Azure Private 5G Core's support for standards-based 5G procedures, see [Statement of compliance - Azure Private 5G Core](statement-of-compliance.md).
+Azure Private 5G Core uses the following network functions when supporting 4G UEs, in addition to the 5G network functions listed above.
+
+- Mobile Management Entity (MME)
+- MME-Proxy - The MME-Proxy allows 4G UEs to be served by 5G network functions.
+
+The following 5G network functions perform specific roles when supporting 4G UEs.
+
+- The UDR operates as a Home Subscriber Server (HSS).
+- The UPF operates as the user plane of a System Architecture Evolution Gateway (SAEGW-U).
+
+### Supported 5G and 4G procedures
+
+For information on Azure Private 5G Core's support for standards-based 5G and 4G procedures, see [Statement of compliance - Azure Private 5G Core](statement-of-compliance.md).
### User equipment (UE) authentication and security context management Azure Private 5G Core supports the following authentication methods: -- Authentication using Subscription Permanent Identifiers (SUPI) and 5G Globally Unique Temporary Identities (5G-GUTI).-- 5G Authentication and Key Agreement (5G-AKA) for mutual authentication between UEs and the network.
+- Authentication using Subscription Permanent Identifiers (SUPI) and 5G Globally Unique Temporary Identities (5G-GUTI) for 5G user equipment (UEs).
+- Authentication using International Mobile Subscriber Identities (IMSI) and Globally Unique Temporary Identities (GUTI) for 4G UEs.
+- 5G Authentication and Key Agreement (5G-AKA) for mutual authentication between 5G UEs and the network.
+- Evolved Packet System based Authentication and Key Agreement (EPS-AKA) for mutual authentication between 4G UEs and the network.
The packet core instance performs ciphering and integrity protection of 5G non-access stratum (NAS). During UE registration, the UE includes its security capabilities for 5G NAS with 128-bit keys.
private-5g-core Statement Of Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/statement-of-compliance.md
All packet core network functions are compliant with Release 15 of the 3GPP spec
- TS 23.401: General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access. - TS 29.272: Evolved Packet System (EPS); Mobility Management Entity (MME) and Serving GPRS Support Node (SGSN) related interfaces based on Diameter protocol. - TS 29.274: 3GPP Evolved Packet System (EPS); Evolved General Packet Radio Service (GPRS) Tunneling Protocol for Control plane (GTPv2-C); Stage 3.
+- TS 33.401: 3GPP System Architecture Evolution (SAE); Security architecture.
- TS 36.413: Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1 Application Protocol (S1AP). ### Policy and charging control (PCC) framework
The implementation of all of the 3GPP specifications given in [3GPP specificatio
- IETF RFC 768: User Datagram Protocol. - IETF RFC 791: Internet Protocol.-- IETF RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers.
+- IETF RFC 2279: UTF-8, a transformation format of ISO 10646.
- IETF RFC 2460: Internet Protocol, Version 6 (IPv6) Specification.
+- IETF RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers.
+- IETF RFC 3748: Extensible Authentication Protocol (EAP).
+- IETF RFC 3986: Uniform Resource Identifier (URI): Generic Syntax.
+- IETF RFC 4187: Extensible Authentication Protocol Method for 3rd Generation Authentication and Key Agreement (EAP-AKA).
- IETF RFC 4291: IP Version 6 Addressing Architecture. - IETF RFC 4960: Stream Control Transmission Protocol.-- IETF RFC 2279: UTF-8, a transformation format of ISO 10646.-- IETF RFC 3986: Uniform Resource Identifier (URI): Generic Syntax.
+- IETF RFC 5448: Improved Extensible Authentication Protocol Method for 3rd Generation Authentication and Key Agreement (EAP-AKA').
- IETF RFC 5789: PATCH Method for HTTP.
+- IETF RFC 6458: Sockets API Extensions for the Stream Control Transmission Protocol (SCTP).
+- IETF RFC 6733: Diameter Base Protocol.
+- IETF RFC 6749: The OAuth 2.0 Authorization Framework.
- IETF RFC 6902: JavaScript Object Notation (JSON) Patch. - IETF RFC 7396: JSON Merge Patch. - IETF RFC 7540: Hypertext Transfer Protocol Version 2 (HTTP/2). - IETF RFC 7807: Problem Details for HTTP APIs. - IETF RFC 8259: The JavaScript Object Notation (JSON) Data Interchange Format.-- IETF RFC 3748: Extensible Authentication Protocol (EAP).-- IETF RFC 4187: Extensible Authentication Protocol Method for 3rd Generation Authentication and Key Agreement (EAP-AKA).-- IETF RFC 5448: Improved Extensible Authentication Protocol Method for 3rd Generation Authentication and Key Agreement (EAP-AKA').-- IETF RFC 6749: The OAuth 2.0 Authorization Framework. ## ITU-T Recommendations
purview Azure Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/azure-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
|| [Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)| [Yes](register-scan-azure-synapse-analytics.md#register) | [Yes](register-scan-azure-synapse-analytics.md#scan)| No* | No | || [Azure Files](register-scan-azure-files-storage-source.md)|[Yes](register-scan-azure-files-storage-source.md#register) | [Yes](register-scan-azure-files-storage-source.md#scan) | Limited* | No | || [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register) |[Yes](register-scan-azure-sql-database.md#scan)| [Yes (Preview)](register-scan-azure-sql-database.md#lineagepreview) | [Yes (Preview)](how-to-data-owner-policies-azure-sql-db.md) |
-|| [Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md)| [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | No* | No |
+|| [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md)| [Yes](register-scan-azure-sql-managed-instance.md#scan) | [Yes](register-scan-azure-sql-managed-instance.md#scan) | No* | No |
|| [Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)| [Yes](register-scan-synapse-workspace.md#register) | [Yes](register-scan-synapse-workspace.md#scan)| [Yes - Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No| |Database| [Amazon RDS](register-scan-amazon-rds.md) | [Yes](register-scan-amazon-rds.md#register-an-amazon-rds-data-source) | [Yes](register-scan-amazon-rds.md#scan-an-amazon-rds-database) | No | No | || [Cassandra](register-scan-cassandra-source.md)|[Yes](register-scan-cassandra-source.md#register) | No | [Yes](register-scan-cassandra-source.md#lineage)| No|
The following file types are supported for scanning, for schema extraction, and
Currently, nested data is only supported for JSON content.
-For all [system supported file types](#file-types-supported-for-scanning), if there is nested JSON content in a column, then the scanner parses the nested JSON data and surfaces it within the schema tab of the asset.
+For all [system supported file types](#file-types-supported-for-scanning), if there's nested JSON content in a column, then the scanner parses the nested JSON data and surfaces it within the schema tab of the asset.
-Nested data, or nested schema parsing, is not supported in SQL. A column with nested data will be reported and classified as is, and subdata will not be parsed.
+Nested data, or nested schema parsing, isn't supported in SQL. A column with nested data will be reported and classified as is, and subdata won't be parsed.
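As a rough illustration of what surfacing nested JSON content within an asset's schema means in practice, the sketch below flattens nested keys into dotted column paths. This is only a conceptual model of the behavior described above, not the Microsoft Purview scanner's parser.

```python
import json
from typing import Any, Dict

def flatten_json(value: Any, prefix: str = "") -> Dict[str, Any]:
    """Flatten nested JSON objects into dotted column paths (conceptual model only)."""
    if not isinstance(value, dict):
        return {prefix: value}
    flat: Dict[str, Any] = {}
    for key, child in value.items():
        path = f"{prefix}.{key}" if prefix else key
        flat.update(flatten_json(child, path))
    return flat

column_value = json.loads('{"customer": {"name": "Contoso", "address": {"city": "Seattle"}}}')
print(flatten_json(column_value))
# {'customer.name': 'Contoso', 'customer.address.city': 'Seattle'}
```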
## Sampling within a file
For all structured file formats, Microsoft Purview scanner samples files in the
- For structured file types, it samples the top 128 rows in each column or the first 1 MB, whichever is lower. - For document file formats, it samples the first 20 MB of each file.
- - If a document file is larger than 20 MB, then it is not subject to a deep scan (subject to classification). In that case, Microsoft Purview captures only basic meta data like file name and fully qualified name.
+ - If a document file is larger than 20 MB, then it isn't subject to a deep scan (the scan that drives classification). In that case, Microsoft Purview captures only basic metadata like the file name and fully qualified name.
- For **tabular data sources (SQL)**, it samples the top 128 rows. - For **Azure Cosmos DB (SQL API)**, up to 300 distinct properties from the first 10 documents in a container will be collected for the schema; for each property, values from up to 128 documents or the first 1 MB will be sampled.
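The limits above can be summarized as a simple decision function. The following Python sketch is only a paraphrase of the documented thresholds, not Microsoft Purview's scanner code; the asset kind labels are hypothetical.

```python
MAX_DOC_DEEP_SCAN_BYTES = 20 * 1024 * 1024   # document formats: first 20 MB only
MAX_SAMPLE_BYTES = 1 * 1024 * 1024           # structured files / SQL: 1 MB cap
MAX_SAMPLE_ROWS = 128                        # structured files / SQL: top 128 rows

def sampling_plan(kind: str, size_bytes: int) -> str:
    """Describe how much of an asset would be sampled, per the documented limits."""
    if kind == "document":
        if size_bytes > MAX_DOC_DEEP_SCAN_BYTES:
            return "no deep scan: capture basic metadata only (file name, fully qualified name)"
        return "deep scan the first 20 MB"
    if kind in ("structured-file", "sql-table"):
        return f"sample the top {MAX_SAMPLE_ROWS} rows or the first 1 MB, whichever is lower"
    if kind == "cosmos-container":
        return ("collect up to 300 distinct properties from the first 10 documents; "
                "sample values from up to 128 documents or the first 1 MB per property")
    return "kind not covered by this sketch"

print(sampling_plan("document", 35 * 1024 * 1024))  # too large: metadata only
print(sampling_plan("sql-table", 0))
```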
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md
Currently, the following data sources are supported to have a managed private en
- Azure Blob Storage - Azure Data Lake Storage Gen 2 - Azure SQL Database -- Azure SQL Database Managed Instance
+- Azure SQL Managed Instance
- Azure Cosmos DB - Azure Synapse Analytics - Azure Files
Additionally, you can deploy managed private endpoints for your Azure Key Vault
### Managed Virtual Network
-A Managed Virtual Network in Microsoft Purview is a virtual network which is deployed and managed by Azure inside the same region as Microsoft Purview account to allow scanning Azure data sources inside a managed network, without having to deploy and manage any self-hosted integration runtime virtual machines by the customer in Azure.
+A Managed Virtual Network in Microsoft Purview is a virtual network that is deployed and managed by Azure in the same region as the Microsoft Purview account. It allows you to scan Azure data sources inside a managed network without having to deploy and manage any self-hosted integration runtime virtual machines in Azure.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-vnet-architecture.png" alt-text="Microsoft Purview Managed Virtual Network architecture":::
-You can deploy an Azure Managed Integration Runtime within a Microsoft Purview Managed Virtual Network. From there, the Managed VNet Runtime will leverage private endpoints to securely connect to and scan supported data sources.
+You can deploy an Azure Managed Integration Runtime within a Microsoft Purview Managed Virtual Network. From there, the Managed VNet Runtime will use private endpoints to securely connect to and scan supported data sources.
Creating a Managed VNet Runtime within a Managed Virtual Network ensures that the data integration process is isolated and secure.
Only a Managed private endpoint in an approved state can send traffic to a given
### Interactive authoring
-Interactive authoring capabilities is used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when creating or editing an Azure Integration Runtime which is in Purview-Managed Virtual Network. The backend service will pre-allocate compute for interactive authoring functionalities. Otherwise, the compute will be allocated every time any interactive operation is performed which will take more time. The Time To Live (TTL) for interactive authoring is 60 minutes, which means it will automatically become disabled after 60 minutes of the last interactive authoring operation.
+Interactive authoring capabilities are used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when creating or editing an Azure Integration Runtime that is in a Purview-Managed Virtual Network. The backend service will pre-allocate compute for interactive authoring functionalities; otherwise, the compute will be allocated every time an interactive operation is performed, which takes more time. The Time To Live (TTL) for interactive authoring is 60 minutes, which means it's automatically disabled 60 minutes after the last interactive authoring operation.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-interactive-authoring.png" alt-text="Interactive authoring":::
Interactive authoring capabilities is used for functionalities like test connect
Before deploying a Managed VNet and Managed VNet Runtime for a Microsoft Purview account, ensure you meet the following prerequisites:
-1. An Microsoft Purview account deployed in one of the [supported regions](#supported-regions).
+1. A Microsoft Purview account deployed in one of the [supported regions](#supported-regions).
2. From Microsoft Purview roles, you must be a data curator at the root collection level in your Microsoft Purview account. 3. From Azure RBAC roles, you must be a contributor on the Microsoft Purview account and data source to approve private links.
Before deploying a Managed VNet and Managed VNet Runtime for a Microsoft Purview
:::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-region.png" alt-text="Screenshot that shows to create a Managed VNet Runtime":::
-5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in the Microsoft Purview governance portal for creating managed private endpoints for Microsoft Purview and its Managed Storage Account. Click on each workflow to approve the private endpoint for the corresponding Azure resource.
+5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in the Microsoft Purview governance portal for creating managed private endpoints for Microsoft Purview and its Managed Storage Account. Select each workflow to approve the private endpoint for the corresponding Azure resource.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-workflows.png" alt-text="Screenshot that shows deployment of a Managed VNet Runtime":::
-6. In Azure portal, from your Microsoft Purview account resource blade, approve the managed private endpoint. From Managed storage account blade approve the managed private endpoints for blob and queue
+6. In the Azure portal, from your Microsoft Purview account resource window, approve the managed private endpoint. From the Managed Storage Account page, approve the managed private endpoints for blob and queue.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Microsoft Purview":::
To scan any data sources using Managed VNet Runtime, you need to deploy and appr
2. Select **+ New**.
-3. From the list of supported data sources, select the type that corresponds to the data source you are planning to scan using Managed VNet Runtime.
+3. From the list of supported data sources, select the type that corresponds to the data source you're planning to scan using the Managed VNet Runtime.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source.png" alt-text="Screenshot that shows how to create a managed private endpoint for data sources":::
-4. Provide a name for the managed private endpoint, select the Azure subscription and the data source from the drop down lists. Select **create**.
+4. Provide a name for the managed private endpoint, select the Azure subscription and the data source from the drop-down lists. Select **create**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-pe.png" alt-text="Screenshot that shows how to select data source for setting managed private endpoint":::
-5. From the list of managed private endpoints, click on the newly created managed private endpoint for your data source and then click on **Manage approvals in the Azure portal**, to approve the private endpoint in Azure portal.
+5. From the list of managed private endpoints, select the newly created managed private endpoint for your data source and then select **Manage approvals in the Azure portal** to approve the private endpoint in the Azure portal.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-approval.png" alt-text="Screenshot that shows the approval for managed private endpoint for data sources":::
-6. By clicking on the link, you are redirected to Azure portal. Under private endpoints connection, select the newly created private endpoint and select **approve**.
+6. Selecting the link redirects you to the Azure portal. Under private endpoint connections, select the newly created private endpoint and select **approve**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-pe-azure.png" alt-text="Screenshot that shows how to approve a private endpoint for data sources in Azure portal":::
To scan any data sources using Managed VNet Runtime, you need to deploy and appr
### Register and scan a data source using Managed VNet Runtime #### Register data source
-It is important to register the data source in Microsoft Purview prior to setting up a scan for the data source. Follow these steps to register data source if you haven't yet registered it.
+It's important to register the data source in Microsoft Purview prior to setting up a scan for the data source. Follow these steps to register the data source if you haven't yet registered it.
1. Go to your Microsoft Purview account. 1. Select **Data Map** on the left menu.
To set up a scan using Account Key or SQL Authentication follow these steps:
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault":::
-6. Provide a name for the managed private endpoint, select the Azure subscription and the Azure Key Vault from the drop down lists. Select **create**.
+6. Provide a name for the managed private endpoint, select the Azure subscription and the Azure Key Vault from the drop-down lists. Select **create**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-create.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault in the Microsoft Purview governance portal":::
-7. From the list of managed private endpoints, click on the newly created managed private endpoint for your Azure Key Vault and then click on **Manage approvals in the Azure portal**, to approve the private endpoint in Azure portal.
+7. From the list of managed private endpoints, select the newly created managed private endpoint for your Azure Key Vault and then select **Manage approvals in the Azure portal** to approve the private endpoint in the Azure portal.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-approve.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Azure Key Vault":::
-8. By clicking on the link, you are redirected to Azure portal. Under private endpoints connection, select the newly created private endpoint for your Azure Key Vault and select **approve**.
+8. Selecting the link redirects you to the Azure portal. Under private endpoint connections, select the newly created private endpoint for your Azure Key Vault and select **approve**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-az-approve.png" alt-text="Screenshot that shows how to approve a private endpoint for an Azure Key Vault in Azure portal":::
To set up a scan using Account Key or SQL Authentication follow these steps:
14. Under **Connect via integration runtime**, select the newly created Managed VNet Runtime.
-15. For **Credential** Select the credential you have registered earlier, choose the appropriate collection for the scan, and select **Test connection**. On a successful connection, select **Continue**.
+15. For **Credential**, select the credential you've registered earlier, choose the appropriate collection for the scan, and select **Test connection**. On a successful connection, select **Continue**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-scan.png" alt-text="Screenshot that shows how to create a new scan using Managed VNet and a SPN":::
purview Concept Asset Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-asset-normalization.md
When ingesting assets into the Microsoft Purview data map, different sources updating the same data asset may send similar, but slightly different qualified names. While these qualified names represent the same asset, slight differences such as an extra character or different capitalization may cause these assets on the surface to appear different. To avoid storing duplicate entries and causing confusion when consuming the data catalog, Microsoft Purview applies normalization during ingestion to ensure all fully qualified names of the same entity type are in the same format.
-For example, you scan in an Azure Blob with the qualified name `https://myaccount.file.core.windows.net/myshare/folderA/folderB/my-file.parquet`. This blob is also consumed by an Azure Data Factory pipeline which will then add lineage information to the asset. The ADF pipeline may be configured to read the file as `https://myAccount.file.core.windows.net//myshare/folderA/folderB/my-file.parquet`. While the qualified name is different, this ADF pipeline is consuming the same piece of data. Normalization ensures that all the metadata from both Azure Blob Storage and Azure Data Factory is visible on a single asset, `https://myaccount.file.core.windows.net/myshare/folderA/folderB/my-file.parquet`.
+For example, you scan in an Azure Blob with the qualified name `https://myaccount.file.core.windows.net/myshare/folderA/folderB/my-file.parquet`. This blob is also consumed by an Azure Data Factory pipeline that will then add lineage information to the asset. The ADF pipeline may be configured to read the file as `https://myAccount.file.core.windows.net//myshare/folderA/folderB/my-file.parquet`. While the qualified name is different, this ADF pipeline is consuming the same piece of data. Normalization ensures that all the metadata from both Azure Blob Storage and Azure Data Factory is visible on a single asset, `https://myaccount.file.core.windows.net/myshare/folderA/folderB/my-file.parquet`.
## Normalization rules
Before: `https://myaccount.file.core.windows.net/myshare/{folderA}/folder{B/`
After: `https://myaccount.file.core.windows.net/myshare/%7BfolderA%7D/folder%7BB/` ### Trim section spaces
-Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Data Factory, Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
+Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Data Factory, Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
Before: `https://myaccount.file.core.windows.net/myshare/ folder A/folderB /` After: `https://myaccount.file.core.windows.net/myshare/folder A/folderB/` ### Remove hostname spaces
-Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
+Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
Before: `https://myaccount .file. core.win dows. net/myshare/folderA/folderB/` After: `https://myaccount.file.core.windows.net/myshare/folderA/folderB/` ### Remove square brackets
-Applies to: Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool
+Applies to: Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool
Before: `mssql://foo.database.windows.net/[bar]/dbo/[foo bar]`
After: `mssql://foo.database.windows.net/bar/dbo/foo%20bar`
> Spaces between two square brackets will be encoded ### Lowercase scheme
-Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Amazon S3
+Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Amazon S3
Before: `HTTPS://myaccount.file.core.windows.net/myshare/folderA/folderB/` After: `https://myaccount.file.core.windows.net/myshare/folderA/folderB/` ### Lowercase hostname
-Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Amazon S3
+Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Amazon S3
Before: `https://myAccount.file.Core.Windows.net/myshare/folderA/folderB/`
Before: `https://myAccount.file.core.windows.net/myshare/folderA/data.TXT`
After: `https://myaccount.file.core.windows.net/myshare/folderA/data.txt` ### Remove duplicate slash
-Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Data Factory, Azure SQL Database, Azure SQL Database Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
+Applies to: Azure Blob, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Data Factory, Azure SQL Database, Azure SQL Managed Instance, Azure SQL pool, Azure Cosmos DB, Azure Cognitive Search, Azure Data Explorer, Azure Data Share, Amazon S3
Before: `https://myAccount.file.core.windows.net//myshare/folderA////folderB/`
Before: `https://mystore.azuredatalakestore.net/folderA/folderB/abc.csv`
After: `adl://mystore.azuredatalakestore.net/folderA/folderB/abc.csv` ### Remove Trailing Slash
-Remove the trailing slash from higher level assets for Azure Blob, ADLS Gen1,and ADLS Gen2
+Remove the trailing slash from higher level assets for Azure Blob, ADLS Gen1, and ADLS Gen2
Applies to: Azure Blob, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2
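A few of these rules are easy to express as plain string transformations. The sketch below implements a handful of them (trim section spaces, lowercase scheme and hostname, remove hostname spaces, remove duplicate slashes, encode curly brackets) to mirror the before/after pairs above. It isn't the Purview ingestion code and ignores the per-source applicability lists.

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_qualified_name(name: str) -> str:
    """Apply a few of the documented normalization rules to a qualified name (illustrative only)."""
    scheme, netloc, path, query, fragment = urlsplit(name)
    scheme = scheme.lower()                                  # lowercase scheme
    netloc = netloc.replace(" ", "").lower()                 # remove hostname spaces, lowercase hostname
    path = "/".join(s.strip() for s in path.split("/"))      # trim section spaces
    while "//" in path:                                      # remove duplicate slashes
        path = path.replace("//", "/")
    path = path.replace("{", "%7B").replace("}", "%7D")      # encode curly brackets
    return urlunsplit((scheme, netloc, path, query, fragment))

print(normalize_qualified_name(
    "HTTPS://myAccount .file. core.win dows. net/myshare/ folder A//folder{B}/"))
# -> https://myaccount.file.core.windows.net/myshare/folder A/folder%7BB%7D/
```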
purview Create Sensitivity Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-sensitivity-label.md
Sensitivity labels are supported in the Microsoft Purview Data Map for the follo
|Data type |Sources | ||| |Automatic labeling for files | - Azure Blob Storage</br>- Azure Files</br>- Azure Data Lake Storage Gen 1 and Gen 2</br>- Amazon S3|
-|Automatic labeling for schematized data assets | - SQL server</br>- Azure SQL database</br>- Azure SQL Database Managed Instance</br>- Azure Synapse Analytics workspaces</br>- Azure Cosmos Database (SQL API)</br> - Azure database for MySQL</br> - Azure database for PostgreSQL</br> - Azure Data Explorer</br> |
+|Automatic labeling for schematized data assets | - SQL server</br>- Azure SQL database</br>- Azure SQL Managed Instance</br>- Azure Synapse Analytics workspaces</br>- Azure Cosmos Database (SQL API)</br> - Azure database for MySQL</br> - Azure database for PostgreSQL</br> - Azure Data Explorer</br> |
| | | ## Labeling for SQL databases
purview How To Automatically Label Your Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-automatically-label-your-content.md
For more information on how to set up scans on various assets in the Microsoft P
|Source |Reference | ||| |**Files within Storage** | [Register and Scan Azure Blob Storage](register-scan-azure-blob-storage-source.md) </br> [Register and scan Azure Files](register-scan-azure-files-storage-source.md) [Register and scan Azure Data Lake Storage Gen1](register-scan-adls-gen1.md) </br>[Register and scan Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)</br>[Register and scan Amazon S3](register-scan-amazon-s3.md) |
-|**database columns** | [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md) </br>[Register and scan an Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md) </br> [Register and scan Dedicated SQL pools](register-scan-azure-synapse-analytics.md)</br> [Register and scan Azure Synapse Analytics workspaces](register-scan-azure-synapse-analytics.md) </br> [Register and scan Azure Cosmos Database (SQL API)](register-scan-azure-cosmos-database.md) </br> [Register and scan an Azure MySQL database](register-scan-azure-mysql-database.md) </br> [Register and scan an Azure database for PostgreSQL](register-scan-azure-postgresql.md) |
+|**database columns** | [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md) </br>[Register and scan an Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md) </br> [Register and scan Dedicated SQL pools](register-scan-azure-synapse-analytics.md)</br> [Register and scan Azure Synapse Analytics workspaces](register-scan-azure-synapse-analytics.md) </br> [Register and scan Azure Cosmos Database (SQL API)](register-scan-azure-cosmos-database.md) </br> [Register and scan an Azure MySQL database](register-scan-azure-mysql-database.md) </br> [Register and scan an Azure database for PostgreSQL](register-scan-azure-postgresql.md) |
| | | ## View labels on assets in the catalog
purview How To Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-resource-group.md
Title: Resource group and subscription access provisioning by data owner
+ Title: Resource group and subscription access provisioning by data owner (preview)
description: Step-by-step guide showing how a data owner can create access policies to resource groups or subscriptions. Previously updated : 05/10/2022 Last updated : 05/27/2022
-# Resource group and subscription access provisioning by data owner (preview)
+# Resource group and subscription access provisioning by data owner (Preview)
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] [Access policies](concept-data-owner-policies.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
Title: Access provisioning by data owner to Azure Storage datasets
+ Title: Access provisioning by data owner to Azure Storage datasets (preview)
description: Step-by-step guide showing how data owners can create access policies to datasets in Azure Storage
Previously updated : 05/12/2022 Last updated : 05/27/2022
-# Access provisioning by data owner to Azure Storage datasets (preview)
+# Access provisioning by data owner to Azure Storage datasets (Preview)
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policy-authoring-generic.md
Title: Authoring and publishing data owner access policies
+ Title: Authoring and publishing data owner access policies (preview)
description: Step-by-step guide on how a data owner can author and publish access policies in Microsoft Purview
Previously updated : 4/18/2022 Last updated : 05/27/2022 # Authoring and publishing data owner access policies (Preview)
purview How To Integrate With Azure Security Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-integrate-with-azure-security-products.md
This document explains the steps required for connecting a Microsoft Purview acc
Microsoft Purview provides rich insights into the sensitivity of your data. This makes it valuable to security teams using Microsoft Defender for Cloud to manage the organization's security posture and protect against threats to their workloads. Data resources remain a popular target for malicious actors, making it crucial for security teams to identify, prioritize, and secure sensitive data resources across their cloud environments. The integration with Microsoft Purview expands visibility into the data layer, enabling security teams to prioritize resources that contain sensitive data.
-To take advantage of this [enrichment in Microsoft Defender for Cloud](../security-center/information-protection.md), no additional steps are needed in Microsoft Purview. Start exploring the security enrichments with Microsoft Defender for Cloud's [Inventory page](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/25) where you can see the list of data sources with classifications and sensitivity labels.
+To take advantage of this [enrichment in Microsoft Defender for Cloud](../security-center/information-protection.md), no further steps are needed in Microsoft Purview. Start exploring the security enrichments with Microsoft Defender for Cloud's [Inventory page](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/25) where you can see the list of data sources with classifications and sensitivity labels.
### Supported data sources The integration supports data sources in Azure and AWS; sensitive data discovered in these resources is shared with Microsoft Defender for Cloud:
The integration supports data sources in Azure and AWS; sensitive data discovere
- [Azure Files](./register-scan-azure-files-storage-source.md) - [Azure Database for MySQL](./register-scan-azure-mysql-database.md) - [Azure Database for PostgreSQL](./register-scan-azure-postgresql.md)-- [Azure SQL Managed Instance](./register-scan-azure-sql-database-managed-instance.md)
+- [Azure SQL Managed Instance](./register-scan-azure-sql-managed-instance.md)
- [Azure Dedicated SQL pool (formerly SQL DW)](./register-scan-azure-synapse-analytics.md) - [Azure SQL Database](./register-scan-azure-sql-database.md) - [Azure Synapse Analytics (Workspace)](./register-scan-synapse-workspace.md)
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
This article describes how you can create credentials in Microsoft Purview. Thes
A credential is authentication information that Microsoft Purview can use to authenticate to your registered data sources. A credential object can be created for various types of authentication scenarios, such as Basic Authentication requiring username/password. Credentials capture the specific information required to authenticate, based on the chosen type of authentication method. Credentials use your existing Azure Key Vault secrets for retrieving sensitive authentication information during the credential creation process.
-In Microsoft Purview, there are few options to use as authentication method to scan data sources such as the following options. Learn from each [data source article](azure-purview-connector-overview.md) for the its supported authentication.
+In Microsoft Purview, there are a few options you can use as the authentication method to scan data sources, such as the following. See each [data source article](azure-purview-connector-overview.md) for its supported authentication methods.
- [Microsoft Purview system-assigned managed identity](#use-microsoft-purview-system-assigned-managed-identity-to-set-up-scans) - [User-assigned managed identity](#create-a-user-assigned-managed-identity) (preview)
If you're using the Microsoft Purview system-assigned managed identity (SAMI) to
- [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md#authentication-for-a-scan) - [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#authentication-for-a-scan) - [Azure SQL Database](register-scan-azure-sql-database.md)-- [Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md#authentication-for-registration)
+- [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md#authentication-for-registration)
- [Azure Synapse Workspace](register-scan-synapse-workspace.md#authentication-for-registration) - [Azure Synapse dedicated SQL pools (formerly SQL DW)](register-scan-azure-synapse-analytics.md#authentication-for-registration)
The following steps will show you how to create a UAMI for Microsoft Purview to
* [Azure Data Lake Gen 1](register-scan-adls-gen1.md) * [Azure Data Lake Gen 2](register-scan-adls-gen2.md) * [Azure SQL Database](register-scan-azure-sql-database.md)
-* [Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md)
+* [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md)
* [Azure SQL Dedicated SQL pools](register-scan-azure-synapse-analytics.md) * [Azure Blob Storage](register-scan-azure-blob-storage-source.md)
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
Previously updated : 04/14/2022 Last updated : 05/27/2022 # Microsoft Purview product glossary
Information that is associated with data assets in Microsoft Purview, for exampl
## Approved The state given to any request that has been accepted as satisfactory by the designated individual or group who has authority to change the state of the request. ## Asset
-Any single object that is stored within a Microsoft Purview data catalog.
+Any single object that is stored within a Microsoft Purview Data Catalog.
> [!NOTE] > A single object in the catalog could potentially represent many objects in storage, for example, a resource set is an asset but it's made up of many partition files in storage. ## Azure Information Protection
An individual who is associated with an entity in the data catalog.
An operation that manages resources in your subscription, such as role-based access control and Azure policy, and that is sent to the Azure Resource Manager endpoint. Control plane operations can also apply to resources outside of Azure across on-premises, multicloud, and SaaS sources. ## Credential A verification of identity or tool used in an access control system. Credentials can be used to authenticate an individual or group to grant access to a data asset.
-## Data catalog
-Microsoft Purview features that enable customers to view and manage the metadata for assets in your data estate.
+## Data Catalog
+A searchable inventory of assets and their associated metadata that allows users to find and curate data across a data estate. The Data Catalog also includes a business glossary where subject matter experts can provide terms and definitions to add a business context to an asset.
## Data curator A role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets. ## Data map
An asset that has been scanned, classified (when applicable), and added to the M
## Insight reader A role that provides read-only access to insights reports for collections where the insights reader also has the **Data reader** role. ## Data Estate Insights
-An area within Microsoft Purview where you can view reports that summarize information about your data.
+An area of the Microsoft Purview governance portal that provides up-to-date reports and actionable insights about the data estate.
## Integration runtime The compute infrastructure used to scan in a data source. ## Lineage
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
This article outlines how to register multiple Azure sources and how to authenti
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
To learn how to add permissions on each resource type within a subscription or r
- [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md#authentication-for-a-scan) - [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#authentication-for-a-scan) - [Azure SQL Database](register-scan-azure-sql-database.md#authentication-for-a-scan)-- [Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md#authentication-for-registration)
+- [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md#authentication-for-registration)
- [Azure Synapse Analytics](register-scan-azure-synapse-analytics.md#authentication-for-registration) ### Steps to register
To create and run a new scan, do the following:
Each credential will be considered as the method of authentication for all the resources under a particular type. You must set the chosen credential on the resources in order to successfully scan them, as described [earlier in this article](#authentication-for-registration). 1. Within each type, you can select to either scan all the resources or scan a subset of them by name: - If you leave the option as **All**, then future resources of that type will also be scanned in future scan runs.
- - If you select specific storage accounts or SQL databases, then future resources of that type created within this subscription or resource group will not be included for scans, unless the scan is explicitly edited in the future.
+ - If you select specific storage accounts or SQL databases, then future resources of that type created within this subscription or resource group won't be included for scans, unless the scan is explicitly edited in the future.
1. Select **Test connection**. This will first test access to check if you've applied the Microsoft Purview managed identity as a reader on the subscription or resource group. If you get an error message, follow [these instructions](#prerequisites-for-registration) to resolve it. Then it will test your authentication and connection to each of your selected sources and generate a report. The number of sources selected will impact the time it takes to generate this report. If the connection fails for some resources, hovering over the **X** icon will display the detailed error message.
- :::image type="content" source="media/register-scan-azure-multiple-sources/test-connection.png" alt-text="Screenshot showing the scan set up slider, with the Test Connection button highlighted.":::
+ :::image type="content" source="media/register-scan-azure-multiple-sources/test-connection.png" alt-text="Screenshot showing the scan setup slider, with the Test Connection button highlighted.":::
:::image type="content" source="media/register-scan-azure-multiple-sources/test-connection-report.png" alt-text="Screenshot showing an example test connection report, with some connections passing and some failing. Hovering over one of the failed connections shows a detailed error report.":::
-1. After you test connection has passed, select **Continue** to proceed.
+1. After your test connection has passed, select **Continue** to proceed.
1. Select scan rule sets for each resource type that you chose in the previous step. You can also create scan rule sets inline.
To manage a scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data.
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-managed-instance.md
+
+ Title: 'Connect to and manage Azure SQL Managed Instance'
+description: This guide describes how to connect to Azure SQL Managed Instance in Microsoft Purview, and use Microsoft Purview's features to scan and manage your Azure SQL Managed Instance source.
+++++ Last updated : 11/02/2021+++
+# Connect to and manage an Azure SQL Managed Instance in Microsoft Purview
+
+This article outlines how to register an Azure SQL Managed Instance, as well as how to authenticate and interact with the Azure SQL Managed Instance in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md).
+
+## Supported capabilities
+
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
+|---|---|---|---|---|---|---|
+| [Yes](#register) | [Yes](#scan)| [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | No | No** |
+
+\** Lineage is supported if the dataset is used as a source/sink in a [Data Factory Copy activity](how-to-link-azure-data-factory.md).
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* An active [Microsoft Purview account](create-catalog-portal.md).
+
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+
+* [Configure public endpoint in Azure SQL Managed Instance](/azure/azure-sql/managed-instance/public-endpoint-configure)
+
+ > [!Note]
+ > We now support scanning Azure SQL Managed Instances over the private connection using Microsoft Purview ingestion private endpoints and a self-hosted integration runtime VM.
+ > For more information related to prerequisites, see [Connect to your Microsoft Purview and scan data sources privately and securely](./catalog-private-link-end-to-end.md)
+
+## Register
+
+This section describes how to register an Azure SQL Managed Instance in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
+
+### Authentication for registration
+
+If you need to create a new authentication method, you need to [authorize database access to Azure SQL Managed Instance](/azure/azure-sql/database/logins-create-manage). There are three authentication methods that Microsoft Purview supports today:
+
+- [System or user assigned managed identity](#system-or-user-assigned-managed-identity-to-register)
+- [Service Principal](#service-principal-to-register)
+- [SQL authentication](#sql-authentication-to-register)
+
+#### System or user assigned managed identity to register
+
+You can use either your Microsoft Purview system-assigned managed identity (SAMI), or a [user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity) (UAMI) to authenticate. Both options allow you to assign authentication directly to Microsoft Purview, like you would for any other user, group, or service principal. The Microsoft Purview system-assigned managed identity is created automatically when the account is created and has the same name as your Microsoft Purview account. A user-assigned managed identity is a resource that can be created independently. To create one, you can follow our [user-assigned managed identity guide](manage-credentials.md#create-a-user-assigned-managed-identity).
+
+You can find your managed identity Object ID in the Azure portal by following these steps:
+
+For the Microsoft Purview account's system-assigned managed identity:
+1. Open the Azure portal, and navigate to your Microsoft Purview account.
+1. Select the **Properties** tab on the left side menu.
+1. Select the **Managed identity object ID** value and copy it.
+
+For user-assigned managed identity (preview):
+1. Open the Azure portal, and navigate to your Microsoft Purview account.
+1. Select the **Managed identities** tab on the left side menu.
+1. Under user-assigned managed identities, select the intended identity to view its details.
+1. The object (principal) ID is displayed in the Essentials section of the overview page.
+
+Either managed identity will need permission to get metadata for the database, schemas and tables, and to query the tables for classification.
+- Create an Azure AD user in Azure SQL Managed Instance by following the prerequisites and tutorial on [Create contained users mapped to Azure AD identities](/azure/azure-sql/database/authentication-aad-configure?tabs=azure-powershell#create-contained-users-mapped-to-azure-ad-identities)
+- Assign `db_datareader` permission to the identity.
+
+#### Service Principal to register
+
+There are several steps to allow Microsoft Purview to use a service principal to scan your Azure SQL Managed Instance.
+
+#### Create or use an existing service principal
+
+To use a service principal, you can use an existing one or create a new one. If you're going to use an existing service principal, skip to the next step.
+To create a new service principal, follow these steps:
+
+ 1. Navigate to the [Azure portal](https://portal.azure.com).
+ 1. Select **Azure Active Directory** from the left-hand side menu.
+ 1. Select **App registrations**.
+ 1. Select **+ New application registration**.
+ 1. Enter a name for the **application** (the service principal name).
+ 1. Select **Accounts in this organizational directory only**.
+ 1. For Redirect URI, select **Web** and enter any URL you want; it doesn't have to be real or work.
+ 1. Then select **Register**.
+
+#### Configure Azure AD authentication in the database account
+
+The service principal must have permission to get metadata for the database, schemas, and tables. It must also be able to query the tables to sample for classification.
+- [Configure and manage Azure AD authentication with Azure SQL](/azure/azure-sql/database/authentication-aad-configure)
+- Create an Azure AD user in Azure SQL Managed Instance by following the prerequisites and tutorial on [Create contained users mapped to Azure AD identities](/azure/azure-sql/database/authentication-aad-configure?tabs=azure-powershell#create-contained-users-mapped-to-azure-ad-identities)
+- Assign `db_datareader` permission to the identity.
+
+#### Add service principal to key vault and Microsoft Purview's credential
+
+You'll need the service principal's application ID and secret:
+
+1. Navigate to your Service Principal in the [Azure portal](https://portal.azure.com)
+1. Copy the **Application (client) ID** value from **Overview** and the **Client secret** from **Certificates & secrets**.
+1. Navigate to your key vault
+1. Select **Settings > Secrets**
+1. Select **+ Generate/Import** and enter the **Name** of your choice and **Value** as the **Client secret** from your Service Principal
+1. Select **Create** to complete
+1. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the Service Principal to set up your scan.
+
+#### SQL authentication to register
+
+> [!Note]
+> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. It takes about **15 minutes** after you grant permission before the Microsoft Purview account has the appropriate permissions to scan the resource(s).
+
+You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a login for Azure SQL Managed Instance if you don't have this login available. You'll need **username** and **password** for the next steps.
+
+1. Navigate to your key vault in the Azure portal
+1. Select **Settings > Secrets**
+1. Select **+ Generate/Import** and enter a **Name** of your choice, with the **Value** set to the *password* for your Azure SQL Managed Instance login
+1. Select **Create** to complete
+1. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the **username** and **password** to set up your scan.
+
+### Steps to register
+
+1. Navigate to your [Microsoft Purview governance portal](https://web.purview.azure.com/resource/)
+
+1. Select **Data Map** on the left navigation.
+
+1. Select **Register**
+
+1. Select **Azure SQL Managed Instance** and then **Continue**.
+
+1. Select **From Azure subscription**, select the appropriate subscription from the **Azure subscription** drop-down box and the appropriate server from the **Server name** drop-down box.
+
+1. Provide the **public endpoint fully qualified domain name** and **port number**. Then select **Register** to register the data source.
+
+ :::image type="content" source="media/register-scan-azure-sql-managed-instance/add-azure-sql-database-managed-instance.png" alt-text="Screenshot of register sources screen, with Name, subscription, server name, and endpoint filled out.":::
+
+ For Example: `foobar.public.123.database.windows.net,3342`
+
+## Scan
+
+Follow the steps below to scan an Azure SQL Managed Instance to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+
+### Create and run scan
+
+To create and run a new scan, complete the following steps:
+
+1. Select the **Data Map** tab on the left pane in the Microsoft Purview governance portal.
+
+1. Select the Azure SQL Managed Instance source that you registered.
+
+1. Select **New scan**
+
+1. Select the credential to connect to your data source.
+
+ :::image type="content" source="media/register-scan-azure-sql-managed-instance/set-up-scan-sql-mi.png" alt-text="Screenshot of new scan window, with the Purview MSI selected as the credential, but a service principal, or SQL authentication also available.":::
+
+1. You can scope your scan to specific tables by choosing the appropriate items in the list.
+
+ :::image type="content" source="media/register-scan-azure-sql-managed-instance/scope-your-scan.png" alt-text="Screenshot of the scope your scan window, with a subset of tables selected for scanning.":::
+
+1. Then select a scan rule set. You can use the system default, choose from existing custom rule sets, or create a new rule set inline.
+
+ :::image type="content" source="media/register-scan-azure-sql-managed-instance/scan-rule-set.png" alt-text="Screenshot of scan rule set window, with the system default scan rule set selected.":::
+
+1. Choose your scan trigger. You can set up a schedule or run the scan once.
+
+ :::image type="content" source="media/register-scan-azure-sql-managed-instance/trigger-scan.png" alt-text="Screenshot of the set scan trigger window, with the recurring tab selected.":::
+
+1. Review your scan and select **Save and run**.
++
+## Next steps
+
+Now that you've registered your source, follow the guides below to learn more about Microsoft Purview and your data.
+
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
+- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
+- [Search Data Catalog](how-to-search-catalog.md)
purview Tutorial Azure Purview Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-checklist.md
Last updated 04/22/2022
This article lists prerequisites that help you get started quickly on Microsoft Purview planning and deployment.
-|No. |Prerequisite / Action |Required permission |Additional guidance and recommendations |
+|No. |Prerequisite / Action |Required permission |More guidance and recommendations |
|:|:|:|:| |1 | Azure Active Directory Tenant |N/A |An [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) should be associated with your subscription. <ul><li>*Global Administrator* or *Information Protection Administrator* role is required, if you plan to [extend Microsoft 365 Sensitivity Labels to Microsoft Purview for files and db columns](create-sensitivity-label.md)</li><li> *Global Administrator* or *Power BI Administrator* role is required, if you're planning to [scan Power BI tenants](register-scan-power-bi-tenant.md).</li></ul> | |2 |An active Azure Subscription |*Subscription Owner* |An Azure subscription is needed to deploy Microsoft Purview and its managed resources. If you don't have an Azure subscription, create a [free subscription](https://azure.microsoft.com/free/) before you begin. | |3 |Define whether you plan to deploy a Microsoft Purview with a managed event hub | N/A |A managed event hub is created as part of Microsoft Purview account creation, see Microsoft Purview account creation. You can publish messages to the event hub kafka topic ATLAS_HOOK and Microsoft Purview will consume and process it. Microsoft Purview will notify entity changes to the event hub kafka topic ATLAS_ENTITIES and user can consume and process it. | |4 |Register the following resource providers: <ul><li>Microsoft.Storage</li><li>Microsoft.EventHub (optional)</li><li>Microsoft.Purview</li></ul> |*Subscription Owner* or custom role to register Azure resource providers (_/register/action_) | [Register required Azure Resource Providers](../azure-resource-manager/management/resource-providers-and-types.md) in the Azure Subscription that is designated for Microsoft Purview Account. Review [Azure resource provider operations](../role-based-access-control/resource-provider-operations.md). |
-|5 |Update Azure Policy to allow deployment of the following resources in your Azure subscription: <ul><li>Microsoft Purview</li><li>Azure Storage</li><li>Azure Event Hubs (optional)</li></ul> |*Subscription Owner* |Use this step if an existing Azure Policy prevents deploying such Azure resources. If a blocking policy exists and needs to remain in place, please follow our [Microsoft Purview exception tag guide](create-azure-purview-portal-faq.md) and follow the steps to create an exception for Microsoft Purview accounts. |
+|5 |Update Azure Policy to allow deployment of the following resources in your Azure subscription: <ul><li>Microsoft Purview</li><li>Azure Storage</li><li>Azure Event Hubs (optional)</li></ul> |*Subscription Owner* |Use this step if an existing Azure Policy prevents deploying such Azure resources. If a blocking policy exists and needs to remain in place, follow our [Microsoft Purview exception tag guide](create-azure-purview-portal-faq.md) and follow the steps to create an exception for Microsoft Purview accounts. |
|6 | Define your network security requirements. | Network and Security architects. |<ul><li> Review [Microsoft Purview network architecture and best practices](concept-best-practices-network.md) to define what scenario is more relevant to your network requirements. </li><li>If private network is needed, use [Microsoft Purview Managed IR](catalog-managed-vnet.md) to scan Azure data sources when possible to reduce complexity and administrative overhead. </li></ul> | |7 |An Azure Virtual Network and Subnet(s) for Microsoft Purview private endpoints. | *Network Contributor* to create or update Azure VNet. |Use this step if you're planning to deploy [private endpoint connectivity with Microsoft Purview](catalog-private-link.md): <ul><li>Private endpoints for **Ingestion**.</li><li>Private endpoint for Microsoft Purview **Account**.</li><li>Private endpoint for Microsoft Purview **Portal**.</li></ul> <br> Deploy [Azure Virtual Network](../virtual-network/quick-create-portal.md) if you need one. | |8 |Deploy private endpoint for Azure data sources. |*Network Contributor* to set up private endpoints for each data source. |Perform this step, if you're planning to use [Private Endpoint for Ingestion](catalog-private-link-end-to-end.md). | |9 |Define whether to deploy new or use existing Azure Private DNS Zones. |Required [Azure Private DNS Zones](catalog-private-link-name-resolution.md) can be created automatically during Purview Account deployment using Subscription Owner / Contributor role |Use this step if you're planning to use Private Endpoint connectivity with Microsoft Purview. Required DNS Zones for Private Endpoint: <ul><li>privatelink.purview.azure.com</li><li>privatelink.purviewstudio.azure.com</li><li>privatelink.blob.core.windows.net</li><li>privatelink.queue.core.windows.net</li><li>privatelink.servicebus.windows.net</li></ul> | |10 |A management machine in your CorpNet or inside Azure VNet to launch the Microsoft Purview governance portal. |N/A |Use this step if you're planning to set **Allow Public Network** to **deny** on your Microsoft Purview Account. |
-|11 |Deploy a Microsoft Purview Account |Subscription Owner / Contributor |Purview account is deployed with 1 Capacity Unit and will scale up based [on demand](concept-elastic-data-map.md). |
-|12 |Deploy a Managed Integration Runtime and Managed private endpoints for Azure data sources. |*Data source admin* to setup Managed VNet inside Microsoft Purview. <br> *Network Contributor* to approve managed private endpoint for each Azure data source. |Perform this step if you're planning to use [Managed VNet](catalog-managed-vnet.md). within your Microsoft Purview account for scanning purposes. |
-|13 |Deploy Self-hosted integration runtime VMs inside your network. |Azure: *Virtual Machine Contributor* <br> On-prem: Application owner |Use this step if you're planning to perform any scans using [Self-hosted Integration Runtime](manage-integration-runtimes.md). |
+|11 |Deploy a Microsoft Purview Account |Subscription Owner / Contributor |Purview account is deployed with one Capacity Unit and will scale up based [on demand](concept-elastic-data-map.md). |
+|12 |Deploy a Managed Integration Runtime and Managed private endpoints for Azure data sources. |*Data source admin* to set up Managed VNet inside Microsoft Purview. <br> *Network Contributor* to approve managed private endpoint for each Azure data source. |Perform this step if you're planning to use [Managed VNet](catalog-managed-vnet.md). within your Microsoft Purview account for scanning purposes. |
+|13 |Deploy Self-hosted integration runtime VMs inside your network. |Azure: *Virtual Machine Contributor* <br> On-premises: Application owner |Use this step if you're planning to perform any scans using [Self-hosted Integration Runtime](manage-integration-runtimes.md). |
|14 |Create a Self-hosted integration runtime inside Microsoft Purview. |Data curator <br> VM Administrator or application owner |Use this step if you're planning to use Self-hosted Integration Runtime instead of Managed Integration Runtime or Azure Integration Runtime. <br><br> [Download the integration runtime](https://www.microsoft.com/en-us/download/details.aspx?id=39717). |
-|15 |Register your Self-hosted integration runtime | Virtual machine administrator |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server). <br> Use this step are using **Private Endpoint** to scan to **any** data sources. |
-|16 |Grant Azure RBAC **Reader** role to **Microsoft Purview MSI** at data sources' Subscriptions |*Subscription owner* or *User Access Administrator* |Use this step if you're planning to register [multiple](register-scan-azure-multiple-sources.md) or **any** of the following data sources: <ul><li>[Azure Blob Storage](register-scan-azure-blob-storage-source.md)</li><li>[Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)</li><li>[Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)</li><li>[Azure SQL Database](register-scan-azure-sql-database.md)</li><li>[Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md)</li><li>[Azure Synapse Analytics](register-scan-synapse-workspace.md)</li></ul> |
-|17 |Grant Azure RBAC **Storage Blob Data Reader** role to **Microsoft Purview MSI** at data sources Subscriptions. |*Subscription owner* or *User Access Administrator* | **Skip** this step if you are using Private Endpoint to connect to data sources. Use this step if you have these data sources:<ul><li>[Azure Blob Storage](register-scan-azure-blob-storage-source.md#using-a-system-or-user-assigned-managed-identity-for-scanning)</li><li>[Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#using-a-system-or-user-assigned-managed-identity-for-scanning)</li></ul> |
-|18 |Enable network connectivity to allow AzureServices to access data sources: <br> e.g. Enable "**Allow trusted Microsoft services to access this storage account**". |*Owner* or *Contributor* at Data source |Use this step if **Service Endpoint** is used in your data sources. (Don't use this step if Private Endpoint is used) |
-|19 |Enable **Azure Active Directory Authentication** on **Azure SQL Servers**, **Azure SQL Managed Instance** and **Azure Synapse Analytics** |Azure SQL Server Contributor |Use this step if you have **Azure SQL DB** or **Azure SQL Managed Instance** or **Azure Synapse Analytics** as data source. **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
-|20 |Grant **Microsoft Purview MSI** account with **db_datareader** role to Azure SQL databases and Azure SQL Managed Instance databases |Azure SQL Administrator |Use this step if you have **Azure SQL DB** or **Azure SQL Managed Instance** as data source. **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
-|21 |Grant Azure RBAC **Storage Blob Data Reader** to **Synapse SQL Server** for staging Storage Accounts |Owner or User Access Administrator at data source |Use this step if you have **Azure Synapse Analytics** as data sources. **Skip** this step if you are using Private Endpoint to connect to data sources. |
-|22 |Grant Azure RBAC **Reader** role to **Microsoft Purview MSI** at **Synapse workspace** resources |Owner or User Access Administrator at data source |Use this step if you have **Azure Synapse Analytics** as data sources. **Skip** this step if you are using Private Endpoint to connect to data sources. |
-|23 |Grant Azure **Purview MSI account** with **db_datareader** role |Azure SQL Administrator |Use this step if you have **Azure Synapse Analytics (Dedicated SQL databases)**. <br> **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
-|24 |Grant **Microsoft Purview MSI** account with **sysadmin** role |Azure SQL Administrator |Use this step if you have Azure Synapse Analytics (Serverless SQL databases). **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
-|25 |Create an app registration or service principal inside your Azure Active Directory tenant | Azure Active Directory *Global Administrator* or *Application Administrator* | Use this step if you're planning to perform a scan on a data source using Delegated Auth or [Service Principal](create-service-principal-azure.md).|
-|26 |Create an **Azure Key Vault** and a **Secret** to save data source credentials or service principal secret. |*Contributor* or *Key Vault Administrator* |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server). <br> Use this step are using **ingestion private endpoints** to scan a data source. |
-|27 |Grant Key **Vault Access Policy** to Microsoft Purview MSI: **Secret: get/list** |*Key Vault Administrator* |Use this step if you have **on-premises** / **VM-based data sources** (e.g. SQL Server) <br> Use this step if **Key Vault Permission Model** is set to [Vault Access Policy](../key-vault/general/assign-access-policy.md). |
-|28 |Grant **Key Vault RBAC role** Key Vault Secrets User to Microsoft Purview MSI. | *Owner* or *User Access Administrator* |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server) <br> Use this step if **Key Vault Permission Model** is set to [Azure role-based access control](../key-vault/general/rbac-guide.md). |
-|29 | Create a new connection to Azure Key Vault from the Microsoft Purview governance portal | *Data source admin* | Use this step if you are planning to use any of the following [authentication options](manage-credentials.md#create-a-new-credential) to scan a data source in Microsoft Purview: <ul><li>Account key</li><li>Basic Authentication</li><li>Delegated Auth</li><li>SQL Authentication</li><li>Service Principal</li><li>Consumer Key</li></ul>
+|15 |Register your Self-hosted integration runtime | Virtual machine administrator |Use this step if you have **on-premises** or **VM-based data sources** (for example, SQL Server). <br> Use this step if you're using **Private Endpoint** to scan **any** data sources. |
+|16 |Grant Azure RBAC **Reader** role to **Microsoft Purview MSI** at data sources' Subscriptions |*Subscription owner* or *User Access Administrator* |Use this step if you're planning to register [multiple](register-scan-azure-multiple-sources.md) or **any** of the following data sources: <ul><li>[Azure Blob Storage](register-scan-azure-blob-storage-source.md)</li><li>[Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)</li><li>[Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)</li><li>[Azure SQL Database](register-scan-azure-sql-database.md)</li><li>[Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md)</li><li>[Azure Synapse Analytics](register-scan-synapse-workspace.md)</li></ul> |
+|17 |Grant Azure RBAC **Storage Blob Data Reader** role to **Microsoft Purview MSI** at data sources Subscriptions. |*Subscription owner* or *User Access Administrator* | **Skip** this step if you're using Private Endpoint to connect to data sources. Use this step if you have these data sources:<ul><li>[Azure Blob Storage](register-scan-azure-blob-storage-source.md#using-a-system-or-user-assigned-managed-identity-for-scanning)</li><li>[Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#using-a-system-or-user-assigned-managed-identity-for-scanning)</li></ul> |
+|18 |Enable network connectivity to allow AzureServices to access data sources: <br> for example, Enable "**Allow trusted Microsoft services to access this storage account**". |*Owner* or *Contributor* at Data source |Use this step if **Service Endpoint** is used in your data sources. (Don't use this step if Private Endpoint is used) |
+|19 |Enable **Azure Active Directory Authentication** on **Azure SQL Servers**, **Azure SQL Managed Instance** and **Azure Synapse Analytics** |Azure SQL Server Contributor |Use this step if you have **Azure SQL DB** or **Azure SQL Managed Instance** or **Azure Synapse Analytics** as data source. **Skip** this step if you're using **Private Endpoint** to connect to data sources. |
+|20 |Grant **Microsoft Purview MSI** account with **db_datareader** role to Azure SQL databases and Azure SQL Managed Instance databases |Azure SQL Administrator |Use this step if you have **Azure SQL DB** or **Azure SQL Managed Instance** as data source. **Skip** this step if you're using **Private Endpoint** to connect to data sources. |
+|21 |Grant Azure RBAC **Storage Blob Data Reader** to **Synapse SQL Server** for staging Storage Accounts |Owner or User Access Administrator at data source |Use this step if you have **Azure Synapse Analytics** as data sources. **Skip** this step if you're using Private Endpoint to connect to data sources. |
+|22 |Grant Azure RBAC **Reader** role to **Microsoft Purview MSI** at **Synapse workspace** resources |Owner or User Access Administrator at data source |Use this step if you have **Azure Synapse Analytics** as data sources. **Skip** this step if you're using Private Endpoint to connect to data sources. |
+|23 |Grant Azure **Purview MSI account** with **db_datareader** role |Azure SQL Administrator |Use this step if you have **Azure Synapse Analytics (Dedicated SQL databases)**. <br> **Skip** this step if you're using **Private Endpoint** to connect to data sources. |
+|24 |Grant **Microsoft Purview MSI** account with **sysadmin** role |Azure SQL Administrator |Use this step if you have Azure Synapse Analytics (Serverless SQL databases). **Skip** this step if you're using **Private Endpoint** to connect to data sources. |
+|25 |Create an app registration or service principal inside your Azure Active Directory tenant | Azure Active Directory *Global Administrator* or *Application Administrator* | Use this step if you're planning to perform a scan on a data source using Delegated Auth or a [Service Principal](create-service-principal-azure.md).|
+|26 |Create an **Azure Key Vault** and a **Secret** to save data source credentials or service principal secret. |*Contributor* or *Key Vault Administrator* |Use this step if you have **on-premises** or **VM-based data sources** (for example, SQL Server). <br> Use this step if you're using **ingestion private endpoints** to scan a data source. |
+|27 |Grant Key **Vault Access Policy** to Microsoft Purview MSI: **Secret: get/list** |*Key Vault Administrator* |Use this step if you have **on-premises** / **VM-based data sources** (for example, SQL Server) <br> Use this step if **Key Vault Permission Model** is set to [Vault Access Policy](../key-vault/general/assign-access-policy.md). |
+|28 |Grant **Key Vault RBAC role** Key Vault Secrets User to Microsoft Purview MSI. | *Owner* or *User Access Administrator* |Use this step if you have **on-premises** or **VM-based data sources** (for example, SQL Server) <br> Use this step if **Key Vault Permission Model** is set to [Azure role-based access control](../key-vault/general/rbac-guide.md). |
+|29 | Create a new connection to Azure Key Vault from the Microsoft Purview governance portal | *Data source admin* | Use this step if you're planning to use any of the following [authentication options](manage-credentials.md#create-a-new-credential) to scan a data source in Microsoft Purview: <ul><li>Account key</li><li>Basic Authentication</li><li>Delegated Auth</li><li>SQL Authentication</li><li>Service Principal</li><li>Consumer Key</li></ul>
|30 |Deploy a private endpoint for Power BI tenant |*Power BI Administrator* <br> *Network contributor* |Use this step if you're planning to register a Power BI tenant as data source and your Microsoft Purview account is set to **deny public access**. <br> For more information, see [How to configure private endpoints for accessing Power BI](/power-bi/enterprise/service-security-private-links). | |31 |Connect Azure Data Factory to Microsoft Purview from Azure Data Factory Portal. **Manage** -> **Microsoft Purview**. Select **Connect to a Purview account**. <br> Validate if Azure resource tag **catalogUri** exists in ADF Azure resource. |Azure Data Factory Contributor / Data curator |Use this step if you have **Azure Data Factory**. | |32 |Verify if you have at least one **Microsoft 365 required license** in your Azure Active Directory tenant to use sensitivity labels in Microsoft Purview. |Azure Active Directory *Global Reader* |Perform this step if you're planning to extend **sensitivity labels to Microsoft Purview Data Map** <br> For more information, see [licensing requirements to use sensitivity labels on files and database columns in Microsoft Purview](sensitivity-labels-frequently-asked-questions.yml) |
-|33 |Consent "**Extend labeling to assets in Microsoft Purview Data Map**" |Compliance Administrator <br> Azure Information Protection Administrator |Use this step if you are interested in extending sensitivity labels to your data in the data map. <br> For more information, see [Labeling in the Microsoft Purview Data Map](create-sensitivity-label.md). |
+|33 |Consent "**Extend labeling to assets in Microsoft Purview Data Map**" |Compliance Administrator <br> Azure Information Protection Administrator |Use this step if you're interested in extending sensitivity labels to your data in the data map. <br> For more information, see [Labeling in the Microsoft Purview Data Map](create-sensitivity-label.md). |
|34 |Create new collections and assign roles in Microsoft Purview |*Collection admin* | [Create a collection and assign permissions in Microsoft Purview](./quickstart-create-collection.md). | |35 |Register and scan Data Sources in Microsoft Purview |*Data Source admin* <br> *Data Reader* or *Data Curator* | For more information, see [supported data sources and file types](azure-purview-connector-overview.md) | |36 |Grant access to data roles in the organization |*Collection admin* |Provide access to other teams to use Microsoft Purview: <ul><li> Data curator</li><li>Data reader</li><li>Collection admin</li><li>Data source admin</li><li>Policy Author</li><li>Workflow admin</li></ul> <br> For more information, see [Access control in Microsoft Purview](catalog-permissions.md). |
purview Tutorial Using Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-using-python-sdk.md
+
+ Title: "Tutorial: How to use Microsoft Purview Python SDK"
+description: This tutorial describes how to use the Microsoft Purview Python SDK to scan data and search the catalog.
++++ Last updated : 05/27/2022+
+# Customer intent: I can use the scanning and catalog Python SDKs to perform CRUD operations on data sources and scans, trigger scans and also to search the catalog.
++
+# Tutorial: Use the Microsoft Purview Python SDK
+
+This tutorial will introduce you to using the Microsoft Purview Python SDK. You can use the SDK to perform the most common Microsoft Purview operations programmatically, rather than through the Microsoft Purview governance portal.
+
+In this tutorial, you'll learn how to use the SDK to:
+
+> [!div class="checklist"]
+>* Grant the required rights to work programmatically with Microsoft Purview
+>* Register a Blob Storage container as a data source in Microsoft Purview
+>* Define and run a scan
+>* Search the catalog
+>* Delete a data source
+
+## Prerequisites
+
+For this tutorial, you'll need:
+* [Python 3.6 or higher](https://www.python.org/downloads/)
+* An active Azure Subscription. [If you don't have one, you can create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An Azure Active Directory tenant associated with your subscription.
+* An Azure Storage account. If you don't already have one, you can [follow our quickstart guide to create one](../storage/common/storage-account-create.md).
+* A Microsoft Purview account. If you don't already have one, you can [follow our quickstart guide to create one](create-catalog-portal.md).
+* A [service principal](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) with a [client secret](../active-directory/develop/howto-create-service-principal-portal.md#authentication-two-options).
+
+## Give Microsoft Purview access to the Storage account
+
+Before Microsoft Purview can scan the content of the Storage account, you need to give it the right role.
+
+1. Go to your Storage Account through the [Azure portal](https://portal.azure.com).
+1. Select Access Control (IAM).
+1. Select the Add button and select **Add role assignment**.
+
+ :::image type="content" source="media/tutorial-using-python-sdk/add-role-assignment-storage.png" alt-text="Screenshot of the Access Control menu in the Storage Account with the add button selected and then add role assignment selected.":::
+
+1. In the next window, search for the **Storage Blob Data Reader** role and select it:
+
+ :::image type="content" source="media/tutorial-using-python-sdk/storage-blob-reader-role.png" alt-text="Screenshot of the add role assignment menu, with Storage Blob Data Reader selected from the list of available roles.":::
+
+1. Then go to the **Members** tab and select **Select members**:
+
+ :::image type="content" source="media/tutorial-using-python-sdk/select-members-blob-reader-role.png" alt-text="Screenshot of the add role assignment menu with the + Select members button selected.":::
+
+1. A new pane appears on the right. Search for and select the name of your existing Microsoft Purview instance.
+1. You can then select **Review + Assign**.
+
+Microsoft Purview now has the read access it needs to scan your Blob Storage.
+
+## Grant your application access to your Microsoft Purview account
+
+1. First, you'll need the Client ID, Tenant ID, and Client secret from your service principal. To find this information, go to **Azure Active Directory** in the Azure portal.
+1. Then, select **App registrations**.
+1. Select your application and locate the required information:
+ * Name
+ * Client ID (or Application ID)
+ * Tenant ID (or Directory ID)
+
+ :::image type="content" source="media/tutorial-using-python-sdk/app-registration-info.png" alt-text="Screenshot of the service principal page in the Azure portal with the Client ID and Tenant ID highlighted.":::
+ * [Client secret](../active-directory/develop/howto-create-service-principal-portal.md#authentication-two-options)
+
+ :::image type="content" source="media/tutorial-using-python-sdk/get-service-principal-secret.png" alt-text="Screenshot of the service principal page in the Azure portal, with the Certificates & secrets tab selected, showing the available client certificates and secrets.":::
+
+1. You now need to give the relevant Microsoft Purview roles to your service principal. To do so, access your Microsoft Purview instance. Select **Open Microsoft Purview governance portal**, or open the [Microsoft Purview governance portal](https://web.purview.azure.com/) directly and choose the instance that you deployed.
+
+1. Inside the Microsoft Purview governance portal, select **Data map**, then **Collections**:
+
+ :::image type="content" source="media/tutorial-using-python-sdk/purview-collections.png" alt-text="Screenshot of the Microsoft Purview governance portal left menu. The data map tab is selected, then the collections tab is selected.":::
+
+1. Select the collection you want to work with, and go to the **Role assignments** tab. Add the service principal to the following roles:
+ * Collection admins
+ * Data source admins
+ * Data curators
+ * Data readers
+
+1. For each role, select the **Edit role assignments** button and select the role you want to add the service principal to. Or select the **Add** button next to each role, and add the service principal by searching for its name or Client ID, as shown below:
+
+ :::image type="content" source="media/tutorial-using-python-sdk/add-role-purview.png" alt-text="Screenshot of the Role assignments menu under a collection in the Microsoft Purview governance portal. The add user button is select next to the Collection admins tab. The add or remove collection admins pane is shown, with a search for the service principal in the text box.":::
+
+## Install the Python packages
+
+1. Open a new command prompt or terminal.
+1. Install the Azure identity package for authentication:
+ ```bash
+ pip install azure-identity
+ ```
+1. Install the Microsoft Purview Scanning Client package:
+ ```bash
+ pip install azure-purview-scanning
+ ```
+1. Install the Microsoft Purview Administration Client package:
+ ```bash
+ pip install azure-purview-administration
+ ```
+1. Install the Microsoft Purview Client package:
+ ```bash
+ pip install azure-purview-catalog
+ ```
+1. Install the Microsoft Purview Account package:
+ ```bash
+ pip install azure-purview-account
+ ```
+1. Install the Azure Core package:
+ ```bash
+ pip install azure-core
+ ```
+
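+Optionally, you can confirm that the packages installed correctly with a short check. This is a minimal sketch that uses only the standard library (`importlib.metadata` requires Python 3.8 or later); the distribution names match the pip installs above:
+
+```python
+from importlib.metadata import version
+
+# Print the installed version of each package used in this tutorial.
+for package in [
+    "azure-identity",
+    "azure-purview-scanning",
+    "azure-purview-administration",
+    "azure-purview-catalog",
+    "azure-purview-account",
+    "azure-core",
+]:
+    print(package, version(package))
+```
+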
+## Create Python script file
+
+Create a plain text file and save it as a Python script with the .py extension.
+For example: tutorial.py.
+
+## Instantiate a Scanning, Catalog, and Administration client
+
+In this section, you learn how to instantiate:
+* A scanning client, useful for registering data sources, creating and managing scan rules, triggering scans, and so on.
+* A catalog client, useful for interacting with the catalog: searching, browsing the discovered assets, identifying the sensitivity of your data, and so on.
+* An administration client, useful for interacting with the Microsoft Purview Data Map itself, for operations like listing collections.
+
+First you need to authenticate to your Azure Active Directory. For this, you'll use the [client secret you created](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
++
+1. Start with required import statements: our three clients, the credentials statement, and an Azure exceptions statement.
+ ```python
+ from azure.purview.scanning import PurviewScanningClient
+ from azure.purview.catalog import PurviewCatalogClient
+ from azure.purview.administration.account import PurviewAccountClient
+ from azure.identity import ClientSecretCredential
+ from azure.core.exceptions import HttpResponseError
+ ```
+
+1. Specify the following information in the code:
+ * Client ID (or Application ID)
+ * Tenant ID (or Directory ID)
+ * Client secret
+
+ ```python
+ client_id = "<your client id>"
+ client_secret = "<your client secret>"
+ tenant_id = "<your tenant id>"
+ ```
+
+1. You also need to specify the name of your Microsoft Purview account:
+
+ ```python
+ reference_name_purview = "<name of your Microsoft Purview account>"
+ ```
+1. You can now instantiate the three clients:
+
+ ```python
+ def get_credentials():
+ credentials = ClientSecretCredential(client_id=client_id, client_secret=client_secret, tenant_id=tenant_id)
+ return credentials
+
+ def get_purview_client():
+ credentials = get_credentials()
+ client = PurviewScanningClient(endpoint=f"https://{reference_name_purview}.scan.purview.azure.com", credential=credentials, logging_enable=True)
+ return client
+
+ def get_catalog_client():
+ credentials = get_credentials()
+ client = PurviewCatalogClient(endpoint=f"https://{reference_name_purview}.purview.azure.com/", credential=credentials, logging_enable=True)
+ return client
+
+ def get_admin_client():
+ credentials = get_credentials()
+ client = PurviewAccountClient(endpoint=f"https://{reference_name_purview}.purview.azure.com/", credential=credentials, logging_enable=True)
+ return client
+ ```
+
+Many of our scripts will start with these same steps, as we'll need these clients to interact with the account.
+
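+As a quick smoke test of this setup, you can call the administration client to list your collections. This is a hedged sketch that reuses the functions defined above; the `list_collections` operation and the `friendlyName` field are also used later in this tutorial:
+
+```python
+try:
+    admin_client = get_admin_client()
+except ValueError as e:
+    print(e)
+
+# Print the friendly name of every collection in the account.
+for collection in admin_client.collections.list_collections():
+    print(collection["friendlyName"])
+```
+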
+## Register a data source
+
+In this section, you'll register your Blob Storage.
+
+1. As discussed in the previous section, first import the clients you'll need to access your Microsoft Purview account. Also import the Azure error response package so you can troubleshoot, and ClientSecretCredential to construct your Azure credentials.
+
+ ```python
+ from azure.purview.administration.account import PurviewAccountClient
+ from azure.purview.scanning import PurviewScanningClient
+ from azure.core.exceptions import HttpResponseError
+ from azure.identity import ClientSecretCredential
+ ```
+
+1. Gather the resource ID for your storage account by following this guide: [get the resource ID for a storage account.](../storage/common/storage-account-get-info.md#get-the-resource-id-for-a-storage-account)
+
+1. Then, in your python file, define the following information to be able to register the Blob storage programmatically:
+
+ ```python
+ storage_name = "<name of your Storage Account>"
+ storage_id = "<id of your Storage Account>"
+ rg_name = "<name of your resource group>"
+ rg_location = "<location of your resource group>"
+ reference_name_purview = "<name of your Microsoft Purview account>"
+ ```
+
+1. Provide the name of the collection where you'd like to register your blob storage. (It should be the same collection where you applied permissions earlier. If it isn't, first apply permissions to this collection.) If it's the root collection, use the same name as your Microsoft Purview instance.
+
+ ```python
+ collection_name = "<name of your collection>"
+ ```
+
+1. Create a function to construct the credentials to access your Microsoft Purview account:
+
+ ```python
+ client_id = "<your client id>"
+ client_secret = "<your client secret>"
+ tenant_id = "<your tenant id>"
++
+ def get_credentials():
+ credentials = ClientSecretCredential(client_id=client_id, client_secret=client_secret, tenant_id=tenant_id)
+ return credentials
+ ```
+
+1. All collections in the Microsoft Purview data map have a **friendly name** and a **name**.
+    * The **friendly name** is the one you see on the collection. For example: Sales.
+ * The **name** for all collections (except the root collection) is a six-character name assigned by the data map.
+
+ Python needs this six-character name to reference any sub collections. To convert your **friendly name** automatically to the six-character collection name needed in your script, add this block of code:
+
+ ```python
+ def get_admin_client():
+ credentials = get_credentials()
+ client = PurviewAccountClient(endpoint=f"https://{reference_name_purview}.purview.azure.com/", credential=credentials, logging_enable=True)
+ return client
+
+ try:
+ admin_client = get_admin_client()
+ except ValueError as e:
+ print(e)
+
+    collection_list = admin_client.collections.list_collections()
+ for collection in collection_list:
+ if collection["friendlyName"].lower() == collection_name.lower():
+ collection_name = collection["name"]
+ ```
+
+1. Depending on the operation, both clients also need an input body. To register a source, provide the following input body for data source registration:
+
+ ```python
+ ds_name = "<friendly name for your data source>"
+
+ body_input = {
+ "kind": "AzureStorage",
+ "properties": {
+ "endpoint": "endpoint": f"https://{storage_name}.blob.core.windows.net/",
+ "resourceGroup": rg_name,
+ "location": rg_location,
+ "resourceName": storage_name,
+ "resourceId": storage_id,
+ "collection": {
+ "type": "CollectionReference",
+ "referenceName": collection_name
+ },
+ "dataUseGovernance": "Disabled"
+ }
+ }
+ ```
+
+1. Now you can call your Microsoft Purview clients and register the data source.
+
+ ```python
+ def get_purview_client():
+ credentials = get_credentials()
+ client = PurviewScanningClient(endpoint=f"https://{reference_name_purview}.scan.purview.azure.com", credential=credentials, logging_enable=True)
+ return client
+
+ try:
+ client = get_purview_client()
+ except ValueError as e:
+ print(e)
+
+ try:
+ response = client.data_sources.create_or_update(ds_name, body=body_input)
+ print(response)
+ print(f"Data source {ds_name} successfully created or updated")
+ except HttpResponseError as e:
+ print(e)
+ ```
+
+When the registration process succeeds, you can see an enriched body response from the client.
+
+In the following sections, you'll scan the data source you registered and search the catalog. Each of those scripts is structured very similarly to this registration script.
+
+### Full code
+
+```python
+from azure.purview.scanning import PurviewScanningClient
+from azure.identity import ClientSecretCredential
+from azure.core.exceptions import HttpResponseError
+from azure.purview.administration.account import PurviewAccountClient
+
+client_id = "<your client id>"
+client_secret = "<your client secret>"
+tenant_id = "<your tenant id>"
+reference_name_purview = "<name of your Microsoft Purview account>"
+storage_name = "<name of your Storage Account>"
+storage_id = "<id of your Storage Account>"
+rg_name = "<name of your resource group>"
+rg_location = "<location of your resource group>"
+collection_name = "<name of your collection>"
+ds_name = "<friendly data source name>"
+
+def get_credentials():
+ credentials = ClientSecretCredential(client_id=client_id, client_secret=client_secret, tenant_id=tenant_id)
+ return credentials
+
+def get_purview_client():
+ credentials = get_credentials()
+ client = PurviewScanningClient(endpoint=f"https://{reference_name_purview}.scan.purview.azure.com", credential=credentials, logging_enable=True)
+ return client
+
+def get_admin_client():
+ credentials = get_credentials()
+ client = PurviewAccountClient(endpoint=f"https://{reference_name_purview}.purview.azure.com/", credential=credentials, logging_enable=True)
+ return client
+
+try:
+ admin_client = get_admin_client()
+except ValueError as e:
+ print(e)
+
+collection_list = admin_client.collections.list_collections()
+for collection in collection_list:
+ if collection["friendlyName"].lower() == collection_name.lower():
+ collection_name = collection["name"]
++
+body_input = {
+ "kind": "AzureStorage",
+ "properties": {
+ "endpoint": f"https://{storage_name}.blob.core.windows.net/",
+ "resourceGroup": rg_name,
+ "location": rg_location,
+ "resourceName": storage_name,
+ "resourceId": storage_id,
+ "collection": {
+ "type": "CollectionReference",
+ "referenceName": collection_name
+ },
+ "dataUseGovernance": "Disabled"
+ }
+}
+
+try:
+ client = get_purview_client()
+except ValueError as e:
+ print(e)
+
+try:
+ response = client.data_sources.create_or_update(ds_name, body=body_input)
+ print(response)
+ print(f"Data source {ds_name} successfully created or updated")
+except HttpResponseError as e:
+ print(e)
+```
+
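+Optionally, you can verify the registration by listing the registered data sources. This is a hedged sketch: it assumes the scanning client exposes a `list_all` operation on `data_sources` and that each returned item carries `name` and `kind` fields, and it reuses the `client` from the script above:
+
+```python
+# List every registered data source to confirm the new one shows up.
+try:
+    for source in client.data_sources.list_all():
+        print(source.get("name"), "-", source.get("kind"))
+except HttpResponseError as e:
+    print(e)
+```
+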
+## Scan the data source
+
+Scanning a data source can be done in two steps:
+
+1. Create a scan definition
+1. Trigger a scan run
+
+In this tutorial, you'll use the default scan rules for Blob Storage containers. However, you can also [create custom scan rules programmatically with the Microsoft Purview Scanning Client](/python/api/azure-purview-scanning/azure.purview.scanning.operations.scanrulesetsoperations).
+
+Now let's scan the data source you registered above.
+
+1. Add import statements for generating a [unique identifier](https://en.wikipedia.org/wiki/Universally_unique_identifier), the Microsoft Purview scanning client, the Microsoft Purview administration client, the Azure error response package so you can troubleshoot, and ClientSecretCredential to gather your Azure credentials.
+
+ ```python
+ import uuid
+ from azure.purview.scanning import PurviewScanningClient
+ from azure.purview.administration.account import PurviewAccountClient
+ from azure.core.exceptions import HttpResponseError
+ from azure.identity import ClientSecretCredential
+ ```
+
+1. Create a scanning client using your credentials:
+
+ ```python
+ client_id = "<your client id>"
+ client_secret = "<your client secret>"
+ tenant_id = "<your tenant id>"
+
+ def get_credentials():
+ credentials = ClientSecretCredential(client_id=client_id, client_secret=client_secret, tenant_id=tenant_id)
+ return credentials
+
+ def get_purview_client():
+ credentials = get_credentials()
+ client = PurviewScanningClient(endpoint=f"https://{reference_name_purview}.scan.purview.azure.com", credential=credentials, logging_enable=True)
+ return client
+
+ try:
+ client = get_purview_client()
+ except ValueError as e:
+ print(e)
+ ```
+
+1. Add the code to gather the internal name of your collection. (For more information, see the previous section):
+
+ ```python
+ collection_name = "<name of the collection where you will be creating the scan>"
+
+ def get_admin_client():
+ credentials = get_credentials()
+ client = PurviewAccountClient(endpoint=f"https://{reference_name_purview}.purview.azure.com/", credential=credentials, logging_enable=True)
+ return client
+
+ try:
+ admin_client = get_admin_client()
+ except ValueError as e:
+ print(e)
+
+    collection_list = admin_client.collections.list_collections()
+ for collection in collection_list:
+ if collection["friendlyName"].lower() == collection_name.lower():
+ collection_name = collection["name"]
+ ```
+
+1. Then, create a scan definition:
+
+ ```python
+ ds_name = "<name of your registered data source>"
+ scan_name = "<name of the scan you want to define>"
+ reference_name_purview = "<name of your Microsoft Purview account>"
+
+ body_input = {
+ "kind":"AzureStorageMsi",
+ "properties": {
+ "scanRulesetName": "AzureStorage",
+ "scanRulesetType": "System", #We use the default scan rule set
+ "collection":
+ {
+ "referenceName": collection_name,
+ "type": "CollectionReference"
+ }
+ }
+ }
+
+ try:
+ response = client.scans.create_or_update(data_source_name=ds_name, scan_name=scan_name, body=body_input)
+ print(response)
+ print(f"Scan {scan_name} successfully created or updated")
+ except HttpResponseError as e:
+ print(e)
+ ```
+
+1. Now that the scan is defined you can trigger a scan run with a unique ID:
+
+ ```python
+ run_id = uuid.uuid4() #unique id of the new scan
+
+ try:
+ response = client.scan_result.run_scan(data_source_name=ds_name, scan_name=scan_name, run_id=run_id)
+ print(response)
+ print(f"Scan {scan_name} successfully started")
+ except HttpResponseError as e:
+ print(e)
+ ```
+
+### Full code
+
+```python
+import uuid
+from azure.purview.scanning import PurviewScanningClient
+from azure.purview.administration.account import PurviewAccountClient
+from azure.core.exceptions import HttpResponseError
+from azure.identity import ClientSecretCredential
+
+ds_name = "<name of your registered data source>"
+scan_name = "<name of the scan you want to define>"
+reference_name_purview = "<name of your Microsoft Purview account>"
+client_id = "<your client id>"
+client_secret = "<your client secret>"
+tenant_id = "<your tenant id>"
+collection_name = "<name of the collection where you will be creating the scan>"
+
+def get_credentials():
+ credentials = ClientSecretCredential(client_id=client_id, client_secret=client_secret, tenant_id=tenant_id)
+ return credentials
+
+def get_purview_client():
+ credentials = get_credentials()
+ client = PurviewScanningClient(endpoint=f"https://{reference_name_purview}.scan.purview.azure.com", credential=credentials, logging_enable=True)
+ return client
+
+def get_admin_client():
+ credentials = get_credentials()
+ client = PurviewAccountClient(endpoint=f"https://{reference_name_purview}.purview.azure.com/", credential=credentials, logging_enable=True)
+ return client
+
+try:
+ admin_client = get_admin_client()
+except ValueError as e:
+ print(e)
+
+collection_list = admin_client.collections.list_collections()
+for collection in collection_list:
+ if collection["friendlyName"].lower() == collection_name.lower():
+ collection_name = collection["name"]
++
+try:
+ client = get_purview_client()
+except ValueError as e:
+ print(e)
+
+body_input = {
+ "kind":"AzureStorageMsi",
+ "properties": {
+ "scanRulesetName": "AzureStorage",
+ "scanRulesetType": "System",
+ "collection": {
+ "type": "CollectionReference",
+ "referenceName": collection_name
+ }
+ }
+}
+
+try:
+ response = client.scans.create_or_update(data_source_name=ds_name, scan_name=scan_name, body=body_input)
+ print(response)
+ print(f"Scan {scan_name} successfully created or updated")
+except HttpResponseError as e:
+ print(e)
+
+run_id = uuid.uuid4() #unique id of the new scan
+
+try:
+ response = client.scan_result.run_scan(data_source_name=ds_name, scan_name=scan_name, run_id=run_id)
+ print(response)
+ print(f"Scan {scan_name} successfully started")
+except HttpResponseError as e:
+ print(e)
+```
+
+## Search catalog
+
+Once a scan is complete, assets have likely been discovered and even classified. This process can take some time after the scan finishes, so you may need to wait before running the next portion of code. Wait for your scan to show **completed**, and for the assets to appear in the Microsoft Purview Data Catalog.
+
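+If you prefer to check the scan status programmatically rather than in the portal, the scanning client can list a scan's run history. This is a hedged sketch: it assumes a `list_scan_history` operation on `scan_result` and a `status` field on each run, and it reuses the `client`, `ds_name`, and `scan_name` from the previous section:
+
+```python
+# Print the status of each recorded run for the scan defined earlier.
+try:
+    runs = client.scan_result.list_scan_history(data_source_name=ds_name, scan_name=scan_name)
+    for run in runs:
+        print(run.get("id"), run.get("status"))
+except HttpResponseError as e:
+    print(e)
+```
+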
+Once the assets are ready, you can use the Microsoft Purview Catalog client to search the whole catalog.
+
+1. This time you need to import the **catalog** client instead of the scanning one. Also include HttpResponseError and ClientSecretCredential.
+
+ ```python
+ from azure.purview.catalog import PurviewCatalogClient
+ from azure.identity import ClientSecretCredential
+ from azure.core.exceptions import HttpResponseError
+ ```
+
+1. Create a function to get the credentials to access your Microsoft Purview account, and instantiate the catalog client.
+
+ ```python
+ client_id = "<your client id>"
+ client_secret = "<your client secret>"
+ tenant_id = "<your tenant id>"
+ reference_name_purview = "<name of your Microsoft Purview account>"
+
+ def get_credentials():
+ credentials = ClientSecretCredential(client_id=client_id, client_secret=client_secret, tenant_id=tenant_id)
+ return credentials
+
+ def get_catalog_client():
+ credentials = get_credentials()
+ client = PurviewCatalogClient(endpoint=f"https://{reference_name_purview}.purview.azure.com/", credential=credentials, logging_enable=True)
+ return client
+
+ try:
+ client_catalog = get_catalog_client()
+ except ValueError as e:
+ print(e)
+ ```
+
+1. Configure your search criteria and keywords in the input body:
+
+ ```python
+ keywords = "keywords you want to search"
+
+ body_input={
+ "keywords": keywords
+ }
+ ```
+
+    Here you only specify keywords, but keep in mind [you can add many other fields to further specify your query](/python/api/azure-purview-catalog/azure.purview.catalog.operations.discoveryoperations#azure-purview-catalog-operations-discoveryoperations-query). A hedged sketch with an extra field follows these steps.
+
+1. Search the catalog:
+
+ ```python
+ try:
+ response = client_catalog.discovery.query(search_request=body_input)
+ print(response)
+ except HttpResponseError as e:
+ print(e)
+ ```
+
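+As mentioned in step 3, the request body can carry more than keywords. This is a hedged sketch of a slightly richer query that caps the number of results; the `limit` field and the response fields shown (`value`, `name`, `qualifiedName`) reflect the typical search request and response shape and may differ across SDK versions:
+
+```python
+# Same keyword search, but return at most 10 matching assets.
+body_input = {
+    "keywords": keywords,
+    "limit": 10
+}
+
+try:
+    response = client_catalog.discovery.query(search_request=body_input)
+    for asset in response.get("value", []):
+        print(asset.get("name"), "-", asset.get("qualifiedName"))
+except HttpResponseError as e:
+    print(e)
+```
+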
+### Full code
+
+```python
+from azure.purview.catalog import PurviewCatalogClient
+from azure.identity import ClientSecretCredential
+from azure.core.exceptions import HttpResponseError
+
+client_id = "<your client id>"
+client_secret = "<your client secret>"
+tenant_id = "<your tenant id>"
+reference_name_purview = "<name of your Microsoft Purview account>"
+keywords = "<keywords you want to search for>"
+
+def get_credentials():
+ credentials = ClientSecretCredential(client_id=client_id, client_secret=client_secret, tenant_id=tenant_id)
+ return credentials
+
+def get_catalog_client():
+ credentials = get_credentials()
+ client = PurviewCatalogClient(endpoint=f"https://{reference_name_purview}.purview.azure.com/", credential=credentials, logging_enable=True)
+ return client
+
+body_input={
+ "keywords": keywords
+}
+
+try:
+ catalog_client = get_catalog_client()
+except ValueError as e:
+ print(e)
+
+try:
+ response = catalog_client.discovery.query(search_request=body_input)
+ print(response)
+except HttpResponseError as e:
+ print(e)
+```
+
+## Delete a data source
+
+In this section, you'll learn how to delete the data source you registered earlier. This operation is fairly simple, and is done with the scanning client.
+
+1. Import the **scanning** client. Also include HttpResponseError and ClientSecretCredential.
+
+ ```python
+ from azure.purview.scanning import PurviewScanningClient
+ from azure.identity import ClientSecretCredential
+ from azure.core.exceptions import HttpResponseError
+ ```
+
+1. Create a function to get the credentials to access your Microsoft Purview account, and instantiate the scanning client.
+
+ ```python
+ client_id = "<your client id>"
+ client_secret = "<your client secret>"
+ tenant_id = "<your tenant id>"
+ reference_name_purview = "<name of your Microsoft Purview account>"
+
+ def get_credentials():
+ credentials = ClientSecretCredential(client_id=client_id, client_secret=client_secret, tenant_id=tenant_id)
+ return credentials
+
+ def get_scanning_client():
+ credentials = get_credentials()
+        client = PurviewScanningClient(endpoint=f"https://{reference_name_purview}.scan.purview.azure.com", credential=credentials, logging_enable=True)
+ return client
+
+ try:
+ client_scanning = get_scanning_client()
+ except ValueError as e:
+ print(e)
+ ```
+
+1. Delete the data source:
+
+ ```python
+ ds_name = "<name of the registered data source you want to delete>"
+ try:
+ response = client_scanning.data_sources.delete(ds_name)
+ print(response)
+ print(f"Data source {ds_name} successfully deleted")
+ except HttpResponseError as e:
+ print(e)
+ ```
+
+### Full code
+
+```python
+from azure.purview.scanning import PurviewScanningClient
+from azure.identity import ClientSecretCredential
+from azure.core.exceptions import HttpResponseError
++
+client_id = "<your client id>"
+client_secret = "<your client secret>"
+tenant_id = "<your tenant id>"
+reference_name_purview = "<name of your Microsoft Purview account>"
+ds_name = "<name of the registered data source you want to delete>"
+
+def get_credentials():
+ credentials = ClientSecretCredential(client_id=client_id, client_secret=client_secret, tenant_id=tenant_id)
+ return credentials
+
+def get_scanning_client():
+ credentials = get_credentials()
+ client = PurviewScanningClient(endpoint=f"https://{reference_name_purview}.scan.purview.azure.com", credential=credentials, logging_enable=True)
+ return client
+
+try:
+ client_scanning = get_scanning_client()
+except ValueError as e:
+ print(e)
+
+try:
+ response = client_scanning.data_sources.delete(ds_name)
+ print(response)
+ print(f"Data source {ds_name} successfully deleted")
+except HttpResponseError as e:
+ print(e)
+```
+
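+If you want to confirm the deletion, you can try to fetch the data source again. This is a hedged sketch: it assumes a `get` operation on `data_sources` and that the service reports a missing source as a `ResourceNotFoundError`, and it reuses `client_scanning` and `ds_name` from the script above:
+
+```python
+from azure.core.exceptions import ResourceNotFoundError
+
+# Fetching a deleted source should fail with a "not found" error.
+try:
+    client_scanning.data_sources.get(ds_name)
+    print(f"Data source {ds_name} still exists")
+except ResourceNotFoundError:
+    print(f"Data source {ds_name} was deleted")
+except HttpResponseError as e:
+    print(e)
+```
+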
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about the Python Microsoft Purview Scanning Client](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-purview-scanning/1.0.0b2/index.html)
+> [Learn more about the Python Microsoft Purview Catalog Client](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-purview-catalog/1.0.0b2/index.html)
+
remote-rendering Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/blob-storage.md
Details about SAS can be found at the [SAS documentation](../../../storage/commo
A SAS URI can be generated using one of: -- az PowerShell module
+- Az PowerShell module
- see the [example PowerShell scripts](../../samples/powershell-example-scripts.md) - [az command line](/cli/azure/install-azure-cli) - [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)
An example of using Shared Access Signatures in asset conversion is shown in Con
To start converting a model, you need to upload it, using one of the following options: -- [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) - a convenient UI to upload/download/manage files on azure blob storage
+- [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) - a convenient UI to upload/download/manage files on Azure blob storage
- [Azure command line](../../../storage/blobs/storage-quickstart-blobs-cli.md) - [Azure PowerShell module](/powershell/azure/install-az-ps) - see the [Example PowerShell scripts](../../samples/powershell-example-scripts.md) - [Using a storage SDK (Python, C# ... )](../../../storage/index.yml) - [Using the Azure Storage REST APIs](/rest/api/storageservices/blob-service-rest-api)
+- [Using the Azure Remote Rendering Toolkit (ARRT)](../../samples/azure-remote-rendering-asset-tool.md)
For an example of how to upload data for conversion refer to Conversion.ps1 of the [PowerShell Example Scripts](../../samples/powershell-example-scripts.md#script-conversionps1).
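+
+If you choose the Python storage SDK option listed above, the following minimal sketch uploads an input model and builds a read-only SAS URI for it. The connection string, account key, container name `arrinput`, file name `model.fbx`, and the 24-hour expiry are illustrative assumptions; substitute your own values.
+
+```python
+from datetime import datetime, timedelta
+from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas
+
+# Illustrative values; replace them with your own storage account details.
+connection_string = "<storage account connection string>"
+account_key = "<storage account key>"
+container_name = "arrinput"   # assumed input container
+blob_name = "model.fbx"       # keep names short to avoid path length issues
+
+service = BlobServiceClient.from_connection_string(connection_string)
+blob_client = service.get_blob_client(container=container_name, blob=blob_name)
+
+# Upload the input model.
+with open("model.fbx", "rb") as data:
+    blob_client.upload_blob(data, overwrite=True)
+
+# Generate a read-only SAS that is valid for 24 hours and append it to the blob URL.
+sas = generate_blob_sas(
+    account_name=service.account_name,
+    container_name=container_name,
+    blob_name=blob_name,
+    account_key=account_key,
+    permission=BlobSasPermissions(read=True),
+    expiry=datetime.utcnow() + timedelta(hours=24),
+)
+print(f"{blob_client.url}?{sas}")
+```
+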
-> [!Note]
+> [!NOTE]
+>
> When uploading an input model take care to avoid long file names and/or folder structures in order to avoid [Windows path length limit](/windows/win32/fileio/maximum-file-path-limitation) issues on the service. ## Get a SAS URI for the converted model
remote-rendering Azure Remote Rendering Asset Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/samples/azure-remote-rendering-asset-tool.md
Title: Azure Remote Rendering Asset Tool
-description: Learn about the Azure Remote Rendering Asset Tool (ARRT) which is an open-source desktop application developed in C++/Qt.
-- Previously updated : 06/09/2020
+ Title: Azure Remote Rendering Toolkit
+description: Learn about the Azure Remote Rendering Toolkit (ARRT) which is an open-source desktop application developed in C++/Qt.
++ Last updated : 05/27/2022
-# Azure Remote Rendering Asset Tool (ARRT)
+# Azure Remote Rendering Toolkit (ARRT)
-Azure Remote Rendering Asset Tool (ARRT) is an open-source desktop application developed in C++/Qt.
+Azure Remote Rendering Toolkit (ARRT) is an open-source desktop application developed in C++/Qt that helps you get started with Azure Remote Rendering. It also serves as a sample application that shows how to integrate Remote Rendering into your own product.
![ARRT](./media/azure-remote-rendering-asset-tool.png "ARRT screenshot")
Azure Remote Rendering Asset Tool (ARRT) is an open-source desktop application d
The application can be used to:
-* Upload a 3D model
-* Control the model conversion
-* Create and manage a remote rendering session
-* Load a 3D model
-* Preview the 3D model
-* Modify its materials
+* Upload files into an Azure Storage account.
+* Convert 3D models for ARR.
+* Start a new ARR session or connect to an existing one.
+* Render converted 3D models using ARR.
+* See basic performance numbers.
-You can also use it as an open-source sample to learn how to implement a front end for the Azure Remote Rendering C++ SDK, using the Azure Storage Client Library for managing the 3D model conversion.
+## GitHub repository
-## Source repository
-
-You can find the source code and the documentation on the [GitHub ARRT repository](https://github.com/Azure/azure-remote-rendering-asset-tool).
+The [ARRT GitHub repository](https://github.com/Azure/azure-remote-rendering-asset-tool) contains full C++ source, documentation and [pre-built binaries](https://github.com/Azure/azure-remote-rendering-asset-tool/releases).
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
You might find some variation in maximum limits if your service happens to be pr
## Document limits
-As of October 2018, there are no longer any document count limits for any new service created at any billable tier (Basic, S1, S2, S3, S3 HD) in any region. Older services created prior to October 2018 may still be subject to document count limits.
-
-To determine whether your service has document limits, use the [GET Service Statistics REST API](/rest/api/searchservice/get-service-statistics). Document limits are reflected in the response, with `null` indicating no limits.
-
-> [!NOTE]
-> Although there are no document limits imposed by the service, there is a shard limit of approximately 24 billion documents per index on Basic, S1, S2, and S3 search services. For S3 HD, the shard limit is 2 billion documents per index. Each element of a complex collection counts as a separate document in terms of shard limits.
+There are no longer any document limits per service in Azure Cognitive Search. However, there is a limit of approximately 24 billion documents per index on Basic, S1, S2, and S3 search services. For S3 HD, the limit is 2 billion documents per index. Each element of a complex collection counts as a separate document in terms of these limits.
### Document size limits per API call
service-connector Quickstart Portal Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-container-apps.md
Get started with Service Connector by using the Azure portal to create a new ser
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- An application deployed to Container Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one yet, [create and deploy a container to Container Apps](/container-apps/quickstart-portal).
+- An application deployed to Container Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one yet, [create and deploy a container to Container Apps](/azure/container-apps/quickstart-portal).
## Sign in to Azure
service-connector Tutorial Connect Web App App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-connect-web-app-app-configuration.md
Service Connector manages the connection configuration for you:
Service Connector manages the connection configuration for you: -- Set up the web app's `AZURE_APPCONFIGURATION_CONNECTIONSTRING` to let the application access it and get the App Configuration connection string. Access [sample code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/connection-string/Microsoft.Azure.ServiceConnector.Sample/Program.cs#L9-L12).-- Activate the web app's system-assigned managed authentication and grant App Configuration a Data Reader role to let the application authenticate to the App Configuration using DefaultAzureCredential from Azure.Identity. Access [sample code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/connection-string/Microsoft.Azure.ServiceConnector.Sample/Program.cs#L43).
+- Set up the web app's `AZURE_APPCONFIGURATION_CONNECTIONSTRING` to let the application access it and get the App Configuration connection string. Access [sample code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/connection-string/ServiceConnectorSample/Program.cs#L9-L12).
+- Activate the web app's system-assigned managed authentication and grant App Configuration a Data Reader role to let the application authenticate to the App Configuration using DefaultAzureCredential from Azure.Identity. Access [sample code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/connection-string/ServiceConnectorSample/Program.cs#L43).
site-recovery Hyper V Azure Powershell Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-powershell-resource-manager.md
Last updated 01/10/2020 -
+ms.tool: azure-powershell
# Set up disaster recovery to Azure for Hyper-V VMs using PowerShell and Azure Resource Manager
static-web-apps Add Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/add-mongoose.md
Last updated 01/25/2021
# Tutorial: Access data in Cosmos DB using Mongoose with Azure Static Web Apps
-[Mongoose](https://mongoosejs.com/) is the most popular ODM (Object Document Mapping) client for Node.js. Allowing you to design a data structure and enforce validation, Mongoose provides all the tooling necessary to interact with databases that support the Mongoose API. [Cosmos DB](../cosmos-db/mongodb-introduction.md) supports the necessary Mongoose APIs and is available as a back-end server option on Azure.
+[Mongoose](https://mongoosejs.com/) is the most popular ODM (Object Document Mapping) client for Node.js. Allowing you to design a data structure and enforce validation, Mongoose provides all the tooling necessary to interact with databases that support the MongoDB API. [Cosmos DB](../cosmos-db/mongodb-introduction.md) supports the necessary MongoDB APIs and is available as a back-end server option on Azure.
In this tutorial, you learn how to:
Sign in to the [Azure portal](https://portal.azure.com).
Begin by creating a [Cosmos DB serverless](../cosmos-db/serverless.md) account. By using a serverless account, you only pay for the resources as they are used and avoid needing to create a full infrastructure. 1. Navigate to [https://portal.azure.com](https://portal.azure.com)
-2. Click **Create a resource**
+2. Select **Create a resource**
3. Enter **Azure Cosmos DB** in the search box
-4. Click **Azure Cosmos DB**
-5. Click **Create**
+4. Select **Azure Cosmos DB**
+5. Select **Create**
6. If prompted, under **Azure Cosmos DB API for MongoDB** select **Create** 7. Configure your Azure Cosmos DB Account with the following information - Subscription: Choose the subscription you wish to use
- - Resource: Click **Create new**, and set the name to **aswa-mongoose**
+ - Resource: Select **Create new**, and set the name to **aswa-mongoose**
- Account name: A unique value is required - Location: **West US 2** - Capacity mode: **Serverless (preview)** - Version: **4.0** :::image type="content" source="media/add-mongoose/cosmos-db.png" alt-text="Create new Cosmos DB instance":::
-8. Click **Review + create**
-9. Click **Create**
+8. Select **Review + create**
+9. Select **Create**
The creation process will take a few minutes. Later steps will return to the database to gather the connection string.
This tutorial uses a GitHub template repository to help you create your applicat
1. Navigate to the [starter template](https://github.com/login?return_to=/staticwebdev/mongoose-starter/generate) 2. Choose the **owner** (if using an organization other than your main account) 3. Name your repository **aswa-mongoose-tutorial**
-4. Click **Create repository from template**
+4. Select **Create repository from template**
5. Return to the [Azure portal](https://portal.azure.com)
-6. Click **Create a resource**
+6. Select **Create a resource**
7. Type **static web app** in the search box 8. Select **Static Web App**
-9. Click **Create**
+9. Select **Create**
10. Configure your Azure Static Web App with the following information - Subscription: Choose the same subscription as before - Resource group: Select **aswa-mongoose** - Name: **aswa-mongoose-tutorial** - Region: **West US 2**
- - Click **Sign in with GitHub**
- - Click **Authorize** if prompted to allow Azure Static Web Apps to create the GitHub Action to enable deployment
+ - Select **Sign in with GitHub**
+ - Select **Authorize** if prompted to allow Azure Static Web Apps to create the GitHub Action to enable deployment
- Organization: Your GitHub account name - Repository: **aswa-mongoose-tutorial** - Branch: **main**
- - Build presets: Choose **Custom**
- - App location: **/public**
+ - Build presets: Choose **React**
+ - App location: **/**
- Api location: **api**
- - Output location: *leave blank*
+ - Output location: **build**
:::image type="content" source="media/add-mongoose/azure-static-web-apps.png" alt-text="Completed Azure Static Web Apps form":::
-11. Click **Review and create**
-12. Click **Create**
-13. The creation process takes a few moments; click on **Go to resource** once the static web app is provisioned
+11. Select **Review and create**
+12. Select **Create**
+13. The creation process takes a few moments; select **Go to resource** once the static web app is provisioned
## Configure database connection string In order to allow the web app to communicate with the database, the database connection string is stored as an [Application Setting](application-settings.md). Setting values are accessible in Node.js using the `process.env` object.
-1. Click **Home** in the upper left corner of the Azure portal (or navigate back to [https://portal.azure.com](https://portal.azure.com))
-2. Click **Resource groups**
-3. Click **aswa-mongoose**
-4. Click the name of your database account - it will have a type of **Azure Cosmos DB API for Mongo DB**
-5. Under **Settings** click **Connection String**
+1. Select **Home** in the upper left corner of the Azure portal (or navigate back to [https://portal.azure.com](https://portal.azure.com))
+2. Select **Resource groups**
+3. Select **aswa-mongoose**
+4. Select the name of your database account - it will have a type of **Azure Cosmos DB API for Mongo DB**
+5. Under **Settings** select **Connection String**
6. Copy the connection string listed under **PRIMARY CONNECTION STRING**
-7. In the breadcrumbs, click **aswa-mongoose**
-8. Click **aswa-mongoose-tutorial** to return to the website instance
-9. Under **Settings** click **Configuration**
-10. Click **Add** and create a new Application Setting with the following values
- - Name: **CONNECTION_STRING**
- - Value: Paste the connection string you copied earlier
-11. Click **OK**
-12. Click **Save**
+7. In the breadcrumbs, select **aswa-mongoose**
+8. Select **aswa-mongoose-tutorial** to return to the website instance
+9. Under **Settings** select **Configuration**
+10. Select **Add** and create a new Application Setting with the following values
+ - Name: **AZURE_COSMOS_CONNECTION_STRING**
+ - Value: \<Paste the connection string you copied earlier\>
+11. Select **OK**
+12. Select **Add** and create a new Application Setting with the following values for name of the database
+ - Name: **AZURE_COSMOS_DATABASE_NAME**
+ - Value: **todo**
+13. Select **Save**
## Navigate to your site You can now explore the static web app.
-1. Click **Overview**
-1. Click the URL displayed in the upper right
+1. Select **Overview**
+1. Select the URL displayed in the upper right
1. It will look similar to `https://calm-pond-05fcdb.azurestaticapps.net`
-1. Click **Please login to see your list of tasks**
-1. Click **Grant consent** to access the application
-1. Create a new task by typing in a title and clicking **Add task**
+1. Select **Please login to see your list of tasks**
+1. Select **Grant consent** to access the application
+1. Create a new list by typing a name into the textbox labeled **create new list** and selecting **Save**
+1. Create a new task by typing a title into the textbox labeled **create new item** and selecting **Save**
1. Confirm the task is displayed (it may take a moment)
-1. Mark the task as complete by **clicking the checkbox**
+1. Mark the task as complete by **selecting the check**; the task will be moved to the **Done items** section of the page
1. **Refresh the page** to confirm a database is being used ## Clean up resources
If you're not going to continue to use this application, delete
the resource group with the following steps: 1. Return to the [Azure portal](https://portal.azure.com)
-2. Click **Resource groups**
-3. Click **aswa-mongoose**
-4. Click **Delete resource group**
+2. Select **Resource groups**
+3. Select **aswa-mongoose**
+4. Select **Delete resource group**
5. Type **aswa-mongoose** into the textbox
-6. Click **Delete**
+6. Select **Delete**
## Next steps
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-powershell.md
Previously updated : 02/10/2021 Last updated : 05/12/2022
For details about the permissions required for each Azure Storage operation on a
## Call PowerShell commands using Azure AD credentials - To use Azure PowerShell to sign in and run subsequent operations against Azure Storage using Azure AD credentials, create a storage context to reference the storage account, and include the `-UseConnectedAccount` parameter. The following example shows how to create a container in a new storage account from Azure PowerShell using your Azure AD credentials. Remember to replace placeholder values in angle brackets with your own values:
storage Storage Blob Container Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md
The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets
To delete a container in JavaScript, use one of the following methods: -- BlobServiceClient.[deleteContainer](/javascript/api/@azure/storage-blob/blobserviceclien#@azure-storage-blob-blobserviceclient-deletecontainer)-- ContainerClient.[delete](/javascript/api/@azure/storage-blob/containerclien#@azure-storage-blob-containerclient-delete)-- ContainerClient.[deleteIfExists](/javascript/api/@azure/storage-blob/containerclien#@azure-storage-blob-containerclient-deleteifexists)
+- BlobServiceClient.[deleteContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-deletecontainer)
+- ContainerClient.[delete](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-delete)
+- ContainerClient.[deleteIfExists](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-deleteifexists)
After you delete a container, you can't create a container with the same name for at *least* 30 seconds. Attempting to create a container with the same name will fail with HTTP error code 409 (Conflict). Any other operations on the container or the blobs it contains will fail with HTTP error code 404 (Not Found).
async function deleteContainersWithPrefix(blobServiceClient, blobNamePrefix){
When container soft delete is enabled for a storage account, a container and its contents may be recovered after it has been deleted, within a retention period that you specify. You can restore a soft deleted container by calling. -- BlobServiceClient.[undeleteContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-undeletecontainer)
+- BlobServiceClient.[undeleteContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-undeletecontainer)
The following example finds a deleted container, gets the version ID of that deleted container, and then passes that ID into the **undeleteContainer** method to restore the container.
storage Storage Ref Azcopy Bench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-bench.md
description: This article provides reference information for the azcopy bench co
Previously updated : 07/24/2020 Last updated : 05/26/2022
-# azcopy benchmark
+# azcopy bench
-Runs a performance benchmark by uploading or downloading test data to or from a specified destination.
-For uploads, the test data is automatically generated.
+Runs a performance benchmark by uploading or downloading test data to or from a specified destination. For uploads, the test data is automatically generated.
The benchmark command runs the same process as 'copy', except that:
- - Instead of requiring both source and destination parameters, benchmark takes just one. This is the
- blob container, Azure Files Share, or Azure Data Lake Storage Gen2 file system that you want to upload to or download from.
+- Instead of requiring both source and destination parameters, benchmark takes just one. This is the blob container, Azure Files Share, or Azure Data Lake Storage Gen2 file system that you want to upload to or download from.
- - The 'mode' parameter describes whether AzCopy should test uploads to or downloads from given target. Valid values are 'Upload'
+- The 'mode' parameter describes whether AzCopy should test uploads to or downloads from the given target. Valid values are 'Upload'
and 'Download'. Default value is 'Upload'.
- - For upload benchmarks, the payload is described by command-line parameters, which control how many files are autogenerated and
- how significant the files are. The generation process takes place entirely in memory. Disk is not used.
+- For upload benchmarks, the payload is described by command line parameters, which control how many files are auto-generated and
+ how big they are. The generation process takes place entirely in memory. Disk isn't used.
- - For downloads, the payload consists of whichever files already exist at the source. (See example below about how to generate
+- For downloads, the payload consists of whichever files already exist at the source. (See example below about how to generate
test files if needed).
+
+- Only a few of the optional parameters that are available to the copy command are supported.
+
+- Additional diagnostics are measured and reported.
+
+- For uploads, the default behavior is to delete the transferred data at the end of the test run. For downloads, the data is never actually saved locally.
- - Only a few of the optional parameters that are available to the copy command are supported.
-
- - Additional diagnostics are measured and reported.
-
- - For uploads, the default behavior is to delete the transferred data at the end of the test run. For downloads, the data
- is never saved locally.
-
-Benchmark mode will automatically tune itself to the number of parallel TCP connections that gives
-the maximum throughput. It will display that number at the end. To prevent autotuning, set the
-AZCOPY_CONCURRENCY_VALUE environment variable to a specific number of connections.
+Benchmark mode will automatically tune itself to the number of parallel TCP connections that gives the maximum throughput. It will display that number at the end. To prevent auto-tuning, set the AZCOPY_CONCURRENCY_VALUE environment variable to a specific number of connections.
All the usual authentication types are supported. However, the most convenient approach for benchmarking upload is typically to create an empty container with a SAS token and use SAS authentication. (Download mode requires a set of test data to be present in the target container.)-
-## Related conceptual articles
--- [Get started with AzCopy](storage-use-azcopy-v10.md)-- [Optimize the performance of AzCopy v10 with Azure Storage](storage-use-azcopy-optimize.md)-
-## Examples
-
+
```azcopy
-azcopy benchmark [destination] [flags]
+azcopy bench [destination] [flags]
```
-Run a benchmark test with default parameters (suitable for benchmarking networks up to 1 Gbps):'
+## Examples
-```azcopy
-azcopy bench "https://[account].blob.core.windows.net/[container]?<SAS>"
-```
+Run an upload benchmark with default parameters (suitable for benchmarking networks up to 1 Gbps):
-Run a benchmark test that uploads 100 files, each 2 GiB in size: (suitable for benchmarking on a fast network, for example, 10 Gbps):'
+`azcopy bench "https://[account].blob.core.windows.net/[container]?<SAS>"`
-```azcopy
-azcopy bench "https://[account].blob.core.windows.net/[container]?<SAS>"--file-count 100 --size-per-file 2G
-```
+Run a benchmark test that uploads 100 files, each 2 GiB in size (suitable for benchmarking on a fast network, for example, 10 Gbps):
-Run a benchmark test but use 50,000 files, each 8 MiB in size and compute their MD5 hashes (in the same way that the `--put-md5` flag does this
-in the copy command). The purpose of `--put-md5` when benchmarking is to test whether MD5 computation affects throughput for the
-selected file count and size:
+`azcopy bench "https://[account].blob.core.windows.net/[container]?<SAS>" --file-count 100 --size-per-file 2G`
-```azcopy
-azcopy bench --mode='Upload' "https://[account].blob.core.windows.net/[container]?<SAS>" --file-count 50000 --size-per-file 8M --put-md5
-```
+Same as above, but use 50,000 files, each 8 MiB in size and compute their MD5 hashes (in the same way that the --put-md5 flag does this
+in the copy command). The purpose of --put-md5 when benchmarking is to test whether MD5 computation affects throughput for the selected file count and size:
+
+`azcopy bench --mode='Upload' "https://[account].blob.core.windows.net/[container]?<SAS>" --file-count 50000 --size-per-file 8M --put-md5`
Run a benchmark test that downloads existing files from a target
-```azcopy
-azcopy bench --mode='Download' "https://[account].blob.core.windows.net/[container]?<SAS?"
-```
+`azcopy bench --mode='Download' "https://[account].blob.core.windows.net/[container]?<SAS>"`
-Run an upload that does not delete the transferred files. (These files can then serve as the payload for a download test)
+Run an upload that doesn't delete the transferred files. (These files can then serve as the payload for a download test)
-```azcopy
-azcopy bench "https://[account].blob.core.windows.net/[container]?<SAS>" --file-count 100 --delete-test-data=false
-```
+`azcopy bench "https://[account].blob.core.windows.net/[container]?<SAS>" --file-count 100 --delete-test-data=false`
## Options
-**--blob-type** string Defines the type of blob at the destination. Used to allow benchmarking different blob types. Identical to the same-named parameter in the copy command (default "Detect").
+`--blob-type string` defines the type of blob at the destination. Used to allow benchmarking different blob types. Identical to the same-named parameter in the copy command (default "Detect")
-**--block-size-mb** float Use this block size (specified in MiB). Default is automatically calculated based on file size. Decimal fractions are allowed - for example, 0.25. Identical to the same-named parameter in the copy command.
+`--block-size-mb float` Use this block size (specified in MiB). Default is automatically calculated based on file size. Decimal fractions are allowed - for example, 0.25. Identical to the same-named parameter in the copy command
-**--check-length** Check the length of a file on the destination after the transfer. If there is a mismatch between source and destination, the transfer is marked as failed. (default true)
+`--check-length` Check the length of a file on the destination after the transfer. If there's a mismatch between source and destination, the transfer is marked as failed. (default true)
-**--delete-test-data** If true, the benchmark data will be deleted at the end of the benchmark run. Set it to false if you want to keep the data at the destination - for example, to use it for manual tests outside benchmark mode (default true).
+`--delete-test-data` If true, the benchmark data will be deleted at the end of the benchmark run. Set it to false if you want to keep the data at the destination - for example, to use it for manual tests outside benchmark mode (default true)
-**--file-count** uint. The number of autogenerated data files to use (default 100).
+`--file-count` (uint) number of auto-generated data files to use (default 100)
-**--help** Help for bench
+`-h`, `--help` help for bench
-**--log-level** string Define the log verbosity for the log file, available levels: INFO(all requests/responses), WARNING(slow responses), ERROR(only failed requests), and NONE(no output logs). (default "INFO")
+`--log-level` (string) define the log verbosity for the log file, available levels: INFO(all requests/responses), WARNING(slow responses), ERROR(only failed requests), and NONE(no output logs). (default "INFO")
-**--mode** string Defines if Azcopy should test uploads or downloads from this target. Valid values are 'upload' and 'download'. Defaulted option is 'upload'. (default 'upload')
+`--mode` (string) Defines if Azcopy should test uploads or downloads from this target. Valid values are 'upload' and 'download'. Defaulted option is 'upload'. (default "upload")
-**--number-of-folders** uint If larger than 0, create folders to divide up the data.
+`--number-of-folders` (uint) If larger than 0, create folders to divide up the data.
-**--put-md5** Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob/file. (By default the hash is NOT created.) Identical to the same-named parameter in the copy command.
+`--put-md5` Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob/file. (By default the hash is NOT created.) Identical to the same-named parameter in the copy command
-**--size-per-file** string Size of each autogenerated data file. Must be a number immediately followed by K, M, or G. E.g. 12k or 200G (default "250M").
+`--size-per-file` (string) Size of each auto-generated data file. Must be a number immediately followed by K, M or G. E.g. 12k or 200G (default "250M")
## Options inherited from parent commands
-**--cap-mbps float** Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it's omitted, the throughput isn't capped.
-**--output-type** string Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text").
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
-**--trusted-microsoft-suffixes** string Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-configuration-settings.md
description: This article provides reference information for AzCopy V10 configur
Previously updated : 04/02/2021 Last updated : 05/26/2022
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
description: This article provides reference information for the azcopy copy com
Previously updated : 09/01/2021 Last updated : 05/26/2022
Copies source data to a destination location.
Copies source data to a destination location. The supported directions are:
- - local <-> Azure Blob (SAS or OAuth authentication)
- - local <-> Azure Files (Share/directory SAS authentication)
- - local <-> Azure Data Lake Storage Gen 2 (SAS, OAuth, or shared key authentication)
- - Azure Blob (SAS or public) -> Azure Blob (SAS or OAuth authentication)
- - Azure Blob (SAS or public) -> Azure Files (SAS)
- - Azure Files (SAS) -> Azure Files (SAS)
- - Azure Files (SAS) -> Azure Blob (SAS or OAuth authentication)
- - Amazon Web Services (AWS) S3 (Access Key) -> Azure Block Blob (SAS or OAuth authentication)
- - Google Cloud Storage (Service Account Key) -> Azure Block Blob (SAS or OAuth authentication) [Preview]
+- local <-> Azure Blob (SAS or OAuth authentication)
+- local <-> Azure Files (Share/directory SAS authentication)
+- local <-> Azure Data Lake Storage Gen2 (SAS, OAuth, or SharedKey authentication)
+- Azure Blob (SAS or public) -> Azure Blob (SAS or OAuth authentication)
+- Azure Blob (SAS or public) -> Azure Files (SAS)
+- Azure Files (SAS) -> Azure Files (SAS)
+- Azure Files (SAS) -> Azure Blob (SAS or OAuth authentication)
+- AWS S3 (Access Key) -> Azure Block Blob (SAS or OAuth authentication)
+- Google Cloud Storage (Service Account Key) -> Azure Block Blob (SAS or OAuth authentication)
-For more information, see the examples section of this article.
+Refer to the examples for more information.
-## Related conceptual articles
+### Advanced
-- [Get started with AzCopy](storage-use-azcopy-v10.md)-- [Tutorial: Migrate on-premises data to cloud storage with AzCopy](storage-use-azcopy-migrate-on-premises-data.md)-- [Transfer data with AzCopy and Blob storage](./storage-use-azcopy-v10.md#transfer-data)-- [Transfer data with AzCopy and file storage](storage-use-azcopy-files.md)
+AzCopy automatically detects the content type of the files when uploading from the local disk, based on the file extension or content (if no extension is specified).
-## Advanced
-
-AzCopy automatically detects the content type of the files based on the file extension or content (if no extension is specified) when you upload them from the local disk.
-
-The built-in lookup table is small, but on Unix, it is augmented by the local system's `mime.types` file(s) if they are available under one or more of these names:
+The built-in lookup table is small, but on Unix, it's augmented by the local system's mime.types file(s) if available under one or more of these names:
- /etc/mime.types - /etc/apache2/mime.types - /etc/apache/mime.types
-On Windows, MIME types are extracted from the registry. This feature can be turned off with the help of a flag. For more information, see the flag section of this article.
+On Windows, MIME types are extracted from the registry. This feature can be turned off with the help of a flag. Refer to the flag section.
-If you set an environment variable by using the command line, that variable will be readable in your command-line history. Consider clearing variables that contain credentials from your command-line history. To keep variables from appearing in your history, you can use a script to prompt the user for their credentials, and to set the environment variable.
+If you set an environment variable by using the command line, that variable will be readable in your command line history. Consider clearing variables that contain credentials from your command line history. To keep variables from appearing in your history, you can use a script to prompt the user for their credentials, and to set the environment variable.
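+
+For instance, the following minimal Python sketch prompts for the AWS credentials that AzCopy reads for an S3 source and sets them only for the AzCopy child process, so they never appear in your shell history. It assumes `azcopy` is on your PATH; the bucket and destination URLs are placeholders.
+
+```python
+import getpass
+import os
+import subprocess
+
+# Prompt for credentials instead of typing them on the command line.
+env = os.environ.copy()
+env["AWS_ACCESS_KEY_ID"] = input("AWS access key ID: ")
+env["AWS_SECRET_ACCESS_KEY"] = getpass.getpass("AWS secret access key: ")
+
+# Run AzCopy with the variables set only for this child process.
+subprocess.run(
+    [
+        "azcopy", "cp",
+        "https://s3.amazonaws.com/[bucket]/[object]",
+        "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]",
+    ],
+    env=env,
+    check=True,
+)
+```
+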
-```
+```azcopy
azcopy copy [source] [destination] [flags] ``` ## Examples
-Upload a single file by using OAuth authentication. If you have not yet logged into AzCopy, run the `azcopy login` command before you run the following command.
+Upload a single file by using OAuth authentication. If you haven't yet logged into AzCopy, please run the azcopy login command before you run the following command.
-```azcopy
-azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]"
-```
+`azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]"`
-Same as above, but this time, also compute MD5 hash of the file content and save it as the blob's Content-MD5 property:
+Same as above, but this time also compute MD5 hash of the file content and save it as the blob's Content-MD5 property:
-```azcopy
-azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --put-md5
-```
+`azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --put-md5`
Upload a single file by using a SAS token:
-```azcopy
-azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"
-```
+`azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"`
Upload a single file by using a SAS token and piping (block blobs only):
+
+`cat "/path/to/file.txt" | azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" --from-to PipeBlob`
-```azcopy
-cat "/path/to/file.txt" | azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]
-```
+Upload a single file by using OAuth and piping (block blobs only):
-Upload an entire directory by using a SAS token:
+`cat "/path/to/file.txt" | azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --from-to PipeBlob`
-```azcopy
-azcopy cp "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive
-```
+Upload an entire directory by using a SAS token:
+
+`azcopy cp "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true`
or
-```azcopy
-azcopy cp "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive --put-md5
-```
+`azcopy cp "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true --put-md5`
Upload a set of files by using a SAS token and wildcard (*) characters:
-```azcopy
-azcopy cp "/path/*foo/*bar/*.pdf" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]"
-```
+`azcopy cp "/path/*foo/*bar/*.pdf" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]"`
Upload files and directories by using a SAS token and wildcard (*) characters:
-```azcopy
-azcopy cp "/path/*foo/*bar*" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive
-```
+`azcopy cp "/path/*foo/*bar*" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true`
Upload files and directories to Azure Storage account and set the query-string encoded tags on the blob. -- To set tags {key = "bla bla", val = "foo"} and {key = "bla bla 2", val = "bar"}, use the following syntax : `azcopy cp "/path/*foo/*bar*" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --blob-tags="bla%20bla=foo&bla%20bla%202=bar"`-
+- To set tags {key = "bla bla", val = "foo"} and {key = "bla bla 2", val = "bar"}, use the following syntax:
+- `azcopy cp "/path/*foo/*bar*" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --blob-tags="bla%20bla=foo&bla%20bla%202=bar"`
- Keys and values are URL encoded and the key-value pairs are separated by an ampersand('&')- - While setting tags on the blobs, there are additional permissions('t' for tags) in SAS without which the service will give authorization error back.
-Download a single file by using OAuth authentication. If you have not yet logged into AzCopy, run the `azcopy login` command before you run the following command.
+Download a single file by using OAuth authentication. If you haven't yet logged into AzCopy, please run the azcopy login command before you run the following command.
-```azcopy
-azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" "/path/to/file.txt"
-```
+`azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" "/path/to/file.txt"`
Download a single file by using a SAS token:
-```azcopy
-azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "/path/to/file.txt"
-```
+`azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "/path/to/file.txt"`
Download a single file by using a SAS token and then piping the output to a file (block blobs only):
+
+`azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" --from-to BlobPipe > "/path/to/file.txt"`
-```azcopy
-azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" > "/path/to/file.txt"
-```
+Download a single file by using OAuth and then piping the output to a file (block blobs only):
+
+`azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --from-to BlobPipe > "/path/to/file.txt"`
Download an entire directory by using a SAS token:-
-```azcopy
-azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" "/path/to/dir" --recursive
-```
+
+`azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" "/path/to/dir" --recursive=true`
A note about using a wildcard character (*) in URLs: There's only two supported ways to use a wildcard character in a URL. -- You can use one just after the final forward slash (/) of a URL. This use of the wildcard character copies all of the files in a directory directly to the destination without placing them into a subdirectory.
+- You can use one just after the final forward slash (/) of a URL. This copies all of the files in a directory directly to the destination without placing them into a subdirectory.
-- You can also use a wildcard character in the name of a container as long as the URL refers only to a container and not to a blob. You can use this approach to obtain files from a subset of containers.
+- You can also use one in the name of a container as long as the URL refers only to a container and not to a blob. You can use this approach to obtain files from a subset of containers.
Download the contents of a directory without copying the containing directory itself.
-```azcopy
-azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/folder]/*?[SAS]" "/path/to/dir"
-```
+`azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/folder]/*?[SAS]" "/path/to/dir"`
Download an entire storage account.
-```azcopy
-azcopy cp "https://[srcaccount].blob.core.windows.net/" "/path/to/dir" --recursive
-```
+`azcopy cp "https://[srcaccount].blob.core.windows.net/" "/path/to/dir" --recursive`
Download a subset of containers within a storage account by using a wildcard symbol (*) in the container name.
-```azcopy
-azcopy cp "https://[srcaccount].blob.core.windows.net/[container*name]" "/path/to/dir" --recursive
-```
+`azcopy cp "https://[srcaccount].blob.core.windows.net/[container*name]" "/path/to/dir" --recursive`
+
+Download all the versions of a blob from Azure Storage to a local directory. Ensure that the source is a valid blob, the destination is a local folder, and that `versionidsFile` is the path to a file that lists one version ID per line. All the specified versions will be downloaded to the specified destination folder.
+
+`azcopy cp "https://[srcaccount].blob.core.windows.net/[containername]/[blobname]" "/path/to/dir" --list-of-versions="/another/path/to/dir/[versionidsFile]"`
Copy a single blob to another blob by using a SAS token.
-```azcopy
-azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"
-```
+`azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"`
-Copy a single blob to another blob by using a SAS token and an Auth token. You have to use a SAS token at the end of the source account URL, but the destination account doesn't need one if you log into AzCopy by using the `azcopy login` command.
+Copy a single blob to another blob by using a SAS token and an OAuth token. You have to use a SAS token at the end of the source account URL, but the destination account doesn't need one if you log into AzCopy by using the azcopy login command.
-```azcopy
-azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]"
-```
+`azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]"`
Copy one blob virtual directory to another by using a SAS token:
-```azcopy
-azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true
-```
+`azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true`
Copy all blob containers, directories, and blobs from storage account to another by using a SAS token:
-```azcopy
-azcopy cp "https://[srcaccount].blob.core.windows.net?[SAS]" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive
-```
+`azcopy cp "https://[srcaccount].blob.core.windows.net?[SAS]" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true`
-Copy a single object to Blob Storage from Amazon Web Services (AWS) S3 by using an access key and a SAS token. First, set the environment variable `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for AWS S3 source.
+Copy a single object to Blob Storage from Amazon Web Services (AWS) S3 by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.
+
+`azcopy cp "https://s3.amazonaws.com/[bucket]/[object]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"`
-```azcopy
-azcopy cp "https://s3.amazonaws.com/[bucket]/[object]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"
-```
+Copy an entire directory to Blob Storage from AWS S3 by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.
-Copy an entire directory to Blob Storage from AWS S3 by using an access key and a SAS token. First, set the environment variable `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for AWS S3 source.
+`azcopy cp "https://s3.amazonaws.com/[bucket]/[folder]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true`
-```azcopy
-azcopy cp "https://s3.amazonaws.com/[bucket]/[folder]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive
-```
+Refer to https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html to better understand the [folder] placeholder.
- Refer to https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html to better understand the [folder] placeholder.
+Copy all buckets to Blob Storage from Amazon Web Services (AWS) by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.
-Copy all buckets to Blob Storage from Amazon Web Services (AWS) by using an access key and a SAS token. First, set the environment variable `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for AWS S3 source.
+`azcopy cp "https://s3.amazonaws.com/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true`
-```azcopy
-azcopy cp "https://s3.amazonaws.com/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive
-```
+Copy all buckets to Blob Storage from an Amazon Web Services (AWS) region by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.
-Copy all buckets to Blob Storage from an Amazon Web Services (AWS) region by using an access key and a SAS token. First, set the environment variable `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for AWS S3 source.
+`azcopy cp "https://s3-[region].amazonaws.com/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true`
-```azcopy
-- azcopy cp "https://s3-[region].amazonaws.com/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive
-```
+Copy a subset of buckets by using a wildcard symbol (*) in the bucket name. Like the previous examples, you'll need an access key and a SAS token. Make sure to set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.
-Copy a subset of buckets by using a wildcard symbol (*) in the bucket name. Like the previous examples, you'll need an access key and a SAS token. Make sure to set the environment variable `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for AWS S3 source.
+`azcopy cp "https://s3.amazonaws.com/[bucket*name]/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true`
-```azcopy
-- azcopy cp "https://s3.amazonaws.com/[bucket*name]/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive
-```
+Copy blobs from one blob storage to another and preserve the tags from source. To preserve tags, use the following syntax:
+
+`azcopy cp "https://[account].blob.core.windows.net/[source_container]/[path/to/directory]?[SAS]" "https://[account].blob.core.windows.net/[destination_container]/[path/to/directory]?[SAS]" --s2s-preserve-blob-tags=true`
Transfer files and directories to Azure Storage account and set the given query-string encoded tags on the blob. -- To set tags {key = "bla bla", val = "foo"} and {key = "bla bla 2", val = "bar"}, use the following syntax : `azcopy cp "https://[account].blob.core.windows.net/[source_container]/[path/to/directory]?[SAS]" "https://[account].blob.core.windows.net/[destination_container]/[path/to/directory]?[SAS]" --blob-tags="bla%20bla=foo&bla%20bla%202=bar"`
+- To set tags {key = "bla bla", val = "foo"} and {key = "bla bla 2", val = "bar"}, use the following syntax:
+
+ `azcopy cp "https://[account].blob.core.windows.net/[source_container]/[path/to/directory]?[SAS]" "https://[account].blob.core.windows.net/[destination_container]/[path/to/directory]?[SAS]" --blob-tags="bla%20bla=foo&bla%20bla%202=bar"`
- Keys and values are URL encoded and the key-value pairs are separated by an ampersand('&') - While setting tags on the blobs, there are additional permissions('t' for tags) in SAS without which the service will give authorization error back.
-Copy a single object to Blob Storage from Google Cloud Storage by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for Google Cloud Storage source.
-
-```azcopy
-azcopy cp "https://storage.cloud.google.com/[bucket]/[object]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"
-```
+Copy a single object to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for GCS source.
+
+`azcopy cp "https://storage.cloud.google.com/[bucket]/[object]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"`
-Copy an entire directory to Blob Storage from Google Cloud Storage by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for Google Cloud Storage source.
+Copy an entire directory to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for GCS source.
-```azcopy
- - azcopy cp "https://storage.cloud.google.com/[bucket]/[folder]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true
-```
+`azcopy cp "https://storage.cloud.google.com/[bucket]/[folder]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true`
-Copy an entire bucket to Blob Storage from Google Cloud Storage by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for Google Cloud Storage source.
+Copy an entire bucket to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for GCS source.
-```azcopy
-azcopy cp "https://storage.cloud.google.com/[bucket]" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true
-```
+`azcopy cp "https://storage.cloud.google.com/[bucket]" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true`
-Copy all buckets to Blob Storage from Google Cloud Storage by using a service account key and a SAS token. First, set the environment variables GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT=<`project-id`> for GCS source
+Copy all buckets to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variables GOOGLE_APPLICATION_CREDENTIALS and `GOOGLE_CLOUD_PROJECT=<project-id>` for GCS source
-```azcopy
- - azcopy cp "https://storage.cloud.google.com/" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true
-```
+`azcopy cp "https://storage.cloud.google.com/" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true`
-Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from Google Cloud Storage by using a service account key and a SAS token for destination. First, set the environment variables GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT=<`project-id`> for the Google Cloud Storage source.
+Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from Google Cloud Storage (GCS) by using a service account key and a SAS token for destination. First, set the environment variables `GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT=<project-id>` for GCS source
-```azcopy
-azcopy cp "https://storage.cloud.google.com/[bucket*name]/" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true
-```
+`azcopy cp "https://storage.cloud.google.com/[bucket*name]/" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true`
## Options
-**--backup** Activates Windows' SeBackupPrivilege for uploads, or SeRestorePrivilege for downloads, to allow AzCopy to see and read all files, regardless of their file system permissions, and to restore all permissions. Requires that the account running AzCopy already has these permissions (for example, has Administrator rights or is a member of the `Backup Operators` group). This flag activates privileges that the account already has.
+`--as-subdir` True by default. Places folder sources as subdirectories under the destination. (default true)
+
+`--backup` Activates Windows' SeBackupPrivilege for uploads, or SeRestorePrivilege for downloads, to allow AzCopy to see and read all files, regardless of their file system permissions, and to restore all permissions. Requires that the account running AzCopy already has these permissions (for example, has Administrator rights or is a member of the 'Backup Operators' group). This flag activates privileges that the account already has.
+
+`--blob-tags` (string) Set tags on blobs to categorize data in your storage account
-**--blob-tags** string Set tags on blobs to categorize data in your storage account.
+`--blob-type` (string) Defines the type of blob at the destination. This is used for uploading blobs and when copying between accounts. Valid values include 'Detect', 'BlockBlob', 'PageBlob', and 'AppendBlob'. When copying between accounts, a value of 'Detect' causes AzCopy to use the type of source blob to determine the type of the destination blob. When uploading a file, 'Detect' determines if the file is a VHD or a VHDX file based on the file extension. If the file is either a VHD or VHDX file, AzCopy treats the file as a page blob. (default "Detect")
-**--blob-type** string Defines the type of blob at the destination. This is used for uploading blobs and when copying between accounts (default `Detect`). Valid values include `Detect`, `BlockBlob`, `PageBlob`, and `AppendBlob`. When copying between accounts, a value of `Detect` causes AzCopy to use the type of source blob to determine the type of the destination blob. When uploading a file, `Detect` determines if the file is a VHD or a VHDX file based on the file extension. If the file is ether a VHD or VHDX file, AzCopy treats the file as a page blob. (default "Detect")
+`--block-blob-tier` (string) upload block blob to Azure Storage using this blob tier. (default "None")
-**--block-blob-tier** string Upload block blob to Azure Storage using this blob tier. (default "None")
+`--block-size-mb` (float) Use this block size (specified in MiB) when uploading to Azure Storage, and downloading from Azure Storage. The default value is automatically calculated based on file size. Decimal fractions are allowed (For example: 0.25).
-**--block-size-mb** float Use this block size (specified in MiB) when uploading to Azure Storage, and downloading from Azure Storage. The default value is automatically calculated based on file size. Decimal fractions are allowed (For example: 0.25).
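As a hedged sketch, overriding the automatically calculated block size for a large upload; the 32-MiB value and the URL placeholders are illustrative only:

```azcopy
azcopy copy "largefile.bin" "https://[account].blob.core.windows.net/[container]?[SAS]" --block-size-mb 32
```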
+`--cache-control` (string) Set the cache-control header. Returned on download.
-**--cache-control** string Set the cache-control header. Returned on download.
+`--check-length` Check the length of a file on the destination after the transfer. If there's a mismatch between source and destination, the transfer is marked as failed. (default true)
-**--check-length** Check the length of a file on the destination after the transfer. If there is a mismatch between source and destination, the transfer is marked as failed. (default value is `true`)
+`--check-md5` (string) Specifies how strictly MD5 hashes should be validated when downloading. Only available when downloading. Available options: NoCheck, LogOnly, FailIfDifferent, FailIfDifferentOrMissing. (default 'FailIfDifferent') (default "FailIfDifferent")
-**--check-md5** string Specifies how strictly MD5 hashes should be validated when downloading. Only available when downloading. Available options: `NoCheck`, `LogOnly`, `FailIfDifferent`, `FailIfDifferentOrMissing`. (default `FailIfDifferent`) (default "FailIfDifferent")
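A sketch of pairing `--put-md5` on upload with `--check-md5` on download, so the hash saved at upload time is validated later; the file name and SAS are placeholders:

```azcopy
azcopy copy "myFile.bin" "https://[account].blob.core.windows.net/[container]?[SAS]" --put-md5
azcopy copy "https://[account].blob.core.windows.net/[container]/myFile.bin?[SAS]" "myFile.bin" --check-md5 FailIfDifferentOrMissing
```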
+`--content-disposition` (string) Set the content-disposition header. Returned on download.
-**--content-disposition** string Set the content-disposition header. Returned on download.
+`--content-encoding` (string) Set the content-encoding header. Returned on download.
-**--content-encoding** string Set the content-encoding header. Returned on download.
+`--content-language` (string) Set the content-language header. Returned on download.
-**--content-language** string Set the content-language header. Returned on download.
+`--content-type` (string) Specifies the content type of the file. Implies no-guess-mime-type. Returned on download.
-**--content-type** string Specifies the content type of the file. Implies no-guess-mime-type. Returned on download.
+`--cpk-by-name` (string) Client-provided key by name that gives clients making requests against Azure Blob Storage the option to provide an encryption key on a per-request basis. The provided key name is fetched from Azure Key Vault and used to encrypt the data.
-**--cpk-by-name** string Client provided key by name let clients making requests against Azure Blob Storage an option to provide an encryption key on a per-request basis. Provided key name will be fetched from Azure Key Vault and will be used to encrypt the data.
+`--cpk-by-value` Client-provided key by value that gives clients making requests against Azure Blob Storage the option to provide an encryption key on a per-request basis. The provided key and its hash are fetched from environment variables.
-**--cpk-by-value** Client provided key by name let clients making requests against Azure Blob Storage an option to provide an encryption key on a per-request basis. Provided key and its hash will be fetched from environment variables.
+`--decompress` Automatically decompress files when downloading, if their content-encoding indicates that they're compressed. The supported content-encoding values are 'gzip' and 'deflate'. File extensions of '.gz'/'.gzip' or '.zz' aren't necessary, but will be removed if present.
-**--decompress** Automatically decompress files when downloading, if their content-encoding indicates that they are compressed. The supported content-encoding values are `gzip` and `deflate`. File extensions of `.gz`/`.gzip` or `.zz` aren't necessary, but will be removed if present.
+`--disable-auto-decoding` False by default to enable automatic decoding of illegal chars on Windows. Can be set to true to disable automatic decoding.
-**--dry-run** Prints the file paths that would be copied by this command. This flag does not copy the actual files.
+`--dry-run` Prints the file paths that would be copied by this command. This flag doesn't copy the actual files.
-**--disable-auto-decoding** False by default to enable automatic decoding of illegal chars on Windows. Can be set to `true` to disable automatic decoding.
+`--exclude-attributes` (string) (Windows only) Exclude files whose attributes match the attribute list. For example: A;S;R
-**--exclude-attributes** string (Windows only) Excludes files whose attributes match the attribute list. For example: A;S;R
+`--exclude-blob-type` (string) Optionally specifies the type of blob (BlockBlob/PageBlob/AppendBlob) to exclude when copying blobs from the container or the account. Use of this flag isn't applicable for copying data from a non-Azure service to an Azure service. More than one blob type should be separated by ';'.
-**--exclude-blob-type** string Optionally specifies the type of blob (`BlockBlob`/ `PageBlob`/ `AppendBlob`) to exclude when copying blobs from the container or the account. Use of this flag is not applicable for copying data from non-Azure service to service. More than one blob should be separated by `;`.
+`--exclude-path` (string) Exclude these paths when copying. This option doesn't support wildcard characters (*). Checks relative path prefix (For example: myFolder;myFolder/subDirName/file.pdf). When used in combination with account traversal, paths don't include the container name.
-**--exclude-path** string Exclude these paths when copying. This option does not support wildcard characters (*). Checks relative path prefix(For example: `myFolder;myFolder/subDirName/file.pdf`). When used in combination with account traversal, paths do not include the container name.
+`--exclude-pattern` (string) Exclude these files when copying. This option supports wildcard characters (*)
-**--exclude-pattern** string Exclude these files when copying. This option supports wildcard characters (*).
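As a sketch, a recursive upload that skips temporary files by pattern and one folder by relative path prefix; the directory name, patterns, and URL are placeholders:

```azcopy
azcopy copy "./myDir" "https://[account].blob.core.windows.net/[container]?[SAS]" --recursive --exclude-pattern "*.tmp;*.log" --exclude-path "buildOutput"
```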
+`--exclude-regex` (string) Exclude files whose relative paths match the regular expressions. Separate regular expressions with ';'.
-**--exclude-regex** string Exclude all the relative path of the files that align with regular expressions. Separate regular expressions with ';'.
+`--follow-symlinks` Follow symbolic links when uploading from local file system.
-**--follow-symlinks** Follow symbolic links when uploading from local file system.
+`--force-if-read-only` When overwriting an existing file on Windows or Azure Files, force the overwrite to work even if the existing file has its read-only attribute set
-**--force-if-read-only** When overwriting an existing file on Windows or Azure Files, force the overwrite to work even if the existing file has
-its read-only attribute set.
+`--from-to` (string) Optionally specifies the source-destination combination. For example: LocalBlob, BlobLocal, LocalBlobFS. Piping: BlobPipe, PipeBlob.
-**--from-to** string Optionally specifies the source destination combination. For Example: `LocalBlob`, `BlobLocal`, `LocalBlobFS`.
+`-h`, `--help` help for copy
-**--help** help for copy.
+`--include-after` (string) Include only those files modified on or after the given date/time. The value should be in ISO8601 format. If no timezone is specified, the value is assumed to be in the local timezone of the machine running AzCopy. E.g., `2020-08-19T15:04:00Z` for a UTC time, or `2020-08-19` for midnight (00:00) in the local timezone. As of AzCopy 10.5, this flag applies only to files, not folders, so folder properties won't be copied when using this flag with `--preserve-smb-info` or `--preserve-smb-permissions`.
-**--include-after** string Include only those files modified on or after the given date/time. The value should be in ISO8601 format. If no timezone
-is specified, the value is assumed to be in the local timezone of the machine running AzCopy. for example, `2020-08-19T15:04:00Z` for a UTC time, or `2020-08-19` for midnight (00:00) in the local timezone. As at AzCopy 10.5, this flag applies only to files, not folders, so folder properties won't be copied when using this flag with `--preserve-smb-info` or `--preserve-permissions`.
+`--include-attributes` (string) (Windows only) Include files whose attributes match the attribute list. For example: A;S;R
- **--include-before** string Include only those files modified before or on the given date/time. The value should be in ISO8601 format. If no timezone is specified, the value is assumed to be in the local timezone of the machine running AzCopy. E.g. `2020-08-19T15:04:00Z` for a UTC time, or `2020-08-19` for midnight (00:00) in the local timezone. As of AzCopy 10.7, this flag applies only to files, not folders, so folder properties won't be copied when using this flag with `--preserve-smb-info` or `--preserve-permissions`.
+`--include-before` (string) Include only those files modified before or on the given date/time. The value should be in ISO8601 format. If no timezone is specified, the value is assumed to be in the local timezone of the machine running AzCopy. For example, `2020-08-19T15:04:00Z` for a UTC time, or `2020-08-19` for midnight (00:00) in the local timezone. As of AzCopy 10.7, this flag applies only to files, not folders, so folder properties won't be copied when using this flag with `--preserve-smb-info` or `--preserve-smb-permissions`.
-**--include-attributes** string (Windows only) Includes files whose attributes match the attribute list. For example: A;S;R
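A hedged example of limiting a recursive upload to files modified within a window, reusing the ISO8601 format shown above; the dates, directory name, and URL are placeholders:

```azcopy
azcopy copy "./myDir" "https://[account].blob.core.windows.net/[container]?[SAS]" --recursive --include-after "2020-08-19T15:04:00Z" --include-before "2020-08-21T00:00:00Z"
```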
+`--include-directory-stub` False by default to ignore directory stubs. Directory stubs are blobs with metadata `hdi_isfolder:true`. Setting value to true will preserve directory stubs during transfers.
-**--include-path** string Include only these paths when copying. This option does not support wildcard characters (*). Checks relative path prefix (For example: `myFolder;myFolder/subDirName/file.pdf`).
+`--include-path` (string) Include only these paths when copying. This option doesn't support wildcard characters (*). Checks relative path prefix (For example: myFolder;myFolder/subDirName/file.pdf).
-**--include-directory-stub** False by default to ignore directory stubs. Directory stubs are blobs with metadata 'hdi_isfolder:true'. Setting value to true will preserve directory stubs during transfers.
+`--include-pattern` (string) Include only these files when copying. This option supports wildcard characters (*). Separate files by using a ';'.
-**--include-pattern** string Include only these files when copying. This option supports wildcard characters (*). Separate files by using a `;`.
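For instance, a minimal sketch that copies only JPG and PDF files, separating the patterns with ';' as described above; all names are placeholders:

```azcopy
azcopy copy "./photos" "https://[account].blob.core.windows.net/[container]?[SAS]" --recursive --include-pattern "*.jpg;*.pdf"
```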
+`--include-regex` (string) Include only files whose relative paths match the regular expressions. Separate regular expressions with ';'.
-**--include-regex** string Include only the relative path of the files that align with regular expressions. Separate regular expressions with ';'.
+`--list-of-versions` (string) Specifies a file where each version ID is listed on a separate line. Ensure that the source points to a single blob and that all the version IDs specified in the file belong to that blob. AzCopy downloads the specified versions to the destination folder provided.
-**--list-of-versions** string Specifies a file where each version ID is listed on a separate line. Ensure that the source must point to a single blob and all the version IDs specified in the file using this flag must belong to the source blob only. AzCopy will download the specified versions in the destination folder provided. For more information, see [Download previous versions of a blob](./storage-use-azcopy-v10.md#transfer-data).
+`--log-level` (string) Define the log verbosity for the log file, available levels: INFO(all requests/responses), WARNING(slow responses), ERROR(only failed requests), and NONE(no output logs). (default 'INFO'). (default "INFO")
-**--log-level** string Define the log verbosity for the log file, available levels: INFO(all requests/responses), WARNING(slow responses), ERROR(only failed requests), and NONE(no output logs). (default `INFO`).
+`--metadata` (string) Upload to Azure Storage with these key-value pairs as metadata.
-**--metadata** string Upload to Azure Storage with these key-value pairs as metadata.
+`--no-guess-mime-type` Prevents AzCopy from detecting the content-type based on the extension or content of the file.
-**--no-guess-mime-type** Prevents AzCopy from detecting the content-type based on the extension or content of the file.
+`--overwrite` (string) Overwrite the conflicting files and blobs at the destination if this flag is set to true. (default 'true') Possible values include 'true', 'false', 'prompt', and 'ifSourceNewer'. For destinations that support folders, conflicting folder-level properties will be overwritten if this flag is 'true' or if a positive response is provided to the prompt. (default "true")
-**--overwrite** string Overwrite the conflicting files and blobs at the destination if this flag is set to true. (default `true`) Possible values include `true`, `false`, `prompt`, and `ifSourceNewer`. For destinations that support folders, conflicting folder-level properties will be overwritten this flag is `true` or if a positive response is provided to the prompt. (default "true")
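As an illustrative sketch, overwriting destination objects only when the source is newer; the source directory and destination URL are placeholders:

```azcopy
azcopy copy "./myDir" "https://[account].blob.core.windows.net/[container]?[SAS]" --recursive --overwrite ifSourceNewer
```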
+`--page-blob-tier` (string) Upload page blob to Azure Storage using this blob tier. (default 'None'). (default "None")
-**--page-blob-tier** string Upload page blob to Azure Storage using this blob tier. (default `None`). (default "None")
+`--preserve-last-modified-time` Only available when destination is file system.
-**--preserve-last-modified-time** Only available when destination is file system.
+`--preserve-owner` Only has an effect in downloads, and only when `--preserve-smb-permissions` is used. If true (the default), the file Owner and Group are preserved in downloads. If set to false, `--preserve-smb-permissions` will still preserve ACLs but Owner and Group will be based on the user running AzCopy. (default true)
-**--preserve-owner** Only has an effect in downloads, and only when `--preserve-permissions` is used. If true (the default), the file Owner and Group are preserved in downloads. If set to false,`--preserve-permissions` will still preserve ACLs but Owner and Group will be based on the user running AzCopy (default true)
-**--preserve-smb-info** True by default. Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Windows and Azure Files). Only the attribute bits supported by Azure Files will be transferred; any others will be ignored. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). The info transferred for folders is the same as that for files, except for Last Write Time that is never preserved for folders.
+`--preserve-permissions` False by default. Preserves ACLs between aware resources (Windows and Azure Files, or Azure Data Lake Storage Gen2 to Azure Data Lake Storage Gen2). For Hierarchical Namespace accounts, you'll need a container SAS or OAuth token with Modify Ownership and Modify Permissions permissions. For downloads, you'll also need the `--backup` flag to restore permissions where the new Owner won't be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern).
-**--preserve-permissions** False by default. Preserves ACLs between aware resources (Windows and Azure Files, or Data Lake Storage Gen 2 to Data Lake Storage Gen 2). For accounts that have a hierarchical namespace, you will need a container SAS or OAuth token with Modify Ownership and Modify Permissions permissions. For downloads, you will also need the --backup flag to restore permissions where the new Owner will not be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (e.g. include-pattern).
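A hedged sketch of a download from Azure Files that keeps ACLs and SMB properties, combining `--preserve-permissions` with `--backup` as described above; the share URL and local path are placeholders:

```azcopy
azcopy copy "https://[account].file.core.windows.net/[share]?[SAS]" "C:\localDir" --recursive --preserve-permissions --preserve-smb-info --backup
```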
+`--preserve-smb-info` For SMB-aware locations, flag will be set to true by default. Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Windows and Azure Files). Only the attribute bits supported by Azure Files will be transferred; any others will be ignored. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). The info transferred for folders is the same as that for files, except for `Last Write Time` which is never preserved for folders. (default true)
-**--put-md5** Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob or file. (By default the hash is NOT created.) Only available when uploading.
+`--put-md5` Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob or file. (By default the hash is NOT created.) Only available when uploading.
-**--recursive** Look into subdirectories recursively when uploading from local file system.
+`--recursive` Look into subdirectories recursively when uploading from local file system.
-**--s2s-detect-source-changed** Detect if the source file/blob changes while it is being read. (This parameter only applies to service-to-service copies, because the corresponding check is permanently enabled for uploads and downloads.)
+`--s2s-detect-source-changed` Detect if the source file/blob changes while it is being read. (This parameter only applies to service-to-service copies, because the corresponding check is permanently enabled for uploads and downloads.)
-**--s2s-handle-invalid-metadata** string Specifies how invalid metadata keys are handled. Available options: ExcludeIfInvalid, FailIfInvalid, RenameIfInvalid. (default `ExcludeIfInvalid`).
+`--s2s-handle-invalid-metadata` (string) Specifies how invalid metadata keys are handled. Available options: ExcludeIfInvalid, FailIfInvalid, RenameIfInvalid. (default 'ExcludeIfInvalid'). (default "ExcludeIfInvalid")
-**--s2s-preserve-access-tier** Preserve access tier during service to service copy. Refer to [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md) to ensure destination storage account supports setting access tier. In the cases that setting access tier is not supported, use s2sPreserveAccessTier=false to bypass copying access tier. (default `true`).
+`--s2s-preserve-access-tier` Preserve access tier during service to service copy. Refer to [Azure Blob storage: hot, cool, and archive access tiers](/azure/storage/blobs/storage-blob-storage-tiers) to ensure the destination storage account supports setting the access tier. In cases where setting the access tier isn't supported, use s2sPreserveAccessTier=false to bypass copying the access tier. (default true). (default true)
-**--s2s-preserve-blob-tags** Preserve index tags during service to service transfer from one blob storage to another.
+`--s2s-preserve-blob-tags` Preserve index tags during service to service transfer from one blob storage to another
-**--s2s-preserve-properties** Preserve full properties during service to service copy. For AWS S3 and Azure File non-single file source, the list operation doesn't return full properties of objects and files. To preserve full properties, AzCopy needs to send one additional request per object or file. (default true)
+`--s2s-preserve-properties` Preserve full properties during service to service copy. For AWS S3 and Azure File non-single file source, the list operation doesn't return full properties of objects and files. To preserve full properties, AzCopy needs to send one more request per object or file. (default true)
## Options inherited from parent commands
-**--cap-mbps float** Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it's omitted, the throughput isn't capped.
-**--output-type** string Format of the command's output. The choices include: text, json. The default value is `text`. (default "text")
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
-**--trusted-microsoft-suffixes** string Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is `*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net`. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Doc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-doc.md
description: This article provides reference information for the azcopy doc comm
Previously updated : 07/24/2020 Last updated : 05/26/2022
By default, the files are stored in a folder named 'doc' inside the current dire
azcopy doc [flags] ```
-## Related conceptual articles
-- [Get started with AzCopy](storage-use-azcopy-v10.md)
-- [Transfer data with AzCopy and Blob storage](./storage-use-azcopy-v10.md#transfer-data)
-- [Transfer data with AzCopy and file storage](storage-use-azcopy-files.md)
## Options
-|Option|Description|
-|--|--|
-|-h, --help|Shows help content for the doc command.|
+`-h`, `--help` help for doc
+`--output-location` (string) where to put the generated markdown files (default "./doc")
## Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string | Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it's omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text").
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Env https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-env.md
description: This article provides reference information for the azcopy env comm
Previously updated : 07/24/2020 Last updated : 05/26/2022
Shows the environment variables that can configure AzCopy's behavior. For a comp
## Synopsis
+Shows the environment variables that you can use to configure the behavior of AzCopy.
+
+If you set an environment variable by using the command line, that variable will be readable in your command line history. Consider clearing variables that contain credentials from your command line history. To keep variables from appearing in your history, you can use a script to prompt the user for their credentials, and to set the environment variable.
+ ```azcopy azcopy env [flags] ```
-> [!IMPORTANT]
-> If you set an environment variable by using the command line, that variable will be readable in your command line history. Consider clearing variables that contain credentials from your command line history. To keep variables from appearing in your history, you can use a script to prompt the user for their credentials, and to set the environment variable.
-
-## Related conceptual articles
-- [Get started with AzCopy](storage-use-azcopy-v10.md)
-- [Transfer data with AzCopy and Blob storage](./storage-use-azcopy-v10.md#transfer-data)
-- [Transfer data with AzCopy and file storage](storage-use-azcopy-files.md)
## Options
-|Option|Description|
-|--|--|
-|-h, --help|Shows help content for the env command. |
-|--show-sensitive|Shows sensitive/secret environment variables.|
+`-h`, `--help` help for env
+`--show-sensitive` Shows sensitive/secret environment variables.
## Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string | Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps float` Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it's omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Jobs Clean https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs-clean.md
description: This article provides reference information for the azcopy jobs cle
Previously updated : 07/24/2020 Last updated : 05/26/2022
Remove all log and plan files for all jobs
-```
+```azcopy
azcopy jobs clean [flags] ```
azcopy jobs clean [flags]
## Examples
-```
- azcopy jobs clean --with-status=completed
+```azcopy
+azcopy jobs clean --with-status=completed
``` ## Options
-**--help** Help for clean.
-
-**--with-status** string Only remove the jobs with this status, available values: `Canceled`, `Completed`, `Failed`, `InProgress`, `All` (default `All`)
+`-h`, `--help` help for clean
+`--with-status` (string) Only remove the jobs with this status, available values: All, Canceled, Failed, Completed, CompletedWithErrors, CompletedWithSkipped, CompletedWithErrorsAndSkipped (default "All")
## Options inherited from parent commands
-**--cap-mbps float** Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
-**--output-type** string Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
-**--trusted-microsoft-suffixes** string Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Jobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs-list.md
description: This article provides reference information for the azcopy jobs lis
Previously updated : 07/24/2020 Last updated : 05/26/2022
azcopy jobs list [flags]
## Options
-|Option|Description|
-|--|--|
-|-h, --help|Show help content for the list command.|
+`-h`, `--help` help for list
+`--with-status` (string) List the jobs with a given status, available values: All, Canceled, Failed, InProgress, Completed, CompletedWithErrors, CompletedWithFailures, CompletedWithErrorsAndSkipped (default "All")
## Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string | Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Jobs Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs-remove.md
description: This article provides reference information for the azcopy jobs rem
Previously updated : 07/24/2020 Last updated : 05/26/2022
Remove all files associated with the given job ID.
> [!NOTE] > You can customize the location where log and plan files are saved. See the [azcopy env](storage-ref-azcopy-env.md) command to learn more.
-```
+```azcopy
azcopy jobs remove [jobID] [flags] ```
azcopy jobs remove [jobID] [flags]
## Examples
-```
+```azcopy
azcopy jobs rm e52247de-0323-b14d-4cc8-76e0be2e2d44 ``` ## Options
-**--help** Help for remove.
+`--help` Help for remove.
## Options inherited from parent commands
-**--cap-mbps float** Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it's omitted, the throughput isn't capped.
-**--output-type** string Format of the command's output. The choices include: text, json. The default value is `text`. (default `text`)
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
-**--trusted-microsoft-suffixes** string Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Jobs Resume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs-resume.md
description: This article provides reference information for the azcopy jobs res
Previously updated : 07/24/2020 Last updated : 05/26/2022
azcopy jobs resume [jobID] [flags]
- [Transfer data with AzCopy and Blob storage](./storage-use-azcopy-v10.md#transfer-data) - [Transfer data with AzCopy and file storage](storage-use-azcopy-files.md)
-## Options
+### Options
-|Option|Description|
-|--|--|
-|--destination-sas string|Destination SAS of the destination for given Job ID.|
-|--exclude string|Filter: Exclude these failed transfer(s) when resuming the job. Files should be separated by ';'.|
-|-h, --help|Show help content for the resume command.|
-|--include string|Filter: only include these failed transfer(s) when resuming the job. Files should be separated by ';'.|
-|--source-sas string |source SAS of the source for given Job ID.|
+`--destination-sas` (string) destination SAS token of the destination for a given Job ID.
-## Options inherited from parent commands
+`--exclude` (string) Filter: exclude these failed transfer(s) when resuming the job. Files should be separated by ';'.
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string |Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`-h`, `--help` help for resume
+
+`--include` (string) Filter: only include these failed transfer(s) when resuming the job. Files should be separated by ';'.
+
+`--source-sas` (string) Source SAS token of the source for a given Job ID.
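For illustration, a minimal resume of a failed job that re-supplies the source and destination SAS tokens; the job ID and tokens are placeholders:

```azcopy
azcopy jobs resume e52247de-0323-b14d-4cc8-76e0be2e2d44 --source-sas "[source-SAS]" --destination-sas "[destination-SAS]"
```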
+
+### Options inherited from parent commands
+
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Jobs Show https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs-show.md
description: This article provides reference information for the azcopy jobs sho
Previously updated : 07/24/2020 Last updated : 05/26/2022
azcopy jobs show [jobID] [flags]
## Options
-|Option|Description|
-|--|--|
-|-h, --help|Shows help content for the show command.|
-|--with-status string|Only list the transfers of job with this status, available values: Started, Success, Failed|
+`-h`, `--help` Help for show
+`--with-status` (string) Only list the transfers of job with this status, available values: Started, Success, Failed.
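As a sketch, listing only the failed transfers of a job; the job ID is the placeholder value used elsewhere in these examples:

```azcopy
azcopy jobs show e52247de-0323-b14d-4cc8-76e0be2e2d44 --with-status Failed
```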
## Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string |Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs.md
description: This article provides reference information for the azcopy jobs com
Previously updated : 07/24/2020 Last updated : 05/26/2022
azcopy jobs show [jobID]
## Options
-|Option|Description|
-|--|--|
-|-h, --help|Show help content for the jobs command.|
+`-h`, `--help` Help for jobs
## Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string |Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-list.md
description: This article provides reference information for the azcopy list com
Previously updated : 09/21/2021 Last updated : 05/26/2022
azcopy list [containerURL] [flags]
## Examples ```azcopy
-azcopy list [containerURL]
+azcopy list [containerURL] --properties [semicolon(;) separated list of attributes (LastModifiedTime, VersionId, BlobType, BlobAccessTier, ContentType, ContentEncoding, LeaseState, LeaseDuration, LeaseStatus) enclosed in double quotes (")]
``` ## Options
-|Option|Description|
-|--|--|
-|-h, --help|Show help content for the list command.|
-|--machine-readable|Lists file sizes in bytes.|
-|--mega-units|Displays units in orders of 1000, not 1024.|
-| --properties | delimiter (;) separated values of properties required in list output. |
-|--running-tally|Counts the total number of files and their sizes.|
+`-h`, `--help` Help for list
+
+`--machine-readable` Lists file sizes in bytes.
+
+`--mega-units` Displays units in orders of 1000, not 1024.
+
+`--properties` (string) delimiter (;) separated values of properties required in list output.
+
+`--running-tally` Counts the total number of files and their sizes.
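A hedged sketch combining these flags to get a byte-accurate running total for a container, with a couple of the properties listed above; the container URL is a placeholder:

```azcopy
azcopy list "https://[account].blob.core.windows.net/[container]?[SAS]" --machine-readable --running-tally --properties "LastModifiedTime;BlobType"
```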
## Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string |Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it's omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Login Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-login-status.md
+
+ Title: azcopy login status | Microsoft Docs
+description: This article provides reference information for the azcopy login status command.
+ Last updated : 05/26/2022
+# azcopy login status
+
+Shows the login status of the current AzCopy session.
+
+## Synopsis
+
+Prints if you're currently logged in to your Azure Storage account.
+
+```azcopy
+azcopy login status [flags]
+```
+
+## Related conceptual articles
+
+- [Get started with AzCopy](storage-use-azcopy-v10.md)
+- [Transfer data with AzCopy and Blob storage](./storage-use-azcopy-v10.md#transfer-data)
+- [Transfer data with AzCopy and file storage](storage-use-azcopy-files.md)
+
+### Options
+
+`--endpoint` Prints the Azure Active Directory endpoint that is being used in the current session.
+
+`-h`, `--help` Help for status
+
+`--tenant` Prints the Azure Active Directory tenant ID that is currently being used in session.
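For example, a minimal check that also prints the tenant ID and Azure Active Directory endpoint in use for the current session:

```azcopy
azcopy login status --tenant --endpoint
```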
+
+### Options inherited from parent commands
+
+`--aad-endpoint` (string) The Azure Active Directory endpoint to use. The default (https://login.microsoftonline.com) is correct for the global Azure cloud. Set this parameter when authenticating in a national cloud. Not needed for Managed Service Identity
+
+`--application-id` (string) Application ID of user-assigned identity. Required for service principal auth.
+
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+
+`--certificate-path` (string) Path to certificate for SPN authentication. Required for certificate-based service principal auth.
+
+`--identity` Log in using virtual machine's identity, also known as managed service identity (MSI).
+
+`--identity-client-id` (string) Client ID of user-assigned identity.
+
+`--identity-object-id` (string) Object ID of user-assigned identity.
+
+`--identity-resource-id` (string) Resource ID of user-assigned identity.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--service-principal` Log in via Service Principal Name (SPN) by using a certificate or a secret. The client secret or certificate password must be placed in the appropriate environment variable. Type AzCopy env to see names and descriptions of environment variables.
+
+`--tenant-id` (string) The Azure Active Directory tenant ID to use for OAuth device interactive login.
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
+
+## See also
+
+- [azcopy](storage-ref-azcopy.md)
storage Storage Ref Azcopy Login https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-login.md
description: This article provides reference information for the azcopy login co
Previously updated : 07/24/2020 Last updated : 05/26/2022
azcopy login [flags]
Log in interactively with default AAD tenant ID set to common:
-```azcopy
-azcopy login
-```
+`azcopy login`
Log in interactively with a specified tenant ID:
-```azcopy
-azcopy login --tenant-id "[TenantID]"
-```
+`azcopy login --tenant-id "[TenantID]"`
Log in by using the system-assigned identity of a Virtual Machine (VM):
-```azcopy
-azcopy login --identity
-```
+`azcopy login --identity`
Log in by using the user-assigned identity of a VM and a Client ID of the service identity:
-```azcopy
-azcopy login --identity --identity-client-id "[ServiceIdentityClientID]"
-```
+`azcopy login --identity --identity-client-id "[ServiceIdentityClientID]"`
Log in by using the user-assigned identity of a VM and an Object ID of the service identity:
-```azcopy
-azcopy login --identity --identity-object-id "[ServiceIdentityObjectID]"
-```
+`azcopy login --identity --identity-object-id "[ServiceIdentityObjectID]"`
Log in by using the user-assigned identity of a VM and a Resource ID of the service identity:
-```azcopy
-azcopy login --identity --identity-resource-id "/subscriptions/<subscriptionId>/resourcegroups/myRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myID"
-```
+`azcopy login --identity --identity-resource-id "/subscriptions/<subscriptionId>/resourcegroups/myRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myID"`
Log in as a service principal by using a client secret. Set the environment variable AZCOPY_SPA_CLIENT_SECRET to the client secret for secret-based service principal auth:
-```azcopy
-azcopy login --service-principal --application-id <your service principal's application ID>
-```
+`azcopy login --service-principal --application-id <your service principal's application ID>`
Log in as a service principal by using a certificate and its password:
-Set the environment variable AZCOPY_SPA_CERT_PASSWORD to the certificate's password for cert-based service principal auth:
+Set the environment variable `AZCOPY_SPA_CERT_PASSWORD` to the certificate's password for cert-based service principal auth:
-```azcopy
-azcopy login --service-principal --certificate-path /path/to/my/cert --application-id <your service principal's application ID>
-```
+`azcopy login --service-principal --certificate-path /path/to/my/cert --application-id <your service principal's application ID>`
+
+Treat /path/to/my/cert as a path to a PEM or PKCS12 file. AzCopy doesn't reach into the system cert store to obtain your certificate. `--certificate-path` is mandatory when doing cert-based service principal auth.
-Treat `/path/to/my/cert` as a path to a PEM or PKCS12 file. AzCopy does not reach into the system cert store to obtain your certificate.
+Use the `status` subcommand of login to check the login status of your current session:
-`--certificate-path` is mandatory when doing cert-based service principal auth.
+`azcopy login status`
## Options
-**--aad-endpoint** string The Azure Active Directory endpoint to use. The default (https://login.microsoftonline.com) is correct for the global Azure cloud. Set this parameter when authenticating in a national cloud. Not needed for Managed Service Identity.
+`--aad-endpoint` (string) The Azure Active Directory endpoint to use. The default (https://login.microsoftonline.com) is correct for the global Azure cloud. Set this parameter when authenticating in a national cloud. Not needed for Managed Service Identity
-**--application-id** string Application ID of user-assigned identity. Required for service principal auth.
+`--application-id` (string) Application ID of user-assigned identity. Required for service principal auth.
-**--certificate-path** string Path to certificate for SPN authentication. Required for certificate-based service principal auth.
+`--certificate-path` (string) Path to certificate for SPN authentication. Required for certificate-based service principal auth.
-**--help** help for the `azcopy login` command.
+`-h`, `--help` Help for login
-**--identity** Login using virtual machine's identity, also known as managed service identity (MSI).
+`--identity` Log in using virtual machine's identity, also known as managed service identity (MSI).
-**--identity-client-id** string Client ID of user-assigned identity.
+`--identity-client-id` (string) Client ID of user-assigned identity.
-**--identity-object-id** string Object ID of user-assigned identity.
+`--identity-object-id` (string) Object ID of user-assigned identity.
-**--identity-resource-id** string Resource ID of user-assigned identity.
+`--identity-resource-id` (string) Resource ID of user-assigned identity.
-**--service-principal** Log in via Service Principal Name (SPN) by using a certificate or a secret. The client secret or certificate password must be placed in the appropriate environment variable. Type AzCopy env to see names and descriptions of environment variables.
+`--service-principal` Log in via Service Principal Name (SPN) by using a certificate or a secret. The client secret or certificate password must be placed in the appropriate environment variable. Type AzCopy env to see names and descriptions of environment variables.
-**--tenant-id** string The Azure Active Directory tenant ID to use for OAuth device interactive login.
+`--tenant-id` (string) The Azure Active Directory tenant ID to use for OAuth device interactive login.
## Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string |Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it's omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Logout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-logout.md
description: This article provides reference information for the azcopy logout c
Previously updated : 07/24/2020 Last updated : 05/26/2022
azcopy logout [flags]
- [Transfer data with AzCopy and Blob storage](./storage-use-azcopy-v10.md#transfer-data) - [Transfer data with AzCopy and file storage](storage-use-azcopy-files.md)
-## Options
+### Options
-|Option|Description|
-|--|--|
-|-h, --help|Show help content for the logout command.|
+`-h`, `--help` help for logout
-## Options inherited from parent commands
+### Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string |Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Make https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-make.md
description: This article provides reference information for the azcopy make com
Previously updated : 07/24/2020 Last updated : 05/26/2022
azcopy make "https://[account-name].[blob,file,dfs].core.windows.net/[top-level-
## Options
-|Option|Description|
-|--|--|
-|-h, --help|Show help content for the make command. |
-|--quota-gb uint32|Specifies the maximum size of the share in gigabytes (GB), zero means you accept the file service's default quota.|
+`-h`, `--help` help for make
+`--quota-gb` (uint32) Specifies the maximum size of the share in gigabytes (GiB), 0 means you accept the file service's default quota.
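For instance, a sketch of creating a 1,024 GiB file share (the account name, share name, and quota value are placeholders; authentication via `azcopy login` or a SAS token is assumed):

```azcopy
azcopy make "https://[account-name].file.core.windows.net/[share-name]" --quota-gb 1024
```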
## Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string |Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-remove.md
description: This article provides reference information for the azcopy remove c
Previously updated : 09/21/2021 Last updated : 05/26/2022
azcopy remove [resourceURL] [flags]
Remove a single blob by using a SAS token:
-```azcopy
-azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"
-```
+`azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"`
Remove an entire virtual directory by using a SAS token:
-```azcopy
-azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true
-```
+`azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true`
Remove only the blobs inside of a virtual directory, but don't remove any subdirectories or blobs within those subdirectories:
-```azcopy
-azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --recursive=false
-```
+`azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --recursive=false`
-Remove a subset of blobs in a virtual directory (For example: remove only jpg and pdf files, or if the blob name is `exactName`):
+Remove a subset of blobs in a virtual directory (For example: remove only jpg and pdf files, or if the blob name is "exactName"):
-```azcopy
-azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true --include-pattern="*.jpg;*.pdf;exactName"
-```
+`azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true --include-pattern="*.jpg;*.pdf;exactName"`
Remove an entire virtual directory but exclude certain blobs from the scope (For example: every blob that starts with foo or ends with bar):
-```azcopy
-azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true --exclude-pattern="foo*;*bar"
-```
+`azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true --exclude-pattern="foo*;*bar"`
+
+Remove specified version IDs of a blob from Azure Storage. Ensure that the source is a valid blob, and that `versionidsfile` is the path to a file where each version ID is written on a separate line. All the specified versions will be removed from Azure Storage.
+
+`azcopy rm "https://[srcaccount].blob.core.windows.net/[containername]/[blobname]" "/path/to/dir" --list-of-versions="/path/to/dir/[versionidsfile]"`
Remove specific blobs and virtual directories by putting their relative paths (NOT URL-encoded) in a file:
-```azcopy
-azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/parent/dir]" --recursive=true --list-of-files=/usr/bar/list.txt
-- file content:
- dir1/dir2
- blob1
- blob2
-```
+`azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/parent/dir]" --recursive=true --list-of-files=/usr/bar/list.txt`
Remove a single file from a Blob Storage account that has a hierarchical namespace (include/exclude not supported):
-```azcopy
-azcopy rm "https://[account].dfs.core.windows.net/[container]/[path/to/file]?[SAS]"
-```
+`azcopy rm "https://[account].dfs.core.windows.net/[container]/[path/to/file]?[SAS]"`
Remove a single directory from a Blob Storage account that has a hierarchical namespace (include/exclude not supported):
-```azcopy
-azcopy rm "https://[account].dfs.core.windows.net/[container]/[path/to/directory]?[SAS]"
-```
+`azcopy rm "https://[account].dfs.core.windows.net/[container]/[path/to/directory]?[SAS]"`
## Options
-**--delete-snapshots** string By default, the delete operation fails if a blob has snapshots. Specify `include` to remove the root blob and all its snapshots; alternatively specify `only` to remove only the snapshots but keep the root blob.
+`--delete-snapshots` (string) By default, the delete operation fails if a blob has snapshots. Specify 'include' to remove the root blob and all its snapshots; alternatively specify 'only' to remove only the snapshots but keep the root blob.
+
+`--dry-run` Prints the path of files that would be removed by the command. This flag doesn't trigger the removal of the files.
-**--dry-run** Prints the path files that would be removed by the command. This flag does not trigger the removal of the files.
+`--exclude-path` (string) Exclude these paths when removing. This option doesn't support wildcard characters (*). Checks relative path prefix. For example: myFolder;myFolder/subDirName/file.pdf
-**--exclude-path** string Exclude these paths when removing. This option does not support wildcard characters (*). Checks relative path prefix. For example: `myFolder;myFolder/subDirName/file.pdf`
+`--exclude-pattern` (string) Exclude files where the name matches the pattern list. For example: *.jpg;*.pdf;exactName
-**--exclude-pattern** string Exclude files where the name matches the pattern list. For example: `*.jpg`;`*.pdf`;`exactName`
+`--force-if-read-only` When deleting an Azure Files file or folder, force the deletion to work even if the existing object has its read-only attribute set.
-**--force-if-read-only** When deleting an Azure Files file or folder, force the deletion to work even if the existing object is has its read-only attribute set.
+`--from-to` (string) Optionally specifies the source destination combination. For Example: BlobTrash, FileTrash, BlobFSTrash
-**--from-to** string Optionally specifies the source destination combination. For Example: BlobTrash, FileTrash, BlobFSTrash
+`-h`, `--help` help for remove
-**--help** help for remove.
+`--include-path` (string) Include only these paths when removing. This option doesn't support wildcard characters (*). Checks relative path prefix. For example: myFolder;myFolder/subDirName/file.pdf
-**--include-path** string Include only these paths when removing. This option does not support wildcard characters (*). Checks relative path prefix. For example: `myFolder;myFolder/subDirName/file.pdf`
+`--include-pattern` (string) Include only files where the name matches the pattern list. For example: *.jpg;*.pdf;exactName
-**--include-pattern** string Include only files where the name matches the pattern list. For example: *`.jpg`;*`.pdf`;`exactName`
+`--list-of-files` (string) Defines the location of a file which contains the list of files and directories to be deleted. The relative paths should be delimited by line breaks, and the paths should NOT be URL-encoded.
-**--list-of-files** string Defines the location of a file, which contains the list of files and directories to be deleted. The relative paths should be
-delimited by line breaks, and the paths should NOT be URL-encoded.
+`--list-of-versions` (string) Specifies a file where each version ID is listed on a separate line. Ensure that the source points to a single blob and that all the version IDs specified in the file belong to the source blob only. Specified version IDs of the given blob will get deleted from Azure Storage.
-**--list-of-versions** string Specifies a file where each version ID is listed on a separate line. Ensure that the source must point to a single blob and all the version IDs specified in the file using this flag must belong to the source blob only. Specified version IDs of the given blob will get deleted from Azure Storage.
+`--log-level` (string) Define the log verbosity for the log file. Available levels include: INFO (all requests/responses), WARNING (slow responses), ERROR (only failed requests), and NONE (no output logs). (default "INFO")
-**--log-level** string Define the log verbosity for the log file. Available levels include: `INFO`(all requests/responses), `WARNING`(slow responses), `ERROR`(only failed requests), and `NONE`(no output logs). (default `INFO`) (default `INFO`)
+`--permanent-delete` (string) This is a preview feature that PERMANENTLY deletes soft-deleted snapshots/versions. Possible values include 'snapshots', 'versions', 'snapshotsandversions', 'none'. (default "none")
-**--recursive** Look into subdirectories recursively when syncing between directories.
+`--recursive` Look into subdirectories recursively when syncing between directories.
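As a hedged illustration of combining these flags (the account, container, path, SAS, and `*.log` pattern are placeholders), the following sketch previews a recursive removal with `--dry-run`; dropping that flag performs the actual delete:

```azcopy
azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true --include-pattern="*.log" --dry-run
```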
## Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps float|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string |Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it's omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-sync.md
description: This article provides reference information for the azcopy sync com
Previously updated : 09/01/2021 Last updated : 05/26/2022
Replicates the source location to the destination location. This article provide
## Synopsis
-The last modified times are used for comparison. The file is skipped if the last modified time in the destination is more recent.
-
-The supported pairs are:
--- Local <-> Azure Blob (either SAS or OAuth authentication can be used)
+The last modified times are used for comparison. The file is skipped if the last modified time in the destination is more recent. The supported pairs are:
+
+- Local <-> Azure Blob / Azure File (either SAS or OAuth authentication can be used)
- Azure Blob <-> Azure Blob (Source must include a SAS or is publicly accessible; either SAS or OAuth authentication can be used for destination)
- Azure File <-> Azure File (Source must include a SAS or is publicly accessible; SAS authentication should be used for destination)
-- Local <-> Azure File
- Azure Blob <-> Azure File

The sync command differs from the copy command in several ways:
-1. By default, the recursive flag is true and sync copies all subdirectories. Sync only copies the top-level files inside a directory if the recursive flag is false.
-2. When syncing between virtual directories, add a trailing slash to the path (refer to examples) if there's a blob with the same name as one of the virtual directories.
-3. If the `--delete-destination` flag is set to true or prompt, then sync will delete files and blobs at the destination that are not present at the source.
-
-## Related conceptual articles
-- [Get started with AzCopy](storage-use-azcopy-v10.md)
-- [Tutorial: Migrate on-premises data to cloud storage with AzCopy](storage-use-azcopy-migrate-on-premises-data.md)
-- [Transfer data with AzCopy and Blob storage](./storage-use-azcopy-v10.md#transfer-data)
-- [Transfer data with AzCopy and file storage](storage-use-azcopy-files.md)
-
-### Advanced
+ 1. By default, the recursive flag is true and sync copies all subdirectories. Sync only copies the top-level files inside a directory if the recursive flag is false.
+ 2. When syncing between virtual directories, add a trailing slash to the path (refer to examples) if there's a blob with the same name as one of the virtual directories.
+ 3. If the 'deleteDestination' flag is set to true or prompt, then sync will delete files and blobs at the destination that aren't present at the source.
-If you don't specify a file extension, AzCopy automatically detects the content type of the files when uploading from the local disk, based on the file extension or content (if no extension is specified).
+Advanced:
-The built-in lookup table is small, but on Unix, it's augmented by the local system's mime.types file(s) if available under one or more of these names:
+Note that if you don't specify a file extension, AzCopy automatically detects the content type of the files when uploading from the local disk, based on the file extension or content.
+The built-in lookup table is small but on Unix it's augmented by the local system's mime.types file(s) if available under one or more of these names:
+
- /etc/mime.types
- /etc/apache2/mime.types
- /etc/apache/mime.types

On Windows, MIME types are extracted from the registry.
+Also note that sync works off of the last modified times exclusively. So in the case of Azure File <-> Azure File,
+the header field Last-Modified is used instead of x-ms-file-change-time, which means that metadata changes at the source can also trigger a full copy.
+ ```azcopy
-azcopy sync <source> <destination> [flags]
+azcopy sync [flags]
``` ## Examples Sync a single file:
-```azcopy
-azcopy sync "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]"
-```
+`azcopy sync "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]"`
Same as above, but also compute an MD5 hash of the file content, and then save that MD5 hash as the blob's Content-MD5 property.
-```azcopy
-azcopy sync "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --put-md5
-```
+`azcopy sync "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --put-md5`
Sync an entire directory including its subdirectories (note that recursive is by default on):
-```azcopy
-azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]"
-```
-
+`azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]"`
or-
-```azcopy
-azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --put-md5
-```
+`azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --put-md5`
Sync only the files inside of a directory but not subdirectories or the files inside of subdirectories:
-```azcopy
-azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --recursive=false
-```
+`azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --recursive=false`
-Sync a subset of files in a directory (For example: only jpg and pdf files, or if the file name is `exactName`):
+Sync a subset of files in a directory (For example: only jpg and pdf files, or if the file name is "exactName"):
-```azcopy
-azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --include-pattern="*.jpg;*.pdf;exactName"
-```
+`azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --include-pattern="*.jpg;*.pdf;exactName"`
Sync an entire directory but exclude certain files from the scope (For example: every file that starts with foo or ends with bar):
-```azcopy
-azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --exclude-pattern="foo*;*bar"
-```
+`azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --exclude-pattern="foo*;*bar"`
Sync a single blob:
-```azcopy
-azcopy sync "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "https://[account].blob.core.windows.net/[container]/[path/to/blob]"
-```
+`azcopy sync "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "https://[account].blob.core.windows.net/[container]/[path/to/blob]"`
Sync a virtual directory:
-```azcopy
-azcopy sync "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]?[SAS]" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --recursive=true
-```
+`azcopy sync "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]?[SAS]" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --recursive=true`
Sync a virtual directory that has the same name as a blob (add a trailing slash to the path in order to disambiguate):
-```azcopy
-azcopy sync "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]/?[SAS]" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]/" --recursive=true
-```
+`azcopy sync "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]/?[SAS]" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]/" --recursive=true`
-Sync an Azure File directory:
+Sync an Azure File directory (same syntax as Blob):
-```azcopy
-azcopy sync "https://[account].file.core.windows.net/[share]/[path/to/dir]?[SAS]" "https://[account].file.core.windows.net/[share]/[path/to/dir]?[SAS]" --recursive=true
-```
+`azcopy sync "https://[account].file.core.windows.net/[share]/[path/to/dir]?[SAS]" "https://[account].file.core.windows.net/[share]/[path/to/dir]" --recursive=true`
-> [!NOTE]
-> If include/exclude flags are used together, only files matching the include patterns would be looked at, but those matching the exclude patterns would be always be ignored.
+Note: if include and exclude flags are used together, only files matching the include patterns are used, but those matching the exclude patterns are ignored.
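For example, a sketch with placeholder values that syncs only PDF files while skipping any whose names start with draft:

```azcopy
azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --include-pattern="*.pdf" --exclude-pattern="draft*"
```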
## Options
-**--block-size-mb** float Use this block size (specified in MiB) when uploading to Azure Storage or downloading from Azure Storage. Default is automatically calculated based on file size. Decimal fractions are allowed (For example: `0.25`).
+`--block-size-mb` (float) Use this block size (specified in MiB) when uploading to Azure Storage or downloading from Azure Storage. Default is automatically calculated based on file size. Decimal fractions are allowed (For example: 0.25).
+
+`--check-md5` (string) Specifies how strictly MD5 hashes should be validated when downloading. This option is only available when downloading. Available values include: NoCheck, LogOnly, FailIfDifferent, FailIfDifferentOrMissing. (default "FailIfDifferent")
-**--check-md5** string Specifies how strictly MD5 hashes should be validated when downloading. This option is only available when downloading. Available values include: `NoCheck`, `LogOnly`, `FailIfDifferent`, `FailIfDifferentOrMissing`. (default `FailIfDifferent`). (default `FailIfDifferent`)
+`--cpk-by-name` (string) Client-provided key by name gives clients that make requests against Azure Blob storage the option to provide an encryption key on a per-request basis. The provided key name will be fetched from Azure Key Vault and used to encrypt the data.
-**--cpk-by-name** string Client provided key by name let clients making requests against Azure Blob Storage an option to provide an encryption key on a per-request basis. Provided key name will be fetched from Azure Key Vault and will be used to encrypt the data
+`--cpk-by-value` Client-provided key by value gives clients that make requests against Azure Blob storage the option to provide an encryption key on a per-request basis. The provided key and its hash will be fetched from environment variables.
-**--cpk-by-value** Client provided key by name let clients making requests against Azure Blob Storage an option to provide an encryption key on a per-request basis. Provided key and its hash will be fetched from environment variables
+`--delete-destination` (string) Defines whether to delete extra files from the destination that aren't present at the source. Could be set to true, false, or prompt. If set to prompt, the user will be asked a question before scheduling files and blobs for deletion. (default "false")
-**--delete-destination** string Defines whether to delete extra files from the destination that are not present at the source. Could be set to `true`, `false`, or `prompt`. If set to `prompt`, the user will be asked a question before scheduling files and blobs for deletion. (default `false`). (default `false`)
+`--dry-run` Prints the path of files that would be copied or removed by the sync command. This flag doesn't copy or remove the actual files.
-**--dry-run** Prints the path of files that would be copied or removed by the sync command. This flag does not copy or remove the actual files.
+`--exclude-attributes` (string) (Windows only) Exclude files whose attributes match the attribute list. For example: A;S;R
-**--exclude-attributes** string (Windows only) Excludes files whose attributes match the attribute list. For example: `A;S;R`
+`--exclude-path` (string) Exclude these paths when comparing the source against the destination. This option doesn't support wildcard characters (*). Checks relative path prefix(For example: myFolder;myFolder/subDirName/file.pdf).
-**--exclude-path** string Exclude these paths when comparing the source against the destination. This option does not support wildcard characters (*). Checks relative path prefix(For example: `myFolder;myFolder/subDirName/file.pdf`).
+`--exclude-pattern` (string) Exclude files where the name matches the pattern list. For example: *.jpg;*.pdf;exactName
-**--exclude-pattern** string Exclude files where the name matches the pattern list. For example: `*.jpg;*.pdf;exactName`
+`--exclude-regex` (string) Exclude the relative path of the files that match with the regular expressions. Separate regular expressions with ';'.
-**--exclude-regex** string Exclude the relative path of the files that match with the regular expressions. Separate regular expressions with ';'.
+`--from-to` (string) Optionally specifies the source destination combination. For Example: LocalBlob, BlobLocal, LocalFile, FileLocal, BlobFile, FileBlob, etc.
-**--help** help for sync.
+`-h`, `--help` help for sync
-**--include-attributes** string (Windows only) Includes only files whose attributes match the attribute list. For example: `A;S;R`
+`--include-attributes` (string) (Windows only) Include only files whose attributes match the attribute list. For example: A;S;R
-**--include-pattern** string Include only files where the name matches the pattern list. For example: `*.jpg;*.pdf;exactName`
+`--include-pattern` (string) Include only files where the name matches the pattern list. For example: *.jpg;*.pdf;exactName
-**--include-regex** string Include only the relative path of the files that align with regular expressions. Separate regular expressions with ';'.
+`--include-regex` (string) Include the relative path of the files that match with the regular expressions. Separate regular expressions with ';'.
-**--log-level** string Define the log verbosity for the log file, available levels: `INFO`(all requests and responses), `WARNING`(slow responses), `ERROR`(only failed requests), and `NONE`(no output logs). (default `INFO`).
+`--log-level` (string) Define the log verbosity for the log file, available levels: INFO (all requests and responses), WARNING (slow responses), ERROR (only failed requests), and NONE (no output logs). (default "INFO")
-**--mirror-mode** Disable last-modified-time based comparison and overwrites the conflicting files and blobs at the destination if this flag is set to `true`. Default is `false`.
+`--mirror-mode` Disables last-modified-time based comparison and overwrites the conflicting files and blobs at the destination if this flag is set to true. Default is false.
-**--preserve-smb-info** True by default. Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Windows and Azure Files). This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). The info transferred for folders is the same as that for files, except for Last Write Time that is not preserved for folders.
+`--preserve-permissions` False by default. Preserves ACLs between aware resources (Windows and Azure Files, or ADLS Gen 2 to ADLS Gen 2). For Hierarchical Namespace accounts, you'll need a container SAS or OAuth token with Modify Ownership and Modify Permissions permissions. For downloads, you'll also need the `--backup` flag to restore permissions where the new Owner won't be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern).
-**--preserve-permissions** False by default. Preserves ACLs between aware resources (Windows and Azure Files, or Data Lake Storage Gen 2 to Data Lake Storage Gen 2). For accounts that have a hierarchical namespace, you will need a container SAS or OAuth token with Modify Ownership and Modify Permissions permissions. For downloads, you will also need the --backup flag to restore permissions where the new Owner will not be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (e.g. include-pattern).
+`--preserve-smb-info` For SMB-aware locations, flag will be set to true by default. Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Azure Files). This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). The info transferred for folders is the same as that for files, except for Last Write Time that isn't preserved for folders. (default true)
-**--put-md5** Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob or file. (By default the hash is NOT created.) Only available when uploading.
+`--put-md5` Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob or file. (By default the hash is NOT created.) Only available when uploading.
-**--recursive** `True` by default, look into subdirectories recursively when syncing between directories. (default `True`).
+`--recursive` True by default, look into subdirectories recursively when syncing between directories. (default true)
-**--s2s-preserve-access-tier** Preserve access tier during service to service copy. Refer to [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md) to ensure destination storage account supports setting access tier. In the cases that setting access tier is not supported, please use `--s2s-preserve-access-tier=false` to bypass copying access tier. (default `true`).
+`--s2s-preserve-access-tier` Preserve access tier during service to service copy. Refer to [Azure Blob storage: hot, cool, and archive access tiers](../blobs/storage-blob-storage-tiers.md) to ensure destination storage account supports setting access tier. In the cases that setting access tier isn't supported, please use `s2sPreserveAccessTier=false` to bypass copying access tier. (default true)
-**--s2s-preserve-blob-tags** Preserve index tags during service to service sync from one blob storage to another.
+`--s2s-preserve-blob-tags` Preserve index tags during service to service sync from one blob storage to another
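As a hedged example of the deletion behavior described above (paths and account names are placeholders), the following sketch mirrors a local directory to a virtual directory and prompts before any destination-only blobs are scheduled for deletion:

```azcopy
azcopy sync "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/virtual/dir]" --recursive=true --delete-destination=prompt
```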
## Options inherited from parent commands
-|Option|Description|
-|||
-|--cap-mbps uint32|Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.|
-|--output-type string|Format of the command's output. The choices include: text, json. The default value is "text".|
-|--trusted-microsoft-suffixes string |Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.|
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it's omitted, the throughput isn't capped.
+
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
+
+`--trusted-microsoft-suffixes` (string) Specifies other domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Ref Azcopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy.md
description: This article provides reference information for the azcopy command.
Previously updated : 07/24/2020 Last updated : 05/26/2022
# azcopy
+Current version: 10.15.0
+ AzCopy is a command-line tool that moves data into and out of Azure Storage. See the [Get started with AzCopy](storage-use-azcopy-v10.md) article to download AzCopy and learn about the ways that you can provide authorization credentials to the storage service. ## Synopsis
To report issues or to learn more about the tool, see [https://github.com/Azure/
## Options
-**--cap-mbps** (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
+`--cap-mbps` (float) Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn't capped.
-**--help** Help for azcopy
+`-h`, `--help` help for azcopy
-**--output-type** (string) Format of the command's output. The choices include: text, json. The default value is `text`. (default `text`)
+`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
-**--trusted-microsoft-suffixes** (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
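A minimal sketch of how these global flags compose with a subcommand (the source path, account, container, and SAS are placeholders), capping throughput at 100 Mbps and emitting JSON output:

```azcopy
azcopy copy "/path/to/dir" "https://[account].blob.core.windows.net/[container]?[SAS]" --recursive=true --cap-mbps 100 --output-type json
```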
## See also
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
description: Learn how to deploy Azure File Sync, from start to finish, using th
Previously updated : 05/24/2022 Last updated : 05/27/2022
We strongly recommend that you read [Planning for an Azure Files deployment](../
# [Portal](#tab/azure-portal)
-1. An Azure file share in the same region that you want to deploy Azure File Sync. For more information, see:
+1. An **Azure file share** in the same region that you want to deploy Azure File Sync. For more information, see:
- [Region availability](file-sync-planning.md#azure-file-sync-region-availability) for Azure File Sync. - [Create a file share](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) for a step-by-step description of how to create a file share.
-1. At least one supported instance of Windows Server or Windows Server cluster to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations).
+2. **SMB security settings** on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+3. At least one supported instance of **Windows Server** to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations).
+4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](https://docs.microsoft.com/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
+
+ > [!NOTE]
+ > The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks. See [Failover Clustering](file-sync-planning.md#failover-clustering) for Azure File Sync.
# [PowerShell](#tab/azure-powershell)
-1. An Azure file share in the same region that you want to deploy Azure File Sync. For more information, see:
+1. An **Azure file share** in the same region that you want to deploy Azure File Sync. For more information, see:
- [Region availability](file-sync-planning.md#azure-file-sync-region-availability) for Azure File Sync. - [Create a file share](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) for a step-by-step description of how to create a file share.
-1. At least one supported instance of Windows Server or Windows Server cluster to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations).
+2. **SMB security settings** on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+3. At least one supported instance of **Windows Server** to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations).
+
+4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](https://docs.microsoft.com/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
+
+ > [!NOTE]
+ > The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks. See [Failover Clustering](file-sync-planning.md#failover-clustering) for Azure File Sync.
-1. The Az PowerShell module may be used with either PowerShell 5.1 or PowerShell 6+. You may use the Az PowerShell module for Azure File Sync on any supported system, including non-Windows systems, however the server registration cmdlet must always be run on the Windows Server instance you are registering (this can be done directly or via PowerShell remoting). On Windows Server 2012 R2, you can verify that you are running at least PowerShell 5.1.\* by looking at the value of the **PSVersion** property of the **$PSVersionTable** object:
+5. The Az PowerShell module may be used with either PowerShell 5.1 or PowerShell 6+. You may use the Az PowerShell module for Azure File Sync on any supported system, including non-Windows systems, however the server registration cmdlet must always be run on the Windows Server instance you are registering (this can be done directly or via PowerShell remoting). On Windows Server 2012 R2, you can verify that you are running at least PowerShell 5.1.\* by looking at the value of the **PSVersion** property of the **$PSVersionTable** object:
```powershell $PSVersionTable.PSVersion
We strongly recommend that you read [Planning for an Azure Files deployment](../
> [!IMPORTANT] > If you plan to use the Server Registration UI, rather than registering directly from PowerShell, you must use PowerShell 5.1.
-1. If you have opted to use PowerShell 5.1, ensure that at least .NET 4.7.2 is installed. Learn more about [.NET Framework versions and dependencies](/dotnet/framework/migration-guide/versions-and-dependencies) on your system.
+6. If you have opted to use PowerShell 5.1, ensure that at least .NET 4.7.2 is installed. Learn more about [.NET Framework versions and dependencies](/dotnet/framework/migration-guide/versions-and-dependencies) on your system.
> [!IMPORTANT] > If you are installing .NET 4.7.2+ on Windows Server Core, you must install with the `quiet` and `norestart` flags or the installation will fail. For example, if installing .NET 4.8, the command would look like the following:
We strongly recommend that you read [Planning for an Azure Files deployment](../
> Start-Process -FilePath "ndp48-x86-x64-allos-enu.exe" -ArgumentList "/q /norestart" -Wait > ```
-1. The Az PowerShell module, which can be installed by following the instructions here: [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+7. The Az PowerShell module, which can be installed by following the instructions here: [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
> [!NOTE] > The Az.StorageSync module is now installed automatically when you install the Az PowerShell module. # [Azure CLI](#tab/azure-cli)
-1. An Azure file share in the same region that you want to deploy Azure File Sync. For more information, see:
+1. An **Azure file share** in the same region that you want to deploy Azure File Sync. For more information, see:
- [Region availability](file-sync-planning.md#azure-file-sync-region-availability) for Azure File Sync. - [Create a file share](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) for a step-by-step description of how to create a file share.
-1. At least one supported instance of Windows Server or Windows Server cluster to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations).
+2. **SMB security settings** on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+3. At least one supported instance of **Windows Server** to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations).
-1. [Install the Azure CLI](/cli/azure/install-azure-cli)
+4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](https://docs.microsoft.com/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
+
+ > [!NOTE]
+ > The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks. See [Failover Clustering](file-sync-planning.md#failover-clustering) for Azure File Sync.
+
+5. [Install the Azure CLI](/cli/azure/install-azure-cli)
If you prefer, you can also use Azure Cloud Shell to complete the steps in this tutorial. Azure Cloud Shell is an interactive shell environment that you use through your browser. Start Cloud Shell by using one of these methods:
We strongly recommend that you read [Planning for an Azure Files deployment](../
- Select the **Cloud Shell** button on the menu bar at the upper right corner in the [Azure portal](https://portal.azure.com)
-1. Sign in.
+6. Sign in.
Sign in using the [az login](/cli/azure/reference-index#az-login) command if you're using a local install of the CLI.
We strongly recommend that you read [Planning for an Azure Files deployment](../
Follow the steps displayed in your terminal to complete the authentication process.
-1. Install the [az filesync](/cli/azure/storagesync) Azure CLI extension.
+7. Install the [az filesync](/cli/azure/storagesync) Azure CLI extension.
```azurecli az extension add --name storagesync
The Azure File Sync agent is a downloadable package that enables Windows Server
You can download the agent from the [Microsoft Download Center](https://go.microsoft.com/fwlink/?linkid=858257). When the download is finished, double-click the MSI package to start the Azure File Sync agent installation. > [!IMPORTANT]
-> If you intend to use Azure File Sync with a Failover Cluster, the Azure File Sync agent must be installed on every node in the cluster. Each node in the cluster must be registered to work with Azure File Sync. The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks. See [Failover Clustering](file-sync-planning.md#failover-clustering) for Azure File Sync.
+> If you are using Azure File Sync with a Failover Cluster, the Azure File Sync agent must be installed on every node in the cluster. Each node in the cluster must be registered to work with Azure File Sync.
We recommend that you do the following: - Leave the default installation path (C:\Program Files\Azure\StorageSyncAgent), to simplify troubleshooting and server maintenance.
The default maximum number of VSS snapshots per volume (64) as well as the defau
If a maximum of 64 VSS snapshots per volume is not the correct setting for you, then [change that value via a registry key](/windows/win32/backup/registry-keys-for-backup-and-restore#maxshadowcopies). For the new limit to take effect, you need to re-run the cmdlet to enable previous version compatibility on every volume it was previously enabled, with the -Force flag to take the new maximum number of VSS snapshots per volume into account. This will result in a newly calculated number of compatible days. Please note that this change will only take effect on newly tiered files and overwrite any customizations on the VSS schedule you might have made.
+VSS snapshots by default can consume up to 10% of the volume space. To adjust the amount of storage that can be used for VSS snapshots, use the [vssadmin resize shadowstorage](https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc788050(v=ws.11)) command.
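For instance, a sketch run from an elevated prompt (the volume letter and the 20% limit are assumptions, not recommendations):

```powershell
# Example only: allow VSS snapshot storage for volume E: to use up to 20% of that volume
vssadmin resize shadowstorage /For=E: /On=E: /MaxSize=20%
```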
+ <a id="proactive-recall"></a> ## Proactively recall new and changed files from an Azure file share
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
description: Plan for a deployment with Azure File Sync, a service that allows y
Previously updated : 04/05/2022 Last updated : 05/27/2022
We'll use an example to illustrate how to estimate the amount of free space woul
In this case, Azure File Sync would need about 209,500,000 KiB (209.5 GiB) of space for this namespace. Add this amount to any additional free space that is desired in order to figure out how much free space is required for this disk. ### Failover Clustering
-1. Windows Server Failover Clustering is supported by Azure File Sync for the "File Server for general use" deployment option.
+1. Windows Server Failover Clustering is supported by Azure File Sync for the "File Server for general use" deployment option. For more information on how to configure the "File Server for general use" role on a Failover Cluster, see [Deploying a two-node clustered file server](https://docs.microsoft.com/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
2. The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks 3. Failover Clustering is not supported on "Scale-Out File Server for application data" (SOFS) or on Clustered Shared Volumes (CSVs) or local disks.
storage File Sync Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot.md
If a server is not listed under **Registered servers** for a Storage Sync Servic
### Cloud endpoint creation errors
-<a id="cloud-endpoint-using-share"></a>**Cloud endpoint creation fails, with this error: "The specified Azure FileShare is already in use by a different CloudEndpoint"**
-This error occurs if the Azure file share is already in use by another cloud endpoint.
-
-If you see this message and the Azure file share currently is not in use by a cloud endpoint, complete the following steps to clear the Azure File Sync metadata on the Azure file share:
-
-> [!Warning]
-> Deleting the metadata on an Azure file share that is currently in use by a cloud endpoint causes Azure File Sync operations to fail. If you then use this file share for sync in a different sync group, data loss for files in the old sync group is almost certain.
-
-1. In the Azure portal, go to your Azure file share.  
-2. Right-click the Azure file share, and then select **Edit metadata**.
-3. Right-click **SyncService**, and then select **Delete**.
+<a id="cloud-endpoint-mgmtinternalerror"></a>**Cloud endpoint creation fails, with this error: "MgmtInternalError"**
+This error can occur if the Azure File Sync service cannot access the storage account due to SMB security settings. To enable Azure File Sync to access the storage account, the SMB security settings on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
<a id="cloud-endpoint-authfailed"></a>**Cloud endpoint creation fails, with this error: "AuthorizationFailed"** This error occurs if your user account doesn't have sufficient rights to create a cloud endpoint.
To determine whether your user account role has the required permissions:
* **Role assignment** should have **Read** and **Write** permissions. * **Role definition** should have **Read** and **Write** permissions.
+<a id="cloud-endpoint-using-share"></a>**Cloud endpoint creation fails, with this error: "The specified Azure FileShare is already in use by a different CloudEndpoint"**
+This error occurs if the Azure file share is already in use by another cloud endpoint.
+
+If you see this message and the Azure file share currently is not in use by a cloud endpoint, complete the following steps to clear the Azure File Sync metadata on the Azure file share:
+
+> [!Warning]
+> Deleting the metadata on an Azure file share that is currently in use by a cloud endpoint causes Azure File Sync operations to fail. If you then use this file share for sync in a different sync group, data loss for files in the old sync group is almost certain.
+
+1. In the Azure portal, go to your Azure file share.  
+2. Right-click the Azure file share, and then select **Edit metadata**.
+3. Right-click **SyncService**, and then select **Delete**.
+ ### Server endpoint creation and deletion errors <a id="-2134375898"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134375898 or 0x80c80226)**
Verify you have the latest Azure File Sync agent version installed and give the
This error occurs if the firewall and virtual network settings are enabled on the storage account and the "Allow trusted Microsoft services to access this storage account" exception is not checked. To resolve this issue, follow the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings) section in the deployment guide.
-<a id="-2147024891"></a>**Sync failed because permissions on the System Volume Information folder are incorrect.**
+<a id="-2147024891"></a>**Sync failed with access denied due to security settings on the storage account or NTFS permissions on the server.**
| Error | Code | |-|-|
This error occurs if the firewall and virtual network settings are enabled on th
| **Error string** | ERROR_ACCESS_DENIED | | **Remediation required** | Yes |
-This error can occur if the NT AUTHORITY\SYSTEM account does not have permissions to the System Volume Information folder on the volume where the server endpoint is located. Note, if individual files are failing to sync with ERROR_ACCESS_DENIED, perform the steps documented in the [Troubleshooting per file/directory sync errors](?tabs=portal1%252cazure-portal#troubleshooting-per-filedirectory-sync-errors) section.
+This error can occur if Azure File Sync cannot access the storage account due to security settings or if the NT AUTHORITY\SYSTEM account does not have permissions to the System Volume Information folder on the volume where the server endpoint is located. Note, if individual files are failing to sync with ERROR_ACCESS_DENIED, perform the steps documented in the [Troubleshooting per file/directory sync errors](?tabs=portal1%252cazure-portal#troubleshooting-per-filedirectory-sync-errors) section.
-To resolve this issue, perform the following steps:
+1. Verify the **SMB security settings** on the storage account are allowing **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings)
+3. Verify the **NT AUTHORITY\SYSTEM** account has permissions to the System Volume Information folder on the volume where the server endpoint is located by performing the following steps:
-1. Download [Psexec](/sysinternals/downloads/psexec) tool.
-2. Run the following command from an elevated command prompt to launch a command prompt using the system account: `PsExec.exe -i -s -d cmd`
-3. From the command prompt running under the system account, run the following command to confirm the NT AUTHORITY\SYSTEM account does not have access to the System Volume Information folder: `cacls "drive letter:\system volume information" /T /C`
-4. If the NT AUTHORITY\SYSTEM account does not have access to the System Volume Information folder, run the following command: `cacls "drive letter:\system volume information" /T /E /G "NT AUTHORITY\SYSTEM:F"`
- - If step #4 fails with access denied, run the following command to take ownership of the System Volume Information folder and then repeat step #4: `takeown /A /R /F "drive letter:\System Volume Information"`
+ a. Download [Psexec](/sysinternals/downloads/psexec) tool.
+ b. Run the following command from an elevated command prompt to launch a command prompt using the system account: `PsExec.exe -i -s -d cmd`
+ c. From the command prompt running under the system account, run the following command to confirm the NT AUTHORITY\SYSTEM account does not have access to the System Volume Information folder: `cacls "drive letter:\system volume information" /T /C`
+ d. If the NT AUTHORITY\SYSTEM account does not have access to the System Volume Information folder, run the following command: `cacls "drive letter:\system volume information" /T /E /G "NT AUTHORITY\SYSTEM:F"`
+ - If step #d fails with access denied, run the following command to take ownership of the System Volume Information folder and then repeat step #d: `takeown /A /R /F "drive letter:\System Volume Information"`
<a id="-2134375810"></a>**Sync failed because the Azure file share was deleted and recreated.**
storage Storage Files Configure P2s Vpn Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-windows.md
description: How to configure a Point-to-Site (P2S) VPN on Windows for use with
Previously updated : 10/19/2019 Last updated : 05/27/2022
The article details the steps to configure a Point-to-Site VPN on Windows (Windo
- A virtual network with a private endpoint for the storage account containing the Azure file share you want to mount on-premises. To learn more about how to create a private endpoint, see [Configuring Azure Files network endpoints](storage-files-networking-endpoints.md?tabs=azure-powershell).
+- A [gateway subnet](/azure/vpn-gateway/vpn-gateway-about-vpn-gateway-settings#gwsub) must be created on the virtual network.
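If the gateway subnet doesn't exist yet, a minimal PowerShell sketch like the following can add it (the resource group, virtual network name, and address prefix are placeholders; the subnet must use the reserved name GatewaySubnet):

```PowerShell
$vnet = Get-AzVirtualNetwork -ResourceGroupName "<resource-group>" -Name "<vnet-name>"

# The gateway subnet must use the reserved name "GatewaySubnet".
Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "192.168.255.0/27" -VirtualNetwork $vnet
$vnet | Set-AzVirtualNetwork
```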
+ ## Collect environment information
-In order to set up the point-to-site VPN, we first need to collect some information about your environment for use throughout the guide. See the [prerequisites](#prerequisites) section if you have not already created a storage account, virtual network, and/or private endpoints.
+To set up the point-to-site VPN, you first need to collect some information about your environment for use throughout this guide. See the [prerequisites](#prerequisites) section if you have not already created a storage account, virtual network, gateway subnet, and/or private endpoints.
Remember to replace `<resource-group>`, `<vnet-name>`, `<subnet-name>`, and `<storage-account-name>` with the appropriate values for your environment.
foreach($line in $rawRootCertificate) {
``` ## Deploy virtual network gateway
-The Azure virtual network gateway is the service that your on-premises Windows machines will connect to. Deploying this service requires two basic components: a public IP that will identify the gateway to your clients wherever they are in the world and a root certificate you created earlier which will be used to authenticate your clients.
+The Azure virtual network gateway is the service that your on-premises Windows machines will connect to. Before deploying the virtual network gateway, a [gateway subnet](/azure/vpn-gateway/vpn-gateway-about-vpn-gateway-settings#gwsub) must be created on the virtual network.
+
+Deploying this service requires two basic components:
+
+1. A public IP address that will identify the gateway to your clients wherever they are in the world.
+2. The root certificate you created earlier, which will be used to authenticate your clients.
-Remember to replace `<desired-vpn-name-here>` with the name you would like for these resources.
+Remember to replace `<desired-vpn-name-here>` and `<desired-region-here>` in the following script with the proper values for these variables.
> [!Note] > Deploying the Azure virtual network gateway can take up to 45 minutes. While this resource is being deployed, this PowerShell script will block until the deployment is complete. This is expected.
Remember to replace `<desired-vpn-name-here>` with the name you would like for t
```PowerShell $vpnName = "<desired-vpn-name-here>" $publicIpAddressName = "$vpnName-PublicIP"
+$region = "<desired-region-here>"
$publicIPAddress = New-AzPublicIpAddress ` -ResourceGroupName $resourceGroupName `
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
uname -r
If the connection was successful, you should see something similar to the following output:
- ```ouput
+ ```output
Connection to <your-storage-account> 445 port [tcp/microsoft-ds] succeeded! ```
storage Storage How To Use Files Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-mac.md
description: Learn how to mount an Azure file share over SMB with macOS using Fi
Previously updated : 09/23/2020 Last updated : 05/26/2022
[Azure Files](storage-files-introduction.md) is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted with the industry-standard SMB 3 protocol on macOS High Sierra 10.13+. This article shows two different ways to mount an Azure file share on macOS: with the Finder UI and using the Terminal. ## Prerequisites for mounting an Azure file share on macOS
-* **Storage account name**: To mount an Azure file share, you will need the name of the storage account.
+* **Storage account name**: To mount an Azure file share, you'll need the name of the storage account.
-* **Storage account key**: To mount an Azure file share, you will need the primary (or secondary) storage key. SAS keys are not currently supported for mounting.
+* **Storage account key**: To mount an Azure file share, you'll need the primary (or secondary) storage key. SAS keys are not currently supported for mounting.
-* **Ensure port 445 is open**: SMB communicates over TCP port 445. On your client machine (the Mac), check to make sure your firewall is not blocking TCP port 445.
+* **Ensure port 445 is open**: SMB communicates over TCP port 445. On your client machine (the Mac), check to make sure your firewall isn't blocking TCP port 445. If your organization or ISP is blocking port 445, you may need to set up a VPN from on-premises to your Azure storage account with Azure Files exposed on your internal network using private endpoints, so that the traffic will go through a secure tunnel as opposed to over the internet. For more information, see [Networking considerations for direct Azure file share access](storage-files-networking-overview.md). To see the summary of ISPs that allow or disallow access from port 445, go to [TechNet](https://social.technet.microsoft.com/wiki/contents/articles/32346.azure-summary-of-isps-that-allow-disallow-access-from-port-445.aspx).
## Applies to | File share type | SMB | NFS |
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ## Mount an Azure file share via Finder
-1. **Open Finder**: Finder is open on macOS by default, but you can ensure it is the currently selected application by clicking the "macOS face icon" on the dock:
+1. **Open Finder**: Finder is open on macOS by default, but you can ensure that it's the currently selected application by clicking the "macOS face icon" on the dock:
![The macOS face icon](./media/storage-how-to-use-files-mac/mount-via-finder-1.png) 2. **Select "Connect to Server" from the "Go" Menu**: Using the UNC path from the prerequisites, convert the beginning double backslash (`\\`) to `smb://` and all other backslashes (`\`) to forward slashes (`/`). Your link should look like the following:
synapse-analytics Migrate To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/migrate-to-synapse-analytics-guide.md
Consider using Azure Synapse Analytics when you:
- Need the ability to scale compute and storage. - Want to save on costs by pausing compute resources when you don't need them.
-Rather than Azure Synapse Analytics, consider other options for operational (OLTP) workloads that have:
+Rather than Azure Synapse Analytics, consider other options for operational online transaction processing (OLTP) workloads that have:
- High frequency reads and writes. - Large numbers of singleton selects.
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/1-design-performance-migration.md
Title: "Design and performance for Netezza migrations"
-description: Learn how Netezza and Azure Synapse SQL databases differ in their approach to high query performance on exceptionally large data volumes.
+description: Learn how Netezza and Azure Synapse Analytics SQL databases differ in their approach to high query performance on exceptionally large data volumes.
This article is part one of a seven part series that provides guidance on how to
## Overview
+Due to the end of support from IBM, many existing users of Netezza data warehouse systems want to take advantage of the innovations provided by newer environments such as cloud, IaaS, and PaaS, and to delegate tasks like infrastructure maintenance and platform development to the cloud provider.
+ > [!TIP] > More than just a database&mdash;the Azure environment includes a comprehensive set of capabilities and tools.
-Due to end of support from IBM, many existing users of Netezza data warehouse systems want to take advantage of the innovations provided by newer environments such as cloud, IaaS, and PaaS, and to delegate tasks like infrastructure maintenance and platform development to the cloud provider.
-
-Although Netezza and Azure Synapse are both SQL databases designed to use massively parallel processing (MPP) techniques to achieve high query performance on exceptionally large data volumes, there are some basic differences in approach:
+Although Netezza and Azure Synapse Analytics are both SQL databases designed to use massively parallel processing (MPP) techniques to achieve high query performance on exceptionally large data volumes, there are some basic differences in approach:
- Legacy Netezza systems are often installed on-premises and use proprietary hardware, while Azure Synapse is cloud-based and uses Azure storage and compute resources.
Azure Synapse provides best-of-breed relational database performance by using te
- Reduced storage and disaster recovery costs. -- Lower overall TCO and better cost control (OPEX).
+- Lower overall TCO, better cost control, and streamlined operational expenditure (OPEX).
To maximize these benefits, migrate new or existing data and applications to the Azure Synapse platform. In many organizations, this will include migrating an existing data warehouse from legacy on-premises platforms such as Netezza. At a high level, the basic process includes these steps:
Legacy Netezza environments have typically evolved over time to encompass multip
- Create a template for further migrations specific to the source Netezza environment and the current tools and processes that are already in place.
-A good candidate for an initial migration from the Netezza environment that would enable the items above, is typically one that implements a BI/Analytics workload (rather than an OLTP workload) with a data model that can be migrated with minimal modifications&mdash;normally a start or snowflake schema.
+A good candidate for an initial migration from the Netezza environment that would enable the items above is typically one that implements a BI/Analytics workload, rather than an online transaction processing (OLTP) workload, with a data model that can be migrated with minimal modifications&mdash;normally a star or snowflake schema.
The migration data volume for the initial exercise should be large enough to demonstrate the capabilities and benefits of the Azure Synapse environment while quickly showing the value&mdash;typically in the 1-10 TB range.
However, it's important to understand where performance optimizations such as in
Netezza implements some database objects that aren't directly supported in Azure Synapse, but there are methods to achieve the same functionality within the new environment: -- Zone Maps&mdash;In Netezza, zone maps are automatically created and maintained for some column types and are used at query time to restrict the amount of data to be scanned. Zone Maps are created on the following column types:
+- Zone Maps: In Netezza, zone maps are automatically created and maintained for some column types and are used at query time to restrict the amount of data to be scanned. Zone Maps are created on the following column types:
- `INTEGER` columns of length 8 bytes or less. - Temporal columns. For instance, `DATE`, `TIME`, and `TIMESTAMP`. - `CHAR` columns, if these are part of a materialized view and mentioned in the `ORDER BY` clause. You can find out which columns have zone maps by using the `nz_zonemap` utility, which is part of the NZ Toolkit. Azure Synapse doesn't include zone maps, but you can achieve similar results by using other user-defined index types and/or partitioning. -- Clustered Base tables (CBT)&mdash;In Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records on restrictive CBT via allows Netezza to group records in same or nearby extents. This process also creates zone maps that improve the performance by reducing the amount of data to be scanned.
+- Clustered Base tables (CBT): In Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records in a restrictive CBT allows Netezza to group records in the same or nearby extents. This process also creates zone maps that improve performance by reducing the amount of data to be scanned.
In Azure Synapse, you can achieve a similar effect by using partitioning and/or other indexes; a sketch follows this list. -- Materialized views&mdash;Netezza supports materialized views and recommends creating one or more of these over large tables having many columns where only a few of those columns are regularly used in queries. The system automatically maintains materialized views when data in the base table is updated.
+- Materialized views: Netezza supports materialized views and recommends creating one or more of these over large tables having many columns where only a few of those columns are regularly used in queries. The system automatically maintains materialized views when data in the base table is updated.
Azure Synapse supports materialized views, with the same functionality as Netezza.
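As an illustration of both workarounds, the following sketch (with hypothetical table and column names) creates a date-partitioned, clustered columnstore fact table in a dedicated SQL pool, which limits the data scanned in much the same way that zone maps and CBTs do, and then defines a materialized view over it:

```sql
-- Hypothetical fact table: RANGE partitioning on the date key, together with
-- the clustered columnstore index, restricts the data scanned for
-- date-bounded queries, similar in effect to Netezza zone maps and CBTs.
CREATE TABLE dbo.FactSales
(
    SaleDateKey INT           NOT NULL,
    CustomerKey INT           NOT NULL,
    ProductKey  INT           NOT NULL,
    SaleAmount  DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (SaleDateKey RANGE RIGHT FOR VALUES (20220101, 20220201, 20220301))
);

-- Materialized view serving the same purpose as a Netezza materialized view;
-- Azure Synapse maintains it automatically as the base table changes.
CREATE MATERIALIZED VIEW dbo.mvSalesByProduct
WITH (DISTRIBUTION = HASH(ProductKey))
AS
SELECT ProductKey, COUNT_BIG(*) AS SaleCount, SUM(SaleAmount) AS TotalSales
FROM dbo.FactSales
GROUP BY ProductKey;
```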
Azure Synapse Analytics also supports stored procedures using T-SQL. If you must
In Netezza, a sequence is a named database object created via `CREATE SEQUENCE` that can provide a unique value via the `NEXT VALUE FOR` method. Use these to generate unique numbers for use as surrogate key values for primary keys.
-Within Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled via use of [IDENTITY](/sql/t-sql/statements/create-table-transact-sql-identity-property?msclkid=8ab663accfd311ec87a587f5923eaa7b) columns or using SQL code to create the next sequence number in a series.
+Within Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled via use of [IDENTITY](/sql/t-sql/statements/create-table-transact-sql-identity-property?msclkid=8ab663accfd311ec87a587f5923eaa7b) columns or SQL code to create the next sequence number in a series.
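As a minimal sketch of that pattern, assuming a hypothetical dimension table, an `IDENTITY` column in a dedicated SQL pool generates the surrogate key values that a Netezza sequence would otherwise supply:

```sql
-- Netezza pattern being replaced (for reference):
--   CREATE SEQUENCE customer_seq;
--   SELECT NEXT VALUE FOR customer_seq;
-- Azure Synapse equivalent: an IDENTITY surrogate key. Values are unique but
-- not guaranteed to be contiguous in a distributed table.
CREATE TABLE dbo.DimCustomer
(
    CustomerKey  INT IDENTITY(1,1) NOT NULL,
    CustomerId   VARCHAR(20)       NOT NULL,
    CustomerName NVARCHAR(100)     NOT NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    CLUSTERED COLUMNSTORE INDEX
);
```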
### Extract metadata and data from a Netezza environment
You can edit existing Netezza CREATE TABLE and CREATE VIEW scripts to create the
However, all the information that specifies the current definitions of tables and views within the existing Netezza environment is maintained within system catalog tables. These tables are the best source of this information, as it's guaranteed to be up to date and complete. User-maintained documentation may not be in sync with the current table definitions.
-Access the information in these tables via utilities such as `nz_ddl_table` and generate the `CREATE TABLE DDL` statements for the equivalent tables in Azure Synapse.
+Access the information in these tables via utilities such as `nz_ddl_table` and generate the `CREATE TABLE` DDL statements for the equivalent tables in Azure Synapse.
Third-party migration and ETL tools also use the catalog information to achieve the same result.
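To make the translation concrete, here is a hedged sketch of how DDL emitted by `nz_ddl_table` for a hypothetical table might be rewritten for a dedicated SQL pool: the Netezza `DISTRIBUTE ON` clause maps to a `DISTRIBUTION = HASH(...)` option, and column types are mapped to their closest T-SQL equivalents.

```sql
-- Netezza DDL as generated by nz_ddl_table (illustrative only):
--   CREATE TABLE sales.orders
--   (
--       order_id   INTEGER NOT NULL,
--       order_date DATE,
--       amount     NUMERIC(18,2)
--   )
--   DISTRIBUTE ON (order_id);

-- Equivalent Azure Synapse dedicated SQL pool definition:
CREATE TABLE sales.orders
(
    order_id   INT           NOT NULL,
    order_date DATE          NULL,
    amount     DECIMAL(18,2) NULL
)
WITH
(
    DISTRIBUTION = HASH(order_id),
    CLUSTERED COLUMNSTORE INDEX
);
```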
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/2-etl-load-migration-considerations.md
Title: "Data migration, ETL, and load for Netezza migration"
-description: Learn how to plan your data migration from Netezza to Azure Synapse to minimize the risk and impact on users.
+description: Learn how to plan your data migration from Netezza to Azure Synapse Analytics to minimize the risk and impact on users.
This query uses the helper function `FORMAT_TABLE_ACCESS` and the digit at the e
This question comes up often because companies want to lower the impact of changes on the data warehouse data model to improve agility, and they see a migration as an opportunity to modernize that model. This approach carries a higher risk because it could impact the ETL jobs that populate the data warehouse and the downstream feeds from the data warehouse to dependent data marts. Because of that risk, it's usually better to redesign on this scale after the data warehouse migration.
-Even if a data model change is an intended part of the overall migration, it's good practice to migrate the existing model as-is to the new environment (Azure Synapse in this case), rather than do any re-engineering on the new platform during migration. This approach has the advantage of minimizing the impact on existing production systems, while also leveraging the performance and elastic scalability of the Azure platform for one-off re-engineering tasks.
+Even if a data model change is an intended part of the overall migration, it's good practice to migrate the existing model as-is to the new environment (Azure Synapse Analytics in this case), rather than do any re-engineering on the new platform during migration. This approach has the advantage of minimizing the impact on existing production systems, while also leveraging the performance and elastic scalability of the Azure platform for one-off re-engineering tasks.
When migrating from Netezza, often the existing data model is already suitable for as-is migration to Azure Synapse.
There's another potential benefit to this approach: by implementing the aggregat
The primary drivers for choosing a virtual data mart implementation over a physical data mart are: -- More agility&mdash;a virtual data mart is easier to change than physical tables and the associated ETL processes.
+- More agility, since a virtual data mart is easier to change than physical tables and the associated ETL processes.
-- Lower total cost of ownership&mdash;a virtualized implementation requires fewer data stores and copies of data.
+- Lower total cost of ownership, since a virtualized implementation requires fewer data stores and copies of data.
- Elimination of ETL jobs to migrate and simplify data warehouse architecture in a virtualized environment. -- Performance&mdash;although physical data marts have historically been more performant, virtualization products now implement intelligent caching techniques to mitigate.
+- Performance, since although physical data marts have historically been more performant, virtualization products now implement intelligent caching techniques to mitigate this difference.
### Data migration from Netezza
Azure Data Factory can be used to move data from a legacy Netezza environment. F
> [!TIP] > Plan the approach to ETL migration ahead of time and leverage Azure facilities where appropriate.
-For ETL/ELT processing, legacy Netezza data warehouses may use custom-built scripts using Netezza utilities such as nzsql and nzload, or third-party ETL tools such as Informatica or Ab Initio. Sometimes, Netezza data warehouses use a combination of ETL and ELT approaches that's evolved over time. When planning a migration to Azure Synapse, you need to determine the best way to implement the required ETL/ELT processing in the new environment, while minimizing the cost and risk involved. To learn more about ETL and ELT processing, see [ELT vs ETL Design approach](../../sql-data-warehouse/design-elt-data-loading.md).
+For ETL/ELT processing, legacy Netezza data warehouses may use custom-built scripts using Netezza utilities such as nzsql and nzload, or third-party ETL tools such as Informatica or Ab Initio. Sometimes, Netezza data warehouses use a combination of ETL and ELT approaches that's evolved over time. When planning a migration to Azure Synapse, you need to determine the best way to implement the required ETL/ELT processing in the new environment, while minimizing the cost and risk involved. To learn more about ETL and ELT processing, see [ELT vs ETL design approach](../../sql-data-warehouse/design-elt-data-loading.md).
The following sections discuss migration options and make recommendations for various use cases. This flowchart summarizes one approach:
The following sections discuss migration options and make recommendations for va
The first step is always to build an inventory of ETL/ELT processes that need to be migrated. As with other steps, it's possible that the standard 'built-in' Azure features make it unnecessary to migrate some existing processes. For planning purposes, it's important to understand the scale of the migration to be performed.
-In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](../../../data-factory/concepts-pipelines-activities.md?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
+In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](../../../data-factory/concepts-pipelines-activities.md?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
> [!TIP] > Leverage investment in existing third-party tools to reduce cost and risk.
When it comes to migrating data from a Netezza data warehouse, there are some ba
Once the database tables to be migrated have been created in Azure Synapse, you can move the data to populate those tables out of the legacy Netezza system and loaded into the new environment. There are two basic approaches: -- **File Extract**&mdash;Extract the data from the Netezza tables to flat files, normally in CSV format, via nzsql with the -o option or via the `CREATE EXTERNAL TABLE` statement. Use an external table whenever possible since it's the most efficient in terms of data throughput. The following SQL example, creates a CSV file via an external table:
+- **File extract**: Extract the data from the Netezza tables to flat files, normally in CSV format, via nzsql with the -o option or via the `CREATE EXTERNAL TABLE` statement. Use an external table whenever possible since it's the most efficient in terms of data throughput. The following SQL example creates a CSV file via an external table:
```sql CREATE EXTERNAL TABLE '/data/export.csv' USING (delimiter ',')
Once the database tables to be migrated have been created in Azure Synapse, you
Microsoft provides various options to move large volumes of data, including AzCopy for moving files across the network into Azure Storage, Azure ExpressRoute for moving bulk data over a private network connection, and Azure Data Box for files moving to a physical storage device that's then shipped to an Azure data center for loading. For more information, see [data transfer](/azure/architecture/data-guide/scenarios/data-transfer). -- **Direct extract and load across network**&mdash;The target Azure environment sends a data extract request, normally via a SQL command, to the legacy Netezza system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to land the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Netezza database and the Azure environment. For very large data volumes, this approach may not be practical.
+- **Direct extract and load across network**: The target Azure environment sends a data extract request, normally via a SQL command, to the legacy Netezza system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to land the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Netezza database and the Azure environment. For very large data volumes, this approach may not be practical.
There's also a hybrid approach that uses both methods. For example, you can use the direct network extract approach for smaller dimension tables and samples of the larger fact tables to quickly provide a test environment in Azure Synapse. For large volume historical fact tables, you can use the file extract and transfer approach using Azure Data Box.
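Putting the file extract approach end to end, the following sketch (with hypothetical table, file path, and storage account names) unloads a Netezza table to a delimited file via a transient external table and then, once the file has been copied to Azure Blob Storage, loads it into the target table with `COPY INTO`:

```sql
-- On Netezza (run via nzsql): unload the table to a flat file.
CREATE EXTERNAL TABLE '/data/export/orders.csv'
USING (DELIMITER ',')
AS SELECT * FROM sales.orders;

-- On Azure Synapse (after copying the file to Blob Storage, for example with
-- AzCopy or Azure Data Box): load the file into the migrated table.
COPY INTO sales.orders
FROM 'https://<storage-account>.blob.core.windows.net/migration/orders.csv'
WITH
(
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ',',
    CREDENTIAL = (IDENTITY = 'Storage Account Key', SECRET = '<storage-account-key>')
);
```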
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/3-security-access-operations.md
Title: "Security, access, and operations for Netezza migrations"
-description: Learn about authentication, users, roles, permissions, monitoring, and auditing, and workload management in Azure Synapse and Netezza.
+description: Learn about authentication, users, roles, permissions, monitoring, auditing, and workload management in Azure Synapse Analytics and Netezza.
This article is part three of a seven part series that provides guidance on how
## Security considerations
-This article discusses the methods of connection for existing legacy Netezza environments and how they can be migrated to Azure Synapse with minimal risk and user impact.
+This article discusses the methods of connection for existing legacy Netezza environments and how they can be migrated to Azure Synapse Analytics with minimal risk and user impact.
-It's assumed that there's a requirement to migrate the existing methods of connection and user/role/permission structure as-is. If this isn't the case, then use Azure utilities such as Azure portal to create and manage a new security regime.
+We assume there's a requirement to migrate the existing methods of connection and user, role, and permission structure as is. If this isn't the case, then you can use Azure utilities from the Azure portal to create and manage a new security regime.
-For more information on the [Azure Synapse security](../../sql-data-warehouse/sql-data-warehouse-overview-manage-security.md#authorization) options see [Security whitepaper](../../guidance/security-white-paper-introduction.md).
+For more information on the [Azure Synapse security](../../sql-data-warehouse/sql-data-warehouse-overview-manage-security.md#authorization) options, see [Security whitepaper](../../guidance/security-white-paper-introduction.md).
### Connection and authentication
Netezza administration tasks typically fall into two categories:
IBM&reg; Netezza&reg; offers several ways or interfaces that you can use to perform the various system and database management tasks: -- Netezza commands (nz* commands) are installed in the /nz/kit/bin directory on the Netezza host. For many of the nz* commands, you must be able to sign into the Netezza system to access and run those commands. In most cases, users sign in as the default nz user account, but you can create other Linux user accounts on your system. Some commands require you to specify a database user account, password, and database to ensure that you've permission to do the task.
+- Netezza commands (nz* commands) are installed in the /nz/kit/bin directory on the Netezza host. For many of the nz* commands, you must be able to sign into the Netezza system to access and run those commands. In most cases, users sign in as the default nz user account, but you can create other Linux user accounts on your system. Some commands require you to specify a database user account, password, and database to ensure that you have permission to do the task.
-- The Netezza CLI client kits package a subset of the nz* commands that can be run from Windows and UNIX client systems. The client commands might also require you to specify a database user account, password, and database to ensure that you've database administrative and object permissions to perform the task.
+- The Netezza CLI client kits package a subset of the nz* commands that can be run from Windows and UNIX client systems. The client commands might also require you to specify a database user account, password, and database to ensure that you have database administrative and object permissions to perform the task.
- The SQL commands support administration tasks and queries within a SQL database session. You can run the SQL commands from the Netezza nzsql command interpreter or through SQL APIs such as ODBC, JDBC, and the OLE DB Provider. You must have a database user account to run the SQL commands with appropriate permissions for the queries and tasks that you perform.
The portal also enables integration with other Azure monitoring services such as
> [!TIP] > Low-level and system-wide metrics are automatically logged in Azure Synapse.
-Resource utilization statistics for the Azure Synapse are automatically logged within the system. The metrics include usage statistics for CPU, memory, cache, I/O and temporary workspace for each query as well as connectivity information&mdash;such as failed connection attempts.
+
+Resource utilization statistics for Azure Synapse are automatically logged within the system. The metrics for each query include usage statistics for CPU, memory, cache, I/O, and temporary workspace, as well as connectivity information like failed connection attempts.
Azure Synapse provides a set of [Dynamic management views](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md?msclkid=3e6eefbccfe211ec82d019ada29b1834) (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
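For example, a quick query against one of those DMVs surfaces the longest-running requests; this is a minimal sketch, and the column list can be extended with `resource_class`, `label`, and similar columns as needed:

```sql
-- Longest-running requests in the dedicated SQL pool, most expensive first.
SELECT TOP 10
    request_id,
    session_id,
    status,
    submit_time,
    total_elapsed_time,   -- elapsed time in milliseconds
    command
FROM sys.dm_pdw_exec_requests
ORDER BY total_elapsed_time DESC;
```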
Distributed Replicated Block Device (DRBD) is a block device driver that mirrors
> [!TIP] > Azure Synapse creates snapshots automatically to ensure fast recovery times.
-Azure Synapse uses database snapshots to provide high availability of the warehouse. A data warehouse snapshot creates a restore point that can be used to recover or copy a data warehouse to a previous state. Since Azure Synapse is a distributed system, a data warehouse snapshot consists of many files that are in Azure storage. Snapshots capture incremental changes from the data stored in your data warehouse.
+Azure Synapse uses database snapshots to provide high availability of the warehouse. A data warehouse snapshot creates a restore point that can be used to recover or copy a data warehouse to a previous state. Since Azure Synapse is a distributed system, a data warehouse snapshot consists of many files that are in Azure Storage. Snapshots capture incremental changes from the data stored in your data warehouse.
> [!TIP] > Use user-defined snapshots to define a recovery point before key updates.
As well as the snapshots described previously, Azure Synapse also performs as st
### Workload management > [!TIP]
-> In a production data warehouse, there are typically mixed workloads which have different resource usage characteristics running concurrently.
+> In a production data warehouse, there are typically mixed workloads with different resource usage characteristics running concurrently.
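In Azure Synapse, mixed workloads like these can be isolated and prioritized declaratively through workload groups and classifiers; the following is a minimal sketch with hypothetical names and resource percentages:

```sql
-- Reserve a share of resources for reporting queries...
CREATE WORKLOAD GROUP ReportingGroup
WITH
(
    MIN_PERCENTAGE_RESOURCE = 20,
    CAP_PERCENTAGE_RESOURCE = 60,
    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 5
);

-- ...and route requests from a hypothetical reporting login into that group.
CREATE WORKLOAD CLASSIFIER ReportingClassifier
WITH
(
    WORKLOAD_GROUP = 'ReportingGroup',
    MEMBERNAME = 'reporting_user',
    IMPORTANCE = HIGH
);
```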
Netezza incorporates various features for managing workloads:
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/4-visualization-reporting.md
Title: "Visualization and reporting for Netezza migrations"
-description: Learn about Microsoft and third-party BI tools for reports and visualizations in Azure Synapse compared to Netezza.
+description: Learn about Microsoft and third-party BI tools for reports and visualizations in Azure Synapse Analytics compared to Netezza.
Almost every organization accesses data warehouses and data marts using a range
- Operational applications that request BI on demand, by invoking queries and reports as-a-service on a BI platform, which in turn queries data in the data warehouse or data marts that are being migrated. -- Interactive data science development tools, such as Azure Synapse Spark Notebooks, Azure Machine Learning, RStudio, Jupyter notebooks.
+- Interactive data science development tools, such as Azure Synapse Spark Notebooks, Azure Machine Learning, RStudio, and Jupyter Notebooks.
The migration of visualization and reporting as part of a data warehouse migration program means that all the existing queries, reports, and dashboards generated and issued by these tools and applications need to run on Azure Synapse and yield the same results as they did in the original data warehouse prior to migration.
To make that happen, everything that BI tools and applications depend on needs t
In addition, all the required data needs to be migrated to ensure the same results appear in the same reports and dashboards that now query data on Azure Synapse. User expectation will undoubtedly be that migration is seamless and there will be no surprises that destroy their confidence in the migrated system on Azure Synapse. So, this is an area where you must take extreme care and communicate as much as possible to allay any fears in your user base. Their expectations are that: -- Table structure will be the same if directly referred to in queries
+- Table structure will be the same if directly referred to in queries.
-- Table and column names remain the same if directly referred to in queries; for instance, so that calculated fields defined on columns in BI tools don't fail when aggregate reports are produced
+- Table and column names remain the same if directly referred to in queries; for instance, so that calculated fields defined on columns in BI tools don't fail when aggregate reports are produced.
-- Historical analysis remains the same
+- Historical analysis remains the same.
-- Data types should, if possible, remain the same
+- Data types should, if possible, remain the same.
-- Query behavior remains the same
+- Query behavior remains the same.
-- ODBC / JDBC drivers are tested to make sure nothing has changed in terms of query behavior
+- ODBC/JDBC drivers are tested to make sure nothing has changed in terms of query behavior.
> [!TIP] > Views and SQL queries using proprietary SQL query extensions are likely to result in incompatibilities that impact BI reports and dashboards.
If your existing BI tools run on premises, ensure that they're able to connect t
There's a lot to think about here, so let's look at all this in more detail. > [!TIP]
-> A lift and shift data warehouse migration are likely to minimize any disruption to reports, dashboards, and other visualizations.
+> A lift and shift data warehouse migration is likely to minimize any disruption to reports, dashboards, and other visualizations.
-## Minimize the impact of data warehouse migration on BI tools and reports using data virtualization
+## Minimize the impact of data warehouse migration on BI tools and reports by using data virtualization
> [!TIP] > Data virtualization allows you to shield business users from structural changes during migration so that they remain unaware of changes.
This breaks the dependency between business users utilizing self-service BI tool
> [!TIP] > Schema alterations to tune your data model for Azure Synapse can be hidden from users.
-By introducing data virtualization, any schema alternations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts, and any virtual tables would need to be changed so that users remain unaware of those changes and unaware of the migration. [Microsoft partners](../../partner/data-integration.md) provides a useful data virtualization software.
+By introducing data virtualization, any schema alterations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts, and any virtual tables would need to be changed so that users remain unaware of those changes and unaware of the migration. [Microsoft partners](../../partner/data-integration.md) provide useful data virtualization software.
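Dedicated data virtualization software offers the most complete decoupling, but even a simple layer of views inside Azure Synapse can approximate the idea. In this hedged sketch with hypothetical names, BI tools keep querying the name they always used while the physical table has been renamed and restructured during migration:

```sql
-- BI tools continue to query dbo.sales_summary, unaware that the underlying
-- table was renamed and re-partitioned as part of the migration.
CREATE VIEW dbo.sales_summary
AS
SELECT
    order_id,
    order_date,
    amount AS sale_amount   -- original column name preserved for BI tools
FROM sales.orders_migrated;
```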
## Identify high priority reports to migrate first
A key question when migrating your existing reports and dashboards to Azure Syna
These factors are discussed in more detail later in this article.
-Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer up like-for-like results, simply by pointing your BI tool(s) at Azure Synapse, instead of your legacy data warehouse system, then everyone benefits. Therefore, if it's that straight forward and there's no reliance on legacy system proprietary SQL extensions, then there's no doubt that the above ease of migration option breeds confidence.
+Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer up like-for-like results, simply by pointing your BI tool(s) at Azure Synapse, instead of your legacy data warehouse system, then everyone benefits. Therefore, if it's that straightforward and there's no reliance on legacy system proprietary SQL extensions, then there's no doubt that the above ease of migration option breeds confidence.
### Migrate reports based on usage Usage is interesting, since it's an indicator of business value. Reports and dashboards that are never used clearly aren't contributing to supporting any decisions and don't currently offer any value. So, do you have any mechanism for finding out which reports and dashboards are currently not used? Several BI tools provide statistics on usage, which would be an obvious place to start.
-If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator to the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you've and defining their business purpose and usage statistics.
+If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator to the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you have and defining their business purpose and usage statistics.
-For those that aren't used at all, it's an appropriate time to seek a business decision, to determine if it necessary to de-commission those reports to optimize your migration efforts. A key question worth asking when deciding to de-commission unused reports is: are they unused because people don't know they exist, or is it because they offer no business value, or have they been superseded by others?
+For those that aren't used at all, it's an appropriate time to seek a business decision, to determine if it necessary to decommission those reports to optimize your migration efforts. A key question worth asking when deciding to decommission unused reports is: are they unused because people don't know they exist, or is it because they offer no business value, or have they been superseded by others?
### Migrate reports based on business value
-Usage on its own isn't a clear indicator of business value. There needs to be a deeper business context to determine the value to the business. In an ideal world, we would like to know the contribution of the insights produced in a report to the bottom line of the business. That's exceedingly difficult to determine, since every decision made, and its dependency on the insights in a specific report, would need to be recorded along with the contribution that each decision makes to the bottom line of the business. You would also need to do this overtime.
+Usage on its own isn't a clear indicator of business value. There needs to be a deeper business context to determine the value to the business. In an ideal world, we would like to know the contribution of the insights produced in a report to the bottom line of the business. That's exceedingly difficult to determine, since every decision made, and its dependency on the insights in a specific report, would need to be recorded along with the contribution that each decision makes to the bottom line of the business. You would also need to do this over time.
This level of detail is unlikely to be available in most organizations. One way in which you can get deeper on business value to drive migration order is to look at alignment with business strategy. A business strategy set by your executive typically lays out strategic business objectives, key performance indicators (KPIs), and KPI targets that need to be achieved and who is accountable for achieving them. In that sense, classifying your reports and dashboards by strategic business objectives&mdash;for example, reduce fraud, improve customer engagement, and optimize business operations&mdash;will help understand business purpose and show what objective(s), specific reports, and dashboards these are contributing to. Reports and dashboards associated with high priority objectives in the business strategy can then be highlighted so that migration is focused on delivering business value in a strategic high priority area.
-It's also worthwhile to classify reports and dashboards as operational, tactical, or strategic, to understand the level in the business where they're used. Delivering strategic business objectives contribution is required at all these levels. Knowing which reports and dashboards are used, at what level, and what objectives they're associated with, helps to focus migration on high priority business value that will drive the company forward. Business contribution of reports and dashboards is needed to understand this, perhaps like what is shown in the following **Business strategy objective** table.
+It's also worthwhile to classify reports and dashboards as operational, tactical, or strategic, to understand the level in the business where they're used. Delivering strategic business objectives requires contribution at all these levels. Knowing which reports and dashboards are used, at what level, and what objectives they're associated with, helps to focus migration on high priority business value that will drive the company forward. Business contribution of reports and dashboards is needed to understand this, perhaps like what is shown in the following **Business strategy objective** table.
| **Level** | **Report / dashboard name** | **Business purpose** | **Department used** | **Usage frequency** | **Business priority** | |-|-|-|-|-|-|
While this may seem too time consuming, you need a mechanism to understand the c
> [!TIP] > Data migration strategy could also dictate which reports and visualizations get migrated first.
-If your migration strategy is based on migrating "data marts first", clearly, the order of data mart migration will have a bearing on which reports and dashboards can be migrated first to run on Azure Synapse. Again, this is likely to be a business-value-related decision. Prioritizing which data marts are migrated first reflects business priorities. Metadata discovery tools can help you here by showing you which reports rely on data in which data mart tables.
+If your migration strategy is based on migrating data marts first, the order of data mart migration will have a bearing on which reports and dashboards can be migrated first to run on Azure Synapse. Again, this is likely to be a business-value-related decision. Prioritizing which data marts are migrated first reflects business priorities. Metadata discovery tools can help you here by showing you which reports rely on data in which data mart tables.
## Migration incompatibility issues that can impact reports and visualizations
BI tool reports and dashboards, and other visualizations, are produced by issuin
- Data types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse.
-In many cases, where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it may be able to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same. Either way, it will need refactoring.
+In many cases, where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it's possible to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same result. Either way, it will need refactoring.
> [!TIP] > Querying the system catalog of your legacy warehouse DBMS is a quick and straightforward way to identify schema incompatibilities with Azure Synapse.
A key element in data warehouse migration is the testing of reports and dashboar
- Test analytical functionality.
-For information about how to migrate users, user groups, roles, and privileges, see the [Security, access, and operations for Netezza migrations](3-security-access-operations.md) which is part of this series of articles.
+For information about how to migrate users, user groups, roles, and privileges, see [Security, access, and operations for Netezza migrations](3-security-access-operations.md), which is part of this series.
> [!TIP] > Build an automated test suite to make tests repeatable.
-It's also best practice to automate testing as much as possible, to make each test repeatable and to allow a consistent approach to evaluating results. This works well for known regular reports, and could be managed via [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) orchestration. If you already have a suite of test queries in place for regression testing, you could use the testing tools to automate the post migration testing.
+It's also best practice to automate testing as much as possible, to make each test repeatable and to allow a consistent approach to evaluating results. This works well for known regular reports, and could be managed via [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) orchestration. If you already have a suite of test queries in place for regression testing, you could use the testing tools to automate the post migration testing.
> [!TIP] > Leverage tools that can compare metadata lineage to verify results.
-Ad-hoc analysis and reporting are more challenging and requires a set of tests to be compiled to verify that results are consistent across your legacy data warehouse DBMS and Azure Synapse. If reports and dashboards are inconsistent, then having the ability to compare metadata lineage across original and migrated systems is extremely valuable during migration testing, as it can highlight differences and pinpoint where they occurred when these aren't easy to detect. This is discussed in more detail later in this article.
+Ad-hoc analysis and reporting are more challenging and require a set of tests to be compiled to verify that results are consistent across your legacy data warehouse DBMS and Azure Synapse. If reports and dashboards are inconsistent, then having the ability to compare metadata lineage across original and migrated systems is extremely valuable during migration testing, as it can highlight differences and pinpoint where they occurred when these aren't easy to detect. This is discussed in more detail later in this article.
In terms of security, the best way to do this is to create roles, assign access privileges to roles, and then attach users to roles. To access your newly migrated data warehouse, set up an automated process to create new users and assign them to roles. To detach users from roles, you can follow the same steps.
It's also important to communicate the cut-over to all users, so they know what'
A critical success factor in migrating reports and dashboards is understanding lineage. Lineage is metadata that shows the journey that data has taken, so you can see the path from the report/dashboard all the way back to where the data originates. It shows how data has gone from point to point, its location in the data warehouse and/or data mart, and where it's used&mdash;for example, in what reports. It helps you understand what happens to data as it travels through different data stores&mdash;files and database&mdash;different ETL pipelines, and into reports. If business users have access to data lineage, it improves trust, breeds confidence, and enables more informed business decisions. > [!TIP]
-> Tools that automate metadata collection and show end-to- end lineage in a multi-vendor environment are valuable when it comes to migration.
+> Tools that automate metadata collection and show end-to-end lineage in a multi-vendor environment are valuable when it comes to migration.
-In multi-vendor data warehouse environments, business analysts in BI teams may map out data lineage. For example, if you've Informatica for your ETL, Oracle for your data warehouse, and Tableau for reporting, each of which have their own metadata repository, figuring out where a specific data element in a report came from can be challenging and time consuming.
+In multi-vendor data warehouse environments, business analysts in BI teams may map out data lineage. For example, if you have Informatica for your ETL, Oracle for your data warehouse, and Tableau for reporting, each of which have their own metadata repository, figuring out where a specific data element in a report came from can be challenging and time consuming.
To migrate seamlessly from a legacy data warehouse to Azure Synapse, end-to-end data lineage helps prove like-for-like migration when comparing reports and dashboards against your legacy environment. That means that metadata from several tools needs to be captured and integrated to show the end to end journey. Having access to tools that support automated metadata discovery and data lineage will let you see duplicate reports and ETL processes and reports that rely on data sources that are obsolete, questionable, or even non-existent. With this information, you can reduce the number of reports and ETL processes that you migrate.
Data lineage visualization not only reduces time, effort, and error in the migra
By leveraging automated metadata discovery and data lineage tools that can compare lineage, you can verify if a report is produced using data migrated to Azure Synapse and if it's produced in the same way as in your legacy environment. This kind of capability also helps you determine: -- What data needs to be migrated to ensure successful report and dashboard execution on Azure Synapse
+- What data needs to be migrated to ensure successful report and dashboard execution on Azure Synapse.
-- What transformations have been and should be performed to ensure successful execution on Azure Synapse
+- What transformations have been and should be performed to ensure successful execution on Azure Synapse.
-- How to reduce report duplication
+- How to reduce report duplication.
This substantially simplifies the data migration process, because the business will have a better idea of the data assets it has and what needs to be migrated to enable a solid reporting environment on Azure Synapse. > [!TIP] > Azure Data Factory and several third-party ETL tools support lineage.
-Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. Microsoft [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) lets you view lineage in mapping flows. Also, [Microsoft partners](../../partner/data-integration.md) provide automated metadata discovery, data lineage, and lineage comparison tools.
+Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) lets you view lineage in mapping flows. Also, [Microsoft partners](../../partner/data-integration.md) provide automated metadata discovery, data lineage, and lineage comparison tools.
## Migrate BI tool semantic layers to Azure Synapse Analytics
However, if data structures change, then data is stored in unsupported data type
You can't rely on documentation to find out where the issues are likely to be. Making use of `EXPLAIN` statements is a pragmatic and quick way to identify incompatibilities in SQL. Rework these to achieve similar results in Azure Synapse. In addition, it's recommended that you make use of automated metadata discovery and lineage tools to help you identify duplicate reports, reports that are no longer valid because they're using data from data sources that you no longer use, and to understand dependencies. Some of these tools help compare lineage to verify that reports running in your legacy data warehouse environment are produced identically in Azure Synapse.
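As an illustration, running a captured report query through `EXPLAIN` in the dedicated SQL pool returns the distributed plan as XML without executing the statement, and a query that relies on unsupported syntax typically fails at this stage, flagging it for rework; the table names below are hypothetical:

```sql
-- Returns the distributed query plan as XML without executing the query.
EXPLAIN
SELECT c.CustomerName, SUM(f.SaleAmount) AS TotalSales
FROM dbo.FactSales AS f
JOIN dbo.DimCustomer AS c
    ON f.CustomerKey = c.CustomerKey
GROUP BY c.CustomerName;
```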
-Don't migrate reports that you no longer use. BI tool usage data can help determine which ones aren't in use. For the visualizations and reports that you do want to migrate, migrate all users, user groups, roles, and privileges, and associate these reports with strategic business objectives and priorities to help you identify report insight contribution to specific objectives. This is useful if you're using business value to drive your report migration strategy. If you're migrating by data store,&mdash;data mart by data mart&mdash;then metadata will also help you identify which reports are dependent on which tables and views, so that you can focus on migrating to these first.
+Don't migrate reports that you no longer use. BI tool usage data can help determine which ones aren't in use. For the visualizations and reports that you do want to migrate, migrate all users, user groups, roles, and privileges, and associate these reports with strategic business objectives and priorities to help you identify report insight contribution to specific objectives. This is useful if you're using business value to drive your report migration strategy. If you're migrating by data store, data mart by data mart, then metadata will also help you identify which reports are dependent on which tables and views, so that you can focus on migrating to these first.
Finally, consider data virtualization to shield BI tools and applications from structural changes to the data warehouse and/or the data mart data model that may occur during migration. You can also use a common vocabulary with data virtualization to define a common semantic layer that guarantees consistent common data names, definitions, metrics, hierarchies, joins, and more across all BI tools and applications in a migrated Azure Synapse environment.
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/5-minimize-sql-issues.md
Title: "Minimize SQL issues for Netezza migrations"
-description: Learn how to minimize the risk of SQL issues when migrating from Netezza to Azure Synapse.
+description: Learn how to minimize the risk of SQL issues when migrating from Netezza to Azure Synapse Analytics.
This article is part five of a seven part series that provides guidance on how t
In 2003, Netezza released its first data warehouse appliance product. It reduced the cost of entry and improved the ease of use of massively parallel processing (MPP) techniques to enable data processing at scale more efficiently than the existing mainframe or other MPP technologies available at the time. Since then, the product has evolved and has many installations among large financial institutions, telecommunications companies, and retailers. The original implementation used proprietary hardware including field programmable gate arrays&mdash;or FPGAs&mdash;and was accessible via an ODBC or JDBC network connection over TCP/IP.
-Most existing Netezza installations are on-premises, so many users are considering migrating some or all their Netezza data to Azure Synapse to gain the benefits of a move to a modern cloud environment.
+Most existing Netezza installations are on-premises, so many users are considering migrating some or all their Netezza data to Azure Synapse Analytics to gain the benefits of a move to a modern cloud environment.
> [!TIP] > Many existing Netezza installations are data warehouses using a dimensional data model. Netezza technology is often used to implement a data warehouse, supporting complex analytic queries on large data volumes using SQL. Dimensional data models&mdash;star or snowflake schemas&mdash;are common, as is the implementation of data marts for individual departments.
-This combination of SQL and dimensional data models simplifies migration to Azure Synapse, since the basic concepts and SQL skills are transferable. The recommended approach is to migrate the existing data model as-is to reduce risk and time taken. Even if the eventual intention is to make changes to the data model (for example, moving to a Data Vault model), perform an initial as-is migration and then make changes within the Azure cloud environment, leveraging the performance, elastic scalability, and cost advantages there.
+This combination of SQL and dimensional data models simplifies migration to Azure Synapse, since the basic concepts and SQL skills are transferable. The recommended approach is to migrate the existing data model as-is to reduce risk and time taken. Even if the eventual intention is to make changes to the data model (for example, moving to a data vault model), perform an initial as-is migration and then make changes within the Azure cloud environment, leveraging the performance, elastic scalability, and cost advantages there.
-While the SQL language has been standardized, individual vendors have in some cases implemented proprietary extensions. This document highlights potential SQL differences you may encounter while migrating from a legacy Netezza environment, and to provide workarounds.
+While the SQL language has been standardized, individual vendors have in some cases implemented proprietary extensions. This document highlights potential SQL differences you may encounter while migrating from a legacy Netezza environment, and provides workarounds.
### Use Azure Data Factory to implement a metadata-driven migration
Automate and orchestrate the migration process by making use of the capabilities
Azure Data Factory is a cloud-based data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Data Factory, you can create and schedule data-driven workflows&mdash;called pipelines&mdash;that can ingest data from disparate data stores. It can process and transform data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.
-By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage and automate parts of the migration process. You can also use [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de).
+By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage and automate parts of the migration process. You can also use [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de).
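For example, the migration metadata could be held in a simple control table that a Data Factory or Synapse pipeline reads with a Lookup activity and iterates over with a ForEach activity. This is a minimal sketch; the table and column names are hypothetical.

```sql
-- Hypothetical control table listing the Netezza tables to migrate.
CREATE TABLE dbo.migration_control
(
    source_schema    VARCHAR(128) NOT NULL,
    source_table     VARCHAR(128) NOT NULL,
    target_schema    VARCHAR(128) NOT NULL,
    target_table     VARCHAR(128) NOT NULL,
    load_priority    INT          NOT NULL,   -- drives the order of migration waves
    migration_status VARCHAR(20)  NOT NULL    -- for example 'Pending', 'InProgress', 'Complete'
)
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP);

-- The pipeline selects the next batch of pending tables to copy.
SELECT source_schema, source_table, target_schema, target_table
FROM dbo.migration_control
WHERE migration_status = 'Pending'
ORDER BY load_priority;
```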
## SQL DDL differences between Netezza and Azure Synapse
It's important to understand where performance optimizations&mdash;such as index
Netezza implements some database objects that aren't directly supported in Azure Synapse, but there are methods to achieve the same functionality within the new environment: -- Zone Maps&mdash;In Netezza, zone maps are automatically created and maintained for some column types and are used at query time to restrict the amount of data to be scanned. Zone Maps are created on the following column types:
+- Zone Maps: In Netezza, zone maps are automatically created and maintained for some column types and are used at query time to restrict the amount of data to be scanned. Zone Maps are created on the following column types:
- `INTEGER` columns of length 8 bytes or less. - Temporal columns. For instance, `DATE`, `TIME`, and `TIMESTAMP`. - `CHAR` columns, if these are part of a materialized view and mentioned in the `ORDER BY` clause. You can find out which columns have zone maps by using the `nz_zonemap` utility, which is part of the NZ Toolkit. Azure Synapse doesn't include zone maps, but you can achieve similar results by using other user-defined index types and/or partitioning. -- Clustered Base tables (CBT)&mdash;In Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records on restrictive CBT via allows Netezza to group records in same or nearby extents. This process also creates zone maps that improve the performance by reducing the amount of data to be scanned.
+- Clustered Base tables (CBT): In Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records via restrictive CBTs allows Netezza to group records in the same or nearby extents. This process also creates zone maps that improve performance by reducing the amount of data to be scanned.
In Azure Synapse, you can achieve a similar effect by using partitioning and/or other indexes. -- Materialized views&mdash;Netezza supports materialized views and recommends creating one or more of these over large tables having many columns where only a few of those columns are regularly used in queries. The system automatically maintains materialized views when data in the base table is updated.
+- Materialized views: Netezza supports materialized views and recommends creating one or more of these over large tables having many columns where only a few of those columns are regularly used in queries. The system automatically maintains materialized views when data in the base table is updated.
Azure Synapse supports materialized views, with the same functionality as Netezza.
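To make the mapping concrete, the sketch below shows a partitioned, clustered columnstore fact table (which gives partition elimination broadly comparable to zone maps and CBTs) plus a materialized view over it. All object names, columns, and boundary values are hypothetical.

```sql
-- Partitioned clustered columnstore table: queries that filter on sale_date
-- only scan the relevant partitions.
CREATE TABLE dbo.fact_sales
(
    sale_date  DATE          NOT NULL,
    store_id   INT           NOT NULL,
    product_id INT           NOT NULL,
    quantity   INT           NOT NULL,
    amount     DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(product_id),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (sale_date RANGE RIGHT FOR VALUES ('2023-01-01', '2023-04-01', '2023-07-01'))
);

-- Materialized view maintained automatically as dbo.fact_sales changes.
CREATE MATERIALIZED VIEW dbo.mv_sales_by_store
WITH (DISTRIBUTION = HASH(store_id))
AS
SELECT store_id,
       sale_date,
       SUM(amount)  AS total_amount,
       COUNT_BIG(*) AS row_count
FROM dbo.fact_sales
GROUP BY store_id, sale_date;
```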
Netezza implements some database objects that aren't directly supported in Azure
> [!TIP] > Assess the impact of unsupported data types as part of the preparation phase.
-Most Netezza data types have a direct equivalent in the Azure Synapse. The following table shows these data types along with the recommended approach for mapping them.
+Most Netezza data types have a direct equivalent in Azure Synapse. The following table shows these data types along with the recommended approach for mapping them.
| Netezza Data Type | Azure Synapse Data Type |
|--|-|
Edit existing Netezza `CREATE TABLE` and `CREATE VIEW` scripts to create the equ
However, the current definitions of tables and views within the existing Netezza environment are maintained in the system catalog tables, which are the best source of this information because they're guaranteed to be up to date and complete. Be aware that user-maintained documentation may not be in sync with the current table definitions.
-Access this information by using utilities such as `nz_ddl_table` and generate the `CREATE TABLE DDL` statements. Edit these statements for the equivalent tables in Azure Synapse.
+Access this information by using utilities such as `nz_ddl_table` and generate the `CREATE TABLE` DDL statements. Edit these statements for the equivalent tables in Azure Synapse.
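For example, a table definition produced by `nz_ddl_table` might be edited as follows. The table and columns are hypothetical, and the Azure Synapse version adds the distribution and index options that the appliance handled implicitly.

```sql
-- Netezza DDL as generated by nz_ddl_table (hypothetical table):
--   CREATE TABLE sales.customer
--   (
--       customer_id   INTEGER      NOT NULL,
--       customer_name VARCHAR(100)
--   )
--   DISTRIBUTE ON (customer_id);

-- Edited equivalent for an Azure Synapse dedicated SQL pool:
CREATE TABLE sales.customer
(
    customer_id   INT          NOT NULL,
    customer_name VARCHAR(100) NULL
)
WITH
(
    DISTRIBUTION = HASH(customer_id),
    CLUSTERED COLUMNSTORE INDEX
);
```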
> [!TIP] > Third-party tools and services can automate data mapping tasks.
There are [Microsoft partners](../../partner/data-integration.md) who offer tool
> [!TIP] > SQL DML commands `SELECT`, `INSERT`, and `UPDATE` have standard core elements but may also implement different syntax options.
-The ANSI SQL standard defines the basic syntax for DML commands such as `SELECT`, `INSERT`, `UPDATE` and `DELETE`. Both Netezza and Azure Synapse use these commands, but in some cases there are implementation differences.
+The ANSI SQL standard defines the basic syntax for DML commands such as `SELECT`, `INSERT`, `UPDATE`, and `DELETE`. Both Netezza and Azure Synapse use these commands, but in some cases there are implementation differences.
The following sections discuss the Netezza-specific DML commands that you should consider during a migration to Azure Synapse. ### SQL DML syntax differences
-Be aware of these differences in SQL Data Manipulation Language (DML) syntax between Netezza SQL and Azure Synapse when migrating:
+Be aware of these differences in SQL DML syntax between Netezza SQL and Azure Synapse when migrating:
- `STRPOS`: In Netezza, the `STRPOS` function returns the position of a substring within a string. The equivalent function in Azure Synapse is `CHARINDEX`, with the order of the arguments reversed. For example, `SELECT STRPOS('abcdef','def')...` in Netezza is equivalent to `SELECT CHARINDEX('def','abcdef')...` in Azure Synapse.
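For instance, a predicate that uses `STRPOS` might be rewritten as shown in this minimal sketch; `dbo.orders` and `order_ref` are hypothetical names.

```sql
-- Netezza:
--   SELECT order_ref FROM orders WHERE STRPOS(order_ref, '-X') > 0;

-- Azure Synapse equivalent (note the reversed argument order):
SELECT order_ref
FROM dbo.orders
WHERE CHARINDEX('-X', order_ref) > 0;
```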
See the following sections for more information on each of these elements.
#### Functions
-As with most database products, Netezza supports system functions and user-defined functions within the SQL implementation. When migrating to another database platform such as Azure Synapse, common system functions are available and can be migrated without change. Some system functions may have slightly different syntax, but the required changes can be automated. System functions where there's no equivalent, such arbitrary user-defined functions, may need to be recoded using the languages available in the target environment. Azure Synapse uses the popular Transact-SQL language to implement user-defined functions. Netezza user-defined functions are coded in nzlua or C++ languages.
+As with most database products, Netezza supports system functions and user-defined functions within the SQL implementation. When migrating to another database platform such as Azure Synapse, common system functions are available and can be migrated without change. Some system functions may have slightly different syntax, but the required changes can be automated. System functions where there's no equivalent, such as arbitrary user-defined functions, may need to be recoded using the languages available in the target environment. Azure Synapse uses the popular Transact-SQL language to implement user-defined functions. Netezza user-defined functions are coded in nzlua or C++ languages.
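As a minimal sketch, a simple Netezza user-defined function could be recoded as a T-SQL scalar function like the following. The function name and the fiscal-year rule are purely illustrative.

```sql
-- Hypothetical replacement for a Netezza UDF that returns the fiscal year
-- (assumed here to start on 1 July) for a given date.
CREATE FUNCTION dbo.fn_fiscal_year (@order_date DATE)
RETURNS INT
AS
BEGIN
    RETURN CASE
               WHEN MONTH(@order_date) >= 7 THEN YEAR(@order_date) + 1
               ELSE YEAR(@order_date)
           END;
END;

-- Usage (dbo.orders is hypothetical):
--   SELECT dbo.fn_fiscal_year(order_date) FROM dbo.orders;
```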
#### Stored procedures
SQL Azure Data Warehouse also supports stored procedures using T-SQL, so if you
In Netezza, a sequence is a named database object created via `CREATE SEQUENCE` that provides unique values via the `NEXT VALUE FOR` method. Use these to generate unique numbers as surrogate key values for primary keys.
-In Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled using [Identity to create surrogate keys](../../sql-data-warehouse/sql-data-warehouse-tables-identity.md) or [managed identity](../../../data-factory/data-factory-service-identity.md?tabs=data-factory) using SQL code to create the next sequence number in a series.
+In Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled using [identity to create surrogate keys](../../sql-data-warehouse/sql-data-warehouse-tables-identity.md) or [managed identity](../../../data-factory/data-factory-service-identity.md?tabs=data-factory) using SQL code to create the next sequence number in a series.
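For illustration, the sketch below replaces a Netezza sequence-based surrogate key with an `IDENTITY` column. The dimension table and its columns are hypothetical; note that an `IDENTITY` column can't also be the distribution column.

```sql
CREATE TABLE dbo.dim_customer
(
    customer_key  INT           IDENTITY(1,1) NOT NULL,  -- surrogate key
    customer_id   VARCHAR(20)   NOT NULL,                -- business key from the source system
    customer_name NVARCHAR(100) NULL
)
WITH (DISTRIBUTION = HASH(customer_id), CLUSTERED COLUMNSTORE INDEX);

-- customer_key values are generated automatically on insert.
INSERT INTO dbo.dim_customer (customer_id, customer_name)
VALUES ('C000123', 'Contoso Retail');
```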
### Use [EXPLAIN](/sql/t-sql/queries/explain-transact-sql?msclkid=91233fc1cff011ec9dff597671b7ae97) to validate legacy SQL
The IBM Netezza to T-SQL compliant with Azure Synapse SQL data type mapping is i
## Summary
-Typical existing legacy Netezza installations are implemented in a way that makes migration to Azure Synapse easy. They use SQL for analytical queries on large data volumes, and are in some form of dimensional data model. These factors make it a good candidate for migration to Azure Synapse.
+Typical existing legacy Netezza installations are implemented in a way that makes migration to Azure Synapse easy. They use SQL for analytical queries on large data volumes, and are in some form of dimensional data model. These factors make them good candidates for migration to Azure Synapse.
To minimize the task of migrating the actual SQL code, follow these recommendations: -- Initial migration of the data warehouse should be as-is to minimize risk and time taken, even if the eventual final environment will incorporate a different data model such as Data Vault.
+- Initial migration of the data warehouse should be as-is to minimize risk and time taken, even if the eventual final environment will incorporate a different data model such as data vault.
- Understand the differences between Netezza SQL implementation and Azure Synapse.
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/6-microsoft-third-party-migration-tools.md
This article is part six of a seven part series that provides guidance on how to
## Data warehouse migration tools
-By migrating your existing data warehouse to Azure Synapse, you benefit from:
+By migrating your existing data warehouse to Azure Synapse Analytics, you benefit from:
- A globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database.
You can develop simple or comprehensive ETL and ELT processes without coding or
You can use Data Factory to implement and manage a hybrid environment that includes on-premises, cloud, streaming, and SaaS data&mdash;for example, from applications like Salesforce&mdash;in a secure and consistent way.
-A new capability in Data Factory is wrangling data flows. This opens up Data Factory to business users who want to visually discover, explore, and prepare data at scale without writing code. This capability, similar to Microsoft Excel Power Query or Microsoft Power BI Dataflows, offers self-service data preparation. Business users can prepare and integrate data through a spreadsheet style user interface with drop-down transform options.
+A new capability in Data Factory is wrangling data flows. This opens up Data Factory to business users who want to visually discover, explore, and prepare data at scale without writing code. This capability, similar to Microsoft Excel Power Query or Microsoft Power BI dataflows, offers self-service data preparation. Business users can prepare and integrate data through a spreadsheet-style user interface with drop-down transform options.
Azure Data Factory is the recommended approach for implementing data integration and ETL/ELT processes for an Azure Synapse environment, especially if existing legacy processes need to be refactored.
Azure Data Factory is the recommended approach for implementing data integration
#### Azure ExpressRoute
-Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the public Internet, and they offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, by using ExpressRoute connections to transfer data between on-premises systems and Azure, you gain significant cost benefits.
+Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the internet, and they offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, by using ExpressRoute connections to transfer data between on-premises systems and Azure, you gain significant cost benefits.
#### AzCopy
The [COPY](/sql/t-sql/statements/copy-into-transact-sql) statement provides the
PolyBase provides the fastest and most scalable method of loading bulk data into Azure Synapse. PolyBase leverages the MPP architecture to perform parallel loading for the fastest throughput, and it can read data from flat files in Azure Blob Storage or directly from external data sources and other relational databases via connectors.
-PolyBase can also directly read from files compressed with gzip&mdash;this reduces the physical volume of data moved during the load process. PolyBase supports popular data formats such as delimited text, ORC and Parquet.
+PolyBase can also directly read from files compressed with gzip&mdash;this reduces the physical volume of data moved during the load process. PolyBase supports popular data formats such as delimited text, ORC, and Parquet.
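As an example of these bulk-load options, the `COPY` statement can load Parquet (or delimited or ORC) files directly from storage into a staging table. This is a minimal sketch; the storage account, container, and table names are hypothetical.

```sql
COPY INTO dbo.stage_sales
FROM 'https://myaccount.blob.core.windows.net/migration/sales/*.parquet'
WITH
(
    FILE_TYPE  = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')  -- assumes the workspace identity has storage access
);
```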
> [!TIP] > Invoke PolyBase from Azure Data Factory as part of a migration pipeline.
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/7-beyond-data-warehouse-migration.md
Title: "Beyond Netezza migration, implementing a modern data warehouse in Microsoft Azure"
-description: Learn how a Netezza migration to Azure Synapse lets you integrate your data warehouse with the Microsoft Azure analytical ecosystem.
+description: Learn how a Netezza migration to Azure Synapse Analytics lets you integrate your data warehouse with the Microsoft Azure analytical ecosystem.
This article is part seven of a seven part series that provides guidance on how
## Beyond data warehouse migration to Azure
-One of the key reasons to migrate your existing data warehouse to Azure Synapse is to utilize a globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database. Azure Synapse also lets you integrate your migrated data warehouse with the complete Microsoft Azure analytical ecosystem to take advantage of, and integrate with, other Microsoft technologies that help you modernize your migrated data warehouse. This includes integrating with technologies like:
+One of the key reasons to migrate your existing data warehouse to Azure Synapse Analytics is to utilize a globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database. Azure Synapse also lets you integrate your migrated data warehouse with the complete Microsoft Azure analytical ecosystem to take advantage of, and integrate with, other Microsoft technologies that help you modernize your migrated data warehouse. This includes integration with technologies like:
-- Azure Data Lake Storage&mdash;for cost effective data ingestion, staging, cleansing and transformation to free up data warehouse capacity occupied by fast growing staging tables
+- Azure Data Lake Storage, for cost effective data ingestion, staging, cleansing, and transformation to free up data warehouse capacity occupied by fast growing staging tables.
-- Azure Data Factory&mdash;for collaborative IT and self-service data integration [with connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data
+- Azure Data Factory, for collaborative IT and self-service data integration [with connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data.
-- [The Open Data Model Common Data Initiative](/common-data-model/)&mdash;to share consistent trusted data across multiple technologies including:
+- [The Open Data Model Common Data Initiative](/common-data-model/), to share consistent trusted data across multiple technologies, including:
- Azure Synapse - Azure Synapse Spark - Azure HDInsight
One of the key reasons to migrate your existing data warehouse to Azure Synapse
- Azure IoT - Microsoft ISV Partners -- [Microsoft's data science technologies](/azure/architecture/data-science-process/platforms-and-tools) including:
- - Azure ML studio
- - Azure Machine Learning Service
+- [Microsoft's data science technologies](/azure/architecture/data-science-process/platforms-and-tools), including:
+ - Azure Machine Learning Studio
+ - Azure Machine Learning
- Azure Synapse Spark (Spark as a service) - Jupyter Notebooks - RStudio - ML.NET
- - Visual Studio .NET for Apache Spark to enable data scientists to use Azure Synapse data to train machine learning models at scale.
+ - .NET for Apache Spark to enable data scientists to use Azure Synapse data to train machine learning models at scale.
-- [Azure HDInsight](../../../hdinsight/index.yml)&mdash;to leverage big data analytical processing and join big data with Azure Synapse data by creating a Logical Data Warehouse using PolyBase
+- [Azure HDInsight](../../../hdinsight/index.yml), to leverage big data analytical processing and join big data with Azure Synapse data by creating a logical data warehouse using PolyBase.
-- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md) and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka)&mdash;to integrate with live streaming data from within Azure Synapse
+- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka), to integrate with live streaming data within Azure Synapse.
-There's often acute demand to integrate with [Machine Learning](../../machine-learning/what-is-machine-learning.md) to enable custom built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in-batch, on an event-driven basis and on-demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all get the same predictions and recommendations.
+There's often acute demand to integrate with [machine learning](../../machine-learning/what-is-machine-learning.md) to enable custom-built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in batch, on an event-driven basis, and on demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all get the same predictions and recommendations.
In addition, there's an opportunity to integrate Azure Synapse with Microsoft partner tools on Azure to shorten time to value.
Let's look at these in more detail to understand how you can take advantage of t
Enterprises today have a key problem resulting from digital transformation. So much new data is being generated and captured for analysis, and much of this data is finding its way into data warehouses. A good example is transaction data created by opening online transaction processing (OLTP) systems to self-service access from mobile devices. These OLTP systems are the main sources of data to a data warehouse, and with customers now driving the transaction rate rather than employees, data in data warehouse staging tables has been growing rapidly in volume.
-The rapid influx of data into the enterprise, along with new sources of data like Internet of Things (IoT), means that companies need to find a way to deal with unprecedented data growth and scale data integration ETL processing beyond current levels. One way to do this is to offload ingestion, data cleansing, transformation and integration to a data lake and process it at scale there, as part of a data warehouse modernization program.
+The rapid influx of data into the enterprise, along with new sources of data like Internet of Things (IoT) streams, means that companies need to find a way to deal with unprecedented data growth and scale data integration ETL processing beyond current levels. One way to do this is to offload ingestion, data cleansing, transformation, and integration to a data lake and process it at scale there, as part of a data warehouse modernization program.
Once you've migrated your data warehouse to Azure Synapse, you can modernize your ETL processing by ingesting data into, and staging data in, Azure Data Lake Storage. You can then clean, transform, and integrate your data at scale using Data Factory before loading it into Azure Synapse in parallel using PolyBase.
For ELT strategies, consider offloading ELT processing to Azure Data Lake to eas
### Microsoft Azure Data Factory > [!TIP]
-> Data Factory allows you to build scalable data integration pipelines code free.
+> Data Factory allows you to build scalable data integration pipelines code-free.
-[Microsoft Azure Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines, in a code-free manner that can:
+[Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines, in a code-free manner that can:
-- Data Factory allows you to build scalable data integration pipelines code free. Easily acquire data at scale. Pay only for what you use and connect to on premises, cloud, and SaaS based data sources.
+- Build scalable data integration pipelines code-free. Easily acquire data at scale. Pay only for what you use and connect to on premises, cloud, and SaaS-based data sources.
-- Ingest, move, clean, transform, integrate, and analyze cloud and on-premises data at scale and take automatic action such a recommendation, an alert, and more.
+- Ingest, move, clean, transform, integrate, and analyze cloud and on-premises data at scale. Take automatic action, such as a recommendation or alert.
-- Seamlessly author, monitor and manage pipelines that span data stores both on-premises and in the cloud.
+- Seamlessly author, monitor, and manage pipelines that span data stores both on-premises and in the cloud.
- Enable pay-as-you-go scale out in alignment with customer growth.
Implement Data Factory pipeline development from any of several places including
- Programmatically from .NET and Python using a multi-language SDK -- Azure Resource Manager (ARM) Templates
+- Azure Resource Manager (ARM) templates
- REST APIs Developers and data scientists who prefer to write code can easily author Data Factory pipelines in Java, Python, and .NET using the software development kits (SDKs) available for those programming languages. Data Factory pipelines can also be hybrid as they can connect, ingest, clean, transform and analyze data in on-premises data centers, Microsoft Azure, other clouds, and SaaS offerings.
-Once you develop Data Factory pipelines to integrate and analyze data, deploy those pipelines globally and schedule them to run in batch, invoke them on demand as a service, or run them in real time on an event-driven basis. A Data Factory pipeline can also run on one or more execution engines and monitor pipeline execution to ensure performance and track errors.
+Once you develop Data Factory pipelines to integrate and analyze data, deploy those pipelines globally and schedule them to run in batch, invoke them on demand as a service, or run them in real-time on an event-driven basis. A Data Factory pipeline can also run on one or more execution engines and monitor pipeline execution to ensure performance and track errors.
#### Use cases
Data engineers can profile data quality and view the results of individual data
> [!TIP] > Data Factory pipelines are also extensible since Data Factory allows you to write your own code and run it as part of a pipeline.
-Extend Data Factory transformational and analytical functionality by adding a linked service containing your own code into a pipeline. For example, an Azure Synapse Spark Pool notebook containing Python code could use a trained model to score the data integrated by a mapping data flow.
+Extend Data Factory transformational and analytical functionality by adding a linked service containing your own code into a pipeline. For example, an Azure Synapse Spark Pool Notebook containing Python code could use a trained model to score the data integrated by a mapping data flow.
-Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores such as Azure Data Lake storage, Azure Synapse, or Azure HDInsight (Hive Tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
+Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores such as Azure Data Lake Storage, Azure Synapse, or Azure HDInsight (Hive tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
#### Utilize Spark to scale data integration
-Under the covers, Data Factory utilizes Azure Synapse Spark Pools&mdash;Microsoft's Spark-as-a-service offering&mdash;at run time to clean and integrate data on the Microsoft Azure cloud. This enables it to clean, integrate, and analyze high-volume and very high-velocity data (such as click stream data) at scale. Microsoft intends to execute Data Factory pipelines on other Spark distributions. In addition to executing ETL jobs on Spark, Data Factory can also invoke Pig scripts and Hive queries to access and transform data stored in Azure HDInsight.
+Internally, Data Factory utilizes Azure Synapse Spark Pools&mdash;Microsoft's Spark-as-a-service offering&mdash;at run time to clean and integrate data on the Microsoft Azure cloud. This enables it to clean, integrate, and analyze high-volume and very high-velocity data (such as click stream data) at scale. Microsoft intends to execute Data Factory pipelines on other Spark distributions. In addition to executing ETL jobs on Spark, Data Factory can also invoke Pig scripts and Hive queries to access and transform data stored in Azure HDInsight.
#### Link self-service data prep and Data Factory ETL processing using wrangling data flows > [!TIP] > Data Factory support for wrangling data flows in addition to mapping data flows means that business and IT can work together on a common platform to integrate data.
-Another new capability in Data Factory is wrangling data flows. This lets business users (also known as citizen data integrators and data engineers) make use of the platform to visually discover, explore and prepare data at scale without writing code. This easy-to-use Data Factory capability is similar to Microsoft Excel Power Query or Microsoft Power BI Dataflows, where self-service data preparation business users use a spreadsheet-style UI with drop-down transforms to prepare and integrate data. The following screenshot shows an example Data Factory wrangling data flow.
+Another new capability in Data Factory is wrangling data flows. This lets business users (also known as citizen data integrators and data engineers) make use of the platform to visually discover, explore, and prepare data at scale without writing code. This easy-to-use Data Factory capability is similar to Microsoft Excel Power Query or Microsoft Power BI dataflows, where self-service data preparation business users use a spreadsheet-style UI with drop-down transforms to prepare and integrate data. The following screenshot shows an example Data Factory wrangling data flow.
:::image type="content" source="../media/6-microsoft-3rd-party-migration-tools/azure-data-factory-wrangling-dataflows.png" border="true" alt-text="Screenshot showing an example of Azure Data Factory wrangling dataflows.":::
-This differs from Excel and Power BI, as Data Factory wrangling data flows uses Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark Pool Notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
+This differs from Excel and Power BI, as Data Factory wrangling data flows uses Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark Pool Notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
#### Link data and analytics in analytical pipelines In addition to cleaning and transforming data, Azure Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
-Models developed code-free with Azure ML Studio or with the Azure Machine Learning Service SDK using Azure Synapse Spark Pool Notebooks or using R in RStudio can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark Pool Notebooks.
+Models developed code-free with Azure Machine Learning Studio or with the Azure Machine Learning SDK using Azure Synapse Spark Pool Notebooks or using R in RStudio can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark Pool Notebooks.
-Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores, such as Azure Data Lake storage, Azure Synapse, or Azure HDInsight (Hive Tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
+Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores, such as Azure Data Lake Storage, Azure Synapse, or Azure HDInsight (Hive tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
## A lake database to share consistent trusted data > [!TIP] > Microsoft has created a lake database to describe core data entities to be shared across the enterprise.
-A key objective in any data integration set-up is the ability to integrate data once and reuse it everywhere, not just in a data warehouse&mdash;for example, in data science. Reuse avoids reinvention and ensures consistent, commonly understood data that everyone can trust.
+A key objective in any data integration setup is the ability to integrate data once and reuse it everywhere, not just in a data warehouse&mdash;for example, in data science. Reuse avoids reinvention and ensures consistent, commonly understood data that everyone can trust.
> [!TIP]
-> Azure Data Lake is shared storage that underpins Microsoft Azure Synapse, Azure ML, Azure Synapse Spark, and Azure HDInsight.
+> Azure Data Lake is shared storage that underpins Microsoft Azure Synapse, Azure Machine Learning, Azure Synapse Spark, and Azure HDInsight.
To achieve this goal, establish a set of common data names and definitions describing logical data entities that need to be shared across the enterprise&mdash;such as customer, account, product, supplier, orders, payments, returns, and so forth. Once this is done, IT and business professionals can use data integration software to create these common data assets and store them to maximize their reuse to drive consistency everywhere. > [!TIP] > Integrating data to create lake database logical entities in shared storage enables maximum reuse of common data assets.
-Microsoft has done this by creating a [lake database](../../database-designer/concepts-lake-database.md). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to the loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake storage using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse and Azure ML. The following diagram shows a lake database used in Azure Synapse Analytics.
+Microsoft has done this by creating a [lake database](../../database-designer/concepts-lake-database.md). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry-specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake Storage by using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse, and Azure Machine Learning. The following diagram shows a lake database used in Azure Synapse Analytics.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-synapse-analytics-lake-database.png" border="true" alt-text="Screenshot showing how a lake database can be used in Azure Synapse Analytics.":::
Another key requirement in modernizing your migrated data warehouse is to integr
Microsoft offers a range of technologies to build predictive analytical models using machine learning, analyze unstructured data using deep learning, and perform other kinds of advanced analytics. This includes: -- Azure ML Studio
+- Azure Machine Learning Studio
-- Azure Machine Learning Service
+- Azure Machine Learning
- Azure Synapse Spark Pool Notebooks - ML.NET (API, CLI or .NET Model Builder for Visual Studio) -- Visual Studio .NET for Apache Spark
+- .NET for Apache Spark
Data scientists can use RStudio (R) and Jupyter Notebooks (Python) to develop analytical models, or they can use other frameworks such as Keras or TensorFlow.
-#### Azure ML Studio
+#### Azure Machine Learning Studio
-Azure ML Studio is a fully managed cloud service that lets you easily build, deploy, and share predictive analytics via a drag-and-drop web-based user interface. The next screenshot shows an Azure Machine Learning studio user interface.
+Azure Machine Learning Studio is a fully managed cloud service that lets you easily build, deploy, and share predictive analytics via a drag-and-drop web-based user interface. The next screenshot shows an Azure Machine Learning Studio user interface.
-#### Azure Machine Learning Service
+#### Azure Machine Learning
> [!TIP]
-> Azure Machine Learning Service provides an SDK for developing machine learning models using several open-source frameworks.
+> Azure Machine Learning provides an SDK for developing machine learning models using several open-source frameworks.
-Azure Machine Learning Service provides a software development kit (SDK) and services for Python to quickly prepare data, as well as train and deploy machine learning models. Use Azure Machine Learning Service from Azure notebooks (a Jupyter notebook service) and utilize open-source frameworks, such as PyTorch, TensorFlow, Spark MLlib (Azure Synapse Spark Pool Notebooks), or scikit-learn. Azure Machine Learning Service provides an AutoML capability that automatically identifies the most accurate algorithms to expedite model development. You can also use it to build machine learning pipelines that manage end-to-end workflow, programmatically scale on the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning Service uses logical containers called workspaces, which can be either created manually from the Azure portal or created programmatically. These workspaces keep compute targets, experiments, data stores, trained machine learning models, docker images, and deployed services all in one place to enable teams to work together. Use Azure Machine Learning Service from Visual Studio with a Visual Studio for AI extension.
+Azure Machine Learning provides a software development kit (SDK) and services for Python to quickly prepare data, as well as train and deploy machine learning models. Use Azure Machine Learning from Azure notebooks (a Jupyter Notebook service) and utilize open-source frameworks, such as PyTorch, TensorFlow, Spark MLlib (Azure Synapse Spark Pool Notebooks), or scikit-learn. Azure Machine Learning provides an AutoML capability that automatically identifies the most accurate algorithms to expedite model development. You can also use it to build machine learning pipelines that manage end-to-end workflow, programmatically scale on the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning uses logical containers called workspaces, which can be either created manually from the Azure portal or created programmatically. These workspaces keep compute targets, experiments, data stores, trained machine learning models, docker images, and deployed services all in one place to enable teams to work together. Use Azure Machine Learning from Visual Studio with a Visual Studio for AI extension.
> [!TIP] > Organize and manage related data stores, experiments, trained models, Docker images, and deployed services in workspaces.
Azure Machine Learning Service provides a software development kit (SDK) and ser
Jobs running in Azure Synapse Spark Pool Notebook can retrieve, process, and analyze data at scale from Azure Blob Storage, Azure Data Lake Storage, Azure Synapse, Azure HDInsight, and streaming data services such as Kafka.
-Autoscaling and auto-termination are also supported to reduce total cost of ownership (TCO). Data scientists can use the ML flow open-source framework to manage the machine learning lifecycle.
+Autoscaling and auto-termination are also supported to reduce total cost of ownership (TCO). Data scientists can use the MLflow open-source framework to manage the machine learning lifecycle.
#### ML.NET
Autoscaling and auto-termination are also supported to reduce total cost of owne
ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS), created by Microsoft for .NET developers so that they can use existing tools&mdash;like .NET Model Builder for Visual Studio&mdash;to develop custom machine learning models and integrate them into .NET applications.
-#### Visual Studio .NET for Apache Spark
+#### .NET for Apache Spark
-Visual Studio .NET for Apache&reg; Spark&trade; aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
+.NET for Apache Spark aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
-### Utilize Azure Analytics with your data warehouse
+### Use Azure Synapse Analytics with your data warehouse
> [!TIP]
-> Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark Pool Notebook using data in your Azure Synapse.
+> Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark Pool Notebook using data in Azure Synapse.
Combine machine learning models built using the tools with Azure Synapse by: -- Using machine learning models in batch mode or in real time to produce new insights, and add them to what you already know in Azure Synapse.
+- Using machine learning models in batch mode or in real-time to produce new insights, and add them to what you already know in Azure Synapse.
- Using the data in Azure Synapse to develop and train new predictive models for deployment elsewhere, such as in other applications. -- Deploying machine learning models&mdash;including those trained elsewhere&mdash;in Azure Synapse to analyze data in the data warehouse and drive new business value.
+- Deploying machine learning models, including those trained elsewhere, in Azure Synapse to analyze data in the data warehouse and drive new business value.
> [!TIP] > Produce new insights using machine learning on Azure in batch or in real-time and add to what you know in your data warehouse.
-In terms of machine learning model development, data scientists can use RStudio, Jupyter notebooks, and Azure Synapse Spark Pool notebooks together with Microsoft Azure Machine Learning Service to develop machine learning models that run at scale on Azure Synapse Spark Pool Notebooks using data in Azure Synapse. For example, they could create an unsupervised model to segment customers for use in driving different marketing campaigns. Use supervised machine learning to train a model to predict a specific outcome, such as predicting a customer's propensity to churn, or recommending the next best offer for a customer to try to increase their value. The next diagram shows how Azure Synapse Analytics can be leveraged for Machine Learning.
+In terms of machine learning model development, data scientists can use RStudio, Jupyter Notebooks, and Azure Synapse Spark Pool notebooks together with Microsoft Azure Machine Learning to develop machine learning models that run at scale on Azure Synapse Spark Pool Notebooks using data in Azure Synapse. For example, they could create an unsupervised model to segment customers for use in driving different marketing campaigns. Use supervised machine learning to train a model to predict a specific outcome, such as predicting a customer's propensity to churn, or recommending the next best offer for a customer to try to increase their value. The next diagram shows how Azure Synapse Analytics can be leveraged for Machine Learning.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-synapse-train-predict.png" border="true" alt-text="Screenshot of an Azure Synapse Analytics train and predict model.":::
In addition, you can ingest big data&mdash;such as social network data or review
## Integrate live streaming data into Azure Synapse Analytics
-When analyzing data in a modern data warehouse, you must be able to analyze streaming data in real time and join it with historical data in your data warehouse. An example of this would be combining IoT data with product or asset data.
+When analyzing data in a modern data warehouse, you must be able to analyze streaming data in real-time and join it with historical data in your data warehouse. An example of this would be combining IoT data with product or asset data.
> [!TIP] > Integrate your data warehouse with streaming data from IoT devices or clickstream.
Once you've successfully migrated your data warehouse to Azure Synapse, you can
> [!TIP] > Ingest streaming data into Azure Data Lake Storage from Microsoft Event Hub or Kafka, and access it from Azure Synapse using PolyBase external tables.
-To do this, ingest streaming data via Microsoft Event Hubs or other technologies, such as Kafka, using Azure Data Factory (or using an existing ETL tool if it supports the streaming data sources) and land it in Azure Data Lake Storage (ADLS). Next, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Azure Data Lake. Your migrated data warehouse will now contain new tables that provide access to real-time streaming data. Query this external table as if the data was in the data warehouse via standard TSQL from any BI tool that has access to Azure Synapse. You can also join this data to other tables containing historical data and create views that join live streaming data to historical data to make it easier for business users to access. In the following diagram, a real-time data warehouse on Azure Synapse analytics is integrated with streaming data in Azure Data Lake.
+To do this, ingest streaming data via Microsoft Event Hubs or other technologies, such as Kafka, using Azure Data Factory (or using an existing ETL tool if it supports the streaming data sources). Store the data in Azure Data Lake Storage (ADLS). Next, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Azure Data Lake. Your migrated data warehouse will now contain new tables that provide access to real-time streaming data. Query this external table via standard TSQL from any BI tool that has access to Azure Synapse. You can also join this data to other tables containing historical data and create views that join live streaming data to historical data to make it easier for business users to access. In the following diagram, a real-time data warehouse on Azure Synapse Analytics is integrated with streaming data in Data Lake Storage.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-datalake-streaming-data.png" border="true" alt-text="Screenshot of Azure Synapse Analytics with streaming data in an Azure Data Lake.":::
Since these platforms are producing new insights, it's normal to see a requireme
> [!TIP] > The ability to make data in multiple analytical data stores look like it's all in one system and join it to Azure Synapse is known as a logical data warehouse architecture.
-By leveraging PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse. Join data in Azure Synapse to data in other Azure and on-premises analytical data stores&mdash;like Azure HDInsight or Cosmos DB&mdash;or to streaming data flowing into Azure Data Lake storage from Azure Stream Analytics and Event Hubs. Users access external tables in Azure Synapse, unaware that the data they're accessing is stored in multiple underlying analytical systems. The next diagram shows the complex data warehouse structure accessed through comparatively simpler but still powerful user interface methods.
+By leveraging PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse. Join data in Azure Synapse to data in other Azure and on-premises analytical data stores&mdash;like Azure HDInsight or Cosmos DB&mdash;or to streaming data flowing into Azure Data Lake Storage from Azure Stream Analytics and Event Hubs. Users access external tables in Azure Synapse, unaware that the data they're accessing is stored in multiple underlying analytical systems. The next diagram shows the complex data warehouse structure accessed through comparatively simpler but still powerful user interface methods.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/complex-data-warehouse-structure.png" alt-text="Screenshot showing an example of a complex data warehouse structure accessed through user interface methods.":::
-The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage (ADLS) and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
+The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
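For example, a view can present data held in the dedicated SQL pool alongside data that physically lives in the data lake, so BI users query one object without knowing where each table is stored. Both tables below are hypothetical; the external table would follow the pattern sketched earlier.

```sql
-- dbo.fact_sales is stored in the dedicated SQL pool; dbo.ext_product_reviews is
-- a hypothetical PolyBase external table over files in Azure Data Lake Storage.
CREATE VIEW dbo.v_sales_with_reviews AS
SELECT f.product_id,
       SUM(f.amount)       AS total_sales,
       AVG(r.review_score) AS avg_review_score
FROM dbo.fact_sales AS f
JOIN dbo.ext_product_reviews AS r
    ON f.product_id = r.product_id
GROUP BY f.product_id;
```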
> [!TIP] > A logical data warehouse architecture simplifies business user access to data and adds new value to what you already know in your data warehouse.
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/1-design-performance-migration.md
This article is part one of a seven part series that provides guidance on how to
## Overview
+Many existing users of Teradata data warehouse systems want to take advantage of the innovations provided by newer environments such as cloud, IaaS, or PaaS, and to delegate tasks like infrastructure maintenance and platform development to the cloud provider.
+ > [!TIP] > More than just a database&mdash;the Azure environment includes a comprehensive set of capabilities and tools.
-Many existing users of Teradata data warehouse systems want to take advantage of the innovations provided by newer environments such as cloud, IaaS, or PaaS, and to delegate tasks like infrastructure maintenance and platform development to the cloud provider.
-
-Although Teradata and Azure Synapse are both SQL databases designed to use massively parallel processing (MPP) techniques to achieve high query performance on exceptionally large data volumes, there are some basic differences in approach:
+Although Teradata and Azure Synapse Analytics are both SQL databases designed to use massively parallel processing (MPP) techniques to achieve high query performance on exceptionally large data volumes, there are some basic differences in approach:
- Legacy Teradata systems are often installed on-premises and use proprietary hardware, while Azure Synapse is cloud-based and uses Azure storage and compute resources.
Azure Synapse provides best-of-breed relational database performance by using te
- Reduced storage and disaster recovery costs. -- Lower overall TCO and better cost control (OPEX).
+- Lower overall TCO, better cost control, and streamlined operational expenditure (OPEX).
To maximize these benefits, migrate new or existing data and applications to the Azure Synapse platform. In many organizations, this will include migrating an existing data warehouse from legacy on-premises platforms such as Teradata. At a high level, the basic process includes these steps:
Legacy Teradata environments have typically evolved over time to encompass multi
- Create a template for further migrations specific to the source Teradata environment and the current tools and processes that are already in place.
-A good candidate for an initial migration from the Teradata environment that would enable the items above, is typically one that implements a BI/Analytics workload (rather than an OLTP workload) with a data model that can be migrated with minimal modifications&mdash;normally a start or snowflake schema.
+A good candidate for an initial migration from the Teradata environment that would enable the preceding items is typically one that implements a BI/Analytics workload, rather than an online transaction processing (OLTP) workload, with a data model that can be migrated with minimal modifications&mdash;normally a star or snowflake schema.
The migration data volume for the initial exercise should be large enough to demonstrate the capabilities and benefits of the Azure Synapse environment while keeping the time to value short&mdash;typically in the 1-10 TB range.
-To minimize the risk and reduce implementation time for the initial migration project, confine the scope of the migration to just the data marts, such as the OLAP DB part a Teradata warehouse. However, this won't address the broader topics such as ETL migration and historical data migration. Address these topics in later phases of the project, once the migrated data mart layer is back filled with the data and processes required to build them.
+To minimize the risk and reduce implementation time for the initial migration project, confine the scope of the migration to just the data marts, such as the OLAP DB part of a Teradata warehouse. However, this won't address the broader topics such as ETL migration and historical data migration. Address these topics in later phases of the project, once the migrated data mart layer is backfilled with the data and processes required to build them.
#### Lift and shift as-is versus a phased approach incorporating changes
This is a good fit for existing Teradata environments where a single data mart i
##### Phased approach incorporating modifications
-In cases where a legacy warehouse has evolved over a long time, you may need to re-engineer to maintain the required performance levels or to support new data like IoT steams. Migrate to Azure Synapse to get the benefits of a scalable cloud environment as part of the re-engineering process. Migration could include a change in the underlying data model, such as a move from an Inmon model to a data vault.
+In cases where a legacy warehouse has evolved over a long time, you may need to re-engineer to maintain the required performance levels or to support new data like IoT streams. Migrate to Azure Synapse to get the benefits of a scalable cloud environment as part of the re-engineering process. Migration could include a change in the underlying data model, such as a move from an Inmon model to a data vault.
Microsoft recommends moving the existing data model as-is to Azure (optionally using a VM Teradata instance in Azure) and using the performance and flexibility of the Azure environment to apply the re-engineering changes, leveraging Azure's capabilities to make the changes without impacting the existing source system.
When migrating from an on-premises Teradata environment, you can leverage the Az
With this approach, standard Teradata utilities such as Teradata Parallel Transporter can efficiently move the subset of Teradata tables being migrated onto the VM instance. Then, all migration tasks can take place within the Azure environment. This approach has several benefits: -- After the initial replication of data, the source system isn't impacted by the migration tasks
+- After the initial replication of data, the source system isn't impacted by the migration tasks.
-- The familiar Teradata interfaces, tools, and utilities are available within the Azure environment
+- The familiar Teradata interfaces, tools, and utilities are available within the Azure environment.
-- Once in the Azure environment, there are no potential issues with network bandwidth availability between the on-premises source system and the cloud target system
+- Once in the Azure environment, there are no potential issues with network bandwidth availability between the on-premises source system and the cloud target system.
-- Tools like Azure Data Factory can efficiently call utilities like Teradata Parallel Transporter to migrate data quickly and easily
+- Tools like Azure Data Factory can efficiently call utilities like Teradata Parallel Transporter to migrate data quickly and easily.
-- The migration process is orchestrated and controlled entirely within the Azure environment, keeping everything in a single place
+- The migration process is orchestrated and controlled entirely within the Azure environment, keeping everything in a single place.
#### Use Azure Data Factory to implement a metadata-driven migration
Querying within the Azure Synapse environment is limited to a single database. S
When migrating tables between different technologies, only the raw data and the metadata that describes it gets physically moved between the two environments. Other database elements from the source system&mdash;such as indexes&mdash;aren't migrated, as these may not be needed or may be implemented differently within the new target environment.
-However, it's important to understand where performance optimizations such as indexes have been used in the source environment, as this can indicate where to add performance optimization in the new target environment. For example, if a NUSI (Non-unique secondary index) has been created within the source Teradata environment, it may indicate that a non-clustered index should be created within the migrated Azure Synapse. Other native performance optimization techniques, such as table replication, may be more applicable than a straight 'like for like' index creation.
+However, it's important to understand where performance optimizations such as indexes have been used in the source environment, as this can indicate where to add performance optimization in the new target environment. For example, if a non-unique secondary index (NUSI) has been created within the source Teradata environment, it may indicate that a non-clustered index should be created in the migrated Azure Synapse database. Other native performance optimization techniques, such as table replication, may be more applicable than a straight 'like for like' index creation.
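As a minimal sketch of that mapping (the table and column names here are hypothetical), a nonclustered index on the column that previously carried the NUSI can be created in the dedicated SQL pool:

```sql
-- Hypothetical table migrated from Teradata where CustomerEmail previously carried a NUSI.
-- A nonclustered index can serve similar selective lookups in Azure Synapse.
CREATE INDEX ix_FactOrders_CustomerEmail
    ON dbo.FactOrders (CustomerEmail);
```

Whether an index, a different distribution choice, or a replicated table is the better fit depends on the query patterns observed in the source system.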
#### High availability for the database
The Azure environment also includes specific features for complex analytics on t
There are a few differences in SQL Data Manipulation Language (DML) syntax between Teradata SQL and Azure Synapse (T-SQL) that you should be aware of during migration: -- `QUALIFY`&mdash;Teradata supports the `QUALIFY` operator. For example:
+- `QUALIFY`: Teradata supports the `QUALIFY` operator. For example:
```sql SELECT col1
There are a few differences in SQL Data Manipulation Language (DML) syntax betwe
) WHERE rn = 1; ``` -- Date Arithmetic&mdash;Azure Synapse has operators such as `DATEADD` and `DATEDIFF` which can be used on `DATE` or `DATETIME` fields. Teradata supports direct subtraction on dates such as 'SELECT DATE1-DATE2 FROM...'
+- Date arithmetic: Azure Synapse has functions such as `DATEADD` and `DATEDIFF`, which can be used on `DATE` or `DATETIME` fields. Teradata supports direct subtraction on dates, such as `SELECT DATE1 - DATE2 FROM...`
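For illustration, the Teradata-style subtraction above might be rewritten with `DATEDIFF` as follows; the table name is a placeholder.

```sql
-- Teradata: SELECT DATE1 - DATE2 FROM MyTable;  (date subtraction returns a number of days)
-- Equivalent T-SQL in Azure Synapse:
SELECT DATEDIFF(day, DATE2, DATE1) AS day_difference
FROM dbo.MyTable;
```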
-- In Group by ordinal, explicitly provide the T-SQL column name.
+- `GROUP BY` ordinal: explicitly provide the T-SQL column name instead of the ordinal position.
-- Teradata supports LIKE ANY syntax such as:
+- `LIKE ANY`: Teradata supports `LIKE ANY` syntax such as:
```sql SELECT * FROM CUSTOMER
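A common T-SQL rewrite of the `LIKE ANY` pattern combines multiple `LIKE` predicates with `OR`; the `postcode` column and pattern values below are illustrative only.

```sql
-- Teradata LIKE ANY ('CV1%', 'CV2%') expressed in T-SQL
SELECT *
FROM CUSTOMER
WHERE postcode LIKE 'CV1%'
   OR postcode LIKE 'CV2%';
```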
Azure Synapse doesn't support trigger creation, but trigger creation can be impl
##### Sequences
-With Azure Synapse, sequences are handled in a similar way to Teradata. Use [IDENTITY](/sql/t-sql/statements/create-table-transact-sql-identity-property?msclkid=8ab663accfd311ec87a587f5923eaa7b) columns or using SQL code to create the next sequence number in a series.
+With Azure Synapse, sequences are handled in a similar way to Teradata. Use [IDENTITY](/sql/t-sql/statements/create-table-transact-sql-identity-property?msclkid=8ab663accfd311ec87a587f5923eaa7b) columns or SQL code to create the next sequence number in a series.
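As a brief sketch (the table definition is hypothetical), an `IDENTITY` column can take over surrogate key generation from a Teradata sequence:

```sql
CREATE TABLE dbo.DimCustomer
(
    CustomerKey  INT IDENTITY(1, 1) NOT NULL,  -- system-generated surrogate key
    CustomerId   NVARCHAR(20)  NOT NULL,
    CustomerName NVARCHAR(100) NULL
)
WITH
(
    -- An IDENTITY column can't be used as the distribution column in a dedicated SQL pool
    DISTRIBUTION = HASH(CustomerId),
    CLUSTERED COLUMNSTORE INDEX
);
```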
### Extract metadata and data from a Teradata environment
Call Teradata Parallel Transporter directly from Azure Data Factory. This is the
Recommended data formats for the extracted data include delimited text files (also called Comma Separated Values or CSV), Optimized Row Columnar (ORC), or Parquet files.
-For more detailed information on the process of migrating data and ETL from a Teradata environment, see Section 2.1. Data Migration ETL and Load from Teradata.
+For more detailed information on the process of migrating data and ETL from a Teradata environment, see [Data migration, ETL, and load for Teradata migration](2-etl-load-migration-considerations.md).
## Performance recommendations for Teradata migrations
This section highlights lower-level implementation differences between Teradata
Azure enables the specification of data distribution methods for individual tables. The aim is to reduce the amount of data that must be moved between processing nodes when executing a query.
-For large table-large table joins, hash distributing one or, ideally, both tables on one of the join columns&mdash;which has a wide range of values to help ensure an even distribution. Perform join processing locally, as the data rows to be joined will already be collocated on the same processing node.
+For large table-large table joins, hash distribute one, or ideally both, tables on one of the join columns that has a wide range of values to help ensure an even distribution. Perform join processing locally, as the data rows to be joined will already be collocated on the same processing node.
Another way to achieve local joins for small table-large table joins&mdash;typically dimension table to fact table in a star schema model&mdash;is to replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](../../sql-data-warehouse/design-guidance-for-replicated-tables.md))&mdash;in which case, the hash distribution approach as described above is more appropriate. For more information, see [Distributed tables design](../../sql-data-warehouse/sql-data-warehouse-tables-distribute.md).
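The following DDL sketch shows both patterns in a dedicated SQL pool; the table and column names are placeholders.

```sql
-- Large fact table: hash distribute on a join column with many distinct values
CREATE TABLE dbo.FactSales
(
    SalesKey    BIGINT        NOT NULL,
    ProductKey  INT           NOT NULL,
    SalesAmount DECIMAL(18,2) NOT NULL
)
WITH (DISTRIBUTION = HASH(ProductKey), CLUSTERED COLUMNSTORE INDEX);

-- Small dimension table: replicate a copy to every compute node to enable local joins
CREATE TABLE dbo.DimProduct
(
    ProductKey  INT           NOT NULL,
    ProductName NVARCHAR(100) NULL
)
WITH (DISTRIBUTION = REPLICATE);
```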
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/2-etl-load-migration-considerations.md
Title: "Data migration, ETL, and load for Teradata migrations"
-description: Learn how to plan your data migration from Teradata to Azure Synapse to minimize the risk and impact on users.
+description: Learn how to plan your data migration from Teradata to Azure Synapse Analytics to minimize the risk and impact on users.
If logging is enabled and the log history is accessible, other information, such
This question comes up frequently, since companies often want to lower the impact of changes on the data warehouse data model to improve agility, and they see a migration as an opportunity to modernize that model. This approach carries a higher risk because it could impact the ETL jobs that populate the data warehouse and those that feed dependent data marts from it. Because of that risk, it's usually better to redesign on this scale after the data warehouse migration.
-Even if a data model change is an intended part of the overall migration, it's good practice to migrate the existing model as-is to the new environment (Azure Synapse in this case), rather than do any re-engineering on the new platform during migration. This approach has the advantage of minimizing the impact on existing production systems, while also leveraging the performance and elastic scalability of the Azure platform for one-off re-engineering tasks.
+Even if a data model change is an intended part of the overall migration, it's good practice to migrate the existing model as-is to the new environment (Azure Synapse Analytics in this case), rather than do any re-engineering on the new platform during migration. This approach has the advantage of minimizing the impact on existing production systems, while also leveraging the performance and elastic scalability of the Azure platform for one-off re-engineering tasks.
When migrating from Teradata, consider creating a Teradata environment in a VM within Azure as a stepping stone in the migration process.
There's another potential benefit to this approach: by implementing the aggregat
The primary drivers for choosing a virtual data mart implementation over a physical data mart are: -- More agility&mdash;a virtual data mart is easier to change than physical tables and the associated ETL processes.
+- More agility, since a virtual data mart is easier to change than physical tables and the associated ETL processes.
-- Lower total cost of ownership&mdash;a virtualized implementation requires fewer data stores and copies of data.
+- Lower total cost of ownership, since a virtualized implementation requires fewer data stores and copies of data.
- Elimination of ETL jobs to migrate and simplify data warehouse architecture in a virtualized environment. -- Performance&mdash;although physical data marts have historically been more performant, virtualization products now implement intelligent caching techniques to mitigate.
+- Performance, since virtualization products now implement intelligent caching techniques that mitigate the historical performance advantage of physical data marts.
### Data migration from Teradata
You can get an accurate number for the volume of data to be migrated for a give
> [!TIP] > Plan the approach to ETL migration ahead of time and leverage Azure facilities where appropriate.
-For ETL/ELT processing, legacy Teradata data warehouses may use custom-built scripts using Teradata utilities such as BTEQ and Teradata Parallel Transporter (TPT), or third-party ETL tools such as Informatica or Ab Initio. Sometimes, Teradata data warehouses use a combination of ETL and ELT approaches that's evolved over time. When planning a migration to Azure Synapse, you need to determine the best way to implement the required ETL/ELT processing in the new environment while minimizing the cost and risk involved. To learn more about ETL and ELT processing, see [ELT vs ETL Design approach](../../sql-data-warehouse/design-elt-data-loading.md).
+For ETL/ELT processing, legacy Teradata data warehouses may use custom-built scripts using Teradata utilities such as BTEQ and Teradata Parallel Transporter (TPT), or third-party ETL tools such as Informatica or Ab Initio. Sometimes, Teradata data warehouses use a combination of ETL and ELT approaches that's evolved over time. When planning a migration to Azure Synapse, you need to determine the best way to implement the required ETL/ELT processing in the new environment while minimizing the cost and risk involved. To learn more about ETL and ELT processing, see [ELT vs ETL design approach](../../sql-data-warehouse/design-elt-data-loading.md).
The following sections discuss migration options and make recommendations for various use cases. This flowchart summarizes one approach:
The following sections discuss migration options and make recommendations for va
The first step is always to build an inventory of ETL/ELT processes that need to be migrated. As with other steps, it's possible that the standard 'built-in' Azure features make it unnecessary to migrate some existing processes. For planning purposes, it's important to understand the scale of the migration to be performed.
-In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](../../../data-factory/concepts-pipelines-activities.md?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
+In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](../../../data-factory/concepts-pipelines-activities.md?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
In the Teradata environment, some or all ETL processing may be performed by custom scripts using Teradata-specific utilities like BTEQ and TPT. In this case, your approach should be to re-engineer using Data Factory.
One way of testing Teradata SQL for compatibility with Azure Synapse is to captu
### Use third-party ETL tools
-As described in the previous section, in many cases the existing legacy data warehouse system will already be populated and maintained by third-party ETL products. For a list of Microsoft data integration partners for Azure Synapse, see [Data Integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
+As described in the previous section, in many cases the existing legacy data warehouse system will already be populated and maintained by third-party ETL products. For a list of Microsoft data integration partners for Azure Synapse, see [Data integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
## Data loading from Teradata
When migrating data from a Teradata data warehouse, there are some basic questio
Once the database tables to be migrated have been created in Azure Synapse, you can move the data to populate those tables out of the legacy Teradata system and load it into the new environment. There are two basic approaches: -- **File Extract**&mdash;Extract the data from the Teradata tables to flat files, normally in CSV format, via BTEQ, Fast Export, or Teradata Parallel Transporter (TPT). Use TPT whenever possible since it's the most efficient in terms of data throughput.
+- **File extract**: Extract the data from the Teradata tables to flat files, normally in CSV format, via BTEQ, Fast Export, or Teradata Parallel Transporter (TPT). Use TPT whenever possible since it's the most efficient in terms of data throughput.
This approach requires space to land the extracted data files. The space could be local to the Teradata source database (if sufficient storage is available), or remote in Azure Blob Storage. The best performance is achieved when a file is written locally, since that avoids network overhead.
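Once the extracted files have landed in Azure storage, a `COPY INTO` statement is one way to load them into the target dedicated SQL pool. The statement below is a hedged sketch: the storage account, path, target table, and credential type are placeholders that depend on your environment.

```sql
COPY INTO dbo.FactSales
FROM 'https://<storageaccount>.blob.core.windows.net/landing/teradata/factsales/*.csv'
WITH
(
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0A',
    FIRSTROW = 2,                                 -- skip the header row
    CREDENTIAL = (IDENTITY = 'Managed Identity')  -- or a SAS/storage key credential
);
```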
Once the database tables to be migrated have been created in Azure Synapse, you
Microsoft provides different options to move large volumes of data, including AzCopy for moving files across the network into Azure Storage, Azure ExpressRoute for moving bulk data over a private network connection, and Azure Data Box where the files are moved to a physical storage device that's then shipped to an Azure data center for loading. For more information, see [data transfer](/azure/architecture/data-guide/scenarios/data-transfer). -- **Direct extract and load across network**&mdash;The target Azure environment sends a data extract request, normally via a SQL command, to the legacy Teradata system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to 'land' the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Teradata database and the Azure environment. For very large data volumes this approach may not be practical.
+- **Direct extract and load across network**: The target Azure environment sends a data extract request, normally via a SQL command, to the legacy Teradata system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to 'land' the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Teradata database and the Azure environment. For very large data volumes this approach may not be practical.
There's also a hybrid approach that uses both methods. For example, you can use the direct network extract approach for smaller dimension tables and samples of the larger fact tables to quickly provide a test environment in Azure Synapse. For the large volume historical fact tables, you can use the file extract and transfer approach using Azure Data Box.
Other benefits of this approach include reduced impact on the Teradata system du
#### Which tools can be used?
-The task of data transformation and movement is the basic function of all ETL products. If one of these products is already in use in the existing Teradata environment, then using the existing ETL tool may simplify data migration data from Teradata to Azure Synapse. This approach assumes that the ETL tool supports Azure Synapse as a target environment. For more information on tools that support Azure Synapse, see [Data integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
+The task of data transformation and movement is the basic function of all ETL products. If one of these products is already in use in the existing Teradata environment, then using the existing ETL tool may simplify data migration from Teradata to Azure Synapse. This approach assumes that the ETL tool supports Azure Synapse as a target environment. For more information on tools that support Azure Synapse, see [Data integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
If you're using an ETL tool, consider running that tool within the Azure environment to benefit from Azure cloud performance, scalability, and cost, and free up resources in the Teradata data center. Another benefit is reduced data movement between the cloud and on-premises environments.
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/3-security-access-operations.md
Title: "Security, access, and operations for Teradata migrations"
-description: Learn about authentication, users, roles, permissions, monitoring, and auditing, and workload management in Azure Synapse and Teradata.
+description: Learn about authentication, users, roles, permissions, monitoring, auditing, and workload management in Azure Synapse Analytics and Teradata.
This article is part three of a seven part series that provides guidance on how
## Security considerations
-This article discusses connection methods for existing legacy Teradata environments and how they can be migrated to Azure Synapse with minimal risk and user impact.
+This article discusses connection methods for existing legacy Teradata environments and how they can be migrated to Azure Synapse Analytics with minimal risk and user impact.
-We assume there's a requirement to migrate the existing methods of connection and user, role, and permission structure as is. If this isn't the case, then you can use Azure utilities such as Azure portal to create and manage a new security regime.
+We assume there's a requirement to migrate the existing methods of connection and user, role, and permission structure as is. If this isn't the case, then you can use Azure utilities from the Azure portal to create and manage a new security regime.
-For more information on the [Azure Synapse security](../../sql-data-warehouse/sql-data-warehouse-overview-manage-security.md#authorization) options see [Security whitepaper](../../guidance/security-white-paper-introduction.md).
+For more information on the [Azure Synapse security](../../sql-data-warehouse/sql-data-warehouse-overview-manage-security.md#authorization) options, see [Security whitepaper](../../guidance/security-white-paper-introduction.md).
### Connection and authentication
-#### Teradata Authorization Options
+#### Teradata authorization options
> [!TIP] > Authentication in both Teradata and Azure Synapse can be "in database" or through external methods. Teradata supports several mechanisms for connection and authorization. Valid mechanism values are: -- **TD1**&mdash;selects Teradata 1 as the authentication mechanism. Username and password are required.
+- **TD1**, which selects Teradata 1 as the authentication mechanism. Username and password are required.
-- **TD2**&mdash;selects Teradata 2 as the authentication mechanism. Username and password are required.
+- **TD2**, which selects Teradata 2 as the authentication mechanism. Username and password are required.
-- **TDNEGO**&mdash;selects one of the authentication mechanisms automatically based on the policy, without user involvement.
+- **TDNEGO**, which selects one of the authentication mechanisms automatically based on the policy, without user involvement.
-- **LDAP**&mdash;selects Lightweight Directory Access Protocol (LDAP) as the Authentication Mechanism. The application provides the username and password.
+- **LDAP**, which selects Lightweight Directory Access Protocol (LDAP) as the authentication mechanism. The application provides the username and password.
-- **KRB5**&mdash;selects Kerberos (KRB5) on Windows clients working with Windows servers. To log on using KRB5, the user needs to supply a domain, username, and password. The domain is specified by setting the username to `MyUserName@MyDomain`.
+- **KRB5**, which selects Kerberos (KRB5) on Windows clients working with Windows servers. To log on using KRB5, the user needs to supply a domain, username, and password. The domain is specified by setting the username to `MyUserName@MyDomain`.
-- **NTLM**&mdash;selects NTLM on Windows clients working with Windows servers. The application provides the username and password.
+- **NTLM**, which selects NTLM on Windows clients working with Windows servers. The application provides the username and password.
Kerberos (KRB5), Kerberos Compatibility (KRB5C), NT LAN Manager (NTLM), and NT LAN Manager Compatibility (NTLMC) are for Windows only.
Use the table `AccessRightsAbbv` to look up the full text of the access right, a
| Teradata permission name | Teradata type | Azure Synapse equivalent | |||--| | **ABORT SESSION** | AS | KILL DATABASE CONNECTION |
-| **ALTER EXTERNAL PROCEDURE** | AE | \*\*\*\* |
+| **ALTER EXTERNAL PROCEDURE** | AE | <sup>4</sup> |
| **ALTER FUNCTION** | AF | ALTER FUNCTION | | **ALTER PROCEDURE** | AP | ALTER PROCEDURE | | **CHECKPOINT** | CP | CHECKPOINT | | **CREATE AUTHORIZATION** | CA | CREATE LOGIN | | **CREATE DATABASE** | CD | CREATE DATABASE |
-| **CREATE EXTERNAL** **PROCEDURE** | CE | \*\*\*\* |
+| **CREATE EXTERNAL** **PROCEDURE** | CE | <sup>4</sup> |
| **CREATE FUNCTION** | CF | CREATE FUNCTION |
-| **CREATE GLOP** | GC | \*\*\* |
-| **CREATE MACRO** | CM | CREATE PROCEDURE \*\* |
+| **CREATE GLOP** | GC | <sup>3</sup> |
+| **CREATE MACRO** | CM | CREATE PROCEDURE <sup>2</sup> |
| **CREATE OWNER PROCEDURE** | OP | CREATE PROCEDURE | | **CREATE PROCEDURE** | PC | CREATE PROCEDURE |
-| **CREATE PROFILE** | CO | CREATE LOGIN \* |
+| **CREATE PROFILE** | CO | CREATE LOGIN <sup>1</sup> |
| **CREATE ROLE** | CR | CREATE ROLE | | **DROP DATABASE** | DD | DROP DATABASE| | **DROP FUNCTION** | DF | DROP FUNCTION |
-| **DROP GLOP** | GD | \*\*\* |
-| **DROP MACRO** | DM | DROP PROCEDURE \*\* |
+| **DROP GLOP** | GD | <sup>3</sup> |
+| **DROP MACRO** | DM | DROP PROCEDURE <sup>2</sup> |
| **DROP PROCEDURE** | PD | DELETE PROCEDURE |
-| **DROP PROFILE** | DO | DROP LOGIN \* |
+| **DROP PROFILE** | DO | DROP LOGIN <sup>1</sup> |
| **DROP ROLE** | DR | DELETE ROLE | | **DROP TABLE** | DT | DROP TABLE |
-| **DROP TRIGGER** | DG | \*\*\* |
+| **DROP TRIGGER** | DG | <sup>3</sup> |
| **DROP USER** | DU | DROP USER | | **DROP VIEW** | DV | DROP VIEW |
-| **DUMP** | DP | \*\*\*\* |
+| **DUMP** | DP | <sup>4</sup> |
| **EXECUTE** | E | EXECUTE | | **EXECUTE FUNCTION** | EF | EXECUTE | | **EXECUTE PROCEDURE** | PE | EXECUTE |
-| **GLOP MEMBER** | GM | \*\*\* |
+| **GLOP MEMBER** | GM | <sup>3</sup> |
| **INDEX** | IX | CREATE INDEX | | **INSERT** | I | INSERT |
-| **MONRESOURCE** | MR | \*\*\*\*\* |
-| **MONSESSION** | MS | \*\*\*\*\* |
-| **OVERRIDE DUMP CONSTRAINT** | OA | \*\*\*\* |
-| **OVERRIDE RESTORE CONSTRAINT** | OR | \*\*\*\* |
+| **MONRESOURCE** | MR | <sup>5</sup> |
+| **MONSESSION** | MS | <sup>5</sup> |
+| **OVERRIDE DUMP CONSTRAINT** | OA | <sup>4</sup> |
+| **OVERRIDE RESTORE CONSTRAINT** | OR | <sup>4</sup> |
| **REFERENCES** | RF | REFERENCES |
-| **REPLCONTROL** | RO | \*\*\*\*\* |
-| **RESTORE** | RS | \*\*\*\* |
+| **REPLCONTROL** | RO | <sup>5</sup> |
+| **RESTORE** | RS | <sup>4</sup> |
| **SELECT** | R | SELECT |
-| **SETRESRATE** | SR | \*\*\*\*\* |
-| **SETSESSRATE** | SS | \*\*\*\*\* |
-| **SHOW** | SH | \*\*\* |
+| **SETRESRATE** | SR | <sup>5</sup> |
+| **SETSESSRATE** | SS | <sup>5</sup> |
+| **SHOW** | SH | <sup>3</sup> |
| **UPDATE** | U | UPDATE |
-Comments on the `AccessRightsAbbv` table:
+`AccessRightsAbbv` table notes:
-\* Teradata `PROFILE` is functionally equivalent to `LOGIN` in Azure Synapse
+1. Teradata `PROFILE` is functionally equivalent to `LOGIN` in Azure Synapse.
-\*\* In Teradata there are macros and stored procedures. The following table summarizes the differences between them:
+1. The following table summarizes the differences between macros and stored procedures in Teradata. In Azure Synapse, procedures provide the functionality described in the table.
- | MACRO | Stored procedure |
- |-|-|
- | Contains SQL | Contains SQL |
- | May contain BTEQ dot commands | Contains comprehensive SPL |
- | May receive parameter values passed to it | May receive parameter values passed to it |
- | May retrieve one or more rows | Must use a cursor to retrieve more than one row |
- | Stored in DBC PERM space | Stored in DATABASE or USER PERM |
- | Returns rows to the client | May return one or more values to client as parameters |
+ | Macro | Stored procedure |
+ |-|-|
+ | Contains SQL | Contains SQL |
+ | May contain BTEQ dot commands | Contains comprehensive SPL |
+ | May receive parameter values passed to it | May receive parameter values passed to it |
+ | May retrieve one or more rows | Must use a cursor to retrieve more than one row |
+ | Stored in DBC PERM space | Stored in DATABASE or USER PERM |
+ | Returns rows to the client | May return one or more values to client as parameters |
-In Azure Synapse, procedures can be used to provide this functionality.
+1. `SHOW`, `GLOP`, and `TRIGGER` have no direct equivalent in Azure Synapse.
-\*\*\* `SHOW`, `GLOP`, and `TRIGGER` have no direct equivalent in Azure Synapse.
+1. These features are managed automatically by the system in Azure Synapse. See [Operational considerations](#operational-considerations).
-\*\*\*\* These features are managed automatically by the system in Azure Synapse&mdash;see [Operational considerations](#operational-considerations).
+1. In Azure Synapse, these features are handled outside of the database.
-\*\*\*\*\* In Azure Synapse, these features are handled outside of the database.
-
-Refer to [Azure Synapse Analytics security permissions](../../guidance/security-white-paper-introduction.md).
+For more information about access rights in Azure Synapse, see [Azure Synapse Analytics security permissions](../../guidance/security-white-paper-introduction.md).
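As a small, hedged example of how Teradata role-based grants translate, the following T-SQL creates a role in Azure Synapse, grants it read access to a schema, and adds a user; the role, schema, and user names are illustrative.

```sql
CREATE ROLE report_readers;

-- Grant read access on all objects in the dbo schema to the role
GRANT SELECT ON SCHEMA::dbo TO report_readers;

-- Add a migrated user to the role
ALTER ROLE report_readers ADD MEMBER [migrated_bi_user];
```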
## Operational considerations
Purge these tables when the associated removable media is expired and overwritte
- `DBC.RCConfiguration`: archive/recovery config -- `DBC.RCMedia`: VolSerial for Archive/recovery
+- `DBC.RCMedia`: VolSerial for archive/recovery
Azure Synapse has an option to automatically create statistics so that they can be used as needed. Perform defragmentation of indexes and data blocks manually, on a scheduled basis, or automatically. Leveraging native built-in Azure capabilities can reduce the effort required in a migration exercise.
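A minimal sketch of that kind of routine maintenance in a dedicated SQL pool is shown below; the table name is a placeholder and the right schedule depends on your load patterns.

```sql
-- Check whether automatic statistics creation is enabled
SELECT name, is_auto_create_stats_on
FROM sys.databases;

-- Refresh statistics on a heavily loaded table
UPDATE STATISTICS dbo.FactSales;

-- Rebuild indexes to restore columnstore quality after large loads
ALTER INDEX ALL ON dbo.FactSales REBUILD;
```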
Teradata provides several tools to monitor the operation including Teradata View
Database administrators can use Teradata Viewpoint to determine system status, trends, and individual query status. By observing trends in system usage, system administrators are better able to plan project implementations, batch jobs, and maintenance to avoid peak periods of use. Business users can use Teradata Viewpoint to quickly access the status of reports and queries and drill down into details. > [!TIP]
-> Azure portal provides a UI to manage monitoring and auditing tasks for all Azure data and processes.
+> The Azure portal provides a UI to manage monitoring and auditing tasks for all Azure data and processes.
Similarly, Azure Synapse provides a rich monitoring experience within the Azure portal to provide insights into your data warehouse workload. The Azure portal is the recommended tool when monitoring your data warehouse as it provides configurable retention periods, alerts, recommendations, and customizable charts and dashboards for metrics and logs.
The portal also enables integration with other Azure monitoring services such as
> [!TIP] > Low-level and system-wide metrics are automatically logged in Azure Synapse.
-Resource utilization statistics for the Azure Synapse are automatically logged within the system. The metrics include usage statistics for CPU, memory, cache, I/O and temporary workspace for each query as well as connectivity information (such as failed connection attempts).
+Resource utilization statistics for Azure Synapse are automatically logged within the system. The metrics for each query include usage statistics for CPU, memory, cache, I/O, and temporary workspace, as well as connectivity information like failed connection attempts.
Azure Synapse provides a set of [Dynamic Management Views](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md?msclkid=3e6eefbccfe211ec82d019ada29b1834) (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
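For example, a query along the following lines against `sys.dm_pdw_exec_requests` surfaces the longest-running active requests; adjust the columns and filter to your needs.

```sql
-- Show active requests in the dedicated SQL pool, longest running first
SELECT request_id, session_id, [status], submit_time, total_elapsed_time, command
FROM sys.dm_pdw_exec_requests
WHERE [status] NOT IN ('Completed', 'Failed', 'Cancelled')
ORDER BY total_elapsed_time DESC;
```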
For more information, see [Azure Synapse operations and management options](/azu
### High Availability (HA) and Disaster Recovery (DR)
-Teradata implements features such as Fallback, Archive Restore Copy utility (ARC), and Data Stream Architecture (DSA) to provide protection against data loss and high availability (HA) via replication and archive of data. Disaster Recovery options include Dual-Active systems, DR as a service, or a replacement system depending on the recovery time requirement.
+Teradata implements features such as Fallback, Archive Restore Copy utility (ARC), and Data Stream Architecture (DSA) to provide protection against data loss and high availability (HA) via replication and archive of data. Disaster Recovery (DR) options include Dual Active Solution, DR as a service, or a replacement system depending on the recovery time requirement.
> [!TIP] > Azure Synapse creates snapshots automatically to ensure fast recovery times.
-Azure Synapse uses database snapshots to provide high availability of the warehouse. A data warehouse snapshot creates a restore point that can be used to recover or copy a data warehouse to a previous state. Since Azure Synapse is a distributed system, a data warehouse snapshot consists of many files that are in Azure storage. Snapshots capture incremental changes from the data stored in your data warehouse.
+Azure Synapse uses database snapshots to provide high availability of the warehouse. A data warehouse snapshot creates a restore point that can be used to recover or copy a data warehouse to a previous state. Since Azure Synapse is a distributed system, a data warehouse snapshot consists of many files that are in Azure Storage. Snapshots capture incremental changes from the data stored in your data warehouse.
Azure Synapse automatically takes snapshots throughout the day creating restore points that are available for seven days. This retention period can't be changed. Azure Synapse supports an eight-hour recovery point objective (RPO). A data warehouse can be restored in the primary region from any one of the snapshots taken in the past seven days.
As well as the snapshots described previously, Azure Synapse also performs as st
### Workload management > [!TIP]
-> In a production data warehouse, there are typically mixed workloads which have different resource usage characteristics running concurrently.
+> In a production data warehouse, there are typically mixed workloads with different resource usage characteristics running concurrently.
A workload is a class of database requests with common traits whose access to the database can be managed with a set of rules. Workloads are useful for:
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/4-visualization-reporting.md
Title: "Visualization and reporting for Teradata migrations"
-description: Learn about Microsoft and third-party BI tools for reports and visualizations in Azure Synapse compared to Teradata.
+description: Learn about Microsoft and third-party BI tools for reports and visualizations in Azure Synapse Analytics compared to Teradata.
Almost every organization accesses data warehouses and data marts by using a ran
- Custom analytic applications that have embedded BI tool functionality inside the application. -- Operational applications that request BI on demand by invoking queries and reports as-a-service on a BI platform, that in-turn queries data in the data warehouse or data marts that are being migrated.
+- Operational applications that request BI on demand by invoking queries and reports as-a-service on a BI platform, which in turn queries data in the data warehouse or data marts that are being migrated.
-- Interactive data science development tools, for instance, Azure Synapse Spark Notebooks, Azure Machine Learning, RStudio, Jupyter notebooks.
+- Interactive data science development tools, for instance, Azure Synapse Spark Notebooks, Azure Machine Learning, RStudio, Jupyter Notebooks.
The migration of visualization and reporting as part of a data warehouse migration program means that all the existing queries, reports, and dashboards generated and issued by these tools and applications need to run on Azure Synapse and yield the same results as they did in the original data warehouse prior to migration. > [!TIP] > Existing users, user groups, roles, and assignments of access security privileges need to be migrated first for migration of reports and visualizations to succeed.
-To make that happen, everything that BI tools and applications depend on still needs to work once you migrate your data warehouse schema and data to Azure Synapse. That includes the obvious and the not so obvious&mdash;such as access and security. While access and security are discussed in [another guide](3-security-access-operations.md) in this series, it's a prerequisite to accessing data in the migrated system. Access and security include ensuring that:
+To make that happen, everything that BI tools and applications depend on still needs to work once you migrate your data warehouse schema and data to Azure Synapse. That includes the obvious and the not so obvious&mdash;such as access and security. Access and security are important considerations for data access in the migrated system, and are specifically discussed in [another guide](3-security-access-operations.md) in this series. When you address access and security, ensure that:
- Authentication is migrated to let users sign in to the data warehouse and data mart databases on Azure Synapse.
To make that happen, everything that BI tools and applications depend on still n
In addition, all the required data needs to be migrated to ensure the same results appear in the same reports and dashboards that now query data on Azure Synapse. User expectation will undoubtedly be that migration is seamless and there will be no surprises that destroy their confidence in the migrated system on Azure Synapse. So, this is an area where you must take extreme care and communicate as much as possible to allay any fears in your user base. Their expectations are that: -- Table structure will be the same if directly referred to in queries
+- Table structure will be the same if directly referred to in queries.
-- Table and column names remain the same if directly referred to in queries; for instance, so that calculated fields defined on columns in BI tools don't fail when aggregate reports are produced
+- Table and column names remain the same if directly referred to in queries; for instance, so that calculated fields defined on columns in BI tools don't fail when aggregate reports are produced.
-- Historical analysis remains the same
+- Historical analysis remains the same.
-- Data types should, if possible, remain the same
+- Data types should, if possible, remain the same.
-- Query behavior remains the same
+- Query behavior remains the same.
-- ODBC / JDBC drivers are tested to make sure nothing has changed in terms of query behavior
+- ODBC/JDBC drivers are tested to make sure nothing has changed in terms of query behavior.
> [!TIP] > Views and SQL queries using proprietary SQL query extensions are likely to result in incompatibilities that impact BI reports and dashboards.
If your existing BI tools run on premises, ensure that they're able to connect t
There's a lot to think about here, so let's look at all this in more detail. > [!TIP]
-> A lift and shift data warehouse migration are likely to minimize any disruption to reports, dashboards, and other visualizations.
+> A lift and shift data warehouse migration is likely to minimize any disruption to reports, dashboards, and other visualizations.
-## Minimize the impact of data warehouse migration on BI tools and reports using data virtualization
+## Minimize the impact of data warehouse migration on BI tools and reports by using data virtualization
> [!TIP] > Data virtualization allows you to shield business users from structural changes during migration so that they remain unaware of changes.
This breaks the dependency between business users utilizing self-service BI tool
> [!TIP] > Schema alterations to tune your data model for Azure Synapse can be hidden from users.
-By introducing data virtualization, any schema alternations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts, and any virtual tables would need to be changed so that users remain unaware of those changes and unaware of the migration. [Microsoft partners](../../partner/data-integration.md) provides a useful data virtualization software.
+By introducing data virtualization, any schema alterations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts, and any virtual tables would need to be changed so that users remain unaware of those changes and unaware of the migration. [Microsoft partners](../../partner/data-integration.md) provide useful data virtualization software.
## Identify high priority reports to migrate first
A key question when migrating your existing reports and dashboards to Azure Syna
These factors are discussed in more detail later in this article.
-Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer up like-for-like results, simply by pointing your BI tool(s) at Azure Synapse, instead of your legacy data warehouse system, then everyone benefits. Therefore, if it's that straight forward and there's no reliance on legacy system proprietary SQL extensions, then there's no doubt that the above ease of migration option breeds confidence.
+Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer up like-for-like results, simply by pointing your BI tool(s) at Azure Synapse, instead of your legacy data warehouse system, then everyone benefits. Therefore, if it's that straightforward and there's no reliance on legacy system proprietary SQL extensions, then there's no doubt that the above ease of migration option breeds confidence.
### Migrate reports based on usage Usage is interesting, since it's an indicator of business value. Reports and dashboards that are never used clearly aren't contributing to supporting any decisions and don't currently offer any value. So, do you have any mechanism for finding out which reports and dashboards are currently not used? Several BI tools provide statistics on usage, which would be an obvious place to start.
-If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator to the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you've and defining their business purpose and usage statistics.
+If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator to the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you have and defining their business purpose and usage statistics.
-For those that aren't used at all, it's an appropriate time to seek a business decision, to determine if it necessary to de-commission those reports to optimize your migration efforts. A key question worth asking when deciding to de-commission unused reports is: are they unused because people don't know they exist, or is it because they offer no business value, or have they been superseded by others?
+For those that aren't used at all, it's an appropriate time to seek a business decision to determine whether it's necessary to decommission those reports to optimize your migration efforts. A key question worth asking when deciding to decommission unused reports is: are they unused because people don't know they exist, or is it because they offer no business value, or have they been superseded by others?
### Migrate reports based on business value
-Usage on its own isn't a clear indicator of business value. There needs to be a deeper business context to determine the value to the business. In an ideal world, we would like to know the contribution of the insights produced in a report to the bottom line of the business. That's exceedingly difficult to determine, since every decision made, and its dependency on the insights in a specific report, would need to be recorded along with the contribution that each decision makes to the bottom line of the business. You would also need to do this overtime.
+Usage on its own isn't a clear indicator of business value. There needs to be a deeper business context to determine the value to the business. In an ideal world, we would like to know the contribution of the insights produced in a report to the bottom line of the business. That's exceedingly difficult to determine, since every decision made, and its dependency on the insights in a specific report, would need to be recorded along with the contribution that each decision makes to the bottom line of the business. You would also need to do this over time.
This level of detail is unlikely to be available in most organizations. One way in which you can get deeper on business value to drive migration order is to look at alignment with business strategy. A business strategy set by your executive typically lays out strategic business objectives, key performance indicators (KPIs), and KPI targets that need to be achieved and who is accountable for achieving them. In that sense, classifying your reports and dashboards by strategic business objectives&mdash;for example, reduce fraud, improve customer engagement, and optimize business operations&mdash;will help understand business purpose and show what objective(s), specific reports, and dashboards these are contributing to. Reports and dashboards associated with high priority objectives in the business strategy can then be highlighted so that migration is focused on delivering business value in a strategic high priority area.
-It's also worthwhile to classify reports and dashboards as operational, tactical, or strategic, to understand the level in the business where they're used. Delivering strategic business objectives contribution is required at all these levels. Knowing which reports and dashboards are used, at what level, and what objectives they're associated with, helps to focus migration on high priority business value that will drive the company forward. Business contribution of reports and dashboards is needed to understand this, perhaps like what is shown in the following **Business strategy objective** table.
+It's also worthwhile to classify reports and dashboards as operational, tactical, or strategic, to understand the level in the business where they're used. Delivering strategic business objectives requires contribution at all these levels. Knowing which reports and dashboards are used, at what level, and what objectives they're associated with, helps to focus migration on high priority business value that will drive the company forward. Business contribution of reports and dashboards is needed to understand this, perhaps like what is shown in the following **Business strategy objective** table.
| **Level** | **Report / dashboard name** | **Business purpose** | **Department used** | **Usage frequency** | **Business priority** | |-|-|-|-|-|-|
While this may seem too time consuming, you need a mechanism to understand the c
> [!TIP] > Data migration strategy could also dictate which reports and visualizations get migrated first.
-If your migration strategy is based on migrating "data marts first", clearly, the order of data mart migration will have a bearing on which reports and dashboards can be migrated first to run on Azure Synapse. Again, this is likely to be a business-value-related decision. Prioritizing which data marts are migrated first reflects business priorities. Metadata discovery tools can help you here by showing you which reports rely on data in which data mart tables.
+If your migration strategy is based on migrating data marts first, the order of data mart migration will have a bearing on which reports and dashboards can be migrated first to run on Azure Synapse. Again, this is likely to be a business-value-related decision. Prioritizing which data marts are migrated first reflects business priorities. Metadata discovery tools can help you here by showing you which reports rely on data in which data mart tables.
## Migration incompatibility issues that can impact reports and visualizations
BI tool reports and dashboards, and other visualizations, are produced by issuin
- Data types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse. For example, Teradata Geospatial or Interval data types.
-In many cases, where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it may be able to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same. Either way, it will need refactoring.
+In many cases, where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it's possible to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same. Either way, it will need refactoring.
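As a hypothetical illustration of that kind of refactoring, a Teradata `INTERVAL` value could be stored in Azure Synapse as an integer number of seconds, with interval arithmetic expressed through `DATEADD`; the table and column names are invented for this sketch.

```sql
-- Target table: the original INTERVAL column becomes an integer duration in seconds
CREATE TABLE dbo.CallRecord
(
    CallKey         BIGINT    NOT NULL,
    CallStart       DATETIME2 NOT NULL,
    DurationSeconds INT       NOT NULL   -- held an INTERVAL value in Teradata
)
WITH (DISTRIBUTION = HASH(CallKey), CLUSTERED COLUMNSTORE INDEX);

-- Interval arithmetic: derive the call end time from the stored duration
SELECT CallKey,
       DATEADD(second, DurationSeconds, CallStart) AS CallEnd
FROM dbo.CallRecord;
```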
> [!TIP] > Querying the system catalog of your legacy warehouse DBMS is a quick and straightforward way to identify schema incompatibilities with Azure Synapse.
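For instance, a query such as the following against the Teradata `DBC.ColumnsV` catalog view lists the column data types in scope so they can be reviewed against the types Azure Synapse supports; the database name is a placeholder.

```sql
-- Run on the source Teradata system: inventory column data types for review
SELECT DatabaseName, TableName, ColumnName, ColumnType
FROM DBC.ColumnsV
WHERE DatabaseName = 'SALES_DW'
ORDER BY TableName, ColumnId;
```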
A key element in data warehouse migration is the testing of reports and dashboar
- Test analytical functionality.
-For information about how to migrate users, user groups, roles, and privileges, see the [Security, access, and operations for Teradata migrations](3-security-access-operations.md) which is part of this series of articles.
+For information about how to migrate users, user groups, roles, and privileges, see [Security, access, and operations for Teradata migrations](3-security-access-operations.md), which is part of this series.
> [!TIP] > Build an automated test suite to make tests repeatable.
-It's also best practice to automate testing as much as possible, to make each test repeatable and to allow a consistent approach to evaluating results. This works well for known regular reports, and could be managed via [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) orchestration. If you already have a suite of test queries in place for regression testing, you could use the testing tools to automate the post migration testing.
+It's also best practice to automate testing as much as possible, to make each test repeatable and to allow a consistent approach to evaluating results. This works well for known regular reports, and could be managed via [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) orchestration. If you already have a suite of test queries in place for regression testing, you could use the testing tools to automate the post migration testing.
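A simple building block for such a suite is a reconciliation query that's run against both the legacy warehouse and Azure Synapse, with the results compared automatically; the table, columns, and date range below are placeholders.

```sql
-- Run on the target (and the equivalent on the source); the aggregates should match
SELECT COUNT(*)         AS row_count,
       SUM(SalesAmount) AS total_sales,
       MIN(OrderDate)   AS earliest_order,
       MAX(OrderDate)   AS latest_order
FROM dbo.FactSales
WHERE OrderDate >= '2021-01-01' AND OrderDate < '2022-01-01';
```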
> [!TIP] > Leverage tools that can compare metadata lineage to verify results.
-Ad-hoc analysis and reporting are more challenging and requires a set of tests to be compiled to verify that results are consistent across your legacy data warehouse DBMS and Azure Synapse. If reports and dashboards are inconsistent, then having the ability to compare metadata lineage across original and migrated systems is extremely valuable during migration testing, as it can highlight differences and pinpoint where they occurred when these aren't easy to detect. This is discussed in more detail later in this article.
+Ad-hoc analysis and reporting are more challenging and require a set of tests to be compiled to verify that results are consistent across your legacy data warehouse DBMS and Azure Synapse. If reports and dashboards are inconsistent, then having the ability to compare metadata lineage across original and migrated systems is extremely valuable during migration testing, as it can highlight differences and pinpoint where they occurred when these aren't easy to detect. This is discussed in more detail later in this article.
In terms of security, the best approach is to create roles, assign access privileges to roles, and then attach users to roles. To access your newly migrated data warehouse, set up an automated process to create new users and to assign them to roles. To detach users from roles, you can follow the same steps.
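As a minimal sketch (role and user names are hypothetical), this role-based approach can be scripted in T-SQL against a dedicated SQL pool:

```sql
-- Create a role and grant it the privileges the migrated reports need
CREATE ROLE report_reader;
GRANT SELECT ON SCHEMA::dbo TO report_reader;

-- Create a user for an Azure AD account and attach it to the role
CREATE USER [analyst1@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE report_reader ADD MEMBER [analyst1@contoso.com];

-- Detaching a user follows the same pattern
ALTER ROLE report_reader DROP MEMBER [analyst1@contoso.com];
```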
It's also important to communicate the cut-over to all users, so they know what'
A critical success factor in migrating reports and dashboards is understanding lineage. Lineage is metadata that shows the journey that data has taken, so you can see the path from the report/dashboard all the way back to where the data originates. It shows how data has gone from point to point, its location in the data warehouse and/or data mart, and where it's used&mdash;for example, in what reports. It helps you understand what happens to data as it travels through different data stores&mdash;files and databases&mdash;different ETL pipelines, and into reports. If business users have access to data lineage, it improves trust, breeds confidence, and enables more informed business decisions. > [!TIP]
-> Tools that automate metadata collection and show end-to- end lineage in a multi-vendor environment are valuable when it comes to migration.
+> Tools that automate metadata collection and show end-to-end lineage in a multi-vendor environment are valuable when it comes to migration.
-In multi-vendor data warehouse environments, business analysts in BI teams may map out data lineage. For example, if you've Informatica for your ETL, Oracle for your data warehouse, and Tableau for reporting, each of which have their own metadata repository, figuring out where a specific data element in a report came from can be challenging and time consuming.
+In multi-vendor data warehouse environments, business analysts in BI teams may map out data lineage. For example, if you have Informatica for your ETL, Oracle for your data warehouse, and Tableau for reporting, each of which has its own metadata repository, figuring out where a specific data element in a report came from can be challenging and time consuming.
To migrate seamlessly from a legacy data warehouse to Azure Synapse, end-to-end data lineage helps prove like-for-like migration when comparing reports and dashboards against your legacy environment. That means that metadata from several tools needs to be captured and integrated to show the end-to-end journey. Having access to tools that support automated metadata discovery and data lineage will let you see duplicate reports and ETL processes, and reports that rely on data sources that are obsolete, questionable, or even non-existent. With this information, you can reduce the number of reports and ETL processes that you migrate.
Data lineage visualization not only reduces time, effort, and error in the migra
By leveraging automated metadata discovery and data lineage tools that can compare lineage, you can verify if a report is produced using data migrated to Azure Synapse and if it's produced in the same way as in your legacy environment. This kind of capability also helps you determine: -- What data needs to be migrated to ensure successful report and dashboard execution on Azure Synapse
+- What data needs to be migrated to ensure successful report and dashboard execution on Azure Synapse.
-- What transformations have been and should be performed to ensure successful execution on Azure Synapse
+- What transformations have been and should be performed to ensure successful execution on Azure Synapse.
-- How to reduce report duplication
+- How to reduce report duplication.
This substantially simplifies the data migration process, because the business will have a better idea of the data assets it has and what needs to be migrated to enable a solid reporting environment on Azure Synapse. > [!TIP] > Azure Data Factory and several third-party ETL tools support lineage.
-Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. Microsoft [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) lets you view lineage in mapping flows. Also, [Microsoft partners](../../partner/data-integration.md) provide automated metadata discovery, data lineage, and lineage comparison tools.
+Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) lets you view lineage in mapping flows. Also, [Microsoft partners](../../partner/data-integration.md) provide automated metadata discovery, data lineage, and lineage comparison tools.
## Migrate BI tool semantic layers to Azure Synapse Analytics
However, if data structures change, then data is stored in unsupported data type
You can't rely on documentation to find out where the issues are likely to be. Making use of `EXPLAIN` statements is a pragmatic and quick way to identify incompatibilities in SQL. Rework these to achieve similar results in Azure Synapse. In addition, it's recommended that you make use of automated metadata discovery and lineage tools to help you identify duplicate reports, to find reports that are no longer valid because they use data from data sources you no longer use, and to understand dependencies. Some of these tools help compare lineage to verify that reports running in your legacy data warehouse environment are produced identically in Azure Synapse.
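For example, capture representative report SQL from your BI tool or query logs, prefix it with `EXPLAIN`, and run it against Azure Synapse (for example, against an empty copy of the schema). Incompatible syntax then fails at parse time without touching any data. A minimal sketch, with hypothetical table and column names:

```sql
-- EXPLAIN returns the query plan instead of executing the statement,
-- so unsupported constructs surface as errors quickly and cheaply.
EXPLAIN
SELECT region, SUM(sales_amount) AS total_sales
FROM dbo.fact_sales
GROUP BY region;
```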
-Don't migrate reports that you no longer use. BI tool usage data can help determine which ones aren't in use. For the visualizations and reports that you do want to migrate, migrate all users, user groups, roles, and privileges, and associate these reports with strategic business objectives and priorities to help you identify report insight contribution to specific objectives. This is useful if you're using business value to drive your report migration strategy. If you're migrating by data store,&mdash;data mart by data mart&mdash;then metadata will also help you identify which reports are dependent on which tables and views, so that you can focus on migrating to these first.
+Don't migrate reports that you no longer use. BI tool usage data can help determine which ones aren't in use. For the visualizations and reports that you do want to migrate, migrate all users, user groups, roles, and privileges, and associate these reports with strategic business objectives and priorities to help you identify report insight contribution to specific objectives. This is useful if you're using business value to drive your report migration strategy. If you're migrating by data store, data mart by data mart, then metadata will also help you identify which reports are dependent on which tables and views, so that you can focus on migrating to these first.
Finally, consider data virtualization to shield BI tools and applications from structural changes to the data warehouse and/or the data mart data model that may occur during migration. You can also use a common vocabulary with data virtualization to define a common semantic layer that guarantees consistent common data names, definitions, metrics, hierarchies, joins, and more across all BI tools and applications in a migrated Azure Synapse environment.
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/5-minimize-sql-issues.md
Title: "Minimize SQL issues for Teradata migrations"
-description: Learn how to minimize the risk of SQL issues when migrating from Teradata to Azure Synapse.
+description: Learn how to minimize the risk of SQL issues when migrating from Teradata to Azure Synapse Analytics.
This article is part five of a seven part series that provides guidance on how t
Teradata first released its database product in 1984. It introduced massively parallel processing (MPP) techniques to enable data processing at scale more efficiently than the existing mainframe technologies available at the time. Since then, the product has evolved and has many installations among large financial institutions, telecommunications, and retail companies. The original implementation used proprietary hardware and was channel attached to mainframes&mdash;typically IBM or IBM-compatible processors.
-While more recent announcements have included network connectivity and the availability of Teradata technology stack in the cloud (including Azure), most existing installations are on premises, so many users are considering migrating some or all their Teradata data to Azure Synapse to gain the benefits of a move to a modern cloud environment.
+While more recent announcements have included network connectivity and the availability of the Teradata technology stack in the cloud (including Azure), most existing installations are on premises, so many users are considering migrating some or all their Teradata data to Azure Synapse Analytics to gain the benefits of a move to a modern cloud environment.
> [!TIP] > Many existing Teradata installations are data warehouses using a dimensional data model. Teradata technology is often used to implement a data warehouse, supporting complex analytic queries on large data volumes using SQL. Dimensional data models&mdash;star or snowflake schemas&mdash;are common, as is the implementation of data marts for individual departments.
-This combination of SQL and dimensional data models simplifies migration to Azure Synapse, since the basic concepts and SQL skills are transferable. The recommended approach is to migrate the existing data model as-is to reduce risk and time taken. Even if the eventual intention is to make changes to the data model (for example, moving to a Data Vault model), perform an initial as-is migration and then make changes within the Azure cloud environment, leveraging the performance, elastic scalability, and cost advantages there.
+This combination of SQL and dimensional data models simplifies migration to Azure Synapse, since the basic concepts and SQL skills are transferable. The recommended approach is to migrate the existing data model as-is to reduce risk and time taken. Even if the eventual intention is to make changes to the data model (for example, moving to a data vault model), perform an initial as-is migration and then make changes within the Azure cloud environment, leveraging the performance, elastic scalability, and cost advantages there.
-While the SQL language has been standardized, individual vendors have in some cases implemented proprietary extensions. This document highlights potential SQL differences you may encounter while migrating from a legacy Teradata environment, and to provide workarounds.
+While the SQL language has been standardized, individual vendors have in some cases implemented proprietary extensions. This document highlights potential SQL differences you may encounter while migrating from a legacy Teradata environment, and provides workarounds.
### Use an Azure VM Teradata instance as part of a migration
Leverage the Azure environment when running a migration from an on-premises Tera
With this approach, standard Teradata utilities such as Teradata Parallel Data Transporter (or third-party data replication tools such as Attunity Replicate) can be used to efficiently move the subset of Teradata tables that are to be migrated onto the VM instance, and then all migration tasks can take place within the Azure environment. This approach has several benefits: -- After the initial replication of data, the source system isn't impacted by the migration tasks
+- After the initial replication of data, the source system isn't impacted by the migration tasks.
-- The familiar Teradata interfaces, tools and utilities are available within the Azure environment
+- The familiar Teradata interfaces, tools, and utilities are available within the Azure environment.
-- Once in the Azure environment there are no potential issues with network bandwidth availability between the on-premises source system and the cloud target system
+- Once in the Azure environment there are no potential issues with network bandwidth availability between the on-premises source system and the cloud target system.
-- Tools such as Azure Data Factory can efficiently call utilities such as Teradata Parallel Transporter to migrate data quickly and easily
+- Tools such as Azure Data Factory can efficiently call utilities such as Teradata Parallel Transporter to migrate data quickly and easily.
-- The migration process is orchestrated and controlled entirely within the Azure environment
+- The migration process is orchestrated and controlled entirely within the Azure environment.
### Use Azure Data Factory to implement a metadata-driven migration
Automate and orchestrate the migration process by making use of the capabilities
Azure Data Factory is a cloud-based data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Data Factory, you can create and schedule data-driven workflows&mdash;called pipelines&mdash;that can ingest data from disparate data stores. It can process and transform data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.
-By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage and automate parts of the migration process. You can also use [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de).
+By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage and automate parts of the migration process. You can also use [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de).
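A hedged sketch of the kind of driving metadata this could use, assuming a hypothetical control table that a Lookup activity reads and a ForEach activity iterates over to copy each table:

```sql
-- Hypothetical control table for a metadata-driven migration pipeline.
CREATE TABLE dbo.migration_control
(
    source_database  VARCHAR(128) NOT NULL,  -- Teradata source database
    source_table     VARCHAR(128) NOT NULL,  -- table to migrate
    target_schema    VARCHAR(128) NOT NULL,  -- Azure Synapse schema
    target_table     VARCHAR(128) NOT NULL,
    migration_status VARCHAR(20)  NOT NULL,  -- for example: PENDING, COPIED, VALIDATED
    last_run_utc     DATETIME2        NULL
);
```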
## SQL DDL differences between Teradata and Azure Synapse
The following sections discuss Teradata-specific options to consider during a mi
When migrating tables between different technologies, only the raw data and its descriptive metadata gets physically moved between the two environments. Other database elements from the source system, such as indexes and log files, aren't directly migrated as these may not be needed or may be implemented differently within the new target environment. For example, there's no equivalent of the `MULTISET` option within Teradata's `CREATE TABLE` syntax.
-It's important to understand where performance optimizations&mdash;such as indexes&mdash;were used in the source environment. This indicates where performance optimization can be added in the new target environment. For example, if a NUSI has been created in the source Teradata environment, this might indicate that a non-clustered index should be created in the migrated Azure Synapse. Other native performance optimization techniques, such as table replication, may be more applicable than a straight 'like for like' index creation.
+It's important to understand where performance optimizations&mdash;such as indexes&mdash;were used in the source environment. This indicates where performance optimization can be added in the new target environment. For example, if a non-unique secondary index (NUSI) has been created in the source Teradata environment, this might indicate that a non-clustered index should be created in the migrated Azure Synapse environment. Other native performance optimization techniques, such as table replication, may be more applicable than a straight 'like for like' index creation.
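As an illustrative sketch only (hypothetical table and column names), a Teradata `MULTISET` table with a NUSI might map to an Azure Synapse table as follows; the distribution and index choices ultimately depend on the workload:

```sql
-- Teradata source (for reference):
-- CREATE MULTISET TABLE sales.orders (order_id INTEGER, customer_id INTEGER, order_date DATE)
-- PRIMARY INDEX (order_id) INDEX nusi_customer (customer_id);

-- Possible Azure Synapse equivalent: MULTISET has no counterpart, the primary
-- index informs the distribution choice, and the NUSI may become a non-clustered index.
CREATE TABLE sales.orders
(
    order_id    INT  NOT NULL,
    customer_id INT  NOT NULL,
    order_date  DATE NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(order_id),
    CLUSTERED INDEX (order_id)  -- clustered columnstore is the usual default; choose based on workload
);

CREATE INDEX ix_orders_customer ON sales.orders (customer_id);
```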
### Unsupported Teradata table types > [!TIP] > Standard tables within Azure Synapse can support migrated Teradata time series and temporal tables.
-Teradata includes support for special table types for time series and temporal data. The syntax and some of the functions for these table types isn't directly supported within Azure Synapse, but the data can be migrated into a standard table with appropriate data types and indexing or partitioning on the date/time column.
+Teradata includes support for special table types for time series and temporal data. The syntax and some of the functions for these table types aren't directly supported within Azure Synapse, but the data can be migrated into a standard table with appropriate data types and indexing or partitioning on the date/time column.
Teradata implements the temporal query functionality via query rewriting to add additional filters within a temporal query to limit the applicable date range. If this functionality is currently in use within the source Teradata environment and is to be migrated, then this additional filtering will need to be added into the relevant temporal queries.
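For example (hypothetical table and period columns), a query that used a Teradata temporal qualifier such as `VALIDTIME AS OF DATE '2022-01-31'` would carry an explicit date-range predicate in Azure Synapse:

```sql
-- The temporal qualifier becomes a filter on the start/end columns of the migrated table.
SELECT customer_id, credit_limit
FROM dbo.customer_history
WHERE valid_from <= '2022-01-31'
  AND (valid_to > '2022-01-31' OR valid_to IS NULL);
```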
-The Azure environment also includes specific features for complex analytics on time&mdash;series data at scale called [time series insights](https://azure.microsoft.com/services/time-series-insights/)&mdash;this is aimed at IoT data analysis applications and may be more appropriate for this use-case.
+The Azure environment also includes specific features for complex analytics on time-series data at scale, called [Azure Time Series Insights](https://azure.microsoft.com/services/time-series-insights/). This is aimed at IoT data analysis applications and may be more appropriate for that use case.
### Teradata data type mapping
Most Teradata data types have a direct equivalent in Azure Synapse. This table s
| AN | ARRAY | Not supported in Azure Synapse |
| AT | TIME | TIME |
| BF | BYTE | BINARY |
-| BO | BLOB | BLOB data type isn\'t directly supported but can be replaced with BINARY |
+| BO | BLOB | BLOB data type isn't directly supported but can be replaced with BINARY. |
| BV | VARBYTE | BINARY |
| CF | VARCHAR | CHAR |
-| CO | CLOB | CLOB data type isn\'t directly supported but can be replaced with VARCHAR |
+| CO | CLOB | CLOB data type isn't directly supported but can be replaced with VARCHAR. |
| CV | VARCHAR | VARCHAR |
| D | DECIMAL | DECIMAL |
| DA | DATE | DATE |
-| DH | INTERVAL DAY TO HOUR | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
-| DM | INTERVAL DAY TO MINUTE | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
-| DS | INTERVAL DAY TO SECOND | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
-| DT | DATASET | DATASET data type is supported in Azure Synapse |
-| DY | INTERVAL DAY | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| DH | INTERVAL DAY TO HOUR | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
+| DM | INTERVAL DAY TO MINUTE | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
+| DS | INTERVAL DAY TO SECOND | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
+| DT | DATASET | DATASET data type is supported in Azure Synapse. |
+| DY | INTERVAL DAY | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
| F | FLOAT | FLOAT |
-| HM | INTERVAL HOUR TO MINUTE | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
-| HR | INTERVAL HOUR | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
-| HS | INTERVAL HOUR TO SECOND | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| HM | INTERVAL HOUR TO MINUTE | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
+| HR | INTERVAL HOUR | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
+| HS | INTERVAL HOUR TO SECOND | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
| I1 | BYTEINT | TINYINT |
| I2 | SMALLINT | SMALLINT |
| I8 | BIGINT | BIGINT |
| I | INTEGER | INT |
-| JN | JSON | JSON data type isn't currently directly supported within Azure Synapse, but JSON data can be stored in a VARCHAR field |
-| MI | INTERVAL MINUTE | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
-| MO | INTERVAL MONTH | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
-| MS | INTERVAL MINUTE TO SECOND | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| JN | JSON | JSON data type isn't currently directly supported within Azure Synapse, but JSON data can be stored in a VARCHAR field. |
+| MI | INTERVAL MINUTE | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
+| MO | INTERVAL MONTH | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
+| MS | INTERVAL MINUTE TO SECOND | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
| N | NUMBER | NUMERIC |
| PD | PERIOD(DATE) | Can be converted to VARCHAR or split into two separate dates |
-| PM | PERIOD (TIMESTAMP WITH TIME ZONE) | Can be converted to VARCHAR or split into two separate timestamps (DATETIMEOFFSET). |
-| PS | PERIOD(TIMESTAMP) | Can be converted to VARCHAR or split into two separate timestamps (DATETIMEOFFSET). |
-| PT | PERIOD(TIME) | Can be converted to VARCHAR or split into two separate times. |
-| PZ | PERIOD (TIME WITH TIME ZONE) | Can be converted to VARCHAR or split into two separate times but WITH TIME ZONE isn\'t supported for TIME. |
-| SC | INTERVAL SECOND | INTERVAL data types aren\'t supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| PM | PERIOD (TIMESTAMP WITH TIME ZONE) | Can be converted to VARCHAR or split into two separate timestamps (DATETIMEOFFSET) |
+| PS | PERIOD(TIMESTAMP) | Can be converted to VARCHAR or split into two separate timestamps (DATETIMEOFFSET) |
+| PT | PERIOD(TIME) | Can be converted to VARCHAR or split into two separate times |
+| PZ | PERIOD (TIME WITH TIME ZONE) | Can be converted to VARCHAR or split into two separate times but WITH TIME ZONE isn't supported for TIME |
+| SC | INTERVAL SECOND | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
| SZ | TIMESTAMP WITH TIME ZONE | DATETIMEOFFSET |
| TS | TIMESTAMP | DATETIME or DATETIME2 |
-| TZ | TIME WITH TIME ZONE | TIME WITH TIME ZONE isn\'t supported because TIME is stored using \"wall clock\" time only without a time zone offset |
-| XM | XML | XML data type isn't currently directly supported within Azure Synapse, but XML data can be stored in a VARCHAR field |
-| YM | INTERVAL YEAR TO MONTH | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
-| YR | INTERVAL YEAR | INTERVAL data types aren\'t supported in Azure Synapse. but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD) |
+| TZ | TIME WITH TIME ZONE | TIME WITH TIME ZONE isn't supported because TIME is stored using "wall clock" time only without a time zone offset. |
+| XM | XML | XML data type isn't currently directly supported within Azure Synapse, but XML data can be stored in a VARCHAR field. |
+| YM | INTERVAL YEAR TO MONTH | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
+| YR | INTERVAL YEAR | INTERVAL data types aren't supported in Azure Synapse, but date calculations can be done with the date comparison functions (for example, DATEDIFF and DATEADD). |
Use the metadata from the Teradata catalog tables to determine whether any of these data types are to be migrated and allow for this in the migration plan. For example, use a SQL query like this one to find any occurrences of unsupported data types that need attention.
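A minimal sketch of such a query, run against the Teradata `DBC.ColumnsV` catalog view and assuming the type codes from the mapping table above:

```sql
-- Count columns whose Teradata type has no direct Azure Synapse equivalent.
-- Extend or trim the IN list based on the mapping table above.
SELECT ColumnType, COUNT(*) AS column_count
FROM DBC.ColumnsV
WHERE DatabaseName NOT IN ('DBC', 'SYSLIB', 'SystemFe')  -- exclude system databases
  AND ColumnType IN ('AN','BO','CO','DH','DM','DS','DY','HM','HR','HS',
                     'JN','MI','MO','MS','PD','PM','PS','PT','PZ','SC',
                     'TZ','XM','YM','YR')
GROUP BY ColumnType
ORDER BY ColumnType;
```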
Edit existing Teradata `CREATE TABLE` and `CREATE VIEW` scripts to create the eq
However, all the information that specifies the current definitions of tables and views within the existing Teradata environment is maintained within system catalog tables. This is the best source of this information as it's guaranteed to be up to date and complete. Be aware that user-maintained documentation may not be in sync with the current table definitions.
-Access this information via views onto the catalog such as `DBC.ColumnsV` and generate the equivalent `CREATE TABLE DDL` statements for the equivalent tables in Azure Synapse.
+Access this information via views onto the catalog such as `DBC.ColumnsV` and generate the equivalent `CREATE TABLE` DDL statements for the equivalent tables in Azure Synapse.
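A minimal, hedged sketch of the kind of catalog query that feeds DDL generation (the `CREATE TABLE` statements themselves are usually emitted by a script or tool that consumes this output, and the database name shown is hypothetical):

```sql
-- List the column definitions of the tables to be migrated.
SELECT DatabaseName,
       TableName,
       ColumnName,
       ColumnType,
       ColumnLength,
       DecimalTotalDigits,
       DecimalFractionalDigits,
       Nullable
FROM DBC.ColumnsV
WHERE DatabaseName = 'SALES_DW'   -- hypothetical source database
ORDER BY TableName, ColumnId;
```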
> [!TIP] > Third-party tools and services can automate data mapping tasks.
There are [Microsoft partners](../../partner/data-integration.md) who offer tool
> [!TIP] > SQL DML commands `SELECT`, `INSERT` and `UPDATE` have standard core elements but may also implement different syntax options.
-The ANSI SQL standard defines the basic syntax for DML commands such as `SELECT`, `INSERT`, `UPDATE` and `DELETE`. Both Teradata and Azure Synapse use these commands, but in some cases there are implementation differences.
+The ANSI SQL standard defines the basic syntax for DML commands such as `SELECT`, `INSERT`, `UPDATE`, and `DELETE`. Both Teradata and Azure Synapse use these commands, but in some cases there are implementation differences.
The following sections discuss the Teradata-specific DML commands that you should consider during a migration to Azure Synapse. ### SQL DML syntax differences
-Be aware of these differences in SQL Data Manipulation Language (DML) syntax between Teradata SQL and Azure Synapse when migrating:
+There are a few differences in SQL DML syntax between Teradata SQL and Azure Synapse (T-SQL) that you should be aware of during migration:
-- `QUALIFY`&mdash;Teradata supports the `QUALIFY` operator. For example:
+- `QUALIFY`: Teradata supports the `QUALIFY` operator. For example:
```sql SELECT col1
Be aware of these differences in SQL Data Manipulation Language (DML) syntax bet
) WHERE rn = 1; ``` -- Date Arithmetic&mdash;Azure Synapse has operators such as `DATEADD` and `DATEDIFF` which can be used on `DATE` or `DATETIME` fields. Teradata supports direct subtraction on dates such as `SELECT DATE1&mdash;DATE2 FROM...`.
+- Date arithmetic: Azure Synapse has operators such as `DATEADD` and `DATEDIFF` which can be used on `DATE` or `DATETIME` fields. Teradata supports direct subtraction on dates such as `SELECT DATE1 - DATE2 FROM...`
-- In Group by ordinal, explicitly provide the T-SQL column name.
+- In `GROUP BY` ordinal, explicitly provide the T-SQL column name.
-- `LIKE ANY`&mdash;Teradata supports `LIKE ANY` syntax such as:
+- `LIKE ANY`: Teradata supports `LIKE ANY` syntax such as:
```sql SELECT * FROM CUSTOMER
See the following sections for more information on each of these elements.
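As a hedged illustration with hypothetical tables and columns, the date arithmetic and `LIKE ANY` differences above translate along these lines:

```sql
-- Teradata: direct date subtraction returns a day count
-- SELECT order_date - ship_date FROM sales.orders;

-- Azure Synapse (T-SQL): use DATEDIFF instead
SELECT DATEDIFF(day, ship_date, order_date) AS days_to_ship
FROM sales.orders;

-- Teradata: LIKE ANY accepts a list of patterns
-- SELECT * FROM sales.customer WHERE postcode LIKE ANY ('CV1%', 'CV2%');

-- Azure Synapse (T-SQL): expand into OR'ed LIKE predicates
SELECT *
FROM sales.customer
WHERE postcode LIKE 'CV1%'
   OR postcode LIKE 'CV2%';
```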
#### Functions
-As with most database products, Teradata supports system functions and user-defined functions within the SQL implementation. When migrating to another database platform such as Azure Synapse, common system functions are available and can be migrated without change. Some system functions may have slightly different syntax, but the required changes can be automated. System functions where there's no equivalent, such arbitrary user-defined functions, may need to be recoded using the languages available in the target environment. Azure Synapse uses the popular Transact-SQL language to implement user-defined functions.
+As with most database products, Teradata supports system functions and user-defined functions within the SQL implementation. When migrating to another database platform such as Azure Synapse, common system functions are available and can be migrated without change. Some system functions may have slightly different syntax, but the required changes can be automated. System functions where there's no equivalent, such as arbitrary user-defined functions, may need to be recoded using the languages available in the target environment. Azure Synapse uses the popular Transact-SQL language to implement user-defined functions.
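For instance, a simple Teradata SQL user-defined function would be recoded as a T-SQL scalar function; a minimal sketch with an illustrative (hypothetical) business rule:

```sql
-- Hypothetical scalar user-defined function recoded in T-SQL for Azure Synapse.
CREATE FUNCTION dbo.fn_fiscal_year (@order_date DATE)
RETURNS INT
AS
BEGIN
    -- Assume a fiscal year that starts in July
    RETURN CASE WHEN MONTH(@order_date) >= 7
                THEN YEAR(@order_date) + 1
                ELSE YEAR(@order_date)
           END;
END;
```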
#### Stored procedures
Azure Synapse doesn't support the creation of triggers, but you can implement th
#### Sequences
-Azure Synapse sequences are handled in a similar way to Teradata, using [Identity to create surrogate keys](../../sql-data-warehouse/sql-data-warehouse-tables-identity.md) or [managed identity](../../../data-factory/data-factory-service-identity.md?tabs=data-factory).
+Azure Synapse sequences are handled in a similar way to Teradata, using [identity to create surrogate keys](../../sql-data-warehouse/sql-data-warehouse-tables-identity.md) or [managed identity](../../../data-factory/data-factory-service-identity.md?tabs=data-factory).
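A minimal sketch (hypothetical dimension table) of replacing a Teradata identity column or sequence with an `IDENTITY` surrogate key in a dedicated SQL pool:

```sql
-- IDENTITY generates the surrogate key; it can't be the distribution column.
CREATE TABLE dbo.dim_customer
(
    customer_key  INT IDENTITY(1,1) NOT NULL,
    customer_id   VARCHAR(20)       NOT NULL,
    customer_name NVARCHAR(100)         NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    CLUSTERED COLUMNSTORE INDEX
);
```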
#### Teradata to T-SQL mapping
This table shows the Teradata to T-SQL compliant with Azure Synapse SQL data typ
## Summary
-Typical existing legacy Teradata installations are implemented in a way that makes migration to Azure Synapse easy. They use SQL for analytical queries on large data volumes, and are in some form of dimensional data model. These factors make it a good candidate for migration to Azure Synapse.
+Typical existing legacy Teradata installations are implemented in a way that makes migration to Azure Synapse easy. They use SQL for analytical queries on large data volumes, and are in some form of dimensional data model. These factors make them good candidates for migration to Azure Synapse.
To minimize the task of migrating the actual SQL code, follow these recommendations: -- Initial migration of the data warehouse should be as-is to minimize risk and time taken, even if the eventual final environment will incorporate a different data model such as Data Vault.
+- Initial migration of the data warehouse should be as-is to minimize risk and time taken, even if the eventual final environment will incorporate a different data model such as data vault.
- Consider using a Teradata instance in an Azure VM as a stepping stone as part of the migration process.
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/6-microsoft-third-party-migration-tools.md
Title: "Tools for Teradata data warehouse migration to Azure Synapse Analytics"
-description: Learn about Microsoft and third-party data and database migration tools that can help you migrate from Teradata to Azure Synapse.
+description: Learn about Microsoft and third-party data and database migration tools that can help you migrate from Teradata to Azure Synapse Analytics.
This article is part six of a seven part series that provides guidance on how to
## Data warehouse migration tools
-By migrating your existing data warehouse to Azure Synapse, you benefit from:
+By migrating your existing data warehouse to Azure Synapse Analytics, you benefit from:
- A globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database.
You can develop simple or comprehensive ETL and ELT processes without coding or
You can use Data Factory to implement and manage a hybrid environment that includes on-premises, cloud, streaming and SaaS data&mdash;for example, from applications like Salesforce&mdash;in a secure and consistent way.
-A new capability in Data Factory is wrangling data flows. This opens up Data Factory to business users who want to visually discover, explore, and prepare data at scale without writing code. This capability, similar to Microsoft Excel Power Query or Microsoft Power BI Dataflows, offers self-service data preparation. Business users can prepare and integrate data through a spreadsheet style user interface with drop-down transform options.
+A new capability in Data Factory is wrangling data flows. This opens up Data Factory to business users who want to visually discover, explore, and prepare data at scale without writing code. This capability, similar to Microsoft Excel Power Query or Microsoft Power BI dataflows, offers self-service data preparation. Business users can prepare and integrate data through a spreadsheet-style user interface with drop-down transform options.
Azure Data Factory is the recommended approach for implementing data integration and ETL/ELT processes for an Azure Synapse environment, especially if existing legacy processes need to be refactored.
Azure Data Factory is the recommended approach for implementing data integration
#### Azure ExpressRoute
-Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the public Internet, and they offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, by using ExpressRoute connections to transfer data between on-premises systems and Azure, you gain significant cost benefits.
+Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the internet, and they offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, by using ExpressRoute connections to transfer data between on-premises systems and Azure, you gain significant cost benefits.
#### AzCopy
The [COPY](/sql/t-sql/statements/copy-into-transact-sql) statement provides the
PolyBase provides the fastest and most scalable method of loading bulk data into Azure Synapse. PolyBase leverages the MPP architecture to use parallel loading, to give the fastest throughput, and can read data from flat files in Azure Blob Storage or directly from external data sources and other relational databases via connectors.
-PolyBase can also directly read from files compressed with gzip&mdash;this reduces the physical volume of data moved during the load process. PolyBase supports popular data formats such as delimited text, ORC and Parquet.
+PolyBase can also directly read from files compressed with gzip&mdash;this reduces the physical volume of data moved during the load process. PolyBase supports popular data formats such as delimited text, ORC, and Parquet.
> [!TIP] > Invoke PolyBase from Azure Data Factory as part of a migration pipeline.
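As a hedged illustration (the storage path and table name are placeholders), loading gzip-compressed delimited files exported from Teradata into a staging table with the `COPY` statement looks like this; a PolyBase external table plus `CREATE TABLE AS SELECT` achieves the same outcome:

```sql
-- Bulk load compressed, pipe-delimited export files from Blob Storage.
COPY INTO dbo.stg_sales
FROM 'https://<storageaccount>.blob.core.windows.net/migration/sales/*.csv.gz'
WITH
(
    FILE_TYPE = 'CSV',
    COMPRESSION = 'GZIP',
    FIELDTERMINATOR = '|',
    FIRSTROW = 2,
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);
```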
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/7-beyond-data-warehouse-migration.md
Title: "Beyond Teradata migration, implementing a modern data warehouse in Microsoft Azure"
-description: Learn how a Teradata migration to Azure Synapse lets you integrate your data warehouse with the Microsoft Azure analytical ecosystem.
+description: Learn how a Teradata migration to Azure Synapse Analytics lets you integrate your data warehouse with the Microsoft Azure analytical ecosystem.
This article is part seven of a seven part series that provides guidance on how
## Beyond data warehouse migration to Azure
-One of the key reasons to migrate your existing data warehouse to Azure Synapse is to utilize a globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database. Azure Synapse also lets you integrate your migrated data warehouse with the complete Microsoft Azure analytical ecosystem to take advantage of, and integrate with, other Microsoft technologies that help you modernize your migrated data warehouse. This includes integrating with technologies like:
+One of the key reasons to migrate your existing data warehouse to Azure Synapse Analytics is to utilize a globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database. Azure Synapse also lets you integrate your migrated data warehouse with the complete Microsoft Azure analytical ecosystem to take advantage of, and integrate with, other Microsoft technologies that help you modernize your migrated data warehouse. This includes integrating with technologies like:
-- Azure Data Lake Storage&mdash;for cost effective data ingestion, staging, cleansing and transformation to free up data warehouse capacity occupied by fast growing staging tables
+- Azure Data Lake Storage, for cost effective data ingestion, staging, cleansing, and transformation to free up data warehouse capacity occupied by fast growing staging tables.
-- Azure Data Factory&mdash;for collaborative IT and self-service data integration [with connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data
+- Azure Data Factory, for collaborative IT and self-service data integration [with connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data.
-- [The Open Data Model Common Data Initiative](/common-data-model/)&mdash;to share consistent trusted data across multiple technologies including:
+- [The Open Data Model Common Data Initiative](/common-data-model/), to share consistent trusted data across multiple technologies including:
  - Azure Synapse
  - Azure Synapse Spark
  - Azure HDInsight
One of the key reasons to migrate your existing data warehouse to Azure Synapse
- Microsoft ISV Partners
- [Microsoft's data science technologies](/azure/architecture/data-science-process/platforms-and-tools) including:
- - Azure ML studio
- - Azure Machine Learning Service
+ - Azure Machine Learning Studio
+ - Azure Machine Learning
  - Azure Synapse Spark (Spark as a service)
  - Jupyter Notebooks
  - RStudio
  - ML.NET
- - Visual Studio .NET for Apache Spark to enable data scientists to use Azure Synapse data to train machine learning models at scale.
+ - .NET for Apache Spark to enable data scientists to use Azure Synapse data to train machine learning models at scale.
-- [Azure HDInsight](../../../hdinsight/index.yml)&mdash;to leverage big data analytical processing and join big data with Azure Synapse data by creating a Logical Data Warehouse using PolyBase
+- [Azure HDInsight](../../../hdinsight/index.yml), to leverage big data analytical processing and join big data with Azure Synapse data by creating a logical data warehouse using PolyBase.
-- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md) and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka)&mdash;to integrate with live streaming data from within Azure Synapse
+- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka), to integrate with live streaming data within Azure Synapse.
-There's often acute demand to integrate with [Machine Learning](../../machine-learning/what-is-machine-learning.md) to enable custom built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in-batch, on an event-driven basis and on-demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all get the same predictions and recommendations.
+There's often acute demand to integrate with [machine learning](../../machine-learning/what-is-machine-learning.md) to enable custom-built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in batch, on an event-driven basis, and on demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all get the same predictions and recommendations.
In addition, there's an opportunity to integrate Azure Synapse with Microsoft partner tools on Azure to shorten time to value.
Let's look at these in more detail to understand how you can take advantage of t
Enterprises today have a key problem resulting from digital transformation. So much new data is being generated and captured for analysis, and much of this data is finding its way into data warehouses. A good example is transaction data created by opening online transaction processing (OLTP) systems to self-service access from mobile devices. These OLTP systems are the main sources of data to a data warehouse, and with customers now driving the transaction rate rather than employees, data in data warehouse staging tables has been growing rapidly in volume.
-This, along with other new data&mdash;like Internet of Things (IoT) data, coming into the enterprise, means that companies need to find a way to deal with unprecedented data growth and scale data integration ETL processing beyond current levels. One way to do this is to offload ingestion, data cleansing, transformation and integration to a data lake and process it at scale there, as part of a data warehouse modernization program.
+The rapid influx of data into the enterprise, along with new sources of data like Internet of Things (IoT) streams, means that companies need to find a way to deal with unprecedented data growth and scale data integration ETL processing beyond current levels. One way to do this is to offload ingestion, data cleansing, transformation, and integration to a data lake and process it at scale there, as part of a data warehouse modernization program.
Once you've migrated your data warehouse to Azure Synapse, Microsoft provides the ability to modernize your ETL processing by ingesting data into, and staging data in, Azure Data Lake Storage. You can then clean, transform and integrate your data at scale using Data Factory before loading it into Azure Synapse in parallel using PolyBase.
For ELT strategies, consider offloading ELT processing to Azure Data Lake to eas
### Microsoft Azure Data Factory > [!TIP]
-> Data Factory allows you to build scalable data integration pipelines code free.
+> Data Factory allows you to build scalable data integration pipelines code-free.
-[Microsoft Azure Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines, in a code-free manner that can:
+[Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build code-free data integration pipelines that can:
-- Data Factory allows you to build scalable data integration pipelines code free. Easily acquire data at scale. Pay only for what you use and connect to on premises, cloud, and SaaS based data sources.
+- Build scalable data integration pipelines code-free. Easily acquire data at scale. Pay only for what you use and connect to on-premises, cloud, and SaaS-based data sources.
-- Ingest, move, clean, transform, integrate, and analyze cloud and on-premises data at scale and take automatic action such a recommendation, an alert, and more.
+- Ingest, move, clean, transform, integrate, and analyze cloud and on-premises data at scale. Take automatic action, such as a recommendation or alert.
-- Seamlessly author, monitor and manage pipelines that span data stores both on-premises and in the cloud.
+- Seamlessly author, monitor, and manage pipelines that span data stores both on-premises and in the cloud.
-- Enable pay as you go scale out in alignment with customer growth.
+- Enable pay-as-you-go scale out in alignment with customer growth.
> [!TIP] > Data Factory can connect to on-premises, cloud, and SaaS data.
Implement Data Factory pipeline development from any of several places including
- Programmatically from .NET and Python using a multi-language SDK -- Azure Resource Manager (ARM) Templates
+- Azure Resource Manager (ARM) templates
- REST APIs

Developers and data scientists who prefer to write code can easily author Data Factory pipelines in Java, Python, and .NET using the software development kits (SDKs) available for those programming languages. Data Factory pipelines can also be hybrid as they can connect, ingest, clean, transform, and analyze data in on-premises data centers, Microsoft Azure, other clouds, and SaaS offerings.
-Once you develop Data Factory pipelines to integrate and analyze data, deploy those pipelines globally and schedule them to run in batch, invoke them on demand as a service, or run them in real time on an event-driven basis. A Data Factory pipeline can also run on one or more execution engines and monitor pipeline execution to ensure performance and track errors.
+Once you develop Data Factory pipelines to integrate and analyze data, deploy those pipelines globally and schedule them to run in batch, invoke them on demand as a service, or run them in real-time on an event-driven basis. A Data Factory pipeline can also run on one or more execution engines and monitor pipeline execution to ensure performance and track errors.
#### Use cases > [!TIP] > Build data warehouses on Microsoft Azure.
-> [!TIP]
-> Build training data sets in data science to develop machine learning models.
- Data Factory can support multiple use cases, including: - Preparing, integrating, and enriching data from cloud and on-premises data sources to populate your migrated data warehouse and data marts on Microsoft Azure Synapse.
Data Factory can support multiple use cases, including:
- Preparing, integrating, and enriching data for data-driven business applications running on the Azure cloud on top of operational data stores like Azure Cosmos DB.
+> [!TIP]
+> Build training data sets in data science to develop machine learning models.
+ #### Data sources
-Data Factory lets you connect with [connectors](../../../data-factory/connector-overview.md) from both cloud and on-premises data sources. Agent software, known as a Self-Hosted Integration Runtime, securely accesses on-premises data sources and supports secure, scalable data transfer.
+Azure Data Factory lets you use [connectors](../../../data-factory/connector-overview.md) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
#### Transform data using Azure Data Factory
Data engineers can profile data quality and view the results of individual data
> [!TIP] > Data Factory pipelines are also extensible since Data Factory allows you to write your own code and run it as part of a pipeline.
-Extend Data Factory transformational and analytical functionality by adding a linked service containing your own code into a pipeline. For example, an Azure Synapse Spark Pool notebook containing Python code could use a trained model to score the data integrated by a mapping data flow.
+Extend Data Factory transformational and analytical functionality by adding a linked service containing your own code into a pipeline. For example, an Azure Synapse Spark Pool Notebook containing Python code could use a trained model to score the data integrated by a mapping data flow.
-Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores such as Azure Data Lake storage, Azure Synapse, or Azure HDInsight (Hive Tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
+Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores such as Azure Data Lake Storage, Azure Synapse, or Azure HDInsight (Hive tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
#### Utilize Spark to scale data integration
-Under the covers, Data Factory utilizes Azure Synapse Spark Pools&mdash;Microsoft's Spark-as-a-service offering&mdash;at run time to clean and integrate data on the Microsoft Azure cloud. This enables it to clean, integrate, and analyze high-volume and very high-velocity data (such as click stream data) at scale. Microsoft intends to execute Data Factory pipelines on other Spark distributions. In addition to executing ETL jobs on Spark, Data Factory can also invoke Pig scripts and Hive queries to access and transform data stored in Azure HDInsight.
+Internally, Data Factory utilizes Azure Synapse Spark Pools&mdash;Microsoft's Spark-as-a-service offering&mdash;at run time to clean and integrate data on the Microsoft Azure cloud. This enables it to clean, integrate, and analyze high-volume and very high-velocity data (such as click stream data) at scale. Microsoft intends to execute Data Factory pipelines on other Spark distributions. In addition to executing ETL jobs on Spark, Data Factory can also invoke Pig scripts and Hive queries to access and transform data stored in Azure HDInsight.
#### Link self-service data prep and Data Factory ETL processing using wrangling data flows > [!TIP] > Data Factory support for wrangling data flows in addition to mapping data flows means that business and IT can work together on a common platform to integrate data.
-Another new capability in Data Factory is wrangling data flows. This lets business users (also known as citizen data integrators and data engineers) make use of the platform to visually discover, explore and prepare data at scale without writing code. This easy-to-use Data Factory capability is similar to Microsoft Excel Power Query or Microsoft Power BI Dataflows, where self-service data preparation business users use a spreadsheet-style UI with drop-down transforms to prepare and integrate data. The following screenshot shows an example Data Factory wrangling data flow.
+Another new capability in Data Factory is wrangling data flows. This lets business users (also known as citizen data integrators and data engineers) make use of the platform to visually discover, explore, and prepare data at scale without writing code. This easy-to-use Data Factory capability is similar to Microsoft Excel Power Query or Microsoft Power BI dataflows, where self-service data preparation business users use a spreadsheet-style UI with drop-down transforms to prepare and integrate data. The following screenshot shows an example Data Factory wrangling data flow.
:::image type="content" source="../media/6-microsoft-3rd-party-migration-tools/azure-data-factory-wrangling-dataflows.png" border="true" alt-text="Screenshot showing an example of Azure Data Factory wrangling dataflows.":::
-This differs from Excel and Power BI, as Data Factory wrangling data flows uses Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark Pool Notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
+This differs from Excel and Power BI, as Data Factory wrangling data flows uses Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark Pool Notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
#### Link data and analytics in analytical pipelines
-In addition to cleaning and transforming data, Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
+In addition to cleaning and transforming data, Azure Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
-Models developed code-free with Azure ML Studio, Azure Machine Learning Service SDK using Azure Synapse Spark Pool Notebooks, or using R in RStudio, can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark Pool Notebooks.
+Models developed code-free with Azure Machine Learning Studio or with the Azure Machine Learning SDK using Azure Synapse Spark Pool Notebooks or using R in RStudio can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark Pool Notebooks.
-Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores, such as Azure Data Lake storage, Azure Synapse, or Azure HDInsight (Hive Tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
+Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores, such as Azure Data Lake Storage, Azure Synapse, or Azure HDInsight (Hive tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
## A lake database to share consistent trusted data > [!TIP] > Microsoft has created a lake database to describe core data entities to be shared across the enterprise.
-A key objective in any data integration set-up is the ability to integrate data once and reuse it everywhere, not just in a data warehouse&mdash;for example, in data science. Reuse avoids reinvention and ensures consistent, commonly understood data that everyone can trust.
+A key objective in any data integration setup is the ability to integrate data once and reuse it everywhere, not just in a data warehouse&mdash;for example, in data science. Reuse avoids reinvention and ensures consistent, commonly understood data that everyone can trust.
> [!TIP]
-> Azure Data Lake is shared storage that underpins Microsoft Azure Synapse, Azure ML, Azure Synapse Spark, and Azure HDInsight.
+> Azure Data Lake is shared storage that underpins Microsoft Azure Synapse, Azure Machine Learning, Azure Synapse Spark, and Azure HDInsight.
To achieve this goal, establish a set of common data names and definitions describing logical data entities that need to be shared across the enterprise&mdash;such as customer, account, product, supplier, orders, payments, returns, and so forth. Once this is done, IT and business professionals can use data integration software to create these common data assets and store them to maximize their reuse to drive consistency everywhere. > [!TIP] > Integrating data to create lake database logical entities in shared storage enables maximum reuse of common data assets.
-Microsoft has done this by creating a [lake database](../../database-designer/concepts-lake-database.md). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to the loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake storage using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse and Azure ML. The following diagram shows a lake database used in Azure Synapse Analytics.
+Microsoft has done this by creating a [lake database](../../database-designer/concepts-lake-database.md). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry-specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake Storage using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse, and Azure Machine Learning. The following diagram shows a lake database used in Azure Synapse Analytics.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-synapse-analytics-lake-database.png" border="true" alt-text="Screenshot showing how a lake database can be used in Azure Synapse Analytics.":::
Another key requirement in modernizing your migrated data warehouse is to integr
Microsoft offers a range of technologies to build predictive analytical models using machine learning, analyze unstructured data using deep learning, and perform other kinds of advanced analytics. This includes: -- Azure ML Studio
+- Azure Machine Learning Studio
-- Azure Machine Learning Service
+- Azure Machine Learning
- Azure Synapse Spark Pool Notebooks - ML.NET (API, CLI or .NET Model Builder for Visual Studio) -- Visual Studio .NET for Apache Spark
+- .NET for Apache Spark
Data scientists can use RStudio (R) and Jupyter Notebooks (Python) to develop analytical models, or they can use other frameworks such as Keras or TensorFlow.
-#### Azure ML Studio
+#### Azure Machine Learning Studio
-Azure ML Studio is a fully managed cloud service that lets you easily build, deploy, and share predictive analytics via a drag-and-drop web-based user interface. The next screenshot shows an Azure Machine Learning studio user interface.
+Azure Machine Learning Studio is a fully managed cloud service that lets you easily build, deploy, and share predictive analytics via a drag-and-drop web-based user interface. The next screenshot shows an Azure Machine Learning Studio user interface.
-#### Azure Machine Learning Service
+#### Azure Machine Learning
> [!TIP]
-> Azure Machine Learning Service provides an SDK for developing machine learning models using several open-source frameworks.
+> Azure Machine Learning provides an SDK for developing machine learning models using several open-source frameworks.
-Azure Machine Learning Service provides a software development kit (SDK) and services for Python to quickly prepare data, as well as train and deploy machine learning models. Use Azure Machine Learning Service from Azure notebooks (a Jupyter notebook service) and utilize open-source frameworks, such as PyTorch, TensorFlow, Spark MLlib (Azure Synapse Spark Pool Notebooks), or scikit-learn. Azure Machine Learning Service provides an AutoML capability that automatically identifies the most accurate algorithms to expedite model development. You can also use it to build machine learning pipelines that manage end-to-end workflow, programmatically scale on the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning Service uses logical containers called workspaces, which can be either created manually from the Azure portal or created programmatically. These workspaces keep compute targets, experiments, data stores, trained machine learning models, docker images, and deployed services all in one place to enable teams to work together. Use Azure Machine Learning Service from Visual Studio with a Visual Studio for AI extension.
+Azure Machine Learning provides a software development kit (SDK) and services for Python to quickly prepare data, as well as train and deploy machine learning models. Use Azure Machine Learning from Azure notebooks (a Jupyter Notebook service) and utilize open-source frameworks, such as PyTorch, TensorFlow, Spark MLlib (Azure Synapse Spark Pool Notebooks), or scikit-learn. Azure Machine Learning provides an AutoML capability that automatically identifies the most accurate algorithms to expedite model development. You can also use it to build machine learning pipelines that manage end-to-end workflow, programmatically scale on the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning uses logical containers called workspaces, which can be either created manually from the Azure portal or created programmatically. These workspaces keep compute targets, experiments, data stores, trained machine learning models, docker images, and deployed services all in one place to enable teams to work together. Use Azure Machine Learning from Visual Studio with a Visual Studio for AI extension.
> [!TIP] > Organize and manage related data stores, experiments, trained models, docker images and deployed services in workspaces.
Azure Machine Learning Service provides a software development kit (SDK) and ser
Jobs running in Azure Synapse Spark Pool Notebook can retrieve, process, and analyze data at scale from Azure Blob Storage, Azure Data Lake Storage, Azure Synapse, Azure HDInsight, and streaming data services such as Kafka.
-Autoscaling and auto-termination are also supported to reduce total cost of ownership (TCO). Data scientists can use the ML flow open-source framework to manage the machine learning lifecycle.
+Autoscaling and auto-termination are also supported to reduce total cost of ownership (TCO). Data scientists can use the MLflow open-source framework to manage the machine learning lifecycle.
#### ML.NET
Autoscaling and auto-termination are also supported to reduce total cost of owne
ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS), created by Microsoft for .NET developers so that they can use existing tools&mdash;like .NET Model Builder for Visual Studio&mdash;to develop custom machine learning models and integrate them into .NET applications.
-#### Visual Studio .NET for Apache Spark
+#### .NET for Apache Spark
-Visual Studio .NET for Apache&reg; Spark&trade; aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
+.NET for Apache Spark aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
-### Utilize Azure Analytics with your data warehouse
+### Use Azure Synapse Analytics with your data warehouse
> [!TIP]
-> Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark Pool Notebook using data in your Azure Synapse.
+> Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark Pool Notebook using data in Azure Synapse.
Combine machine learning models built using the tools with Azure Synapse by: -- Using machine learning models in batch mode or in real time to produce new insights, and add them to what you already know in Azure Synapse.
+- Using machine learning models in batch mode or in real-time to produce new insights, and add them to what you already know in Azure Synapse.
- Using the data in Azure Synapse to develop and train new predictive models for deployment elsewhere, such as in other applications. -- Deploying machine learning models&mdash;including those trained elsewhere&mdash;in Azure Synapse to analyze data in the data warehouse and drive new business value.
+- Deploying machine learning models, including those trained elsewhere, in Azure Synapse to analyze data in the data warehouse and drive new business value.
> [!TIP] > Produce new insights using machine learning on Azure in batch or in real-time and add to what you know in your data warehouse.
-In terms of machine learning model development, data scientists can use RStudio, Jupyter notebooks, and Azure Synapse Spark Pool notebooks together with Microsoft Azure Machine Learning Service to develop machine learning models that run at scale on Azure Synapse Spark Pool Notebooks using data in Azure Synapse. For example, they could create an unsupervised model to segment customers for use in driving different marketing campaigns. Use supervised machine learning to train a model to predict a specific outcome, such as predicting a customer's propensity to churn, or recommending the next best offer for a customer to try to increase their value. The next diagram shows how Azure Synapse Analytics can be leveraged for Machine Learning.
+In terms of machine learning model development, data scientists can use RStudio, Jupyter Notebooks, and Azure Synapse Spark Pool Notebooks together with Microsoft Azure Machine Learning to develop machine learning models that run at scale on Azure Synapse Spark Pool Notebooks using data in Azure Synapse. For example, they could create an unsupervised model to segment customers for use in driving different marketing campaigns. Use supervised machine learning to train a model to predict a specific outcome, such as predicting a customer's propensity to churn, or recommending the next best offer for a customer to try to increase their value. The next diagram shows how Azure Synapse Analytics can be used for machine learning.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-synapse-train-predict.png" border="true" alt-text="Screenshot of an Azure Synapse Analytics train and predict model.":::
In addition, you can ingest big data&mdash;such as social network data or review
## Integrate live streaming data into Azure Synapse Analytics
-When analyzing data in a modern data warehouse, you must be able to analyze streaming data in real time and join it with historical data in your data warehouse. An example of this would be combining IoT data with product or asset data.
+When analyzing data in a modern data warehouse, you must be able to analyze streaming data in real-time and join it with historical data in your data warehouse. An example of this would be combining IoT data with product or asset data.
> [!TIP] > Integrate your data warehouse with streaming data from IoT devices or clickstream.
Once you've successfully migrated your data warehouse to Azure Synapse, you can
> [!TIP] > Ingest streaming data into Azure Data Lake Storage from Microsoft Event Hub or Kafka, and access it from Azure Synapse using PolyBase external tables.
-To do this, ingest streaming data via Microsoft Event Hubs or other technologies, such as Kafka, using Azure Data Factory (or using an existing ETL tool if it supports the streaming data sources) and land it in Azure Data Lake Storage (ADLS). Next, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Azure Data Lake. Your migrated data warehouse will now contain new tables that provide access to real-time streaming data. Query this external table as if the data was in the data warehouse via standard TSQL from any BI tool that has access to Azure Synapse. You can also join this data to other tables containing historical data and create views that join live streaming data to historical data to make it easier for business users to access. In the following diagram, a real-time data warehouse on Azure Synapse analytics is integrated with streaming data in Azure Data Lake.
+To do this, ingest streaming data via Azure Event Hubs or other technologies, such as Kafka, using Azure Data Factory (or using an existing ETL tool if it supports the streaming data sources). Store the data in Azure Data Lake Storage (ADLS). Next, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Azure Data Lake. Your migrated data warehouse will now contain new tables that provide access to real-time streaming data. Query this external table as if the data were in the data warehouse, using standard T-SQL from any BI tool that has access to Azure Synapse. You can also join this data to other tables containing historical data and create views that join live streaming data to historical data to make it easier for business users to access. In the following diagram, a real-time data warehouse on Azure Synapse Analytics is integrated with streaming data in Azure Data Lake.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-datalake-streaming-data.png" border="true" alt-text="Screenshot of Azure Synapse Analytics with streaming data in an Azure Data Lake.":::
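As an illustration of the external-table step, here's a minimal sketch run with PowerShell's `Invoke-Sqlcmd` against a dedicated SQL pool. Every name (workspace, pool, container, folder, columns, login) is an illustrative assumption, and depending on how your pool authenticates to the lake you may also need a database scoped credential on the external data source.

```powershell
# Minimal sketch (illustrative names): expose a streaming landing folder in ADLS as a PolyBase
# external table in a dedicated SQL pool so it can be queried and joined with standard T-SQL.
$ddl = @"
CREATE EXTERNAL DATA SOURCE StreamingLanding
    WITH (TYPE = HADOOP, LOCATION = 'abfss://streaming@mydatalake.dfs.core.windows.net');

CREATE EXTERNAL FILE FORMAT ParquetFormat
    WITH (FORMAT_TYPE = PARQUET);

CREATE EXTERNAL TABLE dbo.IotReadingsExternal
(
    DeviceId    NVARCHAR(50),
    ReadingTime DATETIME2,
    Temperature FLOAT
)
WITH (LOCATION = '/iot/readings/', DATA_SOURCE = StreamingLanding, FILE_FORMAT = ParquetFormat);
"@

# Requires the SqlServer PowerShell module; swap in your own server, database, and credentials.
Invoke-Sqlcmd -ServerInstance 'myworkspace.sql.azuresynapse.net' -Database 'mysqlpool' `
    -Username 'sqladminuser' -Password '<password>' -Query $ddl
```

Once the external table exists, views can join `dbo.IotReadingsExternal` to historical warehouse tables so business users see one combined result.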
PolyBase offers the capability to create a logical data warehouse to simplify us
This is attractive because many companies have adopted 'workload optimized' analytical data stores over the last several years in addition to their data warehouses. Examples of these platforms on Azure include: -- Azure Data Lake Storage with Azure Synapse Spark Pool Notebook (Spark-as-a-service), for big data analytics
+- Azure Data Lake Storage with Azure Synapse Spark Pool Notebook (Spark-as-a-service), for big data analytics.
-- Azure HDInsight (Hadoop as-a-service), also for big data analytics
+- Azure HDInsight (Hadoop as-a-service), also for big data analytics.
-- NoSQL Graph databases for graph analysis, which could be done in Azure Cosmos DB
+- NoSQL Graph databases for graph analysis, which could be done in Azure Cosmos DB.
-- Azure Event Hubs and Azure Stream Analytics, for real-time analysis of data in motion
+- Azure Event Hubs and Azure Stream Analytics, for real-time analysis of data in motion.
You may have non-Microsoft equivalents of some of these. You may also have a master data management (MDM) system that needs to be accessed for consistent trusted data on customers, suppliers, products, assets, and more.
Since these platforms are producing new insights, it's normal to see a requireme
> [!TIP] > The ability to make data in multiple analytical data stores look like it's all in one system and join it to Azure Synapse is known as a logical data warehouse architecture.
-By leveraging PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse. Join data in Azure Synapse to data in other Azure and on-premises analytical data stores&mdash;like Azure HDInsight or Cosmos DB&mdash;or to streaming data flowing into Azure Data Lake storage from Azure Stream Analytics and Event Hubs. Users access external tables in Azure Synapse, unaware that the data they're accessing is stored in multiple underlying analytical systems. The next diagram shows the complex data warehouse structure accessed through comparatively simpler but still powerful user interface methods.
+By leveraging PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse. Join data in Azure Synapse to data in other Azure and on-premises analytical data stores&mdash;like Azure HDInsight or Cosmos DB&mdash;or to streaming data flowing into Azure Data Lake Storage from Azure Stream Analytics and Event Hubs. Users access external tables in Azure Synapse, unaware that the data they're accessing is stored in multiple underlying analytical systems. The next diagram shows the complex data warehouse structure accessed through comparatively simpler but still powerful user interface methods.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/complex-data-warehouse-structure.png" alt-text="Screenshot showing an example of a complex data warehouse structure accessed through user interface methods.":::
-The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage (ADLS) and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
+The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
> [!TIP] > A logical data warehouse architecture simplifies business user access to data and adds new value to what you already know in your data warehouse.
synapse-analytics Create Data Warehouse Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-azure-cli.md
Last updated 11/20/2020 -
+ms.tool: azure-cli
+ # Quickstart: Create a Synapse SQL pool with Azure CLI
synapse-analytics How To Pause Resume Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/how-to-pause-resume-pipelines.md
Evaluate the desired state, Pause or Resume, and the current status, Online, or
1. On the **Activities** tab, copy the code below into the **Expression**. ```HTTP
- @concat(activity('CheckState').output.properties.status,'-',pipeline().parameters.PauseOrResume)
+ @concat(activity('CheckState').output.value[0].properties.status,'-',pipeline().parameters.PauseOrResume)
``` Where CheckState is the name of the preceding Web activity, with output.value[0].properties.status defining the current status and pipeline().parameters.PauseOrResume indicating the desired state.
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
This is the list of known limitations for Azure Synapse Link for SQL.
* When enabling Azure Synapse Link for SQL on your Azure SQL Database, you should ensure that aggressive log truncation is disabled. ### SQL Server 2022 only
-* When creating SQL Server linked service, choose SQL Authentication, Windows Authentication or Azure AD Authentication.
+* When creating a SQL Server linked service, choose SQL Authentication.
* Azure Synapse Link for SQL works with SQL Server on Linux, but HA scenarios with Linux Pacemaker aren't supported. Self-hosted IR cannot be installed in a Linux environment. * Azure Synapse Link for SQL can't be enabled on databases that are transactional replication publishers or distributors. * If the SAS key of the landing zone expires and gets rotated during the snapshot process, the new key won't get picked up. The snapshot will fail and restart automatically with the new key.
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md
Title: Troubleshoot Azure Virtual Desktop Agent Issues - Azure
-description: How to resolve common agent and connectivity issues.
-
+description: How to resolve common Azure Virtual Desktop Agent and connectivity issues.
+ Previously updated : 12/16/2020 Last updated : 05/26/2022 # Troubleshoot common Azure Virtual Desktop Agent issues The Azure Virtual Desktop Agent can cause connection issues because of multiple factors:
- - An error on the broker that makes the agent stop the service.
- - Problems with updates.
- - Issues with installing during the agent installation, which disrupts connection to the session host.
+
+ - An error on the broker that makes the agent stop the service.
+ - Problems with updates.
+ - Issues with installing during the agent installation, which disrupts connection to the session host.
This article will guide you through solutions to these common scenarios and how to address connection issues.
->[!NOTE]
->For troubleshooting issues related to session connectivity and the Azure Virtual Desktop agent, we recommend you review the event logs in **Event Viewer** > **Windows Logs** > **Application**. Look for events that have one of the following sources to identify your issue:
+> [!NOTE]
+> For troubleshooting issues related to session connectivity and the Azure Virtual Desktop agent, we recommend you review the event logs on your session host virtual machines (VMs) by going to **Event Viewer** > **Windows Logs** > **Application**. Look for events that have one of the following sources to identify your issue:
> >- WVD-Agent >- WVD-Agent-Updater
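If you prefer to query those logs from PowerShell, a minimal equivalent of the Event Viewer check is:

```powershell
# List recent Application-log events written by the Azure Virtual Desktop agent components
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'WVD-Agent', 'WVD-Agent-Updater' } -MaxEvents 50 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message
```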
This article will guide you through solutions to these common scenarios and how
## Error: The RDAgentBootLoader and/or Remote Desktop Agent Loader has stopped running
-If you're seeing any of the following issues, this means that the boot loader, which loads the agent, was unable to install the agent properly and the agent service isn't running:
+If you're seeing any of the following issues, this means that the boot loader, which loads the agent, was unable to install the agent properly and the agent service isn't running on your session host VM:
+ - **RDAgentBootLoader** is either stopped or not running. - There's no status for **Remote Desktop Agent Loader**. To resolve this issue, start the RDAgent boot loader: 1. In the Services window, right-click **Remote Desktop Agent Loader**.
-2. Select **Start**. If this option is greyed out for you, you don't have administrator permissions and will need to get them to start the service.
-3. Wait 10 seconds, then right-click **Remote Desktop Agent Loader**.
-4. Select **Refresh**.
-5. If the service stops after you started and refreshed it, you may have a registration failure. For more information, see [INVALID_REGISTRATION_TOKEN](#error-invalid_registration_token).
+1. Select **Start**. If this option is greyed out for you, you don't have administrator permissions and will need to get them to start the service.
+1. Wait 10 seconds, then right-click **Remote Desktop Agent Loader**.
+1. Select **Refresh**.
+1. If the service stops after you started and refreshed it, you may have a registration failure. For more information, see [INVALID_REGISTRATION_TOKEN](#error-invalid_registration_token).
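The same steps can be scripted from an elevated PowerShell prompt, for example:

```powershell
# Start the agent boot loader, wait briefly, then confirm that it is still running
Start-Service RDAgentBootLoader
Start-Sleep -Seconds 10
Get-Service RDAgentBootLoader   # Status should show Running; if it stops, suspect a registration failure
```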
## Error: INVALID_REGISTRATION_TOKEN
-Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277, that says **INVALID_REGISTRATION_TOKEN** in the description, the registration token that you have isn't recognized as valid.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **INVALID_REGISTRATION_TOKEN** in the description, the registration token that has been used isn't recognized as valid.
To resolve this issue, create a valid registration token: 1. To create a new registration token, follow the steps in the [Generate a new registration key for the VM](#step-3-generate-a-new-registration-key-for-the-vm) section.
-2. Open the Registry Editor.
-3. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **RDInfraAgent**.
-4. Select **IsRegistered**.
-5. In the **Value data:** entry box, type **0** and select **Ok**.
-6. Select **RegistrationToken**.
-7. In the **Value data:** entry box, paste the registration token from step 1.
+1. Open Registry Editor.
+1. Go to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RDInfraAgent**.
+1. Select **IsRegistered**.
+1. In the **Value data:** entry box, type **0** and select **Ok**.
+1. Select **RegistrationToken**.
+1. In the **Value data:** entry box, paste the registration token from step 1.
> [!div class="mx-imgBorder"] > ![Screenshot of IsRegistered 0](media/isregistered-token.png)
-8. Open a command prompt as an administrator.
-9. Enter **net stop RDAgentBootLoader**.
-10. Enter **net start RDAgentBootLoader**.
-11. Open the Registry Editor.
-12. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **RDInfraAgent**.
-13. Verify that **IsRegistered** is set to 1 and there is nothing in the data column for **RegistrationToken**.
+1. Open a PowerShell prompt as an administrator and run the following command to restart the RDAgentBootLoader service:
- > [!div class="mx-imgBorder"]
- > ![Screenshot of IsRegistered 1](media/isregistered-registry.png)
+ ```powershell
+ Restart-Service RDAgentBootLoader
+ ```
+
+1. Go back to Registry Editor.
+1. Go to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RDInfraAgent**.
+1. Verify that **IsRegistered** is set to 1 and there is nothing in the data column for **RegistrationToken**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of IsRegistered 1](media/isregistered-registry.png)
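For reference, steps 2 through 10 can also be scripted from an elevated PowerShell prompt; the token value below is a placeholder for the key you generated in step 1:

```powershell
$agentKey = 'HKLM:\SOFTWARE\Microsoft\RDInfraAgent'

# Reset the registration state and supply the newly generated registration token
Set-ItemProperty -Path $agentKey -Name IsRegistered -Value 0
Set-ItemProperty -Path $agentKey -Name RegistrationToken -Value '<paste the new registration token>'

# Restart the boot loader so the agent re-registers, then verify the outcome
Restart-Service RDAgentBootLoader
Get-ItemProperty -Path $agentKey | Select-Object IsRegistered, RegistrationToken
```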
## Error: Agent cannot connect to broker with INVALID_FORM
-Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 that says "INVALID_FORM" in the description, something went wrong with the communication between the agent and the broker. The agent can't connect to the broker or reach a particular URL because of certain firewall or DNS settings.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **INVALID_FORM** in the description, the agent can't connect to the broker or reach a particular endpoint. This may be because of certain firewall or DNS settings.
+
+To resolve this issue, check that you can reach the two endpoints referred to as *BrokerURI* and *BrokerURIGlobal*:
-To resolve this issue, check that you can reach BrokerURI and BrokerURIGlobal:
-1. Open the Registry Editor.
-2. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **RDInfraAgent**.
-3. Make note of the values for **BrokerURI** and **BrokerURIGlobal**.
+1. Open Registry Editor.
+1. Go to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RDInfraAgent**.
+1. Make note of the values for **BrokerURI** and **BrokerURIGlobal**.
> [!div class="mx-imgBorder"] > ![Screenshot of broker uri and broker uri global](media/broker-uri.png)
-
-4. Open a browser and go to *\<BrokerURI\>api/health*.
- - Make sure you use the value from step 3 in the **BrokerURI**. In this section's example, it would be <https://rdbroker-g-us-r0.wvd.microsoft.com/api/health>.
-5. Open another tab in the browser and go to *\<BrokerURIGlobal\>api/health*.
- - Make sure you use the value from step 3 in the **BrokerURIGlobal** link. In this section's example, it would be <https://rdbroker.wvd.microsoft.com/api/health>.
-6. If the network isn't blocking broker connection, both pages will load successfully and will show a message that says **"RD Broker is Healthy"** as shown in the following screenshots.
+1. Open a web browser and enter your value for *BrokerURI* in the address bar and add */api/health* to the end, for example `https://rdbroker-g-us-r0.wvd.microsoft.com/api/health`.
+1. Open another tab in the browser and enter your value for *BrokerURIGlobal* in the address bar and add */api/health* to the end, for example `https://rdbroker.wvd.microsoft.com/api/health`.
+1. If your network isn't blocking the connection to the broker, both pages will load successfully and will show a message stating **RD Broker is Healthy**, as shown in the following screenshots:
> [!div class="mx-imgBorder"] > ![Screenshot of successfully loaded broker uri access](media/broker-uri-web.png)
To resolve this issue, check that you can reach BrokerURI and BrokerURIGlobal:
> [!div class="mx-imgBorder"] > ![Screenshot of successfully loaded broker global uri access](media/broker-global.png) -
-7. If the network is blocking broker connection, the pages will not load, as shown in the following screenshot.
+1. If the network is blocking broker connection, the pages will not load, as shown in the following screenshot.
> [!div class="mx-imgBorder"] > ![Screenshot of unsuccessful loaded broker access](media/unsuccessful-broker-uri.png)
To resolve this issue, check that you can reach BrokerURI and BrokerURIGlobal:
> [!div class="mx-imgBorder"] > ![Screenshot of unsuccessful loaded broker global access](media/unsuccessful-broker-global.png)
-8. If the network is blocking these URLs, you will need to unblock the required URLs. For more information, see [Required URL List](safe-url-list.md).
-9. If this does not resolve your issue, make sure that you do not have any group policies with ciphers that block the agent to broker connection. Azure Virtual Desktop uses the same TLS 1.2 ciphers as [Azure Front Door](../frontdoor/concept-end-to-end-tls.md#supported-cipher-suites). For more information, see [Connection Security](network-connectivity.md#connection-security).
+ You will need to unblock the required endpoints and then repeat steps 4 to 7. For more information, see [Required URL List](safe-url-list.md).
+
+1. If this does not resolve your issue, make sure that you do not have any group policies with ciphers that block the agent to broker connection. Azure Virtual Desktop uses the same TLS 1.2 ciphers as [Azure Front Door](../frontdoor/concept-end-to-end-tls.md#supported-cipher-suites). For more information, see [Connection Security](network-connectivity.md#connection-security).
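When a browser isn't handy on the session host, the same health probes can be run from PowerShell. This sketch assumes the broker URLs are stored in the registry values shown in step 3 and simply appends the health path:

```powershell
# Read the broker endpoints from the agent's registry key and probe their health endpoints;
# a timeout or error here suggests the network is blocking the connection to the broker.
$agent = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\RDInfraAgent'
Invoke-RestMethod -Uri ($agent.BrokerURI.TrimEnd('/') + '/api/health')
Invoke-RestMethod -Uri ($agent.BrokerURIGlobal.TrimEnd('/') + '/api/health')
```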
## Error: 3703
-Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3703 that says "RD Gateway Url: is not accessible" in the description, the agent is unable to reach the gateway URLs. To successfully connect to your session host and allow network traffic to these endpoints to bypass restrictions, you must unblock the URLs from the [Required URL List](safe-url-list.md). Also, make sure your firewall or proxy settings don't block these URLs. Unblocking these URLs is required to use Azure Virtual Desktop.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3703 with **RD Gateway Url: is not accessible** in the description, the agent is unable to reach the gateway URLs. To successfully connect to your session host, you must allow network traffic to the URLs from the [Required URL List](safe-url-list.md). Also, make sure your firewall or proxy settings don't block these URLs. Unblocking these URLs is required to use Azure Virtual Desktop.
To resolve this issue, verify that your firewall and/or DNS settings are not blocking these URLs: 1. [Use Azure Firewall to protect Azure Virtual Desktop deployments.](../firewall/protect-azure-virtual-desktop.md).
-2. Configure your [Azure Firewall DNS settings](../firewall/dns-settings.md).
+1. Configure your [Azure Firewall DNS settings](../firewall/dns-settings.md).
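For a quick spot check from the session host itself, `Test-NetConnection` can confirm whether outbound TCP 443 to an endpoint succeeds. The hostname below is only an example; substitute the gateway URL reported in the event description or any endpoint from the required URL list:

```powershell
# TcpTestSucceeded should be True if the firewall, proxy, and DNS settings allow the connection
Test-NetConnection -ComputerName rdbroker.wvd.microsoft.com -Port 443 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```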
## Error: 3019
-Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3019, this means the agent can't reach the web socket transport URLs. To successfully connect to your session host and allow network traffic to bypass these restrictions, you must unblock the URLs listed in the the [Required URL list](safe-url-list.md). Work with the Azure Networking team to make sure your firewall, proxy, and DNS settings aren't blocking these URLs. You can also check your network trace logs to identify where the Azure Virtual Desktop service is being blocked. If you open a support request for this particular issue, make sure to attach your network trace logs to the request.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3019, this means the agent can't reach the web socket transport URLs. To successfully connect to your session host and allow network traffic to bypass these restrictions, you must unblock the URLs listed in the [Required URL list](safe-url-list.md). Work with your networking team to make sure your firewall, proxy, and DNS settings aren't blocking these URLs. You can also check your network trace logs to identify where the Azure Virtual Desktop service is being blocked. If you open a Microsoft Support case for this particular issue, make sure to attach your network trace logs to the request.
## Error: InstallationHealthCheckFailedException
-Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 that says "InstallationHealthCheckFailedException" in the description, that means the stack listener isn't working because the terminal server has toggled the registry key for the stack listener.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **InstallationHealthCheckFailedException** in the description, this means the stack listener isn't working because the terminal server has toggled the registry key for the stack listener.
To resolve this issue:
-1. Check to see if [the stack listener is working](#error-stack-listener-isnt-working-on-windows-10-2004-vm).
-2. If the stack listener isn't working, [manually uninstall and reinstall the stack component](#error-vms-are-stuck-in-unavailable-or-upgrading-state).
+1. Check to see if [the stack listener is working](#error-stack-listener-isnt-working-on-a-windows-10-2004-session-host-vm).
+1. If the stack listener isn't working, [manually uninstall and reinstall the stack component](#error-session-host-vms-are-stuck-in-unavailable-or-upgrading-state).
## Error: ENDPOINT_NOT_FOUND
-Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 that says "ENDPOINT_NOT_FOUND" in the description that means the broker couldn't find an endpoint to establish a connection with. This connection issue can happen for one of the following reasons:
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **ENDPOINT_NOT_FOUND** in the description, this means the broker couldn't find an endpoint to establish a connection with. This connection issue can happen for one of the following reasons:
-- There aren't VMs in your host pool-- The VMs in your host pool aren't active-- All VMs in your host pool have exceeded the max session limit-- None of the VMs in your host pool have the agent service running on them
+- There aren't any session host VMs in your host pool.
+- The session host VMs in your host pool aren't active.
+- All session host VMs in your host pool have exceeded the max session limit.
+- None of the VMs in your host pool have the agent service running on them.
To resolve this issue: 1. Make sure the VM is powered on and hasn't been removed from the host pool.
-2. Make sure that the VM hasn't exceeded the max session limit.
-3. Make sure the [agent service is running](#error-the-rdagentbootloader-andor-remote-desktop-agent-loader-has-stopped-running) and the [stack listener is working](#error-stack-listener-isnt-working-on-windows-10-2004-vm).
-4. Make sure [the agent can connect to the broker](#error-agent-cannot-connect-to-broker-with-invalid_form).
-5. Make sure [your VM has a valid registration token](#error-invalid_registration_token).
-6. Make sure [the VM registration token hasn't expired](./faq.yml).
+1. Make sure that the VM hasn't exceeded the max session limit.
+1. Make sure the [agent service is running](#error-the-rdagentbootloader-andor-remote-desktop-agent-loader-has-stopped-running) and the [stack listener is working](#error-stack-listener-isnt-working-on-a-windows-10-2004-session-host-vm).
+1. Make sure [the agent can connect to the broker](#error-agent-cannot-connect-to-broker-with-invalid_form).
+1. Make sure [your VM has a valid registration token](#error-invalid_registration_token).
+1. Make sure [the VM registration token hasn't expired](./faq.yml).
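Several of these checks can be made at once with the Az.DesktopVirtualization PowerShell module (run from any machine signed in with `Connect-AzAccount`; the resource group and host pool names are placeholders):

```powershell
# Review availability, session count, agent version, and last heartbeat for each session host in the pool
Get-AzWvdSessionHost -ResourceGroupName '<resource-group>' -HostPoolName '<host-pool>' |
    Select-Object Name, Status, AllowNewSession, Session, AgentVersion, LastHeartBeat
```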
## Error: InstallMsiException
-Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277, that says **InstallMsiException** in the description, the installer is already running for another application while you're trying to install the agent, or a policy is blocking the msiexec.exe program from running.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **InstallMsiException** in the description, the installer is already running for another application while you're trying to install the agent, or group policy is blocking `msiexec.exe` from running.
-To resolve this issue, disable the following policy:
- - Turn off Windows Installer
- - Category Path: Computer Configuration\Administrative Templates\Windows Components\Windows Installer
-
->[!NOTE]
->This isn't a comprehensive list of policies, just the ones we're currently aware of.
+To check whether group policy is blocking `msiexec.exe` from running:
-To disable a policy:
-1. Open a command prompt as an administrator.
-2. Enter and run **rsop.msc**.
-3. In the **Resultant Set of Policy** window that pops up, go to the category path.
-4. Select the policy.
-5. Select **Disabled**.
-6. Select **Apply**.
+1. Open Resultant Set of Policy by running **rsop.msc** from an elevated command prompt.
+1. In the **Resultant Set of Policy** window that pops up, go to **Computer Configuration > Administrative Templates > Windows Components > Windows Installer > Turn off Windows Installer**. If the state is **Enabled**, work with your Active Directory team to allow `msiexec.exe` to run.
> [!div class="mx-imgBorder"] > ![Screenshot of Windows Installer policy in Resultant Set of Policy](media/gpo-policy.png)
-## Error: Win32Exception
+ > [!NOTE]
+ > This isn't a comprehensive list of policies, just the one we're currently aware of.
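One quick way to see whether that policy is applied on the session host is to read the policy's registry value; no output usually means the policy isn't configured:

```powershell
# The "Turn off Windows Installer" policy is backed by the DisableMSI value; a value of 2 blocks msiexec.exe entirely
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Installer' -Name DisableMSI -ErrorAction SilentlyContinue
```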
-Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277, that says **InstallMsiException** in the description, a policy is blocking cmd.exe from launching. Blocking this program prevents you from running the console window, which is what you need to use to restart the service whenever the agent updates.
+## Error: Win32Exception
-To resolve this issue, disable the following policy:
- - Prevent access to the command prompt
- - Category Path: User Configuration\Administrative Templates\System
-
->[!NOTE]
->This isn't a comprehensive list of policies, just the ones we're currently aware of.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **InstallMsiException** in the description, a policy is blocking `cmd.exe` from launching. Blocking this program prevents you from running the console window, which is what you need to use to restart the service whenever the agent updates.
-To disable a policy:
-1. Open a command prompt as an administrator.
-2. Enter and run **rsop.msc**.
-3. In the **Resultant Set of Policy** window that pops up, go to the category path.
-4. Select the policy.
-5. Select **Disabled**.
-6. Select **Apply**.
+1. Open Resultant Set of Policy by running **rsop.msc** from an elevated command prompt.
+1. In the **Resultant Set of Policy** window that pops up, go to **User Configuration > Administrative Templates > System > Prevent access to the command prompt**. If the state is **Enabled**, work with your Active Directory team to allow `cmd.exe` to run.
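As with the Windows Installer policy earlier, you can read the corresponding policy value directly; no output usually means the policy isn't configured:

```powershell
# The "Prevent access to the command prompt" policy is backed by the per-user DisableCMD value
Get-ItemProperty -Path 'HKCU:\Software\Policies\Microsoft\Windows\System' -Name DisableCMD -ErrorAction SilentlyContinue
```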
-## Error: Stack listener isn't working on Windows 10 2004 VM
+## Error: Stack listener isn't working on a Windows 10 2004 session host VM
-Run **qwinsta** in your command prompt and make note of the version number that appears next to **rdp-sxs**. If you're not seeing the **rdp-tcp** and **rdp-sxs** components say **Listen** next to them or they aren't showing up at all after running **qwinsta**, it means that there's a stack issue. Stack updates get installed along with agent updates, and when this installation goes awry, the Azure Virtual Desktop Listener won't work.
+On your session host VM, from a command prompt run `qwinsta.exe` and make note of the version number that appears next to **rdp-sxs** in the *SESSIONNAME* column. If the *STATE* column for **rdp-tcp** and **rdp-sxs** entries isn't **Listen**, or if **rdp-tcp** and **rdp-sxs** entries aren't listed at all, it means that there's a stack issue. Stack updates get installed along with agent updates, but if this hasn't been successful, the Azure Virtual Desktop Listener won't work.
To resolve this issue:+ 1. Open the Registry Editor.
-2. Go to **HKEY_LOCAL_MACHINE** > **SYSTEM** > **CurrentControlSet** > **Control** > **Terminal Server** > **WinStations**.
-3. Under **WinStations** you may see several folders for different stack versions, select the folder that matches the version information you saw when running **qwinsta** in your Command Prompt.
-4. Find **fReverseConnectMode** and make sure its data value is **1**. Also make sure that **fEnableWinStation** is set to **1**.
+1. Go to **HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations**.
+1. Under **WinStations** you may see several folders for different stack versions, select a folder that matches the version information you saw when running `qwinsta.exe` in a command prompt.
+ 1. Find **fReverseConnectMode** and make sure its data value is **1**. Also make sure that **fEnableWinStation** is set to **1**.
- > [!div class="mx-imgBorder"]
- > ![Screenshot of fReverseConnectMode](media/fenable-2.png)
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of fReverseConnectMode](media/fenable-2.png)
-5. If **fReverseConnectMode** isn't set to **1**, select **fReverseConnectMode** and enter **1** in its value field.
-6. If **fEnableWinStation** isn't set to **1**, select **fEnableWinStation** and enter **1** into its value field.
-7. Restart your VM.
+ 1. If **fReverseConnectMode** isn't set to **1**, select **fReverseConnectMode** and enter **1** in its value field.
+ 1. If **fEnableWinStation** isn't set to **1**, select **fEnableWinStation** and enter **1** into its value field.
+1. Repeat the previous steps for each folder that matches the version information you saw when running `qwinsta.exe` in a command prompt.
->[!NOTE]
->To change the **fReverseConnectMode** or **fEnableWinStation** mode for multiple VMs at a time, you can do one of the following two things:
->
->- Export the registry key from the machine that you already have working and import it into all other machines that need this change.
->- Create a group policy object (GPO) that sets the registry key value for the machines that need the change.
+ > [!TIP]
+ > To change the **fReverseConnectMode** or **fEnableWinStation** mode for multiple VMs at a time, you can do one of the following two things:
+ >
+ > - Export the registry key from the machine that you already have working and import it into all other machines that need this change.
+ > - Create a group policy object (GPO) that sets the registry key value for the machines that need the change.
-7. Go to **HKEY_LOCAL_MACHINE** > **SYSTEM** > **CurrentControlSet** > **Control** > **Terminal Server** > **ClusterSettings**.
-8. Under **ClusterSettings**, find **SessionDirectoryListener** and make sure its data value is **rdp-sxs...**.
-9. If **SessionDirectoryListener** isn't set to **rdp-sxs...**, you'll need to follow the steps in the [Uninstall the agent and boot loader](#step-1-uninstall-all-agent-boot-loader-and-stack-component-programs) section to first uninstall the agent, boot loader, and stack components, and then [Reinstall the agent and boot loader](#step-4-reinstall-the-agent-and-boot-loader). This will reinstall the side-by-side stack.
+1. Restart your session host VM.
+1. Open the Registry Editor.
+1. Go to **HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\ClusterSettings**.
+1. Under **ClusterSettings**, find **SessionDirectoryListener** and make sure its data value is `rdp-sxs<version number>`, where `<version number>` matches the version information you saw when running `qwinsta.exe` in a command prompt.
+1. If **SessionDirectoryListener** isn't set to `rdp-sxs<version number>`, you'll need to follow the steps in the section [Your issue isn't listed here or wasn't resolved](#your-issue-isnt-listed-here-or-wasnt-resolved) below.
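The registry checks in this section can also be scripted. This sketch lists the reverse-connect values for every installed side-by-side stack key and shows which listener the broker is configured to use:

```powershell
$winStations = 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations'

# Show fReverseConnectMode and fEnableWinStation for each rdp-sxs stack version (both should be 1)
Get-ChildItem -Path $winStations |
    Where-Object PSChildName -like 'rdp-sxs*' |
    Get-ItemProperty -Name fReverseConnectMode, fEnableWinStation -ErrorAction SilentlyContinue |
    Select-Object PSChildName, fReverseConnectMode, fEnableWinStation

# Show which listener entry the broker expects to use
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\ClusterSettings' -Name SessionDirectoryListener
```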
## Error: DownloadMsiException
-Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277, that says **DownloadMsiException** in the description, there isn't enough space on the disk for the RDAgent.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **DownloadMsiException** in the description, there isn't enough space on the disk for the RDAgent.
To resolve this issue, make space on your disk by:
- - Deleting files that are no longer in user
- - Increasing the storage capacity of your VM
+ - Deleting files that are no longer in use.
+ - Increasing the storage capacity of your session host VM.
## Error: Agent fails to update with MissingMethodException
-Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3389 that says "MissingMethodException: Method not found" in the description, that means the Azure Virtual Desktop agent didn't update successfully and reverted to an earlier version. This may be because the version number of the .NET framework currently installed on your VMs is lower than 4.7.2. To resolve this issue, you need to upgrade the .NET to version 4.7.2 or later by following the installation instructions in the [.NET Framework documentation](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2).
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3389 with **MissingMethodException: Method not found** in the description, this means the Azure Virtual Desktop agent didn't update successfully and reverted to an earlier version. This may be because the version number of the .NET framework currently installed on your VMs is lower than 4.7.2. To resolve this issue, you need to upgrade the .NET to version 4.7.2 or later by following the installation instructions in the [.NET Framework documentation](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2).
+
+## Error: Session host VMs are stuck in Unavailable or Upgrading state
+
+If the status listed for session hosts in your host pool always says **Unavailable** or **Upgrading**, the agent or stack didn't install successfully.
+
+To resolve this issue, first reinstall the side-by-side stack:
+1. Sign in to your session host VM as an administrator.
+1. From an elevated PowerShell prompt run `qwinsta.exe` and make note of the version number that appears next to **rdp-sxs** in the *SESSIONNAME* column. If the *STATE* column for **rdp-tcp** and **rdp-sxs** entries isn't **Listen**, or if **rdp-tcp** and **rdp-sxs** entries aren't listed at all, it means that there's a stack issue.
-## Error: VMs are stuck in Unavailable or Upgrading state
+1. Run the following command to stop the RDAgentBootLoader service:
-Open a PowerShell window as an administrator and run the following cmdlet:
+ ```powershell
+ Stop-Service RDAgentBootLoader
+ ```
-```powershell
-Get-AzWvdSessionHost -ResourceGroupName <resourcegroupname> -HostPoolName <hostpoolname> | Select-Object *
-```
+1. Go to **Control Panel** > **Programs** > **Programs and Features**, or on Windows 11 go to the **Settings App > Apps**.
+1. Uninstall the latest version of the **Remote Desktop Services SxS Network Stack** or the version listed in Registry Editor in **HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations** under the value for **ReverseConnectionListener**.
+1. Back at the PowerShell prompt, run the following commands to add the file path of the latest installer available on your session host VM for the side-by-side stack to a variable and list its name:
-If the status listed for the session host or hosts in your host pool always says "Unavailable" or "Upgrading," the agent or stack didn't install successfully.
+ ```powershell
+ $sxsMsi = (Get-ChildItem "$env:SystemDrive\Program Files\Microsoft RDInfra\" | ? Name -like SxSStack*.msi | Sort-Object CreationTime -Descending | Select-Object -First 1).FullName
+ $sxsMsi
+ ```
-To resolve this issue, reinstall the side-by-side stack:
-1. Open a command prompt as an administrator.
-2. Enter **net stop RDAgentBootLoader**.
-3. Go to **Control Panel** > **Programs** > **Programs and Features**.
-4. Uninstall the latest version of the **Remote Desktop Services SxS Network Stack** or the version listed in **HKEY_LOCAL_MACHINE** > **SYSTEM** > **CurrentControlSet** > **Control** > **Terminal Server** > **WinStations** under **ReverseConnectListener**.
-5. Open a console window as an administrator and go to **Program Files** > **Microsoft RDInfra**.
-6. Select the **SxSStack** component or run the **`msiexec /i SxsStack-<version>.msi`** command to install the MSI.
-8. Restart your VM.
-9. Go back to the command prompt and run the **qwinsta** command.
-10. Verify that the stack component installed in step 6 says **Listen** next to it.
- - If so, enter **net start RDAgentBootLoader** in the command prompt and restart your VM.
- - If not, you will need to [re-register your VM and reinstall the agent](#your-issue-isnt-listed-here-or-wasnt-resolved) component.
+1. Install the latest installer available on your session host VM for the side-by-side stack by running the following command:
+
+ ```powershell
+ msiexec /i $sxsMsi
+ ```
+
+1. Restart your session host VM.
+1. From a command prompt run `qwinsta.exe` again and verify the *STATE* column for **rdp-tcp** and **rdp-sxs** entries is **Listen**. If not, you will need to [re-register your VM and reinstall the agent](#your-issue-isnt-listed-here-or-wasnt-resolved) component.
## Error: Connection not found: RDAgent does not have an active connection to the broker
-Your VMs may be at their connection limit, so the VM can't accept new connections.
+Your session host VMs may be at their connection limit and can't accept new connections.
-To resolve this issue:
- - Decrease the max session limit. This ensures that resources are more evenly distributed across session hosts and will prevent resource depletion.
- - Increase the resource capacity of the VMs.
+To resolve this issue, either:
+- Decrease the max session limit. This ensures that resources are more evenly distributed across session hosts and will prevent resource depletion.
+- Increase the resource capacity of the session host VMs.
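For example, the first option can be applied with the Az.DesktopVirtualization module (placeholder names; choose a limit appropriate for your VM size):

```powershell
# Lower the maximum number of concurrent sessions allowed on each session host in the pool
Update-AzWvdHostPool -ResourceGroupName '<resource-group>' -Name '<host-pool>' -MaxSessionLimit 10
```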
## Error: Operating a Pro VM or other unsupported OS The side-by-side stack is only supported on Windows Enterprise and Windows Server SKUs, which means that other editions, such as Windows Pro, aren't supported. If you don't have an Enterprise or Server SKU, the stack will be installed on your VM but won't be activated, so you won't see it when you run **qwinsta** from the command line.
-To resolve this issue, create a VM that is Windows Enterprise or Windows Server.
-1. Go to [Virtual machine details](create-host-pools-azure-marketplace.md#virtual-machine-details) and follow steps 1-12 to set up one of the following recommended images:
- - Windows 10 Enterprise multi-session, version 1909
- - Windows 10 Enterprise multi-session, version 1909 + Microsoft 365 Apps
- - Windows Server 2019 Datacenter
- - Windows 10 Enterprise multi-session, version 2004
- - Windows 10 Enterprise multi-session, version 2004 + Microsoft 365 Apps
-2. Select **Review and Create**.
+To resolve this issue, [create session host VMs](expand-existing-host-pool.md) using a [supported operating system](prerequisites.md#operating-systems-and-licenses).
## Error: NAME_ALREADY_REGISTERED
-The name of your VM has already been registered and is probably a duplicate.
+The name of your session host VM has already been registered and is probably a duplicate.
To resolve this issue: 1. Follow the steps in the [Remove the session host from the host pool](#step-2-remove-the-session-host-from-the-host-pool) section.
-2. [Create another VM](expand-existing-host-pool.md#add-virtual-machines-with-the-azure-portal). Make sure to choose a unique name for this VM.
-3. Go to the [Azure portal](https://portal.azure.com) and open the **Overview** page for the host pool your VM was in.
-4. Open the **Session Hosts** tab and check to make sure all session hosts are in that host pool.
-5. Wait for 5-10 minutes for the session host status to say **Available**.
+1. [Create another VM](expand-existing-host-pool.md#add-virtual-machines-with-the-azure-portal). Make sure to choose a unique name for this VM.
+1. Go to the [Azure portal](https://portal.azure.com) and open the **Overview** page for the host pool your VM was in.
+1. Open the **Session Hosts** tab and check to make sure all session hosts are in that host pool.
+1. Wait for 5-10 minutes for the session host status to say **Available**.
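Step 1 can also be scripted with the Az.DesktopVirtualization module; the names below are placeholders, and the session host name is usually the VM's full name as shown on the **Session Hosts** tab:

```powershell
# Remove the stale session host entry so the new, uniquely named VM can register cleanly
Remove-AzWvdSessionHost -ResourceGroupName '<resource-group>' -HostPoolName '<host-pool>' -Name '<session-host-name>'
```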
> [!div class="mx-imgBorder"] > ![Screenshot of available session host](media/hostpool-portal.png) ## Your issue isn't listed here or wasn't resolved
-If you can't find your issue in this article or the instructions didn't help you, we recommend you uninstall, reinstall, and re-register Azure Virtual Desktop Agent. The instructions in this section will show you how to reregister your VM to the Azure Virtual Desktop service by uninstalling all agent, boot loader, and stack components, removing the session host from the host pool, generating a new registration key for the VM, and reinstalling the agent and boot loader. If one or more of the following scenarios apply to you, follow these instructions:
-- Your VM is stuck in **Upgrading** or **Unavailable**-- Your stack listener isn't working and you're running on Windows 10 1809, 1903, or 1909-- You're receiving an **EXPIRED_REGISTRATION_TOKEN** error-- You're not seeing your VMs show up in the session hosts list-- You don't see the **Remote Desktop Agent Loader** in the Services window-- You don't see the **RdAgentBootLoader** component in the Task Manager-- You're receiving a **Connection Broker couldn't validate the settings** error on custom image VMs-- The instructions in this article didn't resolve your issue
+If you can't find your issue in this article or the instructions didn't help you, we recommend you uninstall, reinstall, and re-register the Azure Virtual Desktop Agent. The instructions in this section will show you how to re-register your session host VM to the Azure Virtual Desktop service by:
+1. Uninstalling all agent, boot loader, and stack components
+1. Removing the session host from the host pool
+1. Generating a new registration key for the VM
+1. Reinstalling the Azure Virtual Desktop Agent and boot loader.
+
+Follow the instructions in this section if one or more of the following scenarios apply to you:
+
+- The state of your session host VM is stuck as **Upgrading** or **Unavailable**.
+- Your stack listener isn't working and you're running on Windows 10 version 1809, 1903, or 1909.
+- You're receiving an **EXPIRED_REGISTRATION_TOKEN** error.
+- You're not seeing your session host VMs show up in the session hosts list.
+- You don't see the **Remote Desktop Agent Loader** service in the Services console.
+- You don't see the **RdAgentBootLoader** component as a running process in Task Manager.
+- You're receiving a **Connection Broker couldn't validate the settings** error on custom image VMs.
+- Previous sections in this article didn't resolve your issue.
### Step 1: Uninstall all agent, boot loader, and stack component programs
-Before reinstalling the agent, boot loader, and stack, you must uninstall any existing component programs from your VM. To uninstall all agent, boot loader, and stack component programs:
-1. Sign in to your VM as an administrator.
-2. Go to **Control Panel** > **Programs** > **Programs and Features**.
-3. Remove the following programs:
+Before reinstalling the agent, boot loader, and stack, you must uninstall any existing components from your VM. To uninstall all agent, boot loader, and stack component programs:
+1. Sign in to your session host VM as an administrator.
+2. Go to **Control Panel** > **Programs** > **Programs and Features** or, on Windows 11, go to **Settings** > **Apps**.
+3. Uninstall the following programs, and then restart your session host VM (to script the removal instead, see the sketch after these steps):
+
+ > [!CAUTION]
+   > When uninstalling **Remote Desktop Services SxS Network Stack**, you'll be prompted that *Remote Desktop Services* and *Remote Desktop Services UserMode Port Redirector* should be closed. If you're connected to the session host VM using RDP, select **Do not close applications**, and then select **OK**; otherwise, your RDP connection will be closed.
+ >
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing prompt that Remote Desktop Services and Remote Desktop Services UserMode Port Redirector should be closed](media/uninstall-remote-desktop-services-sxs-network-stack.png)
+   - Remote Desktop Agent Boot Loader
+   - Remote Desktop Services Infrastructure Agent
+   - Remote Desktop Services Infrastructure Geneva Agent
+   - Remote Desktop Services SxS Network Stack
->[!NOTE]
->You may see multiple instances of these programs. Make sure to remove all of them.
+ > [!NOTE]
+ > You may see multiple instances of these programs. Make sure to remove all of them.
> [!div class="mx-imgBorder"] > ![Screenshot of uninstalling programs](media/uninstall-program.png)
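If you'd rather script the removal than use Control Panel, here's a hedged sketch that assumes the components were installed as standard MSI packages and that their display names match the patterns below; adjust the names if your installed versions differ.

```powershell
# Uninstall every instance of the agent, boot loader, and stack components (assumes MSI-based installs).
$names = "Remote Desktop Agent Boot Loader",
         "Remote Desktop Services Infrastructure Agent",
         "Remote Desktop Services Infrastructure Geneva Agent",
         "Remote Desktop Services SxS Network Stack"

foreach ($name in $names) {
    # Multiple instances may be present; remove all matches.
    Get-Package -Name $name -ErrorAction SilentlyContinue | Uninstall-Package -Force
}
```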
Before reinstalling the agent, boot loader, and stack, you must uninstall any ex
### Step 2: Remove the session host from the host pool When you remove the session host from the host pool, the session host is no longer registered to that host pool. This acts as a reset for the session host registration. To remove the session host from the host pool:
-1. Go to the **Overview** page for the host pool that your VM is in, in the [Azure portal](https://portal.azure.com).
-2. Go to the **Session Hosts** tab to see the list of all session hosts in that host pool.
-3. Look at the list of session hosts and select the VM that you want to remove.
-4. Select **Remove**.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+1. Select **Host pools** and select the name of the host pool that your session host VM is in.
+1. Select **Session Hosts** to see the list of all session hosts in that host pool.
+1. In the list of session hosts, select the check box next to the session host that you want to remove.
+1. Select **Remove**.
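Alternatively, if you manage the host pool from PowerShell, a minimal sketch using the `Remove-AzWvdSessionHost` cmdlet follows; the resource group, host pool, and session host names are placeholders (the session host name is typically the VM's fully qualified domain name).

```azurepowershell-interactive
# Remove the session host from the host pool (placeholder values).
Remove-AzWvdSessionHost -ResourceGroupName "myResourceGroup" `
                        -HostPoolName "myHostPool" `
                        -Name "sessionhost01.contoso.com"
```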
> [!div class="mx-imgBorder"] > ![Screenshot of removing VM from host pool](media/remove-sh.png) ### Step 3: Generate a new registration key for the VM
-You must generate a new registration key that is used to re-register your VM to the host pool and to the service. To generate a new registration key for the VM:
-1. Open the [Azure portal](https://portal.azure.com) and go to the **Overview** page for the host pool of the VM you want to edit.
-2. Select **Registration key**.
+You must generate a new registration key that is used to re-register your session host VM to the host pool and to the service. To generate a new registration key for the VM:
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+1. Select **Host pools** and select the name of the host pool that your session host VM is in.
+1. On the **Overview** page, select **Registration key**.
> [!div class="mx-imgBorder"] > ![Screenshot of registration key in portal](media/reg-key.png)
-3. Open the **Registration key** tab and select **Generate new key**.
-4. Enter the expiration date and then select **Ok**.
+1. Open the **Registration key** tab and select **Generate new key**.
+1. Enter the expiration date, and then select **OK**.
->[!NOTE]
->The expiration date can be no less than an hour and no longer than 27 days from its generation time and date. We highly recommend you set the expiration date to the 27 day maximum.
+ > [!NOTE]
+   > The expiration date can be no less than one hour and no more than 27 days from the date and time the key is generated. Generate a registration key that's valid only for as long as you need it.
-5. Copy the newly generated key to your clipboard. You'll need this key later.
+1. Copy the newly generated key to your clipboard or download the file. You'll need this key later.
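If you prefer PowerShell, the following is a hedged sketch using the Az.DesktopVirtualization module; the names and the 24-hour expiration window are placeholders to adjust for your environment.

```azurepowershell-interactive
# Generate a new registration key that expires in 24 hours (placeholder values).
New-AzWvdRegistrationInfo -ResourceGroupName "myResourceGroup" `
                          -HostPoolName "myHostPool" `
                          -ExpirationTime $((Get-Date).ToUniversalTime().AddHours(24).ToString('yyyy-MM-ddTHH:mm:ss.fffffffZ'))

# Retrieve the token again later, as long as it hasn't expired.
(Get-AzWvdRegistrationInfo -ResourceGroupName "myResourceGroup" -HostPoolName "myHostPool").Token
```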
### Step 4: Reinstall the agent and boot loader By reinstalling the most updated version of the agent and boot loader, the side-by-side stack and Geneva monitoring agent automatically get installed as well. To reinstall the agent and boot loader:
-1. Sign in to your VM as an administrator and use the correct version of the agent installer for your deployment depending on which version of Windows your VM is running. If you have a Windows 10 VM, follow the instructions in [Register virtual machines](create-host-pools-powershell.md#register-the-virtual-machines-to-the-azure-virtual-desktop-host-pool) to download the **Azure Virtual Desktop Agent** and the **Azure Virtual Desktop Agent Bootloader**. If you have a Windows 7 VM, follow steps 13-14 in [Register virtual machines](deploy-windows-7-virtual-machine.md#configure-a-windows-7-virtual-machine) to download the **Azure Virtual Desktop Agent** and the **Azure Virtual Desktop Agent Manager**.
- > [!div class="mx-imgBorder"]
- > ![Screenshot of agent and bootloader download page](media/download-agent.png)
+1. Sign in to your session host VM as an administrator and download the correct versions of the agent and boot loader installers for its operating system:
+ 1. For Windows 10 and Windows 11:
+ 1. [Azure Virtual Desktop Agent](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWrmXv)
+ 1. [Azure Virtual Desktop Agent Bootloader](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWrxrH)
+ 1. For Windows 7:
+ 1. [Azure Virtual Desktop Agent](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3JZCm)
+ 1. [Azure Virtual Desktop Agent Bootloader](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3K2e3)
+
+ > [!TIP]
+   > You may need to unblock each of the agent and boot loader installers you downloaded. Right-click each file, select **Properties**, select **Unblock**, and then select **OK**. You can also unblock the files from PowerShell, as shown in the sketch at the end of this section.
-2. Right-click the agent and boot loader installers you downloaded.
-3. Select **Properties**.
-4. Select **Unblock**.
-5. Select **Ok**.
-6. Run the agent installer.
-7. When the installer asks you for the registration token, paste the registration key from your clipboard.
+1. Run the agent installer. (For an unattended alternative, see the sketch at the end of this section.)
+1. When the installer asks you for the registration token, paste the registration key from your clipboard.
> [!div class="mx-imgBorder"] > ![Screenshot of pasted registration token](media/pasted-agent-token.png)
-8. Run the boot loader installer.
-9. Restart your VM.
-10. Go to the [Azure portal](https://portal.azure.com) and open the **Overview** page for the host pool your VM belongs to.
-11. Go to the **Session Hosts** tab to see the list of all session hosts in that host pool.
-12. You should now see the session host registered in the host pool with the status **Available**.
+1. Run the boot loader installer.
+1. Restart your session host VM.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+1. Select **Host pools** and select the name of the host pool that your session host VM is in.
+1. Select **Session Hosts** to see the list of all session hosts in that host pool.
+1. You should now see the session host registered in the host pool with the status **Available**.
> [!div class="mx-imgBorder"] > ![Screenshot of available session host](media/hostpool-portal.png)
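If you need to repair several session hosts, you may prefer an unattended reinstall. The following is a hedged sketch rather than the article's official procedure: the download folder, the installer file name patterns, and the use of the `REGISTRATIONTOKEN` MSI property are assumptions to verify against the installers you actually downloaded, and the token placeholder is the registration key from step 3.

```powershell
# Placeholder values; adjust the folder, file name patterns, and token for your environment.
$downloads = "$env:USERPROFILE\Downloads"
$token     = "<paste-the-registration-key-from-step-3>"

# Clear the "downloaded from the internet" flag on both installers.
Get-ChildItem -Path $downloads -Filter "Microsoft.RDInfra.*" | Unblock-File

# Install the agent unattended, passing the registration token as an MSI property.
$agentMsi = (Get-ChildItem -Path $downloads -Filter "Microsoft.RDInfra.RDAgent.Installer*.msi" | Select-Object -First 1).FullName
Start-Process msiexec.exe -Wait -ArgumentList "/i `"$agentMsi`" /quiet REGISTRATIONTOKEN=$token"

# Install the boot loader, then restart the session host.
$bootLoaderMsi = (Get-ChildItem -Path $downloads -Filter "Microsoft.RDInfra.RDAgentBootLoader*.msi" | Select-Object -First 1).FullName
Start-Process msiexec.exe -Wait -ArgumentList "/i `"$bootLoaderMsi`" /quiet"
Restart-Computer
```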
virtual-machines Azure Hybrid Benefit Byos Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md
>The below article is scoped to Azure Hybrid Benefit for BYOS VMs (AHB BYOS) which caters to conversion of custom image VMs and RHEL or SLES BYOS VMs. For conversion of RHEL PAYG or SLES PAYG VMs, refer to [Azure Hybrid Benefit for PAYG VMs here](./azure-hybrid-benefit-linux.md). >[!NOTE]
->Azure Hybrid Benefit for BYOS VMs is in Preview now. You can start using the capability on Azure by following steps provided in the [section below](#get-started).
+>Azure Hybrid Benefit for BYOS VMs is currently in public preview. You can start using the capability on Azure by following the steps provided in the [section below](#get-started).
Azure Hybrid Benefit for BYOS VMs is a licensing benefit that helps you to get software updates and integrated support for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) directly from Azure infrastructure. This benefit is available to RHEL and SLES custom image VMs (VMs generated from on-premises images), and to RHEL and SLES Marketplace bring-your-own-subscription (BYOS) VMs.
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
az vm update \
### [PowerShell](#tab/powershell)
-In order to provision a VM with Trusted Launch, it first needs to be enabled with the `TrustedLaunch` using the `Set-AzVmSecurityType` cmdlet. Then you can use the Set-AzVmUefi cmdlet to set the vTPM and SecureBoot configuration. Use the below snippet as a quick start, remember to replace the values in this example with your own.
+To provision a VM with Trusted Launch, you first need to set the security type to `TrustedLaunch` by using the `Set-AzVmSecurityProfile` cmdlet. Then you can use the `Set-AzVmUefi` cmdlet to set the vTPM and Secure Boot configuration. Use the snippet below as a quick start; remember to replace the values in this example with your own.
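For orientation before the full snippet, here's a minimal sketch of just the security-related calls; it assumes `$vm` is a VM configuration object you created earlier (for example with `New-AzVMConfig`) and that the rest of the configuration follows the snippet below.

```azurepowershell-interactive
# Mark the VM configuration as Trusted Launch, then enable vTPM and Secure Boot.
$vm = Set-AzVMSecurityProfile -VM $vm -SecurityType "TrustedLaunch"
$vm = Set-AzVMUefi -VM $vm -EnableVtpm $true -EnableSecureBoot $true
```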
```azurepowershell-interactive $resourceGroup = "myResourceGroup"
$vm = Set-AzVMOSDisk -VM $vm `
-StorageAccountType "StandardSSD_LRS" ` -CreateOption "FromImage"
-$vm = Set-AzVmSecurityType -VM $vm `
+$vm = Set-AzVmSecurityProfile -VM $vm `
-SecurityType "TrustedLaunch" $vm = Set-AzVmUefi -VM $vm `
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
At the time of this writing, EUS support has ended for RHEL <= 7.4. See the "Red
* RHEL 7.6 EUS support ends May 31, 2021 * RHEL 7.7 EUS support ends August 30, 2021
-### Switch a RHEL VM 7.x to EUS (version-lock to a specific minor version)
-Use the following instructions to lock a RHEL 7.x VM to a particular minor release (run as root):
-
->[!NOTE]
-> This only applies for RHEL 7.x versions for which EUS is available. At the time of this writing, this includes RHEL 7.2-7.7. More details are available at the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
-
-1. Disable non-EUS repos:
- ```bash
- yum --disablerepo='*' remove 'rhui-azure-rhel7'
- ```
-
-1. Add EUS repos:
- ```bash
- yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7-eus.config' install 'rhui-azure-rhel7-eus'
- ```
-
-1. Lock the `releasever` variable (run as root):
- ```bash
- echo $(. /etc/os-release && echo $VERSION_ID) > /etc/yum/vars/releasever
- ```
-
- >[!NOTE]
- > The above instruction will lock the RHEL minor release to the current minor release. Enter a specific minor release if you are looking to upgrade and lock to a later minor release that is not the latest. For example, `echo 7.5 > /etc/yum/vars/releasever` will lock your RHEL version to RHEL 7.5.
-
-1. Update your RHEL VM
- ```bash
- sudo yum update
- ```
-
-### Switch a RHEL VM 8.x to EUS (version-lock to a specific minor version)
-Use the following instructions to lock a RHEL 8.x VM to a particular minor release (run as root):
-
->[!NOTE]
-> This only applies for RHEL 8.x versions for which EUS is available. At the time of this writing, this includes RHEL 8.1-8.2. More details are available at the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
-
-1. Disable non-EUS repos:
- ```bash
- yum --disablerepo='*' remove 'rhui-azure-rhel8'
- ```
-
-1. Get the EUS repos config file:
- ```bash
- wget https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8-eus.config
- ```
-
-1. Add EUS repos:
- ```bash
- yum --config=rhui-microsoft-azure-rhel8-eus.config install rhui-azure-rhel8-eus
- ```
-
-1. Lock the `releasever` variable (run as root):
- ```bash
- echo $(. /etc/os-release && echo $VERSION_ID) > /etc/yum/vars/releasever
- ```
-
- >[!NOTE]
- > The above instruction will lock the RHEL minor release to the current minor release. Enter a specific minor release if you are looking to upgrade and lock to a later minor release that is not the latest. For example, `echo 8.1 > /etc/yum/vars/releasever` will lock your RHEL version to RHEL 8.1.
-
- >[!NOTE]
- > If there are permission issues to access the releasever, you can edit the file using 'nano /etc/yum/vars/releaseve' and add the image version details and save ('Ctrl+o' then press enter and then 'Ctrl+x').
-
-1. Update your RHEL VM
- ```bash
- sudo yum update
- ```
--
-### Switch a RHEL 7.x VM back to non-EUS (remove a version lock)
-Run the following as root:
-1. Remove the `releasever` file:
- ```bash
- rm /etc/yum/vars/releasever
- ```
-
-1. Disable EUS repos:
- ```bash
- yum --disablerepo='*' remove 'rhui-azure-rhel7-eus'
- ```
-
-1. Configure RHEL VM
- ```bash
- yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7.config' install 'rhui-azure-rhel7'
- ```
-
-1. Update your RHEL VM
- ```bash
- sudo yum update
- ```
-
-### Switch a RHEL 8.x VM back to non-EUS (remove a version lock)
-Run the following as root:
-1. Remove the `releasever` file:
- ```bash
- rm /etc/yum/vars/releasever
- ```
-
-1. Disable EUS repos:
- ```bash
- yum --disablerepo='*' remove 'rhui-azure-rhel8-eus'
- ```
-
-1. Get the regular repos config file:
- ```bash
- wget https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8.config
- ```
-
-1. Add non-EUS repos:
- ```bash
- yum --config=rhui-microsoft-azure-rhel8.config install rhui-azure-rhel8
- ```
-
-1. Update your RHEL VM
- ```bash
- sudo yum update
- ```
- ## The IPs for the RHUI content delivery servers RHUI is available in all regions where RHEL on-demand images are available. It currently includes all public regions listed on the [Azure status dashboard](https://azure.microsoft.com/status/) page, Azure US Government, and Microsoft Azure Germany regions.
If you're using a network configuration to further restrict access from RHEL PAY
13.72.14.155 52.244.249.194
-# Azure Germany
-51.5.243.77
-51.4.228.145
``` >[!NOTE] >The new Azure US Government images, as of January 2020, will use the public IPs mentioned under the Azure Global header above.
virtual-network Kubernetes Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/kubernetes-network-policies.md
The different quantile levels in "exec_time" metrics help you differentiate betw
There's also an "exec_time_count" and "exec_time_sum" metric for each "exec_time" Summary metric.
-The metrics can be scraped through Azure Monitor for Containers or through Prometheus.
+The metrics can be scraped through Container insights or through Prometheus.
### Setup for Azure Monitor
-The first step is to enable Azure Monitor for containers for your Kubernetes cluster. Steps can be found in [Azure Monitor for containers Overview](../azure-monitor/containers/container-insights-overview.md). Once you have Azure Monitor for containers enabled, configure the [Azure Monitor for containers ConfigMap](https://aka.ms/container-azm-ms-agentconfig) to enable NPM integration and collection of Prometheus NPM metrics. Azure monitor for containers ConfigMap has an ```integrations``` section with settings to collect NPM metrics. These settings are disabled by default in the ConfigMap. Enabling the basic setting ```collect_basic_metrics = true```, will collect basic NPM metrics. Enabling advanced setting ```collect_advanced_metrics = true``` will collect advanced metrics in addition to basic metrics.
+The first step is to enable Container insights for your Kubernetes cluster. For steps, see the [Container insights overview](../azure-monitor/containers/container-insights-overview.md). Once you have Container insights enabled, configure the [Container insights ConfigMap](https://aka.ms/container-azm-ms-agentconfig) to enable NPM integration and collection of Prometheus NPM metrics. The Container insights ConfigMap has an ```integrations``` section with settings to collect NPM metrics. These settings are disabled by default in the ConfigMap. Enabling the basic setting ```collect_basic_metrics = true``` collects basic NPM metrics. Enabling the advanced setting ```collect_advanced_metrics = true``` collects advanced metrics in addition to basic metrics.
After editing the ConfigMap, save it locally and apply the ConfigMap to your cluster as follows. `kubectl apply -f container-azm-ms-agentconfig.yaml`
-Below is a snippet from the [Azure monitor for containers ConfigMap](https://aka.ms/container-azm-ms-agentconfig), which shows the NPM integration enabled with advanced metrics collection.
+Below is a snippet from the [Container insights ConfigMap](https://aka.ms/container-azm-ms-agentconfig), which shows the NPM integration enabled with advanced metrics collection.
``` integrations: |- [integrations.azure_network_policy_manager]
integrations: |-
``` Advanced metrics are optional, and turning them on will automatically turn on basic metrics collection. Advanced metrics currently include only `npm_ipset_counts`
-Learn more about [Azure monitor for containers collection settings in config map](../azure-monitor/containers/container-insights-agent-config.md)
+Learn more about [Container insights collection settings in config map](../azure-monitor/containers/container-insights-agent-config.md)
### Visualization Options for Azure Monitor Once NPM metrics collection is enabled, you can view the metrics in the Azure portal using Container Insights or in Grafana.
Set up your Grafana Server and configure a Log Analytics Data Source as describe
The dashboard has visuals similar to the Azure Workbook. You can add panels to chart & visualize NPM metrics from InsightsMetrics table. ### Setup for Prometheus Server
-Some users may choose to collect metrics with a Prometheus Server instead of Azure Monitor for containers. You merely need to add two jobs to your scrape config to collect NPM metrics.
+Some users may choose to collect metrics with a Prometheus Server instead of Container insights. You merely need to add two jobs to your scrape config to collect NPM metrics.
To install a simple Prometheus Server, add this helm repo on your cluster ```
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
The columns indicate whether the tag:
- Is suitable for rules that cover inbound or outbound traffic. - Supports [regional](https://azure.microsoft.com/regions) scope.-- Is usable in [Azure Firewall](../firewall/service-tags.md) rules.
+- Is usable in [Azure Firewall](../firewall/service-tags.md) rules, as a *destination* only, for inbound or outbound traffic.
-By default, service tags reflect the ranges for the entire cloud. Some service tags also allow more granular control by restricting the corresponding IP ranges to a specified region. For example, the service tag **Storage** represents Azure Storage for the entire cloud, but **Storage.WestUS** narrows the range to only the storage IP address ranges from the WestUS region. The following table indicates whether each service tag supports such regional scope. Note that the direction listed for each tag is a recommendation. For example, the AzureCloud tag may be used to allow inbound traffic. However, we don't recommend this in most scenarios since this means allowing traffic from all Azure IP's, including those used by other Azure customers.
+By default, service tags reflect the ranges for the entire cloud. Some service tags also allow more granular control by restricting the corresponding IP ranges to a specified region. For example, the service tag **Storage** represents Azure Storage for the entire cloud, but **Storage.WestUS** narrows the range to only the storage IP address ranges from the WestUS region. The following table indicates whether each service tag supports such regional scope, and the direction listed for each tag is a recommendation. For example, the AzureCloud tag may be used to allow inbound traffic. In most scenarios, we don't recommend allowing traffic from all Azure IPs since IPs used by other Azure customers are included as part of the service tag.
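To illustrate how a regional tag is used in practice, here's a hedged PowerShell sketch of a network security group rule; the NSG name, resource group, priority, and port are placeholders, not values taken from this article.

```azurepowershell-interactive
# Allow outbound HTTPS from the virtual network only to Azure Storage in West US (placeholder values).
Get-AzNetworkSecurityGroup -Name "myNsg" -ResourceGroupName "myResourceGroup" |
    Add-AzNetworkSecurityRuleConfig -Name "Allow-Storage-WestUS" `
        -Direction Outbound -Access Allow -Protocol Tcp -Priority 200 `
        -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
        -DestinationAddressPrefix "Storage.WestUS" -DestinationPortRange "443" |
    Set-AzNetworkSecurityGroup
```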
| Tag | Purpose | Can use inbound or outbound? | Can be regional? | Can use with Azure Firewall? | | | -- |::|::|::| | **ActionGroup** | Action Group. | Inbound | No | No |
-| **ApiManagement** | Management traffic for Azure API Management-dedicated deployments. <br/><br/>**Note**: This tag represents the Azure API Management service endpoint for control plane per region. This enables customers to perform management operations on the APIs, Operations, Policies, NamedValues configured on the API Management service. | Inbound | Yes | Yes |
+| **ApiManagement** | Management traffic for Azure API Management-dedicated deployments. <br/><br/>**Note**: This tag represents the Azure API Management service endpoint for control plane per region. The tag enables customers to perform management operations on the APIs, Operations, Policies, NamedValues configured on the API Management service. | Inbound | Yes | Yes |
| **ApplicationInsightsAvailability** | Application Insights Availability. | Inbound | No | No | | **AppConfiguration** | App Configuration. | Outbound | No | No | | **AppService** | Azure App Service. This tag is recommended for outbound security rules to web apps and Function apps. | Outbound | Yes | Yes |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureBackup** |Azure Backup.<br/><br/>**Note**: This tag has a dependency on the **Storage** and **AzureActiveDirectory** tags. | Outbound | No | Yes | | **AzureBotService** | Azure Bot Service. | Outbound | No | No | | **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). | Outbound | Yes | Yes |
-| **AzureCognitiveSearch** | Azure Cognitive Search. <br/><br/>This tag or the IP addresses covered by this tag can be used to grant indexers secure access to data sources. Refer to the [indexer connection documentation](../search/search-indexer-troubleshooting.md#connection-errors) for more details. <br/><br/> **Note**: The IP of the search service is not included in the list of IP ranges for this service tag and **also needs to be added** to the IP firewall of data sources. | Inbound | No | No |
+| **AzureCognitiveSearch** | Azure Cognitive Search. <br/><br/>This tag or the IP addresses covered by this tag can be used to grant indexers secure access to data sources. For more information about indexers, see [indexer connection documentation](../search/search-indexer-troubleshooting.md#connection-errors). <br/><br/> **Note**: The IP of the search service isn't included in the list of IP ranges for this service tag and **also needs to be added** to the IP firewall of data sources. | Inbound | No | No |
| **AzureConnectors** | This tag represents the IP addresses used for managed connectors that make inbound webhook callbacks to the Azure Logic Apps service and outbound calls to their respective services, for example, Azure Storage or Azure Event Hubs. | Inbound / Outbound | Yes | Yes | | **AzureContainerRegistry** | Azure Container Registry. | Outbound | Yes | Yes | | **AzureCosmosDB** | Azure Cosmos DB. | Outbound | Yes | Yes |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureSphere** | This tag or the IP addresses covered by this tag can be used to restrict access to Azure Sphere Security Services. | Both | No | Yes | | **AzureStack** | Azure Stack Bridge services. </br> This tag represents the Azure Stack Bridge service endpoint per region. | Outbound | No | Yes | | **AzureTrafficManager** | Azure Traffic Manager probe IP addresses.<br/><br/>For more information on Traffic Manager probe IP addresses, see [Azure Traffic Manager FAQ](../traffic-manager/traffic-manager-faqs.md). | Inbound | No | Yes |
-| **AzureUpdateDelivery** | For accessing Windows Updates. <br/><br/>**Note**: This tag provides access to Windows Update metadata services. To successfully download updates you must also enable the **AzureFrontDoor.FirstParty** service tag and configure outbound security rules with the protocol and port defined as follows: <ul><li>AzureUpdateDelivery: TCP, port 443</li><li>AzureFrontDoor.FirstParty: TCP, port 80</li></ul> | Outbound | No | No |
+| **AzureUpdateDelivery** | For accessing Windows Updates. <br/><br/>**Note**: This tag provides access to Windows Update metadata services. To successfully download updates, you must also enable the **AzureFrontDoor.FirstParty** service tag and configure outbound security rules with the protocol and port defined as follows: <ul><li>AzureUpdateDelivery: TCP, port 443</li><li>AzureFrontDoor.FirstParty: TCP, port 80</li></ul> | Outbound | No | No |
| **BatchNodeManagement** | Management traffic for deployments dedicated to Azure Batch. | Both | No | Yes | | **CognitiveServicesManagement** | The address ranges for traffic for Azure Cognitive Services. | Both | No | No | | **DataFactory** | Azure Data Factory | Both | No | No |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **PowerQueryOnline** | Power Query Online. | Both | No | No | | **ServiceBus** | Azure Service Bus traffic that uses the Premium service tier. | Outbound | Yes | Yes | | **ServiceFabric** | Azure Service Fabric.<br/><br/>**Note**: This tag represents the Service Fabric service endpoint for control plane per region. This enables customers to perform management operations for their Service Fabric clusters from their VNET (endpoint eg. https:// westus.servicefabric.azure.com). | Both | No | No |
-| **Sql** | Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Database for MariaDB, and Azure Synapse Analytics.<br/><br/>**Note**: This tag represents the service, but not specific instances of the service. For example, the tag represents the Azure SQL Database service, but not a specific SQL database or server. This tag does not apply to SQL managed instance. | Outbound | Yes | Yes |
+| **Sql** | Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Database for MariaDB, and Azure Synapse Analytics.<br/><br/>**Note**: This tag represents the service, but not specific instances of the service. For example, the tag represents the Azure SQL Database service, but not a specific SQL database or server. This tag doesn't apply to SQL managed instance. | Outbound | Yes | Yes |
| **SqlManagement** | Management traffic for SQL-dedicated deployments. | Both | No | Yes | | **Storage** | Azure Storage. <br/><br/>**Note**: This tag represents the service, but not specific instances of the service. For example, the tag represents the Azure Storage service, but not a specific Azure Storage account. | Outbound | Yes | Yes | | **StorageSyncService** | Storage Sync Service. | Both | No | No |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **VirtualNetwork** | The virtual network address space (all IP address ranges defined for the virtual network), all connected on-premises address spaces, [peered](virtual-network-peering-overview.md) virtual networks, virtual networks connected to a [virtual network gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%3ftoc.json), the [virtual IP address of the host](./network-security-groups-overview.md#azure-platform-considerations), and address prefixes used on [user-defined routes](virtual-networks-udr-overview.md). This tag might also contain default routes. | Both | No | No | > [!NOTE]
+> - When using service tags with Azure Firewall, you can only create destination rules on inbound and outbound traffic. Source rules are not supported. For more information, see the [Azure Firewall Service Tags](../firewall/service-tags.md) doc.
> > - Service tags of Azure services denote the address prefixes from the specific cloud being used. For example, the underlying IP ranges that correspond to the **Sql** tag value on the Azure Public cloud will be different from the underlying ranges on the Azure China cloud. >
You can download JSON files that contain the current list of service tags togeth
- [Azure Public](https://www.microsoft.com/download/details.aspx?id=56519) - [Azure US Government](https://www.microsoft.com/download/details.aspx?id=57063) -- [Azure China](https://www.microsoft.com/download/details.aspx?id=57062)
+- [Azure China 21Vianet](https://www.microsoft.com/download/details.aspx?id=57062)
- [Azure Germany](https://www.microsoft.com/download/details.aspx?id=57064) The IP address ranges in these files are in CIDR notation.
-The following AzureCloud tags do not have regional names formatted according to the normal schema:
+The following AzureCloud tags don't have regional names formatted according to the normal schema:
- AzureCloud.centralfrance (FranceCentral) - AzureCloud.southfrance (FranceSouth) - AzureCloud.germanywc (GermanyWestCentral)
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
Each route contains an address prefix and next hop type. When traffic leaving a
The next hop types listed in the previous table represent how Azure routes traffic destined for the address prefix listed. Explanations for the next hop types follow:
-* **Virtual network**: Routes traffic between address ranges within the [address space](manage-virtual-network.md#add-or-remove-an-address-range) of a virtual network. Azure creates a route with an address prefix that corresponds to each address range defined within the address space of a virtual network. If the virtual network address space has multiple address ranges defined, Azure creates an individual route for each address range. Azure automatically routes traffic between subnets using the routes created for each address range. You don't need to define gateways for Azure to route traffic between subnets. Though a virtual network contains subnets, and each subnet has a defined address range, Azure doesn't create default routes for subnet address ranges. This is due each subnet address range is within an address range of the address space of a virtual network.
+* **Virtual network**: Routes traffic between address ranges within the [address space](manage-virtual-network.md#add-or-remove-an-address-range) of a virtual network. Azure creates a route with an address prefix that corresponds to each address range defined within the address space of a virtual network. If the virtual network address space has multiple address ranges defined, Azure creates an individual route for each address range. Azure automatically routes traffic between subnets using the routes created for each address range. You don't need to define gateways for Azure to route traffic between subnets. Though a virtual network contains subnets, and each subnet has a defined address range, Azure doesn't create default routes for subnet address ranges. This is because each subnet address range is within an address range of the address space of a virtual network.
* **Internet**: Routes traffic specified by the address prefix to the Internet. The system default route specifies the 0.0.0.0/0 address prefix. If you don't override Azure's default routes, Azure routes traffic for any address not specified by an address range within a virtual network, to the Internet, with one exception. If the destination address is for one of Azure's services, Azure routes the traffic directly to the service over Azure's backbone network, rather than routing the traffic to the Internet. Traffic between Azure services doesn't traverse the Internet, regardless of which Azure region the virtual network exists in, or which Azure region an instance of the Azure service is deployed in. You can override Azure's default system route for the 0.0.0.0/0 address prefix with a [custom route](#custom-routes). * **None**: Traffic routed to the **None** next hop type is dropped, rather than routed outside the subnet. Azure automatically creates default routes for the following address prefixes:
vpn-gateway Reset Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/reset-gateway.md
Title: 'Reset a VPN gateway or connection to reestablish IPsec tunnels'
description: Learn how to reset a gateway or a gateway connection to reestablish IPsec tunnels. - Previously updated : 02/22/2021 Last updated : 05/26/2022 # Reset a VPN gateway or a connection
-Resetting an Azure VPN gateway or gateway connection is helpful if you lose cross-premises VPN connectivity on one or more Site-to-Site VPN tunnels. In this situation, your on-premises VPN devices are all working correctly, but are not able to establish IPsec tunnels with the Azure VPN gateways. This article helps you reset a VPN gateway or gateway connection.
+Resetting an Azure VPN gateway or gateway connection is helpful if you lose cross-premises VPN connectivity on one or more site-to-site VPN tunnels. In this situation, your on-premises VPN devices are all working correctly, but aren't able to establish IPsec tunnels with the Azure VPN gateways. This article helps you reset a VPN gateway or gateway connection.
## What happens during a reset
A VPN gateway is composed of two VM instances running in an active-standby confi
When you issue the command to reset the gateway, the current active instance of the Azure VPN gateway is rebooted immediately. There will be a brief gap during the failover from the active instance (being rebooted), to the standby instance. The gap should be less than one minute.
-If the connection is not restored after the first reboot, issue the same command again to reboot the second VM instance (the new active gateway). If the two reboots are requested back to back, there will be a slightly longer period where both VM instances (active and standby) are being rebooted. This will cause a longer gap on the VPN connectivity, up to 30 to 45 minutes for VMs to complete the reboots.
+If the connection isn't restored after the first reboot, issue the same command again to reboot the second VM instance (the new active gateway). If the two reboots are requested back to back, there will be a slightly longer period where both VM instances (active and standby) are being rebooted. This will cause a longer gap on the VPN connectivity, up to 30 to 45 minutes for VMs to complete the reboots.
-After two reboots, if you are still experiencing cross-premises connectivity problems, please open a support request from the Azure portal.
+After two reboots, if you're still experiencing cross-premises connectivity problems, please open a support request from the Azure portal.
### Connection reset
-When you select to reset a connection, the gateway does not reboot. Only the selected connection is reset and restored.
+When you select to reset a connection, the gateway doesn't reboot. Only the selected connection is reset and restored.
## Reset a connection
You can reset a connection easily using the Azure portal.
1. On the **Connection** page, select **Reset** from the left menu. 1. On the **Reset** page, click **Reset** to reset the connection.
- :::image type="content" source="./media/reset-gateway/reset-connection.png" alt-text="Screenshot showing Reset.":::
+ :::image type="content" source="./media/reset-gateway/reset-connection.png" alt-text="Screenshot showing the Reset button selected." lightbox="./media/reset-gateway/reset-connection-expand.png":::
## Reset a VPN gateway
-Before you reset your gateway, verify the key items listed below for each IPsec Site-to-Site (S2S) VPN tunnel. Any mismatch in the items will result in the disconnect of S2S VPN tunnels. Verifying and correcting the configurations for your on-premises and Azure VPN gateways saves you from unnecessary reboots and disruptions for the other working connections on the gateways.
+Before you reset your gateway, verify the key items listed below for each IPsec site-to-site (S2S) VPN tunnel. Any mismatch in the items will result in the disconnect of S2S VPN tunnels. Verifying and correcting the configurations for your on-premises and Azure VPN gateways saves you from unnecessary reboots and disruptions for the other working connections on the gateways.
Verify the following items before resetting your gateway:
Reset-AzVirtualNetworkGateway -VirtualNetworkGateway $gw
Result:
-When you receive a return result, you can assume the gateway reset was successful. However, there is nothing in the return result that indicates explicitly that the reset was successful. If you want to look closely at the history to see exactly when the gateway reset occurred, you can view that information in the [Azure portal](https://portal.azure.com). In the portal, navigate to **'GatewayName' -> Resource Health**.
+When you receive a return result, you can assume the gateway reset was successful. However, there's nothing in the return result that indicates explicitly that the reset was successful. If you want to look closely at the history to see exactly when the gateway reset occurred, you can view that information in the [Azure portal](https://portal.azure.com). In the portal, navigate to **'GatewayName' -> Resource Health**.
#### <a name="resetclassic"></a>Classic deployment model
-The cmdlet for resetting a gateway is **Reset-AzureVNetGateway**. The Azure PowerShell cmdlets for Service Management must be installed locally on your desktop. You can't use Azure Cloud Shell. Before performing a reset, make sure you have the latest version of the [Service Management (SM) PowerShell cmdlets](/powershell/azure/servicemanagement/install-azure-ps#azure-service-management-cmdlets). When using this command, make sure you are using the full name of the virtual network. Classic VNets that were created using the portal have a long name that is required for PowerShell. You can view the long name by using 'Get-AzureVNetConfig -ExportToFile C:\Myfoldername\NetworkConfig.xml'.
+The cmdlet for resetting a gateway is **Reset-AzureVNetGateway**. The Azure PowerShell cmdlets for Service Management must be installed locally on your desktop. You can't use Azure Cloud Shell. Before performing a reset, make sure you have the latest version of the [Service Management (SM) PowerShell cmdlets](/powershell/azure/servicemanagement/install-azure-ps#azure-service-management-cmdlets). When using this command, make sure you're using the full name of the virtual network. Classic VNets that were created using the portal have a long name that is required for PowerShell. You can view the long name by using 'Get-AzureVNetConfig -ExportToFile C:\Myfoldername\NetworkConfig.xml'.
The following example resets the gateway for a virtual network named "Group TestRG1 TestVNet1" (which shows as simply "TestVNet1" in the portal):
az network vnet-gateway reset -n VNet5GW -g TestRG5
Result:
-When you receive a return result, you can assume the gateway reset was successful. However, there is nothing in the return result that indicates explicitly that the reset was successful. If you want to look closely at the history to see exactly when the gateway reset occurred, you can view that information in the [Azure portal](https://portal.azure.com). In the portal, navigate to **'GatewayName' -> Resource Health**.
+When you receive a return result, you can assume the gateway reset was successful. However, there's nothing in the return result that indicates explicitly that the reset was successful. If you want to look closely at the history to see exactly when the gateway reset occurred, you can view that information in the [Azure portal](https://portal.azure.com). In the portal, navigate to **'GatewayName' -> Resource Health**.
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md
Previously updated : 04/29/2022 Last updated : 05/26/2022
Create a local network gateway using the following values:
Site-to-site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following values: * A shared key. This is the same shared key that you specify when creating your site-to-site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use.
-* The Public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the Public IP address of your VPN gateway using the Azure portal, navigate to **Virtual network gateways**, then select the name of your gateway.
+* The public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the public IP address of your VPN gateway using the Azure portal, go to **Virtual network gateways**, then select the name of your gateway.
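For example, the following is a minimal PowerShell sketch; it assumes you know the name of the public IP address resource associated with the gateway, and the names used here are placeholders.

```azurepowershell-interactive
# Look up the public IP address assigned to the VPN gateway (placeholder names).
Get-AzPublicIpAddress -Name "VNet1GWpip" -ResourceGroupName "TestRG1" |
    Select-Object Name, IpAddress
```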
[!INCLUDE [Configure a VPN device](../../includes/vpn-gateway-configure-vpn-device-include.md)]
Create a connection using the following values:
[!INCLUDE [Add a site-to-site connection](../../includes/vpn-gateway-add-site-to-site-connection-portal-include.md)]
-### <a name="addconnect"></a>To add another connection
+### <a name="configure-connect"></a>To configure additional connection settings (optional)
-You can connect to multiple on-premises sites from the same VPN gateway. If you want to configure multiple connections, the address spaces can't overlap between any of the connections.
+You can configure additional settings for your connection, if necessary. Otherwise, skip this section and leave the defaults in place.
-1. To add an additional connection, navigate to the VPN gateway, then select **Connections** to open the Connections page.
-1. Select **+Add** to add your connection. Adjust the connection type to reflect either VNet-to-VNet (if connecting to another VNet gateway), or Site-to-site.
-1. If you're connecting using Site-to-site and you haven't already created a local network gateway for the site you want to connect to, you can create a new one.
-1. Specify the shared key that you want to use, then select **OK** to create the connection.
## <a name="VerifyConnection"></a>Verify the VPN connection
Resetting an Azure VPN gateway is helpful if you lose cross-premises VPN connect
[!INCLUDE [reset a gateway](../../includes/vpn-gateway-reset-gw-portal-include.md)]
+### <a name="addconnect"></a>Add another connection
+
+You can create a connection to multiple on-premises sites from the same VPN gateway. If you want to configure multiple connections, the address spaces can't overlap between any of the connections.
+
+1. To add an additional connection, go to the VPN gateway, then select **Connections** to open the Connections page.
+1. Select **+Add** to add your connection. Adjust the connection type to reflect either VNet-to-VNet (if connecting to another VNet gateway), or Site-to-site.
+1. If you're connecting using Site-to-site and you haven't already created a local network gateway for the site you want to connect to, you can create a new one.
+1. Specify the shared key that you want to use, then select **OK** to create the connection.
+ ### <a name="additional"></a>Additional configuration considerations S2S configurations can be customized in a variety of ways. For more information, see the following articles: