Updates from: 06/10/2022 01:14:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
The **CryptographicKeys** element contains the following attributes:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
| SamlMessageSigning | Yes | The X509 certificate (RSA key set) to use to sign SAML messages. Azure AD B2C uses this key to sign the requests and send them to the identity provider. |
-| SamlAssertionDecryption |No* | The X509 certificate (RSA key set). A SAML identity provider uses the public portion of the certificate to encrypt the assertion of the SAML response. Azure AD B2C uses the private portion of the certificate to decrypt the assertion. <br/><br/> * Required if the external IDP Encryts SAML assertions.|
+| SamlAssertionDecryption |No* | The X509 certificate (RSA key set). A SAML identity provider uses the public portion of the certificate to encrypt the assertion of the SAML response. Azure AD B2C uses the private portion of the certificate to decrypt the assertion. <br/><br/> * Required if the external IDP encrypts SAML assertions.|
| MetadataSigning | No | The X509 certificate (RSA key set) to use to sign SAML metadata. Azure AD B2C uses this key to sign the metadata. |

## Next steps
active-directory Check Status User Account Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/check-status-user-account-provisioning.md
Previously updated : 05/11/2021 Last updated : 05/30/2022
This article describes how to check the status of provisioning jobs after they h
## Overview
-Provisioning connectors are set up and configured using the [Azure portal](https://portal.azure.com), by following the [provided documentation](../saas-apps/tutorial-list.md) for the supported application. Once configured and running, provisioning jobs can be reported on using one of two methods:
+Provisioning connectors are set up and configured using the [Azure portal](https://portal.azure.com), by following the [provided documentation](../saas-apps/tutorial-list.md) for the supported application. Once configured and running, provisioning jobs can be reported on using the following methods:
-* **Azure portal** - This article primarily describes retrieving report information from the [Azure portal](https://portal.azure.com), which provides both a provisioning summary report as well as detailed provisioning audit logs for a given application.
-* **Audit API** - Azure Active Directory also provides an Audit API that enables programmatic retrieval of the detailed provisioning audit logs. See [Azure Active Directory audit API reference](/graph/api/resources/directoryaudit) for documentation specific to using this API. While this article does not specifically cover how to use the API, it does detail the types of provisioning events that are recorded in the audit log.
+- The [Azure portal](https://portal.azure.com)
+
+- Streaming the provisioning logs into [Azure Monitor](../app-provisioning/application-provisioning-log-analytics.md). This method allows for extended data retention and building custom dashboards, alerts, and queries.
+
+- Querying the [Microsoft Graph API](/graph/api/resources/provisioningobjectsummary) for the provisioning logs (see the example after this list).
+
+- Downloading the provisioning logs as a CSV or JSON file.
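
As an illustration of the Microsoft Graph option above, the following PowerShell sketch pulls recent entries from the provisioning logs endpoint. This is a hedged, assumption-based example rather than text from the article: it assumes the Microsoft Graph PowerShell SDK is installed and that the signed-in account has permission to read audit logs.

```powershell
# Sign in with a scope that allows reading the audit/provisioning logs (assumed sufficient here).
Connect-MgGraph -Scopes 'AuditLog.Read.All'

# The provisioning logs are exposed under the auditLogs/provisioning endpoint; $top limits the page size.
$response = Invoke-MgGraphRequest -Method GET -Uri 'https://graph.microsoft.com/v1.0/auditLogs/provisioning?$top=20'

# Each entry in 'value' is a provisioningObjectSummary object; inspect the most recent one.
$response.value | Select-Object -First 1
```
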
### Definitions
This article uses the following terms, defined below:
## Getting provisioning reports from the Azure portal
-To get provisioning report information for a given application, start by launching the [Azure portal](https://portal.azure.com) and **Azure Active Directory** &gt; **Enterprise Apps** &gt; **Provisioning logs (preview)** in the **Activity** section. You can also browse to the Enterprise Application for which provisioning is configured. For example, if you are provisioning users to LinkedIn Elevate, the navigation path to the application details is:
+To get provisioning report information for a given application, start by launching the [Azure portal](https://portal.azure.com) and **Azure Active Directory** &gt; **Enterprise Apps** &gt; **Provisioning logs** in the **Activity** section. You can also browse to the Enterprise Application for which provisioning is configured. For example, if you are provisioning users to LinkedIn Elevate, the navigation path to the application details is:
**Azure Active Directory > Enterprise Applications > All applications > LinkedIn Elevate**
The **Current Status** should be the first place admins look to check on the ope
 ![Summary report](./media/check-status-user-account-provisioning/provisioning-progress-bar-section.png)
-## Provisioning logs (preview)
+## Provisioning logs
+
+All activities performed by the provisioning service are recorded in the Azure AD [provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context). You can access the provisioning logs in the Azure portal by selecting **Azure Active Directory** &gt; **Enterprise Apps** &gt; **Provisioning logs** in the **Activity** section. You can search the provisioning data based on the name of the user or the identifier in either the source system or the target system. For details, see [Provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context).
-All activities performed by the provisioning service are recorded in the Azure AD [provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context). You can access the provisioning logs in the Azure portal by selecting **Azure Active Directory** &gt; **Enterprise Apps** &gt; **Provisioning logs (preview)** in the **Activity** section. You can search the provisioning data based on the name of the user or the identifier in either the source system or the target system. For details, see [Provisioning logs (preview)](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context).
-Logged activity event types include:
## Troubleshooting
For scenario-based guidance on how to troubleshoot automatic user provisioning,
## Additional Resources

* [Managing user account provisioning for Enterprise Apps](configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
It's common for a security review to be required as part of a deployment. If you
If the automatic user provisioning implementation fails to work as desired in the production environment, the following rollback steps below can assist you in reverting to a previous known good state:
-1. Review the [provisioning summary report](../app-provisioning/check-status-user-account-provisioning.md) and [provisioning logs](../app-provisioning/check-status-user-account-provisioning.md#provisioning-logs-preview) to determine what incorrect operations occurred on the affected users and/or groups.
+1. Review the [provisioning logs](../app-provisioning/check-status-user-account-provisioning.md) to determine what incorrect operations occurred on the affected users and/or groups.
1. Use provisioning audit logs to determine the last known good state of the users and/or groups affected. Also review the source systems (Azure AD or AD).
Refer to the following links to troubleshoot any issues that may turn up during
* [Export or import your provisioning configuration by using Microsoft Graph API](../app-provisioning/export-import-provisioning-configuration.md)
-* [Writing expressions for attribute mappings in Azure Active directory](../app-provisioning/functions-for-customizing-application-data.md)
+* [Writing expressions for attribute mappings in Azure Active directory](../app-provisioning/functions-for-customizing-application-data.md)
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
It's common for a security review to be required as part of the deployment of a
The cloud HR user provisioning implementation might fail to work as desired in the production environment. If so, the following rollback steps can assist you in reverting to a previous known good state.
-1. Review the [provisioning summary report](../app-provisioning/check-status-user-account-provisioning.md#getting-provisioning-reports-from-the-azure-portal) and [provisioning logs](../app-provisioning/check-status-user-account-provisioning.md#provisioning-logs-preview) to determine what incorrect operations were performed on the affected users or groups. For more information on the provisioning summary report and logs, see [Manage cloud HR app user provisioning](#manage-your-configuration).
+1. Review the [provisioning logs](../app-provisioning/check-status-user-account-provisioning.md#provisioning-logs) to determine what incorrect operations were performed on the affected users or groups. For more information on the provisioning summary report and logs, see [Manage cloud HR app user provisioning](#manage-your-configuration).
2. The last known good state of the users or groups affected can be determined through the provisioning audit logs or by reviewing the target systems (Azure AD or Active Directory).
3. Work with the app owner to update the users or groups affected directly in the app by using the last known good state values.
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
To collect debug logs for support diagnostics, use the following steps on the NP
```
Mkdir c:\NPS
- Cd NPS
+ Cd c:\NPS
netsh trace start Scenario=NetConnection capture=yes tracefile=c:\NPS\nettrace.etl
logman create trace "NPSExtension" -ow -o c:\NPS\NPSExtension.etl -p {7237ED00-E119-430B-AB0F-C63360C8EE81} 0xffffffffffffffff 0xff -nb 16 16 -bs 1024 -mode Circular -f bincirc -max 4096 -ets
logman update trace "NPSExtension" -p {EC2E6D3A-C958-4C76-8EA4-0262520886FF} 0xffffffffffffffff 0xff -ets
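REM Hedged addition, not shown in this excerpt: once the issue has been reproduced,
REM the traces started above are typically stopped with the following commands.
logman stop "NPSExtension" -ets
netsh trace stop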
active-directory Tutorial V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md
The web app sample in this tutorial uses the [express-session](https://www.npmjs
## Add app registration details
-1. Create an *.env* file in the root of your project folder. Then add the following code:
+1. Create a *.env* file in the root of your project folder. Then add the following code:
:::code language="text" source="~/ms-identity-node/App/.env":::
Fill in these details with the values you obtain from Azure app registration por
## Add code for user login and token acquisition
+1. Create a new file named *auth.js* under the *router* folder and add the following code there:
+ :::code language="js" source="~/ms-identity-node/App/routes/auth.js":::
+2. Next, update the *index.js* route by replacing the existing code with the following:
Fill in these details with the values you obtain from Azure app registration por
## Add code for calling the Microsoft Graph API
-Create a file named **fetch.js** in the root of your project and add the following code:
+Create a file named *fetch.js* in the root of your project and add the following code:
:::code language="js" source="~/ms-identity-node/App/fetch.js":::
active-directory How To Assign App Role Managed Identity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md
Connect-MgGraph -TenantId $tenantId -Scopes 'Application.Read.All','Application.
# Look up the details about the server app's service principal and app role.
$serverServicePrincipal = (Get-MgServicePrincipal -Filter "DisplayName eq '$serverApplicationName'")
-$serverServicePrincipalObjectId = $serverServicePrincipal.ObjectId
+$serverServicePrincipalObjectId = $serverServicePrincipal.Id
$appRoleId = ($serverServicePrincipal.AppRoles | Where-Object {$_.Value -eq $appRoleName }).Id

# Assign the managed identity access to the app role.
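
The excerpt stops at the assignment comment. As a hedged sketch of what that final step usually looks like (assuming a variable such as `$managedIdentityObjectId` holds the object ID of the managed identity's service principal, which is defined earlier in the article's full script and is not shown in this excerpt):

```powershell
# Grant the app role to the managed identity's service principal.
# $managedIdentityObjectId is an assumed placeholder for the managed identity's
# service principal object ID; substitute the value from your environment.
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $managedIdentityObjectId `
    -PrincipalId $managedIdentityObjectId `
    -ResourceId $serverServicePrincipalObjectId `
    -AppRoleId $appRoleId
```
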
active-directory Agile Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/agile-provisioning-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Agile Provisioning'
+description: Learn how to configure single sign-on between Azure Active Directory and Agile Provisioning.
++++++++ Last updated : 05/23/2022++++
+# Tutorial: Azure AD SSO integration with Agile Provisioning
+
+In this tutorial, you'll learn how to integrate Agile Provisioning with Azure Active Directory (Azure AD). When you integrate Agile Provisioning with Azure AD, you can:
+
+* Control in Azure AD who has access to Agile Provisioning.
+* Enable your users to be automatically signed-in to Agile Provisioning with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Agile Provisioning single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Agile Provisioning supports **SP** and **IDP** initiated SSO.
+
+## Add Agile Provisioning from the gallery
+
+To configure the integration of Agile Provisioning into Azure AD, you need to add Agile Provisioning from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Agile Provisioning** in the search box.
+1. Select **Agile Provisioning** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Agile Provisioning
+
+Configure and test Azure AD SSO with Agile Provisioning using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Agile Provisioning.
+
+To configure and test Azure AD SSO with Agile Provisioning, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Agile Provisioning SSO](#configure-agile-provisioning-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Agile Provisioning test user](#create-agile-provisioning-test-user)** - to have a counterpart of B.Simon in Agile Provisioning that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Agile Provisioning** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `<CustomerFullyQualifiedName>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<CustomerFullyQualifiedName>/web-portal/saml/SSO`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CustomerFullyQualifiedName>/web-portal/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Agile Provisioning Client support team](mailto:support@flexcomlabs.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Agile Provisioning.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Agile Provisioning**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Agile Provisioning SSO
+
+To configure single sign-on on **Agile Provisioning** side, you need to send the **App Federation Metadata Url** to [Agile Provisioning support team](mailto:support@flexcomlabs.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Agile Provisioning test user
+
+In this section, you create a user called Britta Simon in Agile Provisioning. Work with [Agile Provisioning support team](mailto:support@flexcomlabs.com) to add the users in the Agile Provisioning platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Agile Provisioning Sign on URL where you can initiate the login flow.
+
+* Go to Agile Provisioning Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Agile Provisioning for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Agile Provisioning tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Agile Provisioning for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Agile Provisioning you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Airwatch Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airwatch-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with AirWatch | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory integration with AirWatch'
description: Learn how to configure single sign-on between Azure Active Directory and AirWatch.
Previously updated : 01/20/2021 Last updated : 06/08/2022
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* AirWatch single sign-on (SSO)-enabled subscription.
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** page, enter the values for the following fields:
- 1. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<subdomain>.awmdm.com/AirWatch/Login?gid=companycode`
-
- 1. In the **Identifier (Entity ID)** text box, type the value as:
+ a. In the **Identifier (Entity ID)** text box, type the value as:
`AirWatch`
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | Reply URL|
+ |--|
+ | `https://<SUBDOMAIN>.awmdm.com/<COMPANY_CODE>` |
+ | `https://<SUBDOMAIN>.airwatchportals.com/<COMPANY_CODE>` |
+ |
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<subdomain>.awmdm.com/AirWatch/Login?gid=companycode`
+ > [!NOTE]
- > This value is not the real. Update this value with the actual Sign-on URL. Contact [AirWatch Client support team](https://www.vmware.com/in/support/acquisitions/airwatch.html) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [AirWatch Client support team](https://www.vmware.com/in/support/acquisitions/airwatch.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. AirWatch application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on application integration page. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **User Attributes** dialog.
active-directory Asccontracts Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asccontracts-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ASC Contracts | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ASC Contracts'
description: Learn how to configure single sign-on between Azure Active Directory and ASC Contracts.
Previously updated : 01/17/2019 Last updated : 06/07/2022
-# Tutorial: Azure Active Directory integration with ASC Contracts
+# Tutorial: Azure AD SSO integration with ASC Contracts
-In this tutorial, you learn how to integrate ASC Contracts with Azure Active Directory (Azure AD).
-Integrating ASC Contracts with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ASC Contracts with Azure Active Directory (Azure AD). When you integrate ASC Contracts with Azure AD, you can:
-* You can control in Azure AD who has access to ASC Contracts.
-* You can enable your users to be automatically signed-in to ASC Contracts (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to ASC Contracts.
+* Enable your users to be automatically signed-in to ASC Contracts with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with ASC Contracts, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* ASC Contracts single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ASC Contracts single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ASC Contracts supports **IDP** initiated SSO
+* ASC Contracts supports **IDP** initiated SSO.
-## Adding ASC Contracts from the gallery
+## Add ASC Contracts from the gallery
To configure the integration of ASC Contracts into Azure AD, you need to add ASC Contracts from the gallery to your list of managed SaaS apps.
-**To add ASC Contracts from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **ASC Contracts**, select **ASC Contracts** from result panel then click **Add** button to add the application.
-
- ![ASC Contracts in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with ASC Contracts based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ASC Contracts needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **ASC Contracts** in the search box.
+1. Select **ASC Contracts** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with ASC Contracts, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for ASC Contracts
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ASC Contracts Single Sign-On](#configure-asc-contracts-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ASC Contracts test user](#create-asc-contracts-test-user)** - to have a counterpart of Britta Simon in ASC Contracts that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with ASC Contracts using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ASC Contracts.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with ASC Contracts, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ASC Contracts SSO](#configure-asc-contracts-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ASC Contracts test user](#create-asc-contracts-test-user)** - to have a counterpart of B.Simon in ASC Contracts that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with ASC Contracts, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **ASC Contracts** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **ASC Contracts** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
-
- ![ASC Contracts Domain and URLs single sign-on information](common/idp-intiated.png)
+1. On the **Basic SAML Configuration** page, perform the following steps:
    a. In the **Identifier** text box, type a URL using the following pattern:
    `https://<subdomain>.asccontracts.com/shibboleth`
To configure Azure AD single sign-on with ASC Contracts, perform the following s
    > [!NOTE]
    > These values are not real. Update these values with the actual Identifier and Reply URL. Contact ASC Networks Inc. (ASC) team at **613.599.6178** to get these values.
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
-6. On the **Set up ASC Contracts** section, copy the appropriate URL(s) as per your requirement.
+1. On the **Set up ASC Contracts** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure ASC Contracts Single Sign-On
-
-To configure single sign-on on **ASC Contracts** side, call ASC Networks Inc. (ASC) support at **613.599.6178** and provide them with the downloaded **Federation Metadata XML**. They set this application up to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ASC Contracts.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ASC Contracts.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ASC Contracts**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ASC Contracts**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure ASC Contracts SSO
-2. In the applications list, select **ASC Contracts**.
-
- ![The ASC Contracts link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **ASC Contracts** side, call ASC Networks Inc. (ASC) support at **613.599.6178** and provide them with the downloaded **Federation Metadata XML**. They set this application up to have the SAML SSO connection set properly on both sides.
### Create ASC Contracts test user

Work with ASC Networks Inc. (ASC) support team at **613.599.6178** to get the users added in the ASC Contracts platform.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the ASC Contracts tile in the Access Panel, you should be automatically signed in to the ASC Contracts for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+In this section, you test your Azure AD single sign-on configuration with following options.
-## Additional Resources
+* Click on Test this application in Azure portal and you should be automatically signed in to the ASC Contracts for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ASC Contracts tile in the My Apps, you should be automatically signed in to the ASC Contracts for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ASC Contracts you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Carlsonwagonlit Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/carlsonwagonlit-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Carlson Wagonlit Travel | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Carlson Wagonlit Travel.
+ Title: 'Tutorial: Azure AD SSO integration with CWT'
+description: Learn how to configure single sign-on between Azure Active Directory and CWT.
Previously updated : 07/21/2021 Last updated : 06/08/2022
-# Tutorial: Azure Active Directory integration with Carlson Wagonlit Travel
+# Tutorial: Azure AD SSO integration with CWT
-In this tutorial, you'll learn how to integrate Carlson Wagonlit Travel with Azure Active Directory (Azure AD). When you integrate Carlson Wagonlit Travel with Azure AD, you can:
+In this tutorial, you'll learn how to integrate CWT with Azure Active Directory (Azure AD). When you integrate CWT with Azure AD, you can:
-* Control in Azure AD who has access to Carlson Wagonlit Travel.
-* Enable your users to be automatically signed-in to Carlson Wagonlit Travel with their Azure AD accounts.
+* Control in Azure AD who has access to CWT.
+* Enable your users to be automatically signed-in to CWT with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate Carlson Wagonlit Travel with Azu
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Carlson Wagonlit Travel single sign-on (SSO) enabled subscription.
+* CWT single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Carlson Wagonlit Travel supports **IDP** initiated SSO.
+* CWT supports **IDP** initiated SSO.
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+## Add CWT from the gallery
-## Add Carlson Wagonlit Travel from the gallery
-
-To configure the integration of Carlson Wagonlit Travel into Azure AD, you need to add Carlson Wagonlit Travel from the gallery to your list of managed SaaS apps.
+To configure the integration of CWT into Azure AD, you need to add CWT from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Carlson Wagonlit Travel** in the search box.
-1. Select **Carlson Wagonlit Travel** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **CWT** in the search box.
+1. Select **CWT** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Carlson Wagonlit Travel
+## Configure and test Azure AD SSO for CWT
-Configure and test Azure AD SSO with Carlson Wagonlit Travel using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Carlson Wagonlit Travel.
+Configure and test Azure AD SSO with CWT using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CWT.
-To configure and test Azure AD SSO with Carlson Wagonlit Travel, perform the following steps:
+To configure and test Azure AD SSO with CWT, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
    1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
    1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Carlson Wagonlit Travel SSO](#configure-carlson-wagonlit-travel-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Carlson Wagonlit Travel test user](#create-carlson-wagonlit-travel-test-user)** - to have a counterpart of B.Simon in Carlson Wagonlit Travel that is linked to the Azure AD representation of user.
+1. **[Configure CWT SSO](#configure-cwt-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create CWT test user](#create-cwt-test-user)** - to have a counterpart of B.Simon in CWT that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Carlson Wagonlit Travel** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **CWT** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following step:
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- In the **Identifier** text box, type the value:
- `cwt-stage`
+1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
-5. On the **Set-up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
+1. On the **Set-up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
-6. On the **Set-up Carlson Wagonlit Travel** section, copy the appropriate URL(s) as per your requirement.
+1. On the **Set-up CWT** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Carlson Wagonlit Travel.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to CWT.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Carlson Wagonlit Travel**.
+1. In the applications list, select **CWT**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Carlson Wagonlit Travel SSO
+## Configure CWT SSO
-To configure single sign-on on **Carlson Wagonlit Travel** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Carlson Wagonlit Travel support team](https://www.mycwt.com/traveler-help/). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **CWT** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CWT support team](https://www.mycwt.com/traveler-help/). They set this setting to have the SAML SSO connection set properly on both sides.
-### Create Carlson Wagonlit Travel test user
+### Create CWT test user
-In this section, you create a user called Britta Simon in Carlson Wagonlit Travel. Work with [Carlson Wagonlit Travel support team](https://www.mycwt.com/traveler-help/) to add the users in the Carlson Wagonlit Travel platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in CWT. Work with [CWT support team](https://www.mycwt.com/traveler-help/) to add the users in the CWT platform. Users must be created and activated before you use single sign-on.
## Test SSO

In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on Test this application in Azure portal and you should be automatically signed in to the Carlson Wagonlit Travel for which you set up the SSO.
+* Click on Test this application in Azure portal and you should be automatically signed in to the CWT for which you set up the SSO.
-* You can use Microsoft My Apps. When you click the Carlson Wagonlit Travel tile in the My Apps, you should be automatically signed in to the Carlson Wagonlit Travel for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the CWT tile in the My Apps, you should be automatically signed in to the CWT for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure Carlson Wagonlit Travel you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure CWT you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Cloud Service Picco Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloud-service-picco-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Cloud Service PICCO | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Cloud Service PICCO'
description: Learn how to configure single sign-on between Azure Active Directory and Cloud Service PICCO.
Previously updated : 12/21/2018 Last updated : 06/07/2022
-# Tutorial: Azure Active Directory integration with Cloud Service PICCO
+# Tutorial: Azure AD SSO integration with Cloud Service PICCO
-In this tutorial, you learn how to integrate Cloud Service PICCO with Azure Active Directory (Azure AD).
-Integrating Cloud Service PICCO with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Cloud Service PICCO with Azure Active Directory (Azure AD). When you integrate Cloud Service PICCO with Azure AD, you can:
-* You can control in Azure AD who has access to Cloud Service PICCO.
-* You can enable your users to be automatically signed-in to Cloud Service PICCO (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Cloud Service PICCO.
+* Enable your users to be automatically signed-in to Cloud Service PICCO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Cloud Service PICCO, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Cloud Service PICCO single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Cloud Service PICCO single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Cloud Service PICCO supports **SP** initiated SSO
-* Cloud Service PICCO supports **Just In Time** user provisioning
+* Cloud Service PICCO supports **SP** initiated SSO.
+* Cloud Service PICCO supports **Just In Time** user provisioning.
-## Adding Cloud Service PICCO from the gallery
+## Add Cloud Service PICCO from the gallery
To configure the integration of Cloud Service PICCO into Azure AD, you need to add Cloud Service PICCO from the gallery to your list of managed SaaS apps.
-**To add Cloud Service PICCO from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Cloud Service PICCO**, select **Cloud Service PICCO** from result panel then click **Add** button to add the application.
-
- ![Cloud Service PICCO in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Cloud Service PICCO** in the search box.
+1. Select **Cloud Service PICCO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Cloud Service PICCO based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Cloud Service PICCO needs to be established.
+## Configure and test Azure AD SSO for Cloud Service PICCO
-To configure and test Azure AD single sign-on with Cloud Service PICCO, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Cloud Service PICCO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cloud Service PICCO.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Cloud Service PICCO Single Sign-On](#configure-cloud-service-picco-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Create Cloud Service PICCO test user](#create-cloud-service-picco-test-user)** - to have a counterpart of Britta Simon in Cloud Service PICCO that is linked to the Azure AD representation of user.
-5. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Cloud Service PICCO, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Cloud Service PICCO SSO](#configure-cloud-service-picco-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Cloud Service PICCO test user](#create-cloud-service-picco-test-user)** - to have a counterpart of B.Simon in Cloud Service PICCO that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Cloud Service PICCO, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Cloud Service PICCO** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Cloud Service PICCO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![Configure single sign-on link](common/select-sso.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![Cloud Service PICCO Domain and URLs single sign-on information](common/sp-identifier-reply.png)
+ a. In the **Identifier** box, type a value using the following pattern:
+ `<SUB DOMAIN>.cloudservicepicco.com`
- a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://<SUB DOMAIN>.cloudservicepicco.com/app`
- b. In the **Identifier** box, type a URL using the following pattern:
- `<SUB DOMAIN>.cloudservicepicco.com`
-
- c. In the **Reply URL** text box, type a URL using the following pattern:
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
`https://<SUB DOMAIN>.cloudservicepicco.com/app` > [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact [Cloud Service PICCO Client support team](mailto:picco.support@est.fujitsu.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Cloud Service PICCO Client support team](mailto:picco.support@est.fujitsu.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-4. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
-
-### Configure Cloud Service PICCO Single Sign-On
-
-To configure single sign-on on **Cloud Service PICCO** side, you need to send the **App Federation Metadata Url** to [Cloud Service PICCO support team](mailto:picco.support@est.fujitsu.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
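
As an alternative to the portal steps above, you can create the same test user with the Azure CLI. This is a minimal sketch; the UPN suffix and the password are placeholders to replace with values valid in your tenant.

```azurecli-interactive
# Create the B.Simon test user (replace the UPN suffix and the password with your own values).
az ad user create \
  --display-name "B.Simon" \
  --user-principal-name "B.Simon@contoso.com" \
  --password "<strong-password>"
```
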
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Cloud Service PICCO.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cloud Service PICCO.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Cloud Service PICCO**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Cloud Service PICCO**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
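
If you want to script this assignment instead, the sketch below calls the Microsoft Graph `appRoleAssignedTo` endpoint through `az rest`. It assumes a recent Azure CLI that exposes object IDs as `id`, and it uses the all-zeros `appRoleId`, which corresponds to the Default Access role.

```azurecli-interactive
# Look up the object IDs of the test user and of the app's service principal.
userId=$(az ad user show --id "B.Simon@contoso.com" --query id --output tsv)
spId=$(az ad sp list --display-name "Cloud Service PICCO" --query "[0].id" --output tsv)

# Assign the user to the app; the all-zeros appRoleId maps to the "Default Access" role.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/servicePrincipals/$spId/appRoleAssignedTo" \
  --body "{\"principalId\": \"$userId\", \"resourceId\": \"$spId\", \"appRoleId\": \"00000000-0000-0000-0000-000000000000\"}"
```
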
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Cloud Service PICCO SSO
-2. In the applications list, select **Cloud Service PICCO**.
-
- ![The Cloud Service PICCO link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **Cloud Service PICCO** side, you need to send the **App Federation Metadata Url** to [Cloud Service PICCO support team](mailto:picco.support@est.fujitsu.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Cloud Service PICCO test user In this section, a user called Britta Simon is created in Cloud Service PICCO. Cloud Service PICCO supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Cloud Service PICCO, a new one is created after authentication.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Cloud Service PICCO tile in the Access Panel, you should be automatically signed in to the Cloud Service PICCO for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to Cloud Service PICCO Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to Cloud Service PICCO Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Cloud Service PICCO tile in the My Apps, this will redirect to Cloud Service PICCO Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Cloud Service PICCO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Guardium Data Protection Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/guardium-data-protection-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Guardium Data Protection'
+description: Learn how to configure single sign-on between Azure Active Directory and Guardium Data Protection.
++++++++ Last updated : 05/31/2022++++
+# Tutorial: Azure AD SSO integration with Guardium Data Protection
+
+In this tutorial, you'll learn how to integrate Guardium Data Protection with Azure Active Directory (Azure AD). When you integrate Guardium Data Protection with Azure AD, you can:
+
+* Control in Azure AD who has access to Guardium Data Protection.
+* Enable your users to be automatically signed-in to Guardium Data Protection with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Guardium Data Protection single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Guardium Data Protection supports **SP** and **IDP** initiated SSO.
+
+## Add Guardium Data Protection from the gallery
+
+To configure the integration of Guardium Data Protection into Azure AD, you need to add Guardium Data Protection from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Guardium Data Protection** in the search box.
+1. Select **Guardium Data Protection** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Guardium Data Protection
+
+Configure and test Azure AD SSO with Guardium Data Protection using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Guardium Data Protection.
+
+To configure and test Azure AD SSO with Guardium Data Protection, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Guardium Data Protection SSO](#configure-guardium-data-protection-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Guardium Data Protection test user](#create-guardium-data-protection-test-user)** - to have a counterpart of B.Simon in Guardium Data Protection that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Guardium Data Protection** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `<hostname>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<hostname>:8443/saml/sso`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<hostname>:8443`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Guardium Data Protection support team](mailto:NA@ibm.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Guardium Data Protection application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the Guardium Data Protection application image.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Guardium Data Protection application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+ |-| |
+ | jobtitle | user.jobtitle |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Guardium Data Protection** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
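
If you prefer to fetch the federation metadata from the command line instead of downloading it in the portal, Azure AD also publishes it at a well-known endpoint. The tenant ID and application ID below are placeholders for your own values; this is only a convenience sketch, and the portal download described above works just as well.

```azurecli-interactive
# Download the app-specific federation metadata XML (replace <tenant-id> and <application-id>).
curl -o federationmetadata.xml \
  "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<application-id>"
```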
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Guardium Data Protection.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Guardium Data Protection**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Guardium Data Protection SSO
+
+To configure single sign-on on **Guardium Data Protection** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Guardium Data Protection support team](mailto:NA@ibm.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Guardium Data Protection test user
+
+In this section, you create a user called Britta Simon in Guardium Data Protection. Work with [Guardium Data Protection support team](mailto:NA@ibm.com) to add the users in the Guardium Data Protection platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Guardium Data Protection Sign-on URL where you can initiate the login flow.
+
+* Go to Guardium Data Protection Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Guardium Data Protection for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Guardium Data Protection tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Guardium Data Protection for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Guardium Data Protection, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Javelo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/javelo-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Javelo'
+description: Learn how to configure single sign-on between Azure Active Directory and Javelo.
++++++++ Last updated : 06/06/2022++++
+# Tutorial: Azure AD SSO integration with Javelo
+
+In this tutorial, you'll learn how to integrate Javelo with Azure Active Directory (Azure AD). When you integrate Javelo with Azure AD, you can:
+
+* Control in Azure AD who has access to Javelo.
+* Enable your users to be automatically signed-in to Javelo with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Javelo single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Javelo supports **SP** initiated SSO.
+* Javelo supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Javelo from the gallery
+
+To configure the integration of Javelo into Azure AD, you need to add Javelo from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Javelo** in the search box.
+1. Select **Javelo** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Javelo
+
+Configure and test Azure AD SSO with Javelo using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Javelo.
+
+To configure and test Azure AD SSO with Javelo, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Javelo SSO](#configure-javelo-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Javelo test user](#create-javelo-test-user)** - to have a counterpart of B.Simon in Javelo that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Javelo** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, upload the **Service Provider metadata file**, which you can download from the [URL](https://api.javelo.io/omniauth/<CustomerSPIdentifier>_saml/metadata) (a sample download command is shown after these steps), and perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Screenshot shows Basic SAML Configuration with the Upload metadata file link.](common/upload-metadata.png "Folder")
+
+ b. Click on **folder logo** to select the metadata file and click **Upload**.
+
+ ![Screenshot shows a dialog box where you can select and upload a file.](common/browse-upload-metadata.png "Logo")
+
+   c. Once the metadata file is successfully uploaded, the necessary URLs are populated automatically.
+
+ d. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CustomerSubdomain>.javelo.io/auth/login`
+
+ > [!NOTE]
+ > This value is not real. Update this value with the actual Sign-on URL. Contact [Javelo Client support team](mailto:Support@javelo.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
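
For reference, here is the sample download command for the Service Provider metadata file mentioned in the first step. `<CustomerSPIdentifier>` is the same placeholder used in the URL above; Javelo provides the actual value.

```azurecli-interactive
# Download the Javelo SP metadata file (replace <CustomerSPIdentifier> with the value from Javelo).
curl -o javelo-sp-metadata.xml \
  "https://api.javelo.io/omniauth/<CustomerSPIdentifier>_saml/metadata"
```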
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Javelo.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Javelo**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Javelo SSO
+
+1. Log in to your Javelo company site as an administrator.
+
+1. Go to **Admin** view and navigate to **SSO** tab > **Azure Active Directory** and click **Configure**.
+
+1. In the **Enable SSO with Azure Active Directory** page, perform the following steps:
+
+ ![Screenshot that shows the Configuration Settings.](./media/javelo-tutorial/settings.png "Configuration")
+
+ a. Enter a valid name in the **Provider** textbox.
+
+ b. In the **Entity ID** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ c. In the **Metadata URL** textbox, paste the **App Federation Metadata Url** which you have copied from the Azure portal.
+
+ d. Click **Test URL**.
+
+ e. Enter a valid domain in the **Email Domains** textbox.
+
+ f. Click **Enable SSO with Azure Active Directory**.
+
+### Create Javelo test user
+
+In this section, a user called B.Simon is created in Javelo. Javelo supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Javelo, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Javelo Sign-on URL where you can initiate the login flow.
+
+* Go to Javelo Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Javelo tile in the My Apps, this will redirect to Javelo Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Javelo, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Paloaltoadmin Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/paloaltoadmin-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with Palo Alto Networks - Admin UI | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Palo Alto Networks - Admin UI'
description: Learn how to configure single sign-on between Azure Active Directory and Palo Alto Networks - Admin UI.
Previously updated : 09/08/2021 Last updated : 06/08/2022 # Tutorial: Azure AD SSO integration with Palo Alto Networks - Admin UI
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Palo Alto Networks - Admin UI single sign-on (SSO) enabled subscription.
+* The service must be publicly available. For more information, refer to [this](../develop/single-sign-on-saml-protocol.md) page.
## Scenario description
active-directory Snowflake Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-tutorial.md
Previously updated : 12/22/2021 Last updated : 06/03/2022 # Tutorial: Azure AD SSO integration with Snowflake
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Snowflake SSO
-1. In a different web browser window, login to Snowflake as a Security Administrator.
+1. In a different web browser window, log in to Snowflake as a Security Administrator.
1. **Switch Role** to **ACCOUNTADMIN**, by clicking on **profile** on the top right side of page.
CREATE [ OR REPLACE ] SECURITY INTEGRATION [ IF NOT EXISTS ]
[ SAML2_SNOWFLAKE_ACS_URL = '<string_literal>' ] ```
+If you are using a new Snowflake URL that includes an organization name as the login URL, you must update the following parameters:
+
+ Alter the integration to add the Snowflake Issuer URL and the SAML2 Snowflake ACS URL. For more information, follow step 6 in [this](https://community.snowflake.com/s/article/HOW-TO-SETUP-SSO-WITH-ADFS-AND-THE-SNOWFLAKE-NEW-URL-FORMAT-OR-PRIVATELINK) article.
+
+1. [ SAML2_SNOWFLAKE_ISSUER_URL = '<string_literal>' ]
+
+ `alter security integration <your security integration name goes here> set SAML2_SNOWFLAKE_ISSUER_URL = 'https://<organization_name>-<account name>.snowflakecomputing.com';`
+
+2. [ SAML2_SNOWFLAKE_ACS_URL = '<string_literal>' ]
+
+ `alter security integration <your security integration name goes here> set SAML2_SNOWFLAKE_ACS_URL = 'https://<organization_name>-<account name>.snowflakecomputing.com/fed/login';`
+ > [!NOTE] > Please follow [this](https://docs.snowflake.com/en/sql-reference/sql/create-security-integration.html) guide to know more about how to create a SAML2 security integration.
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Snowflake Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Snowflake Sign-on URL where you can initiate the login flow.
-* Go to Snowflake Sign-on URL directly and initiate the login flow from there.
+* Go to Snowflake Sign-on URL directly and initiate the login flow from there.
#### IDP initiated: * Click on **Test this application** in Azure portal and you should be automatically signed in to the Snowflake for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Snowflake tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Snowflake for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Snowflake tile in the My Apps, if configured in SP mode you would be redirected to the application Sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Snowflake for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Timeoffmanager Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timeoffmanager-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with TimeOffManager | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with TimeOffManager'
description: Learn how to configure single sign-on between Azure Active Directory and TimeOffManager.
Previously updated : 12/10/2019 Last updated : 06/07/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with TimeOffManager
+# Tutorial: Azure AD SSO integration with TimeOffManager
In this tutorial, you'll learn how to integrate TimeOffManager with Azure Active Directory (Azure AD). When you integrate TimeOffManager with Azure AD, you can:
In this tutorial, you'll learn how to integrate TimeOffManager with Azure Active
* Enable your users to be automatically signed-in to TimeOffManager with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * TimeOffManager single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
+* TimeOffManager supports **IDP** initiated SSO.
-* TimeOffManager supports **IDP** initiated SSO
-
-* TimeOffManager supports **Just In Time** user provisioning
+* TimeOffManager supports **Just In Time** user provisioning.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant. -
-## Adding TimeOffManager from the gallery
+## Add TimeOffManager from the gallery
To configure the integration of TimeOffManager into Azure AD, you need to add TimeOffManager from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **TimeOffManager** in the search box. 1. Select **TimeOffManager** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for TimeOffManager
+## Configure and test Azure AD SSO for TimeOffManager
Configure and test Azure AD SSO with TimeOffManager using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TimeOffManager.
-To configure and test Azure AD SSO with TimeOffManager, complete the following building blocks:
+To configure and test Azure AD SSO with TimeOffManager, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with TimeOffManager, complete the following b
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **TimeOffManager** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **TimeOffManager** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following step:
In the **Reply URL** text box, type a URL using the following pattern: `https://www.timeoffmanager.com/cpanel/sso/consume.aspx?company_id=<companyid>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. TimeOffManager application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/edit-attribute.png)
+ ![Screenshot shows the image of TimeOffManager application.](common/edit-attribute.png "Image")
1. In addition to above, TimeOffManager application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirement.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
1. On the **Set up TimeOffManager** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **TimeOffManager**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
2. Go to **Account \> Account Options \> Single Sign-On Settings**.
- ![Screenshot shows Single Sign-On Settings selected from Account Options.](./media/timeoffmanager-tutorial/ic795917.png "Single Sign-On Settings")
+ ![Screenshot shows Single Sign-On Settings selected from Account Options.](./media/timeoffmanager-tutorial/account.png "Single Sign-On Settings")
3. In the **Single Sign-On Settings** section, perform the following steps:
- ![Screenshot shows the Single Sign-On Settings section where you can enter the values described.](./media/timeoffmanager-tutorial/ic795918.png "Single Sign-On Settings")
+ ![Screenshot shows the Single Sign-On Settings section where you can enter the values described.](./media/timeoffmanager-tutorial/settings.png "Single Sign-On Settings")
a. Open your base-64 encoded certificate in notepad, copy the content of it into your clipboard, and then paste the entire Certificate into **X.509 Certificate** textbox.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. In **Single Sign on settings** page, copy the value of **Assertion Consumer Service URL** and paste it in the **Reply URL** text box under **Basic SAML Configuration** section in Azure portal.
- ![Screenshot shows the Assertion Consumer Service U R L link.](./media/timeoffmanager-tutorial/ic795915.png "Single Sign-On Settings")
+ ![Screenshot shows the Assertion Consumer Service U R L link.](./media/timeoffmanager-tutorial/values.png "Single Sign-On Settings")
### Create TimeOffManager test user
In this section, a user called Britta Simon is created in TimeOffManager. TimeOf
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the TimeOffManager tile in the Access Panel, you should be automatically signed in to the TimeOffManager for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the TimeOffManager for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the TimeOffManager tile in the My Apps, you should be automatically signed in to the TimeOffManager for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try TimeOffManager with Azure AD](https://aad.portal.azure.com/)
+Once you configure TimeOffManager, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Versal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/versal-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Versal | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Versal'
description: Learn how to configure single sign-on between Azure Active Directory and Versal.
Previously updated : 12/10/2019 Last updated : 06/07/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Versal
+# Tutorial: Azure AD SSO integration with Versal
In this tutorial, you'll learn how to integrate Versal with Azure Active Directory (Azure AD). When you integrate Versal with Azure AD, you can:
In this tutorial, you'll learn how to integrate Versal with Azure Active Directo
* Enable your users to be automatically signed-in to Versal with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Versal single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment. -
-* Versal supports **IDP** initiated SSO
+* Versal supports **IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Versal from the gallery
+## Add Versal from the gallery
To configure the integration of Versal into Azure AD, you need to add Versal from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Versal** in the search box. 1. Select **Versal** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for Versal
+## Configure and test Azure AD SSO for Versal
Configure and test Azure AD SSO with Versal using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Versal.
-To configure and test Azure AD SSO with Versal, complete the following building blocks:
+To configure and test Azure AD SSO with Versal, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Versal, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Versal** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Versal** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** page, perform the following steps:
- a. In the **Identifier** text box, type a URL:
+ a. In the **Identifier** text box, type the value:
`VERSAL` b. In the **Reply URL** text box, type a URL using the following pattern:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Versal application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where as **nameidentifier** is mapped with **user.userprincipalname**. Versal application expects **nameidentifier** to be mapped with **user.mail**, so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
- ![image](common/edit-attribute.png)
+ ![Screenshot shows the image of Versal application.](common/edit-attribute.png "Attributes")
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up Versal** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Versal**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
You will need to create a course, share it with your organization, and publish i
Please see [Creating a course](https://support.versal.com/hc/articles/203722528-Create-a-course), [Publishing a course](https://support.versal.com/hc/articles/203753398-Publishing-a-course), and [Course and learner management](https://support.versal.com/hc/articles/206029467-Course-and-learner-management) for more information.
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)--- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Versal with Azure AD](https://aad.portal.azure.com/)
+Once you configure Versal, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
With an AKS cluster deployed into your existing virtual network subnet, you can
[express-route]: ../expressroute/expressroute-introduction.md [network-comparisons]: concepts-network.md#compare-network-models [custom-route-table]: ../virtual-network/manage-route-table.md
-[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-mi
+[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-managed-identity
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
For more information on configuring your load balancer in a different subnet, se
## Connect Azure Private Link service to internal load balancer (Preview)
-To attach an Azure Private Link Service to an internal load balancer, create a service manifest named `internal-lb-pls.yaml` with the service type *LoadBalancer* and the *azure-load-balancer-internal* and *azure-pls-create* annotation as shown in the example below. For more options, refer to the [Azure Private Link Service Integration](https://kubernetes-sigs.github.io/cloud-provider-azure/development/design-docs/pls-integration/) design document
+### Before you begin
+
+You must have the following resources installed (a sample version check follows this list):
+
+* The Azure CLI
+* The `aks-preview` extension version 0.5.50 or later
+* Kubernetes version 1.22.x or above
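
To confirm that an existing cluster meets the Kubernetes version requirement, you can query it with the Azure CLI; the resource group and cluster names below are placeholders.

```azurecli-interactive
# Check the Kubernetes version of an existing AKS cluster.
az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query kubernetesVersion \
  --output tsv
```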
+
+#### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Create a Private Link service connection
+
+To attach an Azure Private Link service to an internal load balancer, create a service manifest named `internal-lb-pls.yaml` with the service type *LoadBalancer* and the *azure-load-balancer-internal* and *azure-pls-create* annotations, as shown in the example below. For more options, refer to the [Azure Private Link Service Integration](https://kubernetes-sigs.github.io/cloud-provider-azure/development/design-docs/pls-integration/) design document.
```yaml apiVersion: v1
pls-xyz pls-xyz.abc123-defg-4hij-56kl-789mnop.eastus2.azure.privatelinkservice
```
-### Create a Private Endpoint to the Private Link Service
+### Create a Private Endpoint to the Private Link service
A Private Endpoint allows you to privately connect to your Kubernetes service object via the Private Link Service created above. To do so, follow the example shown below:
aks Kubernetes Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-service-principal.md
Title: Service principals for Azure Kubernetes Services (AKS)
-description: Create and manage an Azure Active Directory service principal for a cluster in Azure Kubernetes Service (AKS)
+ Title: Use a service principal with Azure Kubernetes Services (AKS)
+description: Create and manage an Azure Active Directory service principal with a cluster in Azure Kubernetes Service (AKS)
Previously updated : 12/06/2021 Last updated : 06/08/2022 #Customer intent: As a cluster operator, I want to understand how to create a service principal and delegate permissions for AKS to access required resources. In large enterprise environments, the user that deploys the cluster (or CI/CD system), may not have permissions to create this service principal automatically when the cluster is created.
-# Service principals with Azure Kubernetes Service (AKS)
+# Use a service principal with Azure Kubernetes Service (AKS)
-To interact with Azure APIs, an AKS cluster requires either an [Azure Active Directory (AD) service principal][aad-service-principal] or a [managed identity](use-managed-identity.md). A service principal or managed identity is needed to dynamically create and manage other Azure resources such as an Azure load balancer or container registry (ACR).
+To access other Azure Active Directory (Azure AD) resources, an AKS cluster requires either an [Azure Active Directory (AD) service principal][aad-service-principal] or a [managed identity][managed-identity-resources-overview]. A service principal or managed identity is needed to dynamically create and manage other Azure resources such as an Azure load balancer or container registry (ACR).
+
+Managed identities are the recommended way to authenticate with other resources in Azure, and are the default authentication method for your AKS cluster. For more information about using a managed identity with your cluster, see [Use a system-assigned managed identity][use-managed-identity].
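For example, creating a cluster without specifying any service principal parameters results in a cluster that uses a system-assigned managed identity, as in this minimal sketch:

```azurecli
# No --service-principal or --client-secret is specified, so AKS creates and uses
# a system-assigned managed identity for the cluster
az aks create --name myAKSCluster --resource-group myResourceGroup
```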
This article shows how to create and use a service principal for your AKS clusters. ## Before you begin
-To create an Azure AD service principal, you must have permissions to register an application with your Azure AD tenant, and to assign the application to a role in your subscription. If you don't have the necessary permissions, you might need to ask your Azure AD or subscription administrator to assign the necessary permissions, or pre-create a service principal for you to use with the AKS cluster.
-
-If you are using a service principal from a different Azure AD tenant, there are additional considerations around the permissions available when you deploy the cluster. You may not have the appropriate permissions to read and write directory information. For more information, see [What are the default user permissions in Azure Active Directory?][azure-ad-permissions]
-
-### [Azure CLI](#tab/azure-cli)
-
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-You also need Azure PowerShell version 5.0.0 or later installed. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install the Azure Az PowerShell module][install-the-azure-az-powershell-module].
+To create an Azure AD service principal, you must have permissions to register an application with your Azure AD tenant, and to assign the application to a role in your subscription. If you don't have the necessary permissions, you need to ask your Azure AD or subscription administrator to assign the necessary permissions, or pre-create a service principal for you to use with the AKS cluster.
--
-## Automatically create and use a service principal
-
-### [Azure CLI](#tab/azure-cli)
+If you're using a service principal from a different Azure AD tenant, there are other considerations around the permissions available when you deploy the cluster. You may not have the appropriate permissions to read and write directory information. For more information, see [What are the default user permissions in Azure Active Directory?][azure-ad-permissions]
-When you create an AKS cluster in the Azure portal or using the [az aks create][az-aks-create] command, Azure creates a managed identity.
+## Prerequisites
-In the following Azure CLI example, a service principal is not specified. In this scenario, the Azure CLI creates a managed identity for the AKS cluster.
+Azure CLI version 2.0.59 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-```azurecli
-az aks create --name myAKSCluster --resource-group myResourceGroup
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
+Azure PowerShell version 5.0.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install the Azure Az PowerShell module][install-the-azure-az-powershell-module].
-When you create an AKS cluster in the Azure portal or using the [New-AzAksCluster][new-azakscluster] command, Azure can generate a new managed identity .
-
-In the following Azure PowerShell example, a service principal is not specified. In this scenario, Azure PowerShell creates a managed identity for the AKS cluster.
-
-```azurepowershell-interactive
-New-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup
-```
-
-> [!NOTE]
-> For error "Service principal clientID: 00000000-0000-0000-0000-000000000000 not found in Active Directory tenant 00000000-0000-0000-0000-000000000000", see [Additional considerations](#additional-considerations) to remove the `acsServicePrincipal.json` file.
-- ## Manually create a service principal ### [Azure CLI](#tab/azure-cli)
To manually create a service principal with the Azure CLI, use the [az ad sp cre
az ad sp create-for-rbac --name myAKSClusterServicePrincipal ```
-The output is similar to the following example. Make a note of your own `appId` and `password`. These values are used when you create an AKS cluster in the next section.
+The output is similar to the following example. Copy the values for `appId` and `password`. These values are used when you create an AKS cluster in the next section.
```json {
Id : 559513bd-0c19-4c1a-87cd-851a26afd5fc
Type : ```
-To decrypt the value stored in the **Secret** secure string, you use the following example.
+To decrypt the value stored in the **Secret** secure string, run the following command:
```azurepowershell-interactive $BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sp.Secret)
az aks create \
``` > [!NOTE]
-> If you're using an existing service principal with customized secret, ensure the secret is no longer than 190 bytes.
-
-If you deploy an AKS cluster using the Azure portal, on the *Authentication* page of the **Create Kubernetes cluster** dialog, choose to **Configure service principal**. Select **Use existing**, and specify the following values:
--- **Service principal client ID** is your *appId*-- **Service principal client secret** is the *password* value-
-![Image of browsing to Azure Vote](media/kubernetes-service-principal/portal-configure-service-principal.png)
+> If you're using an existing service principal with customized secret, ensure the secret is not longer than 190 bytes.
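A complete form of the `az aks create` command shown truncated above would typically look like this sketch, where the placeholders are the `appId` and `password` values you copied earlier:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --service-principal <appId> \
    --client-secret <password>
```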
### [Azure PowerShell](#tab/azure-powershell)
New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -ServiceP
> [!NOTE] > If you're using an existing service principal with customized secret, ensure the secret is no longer than 190 bytes.
-If you deploy an AKS cluster using the Azure portal, on the *Authentication* page of the **Create Kubernetes cluster** dialog, choose to **Configure service principal**. Select **Use existing**, and specify the following values:
--- **Service principal client ID** is your *ApplicationId*-- **Service principal client secret** is the decrypted *Secret* value-
-![Image of browsing to Azure Vote](media/kubernetes-service-principal/portal-configure-service-principal.png)
- ## Delegate access to other Azure resources
The `Scope` for a resource needs to be a full resource ID, such as */subscriptio
> [!NOTE] > If you have removed the Contributor role assignment from the node resource group, the operations below may fail.
-> Permission grants to clusters using System Managed Identity may take up 60 minutes to populate.
+> Permission granted to a cluster using a system-assigned managed identity may take up to 60 minutes to populate.
-The following sections detail common delegations that you may need to make.
+The following sections detail common delegations that you may need to assign.
### Azure Container Registry
If you use Azure Container Registry (ACR) as your container image store, you nee
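A typical assignment for this scenario is a sketch like the following; *AcrPull* is the common choice, and the registry name is a placeholder:

```azurecli
az role assignment create \
    --assignee <appId> \
    --role AcrPull \
    --scope $(az acr show --name myContainerRegistry --query id --output tsv)
```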
### Networking
-You may use advanced networking where the virtual network and subnet or public IP addresses are in another resource group. Assign the [Network Contributor][rbac-network-contributor] built-in role on the subnet within the virtual network. Alternatively, you can create a [custom role][rbac-custom-role] with permissions to access the network resources in that resource group. See [AKS service permissions][aks-permissions] for more details.
+You may use advanced networking where the virtual network and subnet or public IP addresses are in another resource group. Assign the [Network Contributor][rbac-network-contributor] built-in role on the subnet within the virtual network. Alternatively, you can create a [custom role][rbac-custom-role] with permissions to access the network resources in that resource group. For more information, see [AKS service permissions][aks-permissions]. A sketch of the role assignment follows.
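For example, assigning the Network Contributor role on an existing subnet might look like this sketch (all IDs and names are placeholders):

```azurecli
az role assignment create \
    --assignee <appId> \
    --role "Network Contributor" \
    --scope /subscriptions/<subscription-id>/resourceGroups/<network-resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>
```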
### Storage
-You may need to access existing Disk resources in another resource group. Assign one of the following set of role permissions:
+If you need to access existing disk resources in another resource group, assign one of the following sets of role permissions (a sketch follows the list):
- Create a [custom role][rbac-custom-role] and define the following role permissions: - *Microsoft.Compute/disks/read*
You may need to access existing Disk resources in another resource group. Assign
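A sketch of defining such a custom role with the Azure CLI follows. The `disks/write` action and the assignable scope are assumptions added for illustration; only `Microsoft.Compute/disks/read` appears in the list above, so adjust the actions and scope to your environment.

```azurecli
# Sketch: custom role limited to managed disk access in one resource group
az role definition create --role-definition '{
    "Name": "AKS Disk Access (example)",
    "Description": "Read and write managed disks in a specific resource group",
    "Actions": [ "Microsoft.Compute/disks/read", "Microsoft.Compute/disks/write" ],
    "AssignableScopes": [ "/subscriptions/<subscription-id>/resourceGroups/<disk-resource-group>" ]
}'
```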
### Azure Container Instances
-If you use Virtual Kubelet to integrate with AKS and choose to run Azure Container Instances (ACI) in resource group separate to the AKS cluster, the AKS service principal must be granted *Contributor* permissions on the ACI resource group.
+If you use Virtual Kubelet to integrate with AKS and choose to run Azure Container Instances (ACI) in a resource group separate from the AKS cluster, the AKS cluster service principal must be granted *Contributor* permissions on the ACI resource group.
-## Additional considerations
+## Other considerations
### [Azure CLI](#tab/azure-cli)
-When using AKS and Azure AD service principals, keep the following considerations in mind.
+When using AKS and an Azure AD service principal, consider the following:
-- The service principal for Kubernetes is a part of the cluster configuration. However, don't use the identity to deploy the cluster.
+- The service principal for Kubernetes is a part of the cluster configuration. However, don't use this identity to deploy the cluster.
- By default, the service principal credentials are valid for one year. You can [update or rotate the service principal credentials][update-credentials] at any time. - Every service principal is associated with an Azure AD application. The service principal for a Kubernetes cluster can be associated with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint. - When you specify the service principal **Client ID**, use the value of the `appId`. - On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the file `/etc/kubernetes/azure.json` - When you use the [az aks create][az-aks-create] command to generate the service principal automatically, the service principal credentials are written to the file `~/.azure/aksServicePrincipal.json` on the machine used to run the command.-- If you do not specifically pass a service principal in additional AKS CLI commands, the default service principal located at `~/.azure/aksServicePrincipal.json` is used.-- You can also optionally remove the aksServicePrincipal.json file, and AKS will create a new service principal.-- When you delete an AKS cluster that was created by [az aks create][az-aks-create], the service principal that was created automatically is not deleted.
- - To delete the service principal, query for your cluster *servicePrincipalProfile.clientId* and then delete with [az ad sp delete][az-ad-sp-delete]. Replace the following resource group and cluster names with your own values:
+- If you don't specify a service principal with AKS CLI commands, the default service principal located at `~/.azure/aksServicePrincipal.json` is used.
+- You can optionally remove the `aksServicePrincipal.json` file, and AKS creates a new service principal.
+- When you delete an AKS cluster that was created by [az aks create][az-aks-create], the service principal created automatically isn't deleted.
+ - To delete the service principal, query for your cluster's *servicePrincipalProfile.clientId* and then delete it using the [az ad sp delete][az-ad-sp-delete] command. Replace the `-g` parameter value with your resource group name and the `-n` parameter value with your cluster name:
```azurecli az ad sp delete --id $(az aks show -g myResourceGroup -n myAKSCluster --query servicePrincipalProfile.clientId -o tsv)
When using AKS and Azure AD service principals, keep the following consideration
### [Azure PowerShell](#tab/azure-powershell)
-When using AKS and Azure AD service principals, keep the following considerations in mind.
+When using AKS and an Azure AD service principal, consider the following:
-- The service principal for Kubernetes is a part of the cluster configuration. However, don't use the identity to deploy the cluster.
+- The service principal for Kubernetes is a part of the cluster configuration. However, don't use this identity to deploy the cluster.
- By default, the service principal credentials are valid for one year. You can [update or rotate the service principal credentials][update-credentials] at any time. - Every service principal is associated with an Azure AD application. The service principal for a Kubernetes cluster can be associated with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint. - When you specify the service principal **Client ID**, use the value of the `ApplicationId`. - On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the file `/etc/kubernetes/azure.json` - When you use the [New-AzAksCluster][new-azakscluster] command to generate the service principal automatically, the service principal credentials are written to the file `~/.azure/acsServicePrincipal.json` on the machine used to run the command.-- If you do not specifically pass a service principal in additional AKS PowerShell commands, the default service principal located at `~/.azure/acsServicePrincipal.json` is used.-- You can also optionally remove the acsServicePrincipal.json file, and AKS will create a new service principal.-- When you delete an AKS cluster that was created by [New-AzAksCluster][new-azakscluster], the service principal that was created automatically is not deleted.
- - To delete the service principal, query for your cluster *ServicePrincipalProfile.ClientId* and then delete with [Remove-AzADServicePrincipal][remove-azadserviceprincipal]. Replace the following resource group and cluster names with your own values:
+- If you don't specify a service principal with AKS PowerShell commands, the default service principal located at `~/.azure/acsServicePrincipal.json` is used.
+- You can optionally remove the `acsServicePrincipal.json` file, and AKS creates a new service principal.
+- When you delete an AKS cluster that was created by [New-AzAksCluster][new-azakscluster], the service principal created automatically isn't deleted.
+ - To delete the service principal, query for your cluster's *ServicePrincipalProfile.ClientId* and then delete it using the [Remove-AzADServicePrincipal][remove-azadserviceprincipal] command. Replace the `-ResourceGroupName` parameter value with your resource group name and the `-Name` parameter value with your cluster name:
```azurepowershell-interactive $ClientId = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster ).ServicePrincipalProfile.ClientId
When using AKS and Azure AD service principals, keep the following consideration
### [Azure CLI](#tab/azure-cli)
-The service principal credentials for an AKS cluster are cached by the Azure CLI. If these credentials have expired, you encounter errors deploying AKS clusters. The following error message when running [az aks create][az-aks-create] may indicate a problem with the cached service principal credentials:
+The service principal credentials for an AKS cluster are cached by the Azure CLI. If these credentials have expired, you encounter errors during deployment of the AKS cluster. The following error message when running [az aks create][az-aks-create] may indicate a problem with the cached service principal credentials:
```console Operation failed with status: 'Bad Request'.
Details: The credentials in ServicePrincipalProfile were invalid. Please see htt
(Details: adal: Refresh request failed. Status Code = '401'. ```
-Check the age of the credentials file using the following command:
+Check the age of the credentials file by running the following command:
```console ls -la $HOME/.azure/aksServicePrincipal.json ```
-The default expiration time for the service principal credentials is one year. If your *aksServicePrincipal.json* file is older than one year, delete the file and try to deploy an AKS cluster again.
+The default expiration time for the service principal credentials is one year. If your *aksServicePrincipal.json* file is older than one year, delete the file and retry deploying the AKS cluster, as shown in the sketch below.
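From a bash shell, a quick way to do that is the following sketch; the path matches the file checked above:

```azurecli
# Remove the cached, expired service principal credentials, then rerun az aks create
rm $HOME/.azure/aksServicePrincipal.json
```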
### [Azure PowerShell](#tab/azure-powershell)
-The service principal credentials for an AKS cluster are cached by Azure PowerShell. If these credentials have expired, you encounter errors deploying AKS clusters. The following error message when running [New-AzAksCluster][new-azakscluster] may indicate a problem with the cached service principal credentials:
+The service principal credentials for an AKS cluster are cached by Azure PowerShell. If these credentials have expired, you encounter errors during deployment of the AKS cluster. The following error message when running [New-AzAksCluster][new-azakscluster] may indicate a problem with the cached service principal credentials:
```console Operation failed with status: 'Bad Request'.
Details: The credentials in ServicePrincipalProfile were invalid. Please see htt
(Details: adal: Refresh request failed. Status Code = '401'. ```
-Check the age of the credentials file using the following command:
+Check the age of the credentials file by running the following command:
```azurepowershell-interactive Get-ChildItem -Path $HOME/.azure/aksServicePrincipal.json ```
-The default expiration time for the service principal credentials is one year. If your *aksServicePrincipal.json* file is older than one year, delete the file and try to deploy an AKS cluster again.
+The default expiration time for the service principal credentials is one year. If your *aksServicePrincipal.json* file is older than one year, delete the file and retry deploying the AKS cluster.
For information on how to update the credentials, see [Update or rotate the cred
[new-azroleassignment]: /powershell/module/az.resources/new-azroleassignment [set-azakscluster]: /powershell/module/az.aks/set-azakscluster [remove-azadserviceprincipal]: /powershell/module/az.resources/remove-azadserviceprincipal
+[use-managed-identity]: use-managed-identity.md
+[managed-identity-resources-overview]: ../active-directory/managed-identities-azure-resources/overview.md
aks Quickstart Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-event-grid.md
az group delete --name MyResourceGroup --yes --no-wait
## Next steps
-In this quickstart, you deployed a Kubernetes cluster and then subscribed to AKS events in Azure Event Hub.
+In this quickstart, you deployed a Kubernetes cluster and then subscribed to AKS events in Azure Event Hubs.
To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
To learn more about AKS, and walk through a complete code to deployment example,
[az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register [az-group-delete]: /cli/azure/group#az_group_delete
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
+[sp-delete]: kubernetes-service-principal.md#other-considerations
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md
For more information about using Helm, see the Helm documentation.
[helm-documentation]: https://helm.sh/docs/ [helm-existing]: kubernetes-helm.md [helm-install]: https://helm.sh/docs/intro/install/
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
+[sp-delete]: kubernetes-service-principal.md#other-considerations
[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
For more information on AKS, see [AKS overview][aks-intro]. For guidance on a cr
[az aks upgrade]: /cli/azure/aks#az_aks_upgrade [azure-cli-install]: /cli/azure/install-azure-cli [az-group-delete]: /cli/azure/group#az_group_delete
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
+[sp-delete]: kubernetes-service-principal.md#other-considerations
[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE [azure-powershell-install]: /powershell/azure/install-az-ps [get-azakscluster]: /powershell/module/az.aks/get-azakscluster
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
Title: Use managed identities in Azure Kubernetes Service
-description: Learn how to use managed identities in Azure Kubernetes Service (AKS)
+ Title: Use a managed identity in Azure Kubernetes Service
+description: Learn how to use a system-assigned or user-assigned managed identity in Azure Kubernetes Service (AKS)
Previously updated : 06/01/2022 Last updated : 06/07/2022
-# Use managed identities in Azure Kubernetes Service
+# Use a managed identity in Azure Kubernetes Service
-Currently, an Azure Kubernetes Service (AKS) cluster (specifically, the Kubernetes cloud provider) requires an identity to create additional resources like load balancers and managed disks in Azure. This identity can be either a *managed identity* or a *service principal*. If you use a [service principal](kubernetes-service-principal.md), you must either provide one or AKS creates one on your behalf. If you use managed identity, this will be created for you by AKS automatically. Clusters using service principals eventually reach a state in which the service principal must be renewed to keep the cluster working. Managing service principals adds complexity, which is why it's easier to use managed identities instead. The same permission requirements apply for both service principals and managed identities.
+An Azure Kubernetes Service (AKS) cluster requires an identity to access Azure resources like load balancers and managed disks. This identity can be either a managed identity or a service principal. By default, when you create an AKS cluster, a system-assigned managed identity is automatically created. The identity is managed by the Azure platform and doesn't require you to provision or rotate any secrets. For more information about managed identities in Azure AD, see [Managed identities for Azure resources][managed-identity-resources-overview].
-*Managed identities* are essentially a wrapper around service principals, and make their management simpler. Credential rotation for MI happens automatically every 46 days according to Azure Active Directory default. AKS uses both system-assigned and user-assigned managed identity types. These identities are currently immutable. To learn more, read about [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+To use a [service principal](kubernetes-service-principal.md), you have to create one; AKS doesn't create one automatically. The service principal used by a cluster eventually expires and must be renewed to keep the cluster working. Managing service principals adds complexity, which is why it's easier to use managed identities instead. The same permission requirements apply for both service principals and managed identities.
-## Before you begin
+Managed identities are essentially a wrapper around service principals, and make their management simpler. Managed identities use certificate-based authentication, and each managed identity's credential has an expiration of 90 days and is rolled after 45 days. AKS uses both system-assigned and user-assigned managed identity types, and these identities are immutable.
-You must have the following resource installed:
+## Prerequisites
-- The Azure CLI, version 2.23.0 or later-
-> [!NOTE]
-> AKS will create a kubelet MI in the Node resource group if you do not bring your own kubelet MI.
+Azure CLI version 2.23.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Limitations
-* Tenants move / migrate of managed identity enabled clusters isn't supported.
+* Moving or migrating a managed identity-enabled cluster between tenants isn't supported.
* If the cluster has `aad-pod-identity` enabled, Node-Managed Identity (NMI) pods modify the nodes' iptables to intercept calls to the Azure Instance Metadata endpoint. This configuration means any request made to the Metadata endpoint is intercepted by NMI even if the pod doesn't use
AKS uses several managed identities for built-in services and add-ons.
| Add-on | Virtual-Node (ACIConnector) | Manages required network resources for Azure Container Instances (ACI) | Contributor role for node resource group | No | OSS project | aad-pod-identity | Enables applications to access cloud resources securely with Azure Active Directory (AAD) | NA | Steps to grant permission at https://github.com/Azure/aad-pod-identity#role-assignment.
-## Create an AKS cluster with managed identities
+> [!NOTE]
+> AKS will create a kubelet managed identity in the Node resource group if you do not specify your own kubelet managed identity.
+
+## Create an AKS cluster using a managed identity
-You can now create an AKS cluster with managed identities by using the following CLI commands.
+You can create an AKS cluster using a system-assigned managed identity by running the following CLI commands.
First, create an Azure resource group:
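The commands for this step and the cluster creation step aren't shown in this excerpt; a typical sequence looks like the following sketch (the location and names are placeholders):

```azurecli-interactive
# Create the resource group
az group create --name myResourceGroup --location eastus

# Create the cluster; a system-assigned managed identity is created for it
az aks create \
    --resource-group myResourceGroup \
    --name myManagedCluster \
    --enable-managed-identity
```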
Finally, get credentials to access the cluster:
az aks get-credentials --resource-group myResourceGroup --name myManagedCluster ```
-## Update an AKS cluster to managed identities
+## Update an AKS cluster to use a managed identity
-You can now update an AKS cluster currently working with service principals to work with managed identities by using the following CLI commands.
+To update an AKS cluster currently using a service principal to work with a system-assigned managed identity, run the following CLI command.
```azurecli-interactive az aks update -g <RGName> -n <AKSName> --enable-managed-identity ```+ > [!NOTE]
-> An update will only work if there is an actual VHD update to consume. If you are running the latest VHD, you will need to wait till the next VHD is available in order to do the actual update.
+> An update will only work if there is an actual VHD update to consume. If you are running the latest VHD, you'll need to wait until the next VHD is available in order to perform the update.
> > [!NOTE]
-> After updating, your cluster's control plane and addon pods will switch to use managed identity, but kubelet will KEEP USING SERVICE PRINCIPAL until you upgrade your agentpool. Perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to managed identity.
+> After updating, your cluster's control plane and add-on pods use the managed identity, but kubelet continues using a service principal until you upgrade your agent pool. Perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to a managed identity.
>
-> If your cluster was using --attach-acr to pull from image from Azure Container Registry, after updating your cluster to Managed Identity, you need to rerun `az aks update --attach-acr <ACR Resource ID>` to let the newly created kubelet used for managed identity get the permission to pull from ACR. Otherwise you will not be able to pull from ACR after the upgrade.
+> If your cluster was using `--attach-acr` to pull images from Azure Container Registry, after updating your cluster to a managed identity, you need to rerun `az aks update --attach-acr <ACR Resource ID>` to grant the newly created kubelet managed identity permission to pull from ACR. Otherwise, you won't be able to pull from ACR after the upgrade.
>
-> The Azure CLI will ensure your addon's permission is correctly set after migrating, if you're not using the Azure CLI to perform the migrating operation, you will need to handle the addon identity's permission by yourself. Here is one example using [ARM](../role-based-access-control/role-assignments-template.md).
+> The Azure CLI ensures your add-on's permissions are correctly set after migrating. If you're not using the Azure CLI to perform the migration, you'll need to handle the add-on identity's permissions yourself. Here is one example using an [Azure Resource Manager](../role-based-access-control/role-assignments-template.md) template.
> [!WARNING]
-> Nodepool upgrade will cause downtime for your AKS cluster as the nodes in the nodepools will be cordoned/drained and then reimaged.
+> A nodepool upgrade will cause downtime for your AKS cluster as the nodes in the nodepools will be cordoned/drained and then reimaged.
-## Obtain and use the system-assigned managed identity for your AKS cluster
+## Get and use the system-assigned managed identity for your AKS cluster
Confirm your AKS cluster is using managed identity with the following CLI command:
Confirm your AKS cluster is using managed identity with the following CLI comman
az aks show -g <RGName> -n <ClusterName> --query "servicePrincipalProfile" ```
-If the cluster is using managed identities, you will see a `clientId` value of "msi". A cluster using a Service Principal instead will instead show the object ID. For example:
+If the cluster is using a managed identity, the output shows `clientId` with a value of **msi**. A cluster using a service principal shows an object ID. For example:
```output {
If the cluster is using managed identities, you will see a `clientId` value of "
} ```
-After verifying the cluster is using managed identities, you can find the control plane system-assigned identity's object ID with the following command:
+After verifying the cluster is using a managed identity, you can find the control plane system-assigned identity's object ID by running the following command:
```azurecli-interactive az aks show -g <RGName> -n <ClusterName> --query "identity"
az aks show -g <RGName> -n <ClusterName> --query "identity"
``` > [!NOTE]
-> For creating and using your own VNet, static IP address, or attached Azure disk where the resources are outside of the worker node resource group, CLI will add the role assignement automatically. If you are using ARM template or other clients, you need to use the PrincipalID of the cluster System Assigned Managed Identity to perform a role assignment. For more information on role assignment, see [Delegate access to other Azure resources](kubernetes-service-principal.md#delegate-access-to-other-azure-resources).
+> For creating and using your own VNet, static IP address, or attached Azure disk where the resources are outside of the worker node resource group, the CLI will add the role assignment automatically. If you are using an ARM template or other method, you need to use the PrincipalID of the cluster system-assigned managed identity to perform a role assignment. For more information on role assignment, see [Delegate access to other Azure resources](kubernetes-service-principal.md#delegate-access-to-other-azure-resources).
>
-> Permission grants to cluster Managed Identity used by Azure Cloud provider may take up 60 minutes to populate.
--
-## Bring your own control plane MI
-A custom control plane identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as using a custom VNET or outboundType of UDR with a pre-created managed identity.
+> Permissions granted to your cluster's managed identity used by the Azure cloud provider may take up to 60 minutes to populate.
+## Bring your own control plane managed identity
-You must have the Azure CLI, version 2.15.1 or later installed.
+A custom control plane managed identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as using a custom VNET or outboundType of UDR with a pre-created managed identity.
-### Limitations
-* USDOD Central, USDOD East, USGov Iowa in Azure Government aren't currently supported.
+> [!NOTE]
+> The USDOD Central, USDOD East, and USGov Iowa regions in the Azure US Government cloud aren't currently supported.
-If you don't have a managed identity yet, you should go ahead and create one for example by using the [az identity][az-identity-create] command.
+If you don't have a managed identity, you should create one by running the [az identity create][az-identity-create] command.
```azurecli-interactive az identity create --name myIdentity --resource-group myResourceGroup ```
-Azure CLI will automatically add required role assignment for control plane MI. If you are using ARM template or other clients, you need to create the role assignment manually.
+Azure CLI automatically adds the required role assignment for the control plane managed identity. If you're using an ARM template or other method, you need to create the role assignment manually.
+ ```azurecli-interactive az role assignment create --assignee <control-plane-identity-object-id> --role "Managed Identity Operator" --scope <kubelet-identity-resource-id> ```
-If your managed identity is part of your subscription, you can use [az identity CLI command][az-identity-list] to query it.
+If your managed identity is part of your subscription, run the following [az identity list][az-identity-list] command to query it.
```azurecli-interactive az identity list --query "[].{Name:name, Id:id, Location:location}" -o table ```
-Now you can use the following command to create your cluster with your existing identity:
+Run the following command to create a cluster with your existing identity:
```azurecli-interactive az aks create \
az aks create \
--assign-identity <identity-id> ```
-A successful cluster creation using your own managed identities contains this userAssignedIdentities profile information:
+A successful cluster creation using your own managed identity should resemble the following **userAssignedIdentities** profile information:
```output "identity": {
A successful cluster creation using your own managed identities contains this us
}, ```
-## Bring your own kubelet MI
+## Use a pre-created kubelet managed identity
-A Kubelet identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as connection to ACR with a pre-created managed identity.
+A kubelet identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as connection to ACR with a pre-created managed identity.
> [!WARNING]
-> Updating kubelet MI will upgrade Nodepool, which causes downtime for your AKS cluster as the nodes in the nodepools will be cordoned/drained and then reimaged.
-
+> Updating the kubelet managed identity upgrades the node pool, which causes downtime for your AKS cluster as the nodes in the node pools are cordoned, drained, and reimaged.
### Prerequisites -- You must have the Azure CLI, version 2.26.0 or later installed.
+- Azure CLI version 2.26.0 or later installed. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
### Limitations -- Only works with a User-Assigned Managed cluster.-- China East, China North in Azure China 21Vianet aren't currently supported.
+- Only works with a user-assigned managed cluster.
+- China East and China North regions in Azure China 21Vianet aren't currently supported.
### Create or obtain managed identities
-If you don't have a control plane managed identity yet, you should go ahead and create one. The following example uses the [az identity create][az-identity-create] command:
+If you don't have a control plane managed identity, you can create one by running the following [az identity create][az-identity-create] command:
```azurecli-interactive az identity create --name myIdentity --resource-group myResourceGroup ```
-The result should look like:
+The output should resemble the following:
```output {
The result should look like:
} ```
-If you don't have a kubelet managed identity yet, you should go ahead and create one. The following example uses the [az identity create][az-identity-create] command:
+If you don't have a kubelet managed identity, you can create one by running the following [az identity create][az-identity-create] command:
```azurecli-interactive az identity create --name myKubeletIdentity --resource-group myResourceGroup ```
-The result should look like:
+The output should resemble the following:
```output {
az identity list --query "[].{Name:name, Id:id, Location:location}" -o table
### Create a cluster using kubelet identity
-Now you can use the following command to create your cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
+Now you can use the following command to create your AKS cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
```azurecli-interactive az aks create \
az aks create \
--assign-kubelet-identity <kubelet-identity-resource-id> ```
-A successful cluster creation using your own kubelet managed identity contains the following output:
+A successful AKS cluster creation using your own kubelet managed identity should resemble the following output:
```output "identity": {
A successful cluster creation using your own kubelet managed identity contains t
}, ```
-### Update an existing cluster using kubelet identity
+### Update an existing cluster using kubelet identity
-Update kubelet identity on an existing cluster with your existing identities.
+Update kubelet identity on an existing AKS cluster with your existing identities.
#### Make sure the CLI version is 2.37.0 or later
az version
# Upgrade the version to make sure it is 2.37.0 or later az upgrade ```+ #### Updating your cluster with kubelet identity Now you can use the following command to update your cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
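The update command itself isn't shown in this excerpt; it typically looks like the following sketch, where the resource IDs are placeholders:

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myManagedCluster \
    --enable-managed-identity \
    --assign-identity <control-plane-identity-resource-id> \
    --assign-kubelet-identity <kubelet-identity-resource-id>
```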
A successful cluster update using your own kubelet managed identity contains the
``` ## Next steps
-* Use [Azure Resource Manager templates ][aks-arm-template] to create Managed Identity enabled clusters.
+
+Use [Azure Resource Manager templates][aks-arm-template] to create a managed identity-enabled cluster.
<!-- LINKS - external --> [aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters
A successful cluster update using your own kubelet managed identity contains the
[az-identity-list]: /cli/azure/identity#az_identity_list [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
+[managed-identity-resources-overview]: ../active-directory/managed-identities-azure-resources/overview.md
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
If you configure policy definitions at more than one scope, multiple policies co
In API Management, determine the policy evaluation order by placement of the `base` element in each section in the policy definition at each scope. The `base` element inherits the policies configured in that section at the next broader (parent) scope. The `base` element is included by default in each policy section. > [!NOTE]
-> To view the effective policies at the current scope, select **Recalculate effective policy** in the policy editor.
+> To view the effective policies at the current scope, select **Calculate effective policy** in the policy editor.
To modify the policy evaluation order using the policy editor:
api-management Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validation-policies.md
documentationcenter: ''
Previously updated : 03/07/2022 Last updated : 06/07/2022 # API Management policies to validate requests and responses
-This article provides a reference for API Management policies to validate REST or SOAP API requests and responses against schemas defined in the API definition or supplementary JSON or XML schemas. Validation policies protect from vulnerabilities such as injection of headers or payload or leaking sensitive data.
+This article provides a reference for API Management policies to validate REST or SOAP API requests and responses against schemas defined in the API definition or supplementary JSON or XML schemas. Validation policies protect from vulnerabilities such as injection of headers or payload or leaking sensitive data. Learn more about common [API vulnerabilities](mitigate-owasp-api-threats.md).
-While not a replacement for a Web Application Firewall, validation policies provide flexibility to respond to an additional class of threats that aren't covered by security products that rely on static, predefined rules.
+While not a replacement for a Web Application Firewall, validation policies provide flexibility to respond to an additional class of threats that aren't covered by security products that rely on static, predefined rules.
[!INCLUDE [api-management-policy-intro-links](../../includes/api-management-policy-intro-links.md)]
The `validate-content` policy validates the size or content of a request or resp
[!INCLUDE [api-management-policy-form-alert](../../includes/api-management-policy-form-alert.md)]
-The following table shows the schema formats and request or response content types that the policy supports. Content type values are case insensitive.
+The following table shows the schema formats and request or response content types that the policy supports. Content type values are case insensitive.
| Format | Content types | |||
The following table shows the schema formats and request or response content typ
|XML | Example: `application/xml` | |SOAP | Allowed values: `application/soap+xml` for SOAP 1.2 APIs<br/>`text/xml` for SOAP 1.1 APIs|
+### What content is validated
+
+The policy validates the following content in the request or response against the schema:
+
+* Presence of all required properties.
+* Absence of additional properties, if the schema has the `additionalProperties` field set to `false`.
+* Types of all properties. For example, if a schema specifies a property as an integer, the request (or response) must include an integer and not another type, such as a string.
+* The format of the properties, if specified in the schema. For example: regex (if the `pattern` keyword is specified), `minimum` for integers, and so on.
+
+> [!TIP]
+> For examples of regex pattern constraints that can be used in schemas, see [OWASP Validation Regex Repository](https://owasp.org/www-community/OWASP_Validation_Regex_Repository).
+ ### Policy statement ```xml
After the schema is created, it appears in the list on the **Schemas** page. Sel
> * A schema may cross-reference another schema that is added to the API Management instance. > * Open-source tools to resolve WSDL and XSD schema references and to batch-import generated schemas to API Management are available on [GitHub](https://github.com/Azure-Samples/api-management-schema-import). - ### Usage This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
| Layout | ✓ | ✓ | ✓ | ✓ | ✓ | **Supported paragraph roles**:
-The paragraph roles are best used with unstructured documents, structured documents and forms. Roles help analyze the structure of the extracted content for better semantic search and analysis.
+The paragraph roles are best used with unstructured documents. Paragraph roles help analyze the structure of the extracted content for better semantic search and analysis.
* title * sectionHeading
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
A composed model is created by taking a collection of custom models and assignin
## Model data extraction
- | **Model ID** | **Text extraction** | **Selection Marks** | **Tables** | **Paragraphs** | **Key-Value pairs** | **Fields** |**Entities** |
- |:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-|🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | | | ✓ | | | |
-|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | ✓ | | ✓ | | ✓ | |
-|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | ✓ | ✓ | ✓ | ✓ | | ✓ |
-| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | ✓ | ✓ | ✓ | | | |
-| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
-| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | ✓ | | ✓ | |
-| [prebuilt-idDocument](concept-id-document.md#field-extraction) | ✓ | | | ✓ | | ✓ | |
-| [prebuilt-businessCard](concept-business-card.md#field-extraction) | ✓ | | | ✓ | | ✓ | |
-| [Custom](concept-custom.md#compare-model-features) | ✓ | ✓ | ✓ | ✓ | | ✓ | |
+ | **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** | **Key-Value pairs** | **Fields** |
+ |:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+|🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | |
+|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
+|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | | ✓ | ✓ | ✓ | | ✓ | |
| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
| [prebuilt-idDocument](concept-id-document.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
| [prebuilt-businessCard](concept-business-card.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
| [Custom](concept-custom.md#compare-model-features) | ✓ | | ✓ | ✓ | ✓ | | | ✓ |
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Additionally, the Read API supports Microsoft Word (DOCX), Excel (XLS), PowerPoint (PPT), and HTML files.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed). * The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier. * Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
* The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
applied-ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/encrypt-data-at-rest.md
Last updated 08/28/2020
-#Customer intent: As a user of the Form Recognizer service, I want to learn how encryption at rest works.
+ # Form Recognizer encryption of data at rest
Azure Form Recognizer automatically encrypts your data when persisting it to the
## Next steps * [Form Recognizer Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
applied-ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/resource-customer-stories.md
Last updated 05/25/2022 + # Customer spotlight
applied-ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-azure-function.md
Last updated 03/19/2021 + # Tutorial: Use an Azure Function to process stored documents
In this tutorial, you learned how to use an Azure Function written in Python to
> [Microsoft Power BI](https://powerbi.microsoft.com/integrations/azure-table-storage/) * [What is Form Recognizer?](overview.md)
-* Learn more about the [Layout API](concept-layout.md)
+* Learn more about the [Layout API](concept-layout.md)
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
You can also use an [InlineScript](automation-powershell-workflow.md#use-inlines
Hybrid Runbook Workers on Azure virtual machines can use managed identities to authenticate to Azure resources. Using managed identities for Azure resources instead of Run As accounts provides benefits because you don't need to:
-* Export the Run As certificate and then import it into the Hybrid Runbook Worker.
-* Renew the certificate used by the Run As account.
-* Handle the Run As connection object in your runbook code.
+- Export the Run As certificate and then import it into the Hybrid Runbook Worker.
+- Renew the certificate used by the Run As account.
+- Handle the Run As connection object in your runbook code.
-Follow the next steps to use a managed identity for Azure resources on a Hybrid Runbook Worker:
+There are two ways to use managed identities in Hybrid Runbook Worker scripts.
-1. Create an Azure VM.
-1. Configure managed identities for Azure resources on the VM. See [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm).
-1. Give the VM access to a resource group in Resource Manager. Refer to [Use a Windows VM system-assigned managed identity to access Resource Manager](../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md#grant-your-vm-access-to-a-resource-group-in-resource-manager).
-1. Install the Hybrid Runbook Worker on the VM. See [Deploy a Windows Hybrid Runbook Worker](automation-windows-hrw-install.md) or [Deploy a Linux Hybrid Runbook Worker](automation-linux-hrw-install.md).
-1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
+1. Use the system-assigned Managed Identity for the Automation account:
+
+ 1. [Configure](enable-managed-identity-for-automation.md#enable-a-system-assigned-managed-identity-for-an-azure-automation-account) a system-assigned managed identity for the Automation account.
+ 1. Grant this identity the [required permissions](enable-managed-identity-for-automation.md#assign-role-to-a-system-assigned-managed-identity) within the subscription to perform its task.
+ 1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
+
+ ```powershell
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ $AzureContext = (Connect-AzAccount -Identity).context
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+
+ # Get all VM names from the subscription
+ Get-AzVM -DefaultProfile $AzureContext | Select Name
+ ```
+ > [!NOTE]
+ > It is **not** possible to use the Automation account's user-assigned managed identity on a Hybrid Runbook Worker; it must be the Automation account's system-assigned managed identity.
+
+2. Use the VM managed identity for an Azure VM or Arc-enabled server running as a Hybrid Runbook Worker.
+ Here, you can use either the **VM's user-assigned managed identity** or the **VM's system-assigned managed identity**.
+
+ > [!NOTE]
+ > This will **not** work in an Automation account that has been configured with an Automation account managed identity. As soon as the Automation account managed identity is enabled, you can't use the VM managed identity. The only available option is to use the Automation account **system-assigned managed identity**, as mentioned in option 1.
+
+ **To use a VM's system-assigned managed identity**:
+
+ 1. [Configure](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) a system-assigned managed identity for the VM.
+ 1. Grant this identity the [required permissions](../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md#grant-your-vm-access-to-a-resource-group-in-resource-manager) within the subscription to perform its tasks.
+ 1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount?view=azps-8.0.0) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
```powershell
- # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave -Scope Process
-
- # Connect to Azure with system-assigned managed identity
- $AzureContext = (Connect-AzAccount -Identity).context
-
- # set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ $AzureContext = (Connect-AzAccount -Identity).context
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+
+ # Get all VM names from the subscription
+ Get-AzVM -DefaultProfile $AzureContext | Select Name
+ ```
+
+ **To use a VM's user-assigned managed identity**:
+ 1. [Configure](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#user-assigned-managed-identity) a user-assigned managed identity for the VM.
+ 1. Grant this identity the [required permissions](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) within the subscription to perform its tasks.
+ 1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount?view=azps-8.0.0) cmdlet with the `Identity` and `AccountId` parameters to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
- # Get all VM names from the subscription
- Get-AzVM -DefaultProfile $AzureContext | Select Name
+ ```powershell
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with user-assigned managed identity. Replace <ClientId> below with the client ID of the user-assigned managed identity
+ $AzureContext = (Connect-AzAccount -Identity -AccountId <ClientId>).context
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+
+ # Get all VM names from the subscription
+ Get-AzVM -DefaultProfile $AzureContext | Select Name
```
+ > [!NOTE]
+ > You can find the client ID of the user-assigned managed identity in the Azure portal, or retrieve it from the command line as sketched below.
+
+ > :::image type="content" source="./media/automation-hrw-run-runbooks/managed-identities-client-id-inline.png" alt-text="Screenshot of client ID in Managed Identities." lightbox="./media/automation-hrw-run-runbooks/managed-identities-client-id-expanded.png":::
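If you prefer the command line, here's a minimal sketch of retrieving the client ID of a user-assigned managed identity with the Azure CLI; the identity name and resource group are placeholders:

```azurecli
# Look up the client ID of a user-assigned managed identity (placeholder names)
az identity show --name <identity-name> --resource-group <resource-group> --query clientId --output tsv
```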
- If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you run the runbook in an Azure sandbox instead of Hybrid Runbook Worker and you want to use a user-assigned managed identity, then:
- 1. From line 5, remove `$AzureContext = (Connect-AzAccount -Identity).context`,
- 1. Replace it with `$AzureContext = (Connect-AzAccount -Identity -AccountId <ClientId>).context`, and
- 1. Enter the Client ID.
>[!NOTE]
->By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has been authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence?view=azps-7.3.2).
+> By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has been authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence?view=azps-7.3.2).
For instance, a runbook with `Get-AzVM` can return all the VMs in the subscription with no call to `Connect-AzAccount`, and the user would be able to access Azure resources without having to authenticate within that runbook. You can disable context autosave in Azure PowerShell, as detailed [here](/powershell/azure/context-persistence?view=azps-7.3.2#save-azure-contexts-across-powershell-sessions). -
+
### Use runbook authentication with Hybrid Worker Credentials Instead of having your runbook provide its own authentication to local resources, you can specify Hybrid Worker Credentials for a Hybrid Runbook Worker group. To specify Hybrid Worker Credentials, you must define a [credential asset](./shared-resources/credentials.md) that has access to local resources. These resources include certificate stores, and all runbooks run under these credentials on a Hybrid Runbook Worker in the group.
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Users can now restore an Automation account deleted within 30 days. Read [here](
**Type:** New feature
-New scripts are added to the Azure Automation [GitHub repository](https://github.com/azureautomation) to address one of Azure Automation's key scenarios of VM management based on Azure Monitor alert. For more information, see [Trigger runbook from Azure alert](./automation-create-alert-triggered-runbook.md).
+New scripts are added to the Azure Automation [GitHub repository](https://github.com/azureautomation) to address one of Azure Automation's key scenarios of VM management based on Azure Monitor alert. For more information, see [Trigger runbook from Azure alert](./automation-create-alert-triggered-runbook.md#common-azure-vm-management-operations).
- Stop-Azure-VM-On-Alert - Restart-Azure-VM-On-Alert
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
The following steps describe how to assign the App Configuration Data Reader rol
> options.Connect(new Uri(settings["AppConfig:Endpoint"]), new ManagedIdentityCredential("<your_clientId>")) > }); >```
- >As explained in the [Managed Identities for Azure resources FAQs](../active-directory/managed-identities-azure-resources/known-issues.md), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library enforces you to specify the desired identity to avoid posible runtime issues in the future (for instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled). So, you will need to specify the clientId even if only one user-assigned managed identity is defined, and there is no system-assigned managed identity.
+ >As explained in the [Managed Identities for Azure resources FAQs](../active-directory/managed-identities-azure-resources/known-issues.md), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library requires you to specify the desired identity to avoid possible runtime issues in the future (for instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled). So, you will need to specify the clientId even if only one user-assigned managed identity is defined, and there is no system-assigned managed identity.
:::zone-end
In addition to App Service, many other Azure services support managed identities
In this tutorial, you added an Azure managed identity to streamline access to App Configuration and improve credential management for your app. To learn more about how to use App Configuration, continue to the Azure CLI samples. > [!div class="nextstepaction"]
-> [CLI samples](./cli-samples.md)
+> [CLI samples](./cli-samples.md)
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues"
# Previously updated : 03/09/2022 Last updated : 06/07/2022
-description: "Troubleshooting common issues with Azure Arc-enabled Kubernetes clusters and GitOps."
+description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps."
keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux" # Azure Arc-enabled Kubernetes and GitOps troubleshooting
-This document provides troubleshooting guides for issues with Azure Arc-enabled Kubernetes connectivity, permissions, and agents. It also provides troubleshooting guides for Azure GitOps, which can be used in either Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters.
+This document provides troubleshooting guides for issues with Azure Arc-enabled Kubernetes connectivity, permissions, and agents. It also provides troubleshooting guides for Azure GitOps, which can be used in either Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters.
## General troubleshooting
az account show
All agents for Azure Arc-enabled Kubernetes are deployed as pods in the `azure-arc` namespace. All pods should be running and passing their health checks.
-First, verify the Azure Arc helm release:
+First, verify the Azure Arc Helm Chart release:
```console $ helm --namespace default status azure-arc
REVISION: 5
TEST SUITE: None ```
-If the Helm release isn't found or missing, try [connecting the cluster to Azure Arc](./quickstart-connect-cluster.md) again.
+If the Helm Chart release isn't found or is missing, try [connecting the cluster to Azure Arc](./quickstart-connect-cluster.md) again.
-If the Helm release is present with `STATUS: deployed`, check the status of the agents using `kubectl`:
+If the Helm Chart release is present with `STATUS: deployed`, check the status of the agents using `kubectl`:
```console $ kubectl -n azure-arc get deployments,pods
pod/metrics-agent-58b765c8db-n5l7k 2/2 Running 0 16h
pod/resource-sync-agent-5cf85976c7-522p5 3/3 Running 0 16h ```
-All pods should show `STATUS` as `Running` with either `3/3` or `2/2` under the `READY` column. Fetch logs and describe the pods returning an `Error` or `CrashLoopBackOff`. If any pods are stuck in `Pending` state, there might be insufficient resources on cluster nodes. [Scale up your cluster](https://kubernetes.io/docs/tasks/administer-cluster/) can get these pods to transition to `Running` state.
+All pods should show `STATUS` as `Running` with either `3/3` or `2/2` under the `READY` column. Fetch logs and describe the pods returning an `Error` or `CrashLoopBackOff`. If any pods are stuck in `Pending` state, there might be insufficient resources on cluster nodes. [Scaling up your cluster](https://kubernetes.io/docs/tasks/administer-cluster/) can get these pods to transition to `Running` state.
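For example, a minimal sketch for inspecting a failing agent pod; the pod name is a placeholder that you copy from the `kubectl get` output above:

```console
# Describe the pod to see scheduling, image pull, and probe events
kubectl -n azure-arc describe pod <pod-name>

# Fetch logs from all containers in the pod
kubectl -n azure-arc logs <pod-name> --all-containers
```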
## Connecting Kubernetes clusters to Azure Arc
-Connecting clusters to Azure requires both access to an Azure subscription and `cluster-admin` access to a target cluster. If you cannot reach the cluster or you have insufficient permissions, connecting the cluster to Azure Arc will fail.
+Connecting clusters to Azure Arc requires access to an Azure subscription and `cluster-admin` access to a target cluster. If you can't reach the cluster, or if you have insufficient permissions, connecting the cluster to Azure Arc will fail. Make sure you've met all of the [prerequisites to connect a cluster](quickstart-connect-cluster.md#prerequisites).
### Azure CLI is unable to download Helm chart for Azure Arc agents
-If you are using Helm version >= 3.7.0, you will run into the following error when `az connectedk8s connect` is run to connect the cluster to Azure Arc:
+With Helm version >= 3.7.0, you may run into the following error when using `az connectedk8s connect` to connect the cluster to Azure Arc:
```azurecli az connectedk8s connect -n AzureArcTest -g AzureArcTest
Unable to pull helm chart from the registry 'mcr.microsoft.com/azurearck8s/batch
Run 'helm --help' for usage. ```
-In this case, you'll need to install a prior version of [Helm 3](https://helm.sh/docs/intro/install/), where version &lt; 3.7.0. After this, run the `az connectedk8s connect` command again to connect the cluster to Azure Arc.
+To resolve this issue, you'll need to install a prior version of [Helm 3](https://helm.sh/docs/intro/install/), where the version is less than 3.7.0. After you've installed that version, run the `az connectedk8s connect` command again to connect the cluster to Azure Arc.
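One possible way to pin an older Helm 3 release is the official install script with the `DESIRED_VERSION` variable; the version below (v3.6.3) is only an example of a release earlier than 3.7.0:

```console
# Download the official Helm install script and pin a version below 3.7.0 (example version)
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
DESIRED_VERSION=v3.6.3 ./get_helm.sh
```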
### Insufficient cluster permissions
-If the provided kubeconfig file does not have sufficient permissions to install the Azure Arc agents, the Azure CLI command will return an error.
+If the provided kubeconfig file doesn't have sufficient permissions to install the Azure Arc agents, the Azure CLI command will return an error.
```azurecli az connectedk8s connect --resource-group AzureArc --name AzureArcCluster
This operation might take a while...
Error: list: failed to list: secrets is forbidden: User "myuser" cannot list resource "secrets" in API group "" at the cluster scope ```
-The user connecting the cluster to Azure Arc should have `cluster-admin` role assigned to them on the cluster.
+To resolve this issue, the user connecting the cluster to Azure Arc should have the `cluster-admin` role assigned to them on the cluster.
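As a sketch, the role can be granted with a cluster role binding; the binding name and user are placeholders, and your cluster may use group-based bindings instead:

```console
# Bind the cluster-admin role to the user running az connectedk8s connect (placeholder names)
kubectl create clusterrolebinding arc-onboarding-admin --clusterrole cluster-admin --user <user-name>
```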
### Unable to connect OpenShift cluster to Azure Arc
-If `az connectedk8s connect` is timing out and failing when connecting an OpenShift cluster to Azure Arc, check the following:
+If `az connectedk8s connect` is timing out and failing when connecting an OpenShift cluster to Azure Arc:
-1. The OpenShift cluster needs to meet the version prerequisites: 4.5.41+ or 4.6.35+ or 4.7.18+.
+1. Ensure that the OpenShift cluster meets the version prerequisites: 4.5.41+ or 4.6.35+ or 4.7.18+.
-1. Before running `az connectedk8s connnect`, the following command needs to be run on the cluster:
+1. Before you run `az connectedk8s connect`, run this command on the cluster:
```console oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa
az connectedk8s connect --resource-group AzureArc --name AzureArcCluster
Ensure that you have the latest helm version installed before proceeding to avoid unexpected errors. This operation might take a while... ```+ ### Helm timeout error
+You may see the following Helm timeout error:
+ ```azurecli az connectedk8s connect -n AzureArcTest -g AzureArcTest ```
az connectedk8s connect -n AzureArcTest -g AzureArcTest
Unable to install helm release: Error: UPGRADE Failed: time out waiting for the condition ```
-If you get the above helm timeout issue, you can troubleshoot as follows:
-
- 1. Run the following command:
-
- ```console
- kubectl get pods -n azure-arc
- ```
- 2. Check if the `clusterconnect-agent` or the `config-agent` pods are showing crashloopbackoff, or not all containers are running:
-
- ```output
- NAME READY STATUS RESTARTS AGE
- cluster-metadata-operator-664bc5f4d-chgkl 2/2 Running 0 4m14s
- clusterconnect-agent-7cb8b565c7-wklsh 2/3 CrashLoopBackOff 0 1m15s
- clusteridentityoperator-76d645d8bf-5qx5c 2/2 Running 0 4m15s
- config-agent-65d5df564f-lffqm 1/2 CrashLoopBackOff 0 1m14s
- ```
- 3. If the below certificate isn't present, the system assigned managed identity didn't get installed.
-
- ```console
- kubectl get secret -n azure-arc -o yaml | grep name:
- ```
-
- ```output
- name: azure-identity-certificate
- ```
- This could be a transient issue. You can try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If you're consistently facing this, it could be an issue with your proxy settings. Please follow [these steps](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) to connect your cluster to Arc via a proxy.
- 4. If the `clusterconnect-agent` and the `config-agent` pods are running, but the `kube-aad-proxy` pod is missing, check your pod security policies. This pod uses the `azure-arc-kube-aad-proxy-sa` service account, which doesn't have admin permissions but requires the permission to mount host path.
-
+To resolve this issue, try the following steps.
+
+1. Run the following command:
+
+ ```console
+ kubectl get pods -n azure-arc
+ ```
+
+2. Check if the `clusterconnect-agent` or the `config-agent` pods are showing `crashloopbackoff`, or if not all containers are running:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ cluster-metadata-operator-664bc5f4d-chgkl 2/2 Running 0 4m14s
+ clusterconnect-agent-7cb8b565c7-wklsh 2/3 CrashLoopBackOff 0 1m15s
+ clusteridentityoperator-76d645d8bf-5qx5c 2/2 Running 0 4m15s
+ config-agent-65d5df564f-lffqm 1/2 CrashLoopBackOff 0 1m14s
+ ```
+
+3. If the certificate below isn't present, the system assigned managed identity hasn't been installed.
+
+ ```console
+ kubectl get secret -n azure-arc -o yaml | grep name:
+ ```
+
+ ```output
+ name: azure-identity-certificate
+ ```
+
+ To resolve this issue, try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If the issue continues to happen, it could be an issue with your proxy settings. In that case, [try connecting your cluster to Azure Arc via a proxy](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
+
+4. If the `clusterconnect-agent` and the `config-agent` pods are running, but the `kube-aad-proxy` pod is missing, check your pod security policies. This pod uses the `azure-arc-kube-aad-proxy-sa` service account, which doesn't have admin permissions but requires the permission to mount host path.
### Helm validation error
-Helm `v3.3.0-rc.1` version has an [issue](https://github.com/helm/helm/pull/8527) where helm install/upgrade (used by `connectedk8s` CLI extension) results in running of all hooks leading to the following error:
+Helm version `v3.3.0-rc.1` has an [issue](https://github.com/helm/helm/pull/8527) where helm install/upgrade (used by the `connectedk8s` CLI extension) runs all hooks, leading to the following error:
```azurecli az connectedk8s connect -n AzureArcTest -g AzureArcTest
To recover from this issue, follow these steps:
1. Delete the Azure Arc-enabled Kubernetes resource in the Azure portal. 2. Run the following commands on your machine:
-
- ```console
- kubectl delete ns azure-arc
- kubectl delete clusterrolebinding azure-arc-operator
- kubectl delete secret sh.helm.release.v1.azure-arc.v1
- ```
+
+ ```console
+ kubectl delete ns azure-arc
+ kubectl delete clusterrolebinding azure-arc-operator
+ kubectl delete secret sh.helm.release.v1.azure-arc.v1
+ ```
3. [Install a stable version](https://helm.sh/docs/intro/install/) of Helm 3 on your machine instead of the release candidate version. 4. Run the `az connectedk8s connect` command with the appropriate values to connect the cluster to Azure Arc.
az extension add --name k8s-configuration
### Flux v1 - General
+> [!NOTE]
+> Eventually Azure will stop supporting GitOps with Flux v1, so begin using [Flux v2](./tutorial-use-gitops-flux2.md) as soon as possible.
+ To help troubleshoot issues with the `sourceControlConfigurations` resource (Flux v1), run these az commands with the `--debug` parameter specified: ```azurecli
For more information, see [How do I resolve `webhook does not support dry run` e
### Flux v2 - Error installing the `microsoft.flux` extension
-The `microsoft.flux` extension installs the Flux controllers and Azure GitOps agents into your Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. If the extension is not already installed in a cluster and you create a GitOps configuration resource for that cluster, the extension will be installed automatically.
+The `microsoft.flux` extension installs the Flux controllers and Azure GitOps agents into your Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. If the extension isn't already installed in a cluster and you create a GitOps configuration resource for that cluster, the extension will be installed automatically.
-If you experience an error during installation or if the extension is in a failed state, you can first run a script to investigate. The cluster-type parameter can be set to `connectedClusters` for an Arc-enabled cluster or `managedClusters` for an AKS cluster. The name of the `microsoft.flux` extension will be "flux" if the extension was installed automatically during creation of a GitOps configuration. Look in the "statuses" object for information.
+If you experience an error during installation, or if the extension is in a failed state, run a script to investigate. The cluster-type parameter can be set to `connectedClusters` for an Arc-enabled cluster or `managedClusters` for an AKS cluster. The name of the `microsoft.flux` extension will be "flux" if the extension was installed automatically during creation of a GitOps configuration. Look in the "statuses" object for information.
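A minimal way to inspect the extension with the Azure CLI, assuming the `k8s-extension` CLI extension is installed (cluster name and resource group are placeholders; use `--cluster-type managedClusters` for AKS):

```azurecli
# Show the microsoft.flux extension instance, including its statuses (placeholder names)
az k8s-extension show --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --name flux
```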
One example:
kubectl delete namespaces flux-system
``` Some other aspects to consider:
-
-* For AKS cluster, assure that the subscription has the following feature flag enabled: `Microsoft.ContainerService/AKS-ExtensionManager`.
+
+* For an AKS cluster, ensure that the subscription has the `Microsoft.ContainerService/AKS-ExtensionManager` feature flag enabled.
```azurecli az feature register --namespace Microsoft.ContainerService --name AKS-ExtensionManager ```
-* Assure that the cluster does not have any policies that restrict creation of the `flux-system` namespace or resources in that namespace.
+* Ensure that the cluster doesn't have any policies that restrict creation of the `flux-system` namespace or resources in that namespace.
-With these actions accomplished you can either [re-create a flux configuration](./tutorial-use-gitops-flux2.md) which will install the flux extension automatically or you can re-install the flux extension manually.
+With these actions accomplished, you can either [recreate a flux configuration](./tutorial-use-gitops-flux2.md), which will install the flux extension automatically, or you can reinstall the flux extension manually.
### Flux v2 - Installing the `microsoft.flux` extension in a cluster with Azure AD Pod Identity enabled
The extension status also returns as "Failed".
"{\"status\":\"Failed\",\"error\":{\"code\":\"ResourceOperationFailure\",\"message\":\"The resource operation completed with terminal provisioning state 'Failed'.\",\"details\":[{\"code\":\"ExtensionCreationFailed\",\"message\":\" error: Unable to get the status from the local CRD with the error : {Error : Retry for given duration didn't get any results with err {status not populated}}\"}]}}", ```
-The issue is that the extension-agent pod is trying to get its token from IMDS on the cluster in order to talk to the extension service in Azure; however, this token request is being intercepted by pod identity ([details here](../../aks/use-azure-ad-pod-identity.md)).
+The extension-agent pod is trying to get its token from IMDS on the cluster in order to talk to the extension service in Azure, but the token request is intercepted by the [pod identity](../../aks/use-azure-ad-pod-identity.md).
The workaround is to create an `AzurePodIdentityException` that will tell Azure AD Pod Identity to ignore the token requests from flux-extension pods.
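A minimal sketch of such an exception follows; the namespace and pod label selector are placeholders and must match the actual labels on the flux extension pods in your cluster (check them with `kubectl get pods --show-labels` in the relevant namespace):

```console
cat <<EOF | kubectl apply -f -
apiVersion: aadpodidentity.k8s.io/v1
kind: AzurePodIdentityException
metadata:
  name: flux-extension-exception
  namespace: <extension-namespace>
spec:
  podLabels:
    <label-key>: <label-value>
EOF
```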
spec:
## Monitoring
-Azure Monitor for containers requires its DaemonSet to be run in privileged mode. To successfully set up a Canonical Charmed Kubernetes cluster for monitoring, run the following command:
+Azure Monitor for Containers requires its DaemonSet to run in privileged mode. To successfully set up a Canonical Charmed Kubernetes cluster for monitoring, run the following command:
```console juju config kubernetes-worker allow-privileged=true
juju config kubernetes-worker allow-privileged=true
### Old version of agents used
-Usage of older version of agents where Cluster Connect feature was not yet supported will result in the following error:
+Some older agent versions didn't support the Cluster Connect feature. If you use one of these versions, you may see this error:
```azurecli az connectedk8s proxy -n AzureArcTest -g AzureArcTest
az connectedk8s proxy -n AzureArcTest -g AzureArcTest
Hybrid connection for the target resource does not exist. Agent might not have started successfully. ```
-When this occurs, ensure that you are using `connectedk8s` Azure CLI extension of version >= 1.2.0 and [connect your cluster again](quickstart-connect-cluster.md) to Azure Arc. Also, verify that you've met all the [network prerequisites](quickstart-connect-cluster.md#meet-network-requirements) needed for Arc-enabled Kubernetes. If your cluster is behind an outbound proxy or firewall, verify that websocket connections are enabled for `*.servicebus.windows.net` which is required specifically for the [Cluster Connect](cluster-connect.md) feature.
+Be sure to use the `connectedk8s` Azure CLI extension with version >= 1.2.0, then [connect your cluster again](quickstart-connect-cluster.md) to Azure Arc. Also, verify that you've met all the [network prerequisites](quickstart-connect-cluster.md#meet-network-requirements) needed for Arc-enabled Kubernetes.
+
+If your cluster is behind an outbound proxy or firewall, verify that websocket connections are enabled for `*.servicebus.windows.net`, which is required specifically for the [Cluster Connect](cluster-connect.md) feature.
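To check and upgrade the `connectedk8s` CLI extension mentioned above, a minimal sketch:

```azurecli
# Check the installed connectedk8s extension version
az extension list --query "[?name=='connectedk8s'].version" --output tsv

# Upgrade the extension to the latest release
az extension update --name connectedk8s
```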
### Cluster Connect feature disabled
To resolve this error, [enable the Cluster Connect feature](cluster-connect.md#e
## Enable custom locations using service principal
-When you are connecting your cluster to Azure Arc or when you are enabling custom locations feature on an existing cluster, you may observe the following warning:
+When connecting your cluster to Azure Arc or enabling custom locations on an existing cluster, you may see the following warning:
```console Unable to fetch oid of 'custom-locations' app. Proceeding without enabling the feature. Insufficient privileges to complete the operation. ```
-The above warning is observed when you have used a service principal to log into Azure. This is because a service principal doesn't have permissions to get information of the application used by Azure Arc service. To avoid this error, execute the following steps:
+This warning occurs when you use a service principal to log into Azure. The service principal doesn't have permissions to get information of the application used by Azure Arc service. To avoid this error, execute the following steps:
-1. Login into Azure CLI using your user account. Fetch the Object ID of the Azure AD application used by Azure Arc service:
+1. Sign in to the Azure CLI using your user account. Fetch the object ID of the Azure AD application used by the Azure Arc service:
```azurecli az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv ```
-1. Login into Azure CLI using the service principal. Use the `<objectId>` value from above step to enable custom locations feature on the cluster:
- - If you are enabling custom locations feature as part of connecting the cluster to Arc, run the following command:
+1. Sign in to the Azure CLI using the service principal. Use the `<objectId>` value from the previous step to enable custom locations on the cluster:
- ```azurecli
- az connectedk8s connect -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId>
- ```
+ * To enable custom locations when connecting the cluster to Arc, run the following command:
- - If you are enabling custom locations feature on an existing Azure Arc-enabled Kubernetes cluster, run the following command:
+ ```azurecli
+ az connectedk8s connect -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId>
+ ```
+
+ * To enable custom locations on an existing Azure Arc-enabled Kubernetes cluster, run the following command:
- ```azurecli
- az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations
- ```
+ ```azurecli
+ az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations
+ ```
## Azure Arc-enabled Open Service Mesh
-The following troubleshooting steps provide guidance on validating the deployment of all the Open Service Mesh extension components on your cluster.
+The steps below provide guidance on validating the deployment of all the Open Service Mesh (OSM) extension components on your cluster.
### Check OSM Controller **Deployment**+ ```bash kubectl get deployment -n arc-osm-system --selector app=osm-controller ```
-If the OSM Controller is healthy, you will get an output similar to the following output:
-```
+If the OSM Controller is healthy, you'll see output similar to the following:
+
+```output
NAME READY UP-TO-DATE AVAILABLE AGE osm-controller 1/1 1 1 59m ``` ### Check the OSM Controller **Pod**+ ```bash kubectl get pods -n arc-osm-system --selector app=osm-controller ```
-If the OSM Controller is healthy, you will get an output similar to the following output:
-```
+If the OSM Controller is healthy, you'll see output similar to the following:
+
+```output
NAME READY STATUS RESTARTS AGE osm-controller-b5bd66db-wglzl 0/1 Evicted 0 61m osm-controller-b5bd66db-wvl9w 1/1 Running 0 31m ```
-Even though we had one controller _evicted_ at some point, we have another one which is `READY 1/1` and `Running` with `0` restarts.
-If the column `READY` is anything other than `1/1` the service mesh would be in a broken state.
-Column `READY` with `0/1` indicates the control plane container is crashing - we need to get logs. Use the following command to inspect controller logs:
+Even though one controller was _evicted_ at some point, there's another which is `READY 1/1` and `Running` with `0` restarts. If the column `READY` is anything other than `1/1`, the service mesh would be in a broken state. Column `READY` with `0/1` indicates the control plane container is crashing. Use the following command to inspect controller logs:
+ ```bash kubectl logs -n arc-osm-system -l app=osm-controller ```+ Column `READY` with a number higher than 1 after the `/` would indicate that there are sidecars installed. OSM Controller would most likely not work with any sidecars attached to it. ### Check OSM Controller **Service**+ ```bash kubectl get service -n arc-osm-system osm-controller ```
-If the OSM Controller is healthy, you will have the following output:
-```
+If the OSM Controller is healthy, you'll see the following output:
+
+```output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE osm-controller ClusterIP 10.0.31.254 <none> 15128/TCP,9092/TCP 67m ```
osm-controller ClusterIP 10.0.31.254 <none> 15128/TCP,9092/TCP 67
> The `CLUSTER-IP` would be different. The service `NAME` and `PORT(S)` must be the same as seen in the output. ### Check OSM Controller **Endpoints**+ ```bash kubectl get endpoints -n arc-osm-system osm-controller ```
-If the OSM Controller is healthy, you will get an output similar to the following output:
-```
+If the OSM Controller is healthy, you'll see output similar to the following:
+
+```output
NAME ENDPOINTS AGE osm-controller 10.240.1.115:9092,10.240.1.115:15128 69m ```
-If the user's cluster has no `ENDPOINTS` for `osm-controller` this would indicate that the control plane is unhealthy. This may be caused by the OSM Controller pod crashing, or never deployed correctly.
+If the user's cluster has no `ENDPOINTS` for `osm-controller`, the control plane is unhealthy. This unhealthy state may be caused by the OSM Controller pod crashing, or the pod may never have been deployed correctly.
### Check OSM Injector **Deployment**+ ```bash kubectl get deployments -n arc-osm-system osm-injector ```
-If the OSM Injector is healthy, you will get an output similar to the following output:
-```
+If the OSM Injector is healthy, you'll see output similar to the following:
+
+```output
NAME READY UP-TO-DATE AVAILABLE AGE osm-injector 1/1 1 1 73m ``` ### Check OSM Injector **Pod**+ ```bash kubectl get pod -n arc-osm-system --selector app=osm-injector ```
-If the OSM Injector is healthy, you will get an output similar to the following output:
-```
+If the OSM Injector is healthy, you'll see output similar to the following:
+
+```output
NAME READY STATUS RESTARTS AGE osm-injector-5986c57765-vlsdk 1/1 Running 0 73m ```
osm-injector-5986c57765-vlsdk 1/1 Running 0 73m
The `READY` column must be `1/1`. Any other value would indicate an unhealthy osm-injector pod. ### Check OSM Injector **Service**+ ```bash kubectl get service -n arc-osm-system osm-injector ```
-If the OSM Injector is healthy, you will get an output similar to the following output:
-```
+If the OSM Injector is healthy, you'll see output similar to the following:
+
+```output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE osm-injector ClusterIP 10.0.39.54 <none> 9090/TCP 75m ```
osm-injector ClusterIP 10.0.39.54 <none> 9090/TCP 75m
Ensure the IP address listed for `osm-injector` service is `9090`. There should be no `EXTERNAL-IP`. ### Check OSM Injector **Endpoints**+ ```bash kubectl get endpoints -n arc-osm-system osm-injector ```
-If the OSM Injector is healthy, you will get an output similar to the following output:
+If the OSM Injector is healthy, you'll see output similar to the following:
+ ``` NAME ENDPOINTS AGE osm-injector 10.240.1.172:9090 75m
osm-injector 10.240.1.172:9090 75m
For OSM to function, there must be at least one endpoint for `osm-injector`. The IP address of your OSM Injector endpoints will be different. The port `9090` must be the same. - ### Check **Validating** and **Mutating** webhooks+ ```bash kubectl get ValidatingWebhookConfiguration --selector app=osm-controller ```
-If the Validating Webhook is healthy, you will get an output similar to the following output:
-```
+If the **Validating** webhook is healthy, you'll see output similar to the following:
+
+```output
NAME WEBHOOKS AGE osm-validator-mesh-osm 1 81m ```
osm-validator-mesh-osm 1 81m
kubectl get MutatingWebhookConfiguration --selector app=osm-injector ```
+If the **Mutating** webhook is healthy, you'll see output similar to the following:
-If the Mutating Webhook is healthy, you will get an output similar to the following output:
-```
+```output
NAME WEBHOOKS AGE arc-osm-webhook-osm 1 102m ```
-Check for the service and the CA bundle of the **Validating** webhook
+Check for the service and the CA bundle of the **Validating** webhook by using the following command:
+ ```bash kubectl get ValidatingWebhookConfiguration osm-validator-mesh-osm -o json | jq '.webhooks[0].clientConfig.service' ```
-A well configured Validating Webhook Configuration would have the following output:
+A well configured **Validating** webhook configuration will have output similar to the following:
+ ```json { "name": "osm-config-validator",
A well configured Validating Webhook Configuration would have the following outp
} ```
-Check for the service and the CA bundle of the **Mutating** webhook
+Check for the service and the CA bundle of the **Mutating** webhook by using the following command:
+ ```bash kubectl get MutatingWebhookConfiguration arc-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service' ```
-A well configured Mutating Webhook Configuration would have the following output:
+A well configured **Mutating** webhook configuration will have output similar to the following:
``` { "name": "osm-injector",
A well configured Mutating Webhook Configuration would have the following output
} ``` -
-Check whether OSM Controller has given the Validating (or Mutating) Webhook a CA Bundle by using the following command:
+Check whether OSM Controller has given the **Validating** (or **Mutating**) webhook a CA Bundle by using the following command:
```bash kubectl get ValidatingWebhookConfiguration osm-validator-mesh-osm -o json | jq -r '.webhooks[0].clientConfig.caBundle' | wc -c
kubectl get MutatingWebhookConfiguration arc-osm-webhook-osm -o json | jq -r '.w
``` Example output:+ ```bash 1845 ```
-The number in the output indicates the number of bytes, or the size of the CA Bundle. If this is empty, 0, or some number under a 1000, it would indicate that the CA Bundle is not correctly provisioned. Without a correct CA Bundle, the ValidatingWebhook would throw an error.
+
+The number in the output indicates the number of bytes, or the size of the CA Bundle. If this is empty, 0, or a number under 1000, the CA Bundle is not correctly provisioned. Without a correct CA Bundle, the `ValidatingWebhook` will throw an error.
### Check the `osm-mesh-config` resource
-Check for the existence:
+Check for the existence of the resource:
```azurecli-interactive kubectl get meshconfig osm-mesh-config -n arc-osm-system ```
-Check the content of the OSM MeshConfig
+Check the content of the OSM MeshConfig:
```azurecli-interactive kubectl get meshconfig osm-mesh-config -n arc-osm-system -o yaml
metadata:
| spec.featureFlags.enableIngressBackendPolicy | bool | `"true"` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableIngressBackendPolicy":"true"}}}' --type=merge` | | spec.featureFlags.enableEnvoyActiveHealthChecks | bool | `"false"` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableEnvoyActiveHealthChecks":"false"}}}' --type=merge` |
-### Check Namespaces
+### Check namespaces
>[!Note]
->The arc-osm-system namespace will never participate in a service mesh and will never be labeled and/or annotated with the key/values below.
+>The arc-osm-system namespace will never participate in a service mesh and will never be labeled or annotated with the key/values below.
-We use the `osm namespace add` command to join namespaces to a given service mesh.
-When a kubernetes namespace is part of the mesh, the following must be true:
+We use the `osm namespace add` command to join namespaces to a given service mesh. When a Kubernetes namespace is part of the mesh, confirm the following:
View the annotations of the namespace `bookbuyer`:+ ```bash kubectl get namespace bookbuyer -o json | jq '.metadata.annotations' ``` The following annotation must be present:+ ``` { "openservicemesh.io/sidecar-injection": "enabled" } ``` - View the labels of the namespace `bookbuyer`: ```bash kubectl get namespace bookbuyer -o json | jq '.metadata.labels' ``` The following label must be present:+ ``` { "openservicemesh.io/monitored-by": "osm" } ```
-Note that if you are not using `osm` CLI, you could also manually add these annotations to your namespaces. If a namespace is not annotated with `"openservicemesh.io/sidecar-injection": "enabled"` or not labeled with `"openservicemesh.io/monitored-by": "osm"` the OSM Injector will not add Envoy sidecars.
+
+If you aren't using `osm` CLI, you could also manually add these annotations to your namespaces. If a namespace isn't annotated with `"openservicemesh.io/sidecar-injection": "enabled"`, or isn't labeled with `"openservicemesh.io/monitored-by": "osm"`, the OSM Injector will not add Envoy sidecars.
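For example, a minimal sketch of adding them manually with `kubectl`, using the `bookbuyer` namespace from the checks above:

```bash
# Enable sidecar injection and OSM monitoring on the namespace without the osm CLI
kubectl annotate namespace bookbuyer openservicemesh.io/sidecar-injection=enabled
kubectl label namespace bookbuyer openservicemesh.io/monitored-by=osm
```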
>[!Note] >After `osm namespace add` is called, only **new** pods will be injected with an Envoy sidecar. Existing pods must be restarted with `kubectl rollout restart deployment` command. - ### Verify the SMI CRDs
-Check whether the cluster has the required CRDs:
+
+Check whether the cluster has the required Custom Resource Definitions (CRDs) by using the following command:
+ ```bash kubectl get crds ```
-Ensure that the CRDs correspond to the versions available in the release branch. For example, if you are using OSM-Arc v1.0.0-1, navigate to the [SMI supported versions page](https://docs.openservicemesh.io/docs/overview/smi/) and select v1.0 from the Releases dropdown to check which CRDs versions are in use.
+Ensure that the CRDs correspond to the versions available in the release branch. For example, if you're using OSM-Arc v1.0.0-1, navigate to the [SMI supported versions page](https://docs.openservicemesh.io/docs/overview/smi/) and select v1.0 from the Releases dropdown to check which CRDs versions are in use.
Get the versions of the CRDs installed with the following command:+ ```bash for x in $(kubectl get crds --no-headers | awk '{print $1}' | grep 'smi-spec.io'); do kubectl get crd $x -o json | jq -r '(.metadata.name, "-" , .spec.versions[].name, "\n")' done ```
-If CRDs are missing, use the following commands to install them on the cluster. If you are using a version of OSM-Arc that is not v1.0, ensure that you replace the version in the command (ex: v1.1.0 would be release-v1.1).
+If CRDs are missing, use the following commands to install them on the cluster. If you're using a version of OSM-Arc that's not v1.0, ensure that you replace the version in the command (for example, v1.1.0 would be release-v1.1).
```bash kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_http_route_group.yaml
kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v
kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_traffic_split.yaml ```
-Refer to [OSM release notes](https://github.com/openservicemesh/osm/releases) to see CRD changes between releases.
+To see CRD changes between releases, refer to the [OSM release notes](https://github.com/openservicemesh/osm/releases).
### Troubleshoot certificate management
-Information on how OSM issues and manages certificates to Envoy proxies running on application pods can be found on the [OSM docs site](https://docs.openservicemesh.io/docs/guides/certificates/).
+
+For information on how OSM issues and manages certificates to Envoy proxies running on application pods, see the [OSM docs site](https://docs.openservicemesh.io/docs/guides/certificates/).
### Upgrade Envoy
-When a new pod is created in a namespace monitored by the add-on, OSM will inject an [Envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the envoy version needs to be updated, steps to do so can be found in the [Upgrade Guide](https://docs.openservicemesh.io/docs/guides/upgrade/#envoy) on the OSM docs site.
+
+When a new pod is created in a namespace monitored by the add-on, OSM will inject an [Envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the Envoy version needs to be updated, follow the steps in the [Upgrade Guide](https://docs.openservicemesh.io/docs/guides/upgrade/#envoy) on the OSM docs site.
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
Import an [application repository](./conceptual-gitops-ci-cd.md#application-repo
* **arc-cicd-demo-src** application repository * URL: https://github.com/Azure/arc-cicd-demo-src * Contains the example Azure Vote App that you will deploy using GitOps.
+ * Import the repository with name `arc-cicd-demo-src`
* **arc-cicd-demo-gitops** GitOps repository * URL: https://github.com/Azure/arc-cicd-demo-gitops * Works as a base for your cluster resources that house the Azure Vote App.
+ * Import the repository with name `arc-cicd-demo-gitops`
Learn more about [importing Git repositories](/azure/devops/repos/git/import-git-repository).
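If you prefer the command line, a minimal sketch with the Azure DevOps CLI; it assumes the `azure-devops` extension is installed, a default organization and project are configured, and the repository names match the tutorial. Repeat for the GitOps repository:

```azurecli
# Create an empty repository and import the sample application repository into it
az repos create --name arc-cicd-demo-src
az repos import create --git-source-url https://github.com/Azure/arc-cicd-demo-src --repository arc-cicd-demo-src
```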
The CI/CD workflow will populate the manifest directory with extra manifests to
az k8s-configuration flux create \ --name cluster-config \ --cluster-name arc-cicd-cluster \
- --namespace cluster-config \
+ --namespace flux-system \
--resource-group myResourceGroup \
- -u https://dev.azure.com/<Your organization>/<Your project>/arc-cicd-demo-gitops \
+ -u https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \
--https-user <Azure Repos username> \ --https-key <Azure Repos PAT token> \ --scope cluster \
The CI/CD workflow will populate the manifest directory with extra manifests to
1. Check the state of the deployment in Azure portal. * If successful, you'll see both `dev` and `stage` namespaces created in your cluster.
+ * You can also check the `GitOps` tab on the Azure portal page of your Kubernetes cluster to confirm that a configuration named `cluster-config` was created. A quick command-line check is sketched below.
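As a quick command-line check, a sketch that assumes `kubectl` access to the cluster and the `k8s-configuration` CLI extension (use `--cluster-type managedClusters` for AKS):

```console
# Confirm the namespaces created by the GitOps configuration
kubectl get namespace dev stage

# Show the state of the Flux configuration created above
az k8s-configuration flux show --name cluster-config --cluster-name arc-cicd-cluster --cluster-type connectedClusters --resource-group myResourceGroup
```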
+ ### Import the CI/CD pipelines
The application repository contains a `.pipeline` folder with the pipelines you'
| Pipeline file name | Description | | - | - |
-| [`.pipelines/az-vote-pr-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-pr-pipeline.yaml) | The application PR pipeline, named **arc-cicd-demo-src PR** |
-| [`.pipelines/az-vote-ci-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) | The application CI pipeline, named **arc-cicd-demo-src CI** |
+| [`.pipelines/az-vote-pr-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-pr-pipeline.yaml) | The application PR pipeline, named **arc-cicd-demo-src PR** |
+| [`.pipelines/az-vote-ci-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-ci-pipeline.yaml) | The application CI pipeline, named **arc-cicd-demo-src CI** |
| [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) | The application CD pipeline, named **arc-cicd-demo-src CD** | ### Connect Azure Container Registry to Azure DevOps
CD pipeline manipulates PRs in the GitOps repository. It needs a Service Connect
--set gitOpsAppURL=https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \ --set orchestratorPAT=<Azure Repos PAT token> ```
+> [!NOTE]
+> The `Azure Repos PAT token` should have `Build: Read & execute` and `Code: Read` permissions.
+ 3. Configure Flux to send notifications to GitOps connector: ```console cat <<EOF | kubectl apply -f -
spec:
eventSeverity: info eventSources: - kind: GitRepository
- name: <Flux GitRepository to watch>
+ name: cluster-config
- kind: Kustomization
- name: <Flux Kustomization to watch>
+ name: cluster-config-cluster-config
providerRef: name: gitops-connector
For the details on installation, refer to the [GitOps Connector](https://github.
You're now ready to deploy to the `dev` and `stage` environments.
+#### Create environments
+
+In the Azure DevOps project, create `Dev` and `Stage` environments. For more information, see [Create and target an environment](/azure/devops/pipelines/process/environments).
+ ### Give more permissions to the build service The CD pipeline uses the security token of the running build to authenticate to the GitOps repository. More permissions are needed for the pipeline to create a new branch, push changes, and create pull requests. 1. Go to `Project settings` from the Azure DevOps project main page. 1. Select `Repos/Repositories`.
-1. Select `<GitOps Repo Name>`.
1. Select `Security`.
-1. For the `<Project Name> Build Service (<Organization Name>)`, allow `Contribute`, `Contribute to pull requests`, and `Create branch`.
+1. For the `<Project Name> Build Service (<Organization Name>)` and for the `Project Collection Build Service (<Organization Name>)` (if it doesn't show up, type the name in the search field), allow `Contribute`, `Contribute to pull requests`, and `Create branch`.
1. Go to `Pipelines/Settings`
-1. Switch off `Limit job authorization scope to referenced Azure DevOps repositories`
+1. Switch off the `Protect access to repositories in YAML pipelines` option.
For more information, see: - [Grant VC Permissions to the Build Service](/azure/devops/pipelines/scripts/git-commands?preserve-view=true&tabs=yaml&view=azure-devops#version-control )
The CI/CD workflow will populate the manifest directory with extra manifests to
--set gitOpsAppURL=https://github.com/<Your organization>/arc-cicd-demo-gitops/commit \ --set orchestratorPAT=<GitHub PAT token> ```+ 3. Configure Flux to send notifications to GitOps connector: ```console cat <<EOF | kubectl apply -f -
spec:
eventSeverity: info eventSources: - kind: GitRepository
- name: <Flux GitRepository to watch>
+ name: cluster-config
- kind: Kustomization
- name: <Flux Kustomization to watch>
+ name: cluster-config-cluster-config
providerRef: name: gitops-connector
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
description: "This tutorial shows how to use GitOps with Flux v2 to manage confi
keywords: "GitOps, Flux, Flux v2, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 06/06/2022 Last updated : 06/08/2022
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
### For Azure Kubernetes Service clusters
-* An AKS cluster that's up and running.
+* An MSI-based AKS cluster that's up and running.
>[!IMPORTANT]
- >Ensure that the AKS cluster is created with MSI (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters.
+ >**Ensure that the AKS cluster is created with MSI** (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters.
+ >For new AKS clusters created with `az aks create`, the cluster is MSI-based by default. For existing SPN-based clusters that need to be converted to MSI, run `az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identity`. For more information, refer to [managed identity docs](../../aks/use-managed-identity.md).
* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type. * Registration of your subscription with the `AKS-ExtensionManager` feature flag. Use the following command:
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 03/14/2022 Last updated : 06/06/2022
Metadata information about a connected machine is collected after the Connected
* Hardware manufacturer * Hardware model * Cloud provider
-* Amazon Web Services (AWS) account ID, instance ID and region (if running in AWS)
+* Amazon Web Services (AWS) metadata, when running in AWS:
+ * Account ID
+ * Instance ID
+ * Region
+* Google Cloud Platform (GCP) metadata, when running in GCP:
+ * Instance ID
+ * Image
+ * Machine type
+ * OS
+ * Project ID
+ * Project number
+ * Service accounts
+ * Zone
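To review the metadata the agent detected for a machine, one option is the `connectedmachine` Azure CLI extension; a minimal sketch, where the names are placeholders and the exact property path may vary by API version:

```azurecli
# Show the detected properties reported by the Connected Machine agent (placeholder names)
az connectedmachine show --name <machine-name> --resource-group <resource-group> --query detectedProperties
```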
The following metadata information is requested by the agent from Azure:
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 05/24/2022 Last updated : 06/06/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.14 - January 2022
+
+### Fixed
+
+- A state corruption issue in the extension manager that could cause extension operations to get stuck in transient states has been fixed. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+ ## Version 1.13 - November 2021 ### Known issues
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 05/24/2022 Last updated : 06/06/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.19 - June 2022
+
+### New features
+
+- When installed on a Google Compute Engine virtual machine, the agent will now detect and report Google Cloud metadata in the "detected properties" of the Azure Arc-enabled servers resource. [Learn more](agent-overview.md#instance-metadata) about the new metadata.
+
+### Fixed
+
+- An issue that could cause the extension manager to hang during extension installation, update, and removal operations has been resolved.
+- Improved support for TLS 1.3
+ ## Version 1.18 - May 2022 ### New features
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Extended the device login timeout to 5 minutes - Removed resource constraints for Azure Monitor Agent to support high throughput scenarios
-## Version 1.14 - January 2022
-
-### Fixed
--- A state corruption issue in the extension manager that could cause extension operations to get stuck in transient states has been fixed. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
The following image shows the architecture for the Arc-enabled SCVMM:
### Supported VMM versions
-Azure Arc-enabled SCVMM works with VMM 2016, 2019 and 2022 versions.
+Azure Arc-enabled SCVMM works with VMM 2016, 2019, and 2022 versions and supports SCVMM management servers with a maximum of 3,500 VMs.
### Supported scenarios
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
This QuickStart shows you how to connect your SCVMM management server to Azure A
| | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. | | **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud that has at least one cluster with minimum free capacity of 16 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> For dynamic IP allocation to appliance VM, DHCP server is required. For static IP allocation, VMM static IP pool is required. |
-| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> This will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. |
+| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> This account should be a member of the local Administrators group on the SCVMM server. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. |
| **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you may experience performance issues. | ## Prepare SCVMM management server
azure-cache-for-redis Cache How To Premium Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
Virtual network support is configured on the **New Azure Cache for Redis** pane
1. On the **Networking** tab, select **Virtual Networks** as your connectivity method. To use a new virtual network, create it first by following the steps in [Create a virtual network using the Azure portal](../virtual-network/manage-virtual-network.md#create-a-virtual-network) or [Create a virtual network (classic) by using the Azure portal](/previous-versions/azure/virtual-network/virtual-networks-create-vnet-classic-pportal). Then return to the **New Azure Cache for Redis** pane to create and configure your Premium-tier cache. > [!IMPORTANT]
- > When you deploy Azure Cache for Redis to a Resource Manager virtual network, the cache must be in a dedicated subnet that contains no other resources except for Azure Cache for Redis instances. If you attempt to deploy an Azure Cache for Redis instance to a Resource Manager virtual network subnet that contains other resources, or has a NAT Gateway assigned, the deployment fails.
- >
- >
+ > When you deploy Azure Cache for Redis to a Resource Manager virtual network, the cache must be in a dedicated subnet that contains no other resources except for Azure Cache for Redis instances. If you attempt to deploy an Azure Cache for Redis instance to a Resource Manager virtual network subnet that contains other resources, or has a NAT Gateway assigned, the deployment fails. The failure is because Azure Cache for Redis uses a basic load balancer that is not compatible with a NAT Gateway.
| Setting | Suggested value | Description | | | - | -- |
After the port requirements are configured as described in the previous section,
- [Reboot](cache-administration.md#reboot) all of the cache nodes. The cache won't be able to restart successfully if all of the required cache dependencies can't be reached, as documented in [Inbound port requirements](cache-how-to-premium-vnet.md#inbound-port-requirements) and [Outbound port requirements](cache-how-to-premium-vnet.md#outbound-port-requirements). - After the cache nodes have restarted, as reported by the cache status in the Azure portal, you can do the following tests:
- - Ping the cache endpoint by using port 6380 from a machine that's within the same virtual network as the cache, using [tcping](https://www.elifulkerson.com/projects/tcping.php). For example:
+ - Ping the cache endpoint by using port 6380 from a machine that's within the same virtual network as the cache, using [`tcping`](https://www.elifulkerson.com/projects/tcping.php). For example:
`tcping.exe contosocache.redis.cache.windows.net 6380`
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
The `FunctionName` method attribute sets the name of the function, which by defa
1. In the `HttpTrigger` method named `Run`, rename the `FunctionName` method attribute to `HttpExample`.
-Your function definition should now look like the following code:
+Your function definition should now look like the following code, depending on mode:
+
+# [In-process](#tab/in-process)
:::code language="csharp" source="~/functions-docs-csharp/http-trigger-template/HttpExample.cs" range="15-18":::
+# [Isolated process](#tab/isolated-process)
++
+
+ Now that you've renamed the function, you can test it on your local computer. ## Run the function locally
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
The following table shows current supported Node.js versions for each major vers
| Functions version | Node version (Windows) | Node Version (Linux) | ||| |
-| 4.x (recommended) | `~16` (preview)<br/>`~14` (recommended) | `node|16` (preview)<br/>`node|14` (recommended) |
+| 4.x (recommended) | `~16`<br/>`~14` | `node|16`<br/>`node|14` |
| 3.x | `~14`<br/>`~12`<br/>`~10` | `node|14`<br/>`node|12`<br/>`node|10` | | 2.x | `~12`<br/>`~10`<br/>`~8` | `node|10`<br/>`node|8` | | 1.x | 6.11.2 (locked by the runtime) | n/a |
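On Windows plans, the `~14` and `~16` values in this table map to the `WEBSITE_NODE_DEFAULT_VERSION` app setting. As a minimal sketch (the app and resource group names are placeholders, not taken from this article), you could pin the version with the Azure CLI:

```azurecli
# Minimal sketch: pin a Windows function app to Node.js 16 through its app setting.
# <function-app-name> and <resource-group> are placeholders.
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings "WEBSITE_NODE_DEFAULT_VERSION=~16"
```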
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md
Title: Deploy Start/Stop VMs v2 (preview)
-description: This article tells how to deploy the Start/Stop VMs v2 (preview) feature for your Azure VMs in your Azure subscription.
+ Title: Deploy Start/Stop VMs v2
+description: This article tells how to deploy the Start/Stop VMs v2 feature for your Azure VMs in your Azure subscription.
Previously updated : 06/25/2021 Last updated : 06/08/2022 ms.custom: subject-rbac-steps
-# Deploy Start/Stop VMs v2 (preview)
+# Deploy Start/Stop VMs v2
-Perform the steps in this topic in sequence to install the Start/Stop VMs v2 (preview) feature. After completing the setup process, configure the schedules to customize it to your requirements.
+Perform the steps in this topic in sequence to install the Start/Stop VMs v2 feature. After completing the setup process, configure the schedules to customize it to your requirements.
## Permissions considerations Please keep the following in mind before and during deployment:
Please keep the following in mind before and during deployment:
The deployment is initiated from the Start/Stop VMs v2 GitHub organization [here](https://github.com/microsoft/startstopv2-deployments/blob/main/README.md). While this feature is intended to manage all of your VMs in your subscription across all resource groups from a single deployment within the subscription, you can install another instance of it based on the operations model or requirements of your organization. It also can be configured to centrally manage VMs across multiple subscriptions.
-To simplify management and removal, we recommend you deploy Start/Stop VMs v2 (preview) to a dedicated resource group.
+To simplify management and removal, we recommend you deploy Start/Stop VMs v2 to a dedicated resource group.
> [!NOTE]
-> Currently this preview does not support specifying an existing Storage account or Application Insights resource.
+> Currently this solution does not support specifying an existing Storage account or Application Insights resource.
> [!NOTE]
To simplify management and removal, we recommend you deploy Start/Stop VMs v2 (p
## Enable multiple subscriptions
-After the Start/Stop deployment completes, perform the following steps to enable Start/Stop VMs v2 (preview) to take action across multiple subscriptions.
+After the Start/Stop deployment completes, perform the following steps to enable Start/Stop VMs v2 to take action across multiple subscriptions.
1. Copy the value for the Azure Function App name that you specified during the deployment.
In an environment that includes two or more components on multiple Azure Resourc
## Auto stop scenario
-Start/Stop VMs v2 (preview) can help manage the cost of running Azure Resource Manager and classic VMs in your subscription by evaluating machines that aren't used during non-peak periods, such as after hours, and automatically shutting them down if processor utilization is less than a specified percentage.
+Start/Stop VMs v2 can help manage the cost of running Azure Resource Manager and classic VMs in your subscription by evaluating machines that aren't used during non-peak periods, such as after hours, and automatically shutting them down if processor utilization is less than a specified percentage.
The following metric alert properties in the request body support customization:
To learn more about how Azure Monitor metric alerts work and how to configure th
## Next steps
-To learn how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 (preview) feature and perform other management tasks, see the [Manage Start/Stop VMs](manage.md) article.
+To learn how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 feature and perform other management tasks, see the [Manage Start/Stop VMs](manage.md) article.
azure-functions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/manage.md
Title: Manage Start/Stop VMs v2 (preview)
-description: This article tells how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 (preview) feature and perform other management tasks.
+ Title: Manage Start/Stop VMs v2
+description: This article tells how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 feature and perform other management tasks.
Previously updated : 06/25/2021 Last updated : 06/08/2022
-# How to manage Start/Stop VMs v2 (preview)
+# How to manage Start/Stop VMs v2
## Azure dashboard
-Start/Stop VMs v2 (preview) includes a [dashboard](../../azure-monitor/best-practices-analysis.md#azure-dashboards) to help you understand the management scope and recent operations against your VMs. It is a quick and easy way to verify the status of each operation that's performed on your Azure VMs. The visualization in each tile is based on a Log query and to see the query, select the **Open in logs blade** option in the right-hand corner of the tile. This opens the [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md#starting-log-analytics) tool in the Azure portal, and from here you can evaluate the query and modify to support your needs, such as custom [log alerts](../../azure-monitor/alerts/alerts-log.md), a custom [workbook](../../azure-monitor/visualize/workbooks-overview.md), etc.
+Start/Stop VMs v2 includes a [dashboard](../../azure-monitor/best-practices-analysis.md#azure-dashboards) to help you understand the management scope and recent operations against your VMs. It is a quick and easy way to verify the status of each operation that's performed on your Azure VMs. The visualization in each tile is based on a Log query and to see the query, select the **Open in logs blade** option in the right-hand corner of the tile. This opens the [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md#starting-log-analytics) tool in the Azure portal, and from here you can evaluate the query and modify to support your needs, such as custom [log alerts](../../azure-monitor/alerts/alerts-log.md), a custom [workbook](../../azure-monitor/visualize/workbooks-overview.md), etc.
The log data each tile in the dashboard displays is refreshed every hour, with a manual refresh option on demand by clicking the **Refresh** icon on a given visualization, or by refreshing the full dashboard.
To learn about working with a log-based dashboard, see the following [tutorial](
## Configure email notifications
-To change email notifications after Start/Stop VMs v2 (preview) is deployed, you can modify the action group created during deployment.
+To change email notifications after Start/Stop VMs v2 is deployed, you can modify the action group created during deployment.
1. In the Azure portal, navigate to **Monitor**, then **Alerts**. Select **Action groups**.
The following screenshot is an example email that is sent when the feature shuts
## Next steps
-To handle problems during VM management, see [Troubleshoot Start/Stop VMs v2](troubleshoot.md) (preview) issues.
+To handle problems during VM management, see [Troubleshoot Start/Stop VMs v2](troubleshoot.md) issues.
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md
Title: Start/Stop VMs v2 (preview) overview
-description: This article describes version two of the Start/Stop VMs (preview) feature, which starts or stops Azure Resource Manager and classic VMs on a schedule.
+ Title: Start/Stop VMs v2 overview
+description: This article describes version two of the Start/Stop VMs feature, which starts or stops Azure Resource Manager and classic VMs on a schedule.
Previously updated : 06/25/2021 Last updated : 06/08/2022
-# Start/Stop VMs v2 (preview) overview
+# Start/Stop VMs v2 overview
-The Start/Stop VMs v2 (preview) feature starts or stops Azure virtual machines (VMs) across multiple subscriptions. It starts or stops Azure VMs on user-defined schedules, provides insights through [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md), and send optional notifications by using [action groups](../../azure-monitor/alerts/action-groups.md). The feature can manage both Azure Resource Manager VMs and classic VMs for most scenarios.
+The Start/Stop VMs v2 feature starts or stops Azure virtual machines (VMs) across multiple subscriptions. It starts or stops Azure VMs on user-defined schedules, provides insights through [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md), and sends optional notifications by using [action groups](../../azure-monitor/alerts/action-groups.md). The feature can manage both Azure Resource Manager VMs and classic VMs for most scenarios.
-This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the [original version](../../automation/automation-solution-vm-management.md) available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
+This new version of Start/Stop VMs v2 provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the [original version](../../automation/automation-solution-vm-management.md) available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
> [!NOTE] > We've added a plan (**AZ - Availability Zone**) to our Start/Stop V2 solution to enable a high-availability offering. You can now choose between Consumption and Availability Zone plans before you start your deployment. In most cases, the monthly cost of the Availability Zone plan is higher when compared to the Consumption plan.
This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cos
## Overview
-Start/Stop VMs v2 (preview) is redesigned and it doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [previous version](../../automation/automation-solution-vm-management.md). This version relies on [Azure Functions](../../azure-functions/functions-overview.md) to handle the VM start and stop execution.
+Start/Stop VMs v2 is redesigned and it doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [previous version](../../automation/automation-solution-vm-management.md). This version relies on [Azure Functions](../../azure-functions/functions-overview.md) to handle the VM start and stop execution.
-A managed identity is created in Azure Active Directory (Azure AD) for this Azure Functions application and allows Start/Stop VMs v2 (preview) to easily access other Azure AD-protected resources, such as the logic apps and Azure VMs. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+A managed identity is created in Azure Active Directory (Azure AD) for this Azure Functions application and allows Start/Stop VMs v2 to easily access other Azure AD-protected resources, such as the logic apps and Azure VMs. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
An HTTP trigger endpoint function is created to support the schedule and sequence scenarios included with the feature, as shown in the following table.
The queue-based trigger functions are required in support of this feature. All t
Each Start/Stop action supports assignment of one or more subscriptions, resource groups, or a list of VMs.
-An Azure Storage account, which is required by Functions, is also used by Start/Stop VMs v2 (preview) for two purposes:
+An Azure Storage account, which is required by Functions, is also used by Start/Stop VMs v2 for two purposes:
- Uses Azure Table Storage to store the execution operation metadata (that is, the start/stop VM action).
Email notifications are also sent as a result of the actions performed on the VM
## New releases
-When a new version of Start/Stop VMs v2 (preview) is released, your instance is auto-updated without having to manually redeploy.
+When a new version of Start/Stop VMs v2 is released, your instance is auto-updated without having to manually redeploy.
## Supported scoping options
Specifying a list of VMs can be used when you need to perform the start and stop
- Your account has been granted the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) permission in the subscription. -- Start/Stop VMs v2 (preview) is available in all Azure global and US Government cloud regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=functions) page for Azure Functions.
+- Start/Stop VMs v2 is available in all Azure global and US Government cloud regions that are listed on the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=functions) page for Azure Functions.
## Next steps
-To deploy this feature, see [Deploy Start/Stop VMs](deploy.md) (preview).
+To deploy this feature, see [Deploy Start/Stop VMs](deploy.md).
azure-functions Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/remove.md
Title: Remove Start/Stop VMs v2 (preview) overview
-description: This article describes how to remove the Start/Stop VMs v2 (preview) feature.
+ Title: Remove Start/Stop VMs v2 overview
+description: This article describes how to remove the Start/Stop VMs v2 feature.
Previously updated : 06/25/2021 Last updated : 06/08/2022
-# How to remove Start/Stop VMs v2 (preview)
+# How to remove Start/Stop VMs v2
-After you enable the Start/Stop VMs v2 (preview) feature to manage the running state of your Azure VMs, you may decide to stop using it. Removing this feature can be done by deleting the resource group dedicated to store the following resources in support of Start/Stop VMs v2 (preview):
+After you enable the Start/Stop VMs v2 feature to manage the running state of your Azure VMs, you may decide to stop using it. You can remove the feature by deleting the resource group dedicated to storing the following resources in support of Start/Stop VMs v2:
- The Azure Functions applications - Schedules in Azure Logic Apps
After you enable the Start/Stop VMs v2 (preview) feature to manage the running s
- Azure Storage account > [!NOTE]
-> If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2 (preview), or if you have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this preview version.
+> If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2, or if you have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this version.
## Delete the dedicated resource group
To delete the resource group, follow the steps outlined in the [Azure Resource M
## Next steps
-To re-deploy this feature, see [Deploy Start/Stop v2](deploy.md) (preview).
+To re-deploy this feature, see [Deploy Start/Stop v2](deploy.md).
azure-functions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/troubleshoot.md
Title: Troubleshoot Start/Stop VMs (preview)
-description: This article tells how to troubleshoot issues encountered with the Start/Stop VMs (preview) feature for your Azure VMs.
+ Title: Troubleshoot Start/Stop VMs
+description: This article tells how to troubleshoot issues encountered with the Start/Stop VMs feature for your Azure VMs.
Previously updated : 06/25/2021 Last updated : 06/08/2022
-# Troubleshoot common issues with Start/Stop VMs (preview)
+# Troubleshoot common issues with Start/Stop VMs
-This article provides information on troubleshooting and resolving issues that may occur while attempting to install and configure Start/Stop VMs (preview). For general information, see [Start/Stop VMs overview](overview.md).
+This article provides information on troubleshooting and resolving issues that may occur while attempting to install and configure Start/Stop VMs. For general information, see [Start/Stop VMs overview](overview.md).
## General validation and troubleshooting
This section covers how to troubleshoot general issues with the schedules scenar
### Azure dashboard
-You can start by reviewing the Azure shared dashboard. The Azure shared dashboard deployed as part of Start/Stop VMs v2 (preview) is a quick and easy way to verify the status of each operation that's performed on your VMs. Refer to the **Recently attempted actions on VMs** tile to see all the recent operations executed on your VMs. There is some latency, around five minutes, for data to show up in the report as it pulls data from the Application Insights resource.
+You can start by reviewing the Azure shared dashboard. The Azure shared dashboard deployed as part of Start/Stop VMs v2 is a quick and easy way to verify the status of each operation that's performed on your VMs. Refer to the **Recently attempted actions on VMs** tile to see all the recent operations executed on your VMs. There is some latency, around five minutes, for data to show up in the report as it pulls data from the Application Insights resource.
### Logic Apps
Depending on which Logic Apps you have enabled to support your start/stop scenar
### Azure Storage
-You can review the details for the operations performed on the VMs that are written to the table **requestsstoretable** in the Azure storage account used for Start/Stop VMs v2 (preview). Perform the following steps to view those records.
+You can review the details for the operations performed on the VMs that are written to the table **requestsstoretable** in the Azure storage account used for Start/Stop VMs v2. Perform the following steps to view those records.
-1. Navigate to the storage account in the Azure portal and in the account select **Storage Explorer (preview)** from the left-hand pane.
+1. Navigate to the storage account in the Azure portal and in the account select **Storage Explorer** from the left-hand pane.
1. Select **TABLES** and then select **requeststoretable**. 1. Each record in the table represents the start/stop action performed against an Azure VM based on the target scope defined in the logic app scenario. You can filter the results by any one of the record properties (for example, TIMESTAMP, ACTION, or TARGETTOPLEVELRESOURCENAME).
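As an alternative to the portal steps above, here is a minimal sketch of reading the same table with the Azure CLI. The storage account name is a placeholder, the table name is taken from the steps above, and you need to supply credentials, for example through `--account-key` or the `AZURE_STORAGE_CONNECTION_STRING` environment variable:

```azurecli
# Minimal sketch: list entities from the requests table used by Start/Stop VMs v2.
# <storage-account-name> is a placeholder for the storage account created by the deployment.
az storage entity query \
  --account-name <storage-account-name> \
  --table-name requeststoretable
```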
From the logic app, the **Scheduled** HTTP function is invoked with Payload sche
Perform the following steps to see the invocation details. 1. In the Azure portal, navigate to **Azure Functions**.
-1. Select the Function app for Start/Stop VMs v2 (preview) from the list.
+1. Select the Function app for Start/Stop VMs v2 from the list.
1. Select **Functions** from the left-hand pane. 1. In the list, you see several functions associated for each scenario. Select the **Scheduled** HTTP function. 1. Select **Monitor** from the left-hand pane.
Learn more about monitoring Azure Functions and logic apps:
* [Monitor logic apps](../../logic-apps/monitor-logic-apps.md).
-* If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2 (preview), or if you have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is also available for this preview version.
+* If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2, or if you have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is also available for this version.
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Since the Dependency agent works at the kernel level, support is also dependent
| Distribution | OS version | Kernel version | |:|:|:|
-| Red Hat Linux 8 | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
+| Red Hat Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_64, 4.18.0-348.\*el8.x86_64 |
+| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
| | 8.3 | 4.18.0-240.\*el8_3.x86_64 | | | 8.2 | 4.18.0-193.\*el8_2.x86_64 | | | 8.1 | 4.18.0-147.\*el8_1.x86_64 |
Since the Dependency agent works at the kernel level, support is also dependent
| | 7.4 | 3.10.0-693 | | Red Hat Linux 6 | 6.10 | 2.6.32-754 | | | 6.9 | 2.6.32-696 |
-| CentOS Linux 8 | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
+| CentOS Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_64, 4.18.0-348.\*el8.x86_64 |
+| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
| | 8.3 | 4.18.0-240.\*el8_3.x86_64 | | | 8.2 | 4.18.0-193.\*el8_2.x86_64 | | | 8.1 | 4.18.0-147.\*el8_1.x86_64 |
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
description: Learn about Azure Monitor alerts, alert rules, action processing ru
Previously updated : 04/26/2022 Last updated : 06/09/2022
When the alert is considered resolved, the alert rule sends out a resolved notif
## Manage your alerts programmatically
-You can programmatically query for alerts using:
-You can also use [Resource Graphs](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade). Resource graphs are good for managing alerts across multiple subscriptions.
+You can query your alert instances to create custom views outside of the Azure portal, or to analyze your alerts to identify patterns and trends.
+We recommend that you use [Azure Resource Graph](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade) with the 'AlertsManagementResources' schema for managing alerts across multiple subscriptions. For a sample query, see [Azure Resource Graph sample queries for Azure Monitor](../resource-graph-samples.md).
+
+You can use Azure Resource Graph:
+ - with [Azure PowerShell](/powershell/module/az.monitor/)
+ - with the [Azure CLI](/cli/azure/monitor?view=azure-cli-latest&preserve-view=true)
+ - in the Azure portal
+
+You can also use the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) for lower scale querying or to update fired alerts.
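For example, here is a minimal sketch of listing fired alerts through Azure Resource Graph with the Azure CLI. It assumes the `resource-graph` CLI extension is installed, and the projected property names follow the alerts schema, so they may need adjusting for your environment:

```azurecli
# Minimal sketch: list recent alerts across subscriptions via Azure Resource Graph.
# Requires the resource-graph extension: az extension add --name resource-graph
az graph query -q "alertsmanagementresources | where type =~ 'microsoft.alertsmanagement/alerts' | project name, severity = properties.essentials.severity, state = properties.essentials.alertState | limit 20"
```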
## Pricing See the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/) for information about pricing.
azure-monitor Alerts Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-resource-move.md
Navigate to Alerts > Alert processing rules (preview) > filter by the containing
### Change scope of a rule using PowerShell
-1. Get the existing rule ([metric alerts](/powershell/module/az.monitor/get-azmetricalertrulev2), [activity log alerts](/powershell/module/az.monitor/get-azactivitylogalert), [alert processing rules](/powershell/module/az.alertsmanagement/get-azactionrule)).
+1. Get the existing rule ([metric alerts](/powershell/module/az.monitor/get-azmetricalertrulev2), [activity log alerts](/powershell/module/az.monitor/get-azactivitylogalert), [alert processing rules](/powershell/module/az.alertsmanagement/get-azalertprocessingrule)).
2. Modify the scope. If needed, split into two rules (relevant for some cases of metric alerts, as noted above).
-3. Redeploy the rule ([metric alerts](/powershell/module/az.monitor/add-azmetricalertrulev2), [activity log alerts](/powershell/module/az.monitor/enable-azactivitylogalert), [alert processing rules](/powershell/module/az.alertsmanagement/set-azactionrule)).
+3. Redeploy the rule ([metric alerts](/powershell/module/az.monitor/add-azmetricalertrulev2), [activity log alerts](/powershell/module/az.monitor/enable-azactivitylogalert), [alert processing rules](/powershell/module/az.alertsmanagement/set-azalertprocessingrule)).
### Change the scope of a rule using Azure CLI
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
Action groups provide a modular and reusable way of triggering actions for Azure
To learn more about action groups, see [Create and manage action groups in the Azure portal](../alerts/action-groups.md). > [!NOTE]
-> If you are using Log Serch alert notice that the query should project a ΓÇ£ComputerΓÇ¥ column with the configurtaion items list in order to have them as a part of the payload.
+> If you are using a log alert, the query results must include a "Computer" column containing the configuration items list.
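For instance, here is a minimal sketch of checking the query shape with the Azure CLI before wiring it to the action group. The workspace GUID is a placeholder, the `Heartbeat` table is only an example source, and the command may require the `log-analytics` CLI extension:

```azurecli
# Minimal sketch: confirm the alert query returns a "Computer" column.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "Heartbeat | where TimeGenerated > ago(15m) | summarize arg_max(TimeGenerated, *) by Computer | project Computer, TimeGenerated"
```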
To add a webhook to an action, follow these instructions for Secure Webhook:
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Below is our step-by-step troubleshooting guide for extension/agent based monito
:::image type="content" source="media/azure-web-apps-net-core/auto-instrumentation-status.png" alt-text="Screenshot displaying auto instrumentation status web page." lightbox="media/azure-web-apps-net-core/auto-instrumentation-status.png":::
-##### No Data
-
-1. List and identify the process that is hosting an app. Navigate to your terminal and on the command line type `ps ax`.
-
- The output should be similar to:
-
- ```bash
- PID TTY STAT TIME COMMAND
-
- 1 ? SNs 0:00 /bin/bash /opt/startup/startup.sh
-
- 19 ? SNs 0:00 /usr/sbin/sshd
-
- 27 ? SNLl 5:52 dotnet dotnet6demo.dll
-
- 50 ? SNs 0:00 sshd: root@pts/0
-
- 53 pts/0 SNs+ 0:00 -bash
-
- 55 ? SNs 0:00 sshd: root@pts/1
-
- 57 pts/1 SNs+ 0:00 -bash
- ```
--
-1. Then list environment variables from app process. On the command line type `cat /proc/27/environ | tr '\0' '\n`.
-
- The output should be similar to:
-
- ```bash
- ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=Microsoft.ApplicationInsights.StartupBootstrapper
-
- DOTNET_STARTUP_HOOKS=/DotNetCoreAgent/2.8.39/StartupHook/Microsoft.ApplicationInsights.StartupHook.dll
-
- APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://westus-0.in.applicationinsights.azure.com/
-
- ```
-
-1. Validate that `ASPNETCORE_HOSTINGSTARTUPASSEMBLIES`, `DOTNET_STARTUP_HOOKS`, and `APPLICATIONINSIGHTS_CONNECTION_STRING` are set.
- #### Default website deployed with web apps doesn't support automatic client-side monitoring
azure-monitor Azure Cli Application Insights Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-cli-application-insights-component.md
- Title: Manage Application Insights components in Azure CLI
-description: Use this sample code to manage components in Application Insights. This feature is part of Azure Monitor.
--- Previously updated : 09/10/2012
-ms.tool: azure-cli
--
-# Manage Application Insights components by using Azure CLI
-
-In Azure Monitor, components are independently deployable parts of your distributed or microservices application. Use these Azure CLI commands to manage components in Application Insights.
-
-The examples in this article do the following management tasks:
--- Create a component.-- Connect a component to a webapp.-- Link a component to a storage account with a component.-- Create a continuous export configuration for a component.--
-## Create a component
-
-If you don't already have a resource group and workspace, create them by using [az group create](/cli/azure/group#az-group-create) and [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create):
-
-```azurecli
-az group create --name ContosoAppInsightRG --location eastus2
-az monitor log-analytics workspace create --resource-group ContosoAppInsightRG \
- --workspace-name AppInWorkspace
-```
-
-To create a component, run the [az monitor app-insights component create](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create) command. The [az monitor app-insights component show](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-show) command displays the component.
-
-```azurecli
-az monitor app-insights component create --resource-group ContosoAppInsightRG \
- --app ContosoApp --location eastus2 --kind web --application-type web \
- --retention-time 120
-az monitor app-insights component show --resource-group ContosoAppInsightRG --app ContosoApp
-```
-
-## Connect a webapp
-
-This example connects your component to a webapp. You can create a webapp by using the [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create) and [az webapp create](/cli/azure/webapp#az-webapp-create) commands:
-
-```azurecli
-az appservice plan create --resource-group ContosoAppInsightRG --name ContosoAppService
-az webapp create --resource-group ContosoAppInsightRG --name ContosoApp \
- --plan ContosoAppService --name ContosoApp8765
-```
-
-Run the [az monitor app-insights component connect-webapp](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-connect-webapp) command to connect your component to the webapp:
-
-```azurecli
-az monitor app-insights component connect-webapp --resource-group ContosoAppInsightRG \
- --app ContosoApp --web-app ContosoApp8765 --enable-debugger false --enable-profiler false
-```
-
-You can instead connect to an Azure function by using the [az monitor app-insights component connect-function](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-connect-function) command.
-
-## Link a component to storage
-
-You can link a component to a storage account. To create a storage account, use the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command:
-
-```azurecli
-az storage account create --resource-group ContosoAppInsightRG \
- --name contosolinkedstorage --location eastus2 --sku Standard_LRS
-```
-
-To link your component to the storage account, run the [az monitor app-insights component linked-storage link](/cli/azure/monitor/app-insights/component/linked-storage#az-monitor-app-insights-component-linked-storage-link) command. You can see the existing links by using the [az monitor app-insights component linked-storage show](/cli/azure/monitor/app-insights/component/linked-storage#az-monitor-app-insights-component-linked-storage-show) command:
--
-```azurecli
-az monitor app-insights component linked-storage link --resource-group ContosoAppInsightRG \
- --app ContosoApp --storage-account contosolinkedstorage
-az monitor app-insights component linked-storage show --resource-group ContosoAppInsightRG \
- --app ContosoApp
-```
-
-To unlink the storage, run the [az monitor app-insights component linked-storage unlink](/cli/azure/monitor/app-insights/component/linked-storage#az-monitor-app-insights-component-linked-storage-unlink) command:
-
-```AzureCLI
-az monitor app-insights component linked-storage unlink \
- --resource-group ContosoAppInsightRG --app ContosoApp
-```
-
-## Set up continuous export
-
-Continuous export saves events from Application Insights portal in a storage container in JSON format.
-
-> [!NOTE]
-> Continuous export is only supported for classic Application Insights resources. [Workspace-based Application Insights resources](../app/create-workspace-resource.md) must use [diagnostic settings](../app/create-workspace-resource.md#export-telemetry).
->
-
-To create a storage container, run the [az storage container create](/cli/azure/storage/container#az-storage-container-create) command.
-
-```azurecli
-az storage container create --name contosostoragecontainer --account-name contosolinkedstorage \
- --public-access blob
-```
-
-You need access for the container to be write only. Run the [az storage container policy create](/cli/azure/storage/container/policy#az-storage-container-policy-create) cmdlet:
-
-```azurecli
-az storage container policy create --container-name contosostoragecontainer \
- --account-name contosolinkedstorage --name WAccessPolicy --permissions w
-```
-
-Create an SAS key by using the [az storage container generate-sas](/cli/azure/storage/container#az-storage-container-generate-sas) command. Be sure to use the `--output tsv` parameter value to save the key without unwanted formatting like quotation marks. For more information, see [Use Azure CLI effectively](/cli/azure/use-cli-effectively).
-
-```azurecli
-containersas=$(az storage container generate-sas --name contosostoragecontainer \
- --account-name contosolinkedstorage --permissions w --output tsv)
-```
-
-To create a continuous export, run the [az monitor app-insights component continues-export create](/cli/azure/monitor/app-insights/component/continues-export#az-monitor-app-insights-component-continues-export-create) command:
-
-```azurecli
-az monitor app-insights component continues-export create --resource-group ContosoAppInsightRG \
- --app ContosoApp --record-types Event --dest-account contosolinkedstorage \
- --dest-container contosostoragecontainer --dest-sub-id 00000000-0000-0000-0000-000000000000 \
- --dest-sas $containersas
-```
-
-You can delete a configured continuous export by using the [az monitor app-insights component continues-export delete](/cli/azure/monitor/app-insights/component/continues-export#az-monitor-app-insights-component-continues-export-delete) command:
-
-```azurecli
-az monitor app-insights component continues-export list \
- --resource-group ContosoAppInsightRG --app ContosoApp
-az monitor app-insights component continues-export delete \
- --resource-group ContosoAppInsightRG --app ContosoApp --id abcdefghijklmnopqrstuvwxyz=
-```
-
-## Clean up deployment
-
-If you created a resource group to test these commands, you can remove the resource group and all its contents by using the [az group delete](/cli/azure/group#az-group-delete) command:
-
-```azurecli
-az group delete --name ContosoAppInsightRG
-```
-
-## Azure CLI commands used in this article
--- [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create)-- [az group create](/cli/azure/group#az-group-create)-- [az group delete](/cli/azure/group#az-group-delete)-- [az monitor app-insights component connect-webapp](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-connect-webapp)-- [az monitor app-insights component continues-export create](/cli/azure/monitor/app-insights/component/continues-export#az-monitor-app-insights-component-continues-export-create)-- [az monitor app-insights component continues-export delete](/cli/azure/monitor/app-insights/component/continues-export#az-monitor-app-insights-component-continues-export-delete)-- [az monitor app-insights component continues-export list](/cli/azure/monitor/app-insights/component/continues-export#az-monitor-app-insights-component-continues-export-list)-- [az monitor app-insights component create](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create)-- [az monitor app-insights component linked-storage link](/cli/azure/monitor/app-insights/component/linked-storage#az-monitor-app-insights-component-linked-storage-link)-- [az monitor app-insights component linked-storage unlink](/cli/azure/monitor/app-insights/component/linked-storage#az-monitor-app-insights-component-linked-storage-unlink)-- [az monitor app-insights component show](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-show)-- [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create)-- [az storage account create](/cli/azure/storage/account#az-storage-account-create)-- [az storage container create](/cli/azure/storage/container#az-storage-container-create)-- [az storage container generate-sas](/cli/azure/storage/container#az-storage-container-generate-sas)-- [az storage container policy create](/cli/azure/storage/container/policy#az-storage-container-policy-create)-- [az webapp create](/cli/azure/webapp#az-webapp-create)-
-## Next steps
-
-[Azure Monitor CLI samples](../cli-samples.md)
-
-[Find and diagnose performance issues](../app/tutorial-performance.md)
-
-[Monitor and alert on application health](../app/tutorial-alert.md)
azure-monitor Solution Targeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-targeting.md
description: Targeting monitoring solutions allows you to limit monitoring solut
Previously updated : 04/27/2017 Last updated : 06/08/2022
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
A Log Analytics workspace is a unique environment for log data from Azure Monito
You can use a single workspace for all your data collection, or you may create multiple workspaces based on a variety of requirements such as the geographic location of the data, access rights that define which users can access data, and configuration settings such as the pricing tier and data retention.
-To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see Design a Log Analytics workspace configuration(workspace-design.md).
+To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see [Design a Log Analytics workspace configuration](./workspace-design.md).
## Data structure
azure-monitor Profiler Cloudservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-cloudservice.md
Title: Profile live Azure Cloud Services with Application Insights | Microsoft Docs
-description: Enable Application Insights Profiler for Azure Cloud Services.
+ Title: Enable Profiler for Azure Cloud Services | Microsoft Docs
+description: Profile live Azure Cloud Services with Application Insights Profiler.
Previously updated : 08/06/2018 Last updated : 05/25/2022
-# Profile live Azure Cloud Services with Application Insights
+# Enable Profiler for Azure Cloud Services
-You can also deploy Application Insights Profiler on these
-* [Azure App Service](profiler.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Service Fabric applications](profiler-servicefabric.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Virtual Machines](profiler-vm.md?toc=/azure/azure-monitor/toc.json)
+Receive performance traces for your [Azure Cloud Service](../../cloud-services-extended-support/overview.md) by enabling the Application Insights Profiler. The Profiler is installed on your Cloud Service via the [Azure Diagnostics extension](../agents/diagnostics-extension-overview.md).
-Application Insights Profiler is installed with the Azure Diagnostics extension. You just need to configure Azure Diagnostics to install Profiler and send profiles to your Application Insights resource.
+In this article, you will:
-## Enable Profiler for Azure Cloud Services
-1. Check to make sure that you're using [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or newer. If you are using OS family 4, you'll need to install .NET Framework 4.6.1 or newer with a [startup task](../../cloud-services/cloud-services-dotnet-install-dotnet.md). OS Family 5 includes a compatible version of .NET Framework by default.
+- Enable your Cloud Service to send diagnostics data to Application Insights.
+- Configure the Azure Diagnostics extension within your solution to install Profiler.
+- Deploy your service and generate traffic to view Profiler traces.
-1. Add [Application Insights SDK to Azure Cloud Services](../app/cloudservices.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+## Prerequisites
- **The bug in the profiler that ships in the WAD for Cloud Services has been fixed.** The latest version of WAD (1.12.2.0) for Cloud Services works with all recent versions of the App Insights SDK. Cloud Service hosts will upgrade WAD automatically, but it isn't immediate. To force an upgrade, you can redeploy your service or reboot the node.
+- Make sure you've [set up diagnostics for Azure Cloud Services](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines).
+- Use [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or newer.
+ - If you're using [OS Family 4](../../cloud-services/cloud-services-guestos-update-matrix.md#family-4-releases), install .NET Framework 4.6.1 or newer with a [startup task](../../cloud-services/cloud-services-dotnet-install-dotnet.md).
+ - [OS Family 5](../../cloud-services/cloud-services-guestos-update-matrix.md#family-5-releases) includes a compatible version of .NET Framework by default.
-1. Track requests with Application Insights:
+## Track requests with Application Insights
- * For ASP.NET web roles, Application Insights can track the requests automatically.
+When publishing your Cloud Service to the Azure portal, add the [Application Insights SDK to Azure Cloud Services](../app/cloudservices.md).
- * For worker roles, [add code to track requests](profiler-trackrequests.md?toc=/azure/azure-monitor/toc.json).
-1. Configure the Azure Diagnostics extension to enable Profiler:
+Once you've added the SDK and published your Cloud Service to the Azure portal, track requests using Application Insights.
- a. Locate the [Azure Diagnostics](../agents/diagnostics-extension-overview.md) *diagnostics.wadcfgx* file for your application role, as shown here:
+- **For ASP.NET web roles**, Application Insights tracks the requests automatically.
+- **For worker roles**, you need to [add code manually to your application to track requests](profiler-trackrequests.md).
- ![Location of the diagnostics config file](./media/profiler-cloudservice/cloud-service-solution-explorer.png)
+## Configure the Azure Diagnostics extension
- If you can't find the file, see [Set up diagnostics for Azure Cloud Services and Virtual Machines](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines).
+Locate the Azure Diagnostics *diagnostics.wadcfgx* file for your application role:
- b. Add the following `SinksConfig` section as a child element of `WadCfg`:
- ```xml
- <WadCfg>
- <DiagnosticMonitorConfiguration>...</DiagnosticMonitorConfiguration>
- <SinksConfig>
- <Sink name="MyApplicationInsightsProfiler">
- <!-- Replace with your own Application Insights instrumentation key. -->
- <ApplicationInsightsProfiler>00000000-0000-0000-0000-000000000000</ApplicationInsightsProfiler>
- </Sink>
- </SinksConfig>
- </WadCfg>
- ```
+Add the following `SinksConfig` section as a child element of `WadCfg`:
- > [!NOTE]
- > If the *diagnostics.wadcfgx* file also contains another sink of type ApplicationInsights, all three of the following instrumentation keys must match:
- > * The key that's used by your application.
- > * The key that's used by the ApplicationInsights sink.
- > * The key that's used by the ApplicationInsightsProfiler sink.
- >
- > You can find the actual instrumentation key value that's used by the `ApplicationInsights` sink in the *ServiceConfiguration.\*.cscfg* files.
- > After the Visual Studio 15.5 Azure SDK release, only the instrumentation keys that are used by the application and the ApplicationInsightsProfiler sink need to match each other.
+```xml
+<WadCfg>
+ <DiagnosticMonitorConfiguration>...</DiagnosticMonitorConfiguration>
+ <SinksConfig>
+ <Sink name="MyApplicationInsightsProfiler">
+ <!-- Replace with your own Application Insights instrumentation key. -->
+ <ApplicationInsightsProfiler>00000000-0000-0000-0000-000000000000</ApplicationInsightsProfiler>
+ </Sink>
+ </SinksConfig>
+</WadCfg>
+```
-1. Deploy your service with the new Diagnostics configuration, and Application Insights Profiler is configured to run on your service.
+> [!NOTE]
+> The instrumentation keys that are used by the application and the ApplicationInsightsProfiler sink need to match each other.
+
+Deploy your service with the new Diagnostics configuration. Application Insights Profiler is now configured to run on your Cloud Service.
+
+## Generate traffic to your service
+
+Now that your Azure Cloud Service is deployed with Profiler, you can generate traffic to view Profiler traces.
+
+Generate traffic to your application by setting up an [availability test](../app/monitor-web-app-availability.md). Wait 10 to 15 minutes for traces to be sent to the Application Insights instance.
+
+Navigate to your Azure Cloud Service's Application Insights resource. In the left side menu, select **Performance**.
++
+Select the **Profiler** for your Cloud Service.
++
+Select **Profile now** to start a profiling session. This process will take a few minutes.
++
+For more instructions on profiling sessions, see the [Profiler overview](./profiler-overview.md#start-a-profiler-on-demand-session).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Next steps
-* Generate traffic to your application (for example, launch an [availability test](../app/monitor-web-app-availability.md)). Then, wait 10 to 15 minutes for traces to start to be sent to the Application Insights instance.
-* See [Profiler traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json) in the Azure portal.
-* To troubleshoot Profiler issues, see [Profiler troubleshooting](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).
-
+- Learn more about [configuring Profiler](./profiler-settings.md).
+- [Troubleshoot Profiler issues](./profiler-troubleshooting.md).
azure-monitor Vminsights Configure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-configure-workspace.md
Previously updated : 12/22/2020 Last updated : 06/07/2022
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-hybrid.md
description: This article describes how you enable VM insights for a hybrid clou
Previously updated : 07/27/2020 Last updated : 06/08/2022
You can download the Dependency agent from these locations:
| File | OS | Version | SHA-256 | |:--|:--|:--|:--|
-| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.13.19190 | 0882504FE5828C4C4BA0A869BD9F6D5B0020A52156DDBD21D55AAADA762923C4 |
-| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.13.19190 | 7D90A2A7C6F1D7FB2BCC274ADC4C5D6C118E832FF8A620971734AED4F446B030 |
+| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.14.20760 | D4DB398FAD36E86FEACCC41D7B8AF46711346A943806769B6CE017F0BF1625FF |
+| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.14.20760 | 3DE3B485BA79B57E74B3DFB60FD277A30C8A5D1BD898455AD77FECF20E0E2610 |
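As a rough sketch, you can verify a download against the published hash before installing it. The example below uses the Linux installer; compare the printed value with the SHA-256 column in the table above:

```bash
# Minimal sketch: download the Linux Dependency agent and print its SHA-256
# so it can be compared with the hash published in the table above.
curl -sSL -o InstallDependencyAgent-Linux64.bin https://aka.ms/dependencyagentlinux
sha256sum InstallDependencyAgent-Linux64.bin
```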
## Install the Dependency agent on Windows
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
description: Learn how to deploy and configure VM insights. Find out the system
Previously updated : 12/22/2020 Last updated : 06/08/2022
azure-monitor Vminsights Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-policy.md
description: Describes how you enable VM insights for multiple Azure virtual mac
Previously updated : 07/27/2020 Last updated : 06/08/2022
azure-monitor Vminsights Enable Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-portal.md
description: Learn how to enable VM insights on a single Azure virtual machine o
Previously updated : 07/27/2020 Last updated : 06/08/2022
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
description: Describes how to enable VM insights for Azure virtual machines or v
Previously updated : 07/27/2020 Last updated : 06/08/2022
azure-monitor Vminsights Enable Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-resource-manager.md
description: This article describes how you enable VM insights for one or more A
Previously updated : 07/27/2020 Last updated : 06/08/2022
azure-monitor Vminsights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md
+
+ Title: How to Query Logs from VM insights
+description: VM insights collects metrics and log data, and this article describes the records and includes sample queries.
+++ Last updated : 06/08/2022++
+# How to query logs from VM insights
+
+VM insights collects performance and connection metrics, computer and process inventory data, and health state information and forwards it to the Log Analytics workspace in Azure Monitor. This data is available for [query](../logs/log-query-overview.md) in Azure Monitor. You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting.
+
+## Map records
+
+One record is generated per hour for each unique computer and process, in addition to the records that are generated when a process or computer starts or is added to VM insights. The fields and values in the ServiceMapComputer_CL events map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the ServiceMapProcess_CL events map to the fields of the Process resource in the ServiceMap Azure Resource Manager API. The ResourceName_s field matches the name field in the corresponding Resource Manager resource.
+
+There are internally generated properties you can use to identify unique processes and computers:
+
+- Computer: Use *ResourceId* or *ResourceName_s* to uniquely identify a computer within a Log Analytics workspace.
+- Process: Use *ResourceId* to uniquely identify a process within a Log Analytics workspace. *ResourceName_s* is unique within the context of the machine on which the process is running (MachineResourceName_s)
+
+Because multiple records can exist for a specified process and computer in a specified time range, queries can return more than one record for the same computer or process. To include only the most recent record, add `| summarize arg_max(TimeGenerated, *) by ResourceId` to the query.
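For example, here is a minimal sketch of running such a query with the Azure CLI (the workspace GUID is a placeholder, and the command may require the `log-analytics` CLI extension):

```azurecli
# Minimal sketch: keep only the most recent ServiceMapComputer_CL record per machine.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "ServiceMapComputer_CL | summarize arg_max(TimeGenerated, *) by ResourceId | project ResourceId, ResourceName_s, TimeGenerated"
```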
+
+### Connections and ports
+
+The Connection Metrics feature introduces two new tables in Azure Monitor logs - VMConnection and VMBoundPort. These tables provide information about the connections for a machine (inbound and outbound), as well as the server ports that are open/active on them. ConnectionMetrics are also exposed via APIs that provide the means to obtain a specific metric during a time window. TCP connections resulting from *accepting* on a listening socket are inbound, while those created by *connecting* to a given IP and port are outbound. The direction of a connection is represented by the Direction property, which can be set to either **inbound** or **outbound**.
+
+Records in these tables are generated from data reported by the Dependency Agent. Every record represents an observation over a 1-minute time interval. The TimeGenerated property indicates the start of the time interval. Each record contains information to identify the respective entity, that is, connection or port, as well as metrics associated with that entity. Currently, only network activity that occurs using TCP over IPv4 is reported.
+
+#### Common fields and conventions
+
+The following fields and conventions apply to both VMConnection and VMBoundPort:
+
+- Computer: Fully-qualified domain name of reporting machine
+- AgentId: The unique identifier for a machine with the Log Analytics agent
+- Machine: Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It is of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId
+- Process: Name of the Azure Resource Manager resource for the process exposed by ServiceMap. It is of the form *p-{hex string}*. Process is unique within a machine scope and to generate a unique process ID across machines, combine Machine and Process fields.
+- ProcessName: Executable name of the reporting process.
+- All IP addresses are strings in IPv4 canonical format, for example *13.107.3.160*
+
+To manage cost and complexity, connection records do not represent individual physical network connections. Multiple physical network connections are grouped into a logical connection, which is then reflected in the respective table. This means that records in the *VMConnection* table represent a logical grouping and not the individual physical connections that are being observed. Physical network connections sharing the same values for the following attributes during a given one-minute interval are aggregated into a single logical record in *VMConnection*.
+
+| Property | Description |
+|:--|:--|
+|Direction |Direction of the connection, value is *inbound* or *outbound* |
+|Machine |The computer FQDN |
+|Process |Identity of process or groups of processes, initiating/accepting the connection |
+|SourceIp |IP address of the source |
+|DestinationIp |IP address of the destination |
+|DestinationPort |Port number of the destination |
+|Protocol |Protocol used for the connection. The value is *tcp*. |
+
+To account for the impact of grouping, information about the number of grouped physical connections is provided in the following properties of the record:
+
+| Property | Description |
+|:--|:--|
+|LinksEstablished |The number of physical network connections that have been established during the reporting time window |
+|LinksTerminated |The number of physical network connections that have been terminated during the reporting time window |
+|LinksFailed |The number of physical network connections that have failed during the reporting time window. This information is currently available only for outbound connections. |
+|LinksLive |The number of physical network connections that were open at the end of the reporting time window|
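+
+For example, a sketch that aggregates these link counters per computer for outbound connections over the last day (LinksFailed is reported only for outbound connections):
+
+```kusto
+// Outbound link activity per computer over the last day
+VMConnection
+| where TimeGenerated > ago(1d)
+| where Direction == "outbound"
+| summarize Established = sum(LinksEstablished), Terminated = sum(LinksTerminated), Failed = sum(LinksFailed) by Computer
+| order by Failed desc
+```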
+
+#### Metrics
+
+In addition to connection count metrics, information about the volume of data sent and received on a given logical connection or network port is also included in the following properties of the record:
+
+| Property | Description |
+|:--|:--|
+|BytesSent |Total number of bytes that have been sent during the reporting time window |
+|BytesReceived |Total number of bytes that have been received during the reporting time window |
+|Responses |The number of responses observed during the reporting time window. |
+|ResponseTimeMax |The largest response time (milliseconds) observed during the reporting time window. If no value, the property is blank.|
+|ResponseTimeMin |The smallest response time (milliseconds) observed during the reporting time window. If no value, the property is blank.|
+|ResponseTimeSum |The sum of all response times (milliseconds) observed during the reporting time window. If no value, the property is blank.|
+
+The third type of data being reported is response time: how long a caller spends waiting for a request sent over a connection to be processed and responded to by the remote endpoint. The response time reported is an estimate of the true response time of the underlying application protocol. It is computed using heuristics based on the observation of the flow of data between the source and destination ends of a physical network connection. Conceptually, it is the difference between the time the last byte of a request leaves the sender and the time the last byte of the response arrives back at it. These two timestamps are used to delineate request and response events on a given physical connection. The difference between them represents the response time of a single request.
+
+In this first release of the feature, the algorithm is an approximation that may work with varying degrees of success depending on the actual application protocol used for a given network connection. For example, the current approach works well for request-response based protocols such as HTTP(S), but does not work with one-way or message queue-based protocols.
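+
+Because only the sum and count of response times are stored, an average must be computed from them. The following is a hedged sketch (column aliases are illustrative):
+
+```kusto
+// Approximate average response time (milliseconds) per destination over the last hour
+VMConnection
+| where TimeGenerated > ago(1h)
+| where Responses > 0
+| summarize TotalResponseTime = sum(ResponseTimeSum), TotalResponses = sum(Responses) by DestinationIp, DestinationPort
+| extend AvgResponseTimeMs = todouble(TotalResponseTime) / TotalResponses
+| order by AvgResponseTimeMs desc
+```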
+
+Here are some important points to consider:
+
+1. If a process accepts connections on the same IP address but over multiple network interfaces, a separate record for each interface will be reported.
+2. Records with wildcard IP will contain no activity. They are included to represent the fact that a port on the machine is open to inbound traffic.
+3. To reduce verbosity and data volume, records with wildcard IP will be omitted when there is a matching record (for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the IsWildcardBind property of the record with the specific IP address will be set to *True* to indicate that the port is exposed over every interface of the reporting machine.
+4. Ports that are bound only on a specific interface have IsWildcardBind set to *False*.
+
+#### Naming and Classification
+
+For convenience, the IP address of the remote end of a connection is included in the RemoteIp property. For inbound connections, RemoteIp is the same as SourceIp, while for outbound connections, it is the same as DestinationIp. The RemoteDnsCanonicalNames property represents the DNS canonical names reported by the machine for RemoteIp. The RemoteDnsQuestions property represents the DNS questions reported by the machine for RemoteIp. The RemoteClassification property is reserved for future use.
+
+#### Geolocation
+
+*VMConnection* also includes geolocation information for the remote end of each connection record in the following properties of the record:
+
+| Property | Description |
+|:--|:--|
+|RemoteCountry |The name of the country/region hosting RemoteIp. For example, *United States* |
+|RemoteLatitude |The geolocation latitude. For example, *47.68* |
+|RemoteLongitude |The geolocation longitude. For example, *-122.12* |
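+
+For example, a minimal sketch that summarizes outbound traffic by remote country/region over the last day:
+
+```kusto
+// Outbound bytes sent per remote country/region over the last day
+VMConnection
+| where TimeGenerated > ago(1d)
+| where Direction == "outbound" and isnotempty(RemoteCountry)
+| summarize TotalBytesSent = sum(BytesSent) by RemoteCountry
+| order by TotalBytesSent desc
+```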
+
+#### Malicious IP
+
+Every RemoteIp property in the *VMConnection* table is checked against a set of IPs with known malicious activity. If the RemoteIp is identified as malicious, the following properties of the record will be populated (they are empty when the IP is not considered malicious):
+
+| Property | Description |
+|:--|:--|
+|MaliciousIp |The RemoteIp address |
+|IndicatorThreatType |The threat indicator detected, one of the following values: *Botnet*, *C2*, *CryptoMining*, *Darknet*, *DDos*, *MaliciousUrl*, *Malware*, *Phishing*, *Proxy*, *PUA*, *Watchlist*. |
+|Description |Description of the observed threat. |
+|TLPLevel |Traffic Light Protocol (TLP) Level is one of the defined values, *White*, *Green*, *Amber*, *Red*. |
+|Confidence |Values are *0 - 100*. |
+|Severity |Values are *0 - 5*, where *5* is the most severe and *0* is not severe at all. Default value is *3*. |
+|FirstReportedDateTime |The first time the provider reported the indicator. |
+|LastReportedDateTime |The last time the indicator was seen by Interflow. |
+|IsActive |Indicates whether the indicator is active, with a *True* or *False* value. |
+|ReportReferenceLink |Links to reports related to a given observable. |
+|AdditionalInformation |Provides additional information, if applicable, about the observed threat. |
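+
+For example, a sketch that lists computers with connections to IP addresses flagged as malicious over the last day (assuming the column names listed in the table above):
+
+```kusto
+// Connections to known malicious IP addresses over the last day
+VMConnection
+| where TimeGenerated > ago(1d)
+| where isnotempty(MaliciousIp)
+| summarize Connections = sum(LinksEstablished) by Computer, MaliciousIp, IndicatorThreatType, Severity
+| order by Connections desc
+```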
+
+### Ports
+
+Ports on a machine that actively accept incoming traffic or could potentially accept traffic, but are idle during the reporting time window, are written to the VMBoundPort table.
+
+Every record in VMBoundPort is identified by the following fields:
+
+| Property | Description |
+|:--|:--|
+|Process | Identity of the process (or groups of processes) with which the port is associated.|
+|Ip | Port IP address (can be wildcard IP, *0.0.0.0*) |
+|Port |The Port number |
+|Protocol | The protocol. Example, *tcp* or *udp* (only *tcp* is currently supported).|
+
+The identity of a port is derived from the above fields and is stored in the PortId property. This property can be used to quickly find records for a specific port across time.
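+
+As a sketch, the most recent record for each bound port on a single machine can be retrieved as follows (the computer name is a placeholder):
+
+```kusto
+// Most recent record per bound port on one machine
+VMBoundPort
+| where Computer == "web-01.contoso.com"
+| summarize arg_max(TimeGenerated, *) by PortId
+```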
+
+#### Metrics
+
+Port records include metrics representing the connections associated with them. Currently, the following metrics are reported (the details for each metric are described in the previous section):
+
+- BytesSent and BytesReceived
+- LinksEstablished, LinksTerminated, LinksLive
+- ResponseTime, ResponseTimeMin, ResponseTimeMax, ResponseTimeSum
+
+Here are some important points to consider:
+
+- If a process accepts connections on the same IP address but over multiple network interfaces, a separate record for each interface will be reported.
+- Records with wildcard IP will contain no activity. They are included to represent the fact that a port on the machine is open to inbound traffic.
+- To reduce verbosity and data volume, records with wildcard IP will be omitted when there is a matching record (for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the *IsWildcardBind* property for the record with the specific IP address, will be set to *True*. This indicates the port is exposed over every interface of the reporting machine.
+- Ports that are bound only on a specific interface have IsWildcardBind set to *False*. A query sketch that uses this property follows this list.
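+
+A minimal sketch that lists ports exposed on every interface of their reporting machines (assuming *IsWildcardBind* is stored as a Boolean):
+
+```kusto
+// Ports exposed on every interface of the reporting machines
+VMBoundPort
+| where IsWildcardBind == true
+| distinct Computer, Port, Protocol, ProcessName
+```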
+
+### VMComputer records
+
+Records with a type of *VMComputer* have inventory data for servers with the Dependency agent. These records have the properties in the following table:
+
+| Property | Description |
+|:--|:--|
+|TenantId | The unique identifier for the workspace |
+|SourceSystem | *Insights* |
+|TimeGenerated | Timestamp of the record (UTC) |
+|Computer | The computer FQDN |
+|AgentId | The unique ID of the Log Analytics agent |
+|Machine | Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It is of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId. |
+|DisplayName | Display name |
+|FullDisplayName | Full display name |
+|HostName | The name of machine without domain name |
+|BootTime | The machine boot time (UTC) |
+|TimeZone | The normalized time zone |
+|VirtualizationState | *virtual*, *hypervisor*, *physical* |
+|Ipv4Addresses | Array of IPv4 addresses |
+|Ipv4SubnetMasks | Array of IPv4 subnet masks (in the same order as Ipv4Addresses). |
+|Ipv4DefaultGateways | Array of IPv4 gateways |
+|Ipv6Addresses | Array of IPv6 addresses |
+|MacAddresses | Array of MAC addresses |
+|DnsNames | Array of DNS names associated with the machine. |
+|DependencyAgentVersion | The version of the Dependency agent running on the machine. |
+|OperatingSystemFamily | *Linux*, *Windows* |
+|OperatingSystemFullName | The full name of the operating system |
+|PhysicalMemoryMB | The physical memory in megabytes |
+|Cpus | The number of processors |
+|CpuSpeed | The CPU speed in MHz |
+|VirtualMachineType | *hyperv*, *vmware*, *xen* |
+|VirtualMachineNativeId | The VM ID as assigned by its hypervisor |
+|VirtualMachineNativeName | The name of the VM |
+|VirtualMachineHypervisorId | The unique identifier of the hypervisor hosting the VM |
+|HypervisorType | *hyperv* |
+|HypervisorId | The unique ID of the hypervisor |
+|HostingProvider | *azure* |
+|_ResourceId | The unique identifier for an Azure resource |
+|AzureSubscriptionId | A globally unique identifier that identifies your subscription |
+|AzureResourceGroup | The name of the Azure resource group the machine is a member of. |
+|AzureResourceName | The name of the Azure resource |
+|AzureLocation | The location of the Azure resource |
+|AzureUpdateDomain | The name of the Azure update domain |
+|AzureFaultDomain | The name of the Azure fault domain |
+|AzureVmId | The unique identifier of the Azure virtual machine |
+|AzureSize | The size of the Azure VM |
+|AzureImagePublisher | The name of the Azure VM publisher |
+|AzureImageOffering | The name of the Azure VM offer type |
+|AzureImageSku | The SKU of the Azure VM image |
+|AzureImageVersion | The version of the Azure VM image |
+|AzureCloudServiceName | The name of the Azure cloud service |
+|AzureCloudServiceDeployment | Deployment ID for the Cloud Service |
+|AzureCloudServiceRoleName | Cloud Service role name |
+|AzureCloudServiceRoleType | Cloud Service role type: *worker* or *web* |
+|AzureCloudServiceInstanceId | Cloud Service role instance ID |
+|AzureVmScaleSetName | The name of the virtual machine scale set |
+|AzureVmScaleSetDeployment | Virtual machine scale set deployment ID |
+|AzureVmScaleSetResourceId | The unique identifier of the virtual machine scale set resource.|
+|AzureVmScaleSetInstanceId | The unique identifier of the virtual machine scale set |
+|AzureServiceFabricClusterId | The unique identifier of the Azure Service Fabric cluster |
+|AzureServiceFabricClusterName | The name of the Azure Service Fabric cluster |
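+
+For example, a sketch that summarizes the most recent inventory by operating system:
+
+```kusto
+// Count of machines by operating system, using only the latest record per machine
+VMComputer
+| summarize arg_max(TimeGenerated, *) by _ResourceId
+| summarize Machines = count() by OperatingSystemFamily, OperatingSystemFullName
+| order by Machines desc
+```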
+
+### VMProcess records
+
+Records with a type of *VMProcess* have inventory data for TCP-connected processes on servers with the Dependency agent. These records have the properties in the following table:
+
+| Property | Description |
+|:--|:--|
+|TenantId | The unique identifier for the workspace |
+|SourceSystem | *Insights* |
+|TimeGenerated | Timestamp of the record (UTC) |
+|Computer | The computer FQDN |
+|AgentId | The unique ID of the Log Analytics agent |
+|Machine | Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It is of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId. |
+|Process | The unique identifier of the Service Map process. It is in the form of *p-{GUID}*. |
+|ExecutableName | The name of the process executable |
+|DisplayName | Process display name |
+|Role | Process role: *webserver*, *appServer*, *databaseServer*, *ldapServer*, *smbServer* |
+|Group | Process group name. Processes in the same group are logically related, e.g., part of the same product or system component. |
+|StartTime | The process pool start time |
+|FirstPid | The first PID in the process pool |
+|Description | The process description |
+|CompanyName | The name of the company |
+|InternalName | The internal name |
+|ProductName | The name of the product |
+|ProductVersion | The version of the product |
+|FileVersion | The version of the file |
+|ExecutablePath |The path of the executable |
+|CommandLine | The command line |
+|WorkingDirectory | The working directory |
+|Services | An array of services under which the process is executing |
+|UserName | The account under which the process is executing |
+|UserDomain | The domain under which the process is executing |
+|_ResourceId | The unique identifier for a process within the workspace |
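+
+For example, a sketch that lists the latest process inventory, limited to processes with a recognized server role:
+
+```kusto
+// Latest process inventory, limited to processes with a recognized role
+VMProcess
+| summarize arg_max(TimeGenerated, *) by _ResourceId
+| where isnotempty(Role)
+| distinct Computer, ExecutableName, Role
+```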
++
+## Sample map queries
+
+### List all known machines
+
+```kusto
+VMComputer | summarize arg_max(TimeGenerated, *) by _ResourceId
+```
+
+### When was the VM last rebooted
+
+```kusto
+let Today = now(); VMComputer | extend DaysSinceBoot = Today - BootTime | summarize by Computer, DaysSinceBoot, BootTime | sort by BootTime asc
+```
+
+### Summary of Azure VMs by image, location, and SKU
+
+```kusto
+VMComputer | where AzureLocation != "" | summarize by Computer, AzureImageOffering, AzureLocation, AzureImageSku
+```
+
+### List the physical memory capacity of all managed computers
+
+```kusto
+VMComputer | summarize arg_max(TimeGenerated, *) by _ResourceId | project PhysicalMemoryMB, Computer
+```
+
+### List computer name, DNS, IP, and OS
+
+```kusto
+VMComputer | summarize arg_max(TimeGenerated, *) by _ResourceId | project Computer, OperatingSystemFullName, DnsNames, Ipv4Addresses
+```
+
+### Find all processes with "sql" in the command line
+
+```kusto
+VMProcess | where CommandLine contains_cs "sql" | summarize arg_max(TimeGenerated, *) by _ResourceId
+```
+
+### Find a machine (most recent record) by resource name
+
+```kusto
+search in (VMComputer) "m-4b9c93f9-bc37-46df-b43c-899ba829e07b" | summarize arg_max(TimeGenerated, *) by _ResourceId
+```
+
+### Find a machine (most recent record) by IP address
+
+```kusto
+search in (VMComputer) "10.229.243.232" | summarize arg_max(TimeGenerated, *) by _ResourceId
+```
+
+### List all known processes on a specified machine
+
+```kusto
+VMProcess | where Machine == "m-559dbcd8-3130-454d-8d1d-f624e57961bc" | summarize arg_max(TimeGenerated, *) by _ResourceId
+```
+
+### List all computers running SQL Server
+
+```kusto
+VMComputer | where Machine in ((search in (VMProcess) "*sql*" | distinct Machine)) | distinct Computer
+```
+
+### List all unique product versions of curl in my datacenter
+
+```kusto
+VMProcess | where ExecutableName == "curl" | distinct ProductVersion
+```
+
+### Create a computer group of all computers running CentOS
+
+```kusto
+VMComputer | where OperatingSystemFullName contains_cs "CentOS" | distinct Computer
+```
+
+### Bytes sent and received trends
+
+```kusto
+VMConnection | summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated,1hr), Computer | order by Computer desc | render timechart
+```
+
+### Which Azure VMs are transmitting the most bytes
+
+```kusto
+VMConnection | join kind=fullouter(VMComputer) on $left.Computer == $right.Computer | summarize TotalBytesSent = sum(BytesSent) by Computer, AzureSize | sort by TotalBytesSent desc
+```
+
+### Link status trends
+
+```kusto
+VMConnection | where TimeGenerated >= ago(24hr) | where Computer == "acme-demo" | summarize dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h) | render timechart
+```
+
+### Connection failures trend
+
+```kusto
+VMConnection | where Computer == "acme-demo" | extend bythehour = datetime_part("hour", TimeGenerated) | project bythehour, LinksFailed | summarize failCount = count() by bythehour | sort by bythehour asc | render timechart
+```
+
+### Bound Ports
+
+```kusto
+VMBoundPort
+| where TimeGenerated >= ago(24hr)
+| where Computer == 'admdemo-appsvr'
+| distinct Port, ProcessName
+```
+
+### Number of open ports across machines
+
+```kusto
+VMBoundPort
+| where Ip != "127.0.0.1"
+| summarize by Computer, Machine, Port, Protocol
+| summarize OpenPorts=count() by Computer, Machine
+| order by OpenPorts desc
+```
+
+### Score processes in your workspace by the number of ports they have open
+
+```kusto
+VMBoundPort
+| where Ip != "127.0.0.1"
+| summarize by ProcessName, Port, Protocol
+| summarize OpenPorts=count() by ProcessName
+| order by OpenPorts desc
+```
+
+### Aggregate behavior for each port
+
+This query can then be used to score ports by activity, for example, ports with the most inbound/outbound traffic or ports with the most connections.
+```kusto
+//
+VMBoundPort
+| where Ip != "127.0.0.1"
+| summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived), LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated, LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
+| project-away TimeGenerated
+| order by Machine, Computer, Port, Ip, ProcessName
+```
+
+### Summarize the outbound connections from a group of machines
+
+```kusto
+// the machines of interest
+let machines = datatable(m: string) ["m-82412a7a-6a32-45a9-a8d6-538354224a25"];
+// map of ip to monitored machine in the environment
+let ips=materialize(VMComputer
+| summarize ips=makeset(todynamic(Ipv4Addresses)) by MonitoredMachine=AzureResourceName
+| mvexpand ips to typeof(string));
+// all connections to/from the machines of interest
+let out=materialize(VMConnection
+| where Machine in (machines)
+| summarize arg_max(TimeGenerated, *) by ConnectionId);
+// connections to localhost augmented with RemoteMachine
+let local=out
+| where RemoteIp startswith "127."
+| project ConnectionId, Direction, Machine, Process, ProcessName, SourceIp, DestinationIp, DestinationPort, Protocol, RemoteIp, RemoteMachine=Machine;
+// connections not to localhost augmented with RemoteMachine
+let remote=materialize(out
+| where RemoteIp !startswith "127."
+| join kind=leftouter (ips) on $left.RemoteIp == $right.ips
+| summarize by ConnectionId, Direction, Machine, Process, ProcessName, SourceIp, DestinationIp, DestinationPort, Protocol, RemoteIp, RemoteMachine=MonitoredMachine);
+// the remote machines to/from which we have connections
+let remoteMachines = remote | summarize by RemoteMachine;
+// all augmented connections
+(local)
+| union (remote)
+// Take all outbound records, but only inbound records that come from either
+// unmonitored machines or monitored machines not in the set for which we are computing dependencies.
+| where Direction == 'outbound' or (Direction == 'inbound' and RemoteMachine !in (machines))
+| summarize by ConnectionId, Direction, Machine, Process, ProcessName, SourceIp, DestinationIp, DestinationPort, Protocol, RemoteIp, RemoteMachine
+// identify the remote port
+| extend RemotePort=iff(Direction == 'outbound', DestinationPort, 0)
+// construct the join key we'll use to find a matching port
+| extend JoinKey=strcat_delim(':', RemoteMachine, RemoteIp, RemotePort, Protocol)
+// find a matching port
+| join kind=leftouter (VMBoundPort
+| where Machine in (remoteMachines)
+| summarize arg_max(TimeGenerated, *) by PortId
+| extend JoinKey=strcat_delim(':', Machine, Ip, Port, Protocol)) on JoinKey
+// aggregate the remote information
+| summarize Remote=makeset(iff(isempty(RemoteMachine), todynamic('{}'), pack('Machine', RemoteMachine, 'Process', Process1, 'ProcessName', ProcessName1))) by ConnectionId, Direction, Machine, Process, ProcessName, SourceIp, DestinationIp, DestinationPort, Protocol
+```
+
+## Performance records
+Records with a type of *InsightsMetrics* have performance data from the guest operating system of the virtual machine. These records have the properties in the following table:
++
+| Property | Description |
+|:--|:--|
+|TenantId | Unique identifier for the workspace |
+|SourceSystem | *Insights* |
+|TimeGenerated | Time the value was collected (UTC) |
+|Computer | The computer FQDN |
+|Origin | *vm.azm.ms* |
+|Namespace | Category of the performance counter |
+|Name | Name of the performance counter |
+|Val | Collected value |
+|Tags | Related details about the record. See the table below for tags used with different record types. |
+|AgentId | Unique identifier for each computer's agent |
+|Type | *InsightsMetrics* |
+|_ResourceId | Resource ID of the virtual machine |
+
+The performance counters currently collected into the *InsightsMetrics* table are listed in the following table:
+
+| Namespace | Name | Description | Unit | Tags |
+|:|:|:|:|:|
+| Computer | Heartbeat | Computer Heartbeat | | |
+| Memory | AvailableMB | Memory Available Bytes | Megabytes | memorySizeMB - Total memory size|
+| Network | WriteBytesPerSecond | Network Write Bytes Per Second | BytesPerSecond | NetworkDeviceId - Id of the device<br>bytes - Total sent bytes |
+| Network | ReadBytesPerSecond | Network Read Bytes Per Second | BytesPerSecond | networkDeviceId - Id of the device<br>bytes - Total received bytes |
+| Processor | UtilizationPercentage | Processor Utilization Percentage | Percent | totalCpus - Total CPUs |
+| LogicalDisk | WritesPerSecond | Logical Disk Writes Per Second | CountPerSecond | mountId - Mount ID of the device |
+| LogicalDisk | WriteLatencyMs | Logical Disk Write Latency Millisecond | MilliSeconds | mountId - Mount ID of the device |
+| LogicalDisk | WriteBytesPerSecond | Logical Disk Write Bytes Per Second | BytesPerSecond | mountId - Mount ID of the device |
+| LogicalDisk | TransfersPerSecond | Logical Disk Transfers Per Second | CountPerSecond | mountId - Mount ID of the device |
+| LogicalDisk | TransferLatencyMs | Logical Disk Transfer Latency Millisecond | MilliSeconds | mountId - Mount ID of the device |
+| LogicalDisk | ReadsPerSecond | Logical Disk Reads Per Second | CountPerSecond | mountId - Mount ID of the device |
+| LogicalDisk | ReadLatencyMs | Logical Disk Read Latency Millisecond | MilliSeconds | mountId - Mount ID of the device |
+| LogicalDisk | ReadBytesPerSecond | Logical Disk Read Bytes Per Second | BytesPerSecond | mountId - Mount ID of the device |
+| LogicalDisk | FreeSpacePercentage | Logical Disk Free Space Percentage | Percent | mountId - Mount ID of the device |
+| LogicalDisk | FreeSpaceMB | Logical Disk Free Space Bytes | Megabytes | mountId - Mount ID of the device<br>diskSizeMB - Total disk size |
+| LogicalDisk | BytesPerSecond | Logical Disk Bytes Per Second | BytesPerSecond | mountId - Mount ID of the device |
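+
+For example, a hedged sketch that charts the hourly average of available memory per computer (the alias is illustrative). The *Tags* column is a JSON string and can be parsed with `todynamic(Tags)` to read values such as *memorySizeMB*.
+
+```kusto
+// Available memory trend per computer, hourly average
+InsightsMetrics
+| where Origin == "vm.azm.ms" and Namespace == "Memory" and Name == "AvailableMB"
+| summarize AvgAvailableMB = avg(Val) by bin(TimeGenerated, 1h), Computer
+| render timechart
+```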
++
+## Next steps
+
+* If you are new to writing log queries in Azure Monitor, review [how to use Log Analytics](../logs/log-analytics-tutorial.md) in the Azure portal to write log queries.
+
+* Learn about [writing search queries](../logs/get-started-queries.md).
azure-monitor Vminsights Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-maps.md
description: Map is a feature of VM insights. It automatically discovers applica
Previously updated : 03/20/2020 Last updated : 06/08/2022
azure-monitor Vminsights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-optout.md
description: This article describes how to stop monitoring your virtual machines
Previously updated : 03/12/2020 Last updated : 06/08/2022
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
description: Overview of VM insights, which monitors the health and performance
Previously updated : 07/22/2020- Last updated : 06/08/2022 # Overview of VM insights VM insights monitors the performance and health of your virtual machines and virtual machine scale sets, including their running processes and dependencies on other resources. It can help deliver predictable performance and availability of vital applications by identifying performance bottlenecks and network issues and can also help you understand whether an issue is related to other dependencies.
+> [!NOTE]
+> VM insights does not currently support [Azure Monitor agent](../agents/azure-monitor-agent-overview.md). You can
+ VM insights supports Windows and Linux operating systems on the following machines: - Azure virtual machines
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
description: Performance is a feature of the VM insights that automatically disc
Previously updated : 05/31/2020 Last updated : 06/08/2022
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-troubleshoot.md
description: Troubleshoot VM insights installation.
Previously updated : 03/15/2021 Last updated : 06/08/2022
azure-netapp-files Convert Nfsv3 Nfsv41 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/convert-nfsv3-nfsv41.md
na Previously updated : 12/14/2021 Last updated : 06/06/2022 # Convert an NFS volume between NFSv3 and NFSv4.1
This section shows you how to convert the NFSv3 volume to NFSv4.1.
2. Convert the NFS version: 1. In the Azure portal, navigate to the NFS volume that you want to convert.
- 2. Click **Edit**.
+ 2. Select **Edit**.
 3. In the Edit window that appears, select **NFSv4.1** in the **Protocol type** pulldown. ![screenshot that shows the Edit menu with the Protocol Type field](../media/azure-netapp-files/edit-protocol-type.png)
This section shows you how to convert the NFSv4.1 volume to NFSv3.
> [!IMPORTANT] > Converting a volume from NFSv4.1 to NFSv3 will result in all NFSv4.1 features such as ACLs and file locking to become unavailable.
-1. Before converting the volume, unmount it from the clients in preparation. See [Mount or unmount a volume](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md).
-
- Example:
- `sudo umount /path/to/vol1`
+1. Before converting the volume:
+ 1. Unmount it from the clients in preparation. See [Mount or unmount a volume](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md).
+ Example:
+ `sudo umount /path/to/vol1`
+ 2. Change the export policy to read-only. See [Configure export policy for NFS or dual-protocol volumes](azure-netapp-files-configure-export-policy.md).
2. Convert the NFS version: 1. In the Azure portal, navigate to the NFS volume that you want to convert.
- 2. Click **Edit**.
+ 2. Select **Edit**.
 3. In the Edit window that appears, select **NFSv3** in the **Protocol type** pulldown. ![screenshot that shows the Edit menu with the Protocol Type field](../media/azure-netapp-files/edit-protocol-type.png)
This section shows you how to convert the NFSv4.1 volume to NFSv3.
Example: `mount -v | grep /path/to/vol1`
- `vol1:/path/to/vol1 on /path type nfs (rw,intr,tcp,nfsvers=3,rsize=16384,wsize=16384,addr=192.168.1.1)`
+ `vol1:/path/to/vol1 on /path type nfs (rw,intr,tcp,nfsvers=3,rsize=16384,wsize=16384,addr=192.168.1.1)`.
+
+7. Change the read-only export policy back to the original export policy. See [Configure export policy for NFS or dual-protocol volumes](azure-netapp-files-configure-export-policy.md).
-7. Verify access using root and non-root users.
+8. Verify access using root and non-root users.
## Next steps
azure-video-indexer Compare Video Indexer With Media Services Presets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compare-video-indexer-with-media-services-presets.md
# Compare Azure Media Services v3 presets and Azure Video Indexer
-This article compares the capabilities of **Azure Video Indexer (formerly Video Indexer) APIs** and **Media Services v3 APIs**.
+This article compares the capabilities of **Azure Video Indexer APIs** and **Media Services v3 APIs**.
Currently, there is an overlap between features offered by the [Azure Video Indexer APIs](https://api-portal.videoindexer.ai/) and the [Media Services v3 APIs](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01/Encoding.json). The following table offers the current guideline for understanding the differences and similarities.
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
The article also covers [Linking an Azure Video Indexer account to Azure Governm
If the connection to Azure failed, you can attempt to troubleshoot the problem by connecting manually. > [!NOTE]
-> It's mandatory to have the following three accounts in the same region: the Azure Video Indexer account that you're connecting with the Media Services account, as well as the Azure storage account connected to the same Media Services account.
+> It's mandatory to have the following three accounts in the same region: the Azure Video Indexer account that you're connecting with the Media Services account, as well as the Azure storage account connected to the same Media Services account. When you create an Azure Video Indexer account and connect it to Media Services, the media and metadata files are stored in the Azure storage account associated with that Media Services account.
### Create and configure a Media Services account
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
The following schemas are in use by Azure Video Indexer
## Next steps <!-- replace below with the proper link to your main monitoring service article -->-- See [Monitoring Azure Azure Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure Video Indexer.
+- See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure Video Indexer.
- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
Title: How to enable network security
-description: This article gives an overview of the Azure Video Indexer (formerly Video Analyzer for Media) network security options.
+description: This article gives an overview of the Azure Video Indexer network security options.
Last updated 04/11/2022
# NSG service tags for Azure Video Indexer
-Azure Video Indexer (formerly Video Analyzer for Media) is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (i.e AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](/azure/virtual-network/service-tags-overview). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
+Azure Video Indexer is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](/azure/virtual-network/service-tags-overview). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
## Get started with service tags
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
Last updated 12/17/2021
# Index your videos stored on OneDrive
-This article shows how to index videos stored on OneDrive by using the Azure Video Indexer (formerly Azure Azure Video Indexer) website.
+This article shows how to index videos stored on OneDrive by using the Azure Video Indexer website.
## Supported file formats
This parameter specifies the URL of the video or audio file to be indexed. If th
### Code sample
+> [!NOTE]
+> The following sample is intended for Classic accounts only and isn't compatible with ARM accounts. For an updated sample for ARM, see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ApiUsage/ArmBased/Program.cs).
+ The following C# code snippets demonstrate the usage of all the Azure Video Indexer APIs together. ### [Classic account](#tab/With-classic-account/)
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
Azure Video Indexer makes an inference of main topics from transcripts. When pos
} ] },
-` ` `
+ ``` ## Next steps
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Title: What is Azure Video Indexer? description: This article gives an overview of the Azure Video Indexer service. Previously updated : 02/15/2022 Last updated : 06/09/2022
Azure Video Indexer is a cloud application, part of Azure Applied AI Services, built on Azure Media Services and Azure Cognitive Services (such as the Face, Translator, Computer Vision, and Speech). It enables you to extract the insights from your videos using Azure Video Indexer video and audio models.
-To start extracting insights with Azure Video Indexer, you need to create an account and upload videos. When you upload your videos to Azure Video Indexer, it analyses both visuals and audio by running different AI models. As Azure Video Indexer analyzes your video, the insights that are extracted by the AI models.
-
-When you create an Azure Video Indexer account and connect it to Media Services, the media and metadata files are stored in the Azure storage account associated with that Media Services account. For more information, see [Create an Azure Video Indexer account connected to Azure](connect-to-azure.md).
-
-The following diagram is an illustration and not a technical explanation of how Azure Video Indexer works in the backend.
+Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Below is an illustration of the audio and video analysis performed by Azure Video Indexer in the background.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-overview/model-chart.png" alt-text="Azure Video Indexer flow diagram":::
+To start extracting insights with Azure Video Indexer, you need to [create an account](connect-to-azure.md) and upload videos. For details, see the [how can I get started](#how-can-i-get-started-with-azure-video-indexer) section below.
+ ## Compliance, Privacy and Security As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
To learn about compliance, privacy and security in Azure Video Indexer please vi
Azure Video Indexer's insights can be applied to many scenarios, among them are: * *Deep search*: Use the insights extracted from the video to enhance the search experience across a video library. For example, indexing spoken words and faces can enable the search experience of finding moments in a video where a person spoke certain words or when two people were seen together. Search based on such insights from videos is applicable to news agencies, educational institutes, broadcasters, entertainment content owners, enterprise LOB apps, and in general to any industry that has a video library that users need to search against.
-* *Content creation*: Create trailers, highlight reels, social media content, or news clips based on the insights Azure Video Indexer extracts from your content. Keyframes, scenes markers, and timestamps for the people and label appearances make the creation process much smoother and easier, and allows you to get to the parts of the video you need for the content you're creating.
+* *Content creation*: Create trailers, highlight reels, social media content, or news clips based on the insights Azure Video Indexer extracts from your content. Keyframes, scenes markers, and timestamps of the people and label appearances make the creation process smoother and easier, enabling you to easily get to the parts of the video you need when creating content.
* *Accessibility*: Whether you want to make your content available for people with disabilities or if you want your content to be distributed to different regions using different languages, you can use the transcription and translation provided by Azure Video Indexer in multiple languages. * *Monetization*: Azure Video Indexer can help increase the value of videos. For example, industries that rely on ad revenue (news media, social media, and so on) can deliver relevant ads by using the extracted insights as additional signals to the ad server. * *Content moderation*: Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Before you begin the prerequisites, review the [Performance best practices](#per
## Supported regions
-Azure VMware Solution currently supports the following regions: East US, Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, France Central, Germany West Central, Japan West, North Central US, North Europe, Southeast Asia, Switzerland West, UK South, UK West, US South Central, and West US. The list of supported regions will expand as the preview progresses.
+Azure VMware Solution currently supports the following regions:
+
+**America**: East US, West US, Central US, South Central US, North Central US, Canada East, Canada Central.
+
+**Europe**: North Europe, UK West, UK South, France Central, Switzerland West, Germany West Central.
+
+**Asia**: Southeast Asia, Japan West.
+
+**Australia**: Australia East, Australia Southeast.
+
+**Brazil**: Brazil South.
+
+The list of supported regions will expand as the preview progresses.
## Performance best practices
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
Last updated 05/12/2022
In this article, you'll learn how to enable Public IP to the NSX Edge for your Azure VMware Solution. >[!TIP]
->Before you enable Internet access to your Azure VMware Solution, review the [Internet connectivity design considerations](concepts-design-public-internet-access.md).
+>Before you enable Internet access to your Azure VMware Solution, review the [Internet connectivity design considerations](concepts-design-public-internet-access.md).
Public IP to the NSX Edge is a feature in Azure VMware Solution that enables inbound and outbound internet access for your Azure VMware Solution environment. The Public IP is configured in Azure VMware Solution through the Azure portal and the NSX-T Data center interface within your Azure VMware Solution private cloud. With this capability, you have the following features:
The architecture shows Internet access to and from your Azure VMware Solution pr
:::image type="content" source="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip.png" alt-text="Diagram that shows architecture of Internet access to and from your Azure VMware Solution Private Cloud using a Public IP directly to the NSX Edge." border="false" lightbox="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip.png"::: ## Configure a Public IP in the Azure portal
-1. Log in to the Azure portal.
+1. Log on to the Azure portal.
1. Search for and select Azure VMware Solution. 2. Select the Azure VMware Solution private cloud. 1. In the left navigation, under **Workload Networking**, select **Internet connectivity**. 4. Select the **Connect using Public IP down to the NSX-T Edge** button. >[!TIP]
->Before selecting a Public IP, ensure you understand the implications to your existing environment. For more information, see [Internet connectivity design considerations](concepts-design-public-internet-access.md)
+>Before selecting a Public IP, ensure you understand the implications to your existing environment. For more information, see [Internet connectivity design considerations](concepts-design-public-internet-access.md).
5. Select **Public IP**. :::image type="content" source="media/public-ip-nsx-edge/public-ip-internet-connectivity.png" alt-text="Diagram that shows how to select public IP to the NSX Edge":::
For example, the following rule is set to Match External Address, and this setti
If **Match Internal Address** was specified, the destination would be the internal or private IP address of the VM. For more information on the NSX-T Gateway Firewall see the [NSX-T Gateway Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html)
-The Distributed Firewall may also be used to filter traffic to VMs. This feature is outside the scope of this document. The [NSX-T Distributed Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html) .
+The Distributed Firewall could be used to filter traffic to VMs. This feature is outside the scope of this document. For more information, see the [NSX-T Distributed Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html).
+
+To enable this feature for your subscription, register the `PIPOnNSXEnabled` flag and follow these steps to [set up the preview feature in your Azure subscription](https://docs.microsoft.com/azure/azure-resource-manager/management/preview-features?tabs=azure-portal).
## Next steps
azure-web-pubsub Reference Rest Api Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-rest-api-data-plane.md
+
+ Title: Azure Web PubSub service data plane REST API reference overview
+description: Describes the REST APIs Azure Web PubSub supports to manage the WebSocket connections and send messages to them.
++++ Last updated : 06/09/2022++
+# Azure Web PubSub service data plane REST API reference
+
+![Diagram showing the Web PubSub service workflow.](./media/concept-service-internals/workflow.png)
+
+As illustrated in the workflow graph above, and described in more detail in the [internals](./concept-service-internals.md) article, your app server can send messages to clients or manage the connected clients by using REST APIs exposed by the Web PubSub service. This article describes the REST APIs in detail.
+
+## Using REST API
+
+### Authenticate via Azure Web PubSub Service AccessKey
+
+In each HTTP request, an authorization header with a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is required to authenticate with Azure Web PubSub Service.
+
+<a name="signing"></a>
+#### Signing Algorithm and Signature
+
+`HS256`, namely HMAC-SHA256, is used as the signing algorithm.
+
+You should use the `AccessKey` in Azure Web PubSub Service instance's connection string to sign the generated JWT token.
+
+#### Claims
+
+The following claims are required in the JWT token.
+
+Claim Type | Is Required | Description
+---|---|---
+`aud` | true | Should be the **SAME** as your HTTP request url, trailing slash and query parameters not included. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub`.
+`exp` | true | Epoch time when this token will be expired.
+
+Pseudo code in JavaScript:
+```js
+const bearerToken = jwt.sign({}, connectionString.accessKey, {
+ audience: request.url,
+ expiresIn: "1h",
+ algorithm: "HS256",
+ });
+```
+
+### Authenticate via Azure Active Directory Token (Azure AD Token)
+
+Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
+
+**The difference is** that, in this scenario, the JWT token is generated by Azure Active Directory.
+
+[Learn how to generate Azure AD Tokens](/azure/active-directory/develop/reference-v2-libraries)
+
+You could also use **Role Based Access Control (RBAC)** to authorize the request from your server to Azure Web PubSub Service.
+
+[Learn how to configure Role Based Access Control roles for your resource](./howto-authorize-from-application.md#add-role-assignments-on-azure-portal)
+
+## APIs
+
+| Operation Group | Description |
+|--|-|
+|[Service Status](/rest/api/webpubsub/dataplane/health-api)| Provides operations to check the service status |
+|[Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub)| Provides operations to manage the connections and send messages to them. |
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-technical-details.md
These are top scenarios involving combinations of resources, features and Cloud
| Migration of empty Cloud Service (Cloud Service with no deployment) | Not supported. | | Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins are not recommended](./deploy-prerequisite.md#required-service-definition-file-csdef-updates) for use on Cloud Services (extended support).| | Virtual networks with both PaaS and IaaS deployment |Not Supported <br><br> Move either the PaaS or IaaS deployments into a different virtual network. This will cause downtime. |
-Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge). | The migration will complete, but the role sizes will be updated to use modern role sizes. There is no change in cost or SKU properties and virtual machine will not be rebooted for this change. Update all deployment artifacts to reference these new modern role sizes. For more information, see [Available VM sizes](available-sizes.md)|
+| Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge). | The role sizes need to be updated before migration. Update all deployment artifacts to reference these new modern role sizes. For more information, see [Available VM sizes](available-sizes.md)|
| Migration of Cloud Service to different virtual network | Not supported <br><br> 1. Move the deployment to a different classic virtual network before migration. This will cause downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager <br>2. Move the Cloud Service to a new virtual network. This will cause downtime. | | Cloud Service in a virtual network but does not have an explicit subnet assigned | Not supported. Mitigation involves moving the role into a subnet, which requires a role restart (downtime) |
As part of migration, the resource names are changed, and few Cloud Services fea
Validate is designed to be quick. Prepare is longest running and takes some time depending on total number of role instances being migrated. Abort and commit can also take time but will take less time compared to prepare. All operations will time out after 24 hrs. ## Next steps
-For assistance migrating your Cloud Services (classic) deployment to Cloud Services (extended support) see our [Support and troubleshooting](support-help.md) landing page.
+For assistance migrating your Cloud Services (classic) deployment to Cloud Services (extended support) see our [Support and troubleshooting](support-help.md) landing page.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following table lists the released languages and public preview languages.
|English (United Kingdom)|`en-GB`<sup>Public preview</sup> | |English (United States)|`en-US`<sup>General available</sup>| |French (France)|`fr-FR`<sup>Public preview</sup> |
+|German (Germany)|`de-DE`<sup>Public preview</sup> |
|Spanish (Spain)|`es-ES`<sup>Public preview</sup> | > [!NOTE]
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/create-sas-tokens.md
Title: Create shared access signature (SAS) tokens for containers and blobs with Microsoft Storage Explorer
+ Title: Create shared access signature (SAS) tokens for storage containers and blobs
description: How to create Shared Access Signature tokens (SAS) for containers and blobs with Microsoft Storage Explorer and the Azure portal.
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
If you try to access the resultUrl directly, you will get a 404 error. You must
```bash curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
- "ImportJobOptions": {"fileUri": "FILE-URI-PATH"}
+ "fileUri": "FILE-URI-PATH"
}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/:import?api-version=2021-10-01&format=tsv' ```
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of features, which are currently available i
| | Place a group call with PSTN participants | ✔️ | | | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | | | Dial-out from a group call as a PSTN participant | ✔️ |
+| | Support for early media | ❌ |
| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | | Device Management | Ask for permission to use audio and/or video | ✔️ | | | Get camera list | ✔️ |
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following list presents the set of features which are currently available in
| | Place a group call with PSTN participants | ✔️ | ✔️ | ✔️ | ✔️ | | | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | ✔️ | ✔️ | ✔️ | | | Dial-out from a group call as a PSTN participant | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Support for early media | ❌ | ✔️ | ✔️ | ✔️ |
| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | ✔️ | ✔️ | ✔️ | | Device Management | Ask for permission to use audio and/or video | ✔️ | ✔️ | ✔️ | ✔️ | | | Get camera list | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Browser Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/browser-support.md
Title: How to verify if your application is running in a web browser supported by Azure Communication Services
+ Title: Verify if a web browser is supported
+ description: Learn how to get current browser environment details using the Azure Communication Services Calling SDK for JavaScript -+ Previously updated : 05/27/2022- ++ Last updated : 06/08/2021++
+#Customer intent: As a developer, I can verify that a browser an end user is trying to do a call on is supported by Azure Communication Services.
+ # How to verify if your application is running in a web browser supported by Azure Communication Services
confidential-computing Confidential Nodes Aks Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-addon.md
The SGX Device plugin implements the Kubernetes device plugin interface for Encl
## PSW with SGX quote helper
-Enclave applications that do remote attestation need to generate a quote. The quote provides cryptographic proof of the identity and the state of the application, along with the enclave's host environment. Quote generation relies on certain trusted software components from Intel, which are part of the SGX Platform Software Components (PSW/DCAP). This PSW is packaged as a daemon set that runs per node. You can use the PSW when requesting attestation quote from enclave apps. Using the AKS provided service helps better maintain the compatibility between the PSW and other SW components in the host. Read the feature details below.
+Enclave applications that do remote attestation need to generate a quote. The quote provides cryptographic proof of the identity and the state of the application, along with the enclave's host environment. Quote generation relies on certain trusted software components from Intel, which are part of the SGX Platform Software Components (PSW/DCAP). This PSW is packaged as a daemon set that runs per node. You can use PSW when requesting an attestation quote from enclave apps. Using the AKS provided service helps better maintain the compatibility between the PSW and other SW components in the host. Read the feature details below.
[Enclave applications](confidential-computing-enclaves.md) that do remote attestation require a generated quote. This quote provides cryptographic proof of the application's identity, state, and running environment. The generation requires trusted software components that are part of Intel's PSW.
Enclave applications that do remote attestation need to generate a quote. The qu
> [!NOTE] > This feature is only required for DCsv2/DCsv3 VMs that use specialized Intel SGX hardware.
-Intel supports two attestation modes to run the quote generation. For how to choose which type, see the [attestation type differences](#attestation-type-differences).
+Intel supports two attestation modes to run the quote generation. For how to choose which type, see the [attestation type differences](#attestation-type-differences).
- **in-proc**: hosts the trusted software components inside the enclave application process. This method is useful when you are performing local attestation (between 2 enclave apps in a single VM node) - **out-of-proc**: hosts the trusted software components outside of the enclave application. This is a preferred method when performing remote attestation.
-SGX applications built using Open Enclave SDK by default use in-proc attestation mode. SGX-based applications allow out-of-proc and require extra hosting. These applications expose the required components such as Architectural Enclave Service Manager (AESM), external to the application.
+SGX applications built using the Open Enclave SDK use in-proc attestation mode by default. SGX-based applications allow out-of-proc and require extra hosting. These applications expose the required components such as Architectural Enclave Service Manager (AESM), external to the application.
It's highly recommended to use this feature. This feature enhances uptime for your enclave apps during Intel Platform updates or DCAP driver updates.
It's highly recommended to use this feature. This feature enhances uptime for yo
No updates are required for quote generation components of PSW for each containerized application.
-With out-of-proc, container owners donΓÇÖt need to manage updates within their container. Container owners instead rely on the provided interface that invokes the centralized service outside of the container. The provider update sand manages this service.
+With out-of-proc, container owners don't need to manage updates within their container. Container owners instead rely on the provided interface that invokes the centralized service outside of the container.
-For out-of-proc, there's not a concern of failures because of out-of-date PSW components. The quote generation involves the trusted SW components - Quoting Enclave (QE) & Provisioning Certificate Enclave (PCE), which are part of the trusted computing base (TCB). These SW components must be up to date to maintain the attestation requirements. The provider manages the updates to these components. Customers never have to deal with attestation failures because of out-of-date trusted SW components within their container.
+For out-of-proc, there's no concern of failures because of out-of-date PSW components. The quote generation involves the trusted SW components - Quoting Enclave (QE) & Provisioning Certificate Enclave (PCE), which are part of the trusted computing base (TCB). These SW components must be up to date to maintain the attestation requirements. The provider manages the updates to these components. Customers never have to deal with attestation failures because of out-of-date trusted SW components within their container.
Out-of-proc makes better use of EPC memory. In in-proc attestation mode, each enclave application instantiates its own copy of QE and PCE for remote attestation. With out-of-proc, the container doesn't host those enclaves and doesn't consume enclave memory from the container quota.
The out-of-proc attestation model works for confidential workloads. The quote re
![Diagram of quote requestor and quote generation interface.](./media/confidential-nodes-out-of-proc-attestation/aesmmanager.png)
-The abstract model applies to confidential workload scenarios. This model uses already available AESM service. AESM is containerized and deployed as a daemon set across the Kubernetes cluster. Kubernetes guarantees a single instance of an AESM service container, wrapped in a pod, to be deployed on each agent node. The new SGX Quote daemon set has a dependency on the `sgx-device-plugin` daemon set, since the AESM service container would request EPC memory from `sgx-device-plugin` for launching QE and PCE enclaves.
+The abstract model applies to confidential workload scenarios. This model uses the already available AESM service. AESM is containerized and deployed as a daemon set across the Kubernetes cluster. Kubernetes guarantees a single instance of an AESM service container, wrapped in a pod, to be deployed on each agent node. The new SGX Quote daemon set has a dependency on the `sgx-device-plugin` daemon set, since the AESM service container would request EPC memory from `sgx-device-plugin` for launching QE and PCE enclaves.
Each container needs to opt in to use out-of-proc quote generation by setting the environment variable `SGX_AESM_ADDR=1` during creation. The container also must include the package `libsgx-quote-ex`, which directs the request to the default Unix domain socket. An application can still use in-proc attestation as before. However, you can't simultaneously use both in-proc and out-of-proc within an application. The out-of-proc infrastructure is available by default and consumes resources. > [!NOTE]
-> If you are using a Intel SGX wrapper software(OSS/ISV) to run you unmodified containers the attestation interaction with hardware is typically handled for your higher level apps. Please refer to the attestation implementation per provider.
+> If you are using Intel SGX wrapper software (OSS/ISV) to run your unmodified containers, the attestation interaction with the hardware is typically handled for your higher-level apps. Refer to the attestation implementation per provider.
### Sample implementation
-The below docker file is a sample for an Open Enclave-based application. Set the `SGX_AESM_ADDR=1` environment variable in the Docker file. Or, set the variable in the deployment file. Follow this sample for the Docker file and deployment YAML details.
+By default, this service isn't enabled for your AKS cluster with the "confcom" add-on. Update the add-on with the following command:
+
+```azurecli
+az aks addon update --addon confcom --name "YourAKSClusterName" --resource-group "YourResourceGroup" --enable-sgxquotehelper
+```
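+
+To confirm that the quote helper is now enabled on the cluster, you can inspect the add-on profiles. This is an optional check, not part of the official setup; the placeholder names match the command above.
+
+```azurecli
+# Review the add-on profiles reported for the cluster and verify that the
+# confcom (SGX) profile shows the quote helper as enabled.
+az aks show --name "YourAKSClusterName" --resource-group "YourResourceGroup" --query "addonProfiles"
+```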
+Once the service is up, use the following Docker sample for an Open Enclave-based application to validate the flow. Set the `SGX_AESM_ADDR=1` environment variable in the Docker file, or set the variable in the deployment file. Follow this sample for the Docker file and deployment YAML details.
> [!Note]
-> The **libsgx-quote-ex** package from Intel needs to be packaged in the application container for out-of-proc attestation to work properly.
+> The **libsgx-quote-ex** package from Intel needs to be packaged in the application container for out-of-proc attestation to work properly. The instructions below have the details.
```yaml # Refer to Intel_SGX_Installation_Guide_Linux for detail
RUN apt-get update && apt-get install -y \
WORKDIR /opt/openenclave/share/openenclave/samples/remote_attestation RUN . /opt/openenclave/share/openenclave/openenclaverc \ && make build
-# this sets the flag for out of proc attestation mode. alternatively you can set this flag on the deployment files
+# This sets the flag for out-of-proc attestation mode. Alternatively, you can set this flag in the deployment files
ENV SGX_AESM_ADDR=1 CMD make run
spec:
path: /var/run/aesmd ```
+The deployment should succeed and allow your apps to perform remote attestation using the SGX Quote Helper service.
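+
+If you prefer to set the opt-in flag in the deployment rather than baking it into the image, the following is a minimal sketch of a pod spec. The pod, container, and image names are placeholders, and the volume wiring follows the `/var/run/aesmd` path shown in the fragment above; confirm the details against the full sample.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sgx-quote-sample                        # placeholder name
+spec:
+  containers:
+  - name: sgx-app                               # placeholder container name
+    image: <your-registry>/<your-enclave-app>   # image must include libsgx-quote-ex
+    env:
+    - name: SGX_AESM_ADDR                       # opt in to out-of-proc quote generation
+      value: "1"
+    volumeMounts:
+    - name: var-run-aesmd                       # socket used to reach the AESM service
+      mountPath: /var/run/aesmd
+  volumes:
+  - name: var-run-aesmd
+    hostPath:
+      path: /var/run/aesmd
+```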
++ ## Next Steps - [Set up Confidential Nodes (DCsv2/DCsv3-Series) on AKS](./confidential-enclave-nodes-aks-get-started.md)
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
Previously updated : 06/07/2022 Last updated : 06/09/2022 zone_pivot_groups: azure-cli-or-portal
az containerapp env create `
-> [!NOTE]
-> As you call `az conatinerapp create` to create the container app inside your environment, make sure the value for the `--image` parameter is in lower case.
- The following table describes the parameters used in for `containerapp env create`. | Parameter | Description |
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
Previously updated : 06/07/2022 Last updated : 06/09/2022 zone_pivot_groups: azure-cli-or-portal
az containerapp env create `
-> [!NOTE]
-> As you call `az containerapp create` to create the container app inside your environment, make sure the value for the `--image` parameter is in lower case.
- The following table describes the parameters used in `containerapp env create`. | Parameter | Description |
container-registry Container Registry Import Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-import-images.md
az acr repository show-manifests \
--repository hello-world ```
+To import an artifact by digest without adding a tag:
+
+```azurecli
+az acr import \
+ --name myregistry \
+ --source docker.io/library/hello-world@sha256:abc123 \
+ --repository hello-world
+```
+ If you have a [Docker Hub account](https://www.docker.com/pricing), we recommend that you use the credentials when importing an image from Docker Hub. Pass the Docker Hub user name and the password or a [personal access token](https://docs.docker.com/docker-hub/access-tokens/) as parameters to `az acr import`. The following example imports a public image from the `tensorflow` repository in Docker Hub, using Docker Hub credentials: ```azurecli
cosmos-db Cassandra Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-partitioning.md
When data is returned, it is sorted by the clustering key, as expected in Apache
:::image type="content" source="./media/cassandra-partitioning/select-from-pk.png" alt-text="Screenshot that shows the returned data that is sorted by the clustering key."::: > [!WARNING]
-> When querying data, if you want to filter *only* on the partition key value element of a compound primary key (as is the case above), ensure that you *explicitly add a secondary index on the partition key*:
+> When querying data in a table that has a compound primary key, if you want to filter on the partition key *and* any other non-indexed fields aside from the clustering key, ensure that you *explicitly add a secondary index on the partition key*:
> > ```shell > CREATE INDEX ON uprofile.user (user);
cosmos-db Secondary Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/secondary-indexing.md
It's not advised to create an index on a frequently updated column. It is pruden
> - Clustering keys > [!WARNING]
-> If you have a [compound primary key](cassandra-partitioning.md#compound-primary-key) in your table, and you want to filter *only* on the partition key value element of the compound primary key, please ensure that you *explicitly add a secondary index on the partition key*. Azure Cosmos DB Cassandra API does not apply indexes to partition keys by default, and the index in this scenario may significantly improve query performance. Review our article on [partitioning](cassandra-partitioning.md) for more information.
+> Partition keys are not indexed by default in the Cassandra API. If you have a [compound primary key](cassandra-partitioning.md#compound-primary-key) in your table and you filter on the partition key and clustering key, or on the partition key alone, queries behave as expected. However, if you filter on the partition key and any other non-indexed field aside from the clustering key, the query results in a partition key fan-out, even if the other non-indexed fields have a secondary index. In that case, ensure that you explicitly add a secondary index on the *partition key*. The index in this scenario should significantly improve query performance, even if the other non-partition key and non-clustering key fields have no index. Review our article on [partitioning](cassandra-partitioning.md) for more information.
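+
+As a rough illustration of the guidance in the warning above, the following sketch uses a hypothetical table whose partition key is `user` and whose clustering key is `id`:
+
+```shell
+-- Hypothetical table: 'user' is the partition key, 'id' is the clustering key
+CREATE TABLE uprofile.user (user text, id int, displayname text, PRIMARY KEY (user, id));
+
+-- Filtering on user (with or without id) behaves as expected with no extra index.
+-- Filtering on user plus displayname fans out across partitions unless the
+-- partition key itself is indexed explicitly:
+CREATE INDEX ON uprofile.user (user);
+```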
## Indexing example
cosmos-db Sql Api Dotnet V2sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v2sdk-samples.md
Title: 'Azure Cosmos DB: .NET examples for the SQL API' description: Find C# .NET examples on GitHub for common tasks using the Azure Cosmos DB SQL API, including CRUD operations.--+++
cosmos-db Sql Api Dotnet V3sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v3sdk-samples.md
Title: 'Azure Cosmos DB: .NET (Microsoft.Azure.Cosmos) examples for the SQL API' description: Find the C# .NET v3 SDK examples on GitHub for common tasks by using the Azure Cosmos DB SQL API.--+++ Last updated 05/02/2020 - # Azure Cosmos DB .NET v3 SDK (Microsoft.Azure.Cosmos) examples for the SQL API
cost-management-billing Consumption Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/consumption-api-overview.md
# Azure consumption API overview
-The Azure Consumption APIs give you programmatic access to cost and usage data for your Azure resources. These APIs currently only support Enterprise Enrollments and Web Direct Subscriptions (with a few exceptions). The APIs are continually updated to support other types of Azure subscriptions.
+The Azure Consumption APIs give you programmatic access to cost and usage data for your Azure resources. These APIs currently only support Enterprise Enrollments, Web Direct Subscriptions (with a few exceptions), and CSP Azure plan subscriptions. The APIs are continually updated to support other types of Azure subscriptions.
Azure Consumption APIs provide access to: - Enterprise and Web Direct Customers
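+
+For example, once your subscription type is supported, you can pull usage details from the command line. The call below is a minimal sketch that uses the Azure CLI's `az consumption` commands (currently in preview); adjust the filters to your scenario.
+
+```azurecli
+# List the ten most recent usage records for the current subscription context
+az consumption usage list --top 10 --output table
+```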
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/subscription-transfer.md
- Title: About transferring billing ownership for an Azure subscription
-description: This article explains the things you need to know before you transfer billing ownership of an Azure subscription to another account.
-keywords: transfer azure subscription, azure transfer subscription, move azure subscription to another account,azure change subscription owner, transfer azure subscription to another account, azure transfer billing
--
-tags: billing,top-support-issue
--- Previously updated : 09/15/2021----
-# About transferring billing ownership for an Azure subscription
-
-This article helps you understand the things you should know before you transfer billing ownership of an Azure subscription to another account.
-
-You might want to transfer billing ownership of your Azure subscription if you're leaving your organization, or you want your subscription to be billed to another account. Transferring billing ownership to another account provides the administrators in the new account permission for billing tasks. They can change the payment method, view charges, and cancel the subscription.
-
-If you want to keep the billing ownership but change the type of your subscription, see [Switch your Azure subscription to another offer](../manage/switch-azure-offer.md). To control who can access resources in the subscription, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
-
-If you're an Enterprise Agreement (EA) customer, your enterprise administrators can transfer billing ownership of your subscriptions between accounts.
-
-Only the billing administrator of an account can transfer ownership of a subscription.
-
-## Determine if you are a billing administrator
-
-<a name="whoisaa"></a>
-
-In effort to do the transfer, locate the person who has access to manage billing for an account. They're authorized to access billing on the [Azure portal](https://portal.azure.com) and do various billing tasks like create subscriptions, view and pay invoices, or update payment methods.
-
-### Check if you have billing access
-
-1. To identify accounts for which you have billing access, visit the [Cost Management + Billing page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/Overview).
-
-2. Select **Billing accounts** from the left-hand menu.
-
-3. The **Billing scope** listing page shows all the subscriptions where you have access to the billing details.
-
-### Check by subscription
-
-1. If you're not sure who the account administrator is for a subscription, visit the [Subscriptions page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade).
-
-2. Select the subscription you want to check.
-
-3. Under the **Settings** heading, select **Properties**. See the **Account Admin** box to understand who is the account administrator of the subscription.
-
- > [!NOTE]
- > Not all subscription types show the Properties.
-
-## Supported subscription types
-
-Subscription transfer in the Azure portal is available for the subscription types listed below. Currently transfer isn't supported for [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/) or [Azure in Open (AIO)](https://azure.microsoft.com/offers/ms-azr-0111p/) subscriptions. For a workaround, see [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). To transfer other subscriptions, like support plans, [contact Azure Support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
--- [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/)<sup>1</sup>-- [Microsoft Partner Network](https://azure.microsoft.com/offers/ms-azr-0025p/) -- [Visual Studio Enterprise (MPN) subscribers](https://azure.microsoft.com/offers/ms-azr-0029p/)-- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/) -- [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0003p/)-- [Pay-As-You-Go Dev/Test](https://azure.microsoft.com/offers/ms-azr-0023p/)-- [Visual Studio Enterprise](https://azure.microsoft.com/offers/ms-azr-0063p/)-- [Visual Studio Enterprise: BizSpark](https://azure.microsoft.com/offers/ms-azr-0064p/)-- [Visual Studio Professional](https://azure.microsoft.com/offers/ms-azr-0059p/)-- [Visual Studio Test Professional](https://azure.microsoft.com/offers/ms-azr-0060p/)-- [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)<sup>2</sup>-
-<sup>1</sup> Using the EA portal.
-
-<sup>2</sup> Only supported for accounts that are created during sign-up on the Azure website.
-
-## Resources transferred with subscriptions
-
-All your resources like VMs, disks, and websites transfer to the new account. However, if you transfer a subscription to an account in another Azure AD tenant, any [administrator roles](../manage/add-change-subscription-administrator.md) and [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) on the subscription don't transfer. Also, [app registrations](../../active-directory/develop/quickstart-register-app.md) and other tenant-specific services don't transfer along with the subscription.
-
-## Transfer account ownership to another country/region
-
-Unfortunately, you can't transfer subscriptions across countries or regions using the Azure portal. However they can get transferred if you open an Azure support request. To create a support request, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
-
-## Transfer a subscription from one account to another
-
-If you're an administrator of two accounts, your can transfer a subscription between your accounts. Your accounts are conceptually considered accounts of two different users so you can transfer subscriptions between your accounts.
-To view the steps needed to transfer your subscription, see [Transfer billing ownership of an Azure subscription](../manage/billing-subscription-transfer.md).
-
-## Transferring a subscription shouldn't create downtime
-
-If you transfer a subscription to an account in the same Azure AD tenant, there's no impact to the resources running in the subscription. However, context information saved in PowerShell isn't updated so you might have to clear it or change settings. If you transfer the subscription to an account in another tenant and decide to move the subscription to the tenant, all users, groups, and service principals who had [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) to access resources in the subscription lose their access. Service downtime might result.
-
-## New account usage and billing history
-
-The only information available to the users for the new account is the last month's cost for your subscription. The rest of the usage and billing history doesn't transfer with the subscription.
-
-## Manually migrate data and services
-
-When you transfer a subscription, its resources stay with it. If you can't transfer subscription ownership, you can manually migrate its resources. For more information, see [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
-
-## Remaining subscription credits
-
-If you have a Visual Studio or Microsoft Partner Network subscription, you get monthly credits. Your credit doesn't carry forward with the subscription in the new account. The user who accepts the transfer request needs to have a Visual Studio license to accept the transfer request. The subscription uses the Visual Studio credit that's available in the user's account. For more information, see [Transferring Visual Studio and Partner Network subscriptions](../manage/billing-subscription-transfer.md#transfer-visual-studio-and-partner-network-subscriptions).
-
-## Users keep access to transferred resources
-
-Keep in mind that users with access to resources in a subscription keep their access when ownership is transferred. However, [administrator roles](../manage/add-change-subscription-administrator.md) and [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) might get removed. Losing access occurs when your account is in an Azure AD tenant other than the subscription's tenant and the user who sent the transfer request moves the subscription to your account's tenant.
-
-You can view the users who have Azure role assignments to access resources in the subscription in the Azure portal. Visit the [Subscription page in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Then select the subscription you want to check, and then select **Access control (IAM)** from the left-hand pane. Next, select **Role assignments** from the top of the page. The role assignments page lists all users who have access on the subscription.
-
-Even if the [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) are removed during transfer, users in the original owner account might continue to have access to the subscription through other security mechanisms, including:
-
-* Management certificates that grant the user admin rights to subscription resources. For more information, see [Create and Upload a Management Certificate for Azure](../../cloud-services/cloud-services-certs-create.md).
-* Access keys for services like Storage. For more information, see [About Azure storage accounts](../../storage/common/storage-account-create.md).
-* Remote Access credentials for services like Azure Virtual Machines.
-
-If the recipient needs to restrict access to resources, they should consider updating any secrets associated with the service. Most resources can be updated. Sign in to the [Azure portal](https://portal.azure.com) and then on the Hub menu, select **All resources**. Next, Select the resource. Then in the resource page, select **Settings**. There you can view and update existing secrets.
-
-## You pay for usage when you receive ownership
-
-Your account is responsible for payment for any usage that is reported from the time of transfer onwards. There may be some usage that took place before transfer but was reported afterwards. The usage is included in your account's bill.
-
-## Use a different payment method
-
-While accepting the transfer request, you can select an existing payment method that's linked to your account or add a new payment method.
-
-## Transfer Enterprise Agreement subscription ownership
-
-The Enterprise Administrator can update account ownership for any account, even after an original account owner is no longer part of the organization. For more information about transferring Azure Enterprise Agreement accounts, see [Azure Enterprise transfers](../manage/ea-transfers.md).
-
-## Need help? Contact us.
-
-If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
-
-## Next steps
--- [Transfer billing ownership of an Azure subscription](../manage/billing-subscription-transfer.md)
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Previously updated : 10/14/2021 Last updated : 06/08/2022 # Automated publishing for continuous integration and delivery
Follow these steps to get started:
- task: NodeTool@0 inputs:
- versionSpec: '10.x'
+ versionSpec: '14.x'
displayName: 'Install Node.js' - task: Npm@1
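+
+For context, the two tasks shown above typically sit at the start of the validation stage of the pipeline; a minimal sketch follows. The working directory placeholder is the folder that contains the package.json referencing the `@microsoft/azure-data-factory-utilities` package.
+
+```yaml
+steps:
+- task: NodeTool@0
+  inputs:
+    versionSpec: '14.x'
+  displayName: 'Install Node.js'
+
+- task: Npm@1
+  inputs:
+    command: 'install'
+    workingDir: '$(Build.Repository.LocalPath)/<folder-with-package.json>'
+    verbose: true
+  displayName: 'Install npm package'
+```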
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Defender for Cloud's supported kill chain intents are based on [version 9 of the
| **LateralMovement** | Lateral movement consists of techniques that enable an adversary to access and control remote systems on a network and could, but does not necessarily, include execution of tools on remote systems. The lateral movement techniques could allow an adversary to gather information from a system without needing additional tools, such as a remote access tool. An adversary can use lateral movement for many purposes, including remote Execution of tools, pivoting to additional systems, access to specific information or files, access to additional credentials, or to cause an effect. | | **Execution** | The execution tactic represents techniques that result in execution of adversary-controlled code on a local or remote system. This tactic is often used in conjunction with lateral movement to expand access to remote systems on a network. | | **Collection** | Collection consists of techniques used to identify and gather information, such as sensitive files, from a target network prior to exfiltration. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. |
-| **Exfiltration** | Exfiltration refers to techniques and attributes that result or aid in the adversary removing files and information from a target network. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. |
| **Command and Control** | The command and control tactic represents how adversaries communicate with systems under their control within a target network. |
+| **Exfiltration** | Exfiltration refers to techniques and attributes that result or aid in the adversary removing files and information from a target network. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. |
| **Impact** | Impact events primarily try to directly reduce the availability or integrity of a system, service, or network; including manipulation of data to impact a business or operational process. This would often refer to techniques such as ransomware, defacement, data manipulation, and others. |
defender-for-cloud Defender For Containers Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-usage.md
Title: How to use Defender for Containers to identify vulnerabilities
+ Title: How to use Defender for Containers to identify vulnerabilities in Microsoft Defender for Cloud
description: Learn how to use Defender for Containers to scan images in your registries Previously updated : 04/28/2022 Last updated : 06/08/2022 # Use Defender for Containers to scan your ACR images for vulnerabilities
-This page explains how to use the built-in vulnerability scanner to scan the container images stored in your Azure Resource Manager-based Azure Container Registry.
+This page explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
-When the scanner, powered by Qualys, reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
+To enable scanning of vulnerabilities in containers, you have to [enable Defender for Containers](defender-for-containers-enable.md). When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
> [!TIP] > You can also scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-containers-cicd.md).
There are four triggers for an image scan:
- **Continuous scan**- This trigger has two modes:
- - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
+ - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
- - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
+ - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
-This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
+This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
Defender for Cloud filters, and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
Defender for Cloud filters, and classifies findings from the scanner. When an im
To enable vulnerability scans of images stored in your Azure Resource Manager-based Azure Container Registry:
-1. Enable **Defender for Containers** for your subscription. Defender for Cloud is now ready to scan images in your registries.
+1. [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
>[!NOTE] > This feature is charged per image.
To create a rule:
## FAQ
-### How does Defender for Cloud scan an image?
-Defender for Cloud pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
+### How does Defender for Containers scan an image?
+
+Defender for Containers pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts. ### Can I get the scan results via REST API?+ Yes. The results are under [Sub-Assessments REST API](/rest/api/securitycenter/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan. ### What registry types are scanned? What types are billed?+ For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](defender-for-container-registries-introduction.md#availability).
-If you connect unsupported registries to your Azure subscription, Defender for Cloud won't scan them and won't bill you for them.
+If you connect unsupported registries to your Azure subscription, Defender for Containers won't scan them and won't bill you for them.
### Can I customize the findings from the vulnerability scanner?+ Yes. If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise. [Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-containers-usage.md#disable-specific-findings). ### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry?+ Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities. ## Next steps
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--|--|
-| Compliance | Docker CIS | VM, VMSS | GA | X | Log Analytics agent | Defender for Servers Plan 2 | |
+| Compliance | Docker CIS | VM, VMSS | GA | X | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Γ£ô (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | Γ£ô (Preview) | Defender profile | Defender for Containers | Commercial clouds | | Hardening | Control plane recommendations | ACR, AKS | GA | Γ£ô | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
defender-for-iot Tutorial Splunk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-splunk.md
To address a lack of visibility into the security and resiliency of OT networks,
The application provides SOC analysts with multidimensional visibility into the specialized OT protocols and IIoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior. The application also enables both IT, and OT incident response from within one corporate SOC. This is an important evolution given the ongoing convergence of IT and OT to support new IIoT initiatives, such as smart machines and real-time intelligence.
-The Splunk application can be installed locally or run on a cloud. The Splunk integration along with Defender for IoT supports both deployments.
+The Splunk application can be installed locally ('Splunk Enterprise') or run in the cloud ('Splunk Cloud'). The Splunk integration with Defender for IoT supports 'Splunk Enterprise' only.
> [!Note] > References to CyberX refer to Microsoft Defender for IoT.
dns Tutorial Alias Pip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-pip.md
Title: 'Tutorial: Create an Azure DNS alias record to refer to an Azure public IP address'
-description: This tutorial shows you how to configure an Azure DNS alias record to reference an Azure public IP address.
+description: In this tutorial, you learn how to configure an Azure DNS alias record to reference an Azure public IP address.
Previously updated : 04/19/2021 Last updated : 06/09/2022 + #Customer intent: As an experienced network administrator, I want to configure an Azure DNS alias record to refer to an Azure public IP address.
-# Tutorial: Configure an alias record to refer to an Azure public IP address
+# Tutorial: Create an alias record to refer to an Azure public IP address
+
+You can create an alias record to reference an Azure resource. An example is an alias record that references an Azure public IP resource.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a network infrastructure.
+> * Create a virtual network and a subnet.
> * Create a web server virtual machine with a public IP. > * Create an alias record that points to the public IP. > * Test the alias record.
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-You must have a domain name available that you can host in Azure DNS to test with. You must have full control of this domain. Full control includes the ability to set the name server (NS) records for the domain.
-For instructions to host your domain in Azure DNS, see [Tutorial: Host your domain in Azure DNS](dns-delegate-domain-azure-dns.md).
+* An Azure account with an active subscription.
+* A domain name hosted in Azure DNS. If you don't have an Azure DNS zone, you can [create a DNS zone](./dns-delegate-domain-azure-dns.md#create-a-dns-zone), then [delegate your domain](dns-delegate-domain-azure-dns.md#delegate-the-domain) to Azure DNS.
+
+> [!NOTE]
+> In this tutorial, `contoso.com` is used as an example. Replace `contoso.com` with your own domain name.
-The example domain used for this tutorial is contoso.com, but use your own domain name.
+## Sign in to Azure
+
+Sign in to the Azure portal at https://portal.azure.com.
## Create the network infrastructure
-First, create a virtual network and a subnet to place your web servers in.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Create a resource** from the left panel of the Azure portal. Enter *resource group* in the search box, and create a resource group named **RG-DNS-Alias-pip**.
-3. Select **Create a resource** > **Networking** > **Virtual network**.
-4. Create a virtual network named **VNet-Server**. Place it in the **RG-DNS-Alias-pip** resource group, and name the subnet **SN-Web**.
+
+Create a virtual network and a subnet to place your web server in.
+
+1. In the Azure portal, enter *virtual network* in the search box at the top of the portal, and then select **Virtual networks** from the search results.
+1. In **Virtual networks**, select **+ Create**.
+1. In **Create virtual network**, enter or select the following information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ |-||
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+   | Resource Group | Select **Create new** </br> In **Name**, enter **RG-DNS-Alias-pip** </br> Select **OK** |
+ | **Instance details** | |
+ | Name | Enter **myPIPVNet** |
+ | Region | Select your region |
+
+1. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+1. In the **IP Addresses** tab, enter the following information:
+
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.10.0.0/16** |
+
+1. Select **+ Add subnet**, and enter this information in the **Add subnet**:
+
+ | Setting | Value |
+ |-|-|
+ | Subnet name | Enter **WebSubnet** |
+ | Subnet address range | Enter **10.10.0.0/24** |
+
+1. Select **Add**.
+1. Select the **Review + create** tab or select the **Review + create** button.
+1. Select **Create**.
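+
+If you prefer the Azure CLI, the equivalent network setup can be sketched as follows. The resource names match the tables above; the location is a placeholder, so use the region you selected.
+
+```azurecli
+# Create the resource group, virtual network, and subnet used in this tutorial
+az group create --name RG-DNS-Alias-pip --location eastus
+
+az network vnet create \
+  --resource-group RG-DNS-Alias-pip \
+  --name myPIPVNet \
+  --address-prefixes 10.10.0.0/16 \
+  --subnet-name WebSubnet \
+  --subnet-prefixes 10.10.0.0/24
+```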
## Create a web server virtual machine
-1. Select **Create a resource** > **Windows Server 2016 VM**.
-2. Enter **Web-01** for the name, and place the VM in the **RG-DNS-Alias-TM** resource group. Enter a username and password, and select **OK**.
-3. For **Size**, select an SKU with 8-GB RAM.
-4. For **Settings**, select the **VNet-Servers** virtual network and the **SN-Web** subnet. For public inbound ports, select **HTTP (80)** > **HTTPS (443)** > **RDP (3389)**, and then select **OK**.
-5. On the **Summary** page, select **Create**.
-This deployment takes a few minutes to complete. The virtual machine will have an attached NIC with a basic dynamic public IP called Web-01-ip. The public IP will change every time the virtual machine is restarted.
+Create a Windows Server virtual machine and then install IIS web server on it.
+
+### Create the virtual machine
+
+Create a Windows Server 2019 virtual machine.
-### Install IIS
+1. In the Azure portal, enter *virtual machine* in the search box at the top of the portal, and then select **Virtual machines** from the search results.
+1. In **Virtual machines**, select **+ Create** and then select **Azure virtual machine**.
+1. In **Create a virtual machine**, enter or select the following information in the **Basics** tab:
-Install IIS on **Web-01**.
+ | **Setting** | **Value** |
+ |||
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **RG-DNS-Alias-pip** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **Web-01** |
+ | Region | Select **(US) East US** |
+ | Availability options | Select **No infrastructure redundancy required** |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2** |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
-1. Connect to **Web-01**, and sign in.
-2. On the **Server Manager** dashboard, select **Add roles and features**.
-3. Select **Next** three times. On the **Server Roles** page, select **Web Server (IIS)**.
-4. Select **Add Features**, and then select **Next**.
-5. Select **Next** four times, and then select **Install**. This procedure takes a few minutes to finish.
-6. After the installation finishes, select **Close**.
-7. Open a web browser. Browse to **localhost** to verify that the default IIS web page appears.
+
+1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+1. In the **Networking** tab, enter or select the following information:
+
+ | Setting | Value |
+ ||-|
+ | **Network interface** | |
+ | Virtual network | **myPIPVNet** |
+ | Subnet | **WebSubnet** |
+ | Public IP | Take the default public IP |
+ | NIC network security group | Select **Basic**|
+ | Public inbound ports | Select **Allow selected ports** |
+ | Select inbound ports | Select **HTTP (80)**, **HTTPS (443)** and **RDP (3389)** |
+
+1. Select **Review + create**.
+1. Review the settings, and then select **Create**.
+
+This deployment may take a few minutes to complete.
+
+> [!NOTE]
+> **Web-01** virtual machine has an attached NIC with a basic dynamic public IP that changes every time the virtual machine is restarted.
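+
+An Azure CLI equivalent of the virtual machine creation above can be sketched as follows. The admin credentials are placeholders, and the open-port commands mirror the inbound rules selected in the portal.
+
+```azurecli
+# Create the Web-01 VM in the existing virtual network and subnet
+az vm create \
+  --resource-group RG-DNS-Alias-pip \
+  --name Web-01 \
+  --image Win2019Datacenter \
+  --vnet-name myPIPVNet \
+  --subnet WebSubnet \
+  --public-ip-sku Basic \
+  --admin-username <username> \
+  --admin-password <password>
+
+# Allow HTTP, HTTPS, and RDP traffic to the VM
+az vm open-port --resource-group RG-DNS-Alias-pip --name Web-01 --port 80 --priority 100
+az vm open-port --resource-group RG-DNS-Alias-pip --name Web-01 --port 443 --priority 110
+az vm open-port --resource-group RG-DNS-Alias-pip --name Web-01 --port 3389 --priority 120
+```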
+
+### Install IIS web server
+
+Install IIS web server on **Web-01**.
+
+1. In the **Overview** page of **Web-01**, select **Connect** and then **RDP**.
+1. In the **RDP** page, select **Download RDP File**.
+1. Open *Web-01.rdp*, and select **Connect**.
+1. Enter the username and password entered during virtual machine creation.
+1. On the **Server Manager** dashboard, select **Manage** then **Add Roles and Features**.
+1. Select **Server Roles** or select **Next** three times. On the **Server Roles** page, select **Web Server (IIS)**.
+1. Select **Add Features**, and then select **Next**.
+1. Select **Confirmation** or select **Next** three times, and then select **Install**. The installation process takes a few minutes to finish.
+1. After the installation finishes, select **Close**.
+1. Open a web browser. Browse to **localhost** to verify that the default IIS web page appears.
+
+ :::image type="content" source="./media/tutorial-alias-pip/iis-web-server.png" alt-text="Screenshot of Internet Explorer showing the I I S Web Server Welcome page.":::
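+
+If you'd rather script the role installation than click through Server Manager, a single PowerShell command on the VM achieves the same result:
+
+```powershell
+# Install the IIS web server role, including the management tools
+Install-WindowsFeature -Name Web-Server -IncludeManagementTools
+```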
## Create an alias record Create an alias record that points to the public IP address.
-1. Select your Azure DNS zone to open the zone.
-2. Select **Record set**.
-3. In the **Name** text box, select **web01**.
-4. Leave the **Type** as an **A** record.
-5. Select the **Alias Record Set** check box.
-6. Select **Choose Azure service**, and then select the **Web-01-ip** public IP address.
+1. In the Azure portal, enter *contoso.com* in the search box at the top of the portal, and then select **contoso.com** DNS zone from the search results.
+1. In the **Overview** page, select the **+ Record set** button.
+1. In the **Add record set**, enter *web01* in the **Name**.
+1. Select **A** for the **Type**.
+1. Select **Yes** for the **Alias record set**, and then select the **Azure Resource** for the **Alias type**.
+1. Select the **Web-01-ip** public IP address for the **Azure resource**.
+1. Select **OK**.
+
+ :::image type="content" source="./media/tutorial-alias-pip/add-public-ip-alias-inline.png" alt-text="Screenshot of adding an alias record to refer to the Azure public IP of the I I S web server using the Add record set page." lightbox="./media/tutorial-alias-pip/add-public-ip-alias-expanded.png":::
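+
+The same alias record can be created from the Azure CLI by pointing the record set at the public IP's resource ID. The sketch below assumes a Bash shell; substitute the resource group that contains your DNS zone.
+
+```azurecli
+# Look up the resource ID of the VM's public IP address
+ipId=$(az network public-ip show --resource-group RG-DNS-Alias-pip --name Web-01-ip --query id --output tsv)
+
+# Create an alias A record named web01 that targets the public IP resource
+az network dns record-set a create \
+  --resource-group <dns-zone-resource-group> \
+  --zone-name contoso.com \
+  --name web01 \
+  --target-resource $ipId
+```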
## Test the alias record
-1. In the **RG-DNS-Alias-pip** resource group, select the **Web-01** virtual machine. Note the public IP address.
-1. From a web browser, browse to the fully qualified domain name for the Web01-01 virtual machine. An example is **web01.contoso.com**. You now see the IIS default web page.
-2. Close the web browser.
-3. Stop the **Web-01** virtual machine, and then restart it.
-4. After the virtual machine restarts, note the new public IP address for the virtual machine.
-5. Open a new browser. Browse again to the fully qualified domain name for the Web01-01 virtual machine. An example is **web01.contoso.com**.
+1. In the Azure portal, enter *virtual machine* in the search box at the top of the portal, and then select **Virtual machines** from the search results.
+1. Select the **Web-01** virtual machine. Note the public IP address in the **Overview** page.
+1. From a web browser, browse to `web01.contoso.com`, which is the fully qualified domain name of the **Web-01** virtual machine. You now see the IIS welcome web page.
+1. Close the web browser.
+1. Stop the **Web-01** virtual machine, and then restart it.
+1. After the virtual machine restarts, note the new public IP address for the virtual machine.
+1. From a web browser, browse again to `web01.contoso.com`.
-This procedure succeeds because you used an alias record to point to the public IP address resource, not a standard A record.
+This procedure succeeds because you used an alias record that points to the public IP resource, rather than a standard A record that points to a specific IP address.
## Clean up resources
-When you no longer need the resources created for this tutorial, delete the **RG-DNS-Alias-pip** resource group.
-
+When no longer needed, you can delete all resources created in this tutorial by deleting the **RG-DNS-Alias-pip** resource group and the alias record **web01** from **contoso.com** DNS zone.
## Next steps
-In this tutorial, you created an alias record to refer to an Azure public IP address. To learn about Azure DNS and web apps, continue with the tutorial for web apps.
+In this tutorial, you created an alias record to refer to an Azure public IP address resource. To learn how to create an alias record to support domain name apex with Traffic Manager, continue with the alias records for Traffic Manager tutorial.
> [!div class="nextstepaction"]
-> [Create DNS records for a web app in a custom domain](./dns-web-sites-custom-domain.md)
+> [Create alias records for Traffic Manager](./tutorial-alias-tm.md)
dns Tutorial Alias Rr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-rr.md
Title: 'Tutorial: Create an alias record to refer to a resource record in a zone'
-description: This tutorial shows you how to configure an Azure DNS alias record to reference a resource record within the zone.
-
+description: In this tutorial, you learn how to configure an alias record to reference a resource record within the zone.
+ + Previously updated : 04/19/2021- Last updated : 06/09/2022+ #Customer intent: As an experienced network administrator, I want to configure Azure an DNS alias record to refer to a resource record within the zone.
Alias records can reference other record sets of the same type. For example, you
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create an alias record for a resource record in the zone.
+> * Create a resource record in the zone.
+> * Create an alias record for the resource record.
> * Test the alias record. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-You must have a domain name available that you can host in Azure DNS to test with. You must have full control of this domain. Full control includes the ability to set the name server (NS) records for the domain.
-For instructions to host your domain in Azure DNS, see [Tutorial: Host your domain in Azure DNS](dns-delegate-domain-azure-dns.md).
+* An Azure account with an active subscription.
+* A domain name hosted in Azure DNS. If you don't have an Azure DNS zone, you can [create a DNS zone](./dns-delegate-domain-azure-dns.md#create-a-dns-zone), then [delegate your domain](dns-delegate-domain-azure-dns.md#delegate-the-domain) to Azure DNS.
+
+> [!NOTE]
+> In this tutorial, `contoso.com` is used as an example. Replace `contoso.com` with your own domain name.
+
+## Sign in to Azure
+Sign in to the Azure portal at https://portal.azure.com.
## Create an alias record Create an alias record that points to a resource record in the zone. ### Create the target resource record
-1. Select your Azure DNS zone to open the zone.
-2. Select **Record set**.
-3. In the **Name** text box, enter **server**.
-4. For the **Type**, select **A**.
-5. In the **IP ADDRESS** text box, enter **10.10.10.10**.
-6. Select **OK**.
+1. In the Azure portal, enter *contoso.com* in the search box at the top of the portal, and then select **contoso.com** DNS zone from the search results.
+1. In the **Overview** page, select the **+Record set** button.
+1. In the **Add record set**, enter *server* in the **Name**.
+1. Select **A** for the **Type**.
+1. Enter *10.10.10.10* in the **IP address**.
+1. Select **OK**.
+
+    :::image type="content" source="./media/tutorial-alias-rr/add-record-set-inline.png" alt-text="Screenshot of adding the target record set in the Add record set page." lightbox="./media/tutorial-alias-rr/add-record-set-expanded.png":::
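+
+From the Azure CLI, the equivalent target record can be created with a single command; substitute the resource group that contains your DNS zone.
+
+```azurecli
+# Create the 'server' A record with the IP address 10.10.10.10
+az network dns record-set a add-record \
+  --resource-group <dns-zone-resource-group> \
+  --zone-name contoso.com \
+  --record-set-name server \
+  --ipv4-address 10.10.10.10
+```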
### Create the alias record
-1. Select your Azure DNS zone to open the zone.
-2. Select **Record set**.
-3. In the **Name** text box, enter **test**.
-4. For the **Type**, select **A**.
-5. Select **Yes** in the **Alias Record Set** check box. Then select the **Zone record set** option.
-6. For the **Zone record set**, select the **server** record.
-7. Select **OK**.
+1. In the **Overview** page of **contoso.com** DNS zone, select the **+Record set** button.
+1. In the **Add record set**, enter *test* in the **Name**.
+1. Select **A** for the **Type**.
+1. Select **Yes** for the **Alias record set**, and then select the **Zone record set** for the **Alias type**.
+1. Select the **server** record for the **Zone record set**.
+1. Select **OK**.
+
+    :::image type="content" source="./media/tutorial-alias-rr/add-alias-record-set-inline.png" alt-text="Screenshot of adding the alias record set in the Add record set page." lightbox="./media/tutorial-alias-rr/add-alias-record-set-expanded.png":::
## Test the alias record
-1. Start your favorite nslookup tool. One option is to browse to [https://network-tools.com/nslook](https://network-tools.com/nslook).
-2. Set the query type for A records, and look up **test.\<your domain name\>**. The answer is **10.10.10.10**.
-3. In the Azure portal, change the **server** A record to **10.11.11.11**.
-4. Wait a few minutes, and then use nslookup again for the **test** record. The answer is **10.11.11.11**.
+After adding the alias record, you can verify that it's working by using a tool such as *nslookup* to query the `test` A record.
-## Clean up resources
+> [!TIP]
+> You may need to wait at least 10 minutes after you add a record to successfully verify that it's working. It can take a while for changes to propagate through the DNS system.
+
+1. From a command prompt, enter the `nslookup` command:
+
+ ```
+ nslookup test.contoso.com
+ ```
-When you no longer need the resources created for this tutorial, delete the **server** and **test** resource records in your zone.
+1. Verify that the response looks similar to the following output:
+
+ ```
+ Server: UnKnown
+ Address: 40.90.4.1
+
+ Name: test.contoso.com
+ Address: 10.10.10.10
+ ```
+
+1. In the **Overview** page of **contoso.com** DNS zone, select the **server** record, and then enter *10.11.11.11* in the **IP address**.
+
+1. Select **Save**.
+
+1. Wait a few minutes, and then use the `nslookup` command again. Verify the response changed to reflect the new IP address:
++
+ ```
+ Server: UnKnown
+ Address: 40.90.4.1
+
+ Name: test.contoso.com
+ Address: 10.11.11.11
+ ```
+
+## Clean up resources
+When you no longer need the resources created for this tutorial, delete the **server** and **test** records from your zone.
## Next steps
-In this tutorial, you created an alias record to refer to a resource record within the zone. To learn about Azure DNS and web apps, continue with the tutorial for web apps.
+In this tutorial, you learned the basic steps to create an alias record to refer to a resource record within the Azure DNS zone.
-> [!div class="nextstepaction"]
-> [Create DNS records for a web app in a custom domain](./dns-web-sites-custom-domain.md)
+- Learn more about [alias records](dns-alias.md).
+- Learn more about [zones and records](dns-zones-records.md).
event-grid Azure Active Directory Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/azure-active-directory-events.md
+
+ Title: Azure Active Directory events
+description: This article describes Azure AD event types and provides event samples.
+ Last updated : 06/09/2022++
+# Azure Active Directory events
+
+This article provides the properties and schema for Azure Active Directory (Azure AD) events, which are published by Microsoft Graph API. For an introduction to event schemas, see [CloudEvents schema](cloud-event-schema.md).
+
+## Available event types
+These events are triggered when a [User](/graph/api/resources/user) or [Group](/graph/api/resources/group) is created, updated, or deleted in Azure AD, or when you operate on those resources by using the Microsoft Graph API.
+
+ | Event name | Description |
+ | - | -- |
+ | **Microsoft.Graph.UserCreated** | Triggered when a user in Azure AD is created. |
+ | **Microsoft.Graph.UserUpdated** | Triggered when a user in Azure AD is updated. |
+ | **Microsoft.Graph.UserDeleted** | Triggered when a user in Azure AD is deleted. |
+ | **Microsoft.Graph.GroupCreated** | Triggered when a group in Azure AD is created. |
+ | **Microsoft.Graph.GroupUpdated** | Triggered when a group in Azure AD is updated. |
+ | **Microsoft.Graph.GroupDeleted** | Triggered when a group in Azure AD is deleted. |
+
+## Example event
+When an event is triggered, the Event Grid service sends data about that event to subscribing destinations. This section contains an example of what that data would look like for each Azure AD event.
+
+### Microsoft.Graph.UserCreated event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.UserCreated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Users/<user-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "created",
+ "clientState": "<guid>",
+ "resource": "Users/<user-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.User",
+ "@odata.id": "Users/<user-id>",
+ "id": "<user-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
+
+### Microsoft.Graph.UserUpdated event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.UserUpdated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Users/<user-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "updated",
+ "clientState": "<guid>",
+ "resource": "Users/<user-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.User",
+ "@odata.id": "Users/<user-id>",
+ "id": "<user-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
+### Microsoft.Graph.UserDeleted event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.UserDeleted",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Users/<user-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "deleted",
+ "clientState": "<guid>",
+ "resource": "Users/<user-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.User",
+ "@odata.id": "Users/<user-id>",
+ "id": "<user-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
+### Microsoft.Graph.GroupCreated event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.GroupCreated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Groups/<group-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "created",
+ "clientState": "<guid>",
+ "resource": "Groups/<group-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.Group",
+ "@odata.id": "Groups/<group-id>",
+ "id": "<group-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
+### Microsoft.Graph.GroupUpdated event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.GroupUpdated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Groups/<group-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "updated",
+ "clientState": "<guid>",
+ "resource": "Groups/<group-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.Group",
+ "@odata.id": "Groups/<group-id>",
+ "id": "<group-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
+
+### Microsoft.Graph.GroupDeleted event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.GroupDeleted",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Groups/<group-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "deleted",
+ "clientState": "<guid>",
+ "resource": "Groups/<group-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.Group",
+ "@odata.id": "Groups/<group-id>",
+ "id": "<group-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
++
+## Event properties
+
+An event has the following top-level data:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `source` | string | The tenant event source. This field isn't writeable. Microsoft Graph API provides this value. |
+| `subject` | string | Publisher-defined path to the event subject. |
+| `type` | string | One of the event types for this event source. |
+| `time` | string | The time the event is generated based on the provider's UTC time |
+| `id` | string | Unique identifier for the event. |
+| `data` | object | Event payload that provides the data about the resource state change. |
+| `specversion` | string | CloudEvents schema specification version. |
+++
+The data object has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `changeType` | string | The type of resource state change. |
+| `resource` | string | The resource identifier for which the event was raised. |
+| `tenantId` | string | The organization ID where the user or group is kept. |
+| `clientState` | string | A secret provided by the user at the time of the Graph API subscription creation. |
+| `@odata.type` | string | The Graph API change type. |
+| `@odata.id` | string | The Graph API resource identifier for which the event was raised. |
+| `id` | string | The resource identifier for which the event was raised. |
+| `organizationId` | string | The Azure AD tenant identifier. |
+| `eventTime` | string | The time at which the resource state change occurred. |
+| `sequenceNumber` | string | A sequence number. |
+| `subscriptionExpirationDateTime` | string | The time in [RFC 3339](https://tools.ietf.org/html/rfc3339) format at which the Graph API subscription expires. |
+| `subscriptionId` | string | The Graph API subscription identifier. |
+| `tenantId` | string | The Azure AD tenant identifier. |
++
+## Next steps
+
+* For an introduction to Azure Event Grid's Partner Events, see [Partner Events overview](partner-events-overview.md)
+* For information on how to subscribe to Microsoft Graph API to receive Azure AD events, see [subscribe to Microsoft Graph API events](subscribe-to-graph-api-events.md).
+* For information about Azure Event Grid event handlers, see [event handlers](event-handlers.md).
+* For more information about creating an Azure Event Grid subscription, see [create event subscription](subscribe-through-portal.md#create-event-subscriptions) and [Event Grid subscription schema](subscription-creation-schema.md).
+* For information about how to configure an event subscription to select specific events to be delivered, consult [event filtering](event-filtering.md). You may also want to refer to [filter events](how-to-filter-events.md).
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-filtering.md
If you specify multiple different filters, an **AND** operation is done, so each
## CloudEvents For events in the **CloudEvents schema**, use the following values for the key: `eventid`, `source`, `eventtype`, `eventtypeversion`, or event data (like `data.key1`).
-You can also use [extension context attributes in CloudEvents 1.0](https://github.com/cloudevents/spec/blob/v1.0.1/spec.md#extension-context-attributes). In the following example, `comexampleextension1` and `comexampleothervalue` are extension context attributes.
+You can also use [extension context attributes in CloudEvents 1.0](https://github.com/cloudevents/spec/blob/v1.0.1/spec.md#extension-context-attributes). In the following example, `comexampleextension1` and `comexampleothervalue` are extension context attributes.
```json {
Here's an example of using an extension context attribute in a filter.
Advanced filtering has the following limitations:
-* 25 advanced filters and 25 filter values across all the filters per event grid subscription
+* 25 advanced filters and 25 filter values across all the filters per Event Grid subscription
* 512 characters per string value * Keys with **`.` (dot)** character in them. For example: `http://schemas.microsoft.com/claims/authnclassreference` or `john.doe@contoso.com`. Currently, there's no support for escape characters in keys.
event-grid Outlook Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/outlook-events.md
+
+ Title: Outlook events in Azure Event Grid
+description: This article describes Microsoft Outlook events in Azure Event Grid.
+ Last updated : 06/09/2022++
+# Microsoft Outlook events
+
+This article provides the properties and schema for Microsoft Outlook events, which are published by Microsoft Graph API. For an introduction to event schemas, see [CloudEvents schema](cloud-event-schema.md).
+
+## Available event types
+These events are triggered when an Outlook event or an Outlook contact is created, updated, or deleted, either directly or by operating on those resources using Microsoft Graph API.
+
+ | Event name | Description |
+ | - | -- |
+ | **Microsoft.Graph.EventCreated** | Triggered when an event in Outlook is created. |
+ | **Microsoft.Graph.EventUpdated** | Triggered when an event in Outlook is updated. |
+ | **Microsoft.Graph.EventDeleted** | Triggered when an event in Outlook is deleted. |
+ | **Microsoft.Graph.ContactCreated** | Triggered when a contact in Outlook is created. |
+ | **Microsoft.Graph.ContactUpdated** | Triggered when a contact in Outlook is updated. |
+ | **Microsoft.Graph.ContactDeleted** | Triggered when a contact in Outlook is deleted. |
+
+## Example event
+When an event is triggered, the Event Grid service sends data about that event to subscribing destinations. This section contains an example of what that data would look like for each Outlook event.
+
+### Microsoft.Graph.EventCreated event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.EventCreated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Events/<event-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "created",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<event id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Event",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+
+### Microsoft.Graph.EventUpdated event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.EventUpdated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Events/<event-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "updated",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<event id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Event",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+### Microsoft.Graph.EventDeleted event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.EventDeleted",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Events/<event-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "deleted",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<event id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Event",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+
+### Microsoft.Graph.ContactCreated event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.ContactCreated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Contacts/<contact-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "created",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<contact id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Contact",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+
+### Microsoft.Graph.ContactUpdated event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.ContactUpdated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Contacts/<contact-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "updated",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<contact id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Contact",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+### Microsoft.Graph.ContactDeleted event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.ContactDeleted",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Contacts/<contact-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "deleted",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<contact id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Contact",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+
+## Event properties
+
+An event has the following top-level data:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `source` | string | The tenant event source. This field isn't writeable. Microsoft Graph API provides this value. |
+| `subject` | string | Publisher-defined path to the event subject. |
+| `type` | string | One of the event types for this event source. |
+| `time` | string | The time the event is generated based on the provider's UTC time |
+| `id` | string | Unique identifier for the event. |
+| `data` | object | Event payload that provides the data about the resource state change. |
+| `specversion` | string | CloudEvents schema specification version. |
+++
+The data object has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `changeType` | string | The type of resource state change. |
+| `resource` | string | The resource identifier for which the event was raised. |
+| `tenantId` | string | The organization ID where the user or contact is kept. |
+| `clientState` | string | A secret provided by the user at the time of the Graph API subscription creation. |
+| `@odata.type` | string | The Graph API change type. |
+| `@odata.id` | string | The Graph API resource identifier for which the event was raised. |
+| `id` | string | The resource identifier for which the event was raised. |
+| `organizationId` | string | The Outlook tenant identifier. |
+| `eventTime` | string | The time at which the resource state change occurred. |
+| `sequenceNumber` | string | A sequence number. |
+| `subscriptionExpirationDateTime` | string | The time in [RFC 3339](https://tools.ietf.org/html/rfc3339) format at which the Graph API subscription expires. |
+| `subscriptionId` | string | The Graph API subscription identifier. |
+| `tenantId` | string | The Outlook tenant identifier. |
+| `otherResourceData` | string | Placeholder that represents one or more dynamic properties that may be included in the event. |
++
+## Next steps
+
+* For an introduction to Azure Event Grid's Partner Events, see [Partner Events overview](partner-events-overview.md)
+* For information on how to subscribe to Microsoft Graph API to receive Outlook events, see [subscribe to Microsoft Graph API events](subscribe-to-graph-api-events.md).
+* For information about Azure Event Grid event handlers, see [event handlers](event-handlers.md).
+* For more information about creating an Azure Event Grid subscription, see [create event subscription](subscribe-through-portal.md#create-event-subscriptions) and [Event Grid subscription schema](subscription-creation-schema.md).
+* For information about how to configure an event subscription to select specific events to be delivered, consult [event filtering](event-filtering.md). You may also want to refer to [filter events](how-to-filter-events.md).
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
Title: What is Azure Event Grid? description: Send event data from a source to handlers with Azure Event Grid. Build event-based applications, and integrate with Azure services. Previously updated : 03/15/2022 Last updated : 06/09/2022 # What is Azure Event Grid?
-Azure Event Grid allows you to easily build applications with event-based architectures. First, select the Azure resource you would like to subscribe to, and then give the event handler or WebHook endpoint to send the event to. Event Grid has built-in support for events coming from Azure services, like storage blobs and resource groups. Event Grid also has support for your own events, using custom topics.
+Event Grid is a highly scalable, serverless event broker that you can use to integrate applications using events. Events are delivered by Event Grid to subscriber destinations such as applications, Azure services, or any endpoint to which Event Grid has network access. The source of those events can be other applications, SaaS services and Azure services.
-You can use filters to route specific events to different endpoints, multicast to multiple endpoints, and make sure your events are reliably delivered.
+With Event Grid you connect solutions using event-driven architectures. An [event-driven architecture](/azure/architecture/guide/architecture-styles/event-driven) uses events to communicate occurrences in system state changes, for example, to other applications or services. You can use filters to route specific events to different endpoints, multicast to multiple endpoints, and make sure your events are reliably delivered.
Azure Event Grid is deployed to maximize availability by natively spreading across multiple fault domains in every region, and across availability zones (in regions that support them). For a list of regions that are supported by Event Grid, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid&regions=all).
-This article provides an overview of Azure Event Grid. If you want to get started with Event Grid, see [Create and route custom events with Azure Event Grid](custom-event-quickstart.md).
+The event sources and event handlers or destinations are summarized in the following diagram.
:::image type="content" source="./media/overview/functional-model.png" alt-text="Event Grid model of sources and handlers" lightbox="./media/overview/functional-model-big.png":::
This article provides an overview of Azure Event Grid. If you want to get starte
## Event sources
-Currently, the following Azure services support sending events to Event Grid. For more information about a source in the list, select the link.
+Event Grid supports the following event sources:
+1. **Your own service or solution** that publishes events to Event Grid so that your customers can subscribe to them. Event Grid provides two types of resources you can use depending on your requirements.
+ - [Custom Topics](custom-topics.md) or "Topics" for short. Use custom topics if your requirements resemble the following user story (a minimal publishing sketch appears after this list):
+
+ "As an owner of a system, I want to communicate my system's state changes by publishing events and routing those events to event handlers, under my control or otherwise, that can process my system's events in a way they see fit."
+
+ - [Domains](event-domains.md). Use domains if you want to deliver events to multiple teams at scale. Your requirements probably are similar to the following one:
+
+ "As an owner of a system, I want to announce my systemΓÇÖs state changes to multiple teams in a single tenant so that they can process my systemΓÇÖs events in a way they see fit."
+2. A **SaaS provider or platform** can publish their events to Event Grid through a feature called [Partner Events](partner-events-overview.md). You can [subscribe to those events](subscribe-to-partner-events.md) and automate tasks, for example. Events from the following partners are currently available:
+ - [Auth0](auth0-overview.md)
+ - [Microsoft Graph API](subscribe-to-graph-api-events.md). Through Microsoft Graph API you can get events from [Microsoft Outlook](outlook-events.md), [Teams](teams-events.md), [Azure AD](azure-active-directory-events.md), SharePoint, Conversations, security alerts, and Universal Print.
+
+3. **An Azure service**. The following Azure services support sending events to Event Grid. For more information about a source in the list, select the link.
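To make the custom topics scenario above concrete, here's a minimal Azure PowerShell sketch that creates a custom topic and publishes a single event to it in the default Event Grid schema. The resource group, topic name, location, event type, and event data are placeholder values for illustration only, not part of this article.

```azurepowershell-interactive
# Placeholder names - substitute your own resource group, topic name, and region.
$rg = "MyResourceGroup"
$topicName = "mycustomtopic"

New-AzEventGridTopic -ResourceGroupName $rg -Name $topicName -Location "westus2"

$endpoint = (Get-AzEventGridTopic -ResourceGroupName $rg -Name $topicName).Endpoint
$key      = (Get-AzEventGridTopicKey -ResourceGroupName $rg -Name $topicName).Key1

# One event in the default Event Grid schema; the event type and data are made up.
$body = @"
[{
  "id": "$([guid]::NewGuid())",
  "eventType": "Contoso.Orders.OrderCreated",
  "subject": "orders/12345",
  "eventTime": "$((Get-Date).ToUniversalTime().ToString('o'))",
  "data": { "orderId": 12345 },
  "dataVersion": "1.0"
}]
"@

Invoke-RestMethod -Uri $endpoint -Method Post -ContentType "application/json" `
    -Headers @{ "aeg-sas-key" = $key } -Body $body
```

Any event handlers you subscribe to the topic then receive and process the event.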
+ ## Event handlers
-For full details on the capabilities of each handler as well as related articles, see [event handlers](event-handlers.md). Currently, the following Azure services support handling events from Event Grid:
+For full details on the capabilities of each handler and related articles, see [event handlers](event-handlers.md). Currently, the following Azure services support handling events from Event Grid:
[!INCLUDE [event-handlers.md](includes/event-handlers.md)]
Azure Event Grid uses a pay-per-event pricing model, so you only pay for what yo
A tutorial that uses Azure Functions to stream data from Event Hubs to Azure Synapse Analytics. * [Event Grid REST API reference](/rest/api/eventgrid) Provides reference content for managing Event Subscriptions, routing, and filtering.
+* [Partner Events overview](partner-events-overview.md).
+* [subscribe to partner events](subscribe-to-partner-events.md).
event-grid Partner Events Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-graph-api.md
+
+ Title: Microsoft Graph API events in Azure Event Grid
+description: This article describes events published by Microsoft Graph API.
+ Last updated : 06/09/2022++
+# Microsoft Graph API events
+
+Microsoft Graph API provides a unified programmable model that you can use to receive events about state changes of resources in Microsoft Outlook, Teams, SharePoint, Azure Active Directory, Microsoft Conversations, and security alerts. For every resource in the following table, events for create, update and delete state changes are supported.
+
+## Graph API event sources
+
+|Microsoft event source |Resource(s) | Available event types |
+|: | : | :-|
+|Azure Active Directory| [User](/graph/api/resources/user), [Group](/graph/api/resources/group) | [Azure AD event types](azure-active-directory-events.md) |
+|Microsoft Outlook|[Event](/graph/api/resources/event) (calendar meeting), [Message](/graph/api/resources/message) (email), [Contact](/graph/api/resources/contact) | [Microsoft Outlook event types](outlook-events.md) |
+|Microsoft Teams|[ChatMessage](/graph/api/resources/chatmessage), [CallRecord](/graph/api/resources/callrecords-callrecord) (meeting) | [Microsoft Teams event types](teams-events.md) |
+|Microsoft SharePoint and OneDrive| [DriveItem](/graph/api/resources/driveitem)| |
+|Microsoft SharePoint| [List](/graph/api/resources/list)| |
+|Security alerts| [Alert](/graph/api/resources/alert)| |
+|Microsoft Conversations| [Conversation](/graph/api/resources/conversation)| |
+
+You create a Microsoft Graph API subscription to enable Graph API events to flow into a partner topic. The partner topic is automatically created for you as part of the Graph API subscription creation. You use that partner topic to [create event subscriptions](event-filtering.md) to send your events to any of the supported [event handlers](event-handlers.md) that best meet your requirements for processing the events.
++
+## Next steps
+
+* [Partner Events overview](partner-events-overview.md).
+* [subscribe to partner events](subscribe-to-partner-events.md), which includes instructions on how to subscribe to Microsoft Graph API events.
event-grid Partner Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview.md
You may want to use the Partner Events feature if you've one or more of the foll
## Available partners A partner must go through an [onboarding process](onboard-partner.md) before a customer can start receiving or sending events to partners. Following is the list of available partners and whether their services were designed to send events to or receive events from Event Grid.
+### Microsoft partners
+| Partner | Sends events to Azure? | Receives events from Azure? |
+| :--|:--:|:-:|
+| Microsoft Graph API* | Yes | N/A |
+
+#### Microsoft Graph API
+Through Microsoft Graph API, you can get events from a diverse set of Microsoft services such as [Azure AD](azure-active-directory-events.md), [Microsoft Outlook](outlook-events.md), [Teams](teams-events.md), **SharePoint**, and so on. For a complete list of event sources, see [Microsoft Graph API's change notifications documentation](/graph/webhooks#supported-resources).
+
+### Non-Microsoft partners
| Partner | Sends events to Azure? | Receives events from Azure? | | : |:--:|:-:| | Auth0 | Yes | N/A | ### Auth0+ [Auth0](https://auth0.com) is a managed authentication platform for businesses to authenticate, authorize, and secure access for applications, devices, and users. You can create an [Auth0 partner topic](auth0-overview.md) to connect your Auth0 and Azure accounts. This integration allows you to react to, log, and monitor Auth0 events in real time. To try it out, see [Integrate Azure Event Grid with Auth0](auth0-how-to.md). ## Verified partners
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
+
+ Title: Azure Event Grid - Subscribe to Microsoft Graph API events
+description: This article explains how to subscribe to events published by Microsoft Graph API.
+ Last updated : 06/09/2022++
+# Subscribe to events published by Microsoft Graph API
+This article describes steps to subscribe to events published by Microsoft Graph API. The following table lists the resources for which events are available through Graph API. For every resource, events for create, update and delete state changes are supported.
+
+|Microsoft event source |Resource(s) | Available event types |
+|: | : | :-|
+|Azure Active Directory| [User](/graph/api/resources/user), [Group](/graph/api/resources/group) | [Azure AD event types](azure-active-directory-events.md) |
+|Microsoft Outlook|[Event](/graph/api/resources/event) (calendar meeting), [Message](/graph/api/resources/message) (email), [Contact](/graph/api/resources/contact) | [Microsoft Outlook event types](outlook-events.md) |
+|Microsoft Teams|[ChatMessage](/graph/api/resources/chatmessage), [CallRecord](/graph/api/resources/callrecords-callrecord) (meeting) | [Microsoft Teams event types](teams-events.md) |
+|Microsoft SharePoint and OneDrive| [DriveItem](/graph/api/resources/driveitem)| |
+|Microsoft SharePoint| [List](/graph/api/resources/list)| |
+|Security alerts| [Alert](/graph/api/resources/alert)| |
+|Microsoft Conversations| [Conversation](/graph/api/resources/conversation)| |
+
+> [!IMPORTANT]
+>If you aren't familiar with the **Partner Events** feature, see [Partner Events overview](partner-events-overview.md).
++
+## Why should you use Microsoft Graph API with Event Grid as a destination?
+Besides the ability to subscribe to Microsoft Graph API events via Event Grid, you have [other options](/graph/change-notifications-delivery) through which you can receive similar notifications (not events). Consider using Microsoft Graph API to deliver events to Event Grid if you have at least one of the following requirements:
+
+- You're developing an event-driven solution that requires events from Azure Active Directory, Outlook, Teams, etc. to react to resource changes. You require the robust eventing model and publish-subscribe capabilities that Event Grid provides. For an overview of Event Grid, see [Event Grid concepts](concepts.md).
+- You want to use Event Grid to route events to multiple destinations using a single Graph API subscription and you want to avoid managing multiple Graph API subscriptions.
+- You need to route events to different downstream applications, webhooks, or Azure services depending on some of the properties in the event. For example, you may want to route event types such as `Microsoft.Graph.UserCreated` and `Microsoft.Graph.UserDeleted` to a specialized application that processes users' onboarding and off-boarding. You may also want to send `Microsoft.Graph.UserUpdated` events to another application that syncs contact information. You can achieve that with a single Graph API subscription when using Event Grid as a notification destination. For more information, see [event filtering](event-filtering.md) and [event handlers](event-handlers.md).
+- Interoperability is important to you. You want to forward and handle events in a standard way using the CNCF [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) specification, with which Event Grid fully complies.
+- You like the extensibility support that CloudEvents provides. For example, if you want to trace events across compliant systems, you can use the CloudEvents [Distributed Tracing](https://github.com/cloudevents/spec/blob/v1.0.1/extensions/distributed-tracing.md) extension. Learn more about other [CloudEvents extensions](https://github.com/cloudevents/spec/blob/v1.0.1/documented-extensions.md).
+- You want to use proven event-driven approaches adopted by the industry.
+
+## High-level steps
+
+The common steps to subscribe to events published by any partner, including Graph API, are described in [subscribe to partner events](subscribe-to-partner-events.md). For quick reference, the steps described in that article are listed here. This article covers step 3: enabling the flow of events to a partner topic.
+
+1. Register the Event Grid resource provider with your Azure subscription.
+2. Authorize partner to create a partner topic in your resource group.
+3. [Enable events to flow to a partner topic](#enable-microsoft-graph-api-events-to-flow-to-your-partner-topic)
+4. Activate partner topic so that your events start flowing to your partner topic.
+5. Subscribe to events.
+
+### Enable Microsoft Graph API events to flow to your partner topic
+
+> [!IMPORTANT]
+> Microsoft Graph API's (MGA) ability to send events to Event Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from the [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and [.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) webhook samples to enable the flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask.graph.and.grid@microsoft.com?subject=Please allow my application ID">ask.graph.and.grid@microsoft.com</a> so that the Microsoft Graph API team can add your application ID to the allow list for this new capability.
+
+You request Microsoft Graph API to send events by creating a Graph API subscription. When you create a Graph API subscription, the HTTP request should look like the following sample:
+
+```http
+POST https://canary.graph.microsoft.com/testprodbetawebhooks1/subscriptions
+
+{
+ "changeType": "Updated,Deleted,Created",
+ "notificationUrl": "EventGrid:?azuresubscriptionid=8A8A8A8A-4B4B-4C4C-4D4D-12E12E12E12E&resourcegroup=yourResourceGroup&partnertopic=youPartnerTopic&location=theAzureRegionFortheTopic",
+ "resource": "users",
+ "expirationDateTime": "2022-04-30T00:00:00Z",
+ "clientState": "mysecret"
+}
+```
+
+Here are some of the key payload properties:
+
+- `changeType`: the kind of resource changes for which you want to receive events. Valid values: `Updated`, `Deleted`, and `Created`. You can specify one or more of these values separated by commas.
+- `notificationUrl`: a URI that conforms to the following pattern: `EventGrid:?azuresubscriptionid=<your-azure-subscription-id>&resourcegroup=<your-resource-group-name>&partnertopic=<the-name-for-your-partner-topic>&location=<the-Azure-region-where-you-want-the-topic-created>`.
+- `resource`: the resource for which you need events announcing state changes.
+- `expirationDateTime`: the time at which the subscription expires and the flow of events stops. It must conform to the format specified in [RFC 3339](https://tools.ietf.org/html/rfc3339). You must specify an expiration time that is within the [maximum subscription length allowable for the resource type](/graph/api/resources/subscription#maximum-length-of-subscription-per-resource-type) used.
+- `clientState`: a value that you set when creating a Graph API subscription. For more information, see [Graph API subscription properties](/graph/api/resources/subscription#properties).
+
+> [!NOTE]
+> Microsoft Graph API's capability to send events to Event Grid is only available in a specific Graph API environment. You will need to update your code so that it uses the following Graph API endpoint `https://canary.graph.microsoft.com/testprodbetawebhooks1`. For example, this is the way you can set the endpoint on your graph client (`com.microsoft.graph.requests.GraphServiceClient`) using the Graph API Java SDK:
+>
+>```java
+>graphClient.setServiceRoot("https://canary.graph.microsoft.com/testprodbetawebhooks1");
+>```
+
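For illustration only, here's a minimal PowerShell sketch that sends the subscription request above with `Invoke-RestMethod`. Acquiring a Microsoft Graph access token for your registered Azure AD application is out of scope here, so `$accessToken` and the other variable values are placeholders you must supply; the endpoint and payload mirror the sample shown earlier.

```powershell
# Placeholder values - replace with your own. $accessToken must be a valid
# Microsoft Graph access token issued to the Azure AD application you registered.
$accessToken   = "<access-token>"
$azureSubId    = "<your-azure-subscription-id>"
$resourceGroup = "<your-resource-group-name>"
$partnerTopic  = "<the-name-for-your-partner-topic>"
$location      = "<the-azure-region-for-the-topic>"

$body = @{
    changeType         = "Updated,Deleted,Created"
    notificationUrl    = "EventGrid:?azuresubscriptionid=$azureSubId&resourcegroup=$resourceGroup&partnertopic=$partnerTopic&location=$location"
    resource           = "users"
    expirationDateTime = "2022-04-30T00:00:00Z"
    clientState        = "mysecret"
}

# POST the Graph API subscription request to the preview endpoint.
Invoke-RestMethod -Method Post `
    -Uri "https://canary.graph.microsoft.com/testprodbetawebhooks1/subscriptions" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" `
    -Body ($body | ConvertTo-Json)
```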
+**You can create a Microsoft Graph API subscription by following the instructions in the [Microsoft Graph API webhook samples](https://github.com/microsoftgraph?q=webhooks&type=public&language=&sort=)** that include code samples for [NodeJS](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java (Spring Boot)](https://github.com/microsoftgraph/java-spring-webhooks-sample), and [.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample). There are no samples available for Python, Go and other languages yet, but the [Graph SDK](/graph/sdks/sdks-overview) supports creating Graph API subscriptions using those programming languages.
+
+> [!NOTE]
+> - Partner topic names must be unique within the same Azure region. Each tenant-application ID combination can create up to 10 unique partner topics.
+> - Be mindful of certain [Graph API resources' service limits](/graph/webhooks#azure-ad-resource-limitations) when developing your solution.
+
+#### What happens when you create a Microsoft Graph API subscription?
+
+When you create a Graph API subscription with a `notificationUrl` bound to Event Grid, a partner topic is created in your Azure subscription. For that partner topic, you [configure event subscriptions](event-filtering.md) to send your events to any of the supported [event handlers](event-handlers.md) that best meet your requirements for processing the events.
+
+#### Microsoft Graph API Explorer
+For quick tests and to get to know the API, you can use the [Microsoft Graph API explorer](/graph/graph-explorer/graph-explorer-features). For anything beyond casual tests or learning, you should use the Graph SDKs as described above.
+
+## Next steps
+
+See the following articles:
+
+- [Azure Event Grid - Partner Events overview](partner-events-overview.md)
+- [Microsoft Graph API webhook samples](https://github.com/microsoftgraph?q=webhooks&type=public&language=&sort=). Use these samples to send events to Event Grid. You just need to provide a suitable `notificationUrl` value, as shown in the request example above.
+- [Varied set of resources on Microsoft Graph API](https://developer.microsoft.com/en-us/graph/rest-api).
+- [Microsoft Graph API webhooks](/graph/api/resources/webhooks)
+- [Best practices for working with Microsoft Graph API](/graph/best-practices-concept)
+- [Microsoft Graph API SDKs](/graph/sdks/sdks-overview)
+- [Microsoft Graph API tutorials](/graph/tutorials), which show how to use Graph API in different programming languages. These don't necessarily include examples for sending events to Event Grid.
+
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
Title: Azure Event Grid - Subscribe to partner events description: This article explains how to subscribe to events from a partner using Azure Event Grid. Previously updated : 03/31/2022 Last updated : 06/09/2022 # Subscribe to events published by a partner with Azure Event Grid
Following example shows the way to create a partner configuration resource that
Here's the list of partners and a link to submit a request to enable events flow to a partner topic. - [Auth0](auth0-how-to.md)
+- [Microsoft Graph API](subscribe-to-graph-api-events.md)
## Activate a partner topic
event-grid Teams Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/teams-events.md
+
+ Title: Microsoft Teams events in Azure Event Grid
+description: This article describes Microsoft Teams events in Azure Event Grid.
+ Last updated : 06/06/2022++
+# Microsoft Teams events in Azure Event Grid
+
+This article provides the list of available event types for Microsoft Teams events, which are published by Microsoft Graph API. For an introduction to event schemas, see [CloudEvents schema](cloud-event-schema.md).
+
+## Available event types
+These events are triggered when a call record is created or updated, or a chat message is created, updated, or deleted, either directly or by operating on those resources using Microsoft Graph API.
+
+ | Event name | Description |
+ | - | -- |
+ | **Microsoft.Graph.CallRecordCreated** | Triggered when a call or meeting is produced in Microsoft Teams. |
+ | **Microsoft.Graph.CallRecordUpdated** | Triggered when a call or meeting is updated in Microsoft Teams. |
+ | **Microsoft.Graph.ChatMessageCreated** | Triggered when a chat message is sent via teams or channels in Microsoft Teams. |
+ | **Microsoft.Graph.ChatMessageUpdated** | Triggered when a chat message is edited via teams or channels in Microsoft Teams. |
 | **Microsoft.Graph.ChatMessageDeleted** | Triggered when a chat message is deleted via teams or channels in Microsoft Teams. |
++
+## Next steps
+
+* For an introduction to Azure Event Grid's Partner Events, see [Partner Events overview](partner-events-overview.md)
+* For information on how to subscribe to Microsoft Graph API events, see [subscribe to Microsoft Graph API events](subscribe-to-graph-api-events.md).
+* For information about Azure Event Grid event handlers, see [event handlers](event-handlers.md).
+* For more information about creating an Azure Event Grid subscription, see [create event subscription](subscribe-through-portal.md#create-event-subscriptions) and [Event Grid subscription schema](subscription-creation-schema.md).
+* For information about how to configure an event subscription to select specific events to be delivered, consult [event filtering](event-filtering.md). You may also want to refer to [filter events](how-to-filter-events.md).
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
Previously updated : 12/14/2020 Last updated : 06/09/2022
ExpressRoute Direct gives you the ability to directly connect to Microsoft's glo
## Before you begin
-Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, please do the following via Azure PowerShell:
+Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, run the following via Azure PowerShell:
1. Sign in to Azure and select the subscription you wish to enroll. ```azurepowershell-interactive
Once enrolled, verify that the **Microsoft.Network** resource provider is regist
## <a name="authorization"></a>Generate the Letter of Authorization (LOA)
-Reference the recently created ExpressRoute Direct resource, input a customer name to write the LOA to and (optionally) define a file location to store the document. If a file path is not referenced, the document will download to the current directory.
+Reference the recently created ExpressRoute Direct resource, input a customer name to write the LOA to and (optionally) define a file location to store the document. If a file path isn't referenced, the document will download to the current directory.
### Azure PowerShell
This process should be used to conduct a Layer 1 test, ensuring that each cross-
## <a name="circuit"></a>Create a circuit
-By default, you can create 10 circuits in the subscription where the ExpressRoute Direct resource is. This limit can be increased by support. You are responsible for tracking both Provisioned and Utilized Bandwidth. Provisioned bandwidth is the sum of bandwidth of all circuits on the ExpressRoute Direct resource and utilized bandwidth is the physical usage of the underlying physical interfaces.
+By default, you can create 10 circuits in the subscription where the ExpressRoute Direct resource is. This limit can be increased by support. You're responsible for tracking both Provisioned and Utilized Bandwidth. Provisioned bandwidth is the sum of bandwidth of all circuits on the ExpressRoute Direct resource and utilized bandwidth is the physical usage of the underlying physical interfaces.
-There are additional circuit bandwidths that can be utilized on ExpressRoute Direct to support only the scenarios outlined above. These bandwidths are 40 Gbps and 100 Gbps.
+There are more circuit bandwidths that can be utilized on ExpressRoute Direct to support only the scenarios outlined above. These bandwidths are 40 Gbps and 100 Gbps.
**SkuTier** can be Local, Standard, or Premium.
-**SkuFamily** can only be MeteredData. Unlimited is not supported on ExpressRoute Direct.
+**SkuFamily** can only be MeteredData. Unlimited isn't supported on ExpressRoute Direct.
Create a circuit on the ExpressRoute Direct resource.
You can delete the ExpressRoute Direct resource by running the following command
```powershell Remove-AzExpressRoutePort -Name $Name -ResourceGroupName $ResourceGroupName ```+
+## Public Preview
+
+The following scenario is in public preview:
+
+ExpressRoute Direct and ExpressRoute circuit(s) in different subscriptions or Azure Active Directory tenants. You'll create an authorization for your ExpressRoute Direct resource, and redeem the authorization to create an ExpressRoute circuit in a different subscription or Azure Active Directory tenant.
+
+### Enable ExpressRoute Direct and circuits in different subscriptions
+
+1. To enroll in the preview, send an e-mail to ExpressRouteDirect@microsoft.com with the ExpressRoute Direct and target ExpressRoute circuit Azure subscription IDs. You'll receive an e-mail once the feature is enabled for your subscriptions.
+
+1. Create the ExpressRoute Direct authorization by running the following commands in PowerShell:
+
+ ```powershell
+ Add-AzExpressRoutePortAuthorization -Name $Name -ExpressRoutePort $ERPort
+ Set-AzExpressRoutePort -ExpressRoutePort $ERPort
+ ```
+
+1. Verify that the authorization was created successfully, and store the ExpressRoute Direct resource (which now contains the authorization) in a variable:
+
+ ```powershell
+ $ERDirect = Get-AzExpressRoutePort -Name $Name -ResourceGroupName $ResourceGroupName
+ $ERDirect
+ ```
+
+1. Redeem the authorization to create the ExpressRoute Direct circuit with the following command:
+
+ ```powershell
+ New-AzExpressRouteCircuit -Name $Name -ResourceGroupName $RGName -ExpressRoutePort $ERDirect -Location $Location -SkuTier $SkuTier -SkuFamily $SkuFamily -BandwidthInGbps $BandwidthInGbps -Authorization $ERDirect.Authorization
+ ```
## Next steps For more information about ExpressRoute Direct, see the [Overview](expressroute-erdirect-about.md).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | Interxion | | **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | Supported | Aryaka Networks, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Intercloud, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Orange, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo| | **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite, Megaport, PacketFabric, Zayo |
+| **Doha2** | [Ooredoo](https://www.ooredoo.qa/portal/OoredooQatar/b2b-data-centre) | 3 | Qatar Central | Supported | |
| **Dubai** | [PCCS](https://www.pacificcontrols.net/cloudservices/https://docsupdatetracker.net/index.html) | 3 | UAE North | n/a | Etisalat UAE | | **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom | | **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | Supported | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport, Zayo|
firewall-manager Secured Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secured-virtual-hub.md
Previously updated : 10/12/2020 Last updated : 06/09/2022
You can choose the required security providers to protect and govern your networ
Using Firewall Manager in the Azure portal, you can either create a new secured virtual hub, or convert an existing virtual hub that you previously created using Azure Virtual WAN.
-## Gated public preview
+## Public preview features
-The below features are currently in gated public preview.
+The following features are in public preview:
| Feature | Description | | - | |
-| Routing Intent and Policies enabling Inter-hub security | This feature allows customers to configure internet-bound, private or inter-hub traffic flow through the Azure Firewall. Please review [Routing Intent and Policies](../virtual-wan/how-to-routing-policies.md) to learn more. |
+| Routing Intent and Policies enabling Inter-hub security | This feature allows you to configure internet-bound, private or inter-hub traffic flow through Azure Firewall. For more information, see [Routing Intent and Policies](../virtual-wan/how-to-routing-policies.md). |
## Next steps
firewall-manager Threat Intelligence Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/threat-intelligence-settings.md
Previously updated : 06/30/2020 Last updated : 06/09/2022
You can configure threat intelligence in one of the three modes that are describ
|Mode |Description | |||
-|`Off` | The threat intelligence feature is not enabled for your firewall. |
-|`Alert only` | You will receive high-confidence alerts for traffic going through your firewall to or from known malicious IP addresses and domains. |
-|`Alert and deny` | Traffic is blocked and you will receive high-confidence alerts when traffic is detected attempting to go through your firewall to or from known malicious IP addresses and domains. |
+|`Off` | The threat intelligence feature isn't enabled for your firewall. |
+|`Alert only` | You'll receive high-confidence alerts for traffic going through your firewall to or from known malicious IP addresses and domains. |
+|`Alert and deny` | Traffic is blocked and you'll receive high-confidence alerts when traffic is detected attempting to go through your firewall to or from known malicious IP addresses and domains. |
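As a hedged illustration, the following Azure PowerShell sketch creates a firewall policy with threat intelligence set to **Alert and deny** (the `Deny` value of `-ThreatIntelMode`). The policy name, resource group, and location below are placeholders, not values from this article.

```azurepowershell-interactive
# Create a firewall policy with threat intelligence in "Alert and deny" mode.
# Use "Alert" or "Off" for the other modes. Names and location are placeholders.
New-AzFirewallPolicy -Name "fw-policy" -ResourceGroupName "MyResourceGroup" `
    -Location "westus2" -ThreatIntelMode "Deny"
```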
> [!NOTE] > Threat intelligence mode is inherited from parent policies to child policies. A child policy must be configured with the same or a stricter mode than the parent policy.
The following log excerpt shows a triggered rule for outbound traffic to a malic
## Testing -- **Outbound testing** - Outbound traffic alerts should be a rare occurrence, as it means that your environment has been compromised. To help test outbound alerts are working, a test FQDN has been created that triggers an alert. Use **testmaliciousdomain.eastus.cloudapp.azure.com** for your outbound tests.
+- **Outbound testing** - Outbound traffic alerts should be a rare occurrence, as they mean that your environment has been compromised. To help you test that outbound alerts are working, the following FQDNs have been created to trigger an alert; use them for your outbound tests:
+<br><br>
+
+ - `documentos-001.brazilsouth.cloudapp.azure.com`
+ - `itaucardiupp.centralus.cloudapp.azure.com`
+ - `azure-c.online`
+ - `www.azureadsec.com`
+ - `azurein360.co`
+
+ > [!NOTE]
+ > These FQDNs are subject to change, so they are not guaranteed to always work. Any changes will be documented here.
+ - **Inbound testing** - You can expect to see alerts on incoming traffic if DNAT rules are configured on the firewall. This is true even if only specific sources are allowed on the DNAT rule and traffic is otherwise denied. Azure Firewall doesn't alert on all known port scanners; only on scanners that are known to also engage in malicious activity.
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In PowerShell, run the following command: ```azurepowershell-interactive
- New-AzADServicePrincipal -ApplicationId '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8' -Role Contributor
+ New-AzADServicePrincipal -ApplicationId '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8'
``` ##### Azure CLI
hdinsight Apache Hadoop Use Hive Ambari View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-ambari-view.md
description: Learn how to use the Hive View from your web browser to submit Hive
Previously updated : 04/23/2020 Last updated : 06/09/2022 # Use Apache Ambari Hive View with Apache Hadoop in HDInsight
hdinsight Hdinsight Hadoop Manage Ambari Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-manage-ambari-rest-api.md
description: Learn how to use Ambari to monitor and manage Hadoop clusters in Az
Previously updated : 04/29/2020 Last updated : 06/09/2022 # Manage HDInsight clusters by using the Apache Ambari REST API
hdinsight Hdinsight Linux Ambari Ssh Tunnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-linux-ambari-ssh-tunnel.md
description: Learn how to use an SSH tunnel to securely browse web resources hos
Previously updated : 04/14/2020 Last updated : 06/09/2022 # Use SSH tunneling to access Apache Ambari web UI, JobHistory, NameNode, Apache Oozie, and other UIs
hdinsight Hdinsight Scaling Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-scaling-best-practices.md
Previously updated : 04/29/2020 Last updated : 06/09/2022 # Manually scale Azure HDInsight clusters
hdinsight Hdinsight Troubleshoot Failed Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-failed-cluster.md
description: Diagnose and troubleshoot a slow or failing job on an Azure HDInsig
Previously updated : 08/15/2019 Last updated : 06/09/2022 # Troubleshoot a slow or failing job on a HDInsight cluster
To help diagnose the source of a cluster error, start a new cluster with the sam
* [Analyze HDInsight Logs](./hdinsight-troubleshoot-guide.md) * [Access Apache Hadoop YARN application sign in Linux-based HDInsight](hdinsight-hadoop-access-yarn-app-logs-linux.md) * [Enable heap dumps for Apache Hadoop services on Linux-based HDInsight](hdinsight-hadoop-collect-debug-heap-dump-linux.md)
-* [Known Issues for Apache Spark cluster on HDInsight](./spark/apache-spark-known-issues.md)
+* [Known Issues for Apache Spark cluster on HDInsight](./spark/apache-spark-known-issues.md)
hdinsight Hdinsight Use Oozie Linux Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-oozie-linux-mac.md
description: Use Hadoop Oozie in Linux-based HDInsight. Learn how to define an O
Previously updated : 04/27/2020 Last updated : 05/09/2022 # Use Apache Oozie with Apache Hadoop to define and run a workflow on Linux-based Azure HDInsight
In this article, you learned how to define an Oozie workflow and how to run an O
* [Upload data for Apache Hadoop jobs in HDInsight](hdinsight-upload-data.md) * [Use Apache Sqoop with Apache Hadoop in HDInsight](hadoop/apache-hadoop-use-sqoop-mac-linux.md) * [Use Apache Hive with Apache Hadoop on HDInsight](hadoop/hdinsight-use-hive.md)
-* [Troubleshoot Apache Oozie](./troubleshoot-oozie.md)
+* [Troubleshoot Apache Oozie](./troubleshoot-oozie.md)
hdinsight Machine Learning Services Quickstart Job Rconsole https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/machine-learning-services-quickstart-job-rconsole.md
- Title: 'Quickstart: R script on ML Services & R console - Azure HDInsight'
-description: In the quickstart, you execute an R script on an ML Services cluster in Azure HDInsight using R console.
-- Previously updated : 06/19/2019--
-#Customer intent: I want to learn how to execute an R script using ML Services in Azure HDInsight for R console.
--
-# Quickstart: Execute an R script on an ML Services cluster in Azure HDInsight using R console
--
-ML Services on Azure HDInsight allows R scripts to use Apache Spark and Apache Hadoop MapReduce to run distributed computations. ML Services controls how calls are executed by setting the compute context. The edge node of a cluster provides a convenient place to connect to the cluster and to run your R scripts. With an edge node, you have the option of running the parallelized distributed functions of RevoScaleR across the cores of the edge node server. You can also run them across the nodes of the cluster by using RevoScaleR's Hadoop Map Reduce or Apache Spark compute contexts.
-
-In this quickstart, you learn how to run an R script with R console that demonstrates using Spark for distributed R computations. You will define a compute context to perform computations locally on an edge node, and again distributed across the nodes in the HDInsight cluster.
-
-## Prerequisites
-
-* An ML Services cluster on HDInsight. See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) and select **ML Services** for **Cluster type**.
-
-* An SSH client. For more information, see [Connect to HDInsight (Apache Hadoop) using SSH](../hdinsight-hadoop-linux-use-ssh-unix.md).
--
-## Connect to R console
-
-1. Connect to the edge node of an ML Services HDInsight cluster using SSH. Edit the command below by replacing `CLUSTERNAME` with the name of your cluster, and then enter the command:
-
- ```cmd
- ssh sshuser@CLUSTERNAME-ed-ssh.azurehdinsight.net
- ```
-
-1. From the SSH session, use the following command to start the R console:
-
- ```
- R
- ```
-
- You should see an output with the version of ML Server, in addition to other information.
--
-## Use a compute context
-
-1. From the `>` prompt, you can enter R code. Use the following code to load example data into the default storage for HDInsight:
-
- ```R
- # Set the HDFS (WASB) location of example data
- bigDataDirRoot <- "/example/data"
-
- # create a local folder for storing data temporarily
- source <- "/tmp/AirOnTimeCSV2012"
- dir.create(source)
-
- # Download data to the tmp folder
- remoteDir <- "https://packages.revolutionanalytics.com/datasets/AirOnTimeCSV2012"
- download.file(file.path(remoteDir, "airOT201201.csv"), file.path(source, "airOT201201.csv"))
- download.file(file.path(remoteDir, "airOT201202.csv"), file.path(source, "airOT201202.csv"))
- download.file(file.path(remoteDir, "airOT201203.csv"), file.path(source, "airOT201203.csv"))
- download.file(file.path(remoteDir, "airOT201204.csv"), file.path(source, "airOT201204.csv"))
- download.file(file.path(remoteDir, "airOT201205.csv"), file.path(source, "airOT201205.csv"))
- download.file(file.path(remoteDir, "airOT201206.csv"), file.path(source, "airOT201206.csv"))
- download.file(file.path(remoteDir, "airOT201207.csv"), file.path(source, "airOT201207.csv"))
- download.file(file.path(remoteDir, "airOT201208.csv"), file.path(source, "airOT201208.csv"))
- download.file(file.path(remoteDir, "airOT201209.csv"), file.path(source, "airOT201209.csv"))
- download.file(file.path(remoteDir, "airOT201210.csv"), file.path(source, "airOT201210.csv"))
- download.file(file.path(remoteDir, "airOT201211.csv"), file.path(source, "airOT201211.csv"))
- download.file(file.path(remoteDir, "airOT201212.csv"), file.path(source, "airOT201212.csv"))
-
- # Set directory in bigDataDirRoot to load the data into
- inputDir <- file.path(bigDataDirRoot,"AirOnTimeCSV2012")
-
- # Make the directory
- rxHadoopMakeDir(inputDir)
-
- # Copy the data from source to input
- rxHadoopCopyFromLocal(source, bigDataDirRoot)
- ```
-
- This step may take around 10 minutes to complete.
-
-1. Create column information for the airline data and define two data sources. Enter the following code in the R console:
-
- ```R
- # Define the HDFS (WASB) file system
- hdfsFS <- RxHdfsFileSystem()
-
- # Create info list for the airline data
- airlineColInfo <- list(
- DAY_OF_WEEK = list(type = "factor"),
- ORIGIN = list(type = "factor"),
- DEST = list(type = "factor"),
- DEP_TIME = list(type = "integer"),
- ARR_DEL15 = list(type = "logical"))
-
- # get all the column names
- varNames <- names(airlineColInfo)
-
- # Define the text data source in hdfs
- airOnTimeData <- RxTextData(inputDir, colInfo = airlineColInfo, varsToKeep = varNames, fileSystem = hdfsFS)
-
- # Define the text data source in local system
- airOnTimeDataLocal <- RxTextData(source, colInfo = airlineColInfo, varsToKeep = varNames)
-
- # formula to use
- formula = "ARR_DEL15 ~ ORIGIN + DAY_OF_WEEK + DEP_TIME + DEST"
- ```
-
-1. Run a logistic regression over the data using the **local** compute context. Enter the following code in the R console:
-
- ```R
- # Set a local compute context
- rxSetComputeContext("local")
-
- # Run a logistic regression
- system.time(
- modelLocal <- rxLogit(formula, data = airOnTimeDataLocal)
- )
-
- # Display a summary
- summary(modelLocal)
- ```
-
- The computations should complete in about 7 minutes. You should see output that ends with lines similar to the following snippet:
-
- ```output
- Data: airOnTimeDataLocal (RxTextData Data Source)
- File name: /tmp/AirOnTimeCSV2012
- Dependent variable(s): ARR_DEL15
- Total independent variables: 634 (Including number dropped: 3)
- Number of valid observations: 6005381
- Number of missing observations: 91381
- -2*LogLikelihood: 5143814.1504 (Residual deviance on 6004750 degrees of freedom)
-
- Coefficients:
- Estimate Std. Error z value Pr(>|z|)
- (Intercept) -3.370e+00 1.051e+00 -3.208 0.00134 **
- ORIGIN=JFK 4.549e-01 7.915e-01 0.575 0.56548
- ORIGIN=LAX 5.265e-01 7.915e-01 0.665 0.50590
- ......
- DEST=SHD 5.975e-01 9.371e-01 0.638 0.52377
- DEST=TTN 4.563e-01 9.520e-01 0.479 0.63172
- DEST=LAR -1.270e+00 7.575e-01 -1.676 0.09364 .
- DEST=BPT Dropped Dropped Dropped Dropped
-
-
-
- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-
- Condition number of final variance-covariance matrix: 11904202
- Number of iterations: 7
- ```
-
-1. Run the same logistic regression using the **Spark** context. The Spark context distributes the processing over all the worker nodes in the HDInsight cluster. Enter the following code in the R console:
-
- ```R
- # Define the Spark compute context
- mySparkCluster <- RxSpark()
-
- # Set the compute context
- rxSetComputeContext(mySparkCluster)
-
- # Run a logistic regression
- system.time(
- modelSpark <- rxLogit(formula, data = airOnTimeData)
- )
-
- # Display a summary
- summary(modelSpark)
- ```
-
- The computations should complete in about 5 minutes.
-
-1. To quit the R console, use the following command:
-
- ```R
- quit()
- ```
-
-## Clean up resources
-
-After you complete the quickstart, you may want to delete the cluster. With HDInsight, your data is stored in Azure Storage, so you can safely delete a cluster when it is not in use. You are also charged for an HDInsight cluster, even when it is not in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they are not in use.
-
-To delete a cluster, see [Delete an HDInsight cluster using your browser, PowerShell, or the Azure CLI](../hdinsight-delete-cluster.md).
-
-## Next steps
-
-In this quickstart, you learned how to run an R script with R console that demonstrated using Spark for distributed R computations. Advance to the next article to learn the options that are available to specify whether and how execution is parallelized across cores of the edge node or HDInsight cluster.
-
-> [!div class="nextstepaction"]
->[Compute context options for ML Services on HDInsight](./r-server-compute-contexts.md)
hdinsight Machine Learning Services Quickstart Job Rstudio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/machine-learning-services-quickstart-job-rstudio.md
- Title: 'Quickstart: RStudio Server & ML Services for R - Azure HDInsight'
-description: In the quickstart, you execute an R script on an ML Services cluster in Azure HDInsight using RStudio Server.
-- Previously updated : 06/19/2019--
-#Customer intent: I want to learn how to execute an R script using ML Services in Azure HDInsight for RStudio Server.
--
-# Quickstart: Execute an R script on an ML Services cluster in Azure HDInsight using RStudio Server
--
-ML Services on Azure HDInsight allows R scripts to use Apache Spark and Apache Hadoop MapReduce to run distributed computations. ML Services controls how calls are executed by setting the compute context. The edge node of a cluster provides a convenient place to connect to the cluster and to run your R scripts. With an edge node, you have the option of running the parallelized distributed functions of RevoScaleR across the cores of the edge node server. You can also run them across the nodes of the cluster by using RevoScaleR's Hadoop Map Reduce or Apache Spark compute contexts.
-
-In this quickstart, you learn how to run an R script with RStudio Server that demonstrates using Spark for distributed R computations. You will define a compute context to perform computations locally on an edge node, and again distributed across the nodes in the HDInsight cluster.
-
-## Prerequisite
-
-An ML Services cluster on HDInsight. See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) and select **ML Services** for **Cluster type**.
-
-## Connect to RStudio Server
-
-RStudio Server runs on the cluster's edge node. Go to the following URL where `CLUSTERNAME` is the name of the ML Services cluster you created:
-
-```
-https://CLUSTERNAME.azurehdinsight.net/rstudio/
-```
-
-The first time you sign in, you need to authenticate twice. At the first authentication prompt, provide the cluster admin login and password (the default is `admin`). At the second authentication prompt, provide the SSH login and password (the default is `sshuser`). Subsequent sign-ins require only the SSH credentials.
-
-Once you are connected, your screen should resemble the following screenshot:
--
-## Use a compute context
-
-1. From RStudio Server, use the following code to load example data into the default storage for HDInsight:
-
- ```RStudio
- # Set the HDFS (WASB) location of example data
- bigDataDirRoot <- "/example/data"
-
- # create a local folder for storing data temporarily
- source <- "/tmp/AirOnTimeCSV2012"
- dir.create(source)
-
- # Download data to the tmp folder
- remoteDir <- "https://packages.revolutionanalytics.com/datasets/AirOnTimeCSV2012"
- download.file(file.path(remoteDir, "airOT201201.csv"), file.path(source, "airOT201201.csv"))
- download.file(file.path(remoteDir, "airOT201202.csv"), file.path(source, "airOT201202.csv"))
- download.file(file.path(remoteDir, "airOT201203.csv"), file.path(source, "airOT201203.csv"))
- download.file(file.path(remoteDir, "airOT201204.csv"), file.path(source, "airOT201204.csv"))
- download.file(file.path(remoteDir, "airOT201205.csv"), file.path(source, "airOT201205.csv"))
- download.file(file.path(remoteDir, "airOT201206.csv"), file.path(source, "airOT201206.csv"))
- download.file(file.path(remoteDir, "airOT201207.csv"), file.path(source, "airOT201207.csv"))
- download.file(file.path(remoteDir, "airOT201208.csv"), file.path(source, "airOT201208.csv"))
- download.file(file.path(remoteDir, "airOT201209.csv"), file.path(source, "airOT201209.csv"))
- download.file(file.path(remoteDir, "airOT201210.csv"), file.path(source, "airOT201210.csv"))
- download.file(file.path(remoteDir, "airOT201211.csv"), file.path(source, "airOT201211.csv"))
- download.file(file.path(remoteDir, "airOT201212.csv"), file.path(source, "airOT201212.csv"))
-
- # Set directory in bigDataDirRoot to load the data into
- inputDir <- file.path(bigDataDirRoot,"AirOnTimeCSV2012")
-
- # Make the directory
- rxHadoopMakeDir(inputDir)
-
- # Copy the data from source to input
- rxHadoopCopyFromLocal(source, bigDataDirRoot)
- ```
-
- This step may take around 8 minutes to complete.
-
-1. Create column information for the airline data and define two data sources. Enter the following code in RStudio:
-
- ```RStudio
- # Define the HDFS (WASB) file system
- hdfsFS <- RxHdfsFileSystem()
-
- # Create info list for the airline data
- airlineColInfo <- list(
- DAY_OF_WEEK = list(type = "factor"),
- ORIGIN = list(type = "factor"),
- DEST = list(type = "factor"),
- DEP_TIME = list(type = "integer"),
- ARR_DEL15 = list(type = "logical"))
-
- # get all the column names
- varNames <- names(airlineColInfo)
-
- # Define the text data source in hdfs
- airOnTimeData <- RxTextData(inputDir, colInfo = airlineColInfo, varsToKeep = varNames, fileSystem = hdfsFS)
-
- # Define the text data source in local system
- airOnTimeDataLocal <- RxTextData(source, colInfo = airlineColInfo, varsToKeep = varNames)
-
- # formula to use
- formula = "ARR_DEL15 ~ ORIGIN + DAY_OF_WEEK + DEP_TIME + DEST"
- ```
-
-1. Run a logistic regression over the data using the **local** compute context. Enter the following code in RStudio:
-
- ```RStudio
- # Set a local compute context
- rxSetComputeContext("local")
-
- # Run a logistic regression
- system.time(
- modelLocal <- rxLogit(formula, data = airOnTimeDataLocal)
- )
-
- # Display a summary
- summary(modelLocal)
- ```
-
- The computations should complete in about 7 minutes. You should see output that ends with lines similar to the following snippet:
-
- ```output
- Data: airOnTimeDataLocal (RxTextData Data Source)
- File name: /tmp/AirOnTimeCSV2012
- Dependent variable(s): ARR_DEL15
- Total independent variables: 634 (Including number dropped: 3)
- Number of valid observations: 6005381
- Number of missing observations: 91381
- -2*LogLikelihood: 5143814.1504 (Residual deviance on 6004750 degrees of freedom)
-
- Coefficients:
- Estimate Std. Error z value Pr(>|z|)
- (Intercept) -3.370e+00 1.051e+00 -3.208 0.00134 **
- ORIGIN=JFK 4.549e-01 7.915e-01 0.575 0.56548
- ORIGIN=LAX 5.265e-01 7.915e-01 0.665 0.50590
- ......
- DEST=SHD 5.975e-01 9.371e-01 0.638 0.52377
- DEST=TTN 4.563e-01 9.520e-01 0.479 0.63172
- DEST=LAR -1.270e+00 7.575e-01 -1.676 0.09364 .
- DEST=BPT Dropped Dropped Dropped Dropped
-
-
-
- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-
- Condition number of final variance-covariance matrix: 11904202
- Number of iterations: 7
- ```
-
-1. Run the same logistic regression using the **Spark** context. The Spark context distributes the processing over all the worker nodes in the HDInsight cluster. Enter the following code in RStudio:
-
- ```RStudio
- # Define the Spark compute context
- mySparkCluster <- RxSpark()
-
- # Set the compute context
- rxSetComputeContext(mySparkCluster)
-
- # Run a logistic regression
- system.time(
- modelSpark <- rxLogit(formula, data = airOnTimeData)
- )
-
- # Display a summary
- summary(modelSpark)
- ```
-
- The computations should complete in about 5 minutes.
-
-## Clean up resources
-
-After you complete the quickstart, you may want to delete the cluster. With HDInsight, your data is stored in Azure Storage, so you can safely delete a cluster when it is not in use. You are also charged for an HDInsight cluster, even when it is not in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they are not in use.
-
-To delete a cluster, see [Delete an HDInsight cluster using your browser, PowerShell, or the Azure CLI](../hdinsight-delete-cluster.md).
-
-## Next steps
-
-In this quickstart, you learned how to run an R script with RStudio Server that demonstrated using Spark for distributed R computations. Advance to the next article to learn the options that are available to specify whether and how execution is parallelized across cores of the edge node or HDInsight cluster.
-
-> [!div class="nextstepaction"]
->[Compute context options for ML Services on HDInsight](./r-server-compute-contexts.md)
-
-> [!NOTE]
-> This page describes features of RStudio software. Microsoft Azure HDInsight is not affiliated with RStudio, Inc.
hdinsight Ml Services Tutorial Spark Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/ml-services-tutorial-spark-compute.md
- Title: 'Tutorial: Use R in a Spark compute context in Azure HDInsight'
-description: Tutorial - Get started with R and Spark on an Azure HDInsight Machine Learning services cluster.
-- Previously updated : 06/21/2019-
-#Customer intent: As a developer, I need to understand the Spark compute context for Machine Learning services.
--
-# Tutorial: Use R in a Spark compute context in Azure HDInsight
--
-This tutorial provides a step-by-step introduction to using the R functions in Apache Spark that run on an Azure HDInsight Machine Learning services cluster.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Download the sample data to local storage
-> * Copy the data to default storage
-> * Set up a dataset
-> * Create data sources
-> * Create a compute context for Spark
-> * Fit a linear model
-> * Use composite XDF files
-> * Convert XDF to CSV
-
-## Prerequisites
-
-* An Azure HDInsight Machine Learning services cluster. Go to [Create Apache Hadoop clusters by using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) and, for **Cluster type**, select **ML Services**.
-
-## Connect to RStudio Server
-
-RStudio Server runs on the cluster's edge node. Go to the following site (where *CLUSTERNAME* in the URL is the name of the HDInsight Machine Learning services cluster you created):
-
-```
-https://CLUSTERNAME.azurehdinsight.net/rstudio/
-```
-
-The first time you sign in, you authenticate twice. At the first authentication prompt, provide the cluster admin username and password (the default is *admin*). At the second authentication prompt, provide the SSH username and password (the default is *sshuser*). Subsequent sign-ins require only the SSH credentials.
-
-## Download the sample data to local storage
-
-The *Airline 2012 On-Time Data Set* consists of 12 comma-separated files that contain flight arrival and departure details for all commercial flights within the US for the year 2012. This dataset is large, with over 6 million observations.
-
-1. Initialize a few environment variables. In the RStudio Server console, enter the following code:
-
- ```R
- bigDataDirRoot <- "/tutorial/data" # root directory on cluster default storage
- localDir <- "/tmp/AirOnTimeCSV2012" # directory on edge node
- remoteDir <- "https://packages.revolutionanalytics.com/datasets/AirOnTimeCSV2012" # location of data
- ```
-
-1. In the right pane, select the **Environment** tab. The variables are displayed under **Values**.
-
- :::image type="content" source="./media/ml-services-tutorial-spark-compute/hdinsight-rstudio-image.png" alt-text="HDInsight R studio web console" border="true":::
-
-1. Create a local directory, and download the sample data. In RStudio, enter the following code:
-
- ```R
- # Create local directory
- dir.create(localDir)
-
- # Download data to the tmp folder(local)
- download.file(file.path(remoteDir, "airOT201201.csv"), file.path(localDir, "airOT201201.csv"))
- download.file(file.path(remoteDir, "airOT201202.csv"), file.path(localDir, "airOT201202.csv"))
- download.file(file.path(remoteDir, "airOT201203.csv"), file.path(localDir, "airOT201203.csv"))
- download.file(file.path(remoteDir, "airOT201204.csv"), file.path(localDir, "airOT201204.csv"))
- download.file(file.path(remoteDir, "airOT201205.csv"), file.path(localDir, "airOT201205.csv"))
- download.file(file.path(remoteDir, "airOT201206.csv"), file.path(localDir, "airOT201206.csv"))
- download.file(file.path(remoteDir, "airOT201207.csv"), file.path(localDir, "airOT201207.csv"))
- download.file(file.path(remoteDir, "airOT201208.csv"), file.path(localDir, "airOT201208.csv"))
- download.file(file.path(remoteDir, "airOT201209.csv"), file.path(localDir, "airOT201209.csv"))
- download.file(file.path(remoteDir, "airOT201210.csv"), file.path(localDir, "airOT201210.csv"))
- download.file(file.path(remoteDir, "airOT201211.csv"), file.path(localDir, "airOT201211.csv"))
- download.file(file.path(remoteDir, "airOT201212.csv"), file.path(localDir, "airOT201212.csv"))
- ```
-
- The download should be complete in about 9.5 minutes.
-
-## Copy the data to default storage
-
-The Hadoop Distributed File System (HDFS) location is specified with the `airDataDir` variable. In RStudio, enter the following code:
-
-```R
-# Set directory in bigDataDirRoot to load the data into
-airDataDir <- file.path(bigDataDirRoot,"AirOnTimeCSV2012")
-
-# Create directory (default storage)
-rxHadoopMakeDir(airDataDir)
-
-# Copy data from local storage to default storage
-rxHadoopCopyFromLocal(localDir, bigDataDirRoot)
-
-# Optional. Verify files
-rxHadoopListFiles(airDataDir)
-```
-
-The step should be complete in about 10 seconds.
-
-## Set up a dataset
-
-1. Create a file system object that uses the default values. In RStudio, enter the following code:
-
- ```R
- # Define the HDFS (WASB) file system
- hdfsFS <- RxHdfsFileSystem()
- ```
-
-1. Because the original CSV files have rather unwieldy variable names, you supply a *colInfo* list to make them more manageable. In RStudio, enter the following code:
-
- ```R
- airlineColInfo <- list(
- MONTH = list(newName = "Month", type = "integer"),
- DAY_OF_WEEK = list(newName = "DayOfWeek", type = "factor",
- levels = as.character(1:7),
- newLevels = c("Mon", "Tues", "Wed", "Thur", "Fri", "Sat",
- "Sun")),
- UNIQUE_CARRIER = list(newName = "UniqueCarrier", type =
- "factor"),
- ORIGIN = list(newName = "Origin", type = "factor"),
- DEST = list(newName = "Dest", type = "factor"),
- CRS_DEP_TIME = list(newName = "CRSDepTime", type = "integer"),
- DEP_TIME = list(newName = "DepTime", type = "integer"),
- DEP_DELAY = list(newName = "DepDelay", type = "integer"),
- DEP_DELAY_NEW = list(newName = "DepDelayMinutes", type =
- "integer"),
- DEP_DEL15 = list(newName = "DepDel15", type = "logical"),
- DEP_DELAY_GROUP = list(newName = "DepDelayGroups", type =
- "factor",
- levels = as.character(-2:12),
- newLevels = c("< -15", "-15 to -1","0 to 14", "15 to 29",
- "30 to 44", "45 to 59", "60 to 74",
- "75 to 89", "90 to 104", "105 to 119",
- "120 to 134", "135 to 149", "150 to 164",
- "165 to 179", ">= 180")),
- ARR_DELAY = list(newName = "ArrDelay", type = "integer"),
- ARR_DELAY_NEW = list(newName = "ArrDelayMinutes", type =
- "integer"),
- ARR_DEL15 = list(newName = "ArrDel15", type = "logical"),
- AIR_TIME = list(newName = "AirTime", type = "integer"),
- DISTANCE = list(newName = "Distance", type = "integer"),
- DISTANCE_GROUP = list(newName = "DistanceGroup", type =
- "factor",
- levels = as.character(1:11),
- newLevels = c("< 250", "250-499", "500-749", "750-999",
- "1000-1249", "1250-1499", "1500-1749", "1750-1999",
- "2000-2249", "2250-2499", ">= 2500")))
-
- varNames <- names(airlineColInfo)
- ```
-
-## Create data sources
-
-In a Spark compute context, you can create data sources by using the following functions:
-
-|Function | Description |
-||-|
-|`RxTextData` | A comma-delimited text data source. |
-|`RxXdfData` | Data in the XDF data file format. In RevoScaleR, the XDF file format is modified for Hadoop to store data in a composite set of files rather than a single file. |
-|`RxHiveData` | Generates a Hive Data Source object.|
-|`RxParquetData` | Generates a Parquet Data Source object.|
-|`RxOrcData` | Generates an Orc Data Source object.|
-
-Create an [RxTextData](/machine-learning-server/r-reference/revoscaler/rxtextdata) object by using the files you copied to HDFS. In RStudio, enter the following code:
-
-```R
-airDS <- RxTextData( airDataDir,
- colInfo = airlineColInfo,
- varsToKeep = varNames,
- fileSystem = hdfsFS )
-```
-
-## Create a compute context for Spark
-
-To load data and run analyses on worker nodes, you set the compute context in your script to [RxSpark](/machine-learning-server/r-reference/revoscaler/rxspark). In this context, R functions automatically distribute the workload across all the worker nodes, with no built-in requirement for managing jobs or the queue. The Spark compute context is established through `RxSpark()` or `rxSparkConnect()`, and `rxSparkDisconnect()` returns you to a local compute context. In RStudio, enter the following code:
-
-```R
-# Define the Spark compute context
-mySparkCluster <- RxSpark()
-
-# Set the compute context
-rxSetComputeContext(mySparkCluster)
-```
-
-## Fit a linear model
-
-1. Use the [rxLinMod](/machine-learning-server/r-reference/revoscaler/rxlinmod) function to fit a linear model using your `airDS` data source. In RStudio, enter the following code:
-
- ```R
- system.time(
- delayArr <- rxLinMod(ArrDelay ~ DayOfWeek, data = airDS,
- cube = TRUE)
- )
- ```
-
- This step should be complete in 2 to 3 minutes.
-
-1. View the results. In RStudio, enter the following code:
-
- ```R
- summary(delayArr)
- ```
-
- You should see the following results:
-
- ```output
- Call:
- rxLinMod(formula = ArrDelay ~ DayOfWeek, data = airDS, cube = TRUE)
-
- Cube Linear Regression Results for: ArrDelay ~ DayOfWeek
- Data: airDataXdf (RxXdfData Data Source)
- File name: /tutorial/data/AirOnTimeCSV2012
- Dependent variable(s): ArrDelay
- Total independent variables: 7
- Number of valid observations: 6005381
- Number of missing observations: 91381
-
- Coefficients:
- Estimate Std. Error t value Pr(>|t|) | Counts
- DayOfWeek=Mon 3.54210 0.03736 94.80 2.22e-16 *** | 901592
- DayOfWeek=Tues 1.80696 0.03835 47.12 2.22e-16 *** | 855805
- DayOfWeek=Wed 2.19424 0.03807 57.64 2.22e-16 *** | 868505
- DayOfWeek=Thur 4.65502 0.03757 123.90 2.22e-16 *** | 891674
- DayOfWeek=Fri 5.64402 0.03747 150.62 2.22e-16 *** | 896495
- DayOfWeek=Sat 0.91008 0.04144 21.96 2.22e-16 *** | 732944
- DayOfWeek=Sun 2.82780 0.03829 73.84 2.22e-16 *** | 858366
-
- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-
- Residual standard error: 35.48 on 6005374 degrees of freedom
- Multiple R-squared: 0.001827 (as if intercept included)
- Adjusted R-squared: 0.001826
- F-statistic: 1832 on 6 and 6005374 DF, p-value: < 2.2e-16
- Condition number: 1
- ```
-
- The results indicate that you've processed all the data, 6 million observations, using all the CSV files in the specified directory. Because you specified `cube = TRUE`, you have an estimated coefficient for each day of the week (and not the intercept).
-
-## Use composite XDF files
-
-As you've seen, you can analyze CSV files directly with R on Hadoop. But you can do the analysis more quickly if you store the data in a more efficient format. The R XDF file format is efficient, but it's modified somewhat for HDFS so that individual files remain within a single HDFS block. (The HDFS block size varies from installation to installation but is typically either 64 MB or 128 MB.)
-
-When you use [rxImport](/machine-learning-server/r-reference/revoscaler/rximport) on Hadoop to create a set of composite XDF files, you specify an `RxTextData` data source such as `airDS` as the `inData` argument and an `RxXdfData` data source with `fileSystem` set to an HDFS file system as the `outFile` argument. You can then use the `RxXdfData` object as the `data` argument in subsequent R analyses.
-
-1. Define an `RxXdfData` object. In RStudio, enter the following code:
-
- ```R
- airDataXdfDir <- file.path(bigDataDirRoot,"AirOnTimeXDF2012")
-
- airDataXdf <- RxXdfData( airDataXdfDir,
- fileSystem = hdfsFS )
- ```
-
-1. Set a block size of 250,000 rows and specify that all rows are read. In RStudio, enter the following code:
-
- ```R
- blockSize <- 250000
- numRowsToRead = -1
- ```
-
-1. Import the data using `rxImport`. In RStudio, enter the following code:
-
- ```R
- rxImport(inData = airDS,
- outFile = airDataXdf,
- rowsPerRead = blockSize,
- overwrite = TRUE,
- numRows = numRowsToRead )
- ```
-
- This step should be complete in a few minutes.
-
-1. Re-estimate the same linear model, using the new, faster data source. In RStudio, enter the following code:
-
- ```R
- system.time(
- delayArr <- rxLinMod(ArrDelay ~ DayOfWeek, data = airDataXdf,
- cube = TRUE)
- )
- ```
-
- The step should be complete in less than a minute.
-
-1. View the results. The results should be the same as from the CSV files. In RStudio, enter the following code:
-
- ```R
- summary(delayArr)
- ```
-
-## Convert XDF to CSV
-
-### In a Spark context
-
-If you converted your CSV files to XDF file format for greater efficiency while running the analyses, but now want to convert your data back to CSV, you can do so by using [rxDataStep](/machine-learning-server/r-reference/revoscaler/rxdatastep).
-
-To create a folder of CSV files, first create an `RxTextData` object by using a directory name as the file argument. This object represents the folder in which to create the CSV files. This directory is created when you run the `rxDataStep`. Then, point to this `RxTextData` object in the `outFile` argument of the `rxDataStep`. Each CSV file that's created is named with the directory name followed by a number.
-
-Suppose that you want to write out a folder of CSV files in HDFS from your `airDataXdf` composite XDF after you perform the logistic regression and prediction, so that the new CSV files contain the predicted values and residuals. In RStudio, enter the following code:
-
-```R
-airDataCsvDir <- file.path(bigDataDirRoot,"AirDataCSV2012")
-airDataCsvDS <- RxTextData(airDataCsvDir,fileSystem=hdfsFS)
-rxDataStep(inData=airDataXdf, outFile=airDataCsvDS)
-```
-
-This step should be complete in about 2.5 minutes.
-
-The `rxDataStep` wrote out one CSV file for every XDFD file in the input composite XDF file. This is the default behavior for writing CSV files from composite XDF files to HDFS when the compute context is set to `RxSpark`.
-
-### In a local context
-
-Alternatively, when you're done performing your analyses, you could switch your compute context back to `local` to take advantage of two arguments within `RxTextData` that give you slightly more control when you write out CSV files to HDFS: `createFileSet` and `rowsPerOutFile`. When you set `createFileSet` to `TRUE`, a folder of CSV files is written to the directory that you specify. When you set `createFileSet` to `FALSE`, a single CSV file is written. You can set the second argument, `rowsPerOutFile`, to an integer to indicate how many rows to write to each CSV file when `createFileSet` is `TRUE`.
-
-In RStudio, enter the following code:
-
-```R
-rxSetComputeContext("local")
-airDataCsvRowsDir <- file.path(bigDataDirRoot,"AirDataCSVRows2012")
-airDataCsvRowsDS <- RxTextData(airDataCsvRowsDir, fileSystem=hdfsFS, createFileSet=TRUE, rowsPerOutFile=1000000)
-rxDataStep(inData=airDataXdf, outFile=airDataCsvRowsDS)
-```
-
-This step should be complete in about 10 minutes.
-
-When you use an `RxSpark` compute context, `createFileSet` defaults to `TRUE` and `rowsPerOutFile` has no effect. Therefore, if you want to create a single CSV or customize the number of rows per file, perform `rxDataStep` in a `local` compute context (the data can still be in HDFS).
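A minimal sketch of writing a single CSV file, reusing the `airDataXdf` and `hdfsFS` objects defined earlier (the output file name is only an example):

```r
# Switch to the local compute context and write a single CSV file to HDFS.
rxSetComputeContext("local")

airDataSingleCsv <- RxTextData(file.path(bigDataDirRoot, "AirDataSingle2012.csv"),
                               fileSystem = hdfsFS,
                               createFileSet = FALSE)   # one CSV file instead of a folder
rxDataStep(inData = airDataXdf, outFile = airDataSingleCsv)
```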
-
-## Final steps
-
-1. Clean up the data. In RStudio, enter the following code:
-
- ```R
- rxHadoopRemoveDir(airDataDir)
- rxHadoopRemoveDir(airDataXdfDir)
- rxHadoopRemoveDir(airDataCsvDir)
- rxHadoopRemoveDir(airDataCsvRowsDir)
- rxHadoopRemoveDir(bigDataDirRoot)
- ```
-
-1. Stop the remote Spark application. In RStudio, enter the following code:
-
- ```R
- rxStopEngine(mySparkCluster)
- ```
-
-1. Quit the R session. In RStudio, enter the following code:
-
- ```R
- quit()
- ```
-
-## Clean up resources
-
-After you complete the tutorial, you might want to delete the cluster. With HDInsight, your data is stored in Azure Storage, so you can safely delete a cluster when it's not in use. You're also charged for an HDInsight cluster, even when it's not in use. Because the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they're not in use.
-
-To delete a cluster, see [Delete an HDInsight cluster by using your browser, PowerShell, or the Azure CLI](../hdinsight-delete-cluster.md).
-
-## Next steps
-
-In this tutorial, you learned how to use R functions in Apache Spark that are running on an HDInsight Machine Learning services cluster. For more information, see the following articles:
-
-* [Compute context options for an Azure HDInsight Machine Learning services cluster](r-server-compute-contexts.md)
-* [R Functions for Spark on Hadoop](/machine-learning-server/r-reference/revoscaler/revoscaler-hadoop-functions)
hdinsight Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/quickstart-resource-manager-template.md
- Title: 'Quickstart: Create ML Services cluster using template - Azure HDInsight'
-description: This quickstart shows how to use Resource Manager template to create an ML Services cluster in Azure HDInsight.
--- Previously updated : 03/13/2020-
-#Customer intent: As a developer new to ML Services on Azure, I need to see how to create an ML Services cluster.
--
-# Quickstart: Create ML Services cluster in Azure HDInsight using ARM template
--
-In this quickstart, you use an Azure Resource Manager template (ARM template) to create an [ML Services](./r-server-overview.md) cluster in Azure HDInsight. Microsoft Machine Learning Server is available as a deployment option when you create HDInsight clusters in Azure. The cluster type that provides this option is called ML Services. This capability provides data scientists, statisticians, and R programmers with on-demand access to scalable, distributed methods of analytics on HDInsight.
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-
-[:::image type="icon" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.hdinsight%2Fhdinsight-rserver%2Fazuredeploy.json)
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Review the template
-
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/hdinsight-rserver/).
--
-Two Azure resources are defined in the template:
-
-* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage Account.
-* [Microsoft.HDInsight/cluster](/azure/templates/microsoft.hdinsight/clusters): create an HDInsight cluster.
-
-## Deploy the template
-
-1. Select the **Deploy to Azure** button below to sign in to Azure and open the ARM template.
-
- [:::image type="icon" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.hdinsight%2Fhdinsight-rserver%2Fazuredeploy.json)
-
-1. Enter or select the following values:
-
- |Property |Description |
- |||
- |Subscription|From the drop-down list, select the Azure subscription that's used for the cluster.|
- |Resource group|From the drop-down list, select your existing resource group, or select **Create new**.|
- |Location|The value will autopopulate with the location used for the resource group.|
- |Cluster Name|Enter a globally unique name. For this template, use only lowercase letters and numbers.|
- |Cluster Login User Name|Provide the username. The default is **admin**.|
- |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase letter, one lowercase letter, and one non-alphanumeric character (except the characters ' " ` ). |
- |Ssh User Name|Provide the username. The default is **sshuser**.|
- |Ssh Password|Provide the password.|
-
- :::image type="content" source="./media/quickstart-resource-manager-template/resource-manager-template-rserver.png" alt-text="Deploy Resource Manager template HBase" border="true":::
-
-1. Review the **TERMS AND CONDITIONS**. Then select **I agree to the terms and conditions stated above**, then **Purchase**. You'll receive a notification that your deployment is in progress. It takes about 20 minutes to create a cluster.
-
-## Review deployed resources
-
-Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page will list your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Blob Storage](../hdinsight-hadoop-use-blob-storage.md) account, an [Azure Data Lake Storage Gen1](../hdinsight-hadoop-use-data-lake-storage-gen1.md), or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred to as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
-
-## Clean up resources
-
-After you complete the quickstart, you may want to delete the cluster. With HDInsight, your data is stored in Azure Storage, so you can safely delete a cluster when it isn't in use. You're also charged for an HDInsight cluster, even when it isn't in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they aren't in use.
-
-From the Azure portal, navigate to your cluster, and select **Delete**.
-
-[Delete Resource Manager template HBase](./media/quickstart-resource-manager-template/azure-portal-delete-rserver.png)
-
-You can also select the resource group name to open the resource group page, and then select **Delete resource group**. By deleting the resource group, you delete both the HDInsight cluster, and the default storage account.
-
-## Next steps
-
-In this quickstart, you learned how to create an ML Services cluster in HDInsight using an ARM template. In the next article, you learn how to run an R script with RStudio Server that demonstrates using Spark for distributed R computations.
-
-> [!div class="nextstepaction"]
-> [Execute an R script on an ML Services cluster in Azure HDInsight using RStudio Server](./machine-learning-services-quickstart-job-rstudio.md)
hdinsight R Server Compute Contexts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-compute-contexts.md
- Title: Compute context options for ML Services on HDInsight - Azure
-description: Learn about the different compute context options available to users with ML Services on HDInsight
-- Previously updated : 01/02/2020---
-# Compute context options for ML Services on HDInsight
--
-ML Services on Azure HDInsight controls how calls are executed by setting the compute context. This article outlines the options that are available to specify whether and how execution is parallelized across cores of the edge node or HDInsight cluster.
-
-The edge node of a cluster provides a convenient place to connect to the cluster and to run your R scripts. With an edge node, you have the option of running the parallelized distributed functions of RevoScaleR across the cores of the edge node server. You can also run them across the nodes of the cluster by using RevoScaleR's Hadoop Map Reduce or Apache Spark compute contexts.
-
-## ML Services on Azure HDInsight
-
-[ML Services on Azure HDInsight](r-server-overview.md) provides the latest capabilities for R-based analytics. It can use data that is stored in an Apache Hadoop HDFS container in your [Azure Blob](../../storage/common/storage-introduction.md "Azure Blob storage") storage account, a Data Lake Store, or the local Linux file system. Since ML Services is built on open-source R, the R-based applications you build can apply any of the 8000+ open-source R packages. They can also use the routines in [RevoScaleR](/machine-learning-server/r-reference/revoscaler/revoscaler), Microsoft's big data analytics package that is included with ML Services.
-
-## Compute contexts for an edge node
-
-In general, an R script that's run in an ML Services cluster on the edge node runs within the R interpreter on that node. The exceptions are those steps that call a RevoScaleR function. The RevoScaleR calls run in a compute environment that is determined by how you set the RevoScaleR compute context. When you run your R script from an edge node, the possible values of the compute context are:
-
-- local sequential (*local*)
-- local parallel (*localpar*)
-- Map Reduce
-- Spark
-
-The *local* and *localpar* options differ only in how **rxExec** calls are executed. They both execute other rx-function calls in a parallel manner across all available cores unless specified otherwise through use of the RevoScaleR **numCoresToUse** option, for example `rxOptions(numCoresToUse=6)`. Parallel execution options offer optimal performance.
-
-The following table summarizes the various compute context options to set how calls are executed:
-
-| Compute context | How to set | Execution context |
-| - | - | - |
-| Local sequential | rxSetComputeContext('local') | Parallelized execution across the cores of the edge node server, except for rxExec calls, which are executed serially |
-| Local parallel | rxSetComputeContext('localpar') | Parallelized execution across the cores of the edge node server |
-| Spark | RxSpark() | Parallelized distributed execution via Spark across the nodes of the HDI cluster |
-| Map Reduce | RxHadoopMR() | Parallelized distributed execution via Map Reduce across the nodes of the HDI cluster |
-
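A minimal R sketch of how each of these compute contexts might be set from the edge node (the `numCoresToUse` value is only an example):

```r
# Local sequential: rxExec calls run serially; other rx functions still run in parallel.
rxSetComputeContext("local")

# Local parallel: parallelized execution across the cores of the edge node.
rxSetComputeContext("localpar")

# Optionally cap the number of cores RevoScaleR uses locally.
rxOptions(numCoresToUse = 6)

# Spark: distributed execution across the worker nodes of the cluster.
rxSetComputeContext(RxSpark())

# Map Reduce: distributed execution via Hadoop MapReduce (generally slower than Spark).
rxSetComputeContext(RxHadoopMR())
```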
-## Guidelines for deciding on a compute context
-
-Which of the three parallelized execution options you choose depends on the nature of your analytics work and on the size and location of your data. There's no simple formula that tells you which compute context to use. There are, however, some guiding principles that can help you make the right choice, or at least help you narrow down your choices before you run a benchmark. These guiding principles include:
-
-- The local Linux file system is faster than HDFS.
-- Repeated analyses are faster if the data is local, and if it's in XDF.
-- It's preferable to stream small amounts of data from a text data source. If the amount of data is larger, convert it to XDF before analysis.
-- The overhead of copying or streaming the data to the edge node for analysis becomes unmanageable for very large amounts of data.
-- Apache Spark is faster than Map Reduce for analysis in Hadoop.
-
-Given these principles, the following sections offer some general rules of thumb for selecting a compute context.
-
-### Local
-
-- If the amount of data to analyze is small and doesn't require repeated analysis, then stream it directly into the analysis routine using *local* or *localpar*.
-- If the amount of data to analyze is small or medium-sized and requires repeated analysis, then copy it to the local file system, import it to XDF, and analyze it via *local* or *localpar* (see the sketch after this list).
-
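A minimal sketch of the second guideline, assuming a hypothetical CSV file at `/tmp/mydata.csv` on the edge node:

```r
# Import a small or medium-sized local CSV to XDF, then analyze it repeatedly
# in a local parallel compute context.
rxSetComputeContext("localpar")

csvLocal <- RxTextData("/tmp/mydata.csv")   # hypothetical CSV on the local file system
xdfLocal <- RxXdfData("/tmp/mydata.xdf")    # local XDF file for repeated analyses

rxImport(inData = csvLocal, outFile = xdfLocal, overwrite = TRUE)
rxSummary(~ ., data = xdfLocal)             # summarize all variables
```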
-### Apache Spark
-
-- If the amount of data to analyze is large, then import it to a Spark DataFrame using **RxHiveData** or **RxParquetData**, or to XDF in HDFS (unless storage is an issue), and analyze it using the Spark compute context.
-
-### Apache Hadoop Map Reduce
-
-- Use the Map Reduce compute context only if you come across an insurmountable problem with the Spark compute context, since it's generally slower.
-
-## Inline help on rxSetComputeContext
-For more information and examples of RevoScaleR compute contexts, see the inline help in R on the rxSetComputeContext method, for example:
-
-```console
-> ?rxSetComputeContext
-```
-
-You can also refer to the [Distributed computing overview](/machine-learning-server/r/how-to-revoscaler-distributed-computing) in [Machine Learning Server documentation](/machine-learning-server/).
-
-## Next steps
-
-In this article, you learned about the options that are available to specify whether and how execution is parallelized across cores of the edge node or HDInsight cluster. To learn more about how to use ML Services with HDInsight clusters, see the following topics:
-- [Overview of ML Services for Apache Hadoop](r-server-overview.md)
-- [Azure Storage options for ML Services on HDInsight](r-server-storage.md)
hdinsight R Server Hdinsight Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-hdinsight-manage.md
- Title: Manage ML Services cluster on HDInsight - Azure
-description: Learn how to manage various tasks on ML Services cluster in Azure HDInsight.
-- Previously updated : 06/19/2019---
-# Manage ML Services cluster on Azure HDInsight
--
-In this article, you learn how to manage an existing ML Services cluster on Azure HDInsight to perform tasks like adding multiple concurrent users, connecting remotely to the cluster, and changing the compute context.
-
-## Prerequisites
-
-* An ML Services cluster on HDInsight. See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) and select **ML Services** for **Cluster type**.
-
-* A Secure Shell (SSH) client: An SSH client is used to remotely connect to the HDInsight cluster and run commands directly on the cluster. For more information, see [Use SSH with HDInsight.](../hdinsight-hadoop-linux-use-ssh-unix.md).
-
-## Enable multiple concurrent users
-
-You can enable multiple concurrent users for ML Services cluster on HDInsight by adding more users for the edge node on which the RStudio community version runs. When you create an HDInsight cluster, you must provide two users, an HTTP user and an SSH user:
-
-- **Cluster login username**: an HTTP user for authentication through the HDInsight gateway that is used to protect the HDInsight clusters you created. This HTTP user is used to access the Apache Ambari UI, Apache Hadoop YARN UI, as well as other UI components.
-- **Secure Shell (SSH) username**: an SSH user to access the cluster through secure shell. This user is a user in the Linux system for all the head nodes, worker nodes, and edge nodes. So you can use secure shell to access any of the nodes in a remote cluster.
-
-The RStudio Server Community version used in the ML Services cluster on HDInsight accepts only a Linux username and password as a sign-in mechanism. It does not support passing tokens. So, when you try to access RStudio for the first time on an ML Services cluster, you need to sign in twice.
-- First sign in using the HTTP user credentials through the HDInsight Gateway.
-
-- Then use the SSH user credentials to sign in to RStudio.
-
-Currently, only one SSH user account can be created when provisioning an HDInsight cluster. So to enable multiple users to access ML Services cluster on HDInsight, you must create additional users in the Linux system.
-
-Because RStudio runs on the cluster's edge node, there are several steps here:
-
-1. Use the existing SSH user to sign in to the edge node
-2. Add more Linux users in edge node
-3. Use RStudio Community version with the user created
-
-### Step 1: Use the created SSH user to sign in to the edge node
-
-Follow the instructions at [Connect to HDInsight (Apache Hadoop) using SSH](../hdinsight-hadoop-linux-use-ssh-unix.md) to access the edge node. The edge node address for ML Services cluster on HDInsight is `CLUSTERNAME-ed-ssh.azurehdinsight.net`.
-
-### Step 2: Add more Linux users in edge node
-
-To add a user to the edge node, execute the commands:
-
-```bash
-# Add a user
-sudo useradd <yournewusername> -m
-
-# Set password for the new user
-sudo passwd <yournewusername>
-```
-
-The following screenshot shows the outputs.
--
-When prompted for "Current Kerberos password:", just press **Enter** to ignore it. The `-m` option in `useradd` command indicates that the system will create a home folder for the user, which is required for RStudio Community version.
-
-### Step 3: Use RStudio Community version with the user created
-
-Access RStudio from `https://CLUSTERNAME.azurehdinsight.net/rstudio/`. If you are logging in for the first time after creating the cluster, enter the cluster admin credentials followed by the SSH user credentials you created. If this is not your first login, only enter the credentials for the SSH user you created.
-
-You can also sign in using the original credentials (by default, it is *sshuser*) concurrently from another browser window.
-
-Note also that the newly added users do not have root privileges in the Linux system, but they do have the same access to all the files in the remote HDFS and WASB storage.
-
-## Connect remotely to Microsoft ML Services
-
-You can set up access to the HDInsight Spark compute context from a remote instance of ML Client running on your desktop. To do so, you must specify the options (hdfsShareDir, shareDir, sshUsername, sshHostname, sshSwitches, and sshProfileScript) when defining the RxSpark compute context on your desktop. For example:
-
-```r
-myNameNode <- "default"
-myPort <- 0
-
-mySshHostname <- '<clustername>-ed-ssh.azurehdinsight.net' # HDI secure shell hostname
-mySshUsername <- '<sshuser>'# HDI SSH username
-mySshSwitches <- '-i /cygdrive/c/Data/R/davec' # HDI SSH private key
-
-myhdfsShareDir <- paste("/user/RevoShare", mySshUsername, sep="/")
-myShareDir <- paste("/var/RevoShare" , mySshUsername, sep="/")
-
-mySparkCluster <- RxSpark(
- hdfsShareDir = myhdfsShareDir,
- shareDir = myShareDir,
- sshUsername = mySshUsername,
- sshHostname = mySshHostname,
- sshSwitches = mySshSwitches,
- sshProfileScript = '/etc/profile',
- nameNode = myNameNode,
- port = myPort,
- consoleOutput= TRUE
-)
-```
-
-For more information, see the "Using Microsoft Machine Learning Server as an Apache Hadoop Client" section in [How to use RevoScaleR in an Apache Spark compute context](/machine-learning-server/r/how-to-revoscaler-spark#more-spark-scenarios)
-
-## Use a compute context
-
-A compute context allows you to control whether computation is performed locally on the edge node or distributed across the nodes in the HDInsight cluster. For an example of setting a compute context with RStudio Server, see [Execute an R script on an ML Services cluster in Azure HDInsight using RStudio Server](machine-learning-services-quickstart-job-rstudio.md).
-
-## Distribute R code to multiple nodes
-
-With ML Services on HDInsight, you can take existing R code and run it across multiple nodes in the cluster by using `rxExec`. This function is useful when doing a parameter sweep or simulations. The following code is an example of how to use `rxExec`:
-
-```r
-rxExec( function() {Sys.info()["nodename"]}, timesToRun = 4 )
-```
-
-If you are still using the Spark compute context, this command returns the node name of each worker node that the code `Sys.info()["nodename"]` runs on. For example, on a four-node cluster, you expect to receive output similar to the following snippet:
-
-```r
-$rxElem1
- nodename
-"wn3-mymlser"
-
-$rxElem2
- nodename
-"wn0-mymlser"
-
-$rxElem3
- nodename
-"wn3-mymlser"
-
-$rxElem4
- nodename
-"wn3-mymlser"
-```
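For a parameter sweep, `rxElemArg` can pass a different argument value to each run. A minimal sketch with a hypothetical simulation function:

```r
# Toy workload; replace with your own simulation or scoring function.
simulate <- function(rate) {
  mean(rpois(1e6, lambda = rate))
}

# Each element of the vector is passed to one run of the function.
results <- rxExec(simulate, rate = rxElemArg(c(1, 5, 10, 20)))
results
```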
-
-## Access data in Apache Hive and Parquet
-
-HDInsight ML Services allows direct access to data in Hive and Parquet for use by ScaleR functions in the Spark compute context. These capabilities are available through the ScaleR data source functions RxHiveData and RxParquetData, which use Spark SQL to load data directly into a Spark DataFrame for analysis by ScaleR.
-
-The following sample code shows how to use these functions:
-
-```r
-#Create a Spark compute context:
-myHadoopCluster <- rxSparkConnect(reset = TRUE)
-
-#Retrieve some sample data from Hive and run a model:
-hiveData <- RxHiveData("select * from hivesampletable",
- colInfo = list(devicemake = list(type = "factor")))
-rxGetInfo(hiveData, getVarInfo = TRUE)
-
-rxLinMod(querydwelltime ~ devicemake, data=hiveData)
-
-#Retrieve some sample data from Parquet and run a model:
-rxHadoopMakeDir('/share')
-rxHadoopCopyFromLocal(file.path(rxGetOption('sampleDataDir'), 'claimsParquet/'), '/share/')
-pqData <- RxParquetData('/share/claimsParquet',
- colInfo = list(
- age = list(type = "factor"),
- car.age = list(type = "factor"),
- type = list(type = "factor")
- ) )
-rxGetInfo(pqData, getVarInfo = TRUE)
-
-rxNaiveBayes(type ~ age + cost, data = pqData)
-
-#Check on Spark data objects, cleanup, and close the Spark session:
-lsObj <- rxSparkListData() # two data objs are cached
-lsObj
-rxSparkRemoveData(lsObj)
-rxSparkListData() # it should show empty list
-rxSparkDisconnect(myHadoopCluster)
-```
-
-For additional information on these functions, see the inline help in ML Services by using the `?RxHiveData` and `?RxParquetData` commands.
-
-## Install additional R packages on the cluster
-
-### To install R packages on the edge node
-
-If you want to install additional R packages on the edge node, you can use `install.packages()` directly from within the R console, once connected to the edge node through SSH.
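A minimal sketch, using example package names:

```r
# Install a few CRAN packages on the edge node only.
install.packages(c("stringr", "bitops", "arules"))
```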
-
-### To install R packages on the worker node
-
-To install R packages on the worker nodes of the cluster, you must use a Script Action. Script Actions are Bash scripts that are used to make configuration changes to the HDInsight cluster or to install additional software, such as additional R packages.
-
-> [!IMPORTANT]
-> Using Script Actions to install additional R packages can only be used after the cluster has been created. Do not use this procedure during cluster creation, as the script relies on ML Services being completely configured.
-
-1. Follow the steps at [Customize clusters using Script Action](../hdinsight-hadoop-customize-cluster-linux.md).
-
-3. For **Submit script action**, provide the following information:
-
- * For **Script type**, select **Custom**.
-
- * For **Name**, provide a name for the script action.
-
- * For **Bash script URI**, enter `https://mrsactionscripts.blob.core.windows.net/rpackages-v01/InstallRPackages.sh`. This is the script that installs additional R packages on the worker node
-
- * Select the check box only for **Worker**.
-
- * **Parameters**: The R packages to be installed. For example, `bitops stringr arules`
-
- * Select the check box to **Persist this script action**.
-
- > [!NOTE]
- > 1. By default, all R packages are installed from a snapshot of the Microsoft MRAN repository consistent with the version of ML Server that has been installed. If you want to install newer versions of packages, then there is some risk of incompatibility. However this kind of install is possible by specifying `useCRAN` as the first element of the package list, for example `useCRAN bitops, stringr, arules`.
- > 2. Some R packages require additional Linux system libraries. For convenience, the HDInsight ML Services comes pre-installed with the dependencies needed by the top 100 most popular R packages. However, if the R package(s) you install require libraries beyond these then you must download the base script used here and add steps to install the system libraries. You must then upload the modified script to a public blob container in Azure storage and use the modified script to install the packages.
- > For more information on developing Script Actions, see [Script Action development](../hdinsight-hadoop-script-actions-linux.md).
-
- :::image type="content" source="./media/r-server-hdinsight-manage/submit-script-action.png" alt-text="Azure portal submit script action" border="true":::
-
-4. Select **Create** to run the script. Once the script completes, the R packages are available on all worker nodes.
-
-## Next steps
-
-* [Operationalize ML Services cluster on HDInsight](r-server-operationalize.md)
-* [Compute context options for ML Service cluster on HDInsight](r-server-compute-contexts.md)
-* [Azure Storage options for ML Services cluster on HDInsight](r-server-storage.md)
hdinsight R Server Operationalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-operationalize.md
- Title: Operationalize ML Services on HDInsight - Azure
-description: Learn how to operationalize your data model to make predictions with ML Services in Azure HDInsight.
-- Previously updated : 06/27/2018---
-# Operationalize ML Services cluster on Azure HDInsight
--
-After you have used ML Services cluster in HDInsight to complete your data modeling, you can operationalize the model to make predictions. This article provides instructions on how to perform this task.
-
-## Prerequisites
-
-* An ML Services cluster on HDInsight. See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) and select **ML Services** for **Cluster type**.
-
-* A Secure Shell (SSH) client: An SSH client is used to remotely connect to the HDInsight cluster and run commands directly on the cluster. For more information, see [Use SSH with HDInsight](../hdinsight-hadoop-linux-use-ssh-unix.md).
-
-## Operationalize ML Services cluster with one-box configuration
-
-> [!NOTE]
-> The steps below are applicable to R Server 9.0 and ML Server 9.1. For ML Server 9.3, refer to [Use the administration tool to manage the operationalization configuration](/machine-learning-server/operationalize/configure-admin-cli-launch).
-
-1. SSH into the edge node.
-
- ```bash
- ssh USERNAME@CLUSTERNAME-ed-ssh.azurehdinsight.net
- ```
-
-   For instructions on how to use SSH with Azure HDInsight, see [Use SSH with HDInsight](../hdinsight-hadoop-linux-use-ssh-unix.md).
-
-1. Change to the directory for the relevant version, and use `sudo` to run the .NET DLL:
-
- - For Microsoft ML Server 9.1:
-
- ```bash
- cd /usr/lib64/microsoft-r/rserver/o16n/9.1.0
- sudo dotnet Microsoft.RServer.Utils.AdminUtil/Microsoft.RServer.Utils.AdminUtil.dll
- ```
-
- - For Microsoft R Server 9.0:
-
- ```bash
- cd /usr/lib64/microsoft-deployr/9.0.1
- sudo dotnet Microsoft.DeployR.Utils.AdminUtil/Microsoft.DeployR.Utils.AdminUtil.dll
- ```
-
-1. You're presented with several options. Choose the first option, as shown in the following screenshot, to **Configure ML Server for Operationalization**.
-
- :::image type="content" source="./media/r-server-operationalize/admin-util-one-box-1.png" alt-text="R server Administration utility select" border="true":::
-
-1. You're now presented with options for how to operationalize ML Server. Choose the first one by entering **A**.
-
- :::image type="content" source="./media/r-server-operationalize/admin-util-one-box-2.png" alt-text="R server Administration utility operationalize" border="true":::
-
-1. When prompted, enter and reenter the password for a local admin user.
-
-1. You should see output suggesting that the operation was successful. You're also prompted to select another option from the menu. Select **E** to go back to the main menu.
-
- :::image type="content" source="./media/r-server-operationalize/admin-util-one-box-3.png" alt-text="R server Administration utility success" border="true":::
-
-1. Optionally, you can perform diagnostic checks by running a diagnostic test as follows:
-
- a. From the main menu, select **6** to run diagnostic tests.
-
- :::image type="content" source="./media/r-server-operationalize/hdinsight-diagnostic1.png" alt-text="R server Administration utility diagnostic" border="true":::
-
- b. From the Diagnostic Tests menu, select **A**. When prompted, enter the password that you provided for the local admin user.
-
- :::image type="content" source="./media/r-server-operationalize/hdinsight-diagnostic2.png" alt-text="R server Administration utility test" border="true":::
-
- c. Verify that the output shows that overall health is a pass.
-
- :::image type="content" source="./media/r-server-operationalize/hdinsight-diagnostic3.png" alt-text="R server Administration utility pass" border="true":::
-
- d. From the menu options presented, enter **E** to return to the main menu and then enter **8** to exit the admin utility.
-
-### Long delays when consuming web service on Apache Spark
-
-If you encounter long delays when trying to consume a web service created with `mrsdeploy` functions in an Apache Spark compute context, you may need to add some missing folders. The Spark application runs as a user called *rserve2* whenever it's invoked from a web service that uses `mrsdeploy` functions. To work around this issue:
-
-```bash
-# Create these required folders for user 'rserve2' locally and in HDFS:
-
-hadoop fs -mkdir /user/RevoShare/rserve2
-hadoop fs -chmod 777 /user/RevoShare/rserve2
-
-mkdir /var/RevoShare/rserve2
-chmod 777 /var/RevoShare/rserve2
-```
-
-Next, create a new Spark compute context in R:
-
-```r
-rxSparkConnect(reset = TRUE)
-```
-
-At this stage, the configuration for operationalization is complete. Now you can use the `mrsdeploy` package on your R client to connect to the operationalization feature on the edge node and start using features like [remote execution](/machine-learning-server/r/how-to-execute-code-remotely) and [web services](/machine-learning-server/operationalize/concept-what-are-web-services). Depending on whether your cluster is set up on a virtual network or not, you may need to set up port forward tunneling through SSH login. The following sections explain how to set up this tunnel.
-
-### ML Services cluster on virtual network
-
-Make sure you allow traffic through port 12800 to the edge node. That way, you can use the edge node to connect to the Operationalization feature.
-
-```r
-library(mrsdeploy)
-
-remoteLogin(
- deployr_endpoint = "http://[your-cluster-name]-ed-ssh.azurehdinsight.net:12800",
- username = "admin",
- password = "xxxxxxx"
-)
-```
-
-If `remoteLogin()` can't connect to the edge node, but you can SSH to the edge node, verify whether the rule to allow traffic on port 12800 has been set properly. If you continue to face the issue, you can work around it by setting up port forward tunneling through SSH, as described in the following section.
-
-### ML Services cluster not set up on virtual network
-
-If your cluster isn't set up in a virtual network, or if you're having trouble with connectivity through the virtual network, you can use SSH port forward tunneling:
-
-```bash
-ssh -L localhost:12800:localhost:12800 USERNAME@CLUSTERNAME-ed-ssh.azurehdinsight.net
-```
-
-Once your SSH session is active, the traffic from your local machine's port 12800 is forwarded to the edge node's port 12800 through the SSH session. Make sure you use `127.0.0.1:12800` in your `remoteLogin()` method. This logs in to the edge node's operationalization through port forwarding.
-
-```r
-library(mrsdeploy)
-
-remoteLogin(
- deployr_endpoint = "http://127.0.0.1:12800",
- username = "admin",
- password = "xxxxxxx"
-)
-```
-
-## Scale operationalized compute nodes on HDInsight worker nodes
-
-To scale the compute nodes, you first decommission the worker nodes and then configure compute nodes on the decommissioned worker nodes.
-
-### Step 1: Decommission the worker nodes
-
-The ML Services cluster is not managed through [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html). If the worker nodes are not decommissioned, the YARN Resource Manager does not work as expected, because it is not aware of the resources being taken up by the server. To avoid this situation, we recommend decommissioning the worker nodes before you scale out the compute nodes.
-
-Follow these steps to decommission worker nodes:
-
-1. Log in to the cluster's Ambari console and click the **Hosts** tab.
-
-1. Select worker nodes (to be decommissioned).
-
-1. Click **Actions** > **Selected Hosts** > **Hosts** > **Turn ON Maintenance Mode**. For example, in the following image we have selected wn3 and wn4 to decommission.
-
- :::image type="content" source="./media/r-server-operationalize/get-started-operationalization.png" alt-text="Apache Ambari Turn On Maintenance Mode" border="true":::
-
-* Select **Actions** > **Selected Hosts** > **DataNodes** > **Decommission**.
-* Select **Actions** > **Selected Hosts** > **NodeManagers** > **Decommission**.
-* Select **Actions** > **Selected Hosts** > **DataNodes** > **Stop**.
-* Select **Actions** > **Selected Hosts** > **NodeManagers** > **Stop**.
-* Select **Actions** > **Selected Hosts** > **Hosts** > **Stop All Components**.
-* Unselect the worker nodes and select the head nodes.
-* Select **Actions** > **Selected Hosts** > **Hosts** > **Restart All Components**.
-
-### Step 2: Configure compute nodes on each decommissioned worker node
-
-1. SSH into each decommissioned worker node.
-
-1. Run the admin utility by using the DLL for the version of ML Services on your cluster. For example, for R Server 9.0, run the following (adjust the path if your cluster runs ML Server 9.1):
-
- ```bash
- dotnet /usr/lib64/microsoft-deployr/9.0.1/Microsoft.DeployR.Utils.AdminUtil/Microsoft.DeployR.Utils.AdminUtil.dll
- ```
-
-1. Enter **1** to select option **Configure ML Server for Operationalization**.
-
-1. Enter **C** to select option `C. Compute node`. This configures the compute node on the worker node.
-
-1. Exit the Admin Utility.
-
-### Step 3: Add compute nodes details on web node
-
-Once all decommissioned worker nodes are configured to run the compute node, go back to the edge node and add the decommissioned worker nodes' IP addresses to the ML Server web node's configuration:
-
-1. SSH into the edge node.
-
-1. Run `vi /usr/lib64/microsoft-deployr/9.0.1/Microsoft.DeployR.Server.WebAPI/appsettings.json`.
-
-1. Look for the "Uris" section, and add the worker nodes' IP addresses and port details.
-
- ```json
- "Uris": {
- "Description": "Update 'Values' section to point to your backend machines. Using HTTPS is highly recommended",
- "Values": [
-      "http://localhost:12805", "http://[worker-node1-ip]:12805", "http://[worker-node2-ip]:12805"
- ]
- }
- ```
-
-## Next steps
-
-* [Manage ML Services cluster on HDInsight](r-server-hdinsight-manage.md)
-* [Compute context options for ML Services cluster on HDInsight](r-server-compute-contexts.md)
-* [Azure Storage options for ML Services cluster on HDInsight](r-server-storage.md)
hdinsight R Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-overview.md
- Title: Introduction to ML Services on Azure HDInsight
-description: Learn how to use ML Services on HDInsight to create applications for big data analysis.
-- Previously updated : 04/20/2020-
-#Customer intent: As a developer I want to have a basic understanding of Microsoft's implementation of machine learning in Azure HDInsight so I can decide if I want to use it rather than build my own cluster.
--
-# What is ML Services in Azure HDInsight
--
-Microsoft Machine Learning Server is available as a deployment option when you create HDInsight clusters in Azure. The cluster type that provides this option is called **ML Services**. This capability provides on-demand access to adaptable, distributed methods of analytics on HDInsight.
-
-ML Services on HDInsight provides the latest capabilities for R-based analytics on datasets of virtually any size. The datasets can be loaded to either Azure Blob or Data Lake storage. Your R-based applications can use the 8000+ open-source R packages. The routines in ScaleR, Microsoft's big data analytics package, are also available.
-
-The edge node provides a convenient place to connect to the cluster and run your R scripts. The edge node allows running the ScaleR parallelized distributed functions across the cores of the server. You can also run them across the nodes of the cluster by using ScaleR's Hadoop Map Reduce. You can also use Apache Spark compute contexts.
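-
-As a rough sketch (not taken from the original article), switching between running ScaleR functions locally on the edge node and distributing them through a Spark compute context might look like the following; the `RxSpark()` arguments are omitted here and depend on your cluster:
-
-```r
-library(RevoScaleR)
-
-# Run ScaleR functions on the edge node, parallelized across its cores.
-rxSetComputeContext("localpar")
-rxSummary(~ Sepal.Length, data = iris)
-
-# Switch to a Spark compute context to distribute subsequent ScaleR calls
-# across the nodes of the cluster.
-sparkContext <- RxSpark(consoleOutput = TRUE)
-rxSetComputeContext(sparkContext)
-```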
-
-The models or predictions that result from analysis can be downloaded for on-premises use. They can also be `operationalized` elsewhere in Azure, in particular through [Azure Machine Learning Studio (classic)](https://studio.azureml.net) and a [web service](../../machine-learning/classic/deploy-a-machine-learning-web-service.md).
-
-## Get started with ML Services on HDInsight
-
-To create an ML Services cluster in HDInsight, select the **ML Services** cluster type. The ML Services cluster type includes ML Server on the data nodes, and edge node. The edge node serves as a landing zone for ML Services-based analytics. See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) for a walkthrough on how to create the cluster.
-
-## Why choose ML Services in HDInsight?
-
-ML Services in HDInsight provides the following benefits:
-
-### AI innovation from Microsoft and open-source
-
- ML Services includes a highly adaptable, distributed set of algorithms such as [RevoscaleR](/machine-learning-server/r-reference/revoscaler/revoscaler), [revoscalepy](/machine-learning-server/python-reference/revoscalepy/revoscalepy-package), and [microsoftML](/machine-learning-server/python-reference/microsoftml/microsoftml-package). These algorithms can work on data sizes larger than the size of physical memory. They also run on a wide variety of platforms in a distributed manner. Learn more about the collection of Microsoft's custom [R packages](/machine-learning-server/r-reference/introducing-r-server-r-package-reference) and [Python packages](/machine-learning-server/python-reference/introducing-python-package-reference) included with the product.
-
- ML Services bridges these Microsoft innovations and contributions coming from the open-source community (R, Python, and AI toolkits), all on top of a single enterprise-grade platform. Any R or Python open-source machine learning package can work side by side with any proprietary innovation from Microsoft.
-
-### Simple, secure, and high-scale operationalization and administration
-
- Enterprises relying on traditional paradigms and environments invest much time and effort towards operationalization. This results in inflated costs and delays, including the time to translate models, iterate to keep them valid and current, gain regulatory approval, and manage permissions.
-
- ML Services offers enterprise-grade [operationalization](/machine-learning-server/what-is-operationalization). After a machine learning model is complete, it takes just a few clicks to generate web service APIs. These [web services](/machine-learning-server/operationalize/concept-what-are-web-services) are hosted on a server grid in the cloud and can be integrated with line-of-business applications. The ability to deploy to an elastic grid lets you scale seamlessly with the needs of your business, both for batch and real-time scoring. For instructions, see [Operationalize ML Services on HDInsight](r-server-operationalize.md).
-
-<!--
-* **Deep ecosystem engagements to deliver customer success with optimal total cost of ownership**
-
- Individuals embarking on the journey of making their applications intelligent or simply wanting to learn the new world of AI and machine learning, need the right resources to help them get started. In addition to this documentation, Microsoft provides several learning resources and has engaged several training partners to help you ramp up and become productive quickly.
--->
-
-> [!NOTE]
-> The ML Services cluster type on HDInsight is supported only on HDInsight 3.6. HDInsight 3.6 is scheduled to retire on December 31, 2020.
-
-## Key features of ML Services on HDInsight
-
-The following features are included in ML Services on HDInsight.
-
-| Feature category | Description |
-||-|
-| R-enabled | [R packages](/machine-learning-server/r-reference/introducing-r-server-r-package-reference) for solutions written in R, with an open-source distribution of R, and run-time infrastructure for script execution. |
-| Python-enabled | [Python modules](/machine-learning-server/python-reference/introducing-python-package-reference) for solutions written in Python, with an open-source distribution of Python, and run-time infrastructure for script execution. |
-| [Pre-trained models](/machine-learning-server/install/microsoftml-install-pretrained-models) | For visual analysis and text sentiment analysis, ready to score data you provide. |
-| [Deploy and consume](r-server-operationalize.md) | `Operationalize` your server and deploy solutions as a web service. |
-| [Remote execution](r-server-hdinsight-manage.md#connect-remotely-to-microsoft-ml-services) | Start remote sessions on ML Services cluster on your network from your client workstation. |
-
-## Data storage options for ML Services on HDInsight
-
-Default storage for the HDFS file system can be an Azure Storage account or Azure Data Lake Storage. Data uploaded to cluster storage during analysis is made persistent, so the data is available even after the cluster is deleted. Various tools can handle the data transfer to storage, including the portal-based upload facility of the storage account and the AzCopy utility.
-
-You can enable access to additional Blob and Data Lake stores during cluster creation. You aren't limited to the primary storage option in use. See the [Azure Storage options for ML Services on HDInsight](./r-server-storage.md) article to learn more about using multiple storage accounts.
-
-You can also use Azure Files as a storage option for use on the edge node. Azure Files enables you to mount file shares created in Azure Storage to the Linux file system. For more information, see [Azure Storage options for ML Services on HDInsight](r-server-storage.md).
-
-## Access ML Services edge node
-
-You can connect to Microsoft ML Server on the edge node using a browser, or SSH/PuTTY. The R console is installed by default during cluster creation.
-
-## Develop and run R scripts
-
-Your R scripts can use any of the 8000+ open-source R packages. You can also use the parallelized and distributed routines from the ScaleR library. Scripts run on the edge node within the R interpreter on that node, except for steps that call ScaleR functions with a MapReduce (RxHadoopMR) or Spark (RxSpark) compute context. Those functions run in a distributed fashion across the data nodes that are associated with the data. For more information about context options, see [Compute context options for ML Services on HDInsight](r-server-compute-contexts.md).
-
-## `Operationalize` a model
-
-When your data modeling is complete, `operationalize` the model to make predictions for new data either from Azure or on-premises. This process is known as scoring. Scoring can be done in HDInsight, Azure Machine Learning, or on-premises.
-
-### Score in HDInsight
-
-To score in HDInsight, write an R function. The function calls your model to make predictions for a new data file that you've loaded to your storage account. Then, save the predictions back to the storage account. You can run this routine on-demand on the edge node of your cluster or by using a scheduled job.
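-
-A minimal scoring sketch follows; the model file, HDFS paths, and column layout are hypothetical placeholders rather than part of the original article:
-
-```r
-# Load a previously trained ScaleR model saved on the edge node (placeholder file name).
-fittedModel <- readRDS("fittedModel.rds")
-
-# Point to the new data in cluster storage and to a location for the predictions.
-hdfsFS  <- RxHdfsFileSystem()
-newData <- RxTextData("/share/newdata.csv", fileSystem = hdfsFS)
-scores  <- RxTextData("/share/scores", fileSystem = hdfsFS)
-
-# Score the new data and write the predictions back to cluster storage.
-rxPredict(fittedModel, data = newData, outData = scores, overwrite = TRUE)
-```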
-
-### Score in Azure Machine Learning (AML)
-
-To score using Azure Machine Learning, use the open-source Azure Machine Learning R package known as [AzureML](https://cran.r-project.org/src/contrib/Archive/AzureML/) to publish your model as an Azure web service. For convenience, this package is pre-installed on the edge node. Next, use the facilities in Azure Machine Learning to create a user interface for the web service, and then call the web service as needed for scoring. Then convert ScaleR model objects to equivalent open-source model objects for use with the web service. Use ScaleR coercion functions, such as `as.randomForest()` for ensemble-based models, for this conversion.
-
-### Score on-premises
-
-To score on-premises after creating your model: serialize the model in R, download it, de-serialize it, then use it for scoring new data. You can score new data by using the approach described earlier in Score in HDInsight or by using [web services](/machine-learning-server/operationalize/concept-what-are-web-services).
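-
-For example, a hedged sketch of that round trip, where the model object and the new data set are placeholders:
-
-```r
-# On the cluster: serialize the trained model to a file that you can download.
-saveRDS(fittedModel, file = "airlineModel.rds")
-
-# On-premises, after downloading the file: de-serialize the model and score new data.
-localModel  <- readRDS("airlineModel.rds")
-predictions <- rxPredict(localModel, data = newFlightData)
-```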
-
-## Maintain the cluster
-
-### Install and maintain R packages
-
-Most of the R packages that you use are required on the edge node since most steps of your R scripts run there. To install additional R packages on the edge node, you can use the `install.packages()` method in R.
-
-If you're just using ScaleR library routines, you don't usually need additional R packages. You might need additional packages for **rxExec** or **RxDataStep** execution on the data nodes.
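-
-As an illustrative check (not part of the original article), you can use `rxExec` from the edge node to see whether a package is available where your code runs; `stringr` is just an example package name:
-
-```r
-# Returns one logical result per process or node used by the current compute context.
-rxExec(function() requireNamespace("stringr", quietly = TRUE))
-```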
-
-The additional packages can be installed with a script action after you create the cluster. For more information, see [Manage ML Services in HDInsight cluster](r-server-hdinsight-manage.md).
-
-### Change Apache Hadoop MapReduce memory settings
-
-Available memory to ML Services can be modified when it's running a MapReduce job. To modify a cluster, use the Apache Ambari UI for your cluster. For Ambari UI instructions, see [Manage HDInsight clusters using the Ambari Web UI](../hdinsight-hadoop-manage-ambari.md).
-
-Available memory to ML Services can be changed by using Hadoop switches in the call to **RxHadoopMR**:
-
-```r
-hadoopSwitches = "-libjars /etc/hadoop/conf -Dmapred.job.map.memory.mb=6656"
-```
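-
-As a sketch of how such switches might be passed when you create the compute context (the values shown are illustrative only):
-
-```r
-# Create a MapReduce compute context that passes the Hadoop switches shown above.
-mrContext <- RxHadoopMR(
-    hadoopSwitches = "-libjars /etc/hadoop/conf -Dmapred.job.map.memory.mb=6656",
-    consoleOutput  = TRUE
-)
-rxSetComputeContext(mrContext)
-```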
-
-### Scale your cluster
-
-An existing ML Services cluster on HDInsight can be scaled up or down through the portal. By scaling up, you gain additional capacity for larger processing tasks. You can scale back a cluster when it's idle. For instructions about how to scale a cluster, see [Manage HDInsight clusters](../hdinsight-administer-use-portal-linux.md).
-
-### Maintain the system
-
-OS Maintenance is done on the underlying Linux VMs in an HDInsight cluster during off-hours. Typically, maintenance is done at 3:30 AM (VM's local time) every Monday and Thursday. Updates don't impact more than a quarter of the cluster at a time.
-
-Running jobs might slow down during maintenance. However, they should still run to completion. Any custom software or local data that you have is preserved across these maintenance events unless a catastrophic failure occurs that requires a cluster rebuild.
-
-## IDE options for ML Services on HDInsight
-
-The Linux edge node of an HDInsight cluster is the landing zone for R-based analysis. Recent versions of HDInsight provide a browser-based IDE of RStudio Server on the edge node. RStudio Server is more productive than the R console for development and execution.
-
-A desktop IDE can access the cluster through a remote MapReduce or Spark compute context. Options include: Microsoft's [R Tools for Visual Studio](https://marketplace.visualstudio.com/items?itemName=MikhailArkhipov007.RTVS2019) (RTVS), RStudio, and Walware's Eclipse-based StatET.
-
-Access the R console on the edge node by typing **R** at the command prompt. When using the console interface, it's convenient to develop R script in a text editor. Then cut and paste sections of your script into the R console as needed.
-
-## Pricing
-
-The prices associated with an ML Services HDInsight cluster are structured similarly to the prices for other HDInsight cluster types. They're based on the sizing of the underlying VMs across the name, data, and edge nodes. Core-hour uplifts apply as well. For more information, see [HDInsight pricing](https://azure.microsoft.com/pricing/details/hdinsight/).
-
-## Next steps
-
-To learn more about how to use ML Services on HDInsight clusters, see the following articles:
-
-* [Execute an R script on an ML Services cluster in Azure HDInsight using RStudio Server](machine-learning-services-quickstart-job-rstudio.md)
-* [Compute context options for ML Services cluster on HDInsight](r-server-compute-contexts.md)
-* [Storage options for ML Services cluster on HDInsight](r-server-storage.md)
hdinsight R Server Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-storage.md
- Title: Azure storage solutions for ML Services on HDInsight - Azure
-description: Learn about the different storage options available with ML Services on HDInsight
-- Previously updated : 01/02/2020---
-# Azure storage solutions for ML Services on Azure HDInsight
--
-ML Services on HDInsight can use different storage solutions to persist data, code, or objects that contain results from analysis. These solutions include the following options:
-- [Azure Blob storage](https://azure.microsoft.com/services/storage/blobs/)
-- [Azure Data Lake Storage Gen1](https://azure.microsoft.com/services/storage/data-lake-storage/)
-- [Azure Files](https://azure.microsoft.com/services/storage/files/)
-
-You also have the option of accessing multiple Azure storage accounts or containers with your HDInsight cluster. Azure Files is a convenient data storage option for use on the edge node that enables you to mount an Azure file share to, for example, the Linux file system. But Azure file shares can be mounted and used by any system that has a supported operating system such as Windows or Linux.
-
-When you create an Apache Hadoop cluster in HDInsight, you specify either an **Azure Blob storage** account or **Data Lake Storage Gen1**. A specific storage container from that account holds the file system for the cluster that you create (for example, the Hadoop Distributed File System). For more information and guidance, see:
-- [Use Azure Blob storage with HDInsight](../hdinsight-hadoop-use-blob-storage.md)
-- [Use Data Lake Storage Gen1 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen1.md)
-
-## Use Azure Blob storage accounts with ML Services cluster
-
-If you specified more than one storage account when creating your ML Services cluster, the following instructions explain how to use a secondary account for data access and operations on an ML Services cluster. Assume the following storage accounts and container: **storage1** and a default container called **container1**, and **storage2** with **container2**.
-
-> [!WARNING]
-> For performance purposes, the HDInsight cluster is created in the same data center as the primary storage account that you specify. Using a storage account in a different location than the HDInsight cluster is not supported.
-
-### Use the default storage with ML Services on HDInsight
-
-1. Using an SSH client, connect to the edge node of your cluster. For information on using SSH with HDInsight clusters, see [Use SSH with HDInsight](../hdinsight-hadoop-linux-use-ssh-unix.md).
-
-2. Copy a sample file, mysamplefile.csv, to the /share directory.
-
- ```bash
-   hadoop fs -mkdir /share
-   hadoop fs -copyFromLocal mysamplefile.csv /share
- ```
-
-3. Switch to R Studio or another R console, and write R code to set the name node to **default** and location of the file you want to access.
-
- ```R
- myNameNode <- "default"
- myPort <- 0
-
- #Location of the data:
- bigDataDirRoot <- "/share"
-
- #Define Spark compute context:
- mySparkCluster <- RxSpark(nameNode=myNameNode, consoleOutput=TRUE)
-
- #Set compute context:
- rxSetComputeContext(mySparkCluster)
-
- #Define the Hadoop Distributed File System (HDFS) file system:
- hdfsFS <- RxHdfsFileSystem(hostName=myNameNode, port=myPort)
-
- #Specify the input file to analyze in HDFS:
- inputFile <-file.path(bigDataDirRoot,"mysamplefile.csv")
- ```
-
-All the directory and file references point to the storage account `wasbs://container1@storage1.blob.core.windows.net`. This is the **default storage account** that's associated with the HDInsight cluster.
-
-### Use the additional storage with ML Services on HDInsight
-
-Now, suppose you want to process a file called mysamplefile1.csv that's located in the /private directory of **container2** in **storage2**.
-
-In your R code, point the name node reference to the **storage2** storage account.
-
-```R
-myNameNode <- "wasbs://container2@storage2.blob.core.windows.net"
-myPort <- 0
-
-#Location of the data:
-bigDataDirRoot <- "/private"
-
-#Define Spark compute context:
-mySparkCluster <- RxSpark(consoleOutput=TRUE, nameNode=myNameNode, port=myPort)
-
-#Set compute context:
-rxSetComputeContext(mySparkCluster)
-
-#Define HDFS file system:
-hdfsFS <- RxHdfsFileSystem(hostName=myNameNode, port=myPort)
-
-#Specify the input file to analyze in HDFS:
-inputFile <-file.path(bigDataDirRoot,"mysamplefile1.csv")
-```
-
-All of the directory and file references now point to the storage account `wasbs://container2@storage2.blob.core.windows.net`. This is the **Name Node** that you've specified.
-
-Configure the `/user/RevoShare/<SSH username>` directory on **storage2** as follows:
-
-```bash
-hadoop fs -mkdir wasbs://container2@storage2.blob.core.windows.net/user
-hadoop fs -mkdir wasbs://container2@storage2.blob.core.windows.net/user/RevoShare
-hadoop fs -mkdir wasbs://container2@storage2.blob.core.windows.net/user/RevoShare/<SSH username>
-```
-
-## Use Azure Data Lake Storage Gen1 with ML Services cluster
-
-To use Data Lake Storage Gen1 with your HDInsight cluster, you need to give your cluster access to each Azure Data Lake Storage Gen1 account that you want to use. For instructions on how to use the Azure portal to create an HDInsight cluster with an Azure Data Lake Storage Gen1 account as the default storage or as additional storage, see [Create an HDInsight cluster with Data Lake Storage Gen1 using Azure portal](../../data-lake-store/data-lake-store-hdinsight-hadoop-use-portal.md).
-
-You then use the storage in your R script much like you did a secondary Azure storage account as described in the previous procedure.
-
-### Add cluster access to your Azure Data Lake Storage Gen1
-
-You access Data Lake Storage Gen1 by using an Azure Active Directory (Azure AD) Service Principal that's associated with your HDInsight cluster.
-
-1. When you create your HDInsight cluster, select **Cluster Azure AD Identity** from the **Data Source** tab.
-
-2. In the **Cluster Azure AD Identity** dialog box, under **Select AD Service Principal**, select **Create new**.
-
-After you give the Service Principal a name and create a password for it, click **Manage ADLS Access** to associate the Service Principal with your Data Lake Storage.
-
-It's also possible to add cluster access to one or more Data Lake Storage Gen1 accounts following cluster creation. Open the Azure portal entry for a Data Lake Storage Gen1 account and go to **Data Explorer > Access > Add**.
-
-### How to access Data Lake Storage Gen1 from ML Services on HDInsight
-
-Once you've given access to Data Lake Storage Gen1, you can use the storage in ML Services cluster on HDInsight the way you would a secondary Azure storage account. The only difference is that the prefix **wasbs://** changes to **adl://** as follows:
-
-```R
-# Point to the ADL Storage (e.g. ADLtest)
-myNameNode <- "adl://rkadl1.azuredatalakestore.net"
-myPort <- 0
-
-# Location of the data (assumes a /share directory on the ADL account)
-bigDataDirRoot <- "/share"
-
-# Define Spark compute context
-mySparkCluster <- RxSpark(consoleOutput=TRUE, nameNode=myNameNode, port=myPort)
-
-# Set compute context
-rxSetComputeContext(mySparkCluster)
-
-# Define HDFS file system
-hdfsFS <- RxHdfsFileSystem(hostName=myNameNode, port=myPort)
-
-# Specify the input file in HDFS to analyze
-inputFile <-file.path(bigDataDirRoot,"mysamplefile.csv")
-```
-
-The following commands are used to configure the Data Lake Storage Gen1 with the RevoShare directory and add the sample .csv file from the previous example:
-
-```bash
-hadoop fs -mkdir adl://rkadl1.azuredatalakestore.net/user
-hadoop fs -mkdir adl://rkadl1.azuredatalakestore.net/user/RevoShare
-hadoop fs -mkdir adl://rkadl1.azuredatalakestore.net/user/RevoShare/<user>
-
-hadoop fs -mkdir adl://rkadl1.azuredatalakestore.net/share
-
-hadoop fs -copyFromLocal "/usr/lib64/R Server-7.4.1/library/RevoScaleR/SampleData/mysamplefile.csv" adl://rkadl1.azuredatalakestore.net/share
-
-hadoop fs -ls adl://rkadl1.azuredatalakestore.net/share
-```
-
-## Use Azure Files with ML Services on HDInsight
-
-There's also a convenient data storage option for use on the edge node called [Azure Files](https://azure.microsoft.com/services/storage/files/). It enables you to mount an Azure Storage file share to the Linux file system. This option can be handy for storing data files, R scripts, and result objects that might be needed later, especially when it makes sense to use the native file system on the edge node rather than HDFS.
-
-A major benefit of Azure Files is that the file shares can be mounted and used by any system that has a supported OS such as Windows or Linux. For example, it can be used by another HDInsight cluster that you or someone on your team has, by an Azure VM, or even by an on-premises system. For more information, see:
-- [How to use Azure Files with Linux](../../storage/files/storage-how-to-use-files-linux.md)
-- [How to use Azure Files on Windows](../../storage/files/storage-dotnet-how-to-use-files.md)
-
-## Next steps
-- [Overview of ML Services cluster on HDInsight](r-server-overview.md)
-- [Compute context options for ML Services cluster on HDInsight](r-server-compute-contexts.md)
-- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen2.md)
hdinsight R Server Submit Jobs R Tools Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-submit-jobs-r-tools-vs.md
- Title: Submit jobs from R Tools for Visual Studio - Azure HDInsight
-description: Submit R jobs from your local Visual Studio machine to an HDInsight cluster.
-- Previously updated : 06/19/2019---
-# Submit jobs from R Tools for Visual Studio
--
-[R Tools for Visual Studio](https://marketplace.visualstudio.com/items?itemName=MikhailArkhipov007.RTVS2019) (RTVS) is a free, open-source extension for the Community (free), Professional, and Enterprise editions of both [Visual Studio 2017](https://www.visualstudio.com/downloads/), and [Visual Studio 2015 Update 3](https://go.microsoft.com/fwlink/?LinkId=691129) or higher. RTVS is not available for [Visual Studio 2019](/visualstudio/porting/port-migrate-and-upgrade-visual-studio-projects?preserve-view=true&view=vs-2019).
-
-RTVS enhances your R workflow by offering tools such as the [R Interactive window](/visualstudio/rtvs/interactive-repl) (REPL), intellisense (code completion), [plot visualization](/visualstudio/rtvs/visualizing-data) through R libraries such as ggplot2 and ggviz, [R code debugging](/visualstudio/rtvs/debugging), and more.
-
-## Set up your environment
-
-1. Install [R Tools for Visual Studio](/visualstudio/rtvs/installing-r-tools-for-visual-studio).
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/install-r-tools-for-vs.png" alt-text="Installing RTVS in Visual Studio 2017" border="true":::
-
-2. Select the *Data science and analytical applications* workload, then select the **R language support**, **Runtime support for R development**, and **Microsoft R Client** options.
-
-3. You need to have public and private keys for SSH authentication.
- <!-- {TODO tbd, no such file yet}[use SSH with HDInsight](hdinsight-hadoop-linux-use-ssh-windows.md) -->
-
-4. Install [ML Server](/previous-versions/machine-learning-server/install/r-server-install-windows) on your machine. ML Server provides the [`RevoScaleR`](/machine-learning-server/r-reference/revoscaler/revoscaler) and `RxSpark` functions.
-
-5. Install [PuTTY](https://www.putty.org/) to provide a compute context to run `RevoScaleR` functions from your local client to your HDInsight cluster.
-
-6. You have the option to apply the Data Science Settings to your Visual Studio environment, which provides a new layout for your workspace for the R tools.
- 1. To save your current Visual Studio settings, use the **Tools > Import and Export Settings** command, then select **Export selected environment settings** and specify a file name. To restore those settings, use the same command and select **Import selected environment settings**.
-
- 2. Go to the **R Tools** menu item, then select **Data Science Settings...**.
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/data-science-settings.png" alt-text="Visual Studio Data Science Settings" border="true":::
-
- > [!NOTE]
- > Using the approach in step 1, you can also save and restore your personalized data scientist layout, rather than repeating the **Data Science Settings** command.
-
-## Execute local R methods
-
-1. Create your HDInsight ML Services cluster.
-2. Install the [RTVS extension](/visualstudio/rtvs/installation).
-3. Download the [samples zip file](https://github.com/Microsoft/RTVS-docs/archive/master.zip).
-4. Open `examples/Examples.sln` to launch the solution in Visual Studio.
-5. Open the `1-Getting Started with R.R` file in the `A first look at R` solution folder.
-6. Starting at the top of the file, press Ctrl+Enter to send each line, one at a time, to the R Interactive window. Some lines might take a while as they install packages.
- * Alternatively, you can select all lines in the R file (Ctrl+A), then either execute all (Ctrl+Enter), or select the Execute Interactive icon on the toolbar.
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/execute-interactive1.png" alt-text="Visual Studio execute interactive" border="true":::
-
-7. After running all the lines in the script, you should see an output similar to this:
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/visual-studio-workspace.png" alt-text="Visual Studio workspace R tools" border="true":::
-
-## Submit jobs to an HDInsight ML Services cluster
-
-Using a Microsoft ML Server/Microsoft R Client from a Windows computer equipped with PuTTY, you can create a compute context that will run distributed `RevoScaleR` functions from your local client to your HDInsight cluster. Use `RxSpark` to create the compute context, specifying your username, the Apache Hadoop cluster's edge node, SSH switches, and so forth.
-
-1. The ML Services edge node address on HDInsight is `CLUSTERNAME-ed-ssh.azurehdinsight.net` where `CLUSTERNAME` is the name of your ML Services cluster.
-
-1. Paste the following code into the R Interactive window in Visual Studio, altering the values of the setup variables to match your environment.
-
- ```R
- # Setup variables that connect the compute context to your HDInsight cluster
-    mySshHostname <- 'r-cluster-ed-ssh.azurehdinsight.net' # HDI secure shell hostname
- mySshUsername <- 'sshuser' # HDI SSH username
- mySshClientDir <- "C:\\Program Files (x86)\\PuTTY"
- mySshSwitches <- '-i C:\\Users\\azureuser\\r.ppk' # Path to your private ssh key
- myHdfsShareDir <- paste("/user/RevoShare", mySshUsername, sep = "/")
- myShareDir <- paste("/var/RevoShare", mySshUsername, sep = "/")
- mySshProfileScript <- "/usr/lib64/microsoft-r/3.3/hadoop/RevoHadoopEnvVars.site"
-
- # Create the Spark Cluster compute context
- mySparkCluster <- RxSpark(
- sshUsername = mySshUsername,
- sshHostname = mySshHostname,
- sshSwitches = mySshSwitches,
- sshProfileScript = mySshProfileScript,
- consoleOutput = TRUE,
- hdfsShareDir = myHdfsShareDir,
- shareDir = myShareDir,
- sshClientDir = mySshClientDir
- )
-
- # Set the current compute context as the Spark compute context defined above
- rxSetComputeContext(mySparkCluster)
- ```
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/apache-spark-context.png" alt-text="apache spark setting the context" border="true":::
-
-1. Execute the following commands in the R Interactive window:
-
- ```R
- rxHadoopCommand("version") # should return version information
- rxHadoopMakeDir("/user/RevoShare/newUser") # creates a new folder in your storage account
- rxHadoopCopy("/example/data/people.json", "/user/RevoShare/newUser") # copies file to new folder
- ```
-
- You should see an output similar to the following:
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/successful-rx-commands.png" alt-text="Successful rx command execution" border="true":::
-1. Verify that the `rxHadoopCopy` successfully copied the `people.json` file from the example data folder to the newly created `/user/RevoShare/newUser` folder:
-
- 1. From your HDInsight ML Services cluster pane in Azure, select **Storage accounts** from the left-hand menu.
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/hdinsight-storage-accounts.png" alt-text="Azure HDInsight Storage accounts" border="true":::
-
- 2. Select the default storage account for your cluster, making note of the container/directory name.
-
- 3. Select **Containers** from the left-hand menu on your storage account pane.
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/hdi-storage-containers.png" alt-text="Azure HDInsight Storage containers" border="true":::
-
- 4. Select your cluster's container name, browse to the **user** folder (you might have to click *Load more* at the bottom of the list), then select *RevoShare*, then **newUser**. The `people.json` file should be displayed in the `newUser` folder.
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/hdinsight-copied-file.png" alt-text="HDInsight copied file folder location" border="true":::
-
-1. After you are finished using the current Apache Spark context, you must stop it. You cannot run multiple contexts at once.
-
- ```R
- rxStopEngine(mySparkCluster)
- ```
-
-## Next steps
-
-* [Compute context options for ML Services on HDInsight](r-server-compute-contexts.md)
-* [Combining ScaleR and SparkR](../hdinsight-hadoop-r-scaler-sparkr.md) provides an example of airline flight delay predictions.
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
Title: Architectural concepts in Azure IoT Central | Microsoft Docs
description: This article introduces key concepts relating the architecture of Azure IoT Central Previously updated : 08/31/2021 Last updated : 06/03/2022
iot-central Concepts Device Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-authentication.md
To connect a device with device SAS token to your application:
> [!NOTE]
> To use existing SAS keys in your enrollment groups, disable the **Auto generate keys** toggle and manually enter your SAS keys.
+If you use the default **SAS-IoT-Devices** enrollment group, IoT Central generates the individual device keys for you. To access these keys, select **Connect** on the device details page. This page displays the **ID Scope**, **Device ID**, **Primary key**, and **Secondary key** that you use in your device code. This page also displays a QR code that contains the same data.
+
## Individual enrollment

Typically, devices connect by using credentials derived from an enrollment group X.509 certificate or SAS key. However, if your devices each have their own credentials, you can use individual enrollments. An individual enrollment is an entry for a single device that's allowed to connect. Individual enrollments can use either X.509 leaf certificates or SAS tokens (from a physical or virtual trusted platform module) as attestation mechanisms. For more information, see [DPS individual enrollment](../../iot-dps/concepts-service.md#individual-enrollment).
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md
Title: What are device templates in Azure IoT Central | Microsoft Docs
description: Azure IoT Central device templates let you specify the behavior of the devices connected to your application. A device template specifies the telemetry, properties, and commands the device must implement. A device template also defines the UI for the device in IoT Central such as the forms and views an operator uses. Previously updated : 08/24/2021 Last updated : 06/03/2022
iot-central Concepts Faq Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md
Title: Move from IoT Central to a PaaS solution | Microsoft Docs
description: How do I move between aPaaS and PaaS solution approaches? Previously updated : 01/25/2022 Last updated : 06/09/2022
iot-central Concepts Faq Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-extend.md
Title: Extend IoT Central | Microsoft Docs
description: How do I extend IoT Central if it's missing something I need? Previously updated : 01/05/2022 Last updated : 06/09/2022
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iot-edge.md
Title: Azure IoT Edge and Azure IoT Central | Microsoft Docs
description: Understand how to use Azure IoT Edge with an IoT Central application. Previously updated : 01/18/2022 Last updated : 06/08/2022
iot-central Concepts Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-quotas-limits.md
Title: Azure IoT Central quotas and limits | Microsoft Docs
description: This article lists the key quotas and limits that apply to an IoT Central application. Previously updated : 12/15/2021 Last updated : 06/07/2022
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
Title: Telemetry, property, and command payloads in Azure IoT Central | Microsof
description: Azure IoT Central device templates let you specify the telemetry, properties, and commands of a device must implement. Understand the format of the data a device can exchange with IoT Central. Previously updated : 12/27/2021 Last updated : 06/08/2022
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md
instanceOf: .device.templateId,
properties: .device.properties.reported | map({ key: .name, value: .value }) | from_entries ```
-Device templates: If you're currently using legacy data exports with the device templates data type, then you can obtain the same data using the [Device Templates - Get API call](/rest/api/iotcentral/1.0dataplane/device-templates/get).
+Device templates: If you're currently using legacy data exports with the device templates data type, then you can obtain the same data using the [Device Templates - Get API call](/rest/api/iotcentral/2022-05-31dataplane/device-templates/get).
### Destination migration considerations
This example snapshot shows a message that contains device and properties data i
If you have an existing data export in your preview application with the *Devices* and *Device templates* streams turned on, update your export by **30 June 2020**. This requirement applies to exports to Azure Blob storage, Azure Event Hubs, and Azure Service Bus.
-Starting 3 February 2020, all new exports in applications with Devices and Device templates enabled will have the data format described above. All exports created before this date remain on the old data format until 30 June 2020, at which time these exports will automatically be migrated to the new data format. The new data format matches the [device](/rest/api/iotcentral/1.0dataplane/devices/get), [device property](/rest/api/iotcentral/1.0dataplane/devices/get-properties), and [device template](/rest/api/iotcentral/1.0dataplane/device-templates/get) objects in the IoT Central public API.
+Starting 3 February 2020, all new exports in applications with Devices and Device templates enabled will have the data format described above. All exports created before this date remain on the old data format until 30 June 2020, at which time these exports will automatically be migrated to the new data format. The new data format matches the [device](/rest/api/iotcentral/2022-05-31dataplane/devices/get), [device property](/rest/api/iotcentral/2022-05-31dataplane/devices/get-properties), and [device template](/rest/api/iotcentral/2022-05-31dataplane/device-templates/get) objects in the IoT Central public API.
For **Devices**, notable differences between the old data format and the new data format include: - `@id` for device is removed, `deviceId` is renamed to `id`
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md
When a device connects to your IoT Central application, its device status change
- A new real device is added on the **Devices** page.
- A set of devices is added using **Import** on the **Devices** page.
-1. The device status changes to **Provisioned** when the device that connected to your IoT Central application with valid credentials completes the provisioning step. In this step, the device uses DPS to automatically retrieve a connection string from the IoT Hub used by your IoT Central application. The device can now connect to IoT Central and start sending data.
+1. The device status changes to **Provisioned** when the device that connected to your IoT Central application with valid credentials completes the provisioning step. In this step, the device uses DPS to automatically retrieve a connection string from the IoT Hub used by your IoT Central application. The device can now connect to IoT Central and start sending data.
1. An operator can block a device. When a device is blocked, it can't send data to your IoT Central application. Blocked devices have a status of **Blocked**. An operator must reset the device before it can resume sending data. When an operator unblocks a device the status returns to its previous value, **Registered** or **Provisioned**.
When a device connects to your IoT Central application, its device status change
### Device connection status
-When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events for the device are generated. These events are not sent by the device, they are generated internally by IoT Central.
+When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events for the device are generated. These events aren't sent by the device, they're generated internally by IoT Central.
The following diagram shows how, when a device connects, the connection is registered at the end of a time window. If multiple connection and disconnection events occur, IoT Central registers the one that's closest to the end of the time window. For example, if a device disconnects and reconnects within the time window, IoT Central registers the connection event. Currently, the time window is approximately one minute.
To add a device to your Azure IoT Central application:
1. This device now appears in your device list for this template. Select the device to see the device details page that contains all views for the device.
+## Get device connection information
+
+When a device provisions and connects to IoT Central, it needs connection information from your IoT Central application:
+
+- The *ID Scope* that identifies the application to DPS.
+- The *Device ID* that was used to register the device.
+- Either a SAS key or X.509 certificate.
+
+To find these values:
+
+1. Choose **Devices** on the left pane.
+
+1. Click on the device in the device list to see the device details.
+
+1. Select **Connect** to view the connection information. The QR code encodes a JSON document that includes the **ID Scope**, **Device ID**, and **Primary key** derived from the default **SAS-IoT-Devices** device connection group.
+
+> [!NOTE]
+> If the authentication type is **Shared access signature**, the keys displayed are derived from the default **SAS-IoT-Devices** device connection group.
+
## Change organization

To move a device to a different organization, you must have access to both the source and destination organizations. To move a device:
To move a device to a different organization, you must have access to both the s
1. Select the device to move in the device list.
-1. Select **Manage Device** and **Organization** from the drop down menu.
+1. Select **Manage Device** and **Organization** from the drop-down menu.
1. Select the new organization for the device:
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
The response to this request looks like the following example. The role value id
} ```
-You can also add a service principal user which is useful if you need to use service principal authentication for REST API calls. To learn more, see [Add or update a service principal user](/rest/api/iotcentral/1.0dataplane/users/create#add-or-update-a-service-principal-user).
+You can also add a service principal user which is useful if you need to use service principal authentication for REST API calls. To learn more, see [Add or update a service principal user](/rest/api/iotcentral/2022-05-31dataplane/users/create#add-or-update-a-service-principal-user).
### Change the role of a user
iot-central Iot Central Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/iot-central-customer-data-requests.md
Title: Customer data request featuresΓÇï in Azure IoT Central | Microsoft Docs
description: This article describes identifying, deleting, and exporting customer data in Azure IoT Central application. Previously updated : 12/28/2021 Last updated : 06/03/2022
iot-central Iot Central Customer Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/iot-central-customer-data-residency.md
Title: Customer data residency in Azure IoT Central | Microsoft Docs
description: This article describes customer data residency in Azure IoT Central applications. Previously updated : 12/09/2021 Last updated : 06/07/2022
iot-central Iot Central Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/iot-central-supported-browsers.md
Title: Supported browsers for Azure IoT Central | Microsoft Docs
description: Azure IoT Central can be accessed across modern desktops, tablets and browsers. This article outlines the list of supported browsers. Previously updated : 12/21/2021 Last updated : 06/08/2022
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
Title: Azure IoT Central application administration guide
description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to administer your IoT Central application. Application administration includes users, organization, security, and automated deployments. Previously updated : 01/04/2022 Last updated : 06/08/2022
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-api-tour.md
This article introduces you to Azure IoT Central REST API. Use the API to create
The REST API operations are grouped into the:

-- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/1.0dataplane/api-tokens) and [preview](/rest/api/iotcentral/1.2-previewdataplane/api-tokens) versions of the data plane API.
+- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/2022-05-31dataplane/api-tokens) and [preview](/rest/api/iotcentral/1.2-previewdataplane/api-tokens) versions of the data plane API.
- *Control plane* operations that let you work with the Azure resources associated with IoT Central applications. Control plane operations let you automate tasks that can also be completed in the Azure portal.

## Data plane operations
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
When you register a device with IoT Central, you're telling IoT Central the ID o
There are three ways to register a device in an IoT Central application:

-- Use the **Devices** page in your IoT Central application to register devices individually. To learn more, see [Add a device](howto-manage-devices-individually.md#add-a-device).
-- Add devices in bulk from a CSV file. To learn more, see [Import devices](howto-manage-devices-in-bulk.md#import-devices).
- Automatically register devices when they first try to connect. This scenario enables OEMs to mass manufacture devices that can connect without first being registered. To learn more, see [Automatically register devices](concepts-device-authentication.md#automatically-register-devices).
+- Add devices in bulk from a CSV file. To learn more, see [Import devices](howto-manage-devices-in-bulk.md#import-devices).
+- Use the **Devices** page in your IoT Central application to register devices individually. To learn more, see [Add a device](howto-manage-devices-individually.md#add-a-device).
Optionally, you can require an operator to approve the device before it starts sending data.
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Title: Azure IoT Central data integration guide | Microsoft Docs
description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to integrate your IoT Central application with other services to extend its capabilities. Previously updated : 01/04/2022 Last updated : 06/03/2022
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
To register your device:
Keep this page open. In the next section, you scan this QR code using the smartphone app to connect it to IoT Central.
+> [!TIP]
+> The QR code contains the information your device needs to establish a connection to your IoT Central application, such as the registered device ID. It saves you from having to enter the connection information manually.
+ ## Connect your device To get you started quickly, this article uses the **IoT Plug and Play** smartphone app as an IoT device. The app sends telemetry collected from the smartphone's sensors, responds to commands invoked from IoT Central, and reports property values to IoT Central.
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
Keep Termite open to monitor device output in the following steps.
* IAR Embedded Workbench for ARM (EW for ARM). You can download and install a [14-day free trial of IAR EW for ARM](https://www.iar.com/products/architectures/arm/iar-embedded-workbench-for-arm/).
-* Download the Microchip ATSAME54-XPRO IAR sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the Microchip ATSAME54-XPRO IAR sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory.
+ > [!IMPORTANT]
+ > Choose a directory with a short path to avoid compiler errors when you build. For example, use *C:\atsame54*.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
Keep Termite open to monitor device output in the following steps.
* [MPLAB XC32/32++ Compiler 2.4.0 or later](https://www.microchip.com/mplab/compilers).
-* Download the Microchip ATSAME54-XPRO MPLab sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the Microchip ATSAME54-XPRO MPLab sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory.
+ > [!IMPORTANT]
+ > Choose a directory with a short path to avoid compiler errors when you build. For example, use *C:\atsame54*.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
This checklist is a starting point for firewall rules:
| FQDN (\* = wildcard) | Outbound TCP Ports | Usage | | -- | -- | -- | | `mcr.microsoft.com` | 443 | Microsoft Container Registry |
+ | `\*.data.mcr.microsoft.com` | 443 | Data endpoint providing content delivery. |
| `global.azure-devices-provisioning.net` | 443 | [Device Provisioning Service](../iot-dps/about-iot-dps.md) access (optional) | | `\*.azurecr.io` | 443 | Personal and third-party container registries | | `\*.blob.core.windows.net` | 443 | Download Azure Container Registry image deltas from blob storage |
iot-hub Tutorial Message Enrichments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-message-enrichments.md
Previously updated : 03/16/2022 Last updated : 06/08/2022 # Customer intent: As a customer using Azure IoT Hub, I want to add information to the messages that come through my IoT hub and are sent to another endpoint. For example, I'd like to pass the IoT hub name to the application that reads the messages from the final endpoint, such as Azure Storage. # Tutorial: Use Azure IoT Hub message enrichments
-*Message enrichments* describes the ability of Azure IoT Hub to *stamp* messages with additional information before the messages are sent to the designated endpoint. One reason to use message enrichments is to include data that can be used to simplify downstream processing. For example, enriching device telemetry messages with a device twin tag can reduce load on customers to make device twin API calls for this information. For more information, see [Overview of message enrichments](iot-hub-message-enrichments-overview.md).
+*Message enrichments* are the ability of Azure IoT Hub to stamp messages with additional information before the messages are sent to the designated endpoint. One reason to use message enrichments is to include data that can be used to simplify downstream processing. For example, enriching device messages with a device twin tag can reduce load on customers to make device twin API calls for this information. For more information, see [Overview of message enrichments](iot-hub-message-enrichments-overview.md).
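For example, you can add a twin-tag enrichment from the Azure CLI as well as from the portal. A minimal sketch, assuming a hub named ContosoTestHubMsgEn and a routing endpoint named ContosoStorageEndpointEnriched (the names used later in this tutorial):

```azurecli
# Sketch: stamp messages routed to an endpoint with the device twin's location tag.
# The hub and endpoint names are assumptions taken from this tutorial.
az iot hub message-enrichment create \
  --name ContosoTestHubMsgEn \
  --key DeviceLocation \
  --value '$twin.tags.location' \
  --endpoints ContosoStorageEndpointEnriched
```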
In this tutorial, you see two ways to create and configure the resources that are needed to test the message enrichments for an IoT hub. The resources include one storage account with two storage containers. One container holds the enriched messages, and another container holds the original messages. Also included is an IoT hub to receive the messages and route them to the appropriate storage container based on whether they're enriched or not.
-* The first method is to use the Azure CLI to create the resources and configure the message routing. Then you define the enrichments manually by using the [Azure portal](https://portal.azure.com).
+* The first method is to use the Azure CLI to create the resources and configure the message routing. Then you define the message enrichments in the Azure portal.
-* The second method is to use an Azure Resource Manager template to create both the resources *and* the configurations for the message routing and message enrichments.
+* The second method is to use an Azure Resource Manager template to create both the resources and configure both the message routing and message enrichments.
After the configurations for the message routing and message enrichments are finished, you use an application to send messages to the IoT hub. The hub then routes them to both storage containers. Only the messages sent to the endpoint for the **enriched** storage container are enriched.
-Here are the tasks you perform to complete this tutorial:
+In this tutorial, you perform the following tasks:
-**Use IoT Hub message enrichments**
> [!div class="checklist"]
-> * First method: Create resources and configure message routing by using the Azure CLI. Configure the message enrichments manually by using the [Azure portal](https://portal.azure.com).
-> * Second method: Create resources and configure message routing and message enrichments by using a Resource Manager template.
+>
+> * First method: Create resources and configure message routing using the Azure CLI. Configure the message enrichments in the Azure portal.
+> * Second method: Create resources and configure message routing and message enrichments using a Resource Manager template.
> * Run an app that simulates an IoT device sending messages to the hub.
-> * View the results, and verify that the message enrichments are working as expected.
+> * View the results, and verify that the message enrichments are being applied to the targeted messages.
## Prerequisites -- You must have an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.--- Install [Visual Studio](https://www.visualstudio.com/).
+* You must have an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)] ## Retrieve the IoT C# samples repository
-Download the [IoT C# samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip) from GitHub and unzip them. This repository has several applications, scripts, and Resource Manager templates in it. The ones to be used for this tutorial are as follows:
+Download or clone the [IoT C# samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) from GitHub. Follow the directions in **README.md** to set up the prerequisites for running C# samples.
+
+This repository has several applications, scripts, and Resource Manager templates in it. The ones to be used for this tutorial are as follows:
-* For the manual method, there's a CLI script that's used to create the resources. This script is in /azure-iot-samples-csharp/iot-hub/Tutorials/Routing/SimulatedDevice/resources/iothub_msgenrichment_cli.azcli. This script creates the resources and configures the message routing. After you run this script, create the message enrichments manually by using the [Azure portal](https://portal.azure.com).
-* For the automated method, there's an Azure Resource Manager template. The template is in /azure-iot-samples-csharp/iot-hub/Tutorials/Routing/SimulatedDevice/resources/template_msgenrichments.json. This template creates the resources, configures the message routing, and then configures the message enrichments.
-* The third application you use is the Device Simulation app, which you use to send messages to the IoT hub and test the message enrichments.
+* For the manual method, there's a CLI script that creates the cloud resources. This script is in `/azure-iot-samples-csharp/iot-hub/Tutorials/Routing/SimulatedDevice/resources/iothub_msgenrichment_cli.azcli`. This script creates the resources and configures the message routing. After you run this script, create the message enrichments manually by using the Azure portal.
+* For the automated method, there's an Azure Resource Manager template. The template is in `/azure-iot-samples-csharp/iot-hub/Tutorials/Routing/SimulatedDevice/resources/template_msgenrichments.json`. This template creates the resources, configures the message routing, and then configures the message enrichments.
+* The third application you use is the device simulation app, which you use to send messages to the IoT hub and test the message enrichments.
-## Manually set up and configure by using the Azure CLI
+## Create and configure resources using the Azure CLI
-In addition to creating the necessary resources, the Azure CLI script also configures the two routes to the endpoints that are separate storage containers. For more information on how to configure the message routing, see the [Routing tutorial](tutorial-routing.md). After the resources are set up, use the [Azure portal](https://portal.azure.com) to configure message enrichments for each endpoint. Then continue on to the testing step.
+In addition to creating the necessary resources, the Azure CLI script also configures the two routes to the endpoints that are separate storage containers. For more information on how to configure message routing, see the [routing tutorial](tutorial-routing.md). After the resources are set up, use the [Azure portal](https://portal.azure.com) to configure message enrichments for each endpoint. Then continue on to the testing step.
> [!NOTE] > All messages are routed to both endpoints, but only the messages going to the endpoint with configured message enrichments will be enriched.
->
You can use the script that follows, or you can open the script in the /resources folder of the downloaded repository. The script performs the following steps: * Create an IoT hub. * Create a storage account. * Create two containers in the storage account. One container is for the enriched messages, and another container is for messages that aren't enriched.
-* Set up routing for the two different storage accounts:
- * Create an endpoint for each storage account container.
- * Create a route to each of the storage account container endpoints.
+* Set up routing for the two different storage containers:
+ * Create an endpoint for each storage account container.
+ * Create a route to each of the storage account container endpoints.
There are several resource names that must be globally unique, such as the IoT hub name and the storage account name. To make running the script easier, those resource names are appended with a random alphanumeric value called *randomValue*. The random value is generated once at the top of the script. It's appended to the resource names as needed throughout the script. If you don't want the value to be random, you can set it to an empty string or to a specific value.
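For example, the naming pattern in the script looks roughly like the following sketch (the exact variable names in the downloaded script may differ):

```azurecli
# Sketch: a random suffix keeps globally unique resource names from colliding.
randomValue=$RANDOM
iotHubName=ContosoTestHubMsgEn$randomValue
storageAccountName=contosostorage$randomValue
```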
Here are the resources created by the script. *Enriched* means that the resource
| Name | Value | |--|--| | resourceGroup | ContosoResourcesMsgEn |
-| container name | original |
-| container name | enriched |
| IoT device name | Contoso-Test-Device | | IoT Hub name | ContosoTestHubMsgEn | | storage Account Name | contosostorage |
+| container name 1 | original |
+| container name 2 | enriched |
| endpoint Name 1 | ContosoStorageEndpointOriginal | | endpoint Name 2 | ContosoStorageEndpointEnriched| | route Name 1 | ContosoStorageRouteOriginal |
subscriptionID=$(az account show --query id -o tsv)
# This retrieves a random value. randomValue=$RANDOM
-# This command installs the IOT Extension for Azure CLI.
+# This command installs the IoT Extension for Azure CLI.
# You only need to install this the first time. # You need it to create the device identity. az extension add --name azure-iot
az iot hub route create \
At this point, the resources are all set up and the message routing is configured. You can view the message routing configuration in the portal and set up the message enrichments for messages going to the **enriched** storage container.
-### Manually configure the message enrichments by using the Azure portal
+### Configure the message enrichments using the Azure portal
+
+1. In the [Azure portal](https://portal.azure.com), go to your IoT hub by selecting **Resource groups**. Then select the resource group set up for this tutorial (**ContosoResourcesMsgEn**). Find the IoT hub in the list, and select it.
-1. Go to your IoT hub by selecting **Resource groups**. Then select the resource group set up for this tutorial (**ContosoResourcesMsgEn**). Find the IoT hub in the list, and select it. Select **Message routing** for the IoT hub.
+2. Select **Message routing** for the IoT hub.
:::image type="content" source="./media/tutorial-message-enrichments/select-iot-hub.png" alt-text="Screenshot that shows how to select message routing." border="true":::
- The message routing pane has three tabs labeled **Routes**, **Custom endpoints**, and **Enrich messages**. Browse the first two tabs to see the configuration set up by the script. Use the third tab to add message enrichments. Let's enrich messages going to the endpoint for the storage container called **enriched**. Fill in the name and value, and then select the endpoint **ContosoStorageEndpointEnriched** from the drop-down list. Here's an example of how to set up an enrichment that adds the IoT hub name to the message:
+ The message routing pane has three tabs labeled **Routes**, **Custom endpoints**, and **Enrich messages**. Browse the first two tabs to see the configuration set up by the script.
+
+3. Select the **Enrich messages** tab to add three message enrichments for the messages going to the endpoint for the storage container called **enriched**.
+
+4. For each message enrichment, fill in the name and value, and then select the endpoint **ContosoStorageEndpointEnriched** from the drop-down list. Here's an example of how to set up an enrichment that adds the IoT hub name to the message:
![Add first enrichment](./media/tutorial-message-enrichments/add-message-enrichments.png)
-2. Add these values to the list for the ContosoStorageEndpointEnriched endpoint.
+ Add these values to the list for the ContosoStorageEndpointEnriched endpoint:
- | Key | Value | Endpoint (drop-down list) |
- | - | -- | -|
- | myIotHub | $iothubname | AzureStorageContainers > ContosoStorageEndpointEnriched |
- | DeviceLocation | $twin.tags.location (assumes that the device twin has a location tag) | AzureStorageContainers > ContosoStorageEndpointEnriched |
- |customerID | 6ce345b8-1e4a-411e-9398-d34587459a3a | AzureStorageContainers > ContosoStorageEndpointEnriched |
+ | Name | Value | Endpoint |
+ | - | -- | -- |
+ | myIotHub | `$iothubname` | ContosoStorageEndpointEnriched |
+ | DeviceLocation | `$twin.tags.location` (assumes that the device twin has a location tag) | ContosoStorageEndpointEnriched |
+ |customerID | `6ce345b8-1e4a-411e-9398-d34587459a3a` | ContosoStorageEndpointEnriched |
-3. When you're finished, your pane should look similar to this image:
+ When you're finished, your pane should look similar to this image:
![Table with all enrichments added](./media/tutorial-message-enrichments/all-message-enrichments.png)
-4. Select **Apply** to save the changes. Skip to the [Test message enrichments](#test-message-enrichments) section.
+5. Select **Apply** to save the changes.
+
+You now have message enrichments set up for all messages routed to the **enriched** endpoint. Skip to the [Test message enrichments](#test-message-enrichments) section to continue the tutorial.
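Optionally, you can confirm the configuration from the command line by listing the enrichments on the hub. A quick check, assuming the hub created by the script (remember that the script appends a random suffix to the hub name):

```azurecli
# Sketch: list the message enrichments configured on the hub.
az iot hub message-enrichment list --name ContosoTestHubMsgEn --output table
```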
-## Create and configure by using a Resource Manager template
+## Create and configure resources using a Resource Manager template
You can use a Resource Manager template to create and configure the resources, message routing, and message enrichments.
-1. Sign in to the Azure portal. Select **+ Create a Resource** to bring up a search box. Enter *template deployment*, and search for it. In the results pane, select **Template deployment (deploy using custom template)**.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **+ Create a Resource** to bring up a search box. Enter *template deployment*, and search for it. In the results pane, select **Template deployment (deploy using custom template)**.
![Template deployment in the Azure portal](./media/tutorial-message-enrichments/template-select-deployment.png)
You can use a Resource Manager template to create and configure the resources, m
1. In the **Custom deployment** pane, select **Build your own template in the editor**.
-1. In the **Edit template** pane, select **Load file**. Windows Explorer appears. Locate the **template_messageenrichments.json** file in the unzipped repo file in **/iot-hub/Tutorials/Routing/SimulatedDevice/resources**.
+1. In the **Edit template** pane, select **Load file**. Windows Explorer appears. Locate the **template_messageenrichments.json** file in the unzipped repo file in the **/iot-hub/Tutorials/Routing/SimulatedDevice/resources** directory.
![Select template from local machine](./media/tutorial-message-enrichments/template-select.png)
You can use a Resource Manager template to create and configure the resources, m
| Name | Value | |--|--|
- | resourceGroup | ContosoResourcesMsgEn |
- | container name | original |
- | container name | enriched |
- | IoT device name | Contoso-Test-Device |
| IoT Hub name | ContosoTestHubMsgEn | | storage Account Name | contosostorage |
+ | container name 1 | original |
+ | container name 2 | enriched |
| endpoint Name 1 | ContosoStorageEndpointOriginal | | endpoint Name 2 | ContosoStorageEndpointEnriched| | route Name 1 | ContosoStorageRouteOriginal |
You can use a Resource Manager template to create and configure the resources, m
![Top half of Custom deployment pane](./media/tutorial-message-enrichments/template-deployment-top.png)
-1. Here's the bottom half of the **Custom deployment** pane. You can see the rest of the parameters and the terms and conditions.
+1. Here's the bottom half of the **Custom deployment** pane. You can see the rest of the parameters and the terms and conditions.
![Bottom half of Custom deployment pane](./media/tutorial-message-enrichments/template-deployment-bottom.png) 1. Select the check box to agree to the terms and conditions. Then select **Purchase** to continue with the template deployment.
-1. Wait for the template to be fully deployed. Select the bell icon at the top of the screen to check on the progress. When it's finished, continue to the [Test message enrichments](#test-message-enrichments) section.
+1. Wait for the template to be fully deployed. Select the bell icon at the top of the screen to check on the progress.
+
+### Register a device in the portal
+
+1. Once your resources are deployed, select the IoT hub in your resource group.
+1. Select **Devices** from the **Device management** section of the navigation menu.
+1. Select **Add Device** to register a new device in your hub.
+1. Provide a device ID. The sample application used later in this tutorial defaults to a device named `Contoso-Test-Device`, but you can use any ID. Select **Save**.
+1. Once the device is created in your hub, select its name from the list of devices. You may need to refresh the list.
+1. Copy the **Primary key** value and have it available to use in the testing section of this article.
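If you prefer the command line, a sketch like the following registers the device and retrieves its primary key (it assumes the azure-iot extension is installed and uses the hub name from this tutorial):

```azurecli
# Sketch: register the test device and retrieve its primary key.
az iot hub device-identity create --hub-name ContosoTestHubMsgEn --device-id Contoso-Test-Device
az iot hub device-identity show \
  --hub-name ContosoTestHubMsgEn \
  --device-id Contoso-Test-Device \
  --query authentication.symmetricKey.primaryKey -o tsv
```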
## Add location tag to the device twin
One of the message enrichments configured on your IoT hub specifies a key of Dev
Follow these steps to add a location tag to your device's twin with the portal.
-1. Go to your IoT hub by selecting **Resource groups**. Then select the resource group set up for this tutorial (**ContosoResourcesMsgEn**). Find the IoT hub in the list, and select it. Select **Devices** on the left-pane of the IoT hub, then select your device (**Contoso-Test-Device**).
+1. Navigate to your IoT hub in the Azure portal.
+
+1. Select **Devices** on the left-pane of the IoT hub, then select your device.
1. Select the **Device twin** tab at the top of the device page and add the following line just before the closing brace at the bottom of the device twin. Then select **Save**. ```json , "tags": {"location": "Plant 43"}- ``` :::image type="content" source="./media/tutorial-message-enrichments/add-location-tag-to-device-twin.png" alt-text="Screenshot of adding location tag to device twin in Azure portal":::
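You can also add the tag from the command line instead of editing the twin in the portal. A sketch, assuming a recent version of the azure-iot extension (older versions use the generic `--set tags=...` syntax instead of `--tags`):

```azurecli
# Sketch: add the location tag to the device twin with the Azure CLI.
az iot hub device-twin update \
  --hub-name ContosoTestHubMsgEn \
  --device-id Contoso-Test-Device \
  --tags '{"location": "Plant 43"}'
```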
To learn more about how device twin paths are handled with message enrichments,
To view the message enrichments, select **Resource groups**. Then select the resource group you're using for this tutorial. Select the IoT hub from the list of resources, and go to **Messaging**. The message routing configuration and the configured enrichments appear.
-Now that the message enrichments are configured for the endpoint, run the Simulated Device application to send messages to the IoT hub. The hub was set up with settings that accomplish the following tasks:
+Now that the message enrichments are configured for the **enriched** endpoint, run the simulated device application to send messages to the IoT hub. The hub was set up with settings that accomplish the following tasks:
-* Messages routed to the storage endpoint ContosoStorageEndpointOriginal won't be enriched and will be stored in the storage container `original`.
+* Messages routed to the storage endpoint ContosoStorageEndpointOriginal won't be enriched and will be stored in the storage container **original**.
-* Messages routed to the storage endpoint ContosoStorageEndpointEnriched will be enriched and stored in the storage container `enriched`.
+* Messages routed to the storage endpoint ContosoStorageEndpointEnriched will be enriched and stored in the storage container **enriched**.
-The Simulated Device application is one of the applications in the unzipped download. The application sends messages for each of the different message routing methods in the [Routing tutorial](tutorial-routing.md), which includes Azure Storage.
+The simulated device application is one of the applications in the azure-iot-samples-csharp repository. The application sends messages with a randomized value for the property `level`. Only messages that have `storage` set as the message's level property will be routed to the two endpoints.
-Double-click the solution file **IoT_SimulatedDevice.sln** to open the code in Visual Studio, and then open **Program.cs**. Substitute the IoT hub name for the marker `{your hub name}`. The format of the IoT hub host name is **{your hub name}.azure-devices.net**. For this tutorial, the hub host name is ContosoTestHubMsgEn.azure-devices.net. Next, substitute the device key you saved earlier when you ran the script to create the resources for the marker `{your device key}`.
+1. Open the file **Program.cs** from the **SimulatedDevice** directory in your preferred code editor.
-If you don't have the device key, you can retrieve it from the portal. After you sign in, go to **Resource groups**, select your resource group, and then select your IoT hub. Look under **IoT Devices** for your test device, and select your device. Select the copy icon next to **Primary key** to copy it to the clipboard.
+1. Replace the placeholder text with your own resource information. Substitute the IoT hub name for the marker `{your hub name}`. The format of the IoT hub host name is **{your hub name}.azure-devices.net**. Next, substitute the device key you saved earlier when you ran the script to create the resources for the marker `{your device key}`.
+
+ If you don't have the device key, you can retrieve it from the portal. After you sign in, go to **Resource groups**, select your resource group, and then select your IoT hub. Look under **IoT Devices** for your test device, and select your device. Select the copy icon next to **Primary key** to copy it to the clipboard.
```csharp
- private readonly static string s_myDeviceId = "Contoso-Test-Device";
- private readonly static string s_iotHubUri = "ContosoTestHubMsgEn.azure-devices.net";
- // This is the primary key for the device. This is in the portal.
- // Find your IoT hub in the portal > IoT devices > select your device > copy the key.
- private readonly static string s_deviceKey = "{your device key}";
+ private readonly static string s_myDeviceId = "Contoso-Test-Device";
+ private readonly static string s_iotHubUri = "{your hub name}.azure-devices.net";
+ // This is the primary key for the device. This is in the portal.
+ // Find your IoT hub in the portal > IoT devices > select your device > copy the key.
+ private readonly static string s_deviceKey = "{your device key}";
``` ### Run and test
-Run the console application for a few minutes. The messages that are being sent are displayed on the console screen of the application.
+Run the console application for a few minutes.
-The app sends a new device-to-cloud message to the IoT hub every second. The message contains a JSON-serialized object with the device ID, temperature, humidity, and message level, which defaults to `normal`. It randomly assigns a level of `critical` or `storage`, which causes the message to be routed to the storage account or to the default endpoint. The messages sent to the **enriched** container in the storage account will be enriched.
+In a command line window, you can run the sample with the following commands executed at the **SimulatedDevice** directory level:
-After several storage messages are sent, view the data.
+```console
+dotnet restore
+dotnet run
+```
-1. Select **Resource groups**. Find your resource group, **ContosoResourcesMsgEn**, and select it.
+The app sends a new device-to-cloud message to the IoT hub every second. The messages that are being sent are displayed on the console screen of the application. The message contains a JSON-serialized object with the device ID, temperature, humidity, and message level, which defaults to `normal`. The sample program randomly changes the message level to either `critical` or `storage`. Messages labeled for storage are routed to the storage account, and the rest go to the default endpoint. The messages sent to the **enriched** container in the storage account will be enriched.
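While the app runs, you can optionally watch the device-to-cloud messages arrive at the hub from a second terminal. A sketch, assuming the azure-iot extension and the hub name from this tutorial:

```azurecli
# Sketch: monitor the messages the simulated device sends to the hub.
az iot hub monitor-events --hub-name ContosoTestHubMsgEn --device-id Contoso-Test-Device
```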
-2. Select your storage account, which is **contosostorage**. Then select **Storage Explorer (preview)** in the left pane.
+After several storage messages are sent, view the data.
- ![Select Storage Explorer](./media/tutorial-message-enrichments/select-storage-explorer.png)
+1. Select **Resource groups**. Find your resource group, **ContosoResourcesMsgEn**, and select it.
- Select **BLOB CONTAINERS** to see the two containers that can be used.
+2. Select your storage account, which begins with **contosostorage**. Then select **Storage browser (preview)** from the navigation menu. Select **Blob containers** to see the two containers that you created.
- ![See the containers in the storage account](./media/tutorial-message-enrichments/show-blob-containers.png)
+ :::image type="content" source="./media/tutorial-message-enrichments/show-blob-containers.png" alt-text="See the containers in the storage account.":::
-The messages in the container called **enriched** have the message enrichments included in the messages. The messages in the container called **original** have the raw messages with no enrichments. Drill down into one of the containers until you get to the bottom, and open the most recent message file. Then do the same for the other container to verify that there are no enrichments added to messages in that container.
+The messages in the container called **enriched** have the message enrichments included in the messages. The messages in the container called **original** have the raw messages with no enrichments. Drill down into one of the containers until you get to the bottom, and open the most recent message file. Then do the same for the other container to verify that one set of messages is enriched and the other isn't.
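If you prefer the command line, you can also list the blobs in each container. A sketch, assuming your signed-in account has data-plane access to the storage account (for example, the Storage Blob Data Reader role) and that you substitute your own storage account name, which includes the random suffix:

```azurecli
# Sketch: list the blobs written to the enriched container.
az storage blob list \
  --account-name contosostorage \
  --container-name enriched \
  --auth-mode login \
  --output table
```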
When you look at messages that have been enriched, you should see `myIotHub` with the hub name, along with the device location and the customer ID, like this: ```json
-{"EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z","Properties":{"level":"storage","myIotHub":"contosotesthubmsgen3276","DeviceLocation":"Plant 43","customerID":"6ce345b8-1e4a-411e-9398-d34587459a3a"},"SystemProperties":{"connectionDeviceId":"Contoso-Test-Device","connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"636930642531278483","enqueuedTime":"2019-05-10T06:06:32.7220000Z"},"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"}
+{
+ "EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z",
+ "Properties":
+ {
+ "level":"storage",
+ "myIotHub":"contosotesthubmsgen3276",
+ "DeviceLocation":"Plant 43",
+ "customerID":"6ce345b8-1e4a-411e-9398-d34587459a3a"
+ },
+ "SystemProperties":
+ {
+ "connectionDeviceId":"Contoso-Test-Device",
+ "connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
+ "connectionDeviceGenerationId":"636930642531278483",
+ "enqueuedTime":"2019-05-10T06:06:32.7220000Z"
+ },"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"
+}
```
-Here's an unenriched message. Notice that "my IoT Hub," "devicelocation," and "customerID" don't show up here because these fields are added by the enrichments. This endpoint has no enrichments.
+Here's an unenriched message. Notice that `myIotHub`, `DeviceLocation`, and `customerID` don't show up here because these fields are added by the enrichments. This endpoint has no enrichments.
```json
-{"EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z","Properties":{"level":"storage"},"SystemProperties":{"connectionDeviceId":"Contoso-Test-Device","connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"636930642531278483","enqueuedTime":"2019-05-10T06:06:32.7220000Z"},"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"}
+{
+ "EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z",
+ "Properties":
+ {
+ "level":"storage"
+ },
+ "SystemProperties":
+ {
+ "connectionDeviceId":"Contoso-Test-Device",
+ "connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
+ "connectionDeviceGenerationId":"636930642531278483",
+ "enqueuedTime":"2019-05-10T06:06:32.7220000Z"
+ },"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"
+}
``` ## Clean up resources
az group delete --name $resourceGroup
## Next steps
-In this tutorial, you configured and tested adding message enrichments to IoT Hub messages by using the following steps:
-
-**Use IoT Hub message enrichments**
-
-> [!div class="checklist"]
-> * First method: Create resources and configure message routing by using the Azure CLI. Configure the message enrichments manually by using the [Azure portal](https://portal.azure.com).
-> * Second method: Create resources and configure message routing and message enrichments by using an Azure Resource Manager template.
-> * Run an app that simulates an IoT device sending messages to the hub.
-> * View the results, and verify that the message enrichments are working as expected.
+In this tutorial, you configured and tested message enrichments for IoT Hub messages as they are routed to an endpoint.
For more information about message enrichments, see [Overview of message enrichments](iot-hub-message-enrichments-overview.md).
-For more information about message routing, see these articles:
-
-> [!div class="nextstepaction"]
-> [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](iot-hub-devguide-messages-d2c.md)
+To learn more about IoT Hub, continue to the next tutorial.
> [!div class="nextstepaction"]
-> [Tutorial: IoT Hub routing](tutorial-routing.md)
+> [Tutorial: Set up and use metrics and logs with an IoT hub](tutorial-use-metrics-and-diags.md)
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
For full details, see [Assign Azure roles using Azure PowerShell](../../role-bas
1. Validate adding new secret without "Key Vault Secrets Officer" role on key vault level.
-Go to key vault Access control (IAM) tab and remove "Key Vault Secrets Officer" role assignment for this resource.
+ 1. Go to key vault Access control (IAM) tab and remove "Key Vault Secrets Officer" role assignment for this resource.
-![Remove assignment - key vault](../media/rbac/image-9.png)
+ ![Remove assignment - key vault](../media/rbac/image-9.png)
-Navigate to previously created secret. You can see all secret properties.
+ 1. Navigate to previously created secret. You can see all secret properties.
-![Secret view with access](../media/rbac/image-10.png)
+ ![Secret view with access](../media/rbac/image-10.png)
-Create new secret ( Secrets \> +Generate/Import) should show below error:
+   1. Creating a new secret (Secrets \> +Generate/Import) should show the following error:
- ![Create new secret](../media/rbac/image-11.png)
+ ![Create new secret](../media/rbac/image-11.png)
-2. Validate secret editing without "Key Vault Secret Officer" role on secret level.
+1. Validate secret editing without "Key Vault Secrets Officer" role on secret level.
-- Go to previously created secret Access Control (IAM) tab
+ 1. Go to previously created secret Access Control (IAM) tab
and remove "Key Vault Secrets Officer" role assignment for this resource. -- Navigate to previously created secret. You can see secret properties.
+ 1. Navigate to previously created secret. You can see secret properties.
- ![Secret view without access](../media/rbac/image-12.png)
+ ![Secret view without access](../media/rbac/image-12.png)
-3. Validate secrets read without reader role on key vault level.
+1. Validate secrets read without reader role on key vault level.
-- Go to key vault resource group Access control (IAM) tab and remove "Key Vault Reader" role assignment.
+ 1. Go to key vault resource group Access control (IAM) tab and remove "Key Vault Reader" role assignment.
-- Navigating to key vault's Secrets tab should show below error:
+ 1. Navigating to key vault's Secrets tab should show below error:
- ![Secret tab - error](../media/rbac/image-13.png)
+ ![Secret tab - error](../media/rbac/image-13.png)
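The role assignments removed in these validation steps can also be removed with the Azure CLI. A sketch, with placeholder values that you replace with your own identity, subscription, resource group, and vault names:

```azurecli
# Sketch: remove the "Key Vault Secrets Officer" assignment at the key vault scope. All values are placeholders.
az role assignment delete \
  --assignee "<user-or-service-principal-id>" \
  --role "Key Vault Secrets Officer" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
```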
### Creating custom roles
lighthouse Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/architecture.md
Title: Azure Lighthouse architecture description: Learn about the relationship between tenants in Azure Lighthouse, and the resources created in the customer's tenant that enable that relationship. Previously updated : 09/13/2021 Last updated : 06/09/2022
lighthouse Cloud Solution Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/cloud-solution-provider.md
Title: Cloud Solution Provider program considerations description: For CSP partners, Azure delegated resource management helps improve security and control by enabling granular permissions. Previously updated : 11/18/2021 Last updated : 06/09/2022
Azure Lighthouse helps improve security by limiting unnecessary access to your c
To further minimize the number of permanent assignments, you can [create eligible authorizations](../how-to/create-eligible-authorizations.md) (currently in public preview) to grant additional permissions to your users on a just-in-time basis.
-Onboarding a subscription that you created through the CSP program follows the steps described in [Onboard a subscription to Azure Lighthouse](../how-to/onboard-customer.md). Any user who has the Admin Agent role in your tenant can perform this onboarding.
+Onboarding a subscription that you created through the CSP program follows the steps described in [Onboard a subscription to Azure Lighthouse](../how-to/onboard-customer.md). Any user who has the Admin Agent role in the customer's tenant can perform this onboarding.
> [!TIP]
-> [Managed Service offers](managed-services-offers.md) with private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program. You can onboard these subscriptions to Azure Lighthouse by [using Azure Resource Manager templates](../how-to/onboard-customer.md).
+> [Managed Service offers](managed-services-offers.md) with private plans aren't supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program. Instead, you can onboard these subscriptions to Azure Lighthouse by [using Azure Resource Manager templates](../how-to/onboard-customer.md).
> [!NOTE] > The [**My customers** page in the Azure portal](../how-to/view-manage-customers.md) now includes a **Cloud Solution Provider (Preview)** section, which displays billing info and resources for CSP customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started). For more info, see [Get started with your Microsoft Partner Agreement billing account](../../cost-management-billing/understand/mpa-overview.md).
lighthouse Cross Tenant Management Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/cross-tenant-management-experience.md
Title: Cross-tenant management experiences description: Azure Lighthouse enables and enhances cross-tenant experiences in many Azure services. Previously updated : 12/01/2021 Last updated : 06/09/2022 # Cross-tenant management experiences
-As a service provider, you can use [Azure Lighthouse](../overview.md) to manage resources for multiple customers from within your own Azure Active Directory (Azure AD) tenant. Many tasks and services can be performed across managed tenants by using [Azure delegated resource management](../concepts/architecture.md).
+As a service provider, you can use [Azure Lighthouse](../overview.md) to manage your customers' Azure resources from within your own Azure Active Directory (Azure AD) tenant. Many common tasks and services can be performed across these managed tenants.
> [!TIP] > Azure Lighthouse can also be used [within an enterprise which has multiple Azure AD tenants of its own](enterprise.md) to simplify cross-tenant administration. ## Understanding tenants and delegation
-An Azure AD tenant is a representation of an organization. It's a dedicated instance of Azure AD that an organization receives when they create a relationship with Microsoft by signing up for Azure, Microsoft 365, or other services. Each Azure AD tenant is distinct and separate from other Azure AD tenants, and has its own tenant ID (a GUID). For more info, see [What is Azure Active Directory?](../../active-directory/fundamentals/active-directory-whatis.md)
+An Azure AD tenant is a representation of an organization. It's a dedicated instance of Azure AD that an organization receives when they create a relationship with Microsoft by signing up for Azure, Microsoft 365, or other services. Each Azure AD tenant is distinct and separate from other Azure AD tenants, and has its own tenant ID (a GUID). For more information, see [What is Azure Active Directory?](../../active-directory/fundamentals/active-directory-whatis.md)
-Typically, in order to manage Azure resources for a customer, service providers would have to sign in to the Azure portal using an account associated with that customer's tenant, requiring an administrator in the customer's tenant to create and manage user accounts for the service provider.
+Typically, in order to manage Azure resources for a customer, service providers would have to sign in to the Azure portal using an account associated with that customer's tenant. In this scenario, an administrator in the customer's tenant must create and manage user accounts for the service provider.
-With Azure Lighthouse, the onboarding process specifies users within the service provider's tenant who will be able to work on delegated subscriptions and resource groups in the customer's tenant. These users can then sign in to the Azure portal using their own credentials. Within the Azure portal, they can manage resources belonging to all customers to which they have access. This can be done by visiting the [My customers](../how-to/view-manage-customers.md) page in the Azure portal, or by working directly within the context of that customer's subscription, either in the Azure portal or via APIs.
+With Azure Lighthouse, the onboarding process specifies users in the service provider's tenant who will be able to work on delegated subscriptions and resource groups in the customer's tenant. These users can then sign in to the Azure portal, using their own credentials, and work on resources belonging to all of the customers to which they have access. Users in the managing tenant can see all of these customers by visiting the [My customers](../how-to/view-manage-customers.md) page in the Azure portal. They can also work on resources directly within the context of that customer's subscription, either in the Azure portal or via APIs.
-Azure Lighthouse allows greater flexibility to manage resources for multiple customers without having to sign in to different accounts in different tenants. For example, a service provider may have two customers with different responsibilities and access levels. Using Azure Lighthouse, authorized users can sign in to the service provider's tenant to access these resources.
+Azure Lighthouse provides flexibility to manage resources for multiple customers without having to sign in to different accounts in different tenants. For example, a service provider may have two customers with different responsibilities and access levels. Using Azure Lighthouse, authorized users can sign in to the service provider's tenant and access all of the delegated resources across these customers.
![Diagram showing customer resources managed through one service provider tenant.](../media/azure-delegated-resource-management-service-provider-tenant.jpg) ## APIs and management tool support
-You can perform management tasks on delegated resources either directly in the portal or by using APIs and management tools (such as Azure CLI and Azure PowerShell). All existing APIs can be used when working with delegated resources, as long as the functionality is supported for cross-tenant management and the user has the appropriate permissions.
+You can perform management tasks on delegated resources in the Azure portal, or you can use APIs and management tools such as Azure CLI and Azure PowerShell. All existing APIs can be used on delegated resources, as long as the functionality is supported for cross-tenant management and the user has the appropriate permissions.
The Azure PowerShell [Get-AzSubscription cmdlet](/powershell/module/Az.Accounts/Get-AzSubscription) will show the `TenantId` for the managing tenant by default. You can use the `HomeTenantId` and `ManagedByTenantIds` attributes for each subscription, allowing you to identify whether a returned subscription belongs to a managed tenant or to your managing tenant.
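A similar check is possible with the Azure CLI, which in recent versions also surfaces the home tenant and managing tenants for each subscription (a sketch; the property names in the output may vary by CLI version):

```azurecli
# Sketch: show which subscriptions are delegated from other tenants.
az account list --query "[].{name:name, homeTenantId:homeTenantId, managedByTenants:managedByTenants}" --output json
```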
Most tasks and services can be performed on delegated resources across managed t
- Manage hybrid servers at scale - [Azure Arc-enabled servers](../../azure-arc/servers/overview.md): - [Manage Windows Server or Linux machines outside Azure that are connected](../../azure-arc/servers/onboard-portal.md) to delegated subscriptions and/or resource groups in Azure - Manage connected machines using Azure constructs, such as Azure Policy and tagging
- - Ensure the same set of policies are applied across customers' hybrid environments
- - Use Microsoft Defender for Cloud to monitor compliance across customers' hybrid environments
+ - Ensure the same set of [policies are applied](../../azure-arc/servers/learn/tutorial-assign-policy-portal.md) across customers' hybrid environments
+ - Use Microsoft Defender for Cloud to [monitor compliance across customers' hybrid environments](../../defender-for-cloud/quickstart-onboard-machines.md?pivots=azure-arc)
- Manage hybrid Kubernetes clusters at scale - [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md): - [Manage Kubernetes clusters that are connected](../../azure-arc/kubernetes/quickstart-connect-cluster.md) to delegated subscriptions and/or resource groups in Azure
- - [Use GitOps](../../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md) for connected clusters
- - Enforce policies across connected clusters
+ - [Use GitOps](../../azure-arc/kubernetes/tutorial-use-gitops-flux2.md) for connected clusters
+ - [Enforce policies across connected clusters](../../governance/policy/concepts/policy-for-kubernetes.md#install-azure-policy-extension-for-azure-arc-enabled-kubernetes)
[Azure Automation](../../automation/index.yml):
Most tasks and services can be performed on delegated resources across managed t
[Azure Backup](../../backup/index.yml): - Back up and restore customer data [from on-premises workloads, Azure VMs, Azure file shares, and more](../..//backup/backup-overview.md#what-can-i-back-up)-- View data for all delegated customer resources in [Backup Center](../../backup/backup-center-overview.md)
+- View data for all delegated customer resources in [Backup center](../../backup/backup-center-overview.md)
- Use the [Backup Explorer](../../backup/monitor-azure-backup-with-backup-explorer.md) to help view operational information of backup items (including Azure resources not yet configured for backup) and monitoring information (jobs and alerts) for delegated subscriptions. The Backup Explorer is currently available only for Azure VM data.-- Use [Backup Reports](../../backup/configure-reports.md) across delegated subscriptions to track historical trends, analyze backup storage consumption, and audit backups and restores.
+- Use [Backup reports](../../backup/configure-reports.md) across delegated subscriptions to track historical trends, analyze backup storage consumption, and audit backups and restores.
[Azure Blueprints](../../governance/blueprints/index.yml):
Most tasks and services can be performed on delegated resources across managed t
- Manage hosted Kubernetes environments and deploy and manage containerized applications within customer tenants - Deploy and manage clusters in customer tenants-- Use Azure Monitor for containers to monitor performance across customer tenants
+- [Use Azure Monitor for containers](../../aks/monitor-aks.md) to monitor performance across customer tenants
[Azure Migrate](../../migrate/index.yml):
Most tasks and services can be performed on delegated resources across managed t
- Deploy and manage [Azure Virtual Network](../../virtual-network/index.yml) and virtual network interface cards (vNICs) within managed tenants - Deploy and configure [Azure Firewall](../../firewall/overview.md) to protect customers' Virtual Network resources-- Manage connectivity services such as [Azure Virtual WAN](../../virtual-wan/virtual-wan-about.md), [ExpressRoute](../../expressroute/expressroute-introduction.md), and [VPN Gateways](../../vpn-gateway/vpn-gateway-about-vpngateways.md)
+- Manage connectivity services such as [Azure Virtual WAN](../../virtual-wan/virtual-wan-about.md), [Azure ExpressRoute](../../expressroute/expressroute-introduction.md), and [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md)
- Use Azure Lighthouse to support key scenarios for the [Azure Networking MSP Program](../../networking/networking-partners-msp.md) [Azure Policy](../../governance/policy/index.yml):
With all scenarios, please be aware of the following current limitations:
- Onboard your customers to Azure Lighthouse, either by [using Azure Resource Manager templates](../how-to/onboard-customer.md) or by [publishing a private or public managed services offer to Azure Marketplace](../how-to/publish-managed-services-offers.md). - [View and manage customers](../how-to/view-manage-customers.md) by going to **My customers** in the Azure portal.
+- Learn more about [Azure Lighthouse architecture](architecture.md).
lighthouse Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/enterprise.md
Title: Azure Lighthouse in enterprise scenarios description: The capabilities of Azure Lighthouse can be used to simplify cross-tenant management within an enterprise which uses multiple Azure AD tenants. Previously updated : 02/18/2022 Last updated : 06/09/2022
lighthouse Isv Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/isv-scenarios.md
Title: Azure Lighthouse in ISV scenarios description: The capabilities of Azure Lighthouse can be used by ISVs for more flexibility with customer offerings. Previously updated : 09/08/2021 Last updated : 06/09/2022
lighthouse Managed Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/managed-applications.md
Title: Azure Lighthouse and Azure managed applications description: Understand how Azure Lighthouse and Azure managed applications can be used together. Previously updated : 09/08/2021 Last updated : 06/09/2022
lighthouse Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/managed-services-offers.md
Title: Managed Service offers in Azure Marketplace description: Offer your Azure Lighthouse management services to customers through Managed Services offers in Azure Marketplace. Previously updated : 02/02/2022 Last updated : 06/09/2022
lighthouse Recommended Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/recommended-security-practices.md
Title: Recommended security practices description: When using Azure Lighthouse, it's important to consider security and access control. Previously updated : 09/08/2021 Last updated : 06/09/2022
lighthouse Tenants Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/tenants-users-roles.md
Title: Tenants, users, and roles in Azure Lighthouse scenarios description: Understand how Azure Active Directory tenants, users, and roles can be used in Azure Lighthouse scenarios. Previously updated : 12/16/2021 Last updated : 06/09/2022
When defining an authorization, each user account must be assigned one of the [A
All [built-in roles](../../role-based-access-control/built-in-roles.md) are currently supported with Azure Lighthouse, with the following exceptions: - The [Owner](../../role-based-access-control/built-in-roles.md#owner) role is not supported.-- Any built-in roles with [DataActions](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported.
+- Any built-in roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported.
- The [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) built-in role is supported, but only for the limited purpose of [assigning roles to a managed identity in the customer tenant](../how-to/deploy-policy-remediation.md#create-a-user-who-can-assign-roles-to-a-managed-identity-in-the-customer-tenant). No other permissions typically granted by this role will apply. If you define a user with this role, you must also specify the built-in role(s) that this user can assign to managed identities. > [!NOTE]
load-testing How To Create Manage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test.md
If you've previously created a quick test, you can edit the test plan at any tim
### Split CSV input data across test engines
-By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. You don't have to make any modifications to the JMX test script.
+By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. If you have multiple CSV files, each file will be split evenly.
For example, if you have a large customer CSV input file, and the load test runs on 10 parallel test engines, then each instance will process 1/10th of the customers.
-If you have multiple CSV files, each file will be split evenly.
+Azure Load Testing doesn't preserve the header row in your CSV file when splitting a CSV file. For more information about how to configure your JMeter script and CSV file, see [Read data from a CSV file](./how-to-read-csv-data.md).
-To configure your load test:
-
-1. Go to the **Test plan** page for your load test.
-1. Select **Split CSV evenly between Test engines**.
-
- :::image type="content" source="media/how-to-create-manage-test/configure-test-split-csv.png" alt-text="Screenshot that shows the checkbox to enable splitting input C S V files when configuring a test in the Azure portal.":::
## Parameters
Configure the number of test engine instances, and Azure Load Testing automatica
## Test criteria
-You can specify test failure criteria based on a number of client metrics. When a load test surpasses the threshold for a metric, the load test has a **Failed** status. For more information, see [Configure test failure criteria](./how-to-define-test-criteria.md).
+You can specify test failure criteria based on client metrics. When a load test surpasses the threshold for a metric, the load test has a **Failed** status. For more information, see [Configure test failure criteria](./how-to-define-test-criteria.md).
You can use the following client metrics:
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
To edit your JMeter script by using the Apache JMeter GUI:
1. Select the **CSV Data Set Config** element in your test plan. 1. Update the **Filename** information and remove any file path reference.
+
+ 1. Optionally, enter the CSV field names in **Variable Names** when you split the CSV file across test engines.
+
+ Azure Load Testing doesn't preserve the header row when splitting your CSV file. Provide the variable names in the **CSV Data Set Config** element instead of using a header row.
- :::image type="content" source="media/how-to-read-csv-data/update-csv-data-set-config.png" alt-text="Screenshot that shows the test runs to compare.":::
+ :::image type="content" source="media/how-to-read-csv-data/update-csv-data-set-config.png" alt-text="Screenshot that shows the JMeter UI to configure a C S V Data Set Config element.":::
- 1. Repeat the previous steps for every CSV Data Set Config element.
+ 1. Repeat the previous steps for every **CSV Data Set Config** element in the script.
- 1. Save the JMeter script.
+ 1. Save the JMeter script and add it to your [test plan](./how-to-create-manage-test.md#test-plan).
To edit your JMeter script by using Visual Studio Code or your editor of preference: 1. Open the JMX file in Visual Studio Code.
- 1. For each `CSVDataSet`, update the `filename` element and remove any file path reference.
+ 1. For each `CSVDataSet`:
+
+ 1. Update the `filename` element and remove any file path reference.
+
+ 1. Add the CSV field names as a comma-separated list in `variableNames`.
```xml <CSVDataSet guiclass="TestBeanGUI" testclass="CSVDataSet" testname="Search parameters" enabled="true">
To edit your JMeter script by using Visual Studio Code or your editor of prefere
</CSVDataSet> ```
- 1. Save the JMeter script.
+ 1. Save the JMeter script and add it to your [test plan](./how-to-create-manage-test.md#test-plan).
## Add a CSV file to your load test When you reference an external file in your JMeter script, upload this file to your load test. When the load starts, Azure Load Testing copies all files to a single folder on each of the test engine instances.
+> [!IMPORTANT]
+> Azure Load Testing doesn't preserve the header row when splitting your CSV file. Before you add the CSV file to the load test, remove the header row from the file.
+ ::: zone pivot="experience-azp" To add a CSV file to your load test by using the Azure portal:
To add a CSV file to your load test:
## Split CSV input data across test engines
-By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. You don't have to make any modifications to the JMX test script.
+By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. If you have multiple CSV files, each file will be split evenly.
For example, if you have a large customer CSV input file, and the load test runs on 10 parallel test engines, then each instance will process 1/10th of the customers.
-If you have multiple CSV files, each file will be split evenly.
+> [!IMPORTANT]
+> Azure Load Testing doesn't preserve the header row when splitting your CSV file.
+> 1. [Configure your JMeter script](#configure-your-jmeter-script) to use variable names when reading the CSV file.
+> 1. Remove the header row from the CSV file before you add it to the load test.
To configure your load test to split input CSV files:
logic-apps Logic Apps Workflow Actions Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-workflow-actions-triggers.md
The Logic Apps engine checks access to the trigger you want to call, so make sur
| <*trigger-name*> | String | The name for the trigger in the nested logic app you want to call | | <*Azure-subscription-ID*> | String | The Azure subscription ID for the nested logic app | | <*Azure-resource-group*> | String | The Azure resource group name for the nested logic app |
-| <*nested-logic-app-name*> | String | The name for the logic app you want to call |
|||| *Optional*
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
---+++ Last updated 05/11/2022 #Customer intent: As an experienced Python developer, I need to securely access my data in my Azure storage solutions and use it to accomplish my machine learning tasks.
machine-learning How To Use Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-data.md
description: 'Learn how to work with data using the Python SDK v2 preview for Azure Machine Learning.' --++ Last updated 05/10/2022
returned_job.services["Studio"].endpoint
## Table
-An MLTable is primarily an abstraction over tabular data, but it can also be used for some advanced scenarios involving multiple paths. The following YAML describes an MLTable:
+An [MLTable](concept-data.md#mltable) is primarily an abstraction over tabular data, but it can also be used for some advanced scenarios involving multiple paths. The following YAML describes an MLTable:
```yaml paths:
tbl = mltable.load("./sample_data")
df = tbl.to_pandas_dataframe() ```
-For a full example of using an MLTable, see the [Working with MLTable notebook].
+For more information on the YAML file format, see [the MLTable file](how-to-create-register-data-assets.md#the-mltable-file).
+
+<!-- Commenting until notebook is published. For a full example of using an MLTable, see the [Working with MLTable notebook]. -->
## Consuming V1 dataset assets in V2
machine-learning How To Version Track Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-version-track-datasets.md
description: Learn how to version machine learning datasets and how versioning w
--++ Last updated 10/21/2021
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
To change permissions for a specific resource, follow these steps:
1. Select **Access Control (IAM)**. 1. Under **Grant access to this resource**, select **Add role assignment**.
- :::image type="content" source="media/managed-grafana-how-to-permissions-iam.png" alt-text="Screenshot of the Azure platform to add role assignment in App Insights.":::
+ :::image type="content" source="./media/permissions/permissions-iam.png" alt-text="Screenshot of the Azure platform to add role assignment in App Insights.":::
1. The portal lists various roles you can give to your Managed Grafana resource. Select a role, for instance **Monitoring Reader**. 1. Click **Next**.
- :::image type="content" source="media/managed-grafana-how-to-permissions-role.png" alt-text="Screenshot of the Azure platform and choose Monitor Reader.":::
+ :::image type="content" source="./media/permissions/permissions-role.png" alt-text="Screenshot of the Azure platform and choose Monitor Reader.":::
1. For **Assign access to**, select **Managed Identity**. 1. Click **Select members**.
- :::image type="content" source="media/managed-grafana-how-to-permissions-members.png" alt-text="Screenshot of the Azure platform selecting members.":::
+ :::image type="content" source="media/permissions/permissions-members.png" alt-text="Screenshot of the Azure platform selecting members.":::
1. Select the **Subscription** containing your Managed Grafana workspace 1. Select a **Managed identity** from the options in the dropdown list 1. Select your Managed Grafana workspace from the list. 1. Click **Select** to confirm
- :::image type="content" source="media/managed-grafana-how-to-permissions-identity.png" alt-text="Screenshot of the Azure platform selecting the workspace.":::
+ :::image type="content" source="media/permissions/permissions-managed-identities.png" alt-text="Screenshot of the Azure platform selecting the workspace.":::
1. Click **Next**, then **Review + assign** to confirm the application of the new permission
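If you prefer scripting over the portal, the same role assignment can be made with the Azure CLI. A minimal sketch, assuming you substitute the principal ID of the Managed Grafana workspace's managed identity and the ID of the resource it should read from:

```azurecli-interactive
# Grant the Managed Grafana workspace's managed identity read access to monitoring data.
az role assignment create \
  --assignee <grafana-managed-identity-principal-id> \
  --role "Monitoring Reader" \
  --scope <target-resource-id>
```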
+For more information about how to use Managed Grafana with Azure Monitor, go to [Monitor your Azure services in Grafana](../azure-monitor/visualize/grafana-plugin.md).
+ ## Next steps > [!div class="nextstepaction"]
managed-instance-apache-cassandra Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/faq.md
Azure Managed Instance for Apache Cassandra is delivered by the Azure Cosmos DB
No, there's no architectural dependency between Azure Managed Instance for Apache Cassandra and the Azure Cosmos DB backend.
+### What versions of Apache Cassandra does the service support?
+
+The service currently supports Cassandra versions 3.11 and 4.0. By default, version 3.11 is deployed, as version 4.0 is currently in public preview. See our [Azure CLI Quickstart](create-cluster-cli.md) (step 5) for specifying Cassandra version during cluster deployment.
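As a rough illustration of what the quickstart walks through, the version can be pinned when the cluster is created. The sketch below uses placeholder names, and the `--cassandra-version` parameter is assumed from the CLI quickstart:

```azurecli-interactive
# Create a managed Cassandra cluster and explicitly request version 4.0 (preview).
az managed-cassandra cluster create \
  --cluster-name <cluster-name> \
  --resource-group <resource-group> \
  --location eastus2 \
  --delegated-management-subnet-id <subnet-resource-id> \
  --initial-cassandra-admin-password <password> \
  --cassandra-version 4.0
```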
+ ### Does Azure Managed Instance for Apache Cassandra have an SLA? Yes, the SLA is published [here](https://azure.microsoft.com/support/legal/sla/managed-instance-apache-cassandra/v1_0/).
-#### Can I deploy Azure Managed Instance for Apache Cassandra in any region?
+### Can I deploy Azure Managed Instance for Apache Cassandra in any region?
Currently the managed instance is available in a limited number of regions.
managed-instance-apache-cassandra Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/introduction.md
You can use this service to easily place managed instances of Apache Cassandra d
- **Simplified deployment:** After the hybrid connectivity is established, deployment of new data centers in Azure is easy through [simple commands](manage-resources-cli.md#create-datacenter). - **Metrics:** each datacenter node provisioned by the service emits metrics using [Metric Collector for Apache Cassandra](https://github.com/datastax/metric-collector-for-apache-cassandra). The metrics can be [visualized in Prometheus or Grafana](visualize-prometheus-grafana.md). The service is also integrated with [Azure Monitor for metrics and diagnostic logging](monitor-clusters.md).
+>[!NOTE]
+> The service currently supports Cassandra versions 3.11 and 4.0. By default, version 3.11 is deployed, as version 4.0 is currently in public preview. See our [Azure CLI Quickstart](create-cluster-cli.md) (step 5) for specifying Cassandra version during cluster deployment.
+ ### Simplified scaling In the managed instance, scaling up and scaling down nodes in a datacenter is fully managed. You select the number of nodes you need, and with a [simple command](manage-resources-cli.md#update-datacenter), the scaling orchestrator takes care of establishing their operation within the Cassandra ring.
managed-instance-apache-cassandra Management Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/management-operations.md
Azure Managed Instance for Apache Cassandra provides automated deployment and sc
## Compaction
-* The system currently does not perform a major compaction.
+* The system currently doesn't perform a major compaction.
* Repair (see [Maintenance](#maintenance)) performs a Merkle tree compaction, which is a special kind of compaction. * Depending on the compaction strategy on the keyspace, Cassandra automatically compacts when the keyspace reaches a specific size. We recommend that you carefully select a compaction strategy for your workload, and don't do any manual compactions outside the strategy.
Azure Managed Instance for Apache Cassandra provides automated deployment and sc
* Apache Cassandra software-level patches are done when security vulnerabilities are identified. The patching cadence may vary.
-* During patching, machines are rebooted one rack at a time. You should not experience any degradation at the application side as long as **quorum ALL setting is not being used**, and the replication factor is **3 or higher**.
+* During patching, machines are rebooted one rack at a time. You shouldn't experience any degradation at the application side as long as **quorum ALL setting is not being used**, and the replication factor is **3 or higher**.
-* The version in Apache Cassandra is in the format `X.Y.Z`. You can control the deployment of major (X) and minor (Y) versions manually via service tools. Whereas the Cassandra patches (Z) that may be required for that major/minor version combination are done automatically.
+* The version in Apache Cassandra is in the format `X.Y.Z`. You can control the deployment of major (X) and minor (Y) versions manually via service tools, whereas the Cassandra patches (Z) that may be required for that major/minor version combination are applied automatically.
+
+>[!NOTE]
+> The service currently supports Cassandra versions 3.11 and 4.0. By default, version 3.11 is deployed, as version 4.0 is currently in public preview. See our [Azure CLI Quickstart](create-cluster-cli.md) (step 5) for specifying Cassandra version during cluster deployment.
## Maintenance
Azure Managed Instance for Apache Cassandra provides an [SLA](https://azure.micr
## Backup and restore
-Snapshot backups are enabled by default and taken every 4 hours with [Medusa](https://github.com/thelastpickle/cassandra-medusa). Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There is no cost for backups. To restore from a backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+Snapshot backups are enabled by default and taken every 4 hours with [Medusa](https://github.com/thelastpickle/cassandra-medusa). Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There's no cost for backups. To restore from a backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
> [!WARNING] > Backups can be restored to the same VNet/subnet as your existing cluster, but they cannot be restored to the *same cluster*. Backups can only be restored to **new clusters**. Backups are intended for accidental deletion scenarios, and are not geo-redundant. They are therefore not recommended for use as a disaster recovery (DR) strategy in case of a total regional outage. To safeguard against region-wide outages, we recommend a multi-region deployment. Take a look at our [quickstart for multi-region deployments](create-multi-region-cluster.md).
For more information on security features, see our article [here](security.md).
## Hybrid support
-When a [hybrid](configure-hybrid-cluster.md) cluster is configured, automated reaper operations running in the service will benefit the whole cluster. This includes data centers that are not provisioned by the service. Outside this, it is your responsibility to maintain your on-premise or externally hosted data center.
+When a [hybrid](configure-hybrid-cluster.md) cluster is configured, automated reaper operations running in the service will benefit the whole cluster. This includes data centers that aren't provisioned by the service. Beyond this, it's your responsibility to maintain your on-premises or externally hosted data center.
## Next steps
mysql Tutorial Deploy Springboot On Aks Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-springboot-on-aks-vnet.md
az group delete --name rg-mysqlaksdemo
``` > [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#additional-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#other-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
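A minimal sketch of that cleanup with the Azure CLI, using placeholder resource names: look up the cluster's service principal ID, then delete it.

```azurecli-interactive
# Find the service principal (client) ID used by the AKS cluster.
# If the value is "msi", the cluster uses a managed identity and nothing needs to be deleted.
SP_ID=$(az aks show --resource-group <resource-group> --name <aks-cluster-name> \
  --query servicePrincipalProfile.clientId --output tsv)

# Remove the service principal.
az ad sp delete --id $SP_ID
```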
## Next steps
mysql Tutorial Deploy Wordpress On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-wordpress-on-aks.md
Download the [latest WordPress](https://wordpress.org/download/) version. Create
```
-Rename ```wp-config-sample.php``` to ```wp-config.php``` and replace lines from beginingin of ```// ** MySQL settings - You can get this info from your web host ** //``` until the line ```define( 'DB_COLLATE', '' );``` with the code snippet below. The code below is reading the database host , username and password from the Kubernetes manifest file.
+Rename ```wp-config-sample.php``` to ```wp-config.php``` and replace the lines from the beginning of ```// ** MySQL settings - You can get this info from your web host ** //``` until the line ```define( 'DB_COLLATE', '' );``` with the code snippet below. The code below reads the database host, username, and password from the Kubernetes manifest file.
```php //Using environment variables for DB connection information
az group delete --name wordpress-project --yes --no-wait
``` > [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#additional-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#other-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
## Next steps
openshift Howto Create Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-service-principal.md
If you're using the Azure CLI, you'll need Azure CLI version 2.0.59 or later
## Create a resource group - Azure CLI
-Run the following Azure CLI command to create a resource group.
+Run the following Azure CLI command to create a resource group in which your Azure Red Hat OpenShift cluster will reside.
```azurecli-interactive AZ_RG=$(az group create -n test-aro-rg -l eastus2 --query name -o tsv)
The output is similar to the following example.
} ```
-> [!NOTE]
-> This service principal only allows a contributor over the resource group the Azure Red Hat OpenShift cluster is located in. If your VNet is in another resource group, you need to assign the service principal contributor role to that resource group as well.
+> [!IMPORTANT]
+> This service principal only grants Contributor access to the resource group in which the Azure Red Hat OpenShift cluster is located. If your VNet is in another resource group, you need to assign the service principal the Contributor role on that resource group as well. You also need to create your Azure Red Hat OpenShift cluster in the resource group you created above.
To grant permissions to an existing service principal with the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md#configure-access-policies-on-resources).
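For the case called out above, where the VNet lives in a different resource group than the cluster, a minimal Azure CLI sketch of the extra role assignment (the appId and resource group name are placeholders):

```azurecli-interactive
# Give the cluster's service principal Contributor rights on the VNet's resource group.
az role assignment create \
  --assignee <appId> \
  --role Contributor \
  --scope $(az group show --name <vnet-resource-group> --query id --output tsv)
```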
-## Use the service principal to deploy an Azure Red Hat OpenShift cluster - Azure CLI
-
-Using the service principal that you created when you created the Azure Red Hat OpenShift cluster, use the `az aro create` command to deploy the Azure Red Hat OpenShift cluster. Use the `--client-id` and `--client-secret` parameters to specify the appId and password from the output of the `az ad sp create-for-rbac` command, as shown in the following command.
-
-```azure-cli
-az aro create \
-
- --resource-group myResourceGroup \
-
- --name myAROCluster \
-
- --client-id <appID> \
-
- --client-secret <password>
-```
-
-> [!IMPORTANT]
-> If you're using an existing service principal with a customized secret, ensure the secret doesn't exceed 190 bytes.
- ::: zone-end ::: zone pivot="aro-azureportal" ## Create a service principal with the Azure portal
-The following sections explain how to use the Azure portal to create a service principal for your Azure Red Hat OpenShift cluster.
-
-## Prerequisite - Azure portal
-
-Create a service principal, as explained in [Use the portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). **Be sure to save the client ID and the appID.**
-
-## To use the service principal to deploy an Azure Red Hat OpenShift cluster - Azure portal
+This section explains how to use the Azure portal to create a service principal for your Azure Red Hat OpenShift cluster.
-To use the service principal you created to deploy a cluster, complete the following steps.
+To create a service principal, see [Use the portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). **Be sure to save the application (client) ID and the client secret.**
-1. On the Create Azure Red Hat OpenShift **Basics** tab, create a resource group for your subscription, as shown in the following example.
- :::image type="content" source="./media/basics-openshift-sp.png" alt-text="Screenshot that shows how to use the Azure Red Hat service principal with Azure portal to create a cluster." lightbox="./media/basics-openshift-sp.png":::
-
-2. Select **Next: Authentication** to configure the service principal on the **Authentication** page of the **Azure Red Hat OpenShift** dialog.
-
- :::image type="content" source="./media/openshift-service-principal-portal.png" alt-text="Screenshot that shows how to use the Authentication tab with Azure portal to create a service principal." lightbox="./media/openshift-service-principal-portal.png":::
-
-In the **Service principal information** section:
--- **Service principal client ID** is your appId. -- **Service principal client secret** is the service principal's decrypted Secret value.-
-In the **Cluster pull secret** section:
--- **Pull secret** is your cluster's pull secret's decrypted value. If you don't have a pull secret, leave this field blank.-
-After completing this tab, select **Next: Networking** to continue deploying your cluster. Select **Review + Create** when you complete the remaining tabs.
-
-> [!NOTE]
-> This service principal only allows a contributor over the resource group the Azure Red Hat OpenShift cluster is located in. If your VNet is in another resource group, you need to assign the service principal contributor role to that resource group as well.
-
-## Grant permissions to the service principal - Azure portal
-
-To grant permissions to an existing service principal with the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md#configure-access-policies-on-resources).
::: zone-end
openshift Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-portal.md
Create a service principal, as explained in [Use the portal to create an Azure A
- **Service principal client ID** is your appId. - **Service principal client secret** is the service principal's decrypted Secret value.
+ If you need to create a service principal, see [Creating and using a service principal with an Azure Red Hat OpenShift cluster](howto-create-service-principal.md).
+
In the **Cluster pull secret** section: - **Pull secret** is your cluster's pull secret's decrypted value. If you don't have a pull secret, leave this field blank.
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/create.md
Title: Create Datadog - Azure partner solutions description: This article describes how to use the Azure portal to create an instance of Datadog. Previously updated : 05/28/2021 Last updated : 06/08/2022
Use the Azure portal to find Datadog.
1. If you've visited the **Marketplace** in a recent session, select the icon from the available options. Otherwise, search for _Marketplace_.
- :::image type="content" source="media/create/marketplace.png" alt-text="Marketplace icon.":::
+ :::image type="content" source="media/create/marketplace.png" alt-text="Screenshot of the Azure Marketplace icon.":::
1. In the Marketplace, search for **Datadog**.
-1. In the plan overview screen, select **Set up + subscribe**.
+1. In the plan overview screen, select **Subscribe**.
- :::image type="content" source="media/create/datadog-app-2.png" alt-text="Datadog application in Azure Marketplace.":::
+ :::image type="content" source="media/create/datadog-app-2.png" alt-text="Screenshot of the Datadog application in Azure Marketplace.":::
## Create a Datadog resource in Azure The portal displays a selection asking whether you would like to create a Datadog organization or link Azure subscription to an existing Datadog organization.
-If you are creating a new Datadog organization, select **Create** under the **Create a new Datadog organization**
+If you're creating a new Datadog organization, select **Create** under the **Create a new Datadog organization** option.
The portal displays a form for creating the Datadog resource. Provide the following values.
Use Azure resource tags to configure which metrics and logs are sent to Datadog.
Tag rules for sending **metrics** are: - By default, metrics are collected for all resources, except virtual machines, virtual machine scale sets, and app service plans.-- Virtual machines, virtual machine scale sets, and app service plans with *Include* tags send metrics to Datadog.-- Virtual machines, virtual machine scale sets, and app service plans with *Exclude* tags don't send metrics to Datadog.-- If there's a conflict between inclusion and exclusion rules, exclusion takes priority
+- Virtual machines, virtual machine scale sets, and app service plans with _Include_ tags send metrics to Datadog.
+- Virtual machines, virtual machine scale sets, and app service plans with _Exclude_ tags don't send metrics to Datadog.
+- If there's a conflict between inclusion and exclusion rules, exclusion takes priority.
Tag rules for sending **logs** are: - By default, logs are collected for all resources.-- Azure resources with *Include* tags send logs to Datadog.-- Azure resources with *Exclude* tags don't send logs to Datadog.
+- Azure resources with _Include_ tags send logs to Datadog.
+- Azure resources with _Exclude_ tags don't send logs to Datadog.
- If there's a conflict between inclusion and exclusion rules, exclusion takes priority.
-For example, the screenshot below shows a tag rule where only those virtual machines, virtual machine scale sets, and app service plans tagged as *Datadog = True* send metrics to Datadog.
+For example, the following screenshot shows a tag rule where only those virtual machines, virtual machine scale sets, and app service plans tagged as _Datadog = True_ send metrics to Datadog.
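To reproduce that example, the tag can be applied from the Azure CLI as well; a minimal sketch with a placeholder resource ID:

```azurecli-interactive
# Merge a Datadog=True tag onto a virtual machine so it matches the Include rule above.
az tag update \
  --resource-id <vm-resource-id> \
  --operation Merge \
  --tags Datadog=True
```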
-There are two types of logs that can be emitted from Azure to Datadog.
+There are three types of logs that can be sent from Azure to Datadog.
1. **Subscription level logs** - Provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription. 1. **Azure resource logs** - Provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+1. **Azure Active Directory logs** - Provide insight into activity in your Azure Active Directory tenant. As an IT administrator, you can use this information to assess potential issues and decide how to respond.
+
+The Azure Active Directory portal gives you access to three activity logs:
+
+- [Sign-in](../../active-directory/reports-monitoring/concept-sign-ins.md) – Information about sign-ins and how your resources are used by your users.
+- [Audit](../../active-directory/reports-monitoring/concept-audit-logs.md) – Information about changes applied to your tenant such as users and group management or updates applied to your tenant's resources.
+- [Provisioning](../../active-directory/reports-monitoring/concept-provisioning-logs.md) – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
+ To send subscription level logs to Datadog, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Datadog.
-To send Azure resource logs to Datadog, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). To filter the set of Azure resources sending logs to Datadog, use Azure resource tags.
+To send Azure resource logs to Datadog, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). To filter the set of Azure resources sending logs to Datadog, use Azure resource tags.
+
+You can request your IT Administrator to route Azure Active Directory Logs to Datadog. For more information, see [Azure AD activity logs in Azure Monitor](../../active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md).
The logs sent to Datadog will be charged by Azure. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
The Azure portal retrieves the appropriate Datadog application from Azure Active
Select the Datadog app name. Select **Next: Tags**.
Select **Next: Tags**.
You can specify custom tags for the new Datadog resource. Provide name and value pairs for the tags to apply to the Datadog resource. When you've finished adding tags, select **Next: Review+Create**.
When you've finished adding tags, select **Next: Review+Create**.
Review your selections and the terms of use. After validation completes, select **Create**. Azure deploys the Datadog resource. When the process completes, select **Go to Resource** to see the Datadog resource. ## Next steps
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
Title: Create Dynatrace application - Azure partner solutions
+ Title: Create Dynatrace for Azure (preview) resource - Azure partner solutions
description: This article describes how to use the Azure portal to create an instance of Dynatrace.
Last updated 06/07/2022
# QuickStart: Get started with Dynatrace
-In this quickstart, you create a new instance of Dynatrace. You can either create a new Dynatrace environment or [link to an existing Dynatrace environment](dynatrace-link-to-existing.md#link-to-existing-dynatrace-environment).
+In this quickstart, you create a new instance of Dynatrace for Azure (preview). You can either create a new Dynatrace environment or [link to an existing Dynatrace environment](dynatrace-link-to-existing.md#link-to-existing-dynatrace-environment).
When you use the integrated Dynatrace experience in Azure portal, the following entities are created and mapped for monitoring and billing purposes. - **Dynatrace resource in Azure** - Using the Dynatrace resource, you can manage the Dynatrace environment in Azure. The resource is created in the Azure subscription and resource group that you select during the create or linking process. - **Dynatrace environment** - This is the Dynatrace environment on Dynatrace SaaS. When you choose to create a new environment, the environment on Dynatrace SaaS is automatically created, in addition to the Dynatrace resource in Azure. The resource is created in the Azure subscription and resource group that you selected when you created the environment or linked to an existing environment.
partner-solutions Dynatrace How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-configure-prereqs.md
Title: Configure pre-deployment to use Dynatrace with Azure.
+ Title: Configure pre-deployment to use Dynatrace with Azure (preview) - Azure partner solutions
description: This article describes how to complete the prerequisites for Dynatrace on the Azure portal.
This article describes the prerequisites that must be completed before you creat
## Access control
-To set up the Azure Dynatrace integration, you must have **Owner** or **Contributor** access on the Azure subscription. [Confirm that you have the appropriate access](/azure/role-based-access-control/check-access) before starting the setup.
+To set up Dynatrace for Azure (preview), you must have **Owner** or **Contributor** access on the Azure subscription. [Confirm that you have the appropriate access](/azure/role-based-access-control/check-access) before starting the setup.
## Add enterprise application
partner-solutions Dynatrace How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md
Title: Manage your Dynatrace for Azure integration
+ Title: Manage your Dynatrace for Azure (preview) integration - Azure partner solutions
description: This article describes how to manage Dynatrace on the Azure portal.
Last updated 06/07/2022
# Manage the Dynatrace integration with Azure
-This article describes how to manage the settings for your Azure integration with Dynatrace.
+This article describes how to manage the settings for your Dynatrace for Azure (preview) integration.
## Resource overview
partner-solutions Dynatrace Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md
Title: Linking to an existing Dynatrace for Azure resource
+ Title: Linking to an existing Dynatrace for Azure (preview) resource - Azure partner solutions
description: This article describes how to use the Azure portal to link to an instance of Dynatrace.
Last updated 06/07/2022
In this quickstart, you link an Azure subscription to an existing Dynatrace environment. After you link to the Dynatrace environment, you can monitor the linked Azure subscription and the resources in that subscription using the Dynatrace environment.
-When you use the integrated experience for Dynatrace in the Azure portal, your billing and monitoring for the following entities is tracked in the portal.
+When you use the integrated experience for Dynatrace for Azure (preview) in the Azure portal, billing and monitoring for the following entities are tracked in the portal.
:::image type="content" source="media/dynatrace-link-to-existing/dynatrace-entities-linking.png" alt-text="Flowchart showing three entities: subscription 1 connected to subscription 1 and Dynatrace S A A S.":::
partner-solutions Dynatrace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md
Title: Dynatrace integration overview - Azure partner solutions
+ Title: Dynatrace for Azure (preview) overview - Azure partner solutions
description: Learn about using the Dynatrace Cloud-Native Observability Platform in the Azure Marketplace.
Last updated 06/07/2022
# What is Dynatrace integration with Azure?
-Dynatrace is a popular monitoring solution that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities in Azure.
+Dynatrace is a monitoring solution that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities in Azure.
-Dynatrace for Azure offering in the Azure Marketplace enables you to create and manage Dynatrace environments using the Azure portal with a seamlessly integrated experience. This enables you to use Dynatrace as a monitoring solution for your Azure workloads through a streamlined workflow, starting from procurement, all the way to configuration and management.
+The Dynatrace for Azure (preview) offering in the Azure Marketplace enables you to create and manage Dynatrace environments using the Azure portal with a seamlessly integrated experience. This enables you to use Dynatrace as a monitoring solution for your Azure workloads through a streamlined workflow, from procurement all the way to configuration and management.
You can create and manage the Dynatrace resources using the Azure portal through a resource provider named `Dynatrace.Observability`. Dynatrace owns and runs the software as a service (SaaS) application including the Dynatrace environments created through this experience.
Dynatrace for Azure provides the following capabilities:
- **Manage Dynatrace OneAgent on VMs and App Services** - Provides a single experience to install and uninstall Dynatrace OneAgent on virtual machines and App Services.
-## Dynatrace Links
+## Dynatrace links
For more help using Dynatrace for Azure service, see the [Dynatrace](https://aka.ms/partners/Dynatrace/PartnerDocs) documentation.
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
Title: Troubleshooting Dynatrace - Azure partner solutions
-description: This article provides information about troubleshooting Dynatrace integration with Azure
+ Title: Troubleshooting Dynatrace for Azure (preview) - Azure partner solutions
+description: This article provides information about troubleshooting Dynatrace for Azure
Last updated 06/07/2022
# Troubleshoot Dynatrace for Azure
-This article describes how to contact support when working with a Dynatrace resource. Before contacting support, see [Fix common errors](#fix-common-errors).
+This article describes how to contact support when working with a Dynatrace for Azure (preview) resource. Before contacting support, see [Fix common errors](#fix-common-errors).
## Contact support
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
Partner solutions are available through the Marketplace.
| [Datadog](./datadog/overview.md) | Monitor your servers, clouds, metrics, and apps in one place. | | [Elastic](./elastic/overview.md) | Monitor the health and performance of your Azure environment. | | [Logz.io](./logzio/overview.md) | Monitor the health and performance of your Azure environment. |
-| [Dynatrace for Azure](./dynatrace/dynatrace-overview.md) | Use Dyntrace for Azure to create and manage Dynatrace environments using the Azure portal. |
+| [Dynatrace for Azure (preview)](./dynatrace/dynatrace-overview.md) | Use Dynatrace for Azure (preview) to monitor your workloads using the Azure portal. |
| [NGINX for Azure (preview)](./nginx/nginx-overview.md) | Use NGINX for Azure (preview) as a reverse proxy within your Azure environment. |
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-aks-database.md
az group delete --name django-project --yes --no-wait
``` > [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#additional-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#other-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
## Next steps
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-audit.md
Last updated 08/03/2021
# Audit logging in Azure Database for PostgreSQL - Hyperscale (Citus) + > [!IMPORTANT] > The pgAudit extension in Hyperscale (Citus) is currently in preview. This > preview version is provided without a service level agreement, and it's not
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-backup.md
Last updated 04/14/2021
# Backup and restore in Azure Database for PostgreSQL - Hyperscale (Citus) + Azure Database for PostgreSQL – Hyperscale (Citus) automatically creates backups of each node and stores them in locally redundant storage. Backups can be used to restore your Hyperscale (Citus) server group to a specified time.
postgresql Concepts Colocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-colocation.md
Last updated 05/06/2019
# Table colocation in Azure Database for PostgreSQL – Hyperscale (Citus) + Colocation means storing related information together on the same nodes. Queries can go fast when all the necessary data is available without any network traffic. Colocating related data on different nodes allows queries to run efficiently in parallel on each node. ## Data colocation for hash-distributed tables
postgresql Concepts Columnar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-columnar.md
Last updated 08/03/2021
# Columnar table storage + Azure Database for PostgreSQL - Hyperscale (Citus) supports append-only columnar table storage for analytic and data warehousing workloads. When columns (rather than rows) are stored contiguously on disk, data becomes more
postgresql Concepts Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-connection-pool.md
Last updated 05/31/2022
# Azure Database for PostgreSQL – Hyperscale (Citus) connection pooling + Establishing new connections takes time. That works against most applications, which request many short-lived connections. We recommend using a connection pooler, both to reduce idle transactions and reuse existing connections. To
postgresql Concepts Distributed Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-distributed-data.md
Last updated 05/06/2019
# Distributed data in Azure Database for PostgreSQL – Hyperscale (Citus) + This article outlines the three table types in Azure Database for PostgreSQL – Hyperscale (Citus). It shows how distributed tables are stored as shards, and the way that shards are placed on nodes.
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-firewall-rules.md
Last updated 10/15/2021
# Public access in Azure Database for PostgreSQL - Hyperscale (Citus) + [!INCLUDE [azure-postgresql-hyperscale-access](../../../includes/azure-postgresql-hyperscale-access.md)] This page describes the public access option. For private access, see
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-high-availability.md
Last updated 01/12/2022
# High availability in Azure Database for PostgreSQL – Hyperscale (Citus) + High availability (HA) avoids database downtime by maintaining standby replicas of every node in a server group. If a node goes down, Hyperscale (Citus) switches incoming connections from the failed node to its standby. Failover happens
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-maintenance.md
Last updated 02/14/2022
# Scheduled maintenance in Azure Database for PostgreSQL – Hyperscale (Citus) + Azure Database for PostgreSQL - Hyperscale (Citus) does periodic maintenance to keep your managed database secure, stable, and up-to-date. During maintenance, all nodes in the server group get new features, updates, and patches.
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-monitoring.md
Last updated 12/06/2021
# Monitor and tune Azure Database for PostgreSQL - Hyperscale (Citus) + Monitoring data about your servers helps you troubleshoot and optimize for your workload. Hyperscale (Citus) provides various monitoring options to provide insight into the behavior of nodes in a server group.
postgresql Concepts Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-nodes.md
Last updated 07/28/2019
# Nodes and tables in Azure Database for PostgreSQL – Hyperscale (Citus) + ## Nodes The Hyperscale (Citus) hosting type allows Azure Database for PostgreSQL
postgresql Concepts Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-private-access.md
Last updated 10/15/2021
# Private access in Azure Database for PostgreSQL - Hyperscale (Citus) + This page describes the private access option. For public access, see [here](concepts-firewall-rules.md).
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-read-replicas.md
Last updated 02/03/2022
# Read replicas in Azure Database for PostgreSQL - Hyperscale (Citus) + The read replica feature allows you to replicate data from a Hyperscale (Citus) server group to a read-only server group. Replicas are updated **asynchronously** with PostgreSQL physical replication technology. You can
postgresql Concepts Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-security-overview.md
Last updated 01/14/2022
# Security in Azure Database for PostgreSQL – Hyperscale (Citus) + This page outlines the multiple layers of security available to protect the data in your Hyperscale (Citus) server group.
postgresql Concepts Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-server-group.md
Last updated 01/13/2022
# Hyperscale (Citus) server group + ## Nodes The Azure Database for PostgreSQL - Hyperscale (Citus) deployment option allows
postgresql Howto Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-alert-on-metric.md
Last updated 3/16/2020
# Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Hyperscale (Citus) + This article shows you how to set up Azure Database for PostgreSQL alerts using the Azure portal. You can receive an alert based on [monitoring metrics](concepts-monitoring.md) for your Azure services. We'll set up an alert to trigger when the value of a specified metric crosses a threshold. The alert triggers when the condition is first met, and continues to trigger afterwards.
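For reference, an equivalent alert can also be created from the Azure CLI. The sketch below is only illustrative: the metric name and threshold are assumptions, and the resource IDs are placeholders.

```azurecli-interactive
# Fire an alert when average CPU on the server group stays above 80 percent.
az monitor metrics alert create \
  --name cpu-above-80 \
  --resource-group <resource-group> \
  --scopes <server-group-resource-id> \
  --condition "avg cpu_percent > 80" \
  --description "Node CPU above 80 percent" \
  --action <action-group-resource-id>
```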
postgresql Howto App Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-type.md
Last updated 07/17/2020
# Determining Application Type + Running efficient queries on a Hyperscale (Citus) server group requires that tables be properly distributed across servers. The recommended distribution varies by the type of application and its query patterns.
postgresql Howto Build Scalable Apps Classify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-classify.md
Last updated 04/28/2022
# Classify application workload + Here are common characteristics of the workloads that are the best fit for Hyperscale (Citus).
postgresql Howto Build Scalable Apps Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-concepts.md
Last updated 04/28/2022
# Fundamental concepts for scaling + Before we investigate the steps of building a new app, it's helpful to see a quick overview of the terms and concepts involved.
postgresql Howto Build Scalable Apps Model High Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-model-high-throughput.md
Last updated 04/28/2022
# Model high-throughput transactional apps + ## Common filter as shard key To pick the shard key for a high-throughput transactional application, follow
postgresql Howto Build Scalable Apps Model Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-model-multi-tenant.md
Last updated 04/28/2022
# Model multi-tenant SaaS apps + ## Tenant ID as the shard key The tenant ID is the column at the root of the workload, or the top of the
postgresql Howto Build Scalable Apps Model Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-model-real-time.md
Last updated 04/28/2022
# Model real-time analytics apps + ## Colocate large tables with shard key To pick the shard key for a real-time operational analytics application, follow
postgresql Howto Build Scalable Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-overview.md
Last updated 04/28/2022
# Build scalable apps + > [!NOTE] > This article is for you if: >
postgresql Howto Choose Distribution Column https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-choose-distribution-column.md
Last updated 02/28/2022
# Choose distribution columns in Azure Database for PostgreSQL – Hyperscale (Citus) + Choosing each table's distribution column is one of the most important modeling decisions you'll make. Azure Database for PostgreSQL – Hyperscale (Citus) stores rows in shards based on the value of the rows' distribution column.
postgresql Howto Compute Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-compute-quota.md
Last updated 12/10/2021
# Change compute quotas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal + Azure enforces a vCore quota per subscription per region. There are two independently adjustable limits: vCores for coordinator nodes, and vCores for worker nodes.
postgresql Howto Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-connect.md
Last updated 05/25/2022
# Connect to a server group + Choose your database client below to learn how to configure it to connect to Hyperscale (Citus). # [pgAdmin](#tab/pgadmin) + [pgAdmin](https://www.pgadmin.org/) is a popular and feature-rich open source administration and development platform for PostgreSQL.
administration and development platform for PostgreSQL.
# [psql](#tab/psql) + The [psql utility](https://www.postgresql.org/docs/current/app-psql.html) is a terminal-based front-end to PostgreSQL. It enables you to type in queries interactively, issue them to PostgreSQL, and see the query results.
postgresql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-create-users.md
Last updated 1/8/2019
# Create users in Azure Database for PostgreSQL - Hyperscale (Citus) + ## The server admin account The PostgreSQL engine uses
postgresql Howto High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-high-availability.md
Last updated 07/27/2020
# Configure Hyperscale (Citus) high availability + Azure Database for PostgreSQL - Hyperscale (Citus) provides high availability (HA) to avoid database downtime. With HA enabled, every node in a server group will get a standby. If the original node becomes unhealthy, its standby will be
postgresql Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-logging.md
Last updated 9/13/2021
# Logs in Azure Database for PostgreSQL - Hyperscale (Citus) + PostgreSQL database server logs are available for every node of a Hyperscale (Citus) server group. You can ship logs to a storage server, or to an analytics service. The logs can be used to identify, troubleshoot, and repair
postgresql Howto Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-maintenance.md
Last updated 04/07/2021
# Manage scheduled maintenance settings for Azure Database for PostgreSQL – Hyperscale (Citus) + You can specify maintenance options for each Hyperscale (Citus) server group in your Azure subscription. Options include the maintenance schedule and notification settings for upcoming and finished maintenance events.
postgresql Howto Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-manage-firewall-using-portal.md
Last updated 11/16/2021
# Manage public access for Azure Database for PostgreSQL - Hyperscale (Citus) + Server-level firewall rules can be used to manage [public access](concepts-firewall-rules.md) to a Hyperscale (Citus) coordinator node from a specified IP address (or range of IP addresses) in the
postgresql Howto Modify Distributed Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-modify-distributed-tables.md
Last updated 8/10/2020
# Distribute and modify tables + ## Distributing tables To create a distributed table, you need to first define the table schema. To do
postgresql Howto Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-monitoring.md
Last updated 10/05/2021
# How to view metrics in Azure Database for PostgreSQL - Hyperscale (Citus) + Resource metrics are available for every node of a Hyperscale (Citus) server group, and in aggregate across the nodes.
postgresql Howto Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-private-access.md
Last updated 01/14/2022
# Private access in Azure Database for PostgreSQL Hyperscale (Citus) + [Private access](concepts-private-access.md) allows resources in an Azure virtual network to connect securely and privately to nodes in a Hyperscale (Citus) server group. This how-to assumes you've already created a virtual
postgresql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-read-replicas-portal.md
Last updated 08/03/2021
# Create and manage read replicas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal + In this article, you learn how to create and manage read replicas in Hyperscale (Citus) from the Azure portal. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
postgresql Howto Restart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-restart.md
Last updated 05/06/2022
# Restart Azure Database for PostgreSQL - Hyperscale (Citus) + You can restart your Hyperscale (Citus) server group from the Azure portal. Restarting the server group applies to all nodes; you can't selectively restart individual nodes. The restart applies to all PostgreSQL server processes in the nodes. Any applications attempting to use the database will experience connectivity downtime while the restart happens.
postgresql Howto Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-restore-portal.md
Last updated 07/09/2021
# Point-in-time restore of a Hyperscale (Citus) server group + This article provides step-by-step procedures to perform [point-in-time recoveries](concepts-backup.md#restore) for a Hyperscale (Citus) server group using backups. You can restore either to the earliest backup or to
postgresql Howto Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-grow.md
Last updated 12/10/2021
# Scale a Hyperscale (Citus) server group + Azure Database for PostgreSQL - Hyperscale (Citus) provides self-service scaling to deal with increased load. The Azure portal makes it easy to add new worker nodes, and to increase the vCores of existing nodes. Adding nodes causes
postgresql Howto Scale Initial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-initial.md
Last updated 08/03/2021
# Pick initial size for Hyperscale (Citus) server group + The size of a server group, both number of nodes and their hardware capacity, is [easy to change](howto-scale-grow.md). However, you still need to choose an initial size for a new server group. Here are some tips for a
postgresql Howto Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-rebalance.md
Last updated 07/20/2021
# Rebalance shards in Hyperscale (Citus) server group + To take advantage of newly added nodes, rebalance distributed table [shards](concepts-distributed-data.md#shards). Rebalancing moves shards from existing nodes to the new ones. Hyperscale (Citus) offers zero-downtime rebalancing, meaning queries continue without interruption during
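A hedged example of triggering and monitoring a rebalance from the coordinator (the `$PG_URI` connection string is a placeholder; `rebalance_table_shards` and `get_rebalance_progress` are standard Citus functions):

```sh
# Start moving shards onto the newly added worker nodes
psql "$PG_URI" -c "SELECT rebalance_table_shards();"

# Check progress from another session while the rebalance runs
psql "$PG_URI" -c "SELECT * FROM get_rebalance_progress();"
```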
postgresql Howto Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-ssl-connection-security.md
Last updated 07/16/2020 # Configure TLS in Azure Database for PostgreSQL - Hyperscale (Citus)+ The Hyperscale (Citus) coordinator node requires client applications to connect with Transport Layer Security (TLS). Enforcing TLS between the database server and client applications helps keep data confidential in transit. Extra verification settings described below also protect against "man-in-the-middle" attacks. ## Enforcing TLS connections
postgresql Howto Table Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-table-size.md
Last updated 12/06/2021
# Determine table and relation size + The usual way to find table sizes in PostgreSQL, `pg_total_relation_size`, drastically under-reports the size of distributed tables on Hyperscale (Citus). All this function does on a Hyperscale (Citus) server group is to reveal the size
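As a sketch, the Citus-aware sizing functions can be used instead; the `events` table name is a placeholder, and `citus_total_relation_size` is a standard Citus function:

```sh
# Report the size of a distributed table across all of its shards, not just the coordinator's metadata
psql "$PG_URI" -c "SELECT pg_size_pretty(citus_total_relation_size('events'));"
```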
postgresql Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-troubleshoot-common-connection-issues.md
Last updated 12/17/2021
# Troubleshoot connection issues to Azure Database for PostgreSQL - Hyperscale (Citus) + Connection problems may be caused by several things, such as: * Firewall settings
postgresql Howto Troubleshoot Read Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-troubleshoot-read-only.md
Last updated 08/03/2021
# Troubleshoot read-only access to Azure Database for PostgreSQL - Hyperscale (Citus) + PostgreSQL can't run on a machine without some free disk space. To maintain access to PostgreSQL servers, it's necessary to prevent the disk space from running out.
postgresql Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-upgrade.md
Last updated 4/5/2021
# Upgrade Hyperscale (Citus) server group + These instructions describe how to upgrade to a new major version of PostgreSQL on all server group nodes.
postgresql Howto Useful Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-useful-diagnostic-queries.md
Last updated 8/23/2021
# Useful Diagnostic Queries + ## Finding which node contains data for a specific tenant In the multi-tenant use case, we can determine which worker node contains the
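One hedged version of that lookup, assuming a distributed table named `events` sharded by tenant ID (`get_shard_id_for_distribution_column` and the `pg_dist_*` metadata tables are standard Citus objects):

```sh
# Find the worker node holding the shard for tenant 42 of the events table
psql "$PG_URI" <<'SQL'
SELECT shardid, nodename, nodeport
FROM pg_dist_placement AS placement
JOIN pg_dist_node AS node
  ON placement.groupid = node.groupid
WHERE node.noderole = 'primary'
  AND shardid = get_shard_id_for_distribution_column('events', 42);
SQL
```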
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/overview.md
Last updated 04/20/2022
# What is Hyperscale (Citus)? + ## The superpower of distributed tables Hyperscale (Citus) is PostgreSQL extended with the superpower of "distributed
postgresql Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/product-updates.md
Last updated 10/15/2021
# Product updates for PostgreSQL - Hyperscale (Citus) + ## Updates feed The Microsoft Azure website lists newly available features per product, plus
postgresql Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-connect-psql.md
Last updated 05/05/2022
# Connect to a Hyperscale (Citus) server group with psql + ## Prerequisites To follow this quickstart, you'll first need to:
postgresql Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-create-portal.md
Last updated 05/05/2022
# Create a Hyperscale (Citus) server group in the Azure portal + Azure Database for PostgreSQL - Hyperscale (Citus) is a managed service that allows you to run horizontally scalable PostgreSQL databases in the cloud.
Let's get started!
# [Direct link](#tab/direct) + Visit [Create Hyperscale (Citus) server group](https://portal.azure.com/#create/Microsoft.PostgreSQLServerGroup) in the Azure portal. # [Via portal search](#tab/portal-search) + 1. Visit the [Azure portal](https://portal.azure.com/) and search for **citus**. Select **Azure Database for PostgreSQL Hyperscale (Citus)**. ![search for citus](../media/quickstart-hyperscale-create-portal/portal-search.png)
postgresql Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-distribute-tables.md
Last updated 05/05/2022
# Model and load data + In this example, we'll use Hyperscale (Citus) to store and query events recorded from GitHub open source contributors.
postgresql Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-run-queries.md
Last updated 05/05/2022
# Run queries + ## Prerequisites To follow this quickstart, you'll first need to:
postgresql Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-extensions.md
Last updated 02/24/2022
# PostgreSQL extensions in Azure Database for PostgreSQL – Hyperscale (Citus) + PostgreSQL provides the ability to extend the functionality of your database by using extensions. Extensions allow for bundling multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions can function like built-in features. For more information on PostgreSQL extensions, see [Package related objects into an extension](https://www.postgresql.org/docs/current/static/extend-extensions.html). ## Use PostgreSQL extensions
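For instance, loading and removing an extension each take a single statement. The `hstore` extension is used here only as an illustration and assumes it's on the server's allow-list:

```sh
# List the extensions the server makes available, then load one of them
psql "$PG_URI" <<'SQL'
SELECT name, default_version FROM pg_available_extensions ORDER BY name;
CREATE EXTENSION IF NOT EXISTS hstore;
-- DROP EXTENSION hstore;   -- removing it again is just as simple
SQL
```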
postgresql Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-functions.md
Last updated 02/24/2022
# Functions in the Hyperscale (Citus) SQL API + This section contains reference information for the user-defined functions provided by Hyperscale (Citus). These functions help in providing distributed functionality to Hyperscale (Citus).
postgresql Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-limits.md
Last updated 02/25/2022
# Azure Database for PostgreSQL – Hyperscale (Citus) limits and limitations + The following section describes capacity and functional limits in the Hyperscale (Citus) service.
postgresql Reference Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-metadata.md
Last updated 02/18/2022
# System tables and views + Hyperscale (Citus) creates and maintains special tables that contain information about distributed data in the server group. The coordinator node consults these tables when planning how to run queries across the worker nodes.
postgresql Reference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-overview.md
Last updated 02/24/2022
# The Hyperscale (Citus) SQL API + Azure Database for PostgreSQL - Hyperscale (Citus) includes features beyond standard PostgreSQL. Below is a categorized reference of functions and configuration options for:
postgresql Reference Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-parameters.md
Last updated 02/18/2022
# Server parameters + There are various server parameters that affect the behavior of Hyperscale (Citus), both from standard PostgreSQL, and specific to Hyperscale (Citus). These parameters can be set in the Azure portal for a Hyperscale (Citus) server
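As a quick sketch, a parameter's current value can be inspected from any client session, for example a standard PostgreSQL setting alongside a Citus-specific one; changing values happens through the portal as described above:

```sh
# Inspect a standard PostgreSQL parameter and a Citus-specific one
psql "$PG_URI" -c "SHOW work_mem;" -c "SHOW citus.shard_count;"
```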
postgresql Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-versions.md
Last updated 10/01/2021
# Supported database versions in Azure Database for PostgreSQL – Hyperscale (Citus) + ## PostgreSQL versions The version of PostgreSQL running in a Hyperscale (Citus) server group is
postgresql Resources Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-compute.md
Last updated 05/10/2022
# Azure Database for PostgreSQL – Hyperscale (Citus) compute and storage+ You can select the compute and storage settings independently for worker nodes and the coordinator node in a Hyperscale (Citus) server
postgresql Resources Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-pricing.md
Last updated 02/23/2022
# Pricing for Azure Database for PostgreSQL – Hyperscale (Citus) + ## General pricing For the most up-to-date pricing information, see the service
postgresql Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-regions.md
Last updated 02/23/2022
# Regional availability for Azure Database for PostgreSQL – Hyperscale (Citus) + Hyperscale (Citus) server groups are available in the following Azure regions: * Americas:
postgresql Tutorial Design Database Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-design-database-multi-tenant.md
Last updated 05/14/2019
# Tutorial: design a multi-tenant database by using Azure Database for PostgreSQL – Hyperscale (Citus) + In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to: > [!div class="checklist"]
postgresql Tutorial Design Database Realtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-design-database-realtime.md
Last updated 05/14/2019
# Tutorial: Design a real-time analytics dashboard by using Azure Database for PostgreSQL – Hyperscale (Citus) + In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to: > [!div class="checklist"]
postgresql Tutorial Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-private-access.md
Last updated 01/14/2022
# Create server group with private access in Azure Database for PostgreSQL - Hyperscale (Citus) + This tutorial creates a virtual machine and a Hyperscale (Citus) server group, and establishes [private access](concepts-private-access.md) between them.
For demonstration, we'll use a virtual machine running Debian Linux, and the
```sh # provision the VM+ az vm create \ --resource-group link-demo \ --name link-demo-vm \
az vm create \
--generate-ssh-keys # install psql database client+ az vm run-command invoke \ --resource-group link-demo \ --name link-demo-vm \
coordinator node of the server group.
```sh # save db URI+ # # obtained from Settings -> Connection Strings in the Azure portal+ # # replace {your_password} in the string with your actual password+ PG_URI='host=c.link-demo-sg.postgres.database.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require' # attempt to connect to server group with psql in the virtual machine+ az vm run-command invoke \ --resource-group link-demo \ --name link-demo-vm \
Delete the resource group, and the resources inside will be deprovisioned:
az group delete --resource-group link-demo # press y to confirm+ ``` ## Next steps
postgresql Tutorial Shard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-shard.md
Last updated 12/16/2020
# Tutorial: Shard data on worker nodes in Azure Database for PostgreSQL – Hyperscale (Citus) + In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to: > [!div class="checklist"]
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (Preview) + >[!NOTE] > Single Server to Flexible Server migration tool is in private preview.
postgresql How To Migrate From Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-from-oracle.md
Last updated 03/18/2021
# Migrate Oracle to Azure Database for PostgreSQL + This guide helps you to migrate your Oracle schema to Azure Database for PostgreSQL. For detailed and comprehensive migration guidance, see the [Migration guide resources](https://github.com/microsoft/OrcasNinjaTeam/blob/master/Oracle%20to%20PostgreSQL%20Migration%20Guide/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Guide.pdf).
postgresql How To Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-online.md
Last updated 5/6/2019
# Minimal-downtime migration to Azure Database for PostgreSQL - Single Server+ You can perform PostgreSQL migrations to Azure Database for PostgreSQL with minimal downtime by using the newly introduced **continuous sync capability** for the [Azure Database Migration Service](https://aka.ms/get-dms) (DMS). This functionality limits the amount of downtime that is incurred by the application.
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
Last updated 05/09/2022
# Migrate Single Server to Flexible Server PostgreSQL using Azure CLI + >[!NOTE] > Single Server to Flexible Server migration tool is in private preview.
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
Last updated 05/09/2022
# Migrate Single Server to Flexible Server PostgreSQL using the Azure portal ++ >[!NOTE] > Single Server to Flexible Server migration tool is in private preview.
postgresql How To Migrate Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-using-dump-and-restore.md
Last updated 09/22/2020
# Migrate your PostgreSQL database by using dump and restore+ You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a dump file. Then use [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) to restore the PostgreSQL database from an archive file created by `pg_dump`.
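A hedged example of that flow using a custom-format archive (host names, user names, and the database name are placeholders):

```sh
# Dump the source database into a compressed custom-format archive
pg_dump -Fc -v -h <source-host> -U <source-user> -d <dbname> -f mydb.dump

# Restore the archive into the Azure Database for PostgreSQL target
pg_restore -v --no-owner -h <target-server>.postgres.database.azure.com \
  -U <target-user> -d <dbname> mydb.dump
```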
postgresql How To Migrate Using Export And Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-using-export-and-import.md
Last updated 09/22/2020 # Migrate your PostgreSQL database using export and import+ You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a script file and [psql](https://www.postgresql.org/docs/current/static/app-psql.html) to import the data into the target database from that file.
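The same idea with a plain-text SQL script instead of a custom-format archive (again, the names are placeholders):

```sh
# Export the source database to a SQL script, then replay it on the target with psql
pg_dump -v -h <source-host> -U <source-user> -d <dbname> -f mydb.sql
psql "host=<target-server>.postgres.database.azure.com port=5432 dbname=<dbname> user=<target-user> sslmode=require" \
  -f mydb.sql
```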
postgresql How To Setup Azure Ad App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-setup-azure-ad-app-portal.md
Last updated 05/09/2022
# Set up Azure AD app to use with Single to Flexible server Migration + This quickstart article shows you how to set up an Azure Active Directory (Azure AD) app to use with Single to Flexible server migration. It's an important component of the Single to Flexible migration feature. See [Azure Active Directory app](../../active-directory/develop/howto-create-service-principal-portal.md) for details. The Azure AD app helps with role-based access control (RBAC), as the migration infrastructure requires access to both the source and target servers and is restricted by the roles assigned to the Azure AD app. Once created, the Azure AD app instance can be used to manage multiple migrations. To get started, create a new Azure Active Directory Enterprise App by following these steps: ## Create Azure AD App
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concept-reserved-pricing.md
Last updated 10/06/2021
# Prepay for Azure Database for PostgreSQL compute resources with reserved capacity - Azure Database for PostgreSQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for PostgreSQL reserved capacity, you make an upfront commitment on a PostgreSQL server for a one- or three-year period to get a significant discount on the compute costs. To purchase Azure Database for PostgreSQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. ## How does the instance reservation work?
postgresql How To Upgrade Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-upgrade-using-dump-and-restore.md
You can consider this method if you have few larger tables in your database and
``` > [!TIP]
-> The process mentioned in this document can also be used to upgrade your Azure Database for PostgreSQL - Flexible server, which is in Preview. The main difference is the connection string for the flexible server target is without the `@dbName`. For example, if the user name is `pg`, the single server's username in the connect string will be `pg@pg-95`, while with flexible server, you can simply use `pg`.
+> The process mentioned in this document can also be used to upgrade your Azure Database for PostgreSQL - Flexible server. The main difference is the connection string for the flexible server target is without the `@dbName`. For example, if the user name is `pg`, the single server's username in the connect string will be `pg@pg-95`, while with flexible server, you can simply use `pg`.
## Post upgrade/migrate After the major version upgrade is complete, we recommend running the `ANALYZE` command in each database to refresh the `pg_statistic` table. Otherwise, you may run into performance issues.
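A sketch of both points, reusing the `pg` / `pg-95` names from the tip above (the flexible server name and database are placeholders):

```sh
# Single Server target: the user name carries the @server suffix
psql "host=pg-95.postgres.database.azure.com port=5432 dbname=<dbname> user=pg@pg-95 sslmode=require" -c "ANALYZE;"

# Flexible Server target: the plain user name is enough
psql "host=<flexible-server>.postgres.database.azure.com port=5432 dbname=<dbname> user=pg sslmode=require" -c "ANALYZE;"
```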
private-5g-core Monitor Private 5G Core With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-with-log-analytics.md
You can also follow the steps in [Create an overview Log Analytics dashboard usi
## Estimate costs
-Log Analytics will ingest an average of 1.4 GB of data a day for each log streamed to it by a single packet core instance. [Monitor usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) provides information on how to estimate the cost of using Log Analytics to monitor Azure Private 5G Core.
+Log Analytics will ingest an average of 1.4 GB of data a day from each packet core instance. [Monitor usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) provides information on how to estimate the cost of using Log Analytics to monitor Azure Private 5G Core.
## Next steps - [Enable Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md)
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
Title: Understand access and permissions in the Microsoft Purview Data Map
-description: This article gives an overview permission, access control, and collections in the Microsoft Purview Data Map. Role-based access control is managed within the Microsoft Purview Data Map itself, so this guide will cover the basics to secure your information.
+ Title: Understand access and permissions in the Microsoft Purview governance portal
+description: This article gives an overview of permissions, access control, and collections in the Microsoft Purview governance portal. Role-based access control is managed within the Microsoft Purview Data Map in the governance portal itself, so this guide will cover the basics to secure your information.
Last updated 05/16/2022
-# Access control in the Microsoft Purview Data Map
+# Access control in the Microsoft Purview governance portal
-The Microsoft Purview Data Map uses **Collections** to organize and manage access across its sources, assets, and other artifacts. This article describes collections and access management in your Microsoft Purview Data Map.
+The Microsoft Purview governance portal uses **Collections** in the Microsoft Purview Data Map to organize and manage access across its sources, assets, and other artifacts. This article describes collections and access management for your account in the Microsoft Purview governance portal.
> [!IMPORTANT] > This article refers to permissions required for the Microsoft Purview governance portal, and applications like the Microsoft Purview Data Map, Data Catalog, Data Estate Insights, etc. If you are looking for permissions information for the Microsoft Purview compliance center, follow [the article for permissions in the Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center-permissions). ## Collections
-A collection is a tool Microsoft Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All accesses to Microsoft Purview's resources are managed from collections in the Microsoft Purview account itself.
+A collection is a tool that the Microsoft Purview Data Map uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All access to the Microsoft Purview governance portal's resources is managed from collections in the Microsoft Purview Data Map.
## Roles
-Microsoft Purview uses a set of predefined roles to control who can access what within the account. These roles are currently:
+The Microsoft Purview governance portal uses a set of predefined roles to control who can access what within the account. These roles are currently:
-- **Collection administrator** - a role for users that will need to assign roles to other users in Microsoft Purview or manage collections. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
+- **Collection administrator** - a role for users that will need to assign roles to other users in the Microsoft Purview governance portal or manage collections. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
- **Data curators** - a role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view data estate insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets. - **Data readers** - a role that provides read-only access to data assets, classifications, classification rules, collections and glossary terms.-- **Data source administrator** - a role that allows a user to manage data sources and scans. If a user is granted only to **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must be also granted as either **Data reader** or **Data curator** roles.-- **Insights reader** - a role that provides read-only access to insights reports for collections where the insights reader also has at least the **Data reader** role. For more information, see [insights permissions.](insights-permissions.md)-- **Policy author (Preview)** - a role that allows a user to view, update, and delete Microsoft Purview policies through the policy management app within Microsoft Purview.
+- **Data source administrator** - a role that allows a user to manage data sources and scans. If a user is granted only to **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must be also granted as either **Data reader** or **Data curator** roles.- **Insights reader** - a role that provides read-only access to insights reports for collections where the insights reader also has at least the **Data reader** role. For more information, see [insights permissions.](insights-permissions.md)
+- **Policy author (Preview)** - a role that allows a user to view, update, and delete Microsoft Purview policies through the policy management app within the Microsoft Purview governance portal.
- **Workflow administrator** - a role that allows a user to access the workflow authoring page in the Microsoft Purview governance portal, and publish workflows on collections where they have access permissions. Workflow administrator only has access to authoring, and so will need at least Data reader permission on a collection to be able to access the Purview governance portal. > [!NOTE]
-> At this time, Microsoft Purview Policy author role is not sufficient to create policies. The Microsoft Purview Data source admin role is also required.
+> At this time, Microsoft Purview policy author role is not sufficient to create policies. The Microsoft Purview data source admin role is also required.
## Who should be assigned to what role?
Microsoft Purview uses a set of predefined roles to control who can access what
|I need to edit information about assets, assign classifications, associate them with glossary entries, and so on.|Data curator| |I need to edit the glossary or set up new classification definitions|Data curator| |I need to view Data Estate Insights to understand the governance posture of my data estate|Data curator|
-|My application's Service Principal needs to push data to Microsoft Purview|Data curator|
+|My application's Service Principal needs to push data to the Microsoft Purview Data Map|Data curator|
|I need to set up scans via the Microsoft Purview governance portal|Data curator on the collection **or** data curator **and** data source administrator where the source is registered.|
-|I need to enable a Service Principal or group to set up and monitor scans in Microsoft Purview without allowing them to access the catalog's information |Data source administrator|
-|I need to put users into roles in Microsoft Purview | Collection administrator |
+|I need to enable a Service Principal or group to set up and monitor scans in the Microsoft Purview Data Map without allowing them to access the catalog's information |Data source administrator|
+|I need to put users into roles in the Microsoft Purview governance portal| Collection administrator |
|I need to create and publish access policies | Data source administrator and policy author |
-|I need to create workflows for my Microsoft Purview account | Workflow administrator |
+|I need to create workflows for my Microsoft Purview account in the governance portal| Workflow administrator |
|I need to view insights for collections I'm a part of | Insights reader **or** data curator | >[!NOTE] > **\*Data source administrator permissions on Policies** - Data source administrators are also able to publish data policies.
-## Understand how to use Microsoft Purview's roles and collections
+## Understand how to use the Microsoft Purview governance portal's roles and collections
-All access control is managed in Microsoft Purview's collections. Microsoft Purview's collections can be found in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com) and select the Microsoft Purview governance portal tile on the Overview page. From there, navigate to the data map on the left menu, and then select the 'Collections' tab.
+All access control is managed through collections in the Microsoft Purview Data Map. The collections can be found in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). Open your account in the [Azure portal](https://portal.azure.com) and select the Microsoft Purview governance portal tile on the Overview page. From there, navigate to the data map on the left menu, and then select the 'Collections' tab.
-When a Microsoft Purview account is created, it starts with a root collection that has the same name as the Microsoft Purview account itself. The creator of the Microsoft Purview account is automatically added as a Collection Admin, Data Source Admin, Data Curator, and Data Reader on this root collection, and can edit and manage this collection.
+When a Microsoft Purview (formerly Azure Purview) account is created, it starts with a root collection that has the same name as the account itself. The creator of the account is automatically added as a Collection Admin, Data Source Admin, Data Curator, and Data Reader on this root collection, and can edit and manage this collection.
-Sources, assets, and objects can be added directly to this root collection, but so can other collections. Adding collections will give you more control over who has access to data across your Microsoft Purview account.
+Sources, assets, and objects can be added directly to this root collection, but so can other collections. Adding collections will give you more control over who has access to data across your account.
-All other users can only access information within the Microsoft Purview account if they, or a group they're in, are given one of the above roles. This means, when you create a Microsoft Purview account, no one but the creator can access or use its APIs until they're [added to one or more of the above roles in a collection](how-to-create-and-manage-collections.md#add-role-assignments).
+All other users can only access information within the Microsoft Purview governance portal if they, or a group they're in, are given one of the above roles. This means, when you create an account, no one but the creator can access or use its APIs until they're [added to one or more of the above roles in a collection](how-to-create-and-manage-collections.md#add-role-assignments).
Users can only be added to a collection by a collection admin, or through permissions inheritance. The permissions of a parent collection are automatically inherited by its subcollections. However, you can choose to [restrict permission inheritance](how-to-create-and-manage-collections.md#restrict-inheritance) on any collection. If you do this, its subcollections will no longer inherit permissions from the parent and will need to be added directly, though collection admins that are automatically inherited from a parent collection can't be removed.
-You can assign Microsoft Purview roles to users, security groups and service principals from your Azure Active Directory that is associated with your purview account's subscription.
+You can assign roles to users, security groups, and service principals from your Azure Active Directory that is associated with your subscription.
## Assign permissions to your users
-After creating a Microsoft Purview account, the first thing to do is create collections and assign users to roles within those collections.
+After creating a Microsoft Purview (formerly Azure Purview) account, the first thing to do is create collections and assign users to roles within those collections.
> [!NOTE]
-> If you created your Microsoft Purview account using a service principal, to be able to access the Microsoft Purview governance portal and assign permissions to users, you will need to grant a user collection admin permissions on the root collection.
+> If you created your account using a service principal, to be able to access the Microsoft Purview governance portal and assign permissions to users, you will need to grant a user collection admin permissions on the root collection.
> You can use [this Azure CLI command](/cli/azure/purview/account#az-purview-account-add-root-collection-admin): > > ```azurecli
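> # Hypothetical example: the command name comes from the link above, but the parameter
> # names below are assumptions; confirm with `az purview account add-root-collection-admin --help`
> az purview account add-root-collection-admin \
>     --account-name "<purview-account-name>" \
>     --resource-group "<resource-group-name>" \
>     --object-id "<user-object-id>"
> ```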
After creating a Microsoft Purview account, the first thing to do is create coll
### Create collections
-Collections can be customized for structure of the sources in your Microsoft Purview account, and can act like organized storage bins for these resources. When you're thinking about the collections you might need, consider how your users will access or discover information. Are your sources broken up by departments? Are there specialized groups within those departments that will only need to discover some assets? Are there some sources that should be discoverable by all your users?
+Collections can be customized to the structure of the sources in your Microsoft Purview Data Map, and can act like organized storage bins for these resources. When you're thinking about the collections you might need, consider how your users will access or discover information. Are your sources broken up by departments? Are there specialized groups within those departments that will only need to discover some assets? Are there some sources that should be discoverable by all your users?
This will inform the collections and subcollections you may need to most effectively organize your data map.
Now that we have a base understanding of collections, permissions, and how they
:::image type="content" source="./media/catalog-permissions/collection-example.png" alt-text="Chart showing a sample collections hierarchy broken up by region and department." border="true":::
-This is one way an organization might structure their data: Starting with their root collection (Contoso, in this example) collections are organized into regions, and then into departments and subdepartments. Data sources and assets can be added to any one these collections to organize data resources by these regions and department, and manage access control along those lines. There's one subdepartment, Revenue, that has strict access guidelines, so permissions will need to be tightly managed.
+This is one way an organization might structure their data: Starting with their root collection (Contoso, in this example) collections are organized into regions, and then into departments and subdepartments. Data sources and assets can be added to any one of these collections to organize data resources by these regions and departments, and manage access control along those lines. There's one subdepartment, Revenue, that has strict access guidelines so permissions will need to be tightly managed.
-The [data reader role](#roles) can access information within the catalog, but not manage or edit it. So for our example above, adding the Data Reader permission to a group on the root collection and allowing inheritance will give all users in that group reader permissions on Microsoft Purview sources and assets. This makes these resources discoverable, but not editable, by everyone in that group. [Restricting inheritance](how-to-create-and-manage-collections.md#restrict-inheritance) on the Revenue group will control access to those assets. Users who need access to revenue information can be added separately to the Revenue collection.
+The [data reader role](#roles) can access information within the catalog, but not manage or edit it. So for our example above, adding the Data Reader permission to a group on the root collection and allowing inheritance will give all users in that group reader permissions on sources and assets in the Microsoft Purview Data Map. This makes these resources discoverable, but not editable, by everyone in that group. [Restricting inheritance](how-to-create-and-manage-collections.md#restrict-inheritance) on the Revenue group will control access to those assets. Users who need access to revenue information can be added separately to the Revenue collection.
Similarly with the Data Curator and Data Source Admin roles, permissions for those groups will start at the collection where they're assigned and trickle down to subcollections that haven't restricted inheritance. Below we have assigned permissions for several groups at collections levels in the Americas sub collection. :::image type="content" source="./media/catalog-permissions/collection-permissions-example.png" alt-text="Chart showing a sample collections hierarchy broken up by region and department showing permissions distribution." border="true":::
For full instructions, see our [how-to guide for adding role assignments](how-to
## Administrator change
-There may be a time when your [root collection admin](#roles) needs to change. By default, the user who creates the Microsoft Purview account is automatically assigned collection admin to the root collection. To update the root collection admin, there are three options:
+There may be a time when your [root collection admin](#roles) needs to change. By default, the user who creates the account is automatically assigned collection admin to the root collection. To update the root collection admin, there are three options:
- You can [assign permissions through the portal](how-to-create-and-manage-collections.md#add-role-assignments) as you have for any other role.
There may be a time when your [root collection admin](#roles) needs to change. B
## Next steps
-Now that you have a base understanding of collections, and access control, follow the guides below to create and manage those collections, or get started with registering sources into your Microsoft Purview Resource.
+Now that you have a base understanding of collections, and access control, follow the guides below to create and manage those collections, or get started with registering sources into your Microsoft Purview Data Map.
- [How to create and manage collections](how-to-create-and-manage-collections.md)-- [Microsoft Purview supported data sources](azure-purview-connector-overview.md)
+- [Supported data sources in the Microsoft Purview Data Map](azure-purview-connector-overview.md)
purview Concept Guidelines Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-guidelines-pricing.md
Title: Microsoft Purview pricing guidelines
-description: This article provides a guideline towards understanding the various components in Microsoft Purview pricing.
+ Title: Pricing guidelines for Microsoft Purview (formerly Azure Purview)
+description: This article provides a guideline to understand and strategize pricing for the components of Microsoft Purview (formerly Azure Purview).
Previously updated : 04/06/2022 Last updated : 05/23/2022
-# Microsoft Purview pricing
+# Pricing for Microsoft Purview (formerly Azure Purview)
-Microsoft Purview enables a unified governance experience by providing a single pane of glass for managing data governance by enabling automated scanning and classifying data at scale.
+Microsoft Purview, formerly known as Azure Purview, provides a single pane of glass for managing data governance by enabling automated scanning and classification of data at scale through the Microsoft Purview governance portal.
+## Why do you need to understand the components of pricing?
-## Why do you need to understand the components of the Microsoft Purview pricing?
+- While the pricing for Microsoft Purview (formerly Azure Purview) is on a subscription-based **Pay-As-You-Go** model, there are various dimensions that you can consider while budgeting
+- This guideline is intended to help you plan the budgeting for Microsoft Purview in the governance portal by providing a view on the control factors that impact the budget
-- While the pricing for Microsoft Purview is on a subscription-based **Pay-As-You-Go** model, there are various dimensions that you can consider while budgeting for Microsoft Purview-- This guideline is intended to help you plan the budgeting for Microsoft Purview by providing a view on the various control factors that impact the budget
+## Factors impacting Azure Pricing
++
+There are **direct** and **indirect** costs that need to be considered while planning budgeting and cost management.
-## Factors impacting Azure Pricing
-There are [**direct**](#direct-costs) and [**indirect**](#indirect-costs) costs that need to be considered while planning the Microsoft Purview budgeting and cost management.
## Direct costs + Direct costs impacting Microsoft Purview pricing are based on the following three dimensions: - [**Elastic data map**](#elastic-data-map) - [**Automated scanning & classification**](#automated-scanning-classification-and-ingestion) - [**Advanced resource sets**](#advanced-resource-sets) + ### Elastic data map -- The **Data map** is the foundation of the Microsoft Purview architecture and so needs to be up to date with asset information in the data estate at any given point
+- The **Data map** is the foundation of the Microsoft Purview governance portal architecture and so needs to be up to date with asset information in the data estate at any given point
- The data map is charged in terms of **Capacity Unit** (CU). The data map is provisioned at one CU if the catalog is storing up to 10 GB of metadata storage and serves up to 25 data map operations/sec -- While provisioning an account initially, the data map is always provisioned at one CU
+- The data map is always provisioned at one CU when an account is first created
- However, the data map scales automatically between the minimal and maximal limits of that elasticity window, to cater to changes in the data map with respect to two key factors - **operation throughput** and **metadata storage**
Direct costs impacting Microsoft Purview pricing are based on the following thre
- The number of concurrent users also forms a factor governing the data map capacity unit - Other factors to consider are type of search query, API interaction, workflows, approvals, and so on - Data burst level
- - When there is a need for more operations/second throughput, the Data map can autoscale within the elasticity window to cater to the changed load
+ - When there's a need for more operations/second throughput, the Data map can autoscale within the elasticity window to cater to the changed load
- This constitutes the **burst characteristic** that needs to be estimated and planned for - The burst characteristic comprises the **burst level** and the **burst duration** for which the burst exists - The **burst level** is a multiplicative index of the expected consistent elasticity under steady state
Direct costs impacting Microsoft Purview pricing are based on the following thre
### Automated scanning, classification, and ingestion
-There are two major automated processes that can trigger ingestion of metadata into Microsoft Purview:
+There are two major automated processes that can trigger ingestion of metadata into the Microsoft Purview Data Map:
1. Automatic scans using native [connectors](azure-purview-connector-overview.md). This process includes three main steps: - Metadata scan - Automatic classification
- - Ingestion of metadata into Microsoft Purview
+ - Ingestion of metadata into the Microsoft Purview Data Map
2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines. This process includes:
- - Ingestion of metadata and lineage into Microsoft Purview if Microsoft Purview account is connected to any Azure Data Factory or Azure Synapse pipelines.
+ - Ingestion of metadata and lineage into the Microsoft Purview Data Map if the account is connected to any Azure Data Factory or Azure Synapse pipelines.
+ #### 1. Automatic scans using native connectors+ - A **full scan** processes all assets within a selected scope of a data source whereas an **incremental scan** detects and processes assets, which have been created, modified, or deleted since the previous successful scan - All scans (full or Incremental scans) will pick up **updated, modified, or deleted** assets -- It is important to consider and avoid the scenarios when multiple people or groups belonging to different departments set up scans for the same data source resulting in additional pricing for duplicate scanning
+- It's important to consider and avoid the scenarios when multiple people or groups belonging to different departments set up scans for the same data source resulting in more pricing for duplicate scanning
- Schedule **frequent incremental scans** post the initial full scan aligned with the changes in the data estate. This will ensure the data map is kept up to date always and the incremental scans consume lesser v-core hours as compared to a full scan -- The **"View Details"** link for a data source will enable users to run a full scan. However, consider running incremental scans after a full scan for optimized scanning excepting when there is a change to the scan rule set (classifications/file types)
+- The **"View Details"** link for a data source will enable users to run a full scan. However, consider running incremental scans after a full scan for optimized scanning excepting when there's a change to the scan rule set (classifications/file types)
- **Register the data source at a parent collection** and **Scope scans at child collection** with different access controls to ensure there are no duplicate scanning costs being entailed - Curtail the users who are allowed to register data sources for scanning through **fine grained access control** and **Data Source Administrator** role using [Collection authorization](./catalog-permissions.md). This will ensure only valid data sources are allowed to be registered and scanning v-core hours is controlled resulting in lower costs for scanning -- Consider that the **type of data source** and the **number of assets** being scanned impact the scan duration
+- Consider that the **type of data source** and the **number of assets** being scanned affect the scan duration
- **Create custom scan rule sets** to include only the subset of **file types** available in your data estate and **classifications** that are relevant to your business requirements to ensure optimal use of the scanners
There are two major automated processes that can trigger ingestion of metadata i
#### 2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines -- metadata and lineage is ingested from Azure Data Factory or Azure Synapse pipelines every time the pipelines run in the source system.
+- metadata and lineage are ingested from Azure Data Factory or Azure Synapse pipelines every time the pipelines run in the source system.
### Advanced resource sets -- Microsoft Purview uses **resource sets** to address the challenge of mapping large numbers of data assets to a single logical resource by providing the ability to scan all the files in the data lake and find patterns (GUID, localization patterns, etc.) to group them as a single asset in the data map
+- The Microsoft Purview Data Map uses **resource sets** to address the challenge of mapping large numbers of data assets to a single logical resource by providing the ability to scan all the files in the data lake and find patterns (GUID, localization patterns, etc.) to group them as a single asset in the data map
-- **Advanced Resource Set** is an optional feature, which allows for customers to get enriched resource set information computed such as Total Size, Partition Count, etc., and enables the customization of resource set grouping via pattern rules. If Advanced Resource Set feature is not enabled, your data catalog will still contain resource set assets, but without the aggregated properties. There will be no "Resource Set" meter billed to the customer in this case.
+- **Advanced Resource Set** is an optional feature, which allows for customers to get enriched resource set information computed such as Total Size, Partition Count, etc., and enables the customization of resource set grouping via pattern rules. If Advanced Resource Set feature isn't enabled, your data catalog will still contain resource set assets, but without the aggregated properties. There will be no "Resource Set" meter billed to the customer in this case.
-- Use the basic resource set feature, before switching on the Advanced Resource Sets in Microsoft Purview to verify if requirements are met
+- Use the basic resource set feature, before switching on the Advanced Resource Sets in the Microsoft Purview Data Map to verify if requirements are met
- Consider turning on Advanced Resource Sets if:
- - your data lakes schema is constantly changing, and you are looking for additional value beyond the basic Resource Set feature to enable Microsoft Purview to compute parameters such as #partitions, size of the data estate, etc., as a service
- - there is a need to customize how resource set assets get grouped
+ - Your data lakes schema is constantly changing, and you're looking for more value beyond the basic Resource Set feature to enable the Microsoft Purview Data Map to compute parameters such as #partitions, size of the data estate, etc., as a service
+ - There's a need to customize how resource set assets get grouped
-- It is important to note that billing for Advanced Resource Sets is based on the compute used by the offline tier to aggregate resource set information and is dependent on the size/number of resource sets in your catalog
+- It's important to note that billing for Advanced Resource Sets is based on the compute used by the offline tier to aggregate resource set information and is dependent on the size/number of resource sets in your catalog
## Indirect costs
-Indirect costs impacting Microsoft Purview pricing to be considered are:
+Indirect costs impacting Microsoft Purview (formerly Azure Purview) pricing to be considered are:
- [Managed resources](https://azure.microsoft.com/pricing/details/azure-purview/)
- - When a Microsoft Purview account is provisioned, a storage account and event hub queue are created within the subscription in order to cater to secured scanning, which may be charged separately
+ - When an account is provisioned, a storage account and event hub queue are created within the subscription in order to cater to secured scanning, which may be charged separately
- [Azure private endpoint](./catalog-private-link.md)
- - Azure private end points are used for Microsoft Purview accounts where it is required for users on a virtual network (VNet) to securely access the catalog over a private link
+ - Azure private end points are used for Microsoft Purview (formerly Azure Purview), where it's required for users on a virtual network (VNet) to securely access the catalog over a private link
- The prerequisites for setting up private endpoints could result in extra costs - [Self-hosted integration runtime related costs](./manage-integration-runtimes.md) - Self-hosted integration runtime requires infrastructure, which results in extra costs
- - It is required to deploy and register Self-hosted integration runtime (SHIR) inside the same virtual network where Microsoft Purview ingestion private endpoints are deployed
- - [Additional memory requirements for scanning](./register-scan-sapecc-source.md#create-and-run-scan)
- - Certain data sources such as SAP require additional memory on the SHIR machine for scanning
+ - It's required to deploy and register Self-hosted integration runtime (SHIR) inside the same virtual network where Microsoft Purview ingestion private endpoints are deployed
+ - [Other memory requirements for scanning](./register-scan-sapecc-source.md#create-and-run-scan)
+ - Certain data sources such as SAP require more memory on the SHIR machine for scanning
- [Virtual Machine Sizing](../virtual-machines/sizes.md)
Indirect costs impacting Microsoft Purview pricing to be considered are:
- [Azure Alerts](../azure-monitor/alerts/alerts-overview.md) - Azure Alerts can notify customers of issues found with infrastructure or applications using the monitoring data in Azure Monitor
- - The additional pricing for Azure Alerts is available [here](https://azure.microsoft.com/pricing/details/monitor/)
+ - The pricing for Azure Alerts is available [here](https://azure.microsoft.com/pricing/details/monitor/)
- [Cost Management Budgets & Alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md) - Automatically generated cost alerts are used in Azure to monitor Azure usage and spending based on when Azure resources are consumed
Indirect costs impacting Microsoft Purview pricing to be considered are:
## Next steps-- [Microsoft Purview pricing page](https://azure.microsoft.com/pricing/details/azure-purview/)
+- [Microsoft Purview, formerly Azure Purview, pricing page](https://azure.microsoft.com/pricing/details/azure-purview/)
purview Create Catalog Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-catalog-portal.md
Title: 'Quickstart: Create a Microsoft Purview account in the Azure portal'
-description: This Quickstart describes how to create a Microsoft Purview account and configure permissions to begin using it.
+ Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account'
+description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account and configure permissions to begin using it.
Previously updated : 11/15/2021 Last updated : 05/23/2022
-#Customer intent: As a data steward, I want create a new Microsoft Purview Account so that I can scan and classify my data.
-# Quickstart: Create a Microsoft Purview account in the Azure portal
+# Quickstart: Create an account in the Microsoft Purview governance portal
-This quickstart describes the steps to create a Microsoft Purview account in the Azure portal and get started on the process of classifying, securing, and discovering your data in Microsoft Purview!
+This quickstart describes the steps to create a Microsoft Purview (formerly Azure Purview) account through the Azure portal. Then we'll get started on the process of classifying, securing, and discovering your data in the Microsoft Purview Data Map!
-Microsoft Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Microsoft Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your data estate. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure the right use of your data.
-For more information about Microsoft Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
[!INCLUDE [purview-quickstart-prerequisites](includes/purview-quickstart-prerequisites.md)]
-## Create a Microsoft Purview account
+## Create an account
-1. Go to the **Microsoft Purview accounts** page in the [Azure portal](https://portal.azure.com).
+1. Search for **Microsoft Purview** in the [Azure portal](https://portal.azure.com).
:::image type="content" source="media/create-catalog-portal/purview-accounts-page.png" alt-text="Screenshot showing the purview accounts page in the Azure portal"::: 1. Select **Create** to create a new Microsoft Purview account.
- :::image type="content" source="media/create-catalog-portal/select-create.png" alt-text="Screenshot with the create button highlighted a Microsoft Purview in the Azure portal.":::
+ :::image type="content" source="media/create-catalog-portal/select-create.png" alt-text="Screenshot of the Microsoft Purview accounts page with the create button highlighted in the Azure portal.":::
Or instead, you can go to the marketplace, search for **Microsoft Purview**, and select **Create**. :::image type="content" source="media/create-catalog-portal/search-marketplace.png" alt-text="Screenshot showing Microsoft Purview in the Azure Marketplace, with the create button highlighted.":::
-1. On the new Create Microsoft Purview account page, under the **Basics** tab, select the Azure subscription where you want to create your Microsoft Purview account.
+1. On the new Create Microsoft Purview account page under the **Basics** tab, select the Azure subscription where you want to create your account.
-1. Select an existing **resource group** or create a new one to hold your Microsoft Purview account.
+1. Select an existing **resource group** or create a new one to hold your account.
To learn more about resource groups, see our article on [using resource groups to manage your Azure resources](../azure-resource-manager/management/manage-resource-groups-portal.md#what-is-a-resource-group).
For more information about Microsoft Purview, [see our overview page](overview.m
:::image type="content" source="media/create-catalog-portal/name-error.png" alt-text="Screenshot showing the Create Microsoft Purview account screen with an account name that is already in use, and the error message highlighted."::: 1. Choose a **location**.
- The list shows only locations that support Microsoft Purview. The location you choose will be the region where your Microsoft Purview account and meta data will be stored. Sources can be housed in other regions.
+ The list shows only locations that support the Microsoft Purview governance portal. The location you choose will be the region where your Microsoft Purview account and metadata will be stored. Sources can be housed in other regions.
> [!Note]
- > Microsoft Purview does not support moving accounts across regions, so be sure to deploy to the correction region. You can find out more information about this in [move operation support for resources](../azure-resource-manager/management/move-support-resources.md).
+ > Microsoft Purview, formerly Azure Purview, does not support moving accounts across regions, so be sure to deploy to the correct region. You can find out more information about this in [move operation support for resources](../azure-resource-manager/management/move-support-resources.md).
-1. Select **Review & Create**, and then select **Create**. It takes a few minutes to complete the creation. The newly created Microsoft Purview account instance will appear in the list on your **Microsoft Purview accounts** page.
+1. Select **Review & Create**, and then select **Create**. It takes a few minutes to complete the creation. The newly created account will appear in the list on your **Microsoft Purview accounts** page.
:::image type="content" source="media/create-catalog-portal/create-resource.png" alt-text="Screenshot showing the Create Microsoft Purview account screen with the Review + Create button highlighted"::: ## Open the Microsoft Purview governance portal
-After your Microsoft Purview account is created, you'll use the Microsoft Purview governance portal to access and manage it. There are two ways to open the Microsoft Purview governance portal:
+After your account is created, you'll use the Microsoft Purview governance portal to access and manage it. There are two ways to open the Microsoft Purview governance portal:
* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview governance portal" tile on the overview page. :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview governance portal tile highlighted.":::
-* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account, and sign in to your workspace.
+* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account name, and sign in to your workspace.
## Next steps
-In this quickstart, you learned how to create a Microsoft Purview account and how to access it through the Microsoft Purview governance portal.
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account, and how to access it.
Next, you can create a user-assigned managed identity (UAMI) that will enable your new Microsoft Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication. To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
-Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview:
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to the Microsoft Purview Data Map:
* [Using the Microsoft Purview governance portal](use-azure-purview-studio.md) * [Create a collection](quickstart-create-collection.md)
purview How To Create And Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-and-manage-collections.md
Title: How to create and manage collections
-description: This article explains how to create and manage collections within Microsoft Purview.
+description: This article explains how to create and manage collections within the Microsoft Purview Data Map.
Previously updated : 01/24/2022 Last updated : 05/23/2022
-# Create and manage collections in Microsoft Purview
+# Create and manage collections in the Microsoft Purview Data Map
-Collections in Microsoft Purview can be used to organize assets and sources by your business's flow. They are also the tool used to manage access across Microsoft Purview. This guide will take you through the creation and management of these collections, as well as cover steps about how to register sources and add assets into your collections.
+Collections in the Microsoft Purview Data Map can be used to organize assets and sources by your business's flow. They're also the tool used to manage access across the Microsoft Purview governance portal. This guide will take you through the creation and management of these collections, as well as cover steps about how to register sources and add assets into your collections.
## Prerequisites
Collections in Microsoft Purview can be used to organize assets and sources by y
* Your own [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md).
-* An active [Microsoft Purview account](create-catalog-portal.md).
+* An active [Microsoft Purview (formerly Azure Purview) account](create-catalog-portal.md).
### Check permissions
-In order to create and manage collections in Microsoft Purview, you will need to be a **Collection Admin** within Microsoft Purview. We can check these permissions in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). You can find Studio in the overview page of the Microsoft Purview account in [Azure portal](https://portal.azure.com).
+In order to create and manage collections in the Microsoft Purview Data Map, you'll need to be a **Collection Admin**. You can check these permissions in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), which you can open from the overview page of the account in the [Azure portal](https://portal.azure.com).
1. Select Data Map > Collections from the left pane to open collection management page. :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Microsoft Purview governance portal window, opened to the Data Map, with the Collections tab selected." border="true":::
-1. Select your root collection. This is the top collection in your collection list and will have the same name as your Microsoft Purview account. In the following example, it's called Contoso Microsoft Purview. Alternatively, if collections already exist you can select any collection where you want to create a subcollection.
+1. Select your root collection. This is the top collection in your collection list and will have the same name as your account. In the following example, it's called Contoso Microsoft Purview. Alternatively, if collections already exist you can select any collection where you want to create a subcollection.
:::image type="content" source="./media/how-to-create-and-manage-collections/select-root-collection.png" alt-text="Screenshot of Microsoft Purview governance portal window, opened to the Data Map, with the root collection highlighted." border="true":::
In order to create and manage collections in Microsoft Purview, you will need to
:::image type="content" source="./media/how-to-create-and-manage-collections/role-assignments.png" alt-text="Screenshot of Microsoft Purview governance portal window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
-1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Microsoft Purview account, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant your permission.
+1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the account, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant your permission.
:::image type="content" source="./media/how-to-create-and-manage-collections/collection-admins.png" alt-text="Screenshot of Microsoft Purview governance portal window, opened to the Data Map, with the collection admin section highlighted." border="true":::
You'll need to be a collection admin in order to delete a collection. If you are
## Add roles and restrict access through collections
-Since permissions are managed through collections in Microsoft Purview, it is important to understand the roles and what permissions they will give your users. A user granted permissions on a collection will have access to sources and assets associated with that collection, and inherit permissions to subcollections. Inheritance [can be restricted](#restrict-inheritance), but is allowed by default.
+Since permissions are managed through collections in the Microsoft Purview Data Map, it's important to understand the roles and what permissions they'll give your users. A user granted permissions on a collection will have access to sources and assets associated with that collection, and inherit permissions to subcollections. Inheritance [can be restricted](#restrict-inheritance), but is allowed by default.
The following guide will discuss the roles, how to manage them, and permissions inheritance.
A few of the main roles are:
:::image type="content" source="./media/how-to-create-and-manage-collections/search-user-permissions.png" alt-text="Screenshot of Microsoft Purview governance portal collection admin window with the search bar highlighted." border="true":::
-1. Select **OK** to save your changes, and you will see the new users reflected in the role assignments list.
+1. Select **OK** to save your changes, and you'll see the new users reflected in the role assignments list.
### Remove role assignments
A few of the main roles are:
### Restrict inheritance
-Collection permissions are inherited automatically from the parent collection. For example, any permissions on the root collection (the collection at the top of the list that has the same name as your Microsoft Purview account), will be inherited by all collections below it. You can restrict inheritance from a parent collection at any time, using the restrict inherited permissions option.
+Collection permissions are inherited automatically from the parent collection. For example, any permissions on the root collection (the collection at the top of the list that has the same name as your account), will be inherited by all collections below it. You can restrict inheritance from a parent collection at any time, using the restrict inherited permissions option.
-Once you restrict inheritance, you will need to add users directly to the restricted collection to grant them access.
+Once you restrict inheritance, you'll need to add users directly to the restricted collection to grant them access.
1. Navigate to the collection where you want to restrict inheritance and select the **Role assignments** tab. 1. Select **Restrict inherited permissions** and select **Restrict access** in the popup dialog to remove inherited permissions from this collection and any subcollections. Note that collection admin permissions won't be affected.
The collections listed here are restricted to subcollections of the data source
:::image type="content" source="./media/how-to-create-and-manage-collections/scan-under-collection.png" alt-text="Screenshot of a new scan window with the collection dropdown highlighted."border="true":::
-1. Back in the collection window, you will see the data sources linked to the collection on the sources card.
+1. Back in the collection window, you'll see the data sources linked to the collection on the sources card.
:::image type="content" source="./media/how-to-create-and-manage-collections/source-under-collection.png" alt-text="Screenshot of the data map Microsoft Purview governance portal window with the newly added source card highlighted in the map."border="true":::
Assets and sources are also associated with collections. During a scan, if the s
1. Permissions in asset details page: 1. Check the collection-based permission model by following the [add roles and restricting access on collections guide above](#add-roles-and-restrict-access-through-collections).
- 1. If you don't have read permission on a collection, the assets under that collection will not be listed in search results. If you get the direct URL of one asset and open it, you will see the no access page. Contact your Microsoft Purview admin to grant you the access. You can select the **Refresh** button to check the permission again.
+ 1. If you don't have read permission on a collection, the assets under that collection won't be listed in search results. If you get the direct URL of one asset and open it, you'll see the no access page. Contact your collection admin to grant you the access. You can select the **Refresh** button to check the permission again.
:::image type="content" source="./media/how-to-create-and-manage-collections/no-access.png" alt-text="Screenshot of Microsoft Purview governance portal asset window where the user has no permissions, and has no access to information or options." border="true":::
Assets and sources are also associated with collections. During a scan, if the s
### Search by collection
-1. In Microsoft Purview, the search bar is located at the top of the Microsoft Purview governance portal UX.
+1. In the Microsoft Purview governance portal, the search bar is located at the top of the portal window.
- :::image type="content" source="./media/how-to-create-and-manage-collections/purview-search-bar.png" alt-text="Screenshot showing the location of the Microsoft Purview search bar" border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/purview-search-bar.png" alt-text="Screenshot showing the location of the Microsoft Purview governance portal search bar." border="true":::
1. When you select the search bar, you can see your recent search history and recently accessed assets. Select **View all** to see all of the recently viewed assets. :::image type="content" source="./media/how-to-create-and-manage-collections/search-no-keywords.png" alt-text="Screenshot showing the search bar before any keywords have been entered" border="true":::
-1. Enter in keywords that help identify your asset such as its name, data type, classifications, and glossary terms. As you enter in keywords relating to your desired asset, Microsoft Purview displays suggestions on what to search and potential asset matches. To complete your search, select **View search results** or press **Enter**.
+1. Enter in keywords that help identify your asset such as its name, data type, classifications, and glossary terms. As you enter in keywords relating to your desired asset, the Microsoft Purview governance portal displays suggestions on what to search and potential asset matches. To complete your search, select **View search results** or press **Enter**.
:::image type="content" source="./media/how-to-create-and-manage-collections/search-keywords.png" alt-text="Screenshot showing the search bar as a user enters in keywords" border="true":::
-1. The search results page shows a list of assets that match the keywords provided in order of relevance. There are various factors that can affect the relevance score of an asset. You can filter down the list more by selecting specific collections, data stores, classifications, contacts, labels, and glossary terms that apply to the asset you are looking for.
+1. The search results page shows a list of assets that match the keywords provided in order of relevance. There are various factors that can affect the relevance score of an asset. You can filter down the list more by selecting specific collections, data stores, classifications, contacts, labels, and glossary terms that apply to the asset you're looking for.
:::image type="content" source="./media/how-to-create-and-manage-collections/search-results.png" alt-text="Screenshot showing the results of a search" border="true":::
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Title: Introduction to Microsoft Purview
-description: This article provides an overview of Microsoft Purview, including its features and the problems it addresses. Microsoft Purview enables any user to register, discover, understand, and consume data sources.
+ Title: Introduction to Microsoft Purview (formerly Azure Purview)
+description: This article provides an overview of Microsoft Purview (formerly Azure Purview), including its features and the problems it addresses. Microsoft Purview enables any user to register, discover, understand, and consume data sources.
Last updated 05/16/2022
-# What is Microsoft Purview?
+# What is Microsoft Purview (formerly Azure Purview)?
Microsoft Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Microsoft Purview allows you to: - Create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage.
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-mysql.md
Title: Azure DB for MySQL (preview)
-description: Set up a search indexer to index data stored in Azure Database for MySQL for full text search in Azure Cognitive Search.
+description: Learn how to set up a search indexer to index data stored in Azure Database for MySQL for full text search in Azure Cognitive Search.
ms.devlang: rest-api - Previously updated : 02/28/2022++ Last updated : 06/10/2022 # Index data from Azure Database for MySQL
Last updated 02/28/2022
In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Database for MySQL and makes it searchable in Azure Cognitive Search.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to indexing files in Azure DB for MySQL. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. When configured to include a high water mark and soft deletion, the indexer will take all changes, uploads, and deletes for your MySQL database and reflect these changes in your search index. Data extraction occurs when you submit the Create Indexer request.
+This article supplements [Creating indexers in Azure Cognitive Search](search-howto-create-indexers.md) with information that's specific to indexing content in Azure DB for MySQL. It uses the REST APIs to demonstrate a three-part workflow common to all indexers:
+
+- Create a data source
+- Create an index
+- Create an indexer
+
+When configured to include a high water mark and soft deletion, the indexer takes all changes, uploads, and deletes for your MySQL database. It reflects these changes in your search index. Data extraction occurs when you submit the Create Indexer request.
## Prerequisites
-+ [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide feedback and get help with any issues you encounter.
+- [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide feedback and get help with any issues you encounter.
+
+- [Azure Database for MySQL single server](../mysql/single-server-overview.md).
-+ [Azure Database for MySQL](../mysql/overview.md) ([single server](../mysql/single-server-overview.md)).
+- A table or view that provides the content. A primary key is required. If you're using a view, it must have a [high water mark column](#DataChangeDetectionPolicy).
-+ A table or view that provides the content. A primary key is required. If you're using a view, it must have a [high water mark column](#DataChangeDetectionPolicy).
+- Read permissions. A *full access* connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Reader** permissions on MySQL.
-+ Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Reader** permissions on MySQL.
+- A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
-+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
+ You can also use the [Azure SDK for .NET](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql). You can't use the portal for indexer creation, but you can manage indexers and data sources once they're created.
- You can also use the [Azure SDK for .NET](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql). You can't use the portal for indexer creation, but you can manage indexers and data sources once they're created.
+For more information, see [Azure Database for MySQL](../mysql/overview.md).
## Preview limitations
-Currently, change tracking and deletion detection aren't working if the date or timestamp is uniform for all rows. This is a known issue that will be addressed in an update to the preview. Until this issue is addressed, don't add a skillset to the MySQL indexer.
+Currently, change tracking and deletion detection aren't working if the date or timestamp is uniform for all rows. This limitation is a known issue to be addressed in an update to the preview. Until this issue is addressed, don't add a skillset to the MySQL indexer.
The preview doesn't support geometry types and blobs.
The data source definition specifies the data to index, credentials, and policie
} ```
-1. Set "type" to `"mysql"` (required).
+1. Set `type` to `"mysql"` (required).
-1. Set "credentials" to an ADO.NET connection string. You can find connection strings in Azure portal, on the **Connection strings** page for MySQL.
+1. Set `credentials` to an ADO.NET connection string. You can find connection strings in Azure portal, on the **Connection strings** page for MySQL.
-1. Set "container" to the name of the table.
+1. Set `container` to the name of the table.
-1. [Set "dataChangeDetectionPolicy"](#DataChangeDetectionPolicy) if data is volatile and you want the indexer to pick up just the new and updated items on subsequent runs.
+1. Set [`dataChangeDetectionPolicy`](#DataChangeDetectionPolicy) if data is volatile and you want the indexer to pick up just the new and updated items on subsequent runs.
-1. [Set "dataDeletionDetectionPolicy"](#DataDeletionDetectionPolicy) if you want to remove search documents from a search index when the source item is deleted.
+1. Set [`dataDeletionDetectionPolicy`](#DataDeletionDetectionPolicy) if you want to remove search documents from a search index when the source item is deleted. A sketch of a complete data source request follows this list.
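For illustration, the settings above fit together roughly as follows. This is a minimal sketch rather than a definitive request: the data source name, table name, connection string shape, and API version are placeholders and assumptions, not values from this article.

```http
POST https://[search-service-name].search.windows.net/datasources?api-version=[preview-api-version]
Content-Type: application/json
api-key: [admin key]

{
    "name": "hotels-mysql-ds",
    "description": "Hypothetical data source pointing at a MySQL table",
    "type": "mysql",
    "credentials": {
        "connectionString": "Server=[server-name].mysql.database.azure.com;Port=3306;Database=[database];Uid=[user];Pwd=[password];SslMode=Preferred;"
    },
    "container": {
        "name": "hotels"
    }
}
```

The optional change and deletion detection policies described in the linked sections would be added as sibling properties of `container` in the same body.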
## Add search fields to an index
In a [search index](search-what-is-an-index.md), add search index fields that co
] ```
-If the primary key in the source table matches the document key (in this case, "ID"), the indexer will import the primary key as the document key.
+If the primary key in the source table matches the document key (in this case, "ID"), the indexer imports the primary key as the document key.
<a name="TypeMapping"></a> ### Mapping data types
-The following table maps the MySQL database to Cognitive Search equivalents. See [Supported data types (Azure Cognitive Search)](/rest/api/searchservice/supported-data-types) for more information.
+The following table maps the MySQL database to Cognitive Search equivalents. For more information, see [Supported data types (Azure Cognitive Search)](/rest/api/searchservice/supported-data-types).
> [!NOTE] > The preview does not support geometry types and blobs.
Once the index and data source have been created, you're ready to create the ind
1. [Specify field mappings](search-indexer-field-mappings.md) if there are differences in field name or type, or if you need multiple versions of a source field in the search index.
-An indexer runs automatically when it's created. You can prevent this by setting "disabled" to true. To control indexer execution, [run an indexer on demand](search-howto-run-reset-indexers.md) or [put it on a schedule](search-howto-schedule-indexers.md).
+An indexer runs automatically when it's created. You can prevent it from running by setting `disabled` to `true`. To control indexer execution, [run an indexer on demand](search-howto-run-reset-indexers.md) or [put it on a schedule](search-howto-schedule-indexers.md).
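As a rough sketch of such a request (the indexer, data source, and index names, the field mapping, and the schedule are illustrative assumptions, and the API version is a placeholder), an indexer created in a disabled state with one field mapping might look like this:

```http
POST https://[search-service-name].search.windows.net/indexers?api-version=[preview-api-version]
Content-Type: application/json
api-key: [admin key]

{
    "name": "hotels-mysql-idxr",
    "dataSourceName": "hotels-mysql-ds",
    "targetIndexName": "hotels-mysql-index",
    "disabled": true,
    "schedule": { "interval": "PT2H" },
    "fieldMappings": [
        { "sourceFieldName": "Description", "targetFieldName": "HotelDescription" }
    ]
}
```

Setting `disabled` back to `false`, or running the indexer on demand, starts indexing against the target index.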
## Check indexer status
Execution history contains up to 50 of the most recently completed executions, w
Once an indexer has fully populated a search index, you might want subsequent indexer runs to incrementally index just the new and changed rows in your database.
-To enable incremental indexing, set the "dataChangeDetectionPolicy" property in your data source definition. This property tells the indexer which change tracking mechanism is used on your data.
+To enable incremental indexing, set the `dataChangeDetectionPolicy` property in your data source definition. This property tells the indexer which change tracking mechanism is used on your data.
For Azure Database for MySQL indexers, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy).
-An indexer's change detection policy relies on having a "high water mark" column that captures the row version, or the date and time when a row was last updated. It's often a DATE, DATETIME, or TIMESTAMP column at a granularity sufficient for meeting the requirements of a high water mark column.
+An indexer's change detection policy relies on having a *high water mark* column that captures the row version, or the date and time when a row was last updated. It's often a `DATE`, `DATETIME`, or `TIMESTAMP` column at a granularity sufficient for meeting the requirements of a high water mark column.
In your MySQL database, the high water mark column must meet the following requirements:
-+ All data inserts must specify a value for the column.
-+ All updates to an item also change the value of the column.
-+ The value of this column increases with each insert or update.
-+ Queries with the following WHERE and ORDER BY clauses can be executed efficiently: `WHERE [High Water Mark Column] > [Current High Water Mark Value] ORDER BY [High Water Mark Column]`
+- All data inserts must specify a value for the column.
+- All updates to an item also change the value of the column.
+- The value of this column increases with each insert or update.
+- Queries with the following `WHERE` and `ORDER BY` clauses can be executed efficiently: `WHERE [High Water Mark Column] > [Current High Water Mark Value] ORDER BY [High Water Mark Column]`
The following example shows a [data source definition](#define-the-data-source) with a change detection policy:
api-key: [admin key]
> [!IMPORTANT] > If you're using a view, you must set a high water mark policy in your indexer data source. >
-> If the source table does not have an index on the high water mark column, queries used by the MySQL indexer may time out. In particular, the `ORDER BY [High Water Mark Column]` clause requires an index to run efficiently when the table contains many rows.
+> If the source table does not have an index on the high water mark column, queries used by the MySQL indexer might time out. In particular, the `ORDER BY [High Water Mark Column]` clause requires an index to run efficiently when the table contains many rows.
<a name="DataDeletionDetectionPolicy"></a> ## Indexing deleted rows
-When rows are deleted from the table or view, you normally want to delete those rows from the search index as well. However, if the rows are physically removed from the table, an indexer has no way to infer the presence of records that no longer exist. The solution is to use a "soft-delete" technique to logically delete rows without removing them from the table. You'll do this by adding a column to your table or view and mark rows as deleted using that column.
+When rows are deleted from the table or view, you normally want to delete those rows from the search index as well. However, if the rows are physically removed from the table, an indexer has no way to infer the presence of records that no longer exist. The solution is to use a *soft-delete* technique to logically delete rows without removing them from the table. Add a column to your table or view and mark rows as deleted using that column.
-Given a column that provides deletion state, an indexer can be configured to remove any search documents for which deletion state is set to true. The configuration property that supports this behavior is a data deletion detection policy, which is specified in the [data source definition](#define-the-data-source) as follows:
+Given a column that provides deletion state, an indexer can be configured to remove any search documents for which deletion state is set to `true`. The configuration property that supports this behavior is a data deletion detection policy, which is specified in the [data source definition](#define-the-data-source) as follows:
```http {
Given a column that provides deletion state, an indexer can be configured to rem
} ```
-The "softDeleteMarkerValue" must be a string. For example, if you have an integer column where deleted rows are marked with the value 1, use `"1"`. If you have a BIT column where deleted rows are marked with the Boolean true value, use the string literal `True` or `true` (the case doesn't matter).
+The `softDeleteMarkerValue` must be a string. For example, if you have an integer column where deleted rows are marked with the value 1, use `"1"`. If you have a `BIT` column where deleted rows are marked with the Boolean true value, use the string literal `True` or `true` (the case doesn't matter).
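As an illustrative sketch only (the `IsDeleted` column name is hypothetical, not a value from this article), the policy portion of a data source definition for a BIT soft-delete column might look like this:

```http
{
    "dataDeletionDetectionPolicy": {
        "@odata.type": "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
        "softDeleteColumnName": "IsDeleted",
        "softDeleteMarkerValue": "true"
    }
}
```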
## Next steps You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure MySQL:
-+ [Index large data sets](search-howto-large-index.md)
-+ [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
+- [Index large data sets](search-howto-large-index.md)
+- [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
search Search Manage Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-azure-cli.md
ms.devlang: azurecli Previously updated : 05/23/2022 Last updated : 06/08/2022 # Manage your Azure Cognitive Search service with the Azure CLI
You can run Azure CLI commands and scripts on Windows, macOS, Linux, or in [Azur
> * [Scale up or down with replicas and partitions](#scale-replicas-and-partitions) > * [Create a shared private link resource](#create-a-shared-private-link-resource)
-Occasionally, questions are asked about tasks *not* on the above list. Currently, you cannot use either the **az search** module or the management REST API to change a server name, region, or tier. Dedicated resources are allocated when a service is created. As such, changing the underlying hardware (location or node type) requires a new service. Similarly, there are no tools or APIs for transferring content, such as an index, from one service to another.
+Occasionally, questions are asked about tasks *not* on the above list.
-Within a service, content creation and management are through [Search Service REST API](/rest/api/searchservice/) or [.NET SDK](/dotnet/api/overview/azure/search.documents-readme). While there are no dedicated PowerShell commands for content, you can write scripts that call REST or .NET APIs to create and load indexes.
+You cannot change a server name, region, or tier programmatically or in the portal. Dedicated resources are allocated when a service is created. As such, changing the underlying hardware (location or node type) requires a new service.
+
+You cannot use tools or APIs to transfer content, such as an index, from one service to another. Within a service, programmatic creation of content is through the [Search Service REST API](/rest/api/searchservice/) or an SDK such as the [Azure SDK for .NET](/dotnet/api/overview/azure/search.documents-readme). While there are no dedicated commands for content migration, you can write a script that calls the REST API or a client library to create and load indexes on a new service.
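For example, a minimal sketch of that approach with the Search Service REST API might export an index definition from the original service and recreate it on the new one. The service names, index name, and keys below are placeholders, and the GA `2020-06-30` API version is an assumption you'd adjust to your service:

```http
### Get an index definition from the source service
GET https://[source-service-name].search.windows.net/indexes/[index-name]?api-version=2020-06-30
api-key: [source admin key]

### Recreate the index on the target service, using the JSON returned above as the request body
PUT https://[target-service-name].search.windows.net/indexes/[index-name]?api-version=2020-06-30
Content-Type: application/json
api-key: [target admin key]
```

The PUT recreates only the index schema; documents still have to be reloaded on the new service, for example by rerunning the indexers that populate the index.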
+
+Preview administration features are typically not available in the **az search** module. If you want to use a preview feature, [use the Management REST API](search-manage-rest.md) and a preview API version.
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
az network private-endpoint dns-zone-group create \
--zone-name "searchServiceZone" ```
-For more information on creating private endpoints in PowerShell, see this [Private Link Quickstart](../private-link/create-private-endpoint-cli.md)
+For more information on creating private endpoints in Azure CLI, see this [Private Link Quickstart](../private-link/create-private-endpoint-cli.md).
### Manage private endpoint connections
search Search Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-powershell.md
ms.devlang: powershell Previously updated : 05/23/2022 Last updated : 06/08/2022
You can run PowerShell cmdlets and scripts on Windows, Linux, or in [Azure Cloud
> * [Scale up or down with replicas and partitions](#scale-replicas-and-partitions) > * [Create a shared private link resource](#create-a-shared-private-link-resource)
-Occasionally, questions are asked about tasks *not* on the above list. Currently, you cannot use either the **Az.Search** module or the management REST API to change a server name, region, or tier. Dedicated resources are allocated when a service is created. As such, changing the underlying hardware (location or node type) requires a new service. Similarly, there are no tools or APIs for transferring content, such as an index, from one service to another.
+Occasionally, questions are asked about tasks *not* on the above list.
-Within a service, programmatic creation of content is through [Search Service REST API](/rest/api/searchservice/) or [.NET SDK](/dotnet/api/overview/azure/search.documents-readme). While there are no dedicated PowerShell commands for content, you can write PowerShell script that calls REST or .NET APIs to create and load indexes.
+You cannot change a server name, region, or tier programmatically or in the portal. Dedicated resources are allocated when a service is created. As such, changing the underlying hardware (location or node type) requires a new service.
+
+You cannot use tools or APIs to transfer content, such as an index, from one service to another. Within a service, programmatic creation of content is through the [Search Service REST API](/rest/api/searchservice/) or an SDK such as the [Azure SDK for .NET](/dotnet/api/overview/azure/search.documents-readme). While there are no dedicated commands for content migration, you can write a script that calls the REST API or a client library to create and load indexes on a new service.
+
+Preview administration features are typically not available in the **Az.Search** module. If you want to use a preview feature, [use the Management REST API](search-manage-rest.md) and a preview API version.
<a name="check-versions-and-load"></a>
New-AzSearchService -ResourceGroupName <resource-group-name> `
### Create an S3HD service
-To create a [S3HD](./search-sku-tier.md#tier-descriptions) service, a combination of `-Sku` and `-HostingMode` is used. Set `-Sku` to `Standard3` and `-HostingMode` to `HighDensity`.
+To create an [S3HD](./search-sku-tier.md#tier-descriptions) service, a combination of `-Sku` and `-HostingMode` is used. Set `-Sku` to `Standard3` and `-HostingMode` to `HighDensity`.
```azurepowershell-interactive New-AzSearchService -ResourceGroupName <resource-group-name> `
search Search Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md
Previously updated : 05/23/2022 Last updated : 06/08/2022 # Manage your Azure Cognitive Search service with REST APIs
Last updated 05/23/2022
> * [.NET SDK](/dotnet/api/microsoft.azure.management.search) > * [Python](https://pypi.python.org/pypi/azure-mgmt-search/0.1.0)
-In this article, learn how to create and configure an Azure Cognitive Search service using the [Management REST APIs](/rest/api/searchmanagement/). Only the Management REST APIs are guaranteed to provide early access to [preview features](/rest/api/searchmanagement/management-api-versions#2021-04-01-preview).
+In this article, learn how to create and configure an Azure Cognitive Search service using the [Management REST APIs](/rest/api/searchmanagement/). Only the Management REST APIs are guaranteed to provide early access to [preview features](/rest/api/searchmanagement/management-api-versions#2021-04-01-preview). Set a preview API version to access preview features.
> [!div class="checklist"] > * [List search services](#list-search-services)
-> * [Create or update a service](#create-or-update-services)
+> * [Create or update a service](#create-or-update-a-service)
> * [(preview) Enable Azure role-based access control for data plane](#enable-rbac) > * [(preview) Enforce a customer-managed key policy](#enforce-cmk) > * [(preview) Disable semantic search](#disable-semantic-search)
Returns all search services under the current subscription, including detailed s
GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2021-04-01-preview ```
-## Create or update services
+## Create or update a service
Creates or updates a search service under the current subscription:
PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups
} ```
+## Create an S3HD service
+
+To create an [S3HD](search-sku-tier.md#tier-descriptions) service, use a combination of the `sku` and `hostingMode` properties. Set `sku` to `Standard3` and `hostingMode` to `HighDensity`.
+
+```rest
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
+{
+ "location": "{{region}}",
+ "sku": {
+ "name": "Standard3"
+ },
+ "properties": {
+ "replicaCount": 1,
+ "partitionCount": 1,
+ "hostingMode": "HighDensity"
+ }
+}
+```
+ <a name="enable-rbac"></a> ## (preview) Enable Azure role-based authentication for data plane
search Search Sku Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-tier.md
Previously updated : 06/26/2021 Last updated : 06/08/2022
Tiers include **Free**, **Basic**, **Standard**, and **Storage Optimized**. Stan
**Free** creates a limited search service for smaller projects, like running tutorials and code samples. Internally, system resources are shared among multiple subscribers. You cannot scale a free service or run significant workloads.
-**Basic** and **Standard** are the most commonly used billable tiers, with **Standard** being the default because it gives you more flexibility in scaling for workloads. With dedicated resources under your control, you can deploy larger projects, optimize performance, and increase capacity.
+The most commonly used billable tiers include the following:
-Some tiers are designed for certain types of work. For example, **Standard 3 High Density (S3 HD)** is a *hosting mode* for S3, where the underlying hardware is optimized for a large number of smaller indexes and is intended for multitenancy scenarios. S3 HD has the same per-unit charge as S3, but the hardware is optimized for fast file reads on a large number of smaller indexes.
++ **Basic** has just one partition, but it can meet the SLA with its support for three replicas.
-**Storage Optimized** tiers offer larger storage capacity at a lower price per TB than the Standard tiers. The primary tradeoff is higher query latency, which you should validate for your specific application requirements. To learn more about the performance considerations of this tier, see [Performance and optimization considerations](search-performance-optimization.md).
++ **Standard** is the default. It gives you more flexibility in scaling for workloads. You can scale both partitions and replicas. With dedicated resources under your control, you can deploy larger projects, optimize performance, and increase capacity.+
+Some tiers are designed for certain types of work:
+++ **Standard 3 High Density (S3 HD)** is a *hosting mode* for S3, where the underlying hardware is optimized for a large number of smaller indexes and is intended for multitenancy scenarios. S3 HD has the same per-unit charge as S3, but the hardware is optimized for fast file reads on a large number of smaller indexes.+++ **Storage Optimized (L1, L2)** tiers offer larger storage capacity at a lower price per TB than the Standard tiers. These tiers are designed for large indexes that don't change very often. The primary tradeoff is higher query latency, which you should validate for your specific application requirements. To learn more about the performance considerations of this tier, see [Performance and optimization considerations](search-performance-optimization.md). You can find out more about the various tiers on the [pricing page](https://azure.microsoft.com/pricing/details/search/), in the [Service limits in Azure Cognitive Search](search-limits-quotas-capacity.md) article, and on the portal page when you're provisioning a service.
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
You can define the order in which automation rules will run. Later automation ru
For example, if "First Automation Rule" changed an incident's severity from Medium to Low, and "Second Automation Rule" is defined to run only on incidents with Medium or higher severity, it won't run on that incident.
+Rules based on the update trigger have their own separate order queue. If such rules are triggered to run on a just-created incident (by a change made by another automation rule), they will run only after all the applicable rules based on the create trigger have run.
+ ## Common use cases and scenarios ### Incident-triggered automation
If you've used playbooks to create tickets in external systems when incidents ar
## Automation rules execution
-Automation rules are run sequentially, according to the order you determine. Each automation rule is executed after the previous one has finished its run. Within an automation rule, all actions are run sequentially in the order in which they are defined.
+Automation rules are run sequentially, according to the [order](#order) you [determine](create-manage-use-automation-rules.md#finish-creating-your-rule). Each automation rule is executed after the previous one has finished its run. Within an automation rule, all actions are run sequentially in the order in which they are defined.
Playbook actions within an automation rule may be treated differently under some circumstances, according to the following criteria:
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
You will only see the storage types that you actually have defined resources for
| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Kusto function alias:** | CGFWFirewallActivity |
-| **Kusto function URL:** | https://aka.ms/Sentinel-barracudacloudfirewall-function |
+| **Kusto function URL:** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Barracuda%20CloudGen%20Firewall/Parsers/CGFWFirewallActivity |
| **Vendor documentation/<br>installation instructions** | https://aka.ms/Sentinel-barracudacloudfirewall-connector | | **Supported by** | [Barracuda](https://www.barracuda.com/support) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Illusive Networks Admin Guide](https://support.illusivenetworks.com/hc/en-us/sections/360002292119-Documentation-by-Version) |
-| **Supported by** | [Illusive Networks](https://www.illusivenetworks.com/technical-support/) |
+| **Supported by** | [Illusive Networks](https://illusive.com/support/) |
## Imperva WAF Gateway (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Kusto function alias:** | Morphisec |
-| **Kusto function URL** | https://aka.ms/Sentinel-Morphiescutpp-parser |
+| **Kusto function URL** | https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Morphisec/Parsers/Morphisec/Morphisec |
| **Supported by** | [Morphisec](https://support.morphisec.com/support/home) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **DCR support** | Not currently supported | | **Azure Function App code** | https://aka.ms/Sentinel-netskope-functioncode | | **API credentials** | <li>Netskope API Token |
-| **Vendor documentation/<br>installation instructions** | <li>[Netskope Cloud Security Platform](https://www.netskope.com/platform)<li>[Netskope API Documentation](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v1-overview.html)<li>[Obtain an API Token](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v2-overview.html) |
+| **Vendor documentation/<br>installation instructions** | <li>[Netskope Cloud Security Platform](https://www.netskope.com/platform)<li>[Netskope API Documentation](https://docs.netskope.com/en/rest-api-v1-overview.html)<li>[Obtain an API Token](https://docs.netskope.com/en/rest-api-v2-overview.html) |
| **Connector deployment instructions** | <li>[Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template<li>[Manual deployment](connect-azure-functions-template.md?tabs=MPS) | | **Kusto function alias** | Netskope | | **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-netskope-parser |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Kusto function alias:** | incident_lookup |
-| **Kusto function URL** | https://aka.ms/Sentinel-Onapsis-parser |
+| **Kusto function URL** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Onapsis%20Platform/Parsers/OnapsisLookup.txt |
| **Supported by** | [Onapsis](https://onapsis.force.com/s/login/) |
sentinel Migration Qradar Historical Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-qradar-historical-data.md
Last updated 05/03/2022- # Export historical data from QRadar
This article describes how to export your historical data from QRadar. After you
:::image type="content" source="media/migration-export-ingest/export-data.png" alt-text="Diagram illustrating steps involved in export and ingestion." lightbox="media/migration-export-ingest/export-data.png" border="false":::
-Follow the steps in these sections to export your historical data using [QRadar forwarding destination](https://www.ibm.com/docs/en/qsip/7.5?topic=administration-forward-data-other-systems).
+To export your QRadar data, you use the QRadar REST API to run Ariel Query Language (AQL) queries on data stored in an Ariel database. Because the export process is resource intensive, we recommend that you use small time ranges in your queries, and only migrate the data you need.
-## Configure QRadar forwarding destination
+## Create AQL query
-Configure the QRadar forwarding destination, including your profile, rules, and destination address:
+1. In the QRadar Console, select the **Log Activity** tab.
+1. Create a new AQL search query or select a saved search query to export the data. Ensure that the query includes the `START` and `STOP` functions to set the date and time range.
-1. [Configure a forwarding profile](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-configuring-forwarding-profiles).
-1. [Add a forwarding destination](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-adding-forwarding-destinations):
- 1. Set the **Event Format** to **JSON**.
- 2. Set the **Destination Address** to a server that has syslog running on TCP port 5141 and stores the ingested logs to a local folder path.
- 3. Select the forwarding profile created in step 1.
- 4. Enable the forwarding destination configuration.
+ Learn how to use [AQL](https://www.ibm.com/docs/en/qsip/7.5?topic=aql-ariel-query-language) and how to [save search criteria](https://www.ibm.com/docs/en/qsip/7.5?topic=searches-saving-search-criteria) in AQL.
-## Configure routing rules
+1. Copy the AQL query for later use.
+1. Encode the AQL query to URL-encoded format: paste the query you copied in step 3 into a [URL encoder](https://www.url-encode-decode.com/), and then copy the encoded output. A sketch of how the encoded query plugs into the search API request follows this list.
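As a sketch only, assume a hypothetical query `SELECT sourceip, destinationip, username FROM events START '2022-05-01 00:00' STOP '2022-05-02 00:00'`. After URL encoding, it would be substituted into the search request like this, shown as a raw HTTP request equivalent to the curl commands in the next section (the console host name and token are placeholders):

```http
POST https://[qradar-console]/api/ariel/searches?query_expression=SELECT%20sourceip%2C%20destinationip%2C%20username%20FROM%20events%20START%20%272022-05-01%2000%3A00%27%20STOP%20%272022-05-02%2000%3A00%27
SEC: [api_token]
Version: 12.0
Accept: application/json
```

The same encoded string is what replaces `<enter_encoded_AQL_from_previous_step>` in the commands below.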
-Configure routing rules:
+## Execute search query
-1. [Configure routing rules to forward data](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-configuring-routing-rules-forward-data).
-1. Set the **Mode** to **Offline**.
-1. Select the relevant **Forwarding Event Processor**.
-1. Set the **Data Source** to **Events**.
-1. Select **Add Filter** to add filter criteria for data that needs to be exported. For example, use the **Log Source Time** field to set a timestamp range.
-1. Select **Forward** and select the forwarding destination created when you [configured the QRadar forwarding destination](#configure-qradar-forwarding-destination) in step 2.
-1. [Enable the routing rule configuration](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-viewing-managing-routing-rules).
-1. Repeat steps 1-7 for each event processor from which you need to export data.
+You can execute the search query using one of these methods.
+
+- **QRadar Console user ID**. To use this method, ensure that the console user ID being used for data migration is assigned to a [security profile](https://www.ibm.com/docs/en/qradar-on-cloud?topic=management-security-profiles) that can access the data you need for the export.
+- **API token**. To use this method, [generate an API token in QRadar](https://www.ibm.com/docs/en/qradar-common?topic=app-creating-authorized-service-token-qradar-operations).
+
+To execute the search query:
+
+1. Log in to the system from which you'll download the historical data. Ensure that this system has access to the QRadar Console and QRadar API on TCP/443 via HTTPS.
+1. To execute the search query that retrieves the historical data, open a command prompt and run one of these commands:
+
+ - For the QRadar Console user ID method, run:
+
+ ```
+ curl -s -X POST -u <enter_qradar_console_user_id> -H 'Version: 12.0' -H 'Accept: application/json' 'https://<enter_qradar_console_ip_or_hostname>/api/ariel/searches?query_expression=<enter_encoded_AQL_from_previous_step>'
+ ```
+ - For the API token method, run:
+
+ ```
+ curl -s -X POST -H 'SEC: <enter_api_token>' -H 'Version: 12.0' -H 'Accept: application/json' 'https://<enter_qradar_console_ip_or_hostname>/api/ariel/searches?query_expression=<enter_encoded_AQL_from_previous_step>'
+ ```
+
+    The search job execution time may vary, depending on the AQL time range and the amount of queried data. We recommend that you run the query in small time ranges and query only the data you need for the export.
+
+ The output should return a status, such as `COMPLETED`, `EXECUTE`, `WAIT`, a `progress` value, and a `search_id` value. For example:
+
+ :::image type="content" source="media/migration-qradar-historical-data/export-output.png" alt-text="Screenshot of the output of the search query command." border="false":::
+
+1. Copy the value in the `search_id` field. You'll use this ID to check the progress and status of the search query execution, and to download the results after the search execution is complete.
+1. To check the status and the progress of the search, run one of these commands:
+ - For the QRadar Console user ID method, run:
+
+ ```
+    curl -s -X GET -u <enter_qradar_console_user_id> -H 'Version: 12.0' -H 'Accept: application/json' 'https://<enter_qradar_console_ip_or_hostname>/api/ariel/searches/<enter_search_id_from_previous_step>'
+ ```
+
+ - For the API token method, run:
+
+ ```
+    curl -s -X GET -H 'SEC: <enter_api_token>' -H 'Version: 12.0' -H 'Accept: application/json' 'https://<enter_qradar_console_ip_or_hostname>/api/ariel/searches/<enter_search_id_from_previous_step>'
+ ```
+
+1. Review the output. If the value in the `status` field is `COMPLETED`, continue to the next step. If the status isn't `COMPLETED`, check the value in the `progress` field, wait 5-10 minutes, and then rerun the command from step 4. A bash sketch that automates this polling and the download appears after these steps.
+1. Review the output and ensure that the status is `COMPLETED`.
+1. Run one of these commands to download the results or returned data from the JSON file to a folder on the current system:
+ - For the QRadar Console user ID method, run:
+
+ ```
+    curl -s -X GET -u <enter_qradar_console_user_id> -H 'Version: 12.0' -H 'Accept: application/json' 'https://<enter_qradar_console_ip_or_hostname>/api/ariel/searches/<enter_search_id_from_previous_step>/results' > <enter_path_to_file>.json
+ ```
+
+ - For the API token method, run:
+
+ ```
+    curl -s -X GET -H 'SEC: <enter_api_token>' -H 'Version: 12.0' -H 'Accept: application/json' 'https://<enter_qradar_console_ip_or_hostname>/api/ariel/searches/<enter_search_id_from_previous_step>/results' > <enter_path_to_file>.json
+ ```
+
+1. To retrieve the data that you need to export, [create the AQL query](#create-aql-query) (steps 1-4) and [execute the query](#execute-search-query) (steps 1-7) again. Adjust the time range and search queries to get the data you need.
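The numbered steps above can also be scripted. The following bash sketch is one way to poll the search status and download the results once the search completes; it assumes the API token method, that `jq` is installed, and uses placeholder values that you must replace.

```bash
# Placeholder values; replace with your QRadar Console host, API token, and search ID.
QRADAR_HOST="<enter_qradar_console_ip_or_hostname>"
API_TOKEN="<enter_api_token>"
SEARCH_ID="<enter_search_id>"

# Poll the search until QRadar reports it as COMPLETED.
while true; do
  STATUS=$(curl -s -X GET -H "SEC: $API_TOKEN" -H 'Version: 12.0' -H 'Accept: application/json' \
    "https://$QRADAR_HOST/api/ariel/searches/$SEARCH_ID" | jq -r '.status')
  echo "Search status: $STATUS"
  [ "$STATUS" = "COMPLETED" ] && break
  sleep 300   # wait five minutes between checks
done

# Download the search results to a local JSON file.
curl -s -X GET -H "SEC: $API_TOKEN" -H 'Version: 12.0' -H 'Accept: application/json' \
  "https://$QRADAR_HOST/api/ariel/searches/$SEARCH_ID/results" > results.json
```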
## Next steps
sentinel Migration Track https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-track.md
The workbook helps you to:
This article describes how to track your migration with the **Microsoft Sentinel Deployment and Migration** workbook, how to customize and manage the workbook, and how to use the workbook tabs to deploy and monitor data connectors, analytics, incidents, playbooks, automation rules, UEBA, and data management. Learn more about how to use [Azure Monitor workbooks](monitor-your-data.md) in Microsoft Sentinel.
-## Deploy the workbook content
+## Deploy the workbook content and view the workbook
1. In the Azure portal, select Microsoft Sentinel and then select **Workbooks**.
1. From the search bar, search for `migration`.
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-catalog.md
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | |||||
-|**Apache Log4j Vulnerability Detection** | Analytics rules, hunting queries | Application, Security - Threat Protection, Security - Vulnerability Management | Microsoft|
+|**Apache Log4j Vulnerability Detection** | Analytics rules, hunting queries, workbooks, playbooks | Application, Security - Threat Protection, Security - Vulnerability Management | Microsoft|
|**Cybersecurity Maturity Model Certification (CMMC)** | [Analytics rules, workbook, playbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-cybersecurity-maturity-model-certification-cmmc/ba-p/2111184) | Compliance | Microsoft| | **IoT/OT Threat Monitoring with Defender for IoT** | [Analytics rules, playbooks, workbook](iot-solution.md) | Internet of Things (IoT), Security - Threat Protection | Microsoft | |**Maturity Model for Event Log Management M2131** | [Analytics rules, hunting queries, playbooks, workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/modernize-log-management-with-the-maturity-model-for-event-log/ba-p/3072842) | Compliance | Microsoft|
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Corelight for Microsoft Sentinel**|Data connector, workbooks, analytics rules, hunting queries, parser | IT Operations, Security - Network | [Zeek Network](https://support.corelight.com/)|
-## Zscalar
+## Zscaler
|Name |Includes |Categories |Supported by | |||||
-|**Zscalar Private Access**|Data connector, workbook, analytics rules, hunting queries, parser | Security - Network | Microsoft|
+|**Zscaler Private Access**|Data connector, workbook, analytics rules, hunting queries, parser | Security - Network | Microsoft|
## Next steps
service-fabric How To Managed Cluster Grant Access Other Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-grant-access-other-resources.md
Title: Grant an application access to other Azure resources on a Service Fabric managed cluster
-description: This article explains how to grant your managed-identity-enabled Service Fabric application access to other Azure resources supporting Azure Active Directory-based authentication on a Service Fabric managed cluster.
-- Previously updated : 10/05/2021
+ Title: Grant access to Azure resources on a Service Fabric cluster
+description: Learn how to grant a managed-identity-enabled Service Fabric application access to other Azure resources that support Azure Active Directory authentication.
++++ Last updated : 06/01/2022
-# Granting a Service Fabric application's managed identity access to Azure resources on a Service Fabric managed cluster
+# Grant a Service Fabric application access to Azure resources on a Service Fabric cluster
+
+Before an application can use its managed identity to access other resources, grant permissions to that identity on the protected Azure resource being accessed. Granting permissions is typically a management action on the *control plane* of the Azure service that owns the protected resource, routed through Azure Resource Manager, which enforces any applicable role-based access checks.
+
+The exact sequence of steps depends on the type of Azure resource being accessed and the language and client used to grant permissions. This article assumes a user-assigned identity assigned to the application and includes several examples. Consult the documentation of the respective Azure services for up-to-date instructions on granting permissions.
-Before the application can use its managed identity to access other resources, permissions must be granted to that identity on the protected Azure resource being accessed. Granting permissions is typically a management action on the 'control plane' of the Azure service owning the protected resource routed via Azure Resource Manager, which will enforce any applicable role-based access checking.
+## Grant access to Azure Storage
-The exact sequence of steps will then depend on the type of Azure resource being accessed, as well as the language/client used to grant permissions. The remainder of the article assumes a user-assigned identity assigned to the application and includes several typical examples for your convenience, but it is in no way an exhaustive reference for this topic; consult the documentation of the respective Azure services for up-to-date instructions on granting permissions.
+You can use the Service Fabric application's managed identity (user-assigned in this case) to retrieve data from an Azure Storage blob. Grant the identity the required permissions in the [Azure portal](https://portal.azure.com/) by using the following steps:
-## Granting access to Azure Storage
-You can use the Service Fabric application's managed identity (user-assigned in this case) to retrieve the data from an Azure storage blob. Grant the identity the required permissions in the Azure portal with the following steps:
+1. Navigate to the storage account.
+1. Select the Access Control (IAM) link in the left panel.
+1. (Optional) Check existing access: select **System-assigned** or **User-assigned** managed identity in the **Find** control. Select the appropriate identity from the ensuing result list.
+1. Select **Add** > **Add role assignment** on top of the page to add a new role assignment for the application's identity.
+1. Under **Role**, from the dropdown list, select **Storage Blob Data Reader**.
+1. In the next dropdown list, under **Assign access to**, choose **User assigned managed identity**.
+1. Next, ensure that the proper subscription is listed in the **Subscription** dropdown list, and then set **Resource Group** to **All resource groups**.
+1. Under **Select**, choose the user-assigned identity (UAI) that corresponds to the Service Fabric application, and then select **Save**.
-1. Navigate to the storage account
-2. Click the Access control (IAM) link in the left panel.
-3. (optional) Check existing access: select System- or User-assigned managed identity in the 'Find' control; select the appropriate identity from the ensuing result list
-4. Click + Add role assignment on top of the page to add a new role assignment for the application's identity.
-Under Role, from the dropdown, select Storage Blob Data Reader.
-5. In the next dropdown, under Assign access to, choose `User assigned managed identity`.
-6. Next, ensure the proper subscription is listed in Subscription dropdown and then set Resource Group to All resource groups.
-7. Under Select, choose the UAI corresponding to the Service Fabric application and then click Save.
+Support for system-assigned Service Fabric managed identities doesn't include integration in the Azure portal. If your application uses a system-assigned identity, first find the client ID of the application's identity, and then repeat the steps above, but select the **Azure AD user, group, or service principal** option in the **Find** control.
-Support for system-assigned Service Fabric managed identities does not include integration in the Azure portal; if your application uses a system-assigned identity, you will have to find first the client ID of the application's identity, and then repeat the steps above but selecting the `Azure AD user, group, or service principal` option in the Find control.
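If you prefer scripting to the portal, the same role assignment can be made with the Azure CLI. The following is a minimal sketch only; the resource group, identity, and storage account names are placeholders for your own values.

```bash
# Look up the principal ID of the application's user-assigned identity (placeholder names).
PRINCIPAL_ID=$(az identity show \
  --resource-group <your-resource-group> \
  --name <your-user-assigned-identity> \
  --query principalId --output tsv)

# Get the resource ID of the target storage account.
STORAGE_ID=$(az storage account show \
  --resource-group <your-resource-group> \
  --name <your-storage-account> \
  --query id --output tsv)

# Grant the identity Storage Blob Data Reader on the storage account.
az role assignment create \
  --assignee-object-id "$PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope "$STORAGE_ID"
```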
+## Grant access to Azure Key Vault
-## Granting access to Azure Key Vault
-Similarly with accessing storage, you can leverage the managed identity of a Service Fabric application to access an Azure key vault. The steps for granting access in the Azure portal are similar to those listed above, and won't be repeated here. Refer to the image below for differences.
+As with storage, you can use the managed identity of a Service Fabric application to access an Azure key vault. The steps for granting access in the Azure portal are similar to the steps listed above. Refer to the following image for the differences.
-![Key Vault access policy](../key-vault/media/vs-secure-secret-appsettings/add-keyvault-access-policy.png)
+![Screenshot shows the Key Vault with Access policies selected.](../key-vault/media/vs-secure-secret-appsettings/add-keyvault-access-policy.png)
-The following example illustrates granting access to a vault via a template deployment; add the snippet(s) below as another entry under the `resources` element of the template. The sample demonstrates access granting for both user-assigned and system-assigned identity types, respectively - choose the applicable one.
+The following example illustrates granting access to a vault by using a template deployment. Add the snippets below as another entry under the `resources` element of the template. The sample demonstrates access granting for both user-assigned and system-assigned identity types, respectively. Choose the applicable one.
```json # under 'variables':
The following example illustrates granting access to a vault via a template depl
} }, ```
-And for system-assigned managed identities:
+
+For system-assigned managed identities:
+ ```json # under 'variables': "variables": {
And for system-assigned managed identities:
} ```
-For more details, please see [Vaults - Update Access Policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
+For more information, see [Vaults - Update Access Policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
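As an alternative to the template snippets, the access policy can also be granted from the Azure CLI. The following is a minimal sketch with placeholder values; adjust the secret permissions to match what the application actually needs.

```bash
# Grant the application's managed identity permission to read secrets from the vault.
az keyvault set-policy \
  --name <your-key-vault-name> \
  --object-id <principal-id-of-the-managed-identity> \
  --secret-permissions get list
```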
## Next steps
+
* [Deploy an application with Managed Identity to a Service Fabric managed cluster](how-to-managed-cluster-application-managed-identity.md)
service-fabric Service Fabric Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-security.md
Customers should configure their Azure-hosted workloads and on-premises applicat
## Windows Defender
-By default, Windows Defender antivirus is installed on Windows Server 2016. For details, see [Windows Defender Antivirus on Windows Server 2016](/windows/security/threat-protection/windows-defender-antivirus/windows-defender-antivirus-on-windows-server-2016). The user interface is installed by default on some SKUs, but is not required. To reduce any performance impact and resource consumption overhead incurred by Windows Defender, and if your security policies allow you to exclude processes and paths for open-source software, declare the following Virtual Machine Scale Set Extension Resource Manager template properties to exclude your Service Fabric cluster from scans:
+By default, Windows Defender antivirus is installed on Windows Server 2016. For details, see [Windows Defender Antivirus on Windows Server 2016](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows). The user interface is installed by default on some SKUs, but is not required. To reduce any performance impact and resource consumption overhead incurred by Windows Defender, and if your security policies allow you to exclude processes and paths for open-source software, declare the following Virtual Machine Scale Set Extension Resource Manager template properties to exclude your Service Fabric cluster from scans:
```json
service-fabric Service Fabric Cluster Upgrade Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-upgrade-os.md
Title: Upgrading Linux OS for Azure Service Fabric
-description: Learn about options for migrating your Azure Service Fabric cluster to another Linux OS
- Previously updated : 09/14/2021
+ Title: Upgrade Linux OS for Azure Service Fabric
+description: Learn about options for migrating your Azure Service Fabric cluster to another Linux operating system.
++++ Last updated : 06/01/2022
-# Upgrading Linux OS for Azure Service Fabric
+# Upgrade Linux OS for Azure Service Fabric
-This document describes the guide to migrate your Azure Service Fabric for Linux cluster from Ubuntu version 16.04 LTS to 18.04 LTS. Each OS (operating system) version requires a distinct SF runtime package, which requires the steps described in this document to facilitate a smooth migration.
+This document describes how to migrate your Azure Service Fabric for Linux cluster from Ubuntu version 16.04 LTS to 18.04 LTS. Each operating system (OS) version requires a different Service Fabric runtime package. This article describes the steps required to facilitate a smooth migration to the newer version.
-## Overview
+## Approach to migration
-The general approach is to:
+The general approach to the migration follows these steps:
-1. Switch the Service Fabric cluster ARM (Azure Resource Manager) resource "vmImage" to "Ubuntu18_04" to pull future code upgrades for this OS version. This temporary OS mismatch against existing node types will block automatic code upgrade rollouts to ensure safe rollover.
+1. Switch the Service Fabric cluster Azure Resource Manager resource `vmImage` to `Ubuntu18_04`. This setting pulls future code upgrades for this OS version. This temporary OS mismatch against existing node types blocks automatic code upgrade rollouts to ensure safe rollover.
- * Avoid issuing manual SF cluster code upgrades during the OS migration. Doing so may cause the old node type nodes to enter a state that will require human intervention.
+ > [!TIP]
+ > Avoid issuing manual Service Fabric cluster code upgrades during the OS migration. Doing so may cause the old node type nodes to enter a state that requires human intervention.
-2. For each node type in the cluster, create another node type targeting the Ubuntu 18.04 OS image for the underlying Virtual Machine Scale Set. Each new node type will assume the role of its old counterpart.
+1. For each node type in the cluster, create another node type that targets the Ubuntu 18.04 OS image for the underlying Virtual Machine Scale Set. Each new node type assumes the role of its old counterpart.
- * A new primary node type will have to be created to replace the old node type marked as "isPrimary": true.
-
- * For each additional non-primary node type, these nodes types will similarly be marked "isPrimary": false.
+ * A new primary node type has to be created to replace the old node type marked as `isPrimary: true`.
+   * Each new non-primary node type is similarly marked `isPrimary: false`.
+   * After the new target OS node type is created, ensure that existing workloads continue to function correctly. If you observe issues, address the required changes in the app or pre-installed machine packages before removing the old node type.
- * Ensure after the new target OS node type is created that existing workloads continue to function correctly. If issues are observed, address the changes required in the app or pre-installed machine packages before proceeding with removing the old node type.
-3. Mark the old primary node type "isPrimary": false. This will result in a long-running set of upgrades to transition all seed nodes.
-4. (For Bronze durability node types ONLY): Connect to the cluster via [sfctl](service-fabric-sfctl.md) / [PowerShell](/powershell/module/ServiceFabric) / [FabricClient](/dotnet/api/system.fabric.fabricclient) and disable all nodes in the old node type.
-5. Remove the old node types.
-
-> [!NOTE]
-> Az PowerShell generates a new dns name for the added node type so external traffic will have to be redirected to this endpoint.
+1. Mark the old primary node type `isPrimary: false`. This setting results in a long-running set of upgrades to transition all seed nodes.
+1. (For Bronze durability node types ONLY): Connect to the cluster by using [sfctl](service-fabric-sfctl.md), [PowerShell](/powershell/module/ServiceFabric), or [FabricClient](/dotnet/api/system.fabric.fabricclient). Disable all nodes in the old node type.
+1. Remove the old node types.
+[Az PowerShell](/powershell/azure/) generates a new DNS name for the added node type. Redirect external traffic to this endpoint.
## Ease of use steps for non-production clusters
-> [!NOTE]
-> The steps below demonstrate how to quickly prototype the node type migration via Az PowerShell cmdlets in a TEST-only cluster. For production clusters facing real business traffic, the same steps are expected to be done by issuing ARM upgrades, to preserve replayability & a consistent declarative source of truth.
+This procedure demonstrates how to quickly prototype the node type migration by using Az PowerShell cmdlets in a TEST-only cluster. For production clusters that face real business traffic, perform the same steps by issuing Resource Manager upgrades instead, to preserve repeatability and a consistent declarative source of truth.
-1. Update vmImage setting on Service Fabric cluster resource using [Update-AzServiceFabricVmImage](/powershell/module/az.servicefabric/update-azservicefabricvmimage):
+1. Update the `vmImage` setting on the Service Fabric cluster resource using [Update-AzServiceFabricVmImage](/powershell/module/az.servicefabric/update-azservicefabricvmimage):
- [Azure PowerShell](/powershell/azure/install-az-ps):
```powershell # Replace subscriptionId, resourceGroup, clusterName with ones corresponding to your cluster. $subscriptionId="cea219db-0593-4b27-8bfa-a703332bf433"
The general approach is to:
# dns-Contoso01SFCluster-nt1u18.westus2.cloudapp.azure.com ```
-3. Update old primary node type to non-primary in order to roll over seed nodes and system services to the new node type:
+3. Update the old primary node type to non-primary in order to roll over seed nodes and system services to the new node type:
```powershell # Query to ensure background upgrades are done.
The general approach is to:
Get-AzServiceFabricCluster -ResourceGroupName $resourceGroup ```
- Example output:
- ```
+ Your output should look like this example:
+
+ ```output
NodeTypes : NodeTypeDescription : Name : nt1
The general approach is to:
ReverseProxyEndpointPort : ```
-4. To remove Bronze durability node types, first disable the nodes before proceeding to remove the old node type. Connect via *ssh* to a cluster node and run the following commands:
+4. To remove Bronze durability node types, disable the nodes before proceeding to remove the old node type. Connect to a cluster node by using *ssh*. Run the following commands:
```bash # as root user:
The general approach is to:
for n in $nodes; do echo "Disabling $n"; sfctl node disable --node-name $n --deactivation-intent RemoveNode --timeout 300; done ```
-5. Remove the previous node type by removing the SF cluster resource node type attribute and decommissioning the associated virtual machine scale set & networking resources.
+5. Remove the previous node type by removing the Service Fabric cluster resource node type attribute and decommissioning the associated virtual machine scale set and networking resources:
```powershell $resourceGroup="Group1"
The general approach is to:
``` > [!NOTE]
- > In some cases this may hit the below error. In such case you may find through Service Fabric Explorer (SFX) the InfrastructureService for the removed node type is in error state. To resolve this, retry the removal.
- ```
- Remove-AzServiceFabricNodeType : Code: ClusterUpgradeFailed, Message: Long running operation failed with status 'Failed'
- ```
+ > In some cases this command might hit the following error:
+ >
+ > ```powershell
+ > Remove-AzServiceFabricNodeType : Code: ClusterUpgradeFailed, Message: Long running operation failed with status 'Failed'
+ > ```
+ >
+   > You might find, by using Service Fabric Explorer (SFX), that the InfrastructureService for the removed node type is in an error state. To resolve this, retry the removal.
-Once it has been confirmed workloads have been successfully migrated to the new node types and old node types have been purged, the cluster is clear to proceed with subsequent Service Fabric runtime code version & configuration upgrades.
+Confirm that workloads have been successfully migrated to the new node types and old node types have been purged. Then the cluster can proceed with Service Fabric runtime code version and configuration upgrades.
## Next steps
Once it has been confirmed workloads have been successfully migrated to the new
* Learn more about [Service Fabric cluster scaling](service-fabric-cluster-scaling.md).
* [Scale your cluster in and out](service-fabric-cluster-scale-in-out.md)
* [Remove a node type in Azure Service Fabric](service-fabric-how-to-remove-node-type.md)
-
spring-cloud How To Appdynamics Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-appdynamics-java-agent-monitor.md
Previously updated : 10/19/2021 Last updated : 06/07/2022 ms.devlang: azurecli
To activate an application through the Azure portal, use the following steps.
## Automate provisioning
-You can also run a provisioning automation pipeline using Terraform or an Azure Resource Manager template (ARM template). This pipeline can provide a complete hands-off experience to instrument and monitor any new applications that you create and deploy.
+You can also run a provisioning automation pipeline using Terraform, Bicep, or an Azure Resource Manager template (ARM template). This pipeline can provide a complete hands-off experience to instrument and monitor any new applications that you create and deploy.
### Automate provisioning using Terraform
resource "azurerm_spring_cloud_java_deployment" "example" {
} ```
+### Automate provisioning using Bicep
+
+To configure the environment variables in a Bicep file, add the following code to the file, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=bicep).
+
+```bicep
+deploymentSettings: {
+ environmentVariables: {
+ APPDYNAMICS_AGENT_APPLICATION_NAME : '<your-app-name>'
+ APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY : '<your-agent-access-key>'
+ APPDYNAMICS_AGENT_ACCOUNT_NAME : '<your-agent-account-name>'
+ APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME : 'true'
+ APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX : '<your-agent-node-name>'
+ APPDYNAMICS_AGENT_TIER_NAME : '<your-agent-tier-name>'
+ APPDYNAMICS_CONTROLLER_HOST_NAME : '<your-AppDynamics-controller-host-name>'
+ APPDYNAMICS_CONTROLLER_SSL_ENABLED : 'true'
+ APPDYNAMICS_CONTROLLER_PORT : '443'
+ }
+ jvmOptions: '-javaagent:/opt/agents/appdynamics/java/javaagent.jar'
+}
+```
+ ### Automate provisioning using an ARM template To configure the environment variables in an ARM template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=json).
-```ARM template
+```json
"deploymentSettings": { "environmentVariables": { "APPDYNAMICS_AGENT_APPLICATION_NAME" : "<your-app-name>",
You can also see the garbage collection process, as shown in this screenshot:
:::image type="content" source="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-customers-service-garbage-collection.jpg" alt-text="AppDynamics screenshot showing the Garbage Collection section of the Memory page." lightbox="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-customers-service-garbage-collection.jpg":::
-The following screenshot shows the **Slow Transactions** page:
+The following screenshot shows the **Slow Transactions** page:
:::image type="content" source="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-customers-service-slowest-transactions.jpg" alt-text="AppDynamics screenshot showing the Slow Transactions page." lightbox="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-customers-service-slowest-transactions.jpg":::
spring-cloud How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-application-insights.md
Previously updated : 02/09/2022 Last updated : 06/08/2022 zone_pivot_groups: spring-cloud-tier-selection
Enable the Java In-Process Agent by using the following procedure.
1. Select **Save** to save the change.
-> [!Note]
+> [!NOTE]
> Do not use the same Application Insights instance in different Azure Spring Apps instances, or you'll see mixed data. ::: zone-end
az spring build-service builder buildpack-binding delete \
::: zone pivot="sc-standard-tier"
-The following sections describe how to automate your deployment using Azure Resource Manager templates (ARM templates) or Terraform.
+The following sections describe how to automate your deployment using Bicep, Azure Resource Manager templates (ARM templates) or Terraform.
+
+### Bicep
+
+To deploy using a Bicep file, copy the following content into a *main.bicep* file. For more information, see [Microsoft.AppPlatform Spring/monitoringSettings](/azure/templates/microsoft.appplatform/spring/monitoringsettings).
+
+```bicep
+param location string = resourceGroup().location
+
+resource customize_this 'Microsoft.AppPlatform/Spring@2020-07-01' = {
+ name: 'customize this'
+ location: location
+ properties: {}
+}
+
+resource customize_this_default 'Microsoft.AppPlatform/Spring/monitoringSettings@2020-11-01-preview' = {
+ parent: customize_this
+ name: 'default'
+ properties: {
+ appInsightsInstrumentationKey: '00000000-0000-0000-0000-000000000000'
+ appInsightsSamplingRate: 88
+ }
+}
+```
### ARM templates
To deploy using an ARM template, copy the following content into an *azuredeploy
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "type": "Microsoft.AppPlatform/Spring",
- "name": "customize this",
- "apiVersion": "2020-07-01",
- "location": "[resourceGroup().location]",
- "resources": [
- {
- "type": "monitoringSettings",
- "apiVersion": "2020-11-01-preview",
- "name": "default",
- "properties": {
- "appInsightsInstrumentationKey": "00000000-0000-0000-0000-000000000000",
- "appInsightsSamplingRate": 88.0
- },
- "dependsOn": [
- "[resourceId('Microsoft.AppPlatform/Spring', 'customize this')]"
- ]
- }
- ],
- "properties": {}
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.AppPlatform/Spring",
+ "apiVersion": "2020-07-01",
+ "name": "customize this",
+ "location": "[parameters('location')]",
+ "properties": {}
+ },
+ {
+ "type": "Microsoft.AppPlatform/Spring/monitoringSettings",
+ "apiVersion": "2020-11-01-preview",
+ "name": "[format('{0}/{1}', 'customize this', 'default')]",
+ "properties": {
+ "appInsightsInstrumentationKey": "00000000-0000-0000-0000-000000000000",
+ "appInsightsSamplingRate": 88
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.AppPlatform/Spring', 'customize this')]"
+ ]
+ }
+ ]
}+ ``` ### Terraform
Automation in Enterprise tier is pending support. Documentation will be added as
The Java agent will be updated/upgraded regularly with the JDK, which may affect the following scenarios.
-> [!Note]
+> [!NOTE]
> The JDK version will be updated/upgraded quarterly per year. * Existing applications that use the Java agent before updating/upgrading won't be affected.
The Java agent will be updated/upgraded when the buildpack is updated.
Azure Spring Apps has enabled a hot-loading mechanism to adjust the settings of agent configuration without restart of applications.
-> [!Note]
+> [!NOTE]
> The hot-loading mechanism has a delay in minutes. * When the Java agent has been previously enabled, changes to the Application Insights instance and/or SamplingRate do NOT require applications to be restarted.
spring-cloud How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-dynatrace-one-agent-monitor.md
Previously updated : 08/31/2021 Last updated : 06/07/2022 ms.devlang: azurecli
The following sections describe how to activate Dynatrace OneAgent.
--resource-group <your-resource-group-name> \ --service <your-Azure-Spring-Apps-name> \ --name <your-application-name> \
- --is-public true
+ --is-public true
``` ### Determine the values for the required environment variables
To add the key/value pairs using the Azure portal, use the following steps:
## Automate provisioning
-Using Terraform or an Azure Resource Manager template (ARM template), you can also run a provisioning automation pipeline. This pipeline can provide a complete hands-off experience to instrument and monitor any new applications that you create and deploy.
+Using Terraform, Bicep, or an Azure Resource Manager template (ARM template), you can also run a provisioning automation pipeline. This pipeline can provide a complete hands-off experience to instrument and monitor any new applications that you create and deploy.
### Automate provisioning using Terraform
environment_variables = {
} ```
+### Automate provisioning using a Bicep file
+
+To configure the environment variables in a Bicep file, add the following code to the file, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=bicep).
+
+```bicep
+environmentVariables: {
+ DT_TENANT: '<your-environment-ID>'
+ DT_TENANTTOKEN: '<your-tenant-token>'
+ DT_CONNECTION_POINT: '<your-communication-endpoint>'
+ DT_CLUSTER_ID: '<your-cluster-ID>'
+}
+```
+ ### Automate provisioning using an ARM template To configure the environment variables in an ARM template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=json).
-```ARM template
+```json
"environmentVariables": { "DT_TENANT": "<your-environment-ID>", "DT_TENANTTOKEN": "<your-tenant-token>",
spring-cloud How To Elastic Apm Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-elastic-apm-java-agent-monitor.md
Previously updated : 12/07/2021 Last updated : 06/07/2022
Before proceeding, you'll need your Elastic APM server connectivity information
## Automate provisioning
-You can also run a provisioning automation pipeline using Terraform or an Azure Resource Manager template (ARM template). This pipeline can provide a complete hands-off experience to instrument and monitor any new applications that you create and deploy.
+You can also run a provisioning automation pipeline using Terraform, Bicep, or an Azure Resource Manager template (ARM template). This pipeline can provide a complete hands-off experience to instrument and monitor any new applications that you create and deploy.
### Automate provisioning using Terraform
resource "azurerm_spring_cloud_java_deployment" "example" {
} ```
+### Automate provisioning using a Bicep file
+
+To configure the environment variables in a Bicep file, add the following code to the file, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=bicep).
+
+```bicep
+deploymentSettings: {
+ environmentVariables: {
+    ELASTIC_APM_SERVICE_NAME: '<your-app-name>'
+    ELASTIC_APM_APPLICATION_PACKAGES: '<your-app-package>'
+    ELASTIC_APM_SERVER_URL: '<your-Elastic-APM-server-URL>'
+    ELASTIC_APM_SECRET_TOKEN: '<your-Elastic-APM-secret-token>'
+  }
+  jvmOptions: '-javaagent:<elastic-agent-location>'
+ ...
+}
+```
+ ### Automate provisioning using an ARM template To configure the environment variables in an ARM template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=json).
-```arm
+```json
"deploymentSettings": { "environmentVariables": { "ELASTIC_APM_SERVICE_NAME"="<your-app-name>",
spring-cloud How To New Relic Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-new-relic-monitor.md
Previously updated : 04/07/2021 Last updated : 06/08/2021 ms.devlang: azurecli
You can also activate this agent from portal with the following procedure.
1. Find the app from **Settings**/**Apps** in the navigation pane.
- [ ![Find app to monitor](media/new-relic-monitoring/find-app.png) ](media/new-relic-monitoring/find-app.png)
+ [![Find app to monitor](media/new-relic-monitoring/find-app.png)](media/new-relic-monitoring/find-app.png)
2. Select the application to jump to the **Overview** page.
- [ ![Overview page](media/new-relic-monitoring/overview-page.png) ](media/new-relic-monitoring/overview-page.png)
+ [![Overview page](media/new-relic-monitoring/overview-page.png)](media/new-relic-monitoring/overview-page.png)
3. Select **Configuration** in the left navigation pane to add/update/delete the **Environment Variables** of the application.
- [ ![Update environment](media/new-relic-monitoring/configurations-update-environment.png) ](media/new-relic-monitoring/configurations-update-environment.png)
+ [![Update environment](media/new-relic-monitoring/configurations-update-environment.png)](media/new-relic-monitoring/configurations-update-environment.png)
4. Select **General settings** to add/update/delete the **JVM options** of the application.
- [ ![Update JVM Option](media/new-relic-monitoring/update-jvm-option.png) ](media/new-relic-monitoring/update-jvm-option.png)
+ [![Update JVM Option](media/new-relic-monitoring/update-jvm-option.png)](media/new-relic-monitoring/update-jvm-option.png)
5. View the application api/gateway **Summary** page from the New Relic dashboard.
- [ ![App summary page](media/new-relic-monitoring/app-summary-page.png) ](media/new-relic-monitoring/app-summary-page.png)
+ [![App summary page](media/new-relic-monitoring/app-summary-page.png)](media/new-relic-monitoring/app-summary-page.png)
6. View the application customers-service **Summary** page from the New Relic dashboard.
-
- [ ![Customers-service page](media/new-relic-monitoring/customers-service.png) ](media/new-relic-monitoring/customers-service.png)
-7. View the **Service Map** page from the New Relic dashboard.
+ [![Customers-service page](media/new-relic-monitoring/customers-service.png)](media/new-relic-monitoring/customers-service.png)
- [ ![Service map page](media/new-relic-monitoring/service-map.png) ](media/new-relic-monitoring/service-map.png)
+7. View the **Service Map** page from the New Relic dashboard.
+
+ [![Service map page](media/new-relic-monitoring/service-map.png)](media/new-relic-monitoring/service-map.png)
8. View the **JVMs** page of the application from the New Relic dashboard.
- [ ![JVM page](media/new-relic-monitoring/jvm-page.png) ](media/new-relic-monitoring/jvm-page.png)
+ [![JVM page](media/new-relic-monitoring/jvm-page.png)](media/new-relic-monitoring/jvm-page.png)
9. View the application profile from the New Relic dashboard.
- [ ![Application profile](media/new-relic-monitoring/profile-app.png) ](media/new-relic-monitoring/profile-app.png)
+ [![Application profile](media/new-relic-monitoring/profile-app.png)](media/new-relic-monitoring/profile-app.png)
## Automate provisioning
-You can also run a provisioning automation pipeline using Terraform or an Azure Resource Manager template (ARM template). This pipeline can provide a complete hands-off experience to instrument and monitor any new applications that you create and deploy.
+You can also run a provisioning automation pipeline using Terraform, Bicep, or an Azure Resource Manager template (ARM template). This pipeline can provide a complete hands-off experience to instrument and monitor any new applications that you create and deploy.
### Automate provisioning using Terraform
resource "azurerm_spring_cloud_java_deployment" "example" {
} ```
+### Automate provisioning using a Bicep file
+
+To configure the environment variables in a Bicep file, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=bicep).
+
+```bicep
+deploymentSettings: {
+ environmentVariables: {
+    NEW_RELIC_APP_NAME: '<app-name>'
+    NEW_RELIC_LICENSE_KEY: '<new-relic-license-key>'
+  }
+  jvmOptions: '-javaagent:/opt/agents/newrelic/java/newrelic-agent.jar'
+ ...
+}
+```
+ ### Automate provisioning using an ARM template To configure the environment variables in an ARM template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=json).
-```ARM template
+```json
"deploymentSettings": { "environmentVariables": { "NEW_RELIC_APP_NAME" : "<app-name>",
storage Data Lake Storage Migrate Gen1 To Gen2 Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md
For more information, see [Manage Azure Data Lake Analytics using the Azure port
Before you begin, review the two migration options below, and decide whether to only copy data from Gen1 to Gen2 (recommended) or perform a complete migration.
+> [!NOTE]
+> No matter which option you select, a container named **gen1** will be created on the Gen2 account, and all data from the Gen1 account will be copied to this new 'gen1' container. When the migration is complete, in order to find the data on a path that existed on Gen1, you must add the prefix **gen1/** to the same path to access it on Gen2. For example, a path that was named 'FolderRoot/FolderChild/FileName.csv' on Gen1 will be available at 'gen1/FolderRoot/FolderChild/FileName.csv' on Gen2. Container names can't be renamed on Gen2, so this **gen1** container on Gen2 can't be renamed post migration. However, the data can be copied to a new container in Gen2 if needed.
+ ## Choose a migration option **Option 1: Copy data only (recommended).** In this option, data will be copied from Gen1 to Gen2. As the data is being copied, the Gen1 account will become read-only. After the data is copied, both the Gen1 and Gen2 accounts will be accessible. However, you must update the applications and compute workloads to use the new ADLS Gen2 endpoint.
storage Storage Manage Find Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-manage-find-blobs.md
The below table shows all the valid operators for `Find Blobs by Tags`:
| > | Greater than | `"Date" > '2018-06-18'` | | >= | Greater than or equal | `"Priority" >= '5'` | | < | Less than | `"Age" < '32'` |
-| <= | Less than or equal | `"Company" <= 'Contoso'` |
+| <= | Less than or equal | `"Priority" <= '5'` |
| AND | Logical and | `"Rank" >= '010' AND "Rank" < '100'` | | @container | Scope to a specific container | `@container = 'videofiles' AND "status" = 'done'` |
The below table shows the valid operators for conditional operations:
| > | Greater than | `"Date" > '2018-06-18'` | | >= | Greater than or equal | `"Priority" >= '5'` | | < | Less than | `"Age" < '32'` |
-| <= | Less than or equal | `"Company" <= 'Contoso'` |
+| <= | Less than or equal | `"Priority" <= '5'` |
| AND | Logical and | `"Rank" >= '010' AND "Rank" < '100'` | | OR | Logical or | `"Status" = 'Done' OR "Priority" >= '05'` |
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Create a Python application named *blob-quickstart-v12*.
cd blob-quickstart-v12 ```
-1. In side the *blob-quickstart-v12* directory, create another directory called *data*. This directory is where the blob data files will be created and stored.
-
- ```console
- mkdir data
- ```
- ### Install the package While still in the application directory, install the Azure Blob Storage client library for Python package by using the `pip install` command.
storage Storage Ref Azcopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy.md
description: This article provides reference information for the azcopy command.
Previously updated : 05/26/2022 Last updated : 06/09/2022
To report issues or to learn more about the tool, see [https://github.com/Azure/
- [azcopy jobs show](storage-ref-azcopy-jobs-show.md) - [azcopy list](storage-ref-azcopy-list.md) - [azcopy login](storage-ref-azcopy-login.md)
+- [azcopy login status](storage-ref-azcopy-login-status.md)
- [azcopy logout](storage-ref-azcopy-logout.md) - [azcopy make](storage-ref-azcopy-make.md) - [azcopy remove](storage-ref-azcopy-remove.md)
storage Storage Use Azcopy Blobs Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-copy.md
See these articles to configure settings, optimize performance, and troubleshoot
- [AzCopy configuration settings](storage-ref-azcopy-configuration-settings.md) - [Optimize the performance of AzCopy](storage-use-azcopy-optimize.md)-- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
+- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
+- [Troubleshoot problems with AzCopy v10](storage-use-azcopy-troubleshoot.md)
storage Storage Use Azcopy Blobs Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-download.md
See these articles to configure settings, optimize performance, and troubleshoot
- [AzCopy configuration settings](storage-ref-azcopy-configuration-settings.md) - [Optimize the performance of AzCopy](storage-use-azcopy-optimize.md)-- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
+- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
+- [Troubleshoot problems with AzCopy v10](storage-use-azcopy-troubleshoot.md)
storage Storage Use Azcopy Blobs Synchronize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-synchronize.md
See these articles to configure settings, optimize performance, and troubleshoot
- [AzCopy configuration settings](storage-ref-azcopy-configuration-settings.md) - [Optimize the performance of AzCopy](storage-use-azcopy-optimize.md)-- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
+- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
+- [Troubleshoot problems with AzCopy v10](storage-use-azcopy-troubleshoot.md)
storage Storage Use Azcopy Blobs Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-upload.md
See these articles to configure settings, optimize performance, and troubleshoot
- [AzCopy configuration settings](storage-ref-azcopy-configuration-settings.md) - [Optimize the performance of AzCopy](storage-use-azcopy-optimize.md)-- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
+- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
+- [Troubleshoot problems with AzCopy v10](storage-use-azcopy-troubleshoot.md)
storage Storage Use Azcopy Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-files.md
See these articles to configure settings, optimize performance, and troubleshoot
- [AzCopy configuration settings](storage-ref-azcopy-configuration-settings.md) - [Optimize the performance of AzCopy](storage-use-azcopy-optimize.md)-- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
+- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
+- [Troubleshoot problems with AzCopy v10](storage-use-azcopy-troubleshoot.md)
storage Storage Use Azcopy Google Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-google-cloud.md
See these articles to configure settings, optimize performance, and troubleshoot
- [AzCopy configuration settings](storage-ref-azcopy-configuration-settings.md) - [Optimize the performance of AzCopy](storage-use-azcopy-optimize.md)-- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
+- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
+- [Troubleshoot problems with AzCopy v10](storage-use-azcopy-troubleshoot.md)
storage Storage Use Azcopy Migrate On Premises Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-migrate-on-premises-data.md
For more information about AzCopy, see any of these articles:
- [Transfer data with AzCopy and Amazon S3 buckets](storage-use-azcopy-s3.md) -- [Configure, optimize, and troubleshoot AzCopy](storage-use-azcopy-configure.md)
+- [AzCopy configuration settings](storage-ref-azcopy-configuration-settings.md)
+
+- [Optimize the performance of AzCopy](storage-use-azcopy-optimize.md)
+
+- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
+
+- [Troubleshoot problems with AzCopy v10](storage-use-azcopy-troubleshoot.md)
storage Storage Use Azcopy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-troubleshoot.md
+
+ Title: Troubleshoot problems with AzCopy (Azure Storage) | Microsoft Docs
+description: Find workarounds to common issues with AzCopy v10.
+++ Last updated : 06/09/2022+++++
+# Troubleshoot problems with AzCopy v10
+
+This article describes common issues that you might encounter while using AzCopy, helps you to identify the causes of those issues, and then suggests ways to resolve them.
+
+## Identifying problems
+
+You can determine whether a job succeeds by looking at the exit code.
+
+If the exit code is `0-success`, then the job completed successfully.
+
+If the exit code is `error`, then examine the log file. Once you understand the exact error message, it becomes much easier to search for the right keywords and figure out the solution. To learn more, see [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md).
+
+If the exit code is `panic`, then check whether the log file exists. If the file doesn't exist, file a bug or reach out to support.
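For scripted transfers, a minimal bash sketch that branches on the exit code might look like the following; the source path, destination URL, and SAS token are placeholders.

```bash
# Run a copy job; the source, destination, and SAS token are placeholders.
azcopy copy "/data/source" "https://<account>.blob.core.windows.net/<container>?<SAS>" --recursive
EXIT_CODE=$?

if [ $EXIT_CODE -eq 0 ]; then
  echo "Job completed successfully."
else
  # By default, AzCopy writes job logs under ~/.azcopy; check the newest log for the error text.
  echo "Job failed with exit code $EXIT_CODE. Check the latest log in ~/.azcopy."
fi
```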
+
+## 403 errors
+
+It's common to encounter 403 errors. Sometimes they're benign and don't result in a failed transfer. For example, in AzCopy logs, you might see that a HEAD request received 403 errors. Those errors appear when AzCopy checks whether a resource is public. In most cases, you can ignore those instances.
+
+In some cases 403 errors can result in a failed transfer. If this happens, other attempts to transfer files will likely fail until you resolve the issue. 403 errors can occur as a result of authentication and authorization issues. They can also occur when requests are blocked due to the storage account firewall configuration.
+
+### Authentication / Authorization issues
+
+403 errors that prevent data transfer occur because of issues with SAS tokens, role-based access control (Azure RBAC) roles, and access control list (ACL) configurations.
+
+##### SAS tokens
+
+If you're using a shared access signature (SAS) token, verify the following:
+
+- The expiration and start times of the SAS token are appropriate.
+
+- You selected all the necessary permissions for the token.
+
+- You generated the token by using an official SDK or tool. Try Storage Explorer if you haven't already.
+
+##### Azure RBAC
+
+If you're using role-based access control (Azure RBAC) roles via the `azcopy login` command, verify that you have the appropriate Azure roles assigned to your identity (for example, the Storage Blob Data Contributor role).
+
+To learn more about Azure roles, see [Assign an Azure role for access to blob data](../blobs/assign-azure-role-data-access.md).
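One way to verify which roles are assigned to your identity on the storage account is with the Azure CLI. The following sketch uses placeholder values.

```bash
# Get the resource ID of the storage account (placeholder names).
STORAGE_ID=$(az storage account show \
  --resource-group <your-resource-group> \
  --name <your-storage-account> \
  --query id --output tsv)

# List the role assignments for your identity at that scope.
az role assignment list \
  --assignee <your-user-principal-name-or-object-id> \
  --scope "$STORAGE_ID" \
  --output table
```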
+
+##### ACLs
+
+If you're using access control lists (ACLs), verify that your identity appears in an ACL entry for each file or directory that you intend to access. Also, make sure that each ACL entry reflects the appropriate permission level.
+
+To learn more about ACLs and ACL entries, see [Access control lists (ACLs) in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control.md).
+
+To learn about how to incorporate Azure roles together with ACLs, and how the system evaluates them to make authorization decisions, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md).
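To spot-check the ACL on a specific path, one option is the Azure CLI, as in this sketch. It assumes the storage account has a hierarchical namespace and uses placeholder values.

```bash
# Show the owner, owning group, and ACL entries for a path in a Data Lake Storage Gen2 file system.
az storage fs access show \
  --account-name <your-storage-account> \
  --file-system <your-container> \
  --path <directory-or-file-path> \
  --auth-mode login
```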
+
+### Firewall and private endpoint issues
+
+If the storage account's firewall isn't configured to allow access from the machine where AzCopy is running, AzCopy operations return an HTTP 403 error.
+
+##### Transferring data from or to a local machine
+
+If you're uploading or downloading data between a storage account and an on-premises machine, make sure that the machine that runs AzCopy is able to access either the source or destination storage account. You might have to use IP network rules in the firewall settings of either the source **or** destination accounts to allow access from the public IP address of the machine.
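A sketch of adding your machine's public IP address to the storage account firewall with the Azure CLI follows; the resource names are placeholders, and the IP lookup service is just one example.

```bash
# Find the public IP address of the machine that runs AzCopy (any IP lookup service works).
MY_IP=$(curl -s https://api.ipify.org)

# Allow that address through the storage account firewall (placeholder resource names).
az storage account network-rule add \
  --resource-group <your-resource-group> \
  --account-name <your-storage-account> \
  --ip-address "$MY_IP"
```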
+
+##### Transferring data between storage accounts
+
+403 authorization errors can prevent you from transferring data between accounts by using the client machine where AzCopy is running.
+
+If you're copying data between storage accounts, make sure that the machine that runs AzCopy is able to access both the source **and** the destination account. You might have to use IP network rules in the firewall settings of both the source and destination accounts to allow access from the public IP address of the machine. The service will use the IP address of the AzCopy client machine to authorize the source to destination traffic. To learn how to add a public IP address to the firewall settings of a storage account, see [Grant access from an internet IP range](storage-network-security.md#grant-access-from-an-internet-ip-range).
+
+If your VM doesn't have, or can't have, a public IP address, consider using a private endpoint. See [Use private endpoints for Azure Storage](storage-private-endpoints.md).
+
+##### Using a Private link
+
+A [Private Link](../../private-link/private-link-overview.md) is at the virtual network (VNet) / subnet level. If you want AzCopy requests to go through a Private Link, then AzCopy must make those requests from a VM running in that VNet / subnet. For example, if you configure a Private Link in VNet1 / Subnet1 but the VM on which AzCopy runs is in VNet1 / Subnet2, then AzCopy requests won't use the Private Link and they're expected to fail.
+
+## Proxy-related errors
+
+If you encounter TCP errors such as `dial tcp: lookup proxy.x.x: no such host`, it means that your environment isn't configured to use the correct proxy, or you're using an advanced proxy that AzCopy doesn't recognize.
+
+You need to update the proxy settings to reflect the correct configurations. See [Configure proxy settings](storage-ref-azcopy-configuration-settings.md?toc=/azure/storage/blobs/toc.json#configure-proxy-settings).
+
+You can also bypass the proxy by setting the environment variable `NO_PROXY="*"`.
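A quick sketch of the two environment-variable options on Linux; the proxy address, source, and destination are placeholders.

```bash
# Option 1: point AzCopy at an explicit proxy for HTTPS traffic.
export HTTPS_PROXY="<proxy-host>:<proxy-port>"

# Option 2: bypass proxy lookup entirely (don't combine with option 1).
# export NO_PROXY="*"

azcopy copy "<source>" "<destination>"
```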
+
+Here are the endpoints that AzCopy needs to use:
+
+| Log in endpoints | Azure Storage endpoints |
+|||
+| `login.microsoftonline.com (global Azure)` | `(blob \| file \| dfs).core.windows.net (global Azure)` |
+| `login.chinacloudapi.cn (Azure China)` | `(blob \| file \| dfs).core.chinacloudapi.cn (Azure China)` |
+| `login.microsoftonline.de (Azure Germany)` | `(blob \| file \| dfs).core.cloudapi.de (Azure Germany)` |
+| `login.microsoftonline.us (Azure US Government)` | `(blob \| file \| dfs).core.usgovcloudapi.net (Azure US Government)` |
+
+## x509: certificate signed by unknown authority
+
+This error is often related to the use of a proxy, which is using a Secure Sockets Layer (SSL) certificate that isn't trusted by the operating system. Verify your settings and make sure that the certificate is trusted at the operating system level.
+
+We recommend adding the certificate to your machine's root certificate store as that's where the trusted authorities are kept.
+
+## Unrecognized Parameters
+
+If you receive an error message stating that your parameters aren't recognized, make sure that you're using the correct version of AzCopy. AzCopy V8 and earlier versions are deprecated. [AzCopy V10](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json) is the current version, and it's a complete rewrite that doesn't share any syntax with the previous versions. For help moving from V8, see the [migration guide](https://github.com/Azure/azure-storage-azcopy/blob/main/MigrationGuideV8toV10.md).
+
+Also, make sure to use the built-in help messages by adding the `-h` switch to any command (for example, `azcopy copy -h`). See [Get command help](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json#get-command-help). To view the same information online, see [azcopy copy](storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json).
+
+To help you understand commands, we provide an [interactive education tool](https://azcopyvnextrelease.z22.web.core.windows.net/) that demonstrates the most popular AzCopy commands and command flags. For worked examples, see [Transfer data](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json#transfer-data). If you have any questions, search the existing [GitHub issues](https://github.com/Azure/azure-storage-azcopy/issues) first to see if they were answered already.
+
+## Conditional access policy error
+
+You can receive the following error when you invoke the `azcopy login` command.
+
+"Failed to perform login command:
+failed to login with tenantID "common", Azure directory endpoint "https://login.microsoftonline.com", autorest/adal/devicetoken: -REDACTED- AADSTS50005: User tried to log in to a device from a platform (Unknown) that's currently not supported through Conditional Access policy. Supported device platforms are: iOS, Android, Mac, and Windows flavors.
+Trace ID: -REDACTED-
+Correlation ID: -REDACTED-
+Timestamp: 2021-01-05 01:58:28Z"
+
+This error means that your administrator has configured a conditional access policy that specifies what type of device you can log in from. AzCopy uses the device code flow, which can't guarantee that the machine where you're using the AzCopy tool is also where you're logging in.
+
+If your device is among the list of supported platforms, then you might be able to use Storage Explorer, which integrates AzCopy for all data transfers (it passes tokens to AzCopy via the secret store) but provides a login workflow that supports passing device information. AzCopy itself also supports managed identities and service principals, which could be used as an alternative.
+
+If your device isn't among the list of supported platforms, contact your administrator for help.
+
+## Server busy, network errors, timeouts
+
+If you see a large number of failed requests with the `503 Server Busy` status, your requests are being throttled by the storage service. If you're seeing network errors or timeouts, you might be attempting to push too much data through your infrastructure, and that infrastructure is having difficulty handling it. In all cases, the workaround is similar.
+
+If you see a large file failing over and over again because certain chunks fail each time, try limiting the number of concurrent network connections or capping the throughput (for example, by setting the `AZCOPY_CONCURRENCY_VALUE` environment variable or by using the `--cap-mbps` flag), depending on your specific case. We suggest that you lower performance drastically at first, observe whether that solves the initial problem, and then ramp performance back up until an overall balance is achieved.
+
+For more information, see [Optimize the performance of AzCopy with Azure Storage](storage-use-azcopy-optimize.md).
+
+If you're copying data between accounts by using AzCopy, the quality and reliability of the network from which you run AzCopy might impact the overall performance. Even though data transfers from server to server, AzCopy does initiate calls for each file to copy between service endpoints.
+
+## Known constraints with AzCopy
+
+- Copying data from government clouds to commercial clouds isn't supported. However, copying data from commercial clouds to government clouds is supported.
+
+- Asynchronous service-side copy isn't supported. AzCopy performs synchronous copy only. In other words, by the time the job finishes, the data has been moved.
+
+- If you're copying to an Azure file share and you forgot to specify the `--preserve-smb-permissions` flag, and you don't want to transfer the data again, consider using Robocopy to bring over the permissions.
+
+- Azure Functions has a different endpoint for MSI authentication, which AzCopy doesn't yet support.
+
+## Known temporary issues
+
+There's a service issue impacting AzCopy 10.11 and later versions, which use the [PutBlobFromURL API](/rest/api/storageservices/put-blob-from-url) to copy blobs smaller than the given block size (whose default is 8 MiB). If the source account has any firewall restrictions (virtual network, IP, private link, or service endpoint policy), the `PutBlobFromURL` API might mistakenly return the message `409 Copy source blob has been modified`. The workaround is to use AzCopy 10.10.
+
+- https://azcopyvnext.azureedge.net/release20210415/azcopy_darwin_amd64_10.10.0.zip
+- https://azcopyvnext.azureedge.net/release20210415/azcopy_linux_amd64_10.10.0.tar.gz
+- https://azcopyvnext.azureedge.net/release20210415/azcopy_windows_386_10.10.0.zip
+- https://azcopyvnext.azureedge.net/release20210415/azcopy_windows_amd64_10.10.0.zip
+
+## See also
+
+- [Get started with AzCopy](storage-use-azcopy-v10.md)
+- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
storage Storage Use Azcopy V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-v10.md
description: AzCopy is a command-line utility that you can use to copy data to,
Previously updated : 05/11/2022 Last updated : 06/09/2022
The following table lists all AzCopy v10 commands. Each command links to a refer
|[azcopy jobs remove](storage-ref-azcopy-jobs-remove.md?toc=/azure/storage/blobs/toc.json)|Remove all files associated with the given job ID.| |[azcopy jobs resume](storage-ref-azcopy-jobs-resume.md?toc=/azure/storage/blobs/toc.json)|Resumes the existing job with the given job ID.| |[azcopy jobs show](storage-ref-azcopy-jobs-show.md?toc=/azure/storage/blobs/toc.json)|Shows detailed information for the given job ID.|
+|[azcopy jobs](storage-ref-azcopy-jobs.md?toc=/azure/storage/blobs/toc.json)|Subcommands related to managing jobs.|
|[azcopy list](storage-ref-azcopy-list.md?toc=/azure/storage/blobs/toc.json)|Lists the entities in a given resource.| |[azcopy login](storage-ref-azcopy-login.md?toc=/azure/storage/blobs/toc.json)|Logs in to Azure Active Directory to access Azure Storage resources.|
+|[azcopy login status](storage-ref-azcopy-login-status.md)|Shows whether you're currently logged in to Azure Active Directory.|
|[azcopy logout](storage-ref-azcopy-logout.md?toc=/azure/storage/blobs/toc.json)|Logs the user out and terminates access to Azure Storage resources.| |[azcopy make](storage-ref-azcopy-make.md?toc=/azure/storage/blobs/toc.json)|Creates a container or file share.| |[azcopy remove](storage-ref-azcopy-remove.md?toc=/azure/storage/blobs/toc.json)|Delete blobs or files from an Azure storage account.|
See any of the following resources:
- [Optimize the performance of AzCopy](storage-use-azcopy-optimize.md) -- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
+- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
+
+- [Troubleshoot problems with AzCopy v10](storage-use-azcopy-troubleshoot.md)
## Use a previous version
storage Storage Files Active Directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md
Identity-based authentication for Azure Files offers several benefits over using
- **Enforce granular access control on Azure file shares** You can grant permissions to a specific identity at the share, directory, or file level. For example, suppose that you have several teams using a single Azure file share for project collaboration. You can grant all teams access to non-sensitive directories, while limiting access to directories containing sensitive financial data to your Finance team only. -- **Back up Windows ACLs (also known as NTFS) along with your data**
+- **Back up Windows ACLs (also known as NTFS permissions) along with your data**
You can use Azure file shares to back up your existing on-premises file shares. Azure Files preserves your ACLs along with your data when you back up a file share to Azure file shares over SMB. ## How it works
stream-analytics Stream Analytics Managed Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-managed-identities-overview.md
Previously updated : 03/02/2022 Last updated : 06/09/2022 # Managed identities for Azure Stream Analytics
Stream Analytics supports two types of managed identities:
Below is a table that shows Azure Stream Analytics inputs and outputs that support system-assigned managed identity or user-assigned managed identity:
-| Type | Adapter | User-assigned managed identity | System-assigned managed identity |
+| Type | Adapter | User-assigned managed identity (Preview) | System-assigned managed identity |
|--|-||| | Storage Account | Blob/ADLS Gen 2 | Yes | Yes | | Inputs | Event Hubs | Yes | Yes |
Below is a table that shows Azure Stream Analytics inputs and outputs that suppo
| | SQL Database | Yes | Yes | | | Blob/ADLS Gen 2 | Yes | Yes | | | Table Storage | No | No |
-| | Service Bus Topic | No | No |
-| | Service Bus Queue | No | No |
-| | Cosmos DB | No | No |
+| | Service Bus Topic | Yes | Yes |
+| | Service Bus Queue | Yes | Yes |
+| | Cosmos DB | Yes | Yes |
| | Power BI | Yes | No | | | Data Lake Storage Gen1 | Yes | Yes | | | Azure Functions | No | No |
synapse-analytics Concept Deep Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/concept-deep-learning.md
+
+ Title: 'Deep learning'
+description: This article provides a conceptual overview of the deep learning and data science capabilities available through Apache Spark on Azure Synapse Analytics.
++++ Last updated : 04/19/2022+++
+# Deep learning (Preview)
+
+Apache Spark in Azure Synapse Analytics enables machine learning with big data, providing the ability to obtain valuable insight from large amounts of structured, unstructured, and fast-moving data. There are several options when training machine learning models using Apache Spark in Azure Synapse Analytics: Apache Spark MLlib, Azure Machine Learning, and various other open-source libraries.
+
+## GPU-enabled Apache Spark pools
+
+To simplify the process for creating and managing pools, Azure Synapse takes care of pre-installing low-level libraries and setting up all the complex networking requirements between compute nodes. This integration allows users to get started with GPU-accelerated pools within just a few minutes. To learn more about how to create a GPU-accelerated pool, you can visit the quickstart on how to [create a GPU-accelerated pool](../quickstart-create-apache-gpu-pool-portal.md).
+
+> [!NOTE]
+> - GPU-accelerated pools can be created in workspaces located in East US, Australia East, and North Europe.
+> - GPU-accelerated pools are only available with the Apache Spark 3.1 and 3.2 runtime.
+> - You might need to request a [limit increase](../spark/apache-spark-rapids-gpu.md#quotas-and-resource-constraints-in-azure-synapse-gpu-enabled-pools) in order to create GPU-enabled clusters.
+
+## GPU ML environment
+
+Azure Synapse Analytics provides built-in support for deep learning infrastructure. The Azure Synapse Analytics runtimes for Apache Spark 3 include support for the most common deep learning libraries like TensorFlow and PyTorch. The Azure Synapse runtime also includes supporting libraries like Petastorm and Horovod, which are commonly used for distributed training.
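+
+If you want to confirm which versions of these libraries are available in your pool, you can run a quick check from a notebook cell attached to the pool. The following is a minimal sketch; the package names are the standard distribution names, and the exact versions depend on the runtime version your pool uses.
+
+```python
+# Minimal sketch: print the installed versions of the deep learning libraries
+# that ship with the GPU-accelerated runtime. Exact versions vary by runtime.
+from importlib.metadata import version
+
+for package in ["tensorflow", "torch", "horovod", "petastorm"]:
+    print(package, version(package))
+```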
+
+### TensorFlow
+
+TensorFlow is an open source machine learning framework for all developers. It is used for implementing machine learning and deep learning applications.
+
+For more information about TensorFlow, you can visit the [TensorFlow API documentation](https://www.tensorflow.org/api_docs/python/tf).
+
+### PyTorch
+
+PyTorch is an optimized tensor library for deep learning using GPUs and CPUs.
+
+For more information about PyTorch, you can visit the [PyTorch documentation](https://pytorch.org/docs/stable/index.html).
+
+### Horovod
+
+Horovod is a distributed deep learning training framework for TensorFlow, Keras, and PyTorch. Horovod was developed to make distributed deep learning fast and easy to use. With this framework, an existing training script can be scaled up to run on hundreds of GPUs in just a few lines of code. In addition, Horovod can run on top of Apache Spark, making it possible to unify data processing and model training into a single pipeline.
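+
+To illustrate what those few lines look like, here's a condensed sketch of the Keras pattern used in the TensorFlow tutorial linked below: initialize Horovod, pin one GPU per process, scale the learning rate, wrap the optimizer, and broadcast the initial variables. The tiny model and random data are placeholders that keep the sketch self-contained.
+
+```python
+import numpy as np
+import tensorflow as tf
+from tensorflow import keras
+import horovod.tensorflow.keras as hvd
+
+# Initialize Horovod. Each worker process learns its rank and the total size.
+hvd.init()
+
+# Pin one GPU per process (skipped automatically on CPU-only nodes).
+gpus = tf.config.experimental.list_physical_devices('GPU')
+if gpus:
+    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')
+
+# Placeholder model and random data, just to keep the sketch runnable.
+model = keras.Sequential([keras.layers.Dense(10, activation='softmax', input_shape=(784,))])
+x_train = np.random.rand(256, 784).astype('float32')
+y_train = keras.utils.to_categorical(np.random.randint(0, 10, 256), 10)
+
+# Scale the learning rate by the number of workers and wrap the optimizer.
+optimizer = hvd.DistributedOptimizer(keras.optimizers.Adadelta(learning_rate=0.1 * hvd.size()))
+model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
+
+# Broadcast the initial variables from rank 0 so every worker starts from the same state.
+callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
+model.fit(x_train, y_train, batch_size=64, epochs=1, callbacks=callbacks, verbose=2)
+```
+
+When the same script is launched through ```horovod.spark``` or ```horovodrun```, each worker runs it in parallel and Horovod averages the gradients across workers.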
+
+To learn more about how to run distributed training jobs in Azure Synapse Analytics, you can visit the following tutorials:
+ - [Tutorial: Distributed training with Horovod and PyTorch](./tutorial-horovod-pytorch.md)
+ - [Tutorial: Distributed training with Horovod and Tensorflow](./tutorial-horovod-tensorflow.md)
+
+For more information about Horovod, you can visit the [Horovod documentation](https://horovod.readthedocs.io/stable/).
+
+### Petastorm
+
+Petastorm is an open source data access library which enables single-node or distributed training of deep learning models. This library enables training directly from datasets in Apache Parquet format and datasets that have already been loaded as an Apache Spark DataFrame. Petastorm supports popular training frameworks such as TensorFlow and PyTorch.
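+
+As a quick illustration, the following minimal sketch iterates over an existing Parquet-backed Petastorm dataset in batches by using the ```make_batch_reader``` API. The ```abfs``` URL is a placeholder for your own container and data directory.
+
+```python
+# Minimal sketch: read an existing Parquet dataset in batches with Petastorm.
+# Replace the abfs URL placeholder with your own container and data directory.
+from petastorm import make_batch_reader
+
+with make_batch_reader('abfs://<container_name>/<data directory path>/') as reader:
+    for batch in reader:
+        # Each batch exposes the Parquet columns as arrays.
+        print(batch)
+        break
+```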
+
+For more information about Petastorm, you can visit the [Petastorm GitHub page](https://github.com/uber/petastorm) or the [Petastorm API documentation](https://petastorm.readthedocs.io/latest).
+
+## Next steps
+
+This article provides an overview of the various options to train machine learning models within Apache Spark pools in Azure Synapse Analytics. You can learn more by exploring the resources below:
+
+- Run SparkML experiments: [Apache SparkML Tutorial](../spark/apache-spark-machine-learning-mllib-notebook.md)
+- View libraries within the Apache Spark 3 runtime: [Apache Spark 3 Runtime](../spark/apache-spark-3-runtime.md)
+- Accelerate ETL workloads with RAPIDS: [Apache Spark Rapids](../spark/apache-spark-rapids-gpu.md)
synapse-analytics Tutorial Horovod Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-horovod-pytorch.md
+
+ Title: 'Tutorial: Distributed training with Horovod and Pytorch'
+description: Tutorial on how to run distributed training with the Horovod Estimator and PyTorch
+++ Last updated : 04/19/2022++++
+# Tutorial: Distributed Training with Horovod Estimator and PyTorch (Preview)
+
+[Horovod](https://github.com/horovod/horovod) is a distributed training framework for libraries like TensorFlow and PyTorch. With Horovod, users can scale up an existing training script to run on hundreds of GPUs in just a few lines of code.
+
+Within Azure Synapse Analytics, users can quickly get started with Horovod using the default Apache Spark 3 runtime. For Spark ML pipeline applications using PyTorch, users can use the ```horovod.spark``` estimator API. This tutorial uses an Apache Spark DataFrame to perform distributed training of a deep neural network (DNN) model on the MNIST dataset, using PyTorch and the Horovod Estimator to run the training process.
+
+## Prerequisites
+
+- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with.
+- Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes.
+
+## Configure the Apache Spark session
+
+At the start of the session, we will need to configure a few Apache Spark settings. In most cases, we only need to set ```numExecutors``` and ```spark.rapids.memory.gpu.reserve```. For very large models, users may also need to configure the ```spark.kryoserializer.buffer.max``` setting. For TensorFlow models, users will need to set ```spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH``` to true.
+
+In the example below, you can see how the Spark configurations can be passed with the ```%%configure``` command. The detailed meaning of each parameter is explained in the [Apache Spark configuration documentation](https://spark.apache.org/docs/latest/configuration.html). The values provided below are the suggested, best practice values for Azure Synapse GPU-large pools.
+
+```spark
+
+%%configure -f
+{
+ "driverMemory": "30g",
+ "driverCores": 4,
+ "executorMemory": "60g",
+ "executorCores": 12,
+ "numExecutors": 3,
+ "conf":{
+ "spark.rapids.memory.gpu.reserve": "10g",
+ "spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH": "true",
+ "spark.kryoserializer.buffer.max": "2000m"
+ }
+}
+```
+
+For this tutorial, we will use the following configurations:
+
+```python
+
+%%configure -f
+{
+ "numExecutors": 3,
+ "conf":{
+ "spark.rapids.memory.gpu.reserve": "10g"
+ }
+}
+```
+
+> [!NOTE]
+> When training with Horovod, users should set the Spark configuration for ```numExecutors``` to be less than or equal to the number of nodes.
+
+## Import dependencies
+
+In this tutorial, we will leverage PySpark to read and process the dataset. We will then use PyTorch and Horovod to build the deep neural network (DNN) model and run the training process. To get started, we will need to import the following dependencies:
+
+```python
+# base libs
+import sys
+import uuid
+
+# numpy
+import numpy as np
+
+# pyspark related
+import pyspark
+import pyspark.sql.types as T
+from pyspark.ml.evaluation import MulticlassClassificationEvaluator
+from pyspark.sql import SparkSession
+from pyspark.sql.functions import udf
+
+# pytorch related
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.optim as optim
+
+# horovod related
+import horovod.spark.torch as hvd
+from horovod.spark.common.backend import SparkBackend
+from horovod.spark.common.store import Store
+
+# azure related
+from azure.synapse.ml.horovodutils import AdlsStore
+```
+
+## Connect to alternative storage account
+
+We will need an Azure Data Lake Storage (ADLS) account for storing intermediate and model data. If you are using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account. In addition, you will need to modify the following properties: ```remote_url```, ```account_name```, and ```linked_service_name```.
+
+```python
+num_proc = 3 # equal to numExecutors
+batch_size = 128
+epochs = 3
+lr_single_node = 0.01 # learning rate for single node code
+
+uuid_str = str(uuid.uuid4()) # with uuid, each run will use a new directory
+work_dir = '/tmp/' + uuid_str
+
+# create adls store for model training, use your own adls account info
+remote_url = "<<ABFS path to storage account>>"
+account_name = "<<name of storage account>>"
+linked_service_name = "<<name of linked service>>"
+TokenLibrary = spark._jvm.com.microsoft.azure.synapse.tokenlibrary.TokenLibrary  # handle to the Synapse token library
+sas_token = TokenLibrary.getConnectionString(linked_service_name)
+adls_store_path = remote_url + work_dir
+
+store = AdlsStore.create(adls_store_path,
+ storage_options={
+ 'account_name': account_name,
+ 'sas_token': sas_token
+ },
+ save_runs=True)
+
+print(adls_store_path)
+```
+
+## Prepare dataset
+
+Next, we will prepare the dataset for training. In this tutorial, we will use the MNIST dataset from [Azure Open Datasets](https://docs.microsoft.com/azure/open-datasets/dataset-mnist?tabs=azureml-opendatasets).
+
+```python
+# Initialize SparkSession
+spark = SparkSession.builder.getOrCreate()
+
+# Download MNIST dataset from Azure Open Datasets
+from azureml.opendatasets import MNIST
+
+mnist = MNIST.get_tabular_dataset()
+mnist_df = mnist.to_pandas_dataframe()
+mnist_df.info()
+
+# Preprocess dataset
+mnist_df['features'] = mnist_df.iloc[:, :784].values.tolist()
+mnist_df.drop(mnist_df.iloc[:, :784], inplace=True, axis=1)
+mnist_df.head()
+```
+
+## Process data with Apache Spark
+
+Now, we will create an Apache Spark dataframe. This dataframe will be used with the Horovod ```TorchEstimator``` for training.
+
+```python
+# Create Spark DataFrame for training
+df = spark.createDataFrame(mnist_df)
+
+# repartition DataFrame for training
+train_df = df.repartition(num_proc)
+
+# Train/test split
+train_df, test_df = train_df.randomSplit([0.9, 0.1])
+
+# show the dataset
+train_df.show()
+train_df.count()
+```
+
+## Define DNN model
+
+Once we have finished processing our dataset, we can now define our PyTorch model. The same code could also be used to train a single-node PyTorch model.
+
+```python
+# Define the PyTorch model without any Horovod-specific parameters
+class Net(nn.Module):
+
+ def __init__(self):
+ super(Net, self).__init__()
+ self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
+ self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
+ self.conv2_drop = nn.Dropout2d()
+ self.fc1 = nn.Linear(320, 50)
+ self.fc2 = nn.Linear(50, 10)
+
+ def forward(self, x):
+ x = x.float()
+ x = F.relu(F.max_pool2d(self.conv1(x), 2))
+ x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
+ x = x.view(-1, 320)
+ x = F.relu(self.fc1(x))
+ x = F.dropout(x, training=self.training)
+ x = self.fc2(x)
+        return F.log_softmax(x, dim=1)
++
+model = Net()
+optimizer = optim.SGD(model.parameters(),
+ lr=lr_single_node * num_proc,
+ momentum=0.5) # notice the lr is scaled up
+loss = nn.NLLLoss()
+
+```
+
+## Train model
+
+Now, we can train a Horovod Spark estimator on top of our Apache Spark dataframe.
+```python
+# Train a Horovod Spark Estimator on the DataFrame
+backend = SparkBackend(num_proc=num_proc,
+ stdout=sys.stdout,
+ stderr=sys.stderr,
+ prefix_output_with_timestamp=True)
+
+torch_estimator = hvd.TorchEstimator(
+ backend=backend,
+ store=store,
+ partitions_per_process=1, # important for GPU training
+ model=model,
+ optimizer=optimizer,
+ loss=lambda input, target: loss(input, target.long()),
+ input_shapes=[[-1, 1, 28, 28]],
+ feature_cols=['features'],
+ label_cols=['label'],
+ batch_size=batch_size,
+ epochs=epochs,
+ validation=0.1,
+ verbose=2)
+
+torch_model = torch_estimator.fit(train_df).setOutputCols(['label_prob'])
+```
+
+## Evaluate trained model
+
+Once the training process has finished, we can then evaluate the model on the test dataset.
+
+```python
+# Evaluate the model on the held-out test DataFrame
+pred_df = torch_model.transform(test_df)
+
+argmax = udf(lambda v: float(np.argmax(v)), returnType=T.DoubleType())
+pred_df = pred_df.withColumn('label_pred', argmax(pred_df.label_prob))
+evaluator = MulticlassClassificationEvaluator(predictionCol='label_pred',
+ labelCol='label',
+ metricName='accuracy')
+
+print('Test accuracy:', evaluator.evaluate(pred_df))
+```
+
+## Clean up resources
+
+To ensure the Spark instance is shut down, end any connected sessions (notebooks). The pool shuts down when the **idle time** specified in the Apache Spark pool is reached. You can also select **Stop session** from the status bar at the upper right of the notebook.
+
+![Screenshot showing the Stop session button on the status bar.](./media/tutorial-build-applications-use-mmlspark/stop-session.png)
+
+## Next steps
+
+* [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning)
+* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
synapse-analytics Tutorial Horovod Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-horovod-tensorflow.md
+
+ Title: 'Tutorial: Distributed training with Horovod and Tensorflow'
+description: Tutorial on how to run distributed training with the Horovod Runner and Tensorflow
+++ Last updated : 04/19/2022++++
+# Tutorial: Distributed Training with Horovod Runner and Tensorflow (Preview)
+
+[Horovod](https://github.com/horovod/horovod) is a distributed training framework for libraries like TensorFlow and PyTorch. With Horovod, users can scale up an existing training script to run on hundreds of GPUs in just a few lines of code.
+
+Within Azure Synapse Analytics, users can quickly get started with Horovod using the default Apache Spark 3 runtime. For Spark ML pipeline applications using TensorFlow, users can use ```HorovodRunner```. This tutorial uses an Apache Spark DataFrame to perform distributed training of a deep neural network (DNN) model on the MNIST dataset, using TensorFlow and ```HorovodRunner``` to run the training process.
+
+## Prerequisites
+
+- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with.
+- Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes.
+
+## Configure the Apache Spark session
+
+At the start of the session, we will need to configure a few Apache Spark settings. In most cases, we only need to set the ```numExecutors``` and ```spark.rapids.memory.gpu.reserve```. For very large models, users may also need to configure the ```spark.kryoserializer.buffer.max``` setting. For TensorFlow models, users will need to set ```spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH``` to true.
+
+In the example below, you can see how the Spark configurations can be passed with the ```%%configure``` command. The detailed meaning of each parameter is explained in the [Apache Spark configuration documentation](https://spark.apache.org/docs/latest/configuration.html). The values provided below are the suggested, best practice values for Azure Synapse GPU-large pools.
+
+```spark
+
+%%configure -f
+{
+ "driverMemory": "30g",
+ "driverCores": 4,
+ "executorMemory": "60g",
+ "executorCores": 12,
+ "numExecutors": 3,
+ "conf":{
+ "spark.rapids.memory.gpu.reserve": "10g",
+ "spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH": "true",
+ "spark.kryoserializer.buffer.max": "2000m"
+ }
+}
+```
+
+For this tutorial, we will use the following configurations:
+
+```python
+
+%%configure -f
+{
+ "numExecutors": 3,
+ "conf":{
+ "spark.rapids.memory.gpu.reserve": "10g",
+ "spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH": "true"
+ }
+}
+```
+
+> [!NOTE]
+> When training with Horovod, users should set the Spark configuration for ```numExecutors``` to be less than or equal to the number of nodes.
+
+## Set up the primary storage account
+
+We will need the Azure Data Lake Storage (ADLS) account for storing intermediate and model data. If you are using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account.
+
+In this example, we will read from the primary Azure Synapse Analytics storage account. To do this, you will need to modify the ```remote_url``` property.
+
+```python
+# Specify training parameters
+num_proc = 3 # equal to numExecutors
+batch_size = 128
+epochs = 3
+lr_single_node = 0.1 # learning rate for single node code
+
+# configure adls store remote url
+remote_url = "<<abfss path to storage account>>"
+```
+
+## Prepare dataset
+
+Next, we will prepare the dataset for training. In this tutorial, we will use the MNIST dataset from [Azure Open Datasets](https://docs.microsoft.com/azure/open-datasets/dataset-mnist?tabs=azureml-opendatasets).
+
+```python
+def get_dataset(rank=0, size=1):
+ # import dependency libs
+ from azureml.opendatasets import MNIST
+ from sklearn.preprocessing import OneHotEncoder
+ import numpy as np
+
+ # Download MNIST dataset from Azure Open Datasets
+ mnist = MNIST.get_tabular_dataset()
+ mnist_df = mnist.to_pandas_dataframe()
+
+ # Preprocess dataset
+ mnist_df['features'] = mnist_df.iloc[:, :784].values.tolist()
+ mnist_df.drop(mnist_df.iloc[:, :784], inplace=True, axis=1)
+
+ x = np.array(mnist_df['features'].values.tolist())
+ y = np.array(mnist_df['label'].values.tolist()).reshape(-1, 1)
+
+ enc = OneHotEncoder()
+ enc.fit(y)
+ y = enc.transform(y).toarray()
+
+ (x_train, y_train), (x_test, y_test) = (x[:60000], y[:60000]), (x[60000:],
+ y[60000:])
+
+ # Prepare dataset for distributed training
+ x_train = x_train[rank::size]
+ y_train = y_train[rank::size]
+ x_test = x_test[rank::size]
+ y_test = y_test[rank::size]
+
+ # Reshape and Normalize data for model input
+ x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
+ x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
+ x_train = x_train.astype('float32')
+ x_test = x_test.astype('float32')
+ x_train /= 255.0
+ x_test /= 255.0
+
+ return (x_train, y_train), (x_test, y_test)
+
+```
+
+## Define DNN model
+
+Once we have finished processing our dataset, we can now define our Tensorflow model. The same code could also be used to train a single-node Tensorflow model.
+
+```python
+# Define the TensorFlow model without any Horovod-specific parameters
+def get_model():
+ from tensorflow.keras import models
+ from tensorflow.keras import layers
+
+ model = models.Sequential()
+ model.add(
+ layers.Conv2D(32,
+ kernel_size=(3, 3),
+ activation='relu',
+ input_shape=(28, 28, 1)))
+ model.add(layers.Conv2D(64, (3, 3), activation='relu'))
+ model.add(layers.MaxPooling2D(pool_size=(2, 2)))
+ model.add(layers.Dropout(0.25))
+ model.add(layers.Flatten())
+ model.add(layers.Dense(128, activation='relu'))
+ model.add(layers.Dropout(0.5))
+ model.add(layers.Dense(10, activation='softmax'))
+ return model
+
+```
+
+## Define a training function for a single node
+
+First, we will train our Tensorflow model on the driver node of the Apache Spark pool. Once we have finished the training process, we will evaluate the model and print the loss and accuracy scores.
+
+```python
+
+def train(learning_rate=0.1):
+ import tensorflow as tf
+ from tensorflow import keras
+
+ gpus = tf.config.experimental.list_physical_devices('GPU')
+ for gpu in gpus:
+ tf.config.experimental.set_memory_growth(gpu, True)
+
+ # Prepare dataset
+ (x_train, y_train), (x_test, y_test) = get_dataset()
+
+ # Initialize model
+ model = get_model()
+
+ # Specify the optimizer (Adadelta in this example)
+ optimizer = keras.optimizers.Adadelta(learning_rate=learning_rate)
+
+ model.compile(optimizer=optimizer,
+ loss='categorical_crossentropy',
+ metrics=['accuracy'])
+
+ model.fit(x_train,
+ y_train,
+ batch_size=batch_size,
+ epochs=epochs,
+ verbose=2,
+ validation_data=(x_test, y_test))
+ return model
+
+# Run the training process on the driver
+model = train(learning_rate=lr_single_node)
+
+# Evaluate the single node, trained model
+_, (x_test, y_test) = get_dataset()
+loss, accuracy = model.evaluate(x_test, y_test, batch_size=128)
+print("loss:", loss)
+print("accuracy:", accuracy)
+
+```
+
+## Migrate to HorovodRunner for distributed training
+
+Next, we will take a look at how the same code could be re-run using ```HorovodRunner``` for distributed training.
+
+### Define training function
+
+To do this, we will first define a training function for ```HorovodRunner```.
+
+```python
+# Define training function for Horovod runner
+def train_hvd(learning_rate=0.1):
+ # Import base libs
+ import tempfile
+ import os
+ import shutil
+ import atexit
+
+ # Import tensorflow modules to each worker
+ import tensorflow as tf
+ from tensorflow import keras
+ import horovod.tensorflow.keras as hvd
+
+ # Initialize Horovod
+ hvd.init()
+
+ # Pin GPU to be used to process local rank (one GPU per process)
+ # These steps are skipped on a CPU cluster
+ gpus = tf.config.experimental.list_physical_devices('GPU')
+ for gpu in gpus:
+ tf.config.experimental.set_memory_growth(gpu, True)
+ if gpus:
+ tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()],
+ 'GPU')
+
+ # Call the get_dataset function you created, this time with the Horovod rank and size
+ (x_train, y_train), (x_test, y_test) = get_dataset(hvd.rank(), hvd.size())
+
+ # Initialize model with random weights
+ model = get_model()
+
+ # Adjust learning rate based on number of GPUs
+ optimizer = keras.optimizers.Adadelta(learning_rate=learning_rate *
+ hvd.size())
+
+ # Use the Horovod Distributed Optimizer
+ optimizer = hvd.DistributedOptimizer(optimizer)
+
+ model.compile(optimizer=optimizer,
+ loss='categorical_crossentropy',
+ metrics=['accuracy'])
+
+ # Create a callback to broadcast the initial variable states from rank 0 to all other processes.
+ # This is required to ensure consistent initialization of all workers when training is started with random weights or restored from a checkpoint.
+ callbacks = [
+ hvd.callbacks.BroadcastGlobalVariablesCallback(0),
+ ]
+
+ # Model checkpoint location.
+ ckpt_dir = tempfile.mkdtemp()
+ ckpt_file = os.path.join(ckpt_dir, 'checkpoint.h5')
+ atexit.register(lambda: shutil.rmtree(ckpt_dir))
+
+ # Save checkpoints only on worker 0 to prevent conflicts between workers
+ if hvd.rank() == 0:
+ callbacks.append(
+ keras.callbacks.ModelCheckpoint(ckpt_file,
+ monitor='val_loss',
+ mode='min',
+ save_best_only=True))
+
+ model.fit(x_train,
+ y_train,
+ batch_size=batch_size,
+ callbacks=callbacks,
+ epochs=epochs,
+ verbose=2,
+ validation_data=(x_test, y_test))
+
+ # Return model bytes only on worker 0
+ if hvd.rank() == 0:
+ with open(ckpt_file, 'rb') as f:
+ return f.read()
+
+```
+
+### Run training
+
+Once we have defined the model, we will run the training process.
+
+```python
+# Run training
+import os
+import sys
+import horovod.spark
++
+best_model_bytes = \
+ horovod.spark.run(train_hvd, args=(lr_single_node, ), num_proc=num_proc,
+ env=os.environ.copy(),
+ stdout=sys.stdout, stderr=sys.stderr, verbose=2,
+ prefix_output_with_timestamp=True)[0]
+```
+
+### Save checkpoints to ADLS storage
+
+The code below shows how to save the checkpoints to the Azure Data Lake Storage (ADLS) account.
+
+```python
+import tempfile
+import fsspec
+import os
+
+local_ckpt_dir = tempfile.mkdtemp()
+local_ckpt_file = os.path.join(local_ckpt_dir, 'mnist-ckpt.h5')
+adls_ckpt_file = remote_url + local_ckpt_file
+
+with open(local_ckpt_file, 'wb') as f:
+ f.write(best_model_bytes)
+
+## Upload local file to ADLS
+fs = fsspec.filesystem('abfss')
+fs.upload(local_ckpt_file, adls_ckpt_file)
+
+print(adls_ckpt_file)
+```
+
+### Evaluate Horovod trained model
+
+Once we have finished training our model, we can then take a look at the loss and accuracy for the final model.
+
+```python
+import tensorflow as tf
+
+hvd_model = tf.keras.models.load_model(local_ckpt_file)
+
+_, (x_test, y_test) = get_dataset()
+loss, accuracy = hvd_model.evaluate(x_test, y_test, batch_size=128)
+print("loaded model loss and accuracy:", loss, accuracy)
+```
+
+## Clean up resources
+
+To ensure the Spark instance is shut down, end any connected sessions (notebooks). The pool shuts down when the **idle time** specified in the Apache Spark pool is reached. You can also select **Stop session** from the status bar at the upper right of the notebook.
+
+![Screenshot showing the Stop session button on the status bar.](./media/tutorial-build-applications-use-mmlspark/stop-session.png)
+
+## Next steps
+
+* [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning)
+* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
synapse-analytics Tutorial Load Data Petastorm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-load-data-petastorm.md
+
+ Title: 'Load data with Petastorm'
+description: This article provides a conceptual overview of how to load data with Petastorm.
++++ Last updated : 04/19/2022+++
+# Load data with Petastorm (Preview)
+
+Petastorm is an open source data access library which enables single-node or distributed training of deep learning models. This library enables training directly from datasets in Apache Parquet format and datasets that have already been loaded as an Apache Spark DataFrame. Petastorm supports popular training frameworks such as Tensorflow and PyTorch.
+
+For more information about Petastorm, you can visit the [Petastorm GitHub page](https://github.com/uber/petastorm) or the [Petastorm API documentation](https://petastorm.readthedocs.io/latest).
+
+## Prerequisites
+
+- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with.
+- Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes.
+
+## Configure the Apache Spark session
+
+At the start of the session, we will need to configure a few Apache Spark settings. In most cases, we only need to set the ```numExecutors``` and ```spark.rapids.memory.gpu.reserve```. In the example below, you can see how the Spark configurations can be passed with the ```%%configure``` command. The detailed meaning of each parameter is explained in the [Apache Spark configuration documentation](https://spark.apache.org/docs/latest/configuration.html).
+
+```python
+%%configure -f
+{
+ "numExecutors": 3,
+ "conf":{
+ "spark.rapids.memory.gpu.reserve": "10g"
+ }
+}
+```
+
+## Petastorm write APIs
+
+A dataset created using Petastorm is stored in Apache Parquet format. On top of the Parquet schema, Petastorm also stores higher-level schema information that makes multidimensional arrays a native part of a Petastorm dataset.
+
+In the sample below, we create a dataset using PySpark. We write the dataset to an Azure Data Lake Storage Gen2 account.
+
+```python
+import numpy as np
+from pyspark.sql import SparkSession
+from pyspark.sql.types import IntegerType
+
+from petastorm.codecs import ScalarCodec, CompressedImageCodec, NdarrayCodec
+from petastorm.etl.dataset_metadata import materialize_dataset
+from petastorm.unischema import dict_to_spark_row, Unischema, UnischemaField
+
+# The Unischema defines what the dataset looks like
+HelloWorldSchema = Unischema('HelloWorldSchema', [
+ UnischemaField('id', np.int32, (), ScalarCodec(IntegerType()), False),
+ UnischemaField('image1', np.uint8, (128, 256, 3), CompressedImageCodec('png'), False),
+ UnischemaField('array_4d', np.uint8, (None, 128, 30, None), NdarrayCodec(), False),
+])
++
+def row_generator(x):
+ """Returns a single entry in the generated dataset. Return a bunch of random values as an example."""
+ return {'id': x,
+ 'image1': np.random.randint(0, 255, dtype=np.uint8, size=(128, 256, 3)),
+ 'array_4d': np.random.randint(0, 255, dtype=np.uint8, size=(4, 128, 30, 3))}
++
+def generate_petastorm_dataset(output_url):
+ rowgroup_size_mb = 256
+
+ spark = SparkSession.builder.config('spark.driver.memory', '2g').master('local[2]').getOrCreate()
+ sc = spark.sparkContext
+
+ # Wrap dataset materialization portion. Will take care of setting up spark environment variables as
+ # well as save petastorm specific metadata
+ rows_count = 10
+ with materialize_dataset(spark, output_url, HelloWorldSchema, rowgroup_size_mb):
+
+ rows_rdd = sc.parallelize(range(rows_count))\
+ .map(row_generator)\
+ .map(lambda x: dict_to_spark_row(HelloWorldSchema, x))
+
+ spark.createDataFrame(rows_rdd, HelloWorldSchema.as_spark_schema()) \
+ .coalesce(10) \
+ .write \
+ .mode('overwrite') \
+ .parquet(output_url)
++
+output_url = 'abfs://container_name@storage_account_url/data_dir' #use your own adls account info
+generate_petastorm_dataset(output_url)
+```
+
+## Petastorm read APIs
+
+### Read dataset from a primary storage account
+
+The ```petastorm.reader.Reader``` class is the main entry point for user code that accesses the data from an ML framework such as TensorFlow or PyTorch. You can read a dataset using the ```petastorm.reader.Reader``` class and the ```petastorm.make_reader``` factory method.
+
+In the example below, you can see how you can pass an ```abfs``` URL protocol.
+
+```python
+from petastorm import make_reader
+
+# On the primary storage account associated with the workspace, the URL can be abbreviated to the container path of the data directory
+with make_reader('abfs://<container_name>/<data directory path>/') as reader:
+ for row in reader:
+ print(row)
+```
+
+### Read dataset from secondary storage account
+
+If you are using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account. In addition, you will need to modify the following properties: ```remote_url```, ```account_name```, and ```linked_service_name```.
+
+```python
+from petastorm import make_reader
+
+# create sas token for storage account access, use your own adls account info
+remote_url = "abfs://container_name@storage_account_url"
+account_name = "<<adls account name>>"
+linked_service_name = '<<linked service name>>'
+TokenLibrary = spark._jvm.com.microsoft.azure.synapse.tokenlibrary.TokenLibrary
+sas_token = TokenLibrary.getConnectionString(linked_service_name)
+
+with make_reader('{}/data_directory'.format(remote_url), storage_options = {'sas_token' : sas_token}) as reader:
+ for row in reader:
+ print(row)
+```
+
+### Read dataset in batches
+
+In the example below, you can see how you can pass an ```abfs``` URL protocol to read data in batches. This example uses the ```make_batch_reader``` API.
+
+```python
+from petastorm import make_batch_reader
+
+with make_batch_reader('abfs://<container_name>/<data directory path>/', schema_fields=["value1", "value2"]) as reader:
+ for schema_view in reader:
+ print("Batched read:\nvalue1: {0} value2: {1}".format(schema_view.value1, schema_view.value2))
+```
+
+## PyTorch API
+
+To read a Petastorm dataset from PyTorch, you can use the adapter ```petastorm.pytorch.DataLoader``` class. This allows for custom PyTorch collating functions and transforms to be supplied.
+
+In this example, we will show how the Petastorm DataLoader can be used to load a Petastorm dataset with the help of the ```make_reader``` API. This first section creates the definition of a ```Net``` class and the ```train``` and ```test``` functions.
+
+```python
+from __future__ import division, print_function
+
+import argparse
+import pyarrow
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.optim as optim
+from torchvision import transforms
+
+from petastorm import make_reader, TransformSpec
+from petastorm.pytorch import DataLoader
+from pyspark.sql.functions import col
+
+class Net(nn.Module):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
+ self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
+ self.conv2_drop = nn.Dropout2d()
+ self.fc1 = nn.Linear(320, 50)
+ self.fc2 = nn.Linear(50, 10)
+
+ def forward(self, x):
+ x = F.relu(F.max_pool2d(self.conv1(x), 2))
+ x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
+ x = x.view(-1, 320)
+ x = F.relu(self.fc1(x))
+ x = F.dropout(x, training=self.training)
+ x = self.fc2(x)
+ return F.log_softmax(x, dim=1)
+
+def train(model, device, train_loader, log_interval, optimizer, epoch):
+ model.train()
+ for batch_idx, row in enumerate(train_loader):
+ data, target = row['image'].to(device), row['digit'].to(device)
+ optimizer.zero_grad()
+ output = model(data)
+ loss = F.nll_loss(output, target)
+ loss.backward()
+ optimizer.step()
+ if batch_idx % log_interval == 0:
+ print('Train Epoch: {} [{}]\tLoss: {:.6f}'.format(
+ epoch, batch_idx * len(data), loss.item()))
+
+def test(model, device, test_loader):
+ model.eval()
+ test_loss = 0
+ correct = 0
+ count = 0
+ with torch.no_grad():
+ for row in test_loader:
+ data, target = row['image'].to(device), row['digit'].to(device)
+ output = model(data)
+ test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
+ pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
+ correct += pred.eq(target.view_as(pred)).sum().item()
+ count += data.shape[0]
+ test_loss /= count
+ print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
+ test_loss, correct, count, 100. * correct / count))
+
+def _transform_row(mnist_row):
+ # For this example, the images are stored as simpler ndarray (28,28), but the
+ # training network expects 3-dim images, hence the additional lambda transform.
+ transform = transforms.Compose([
+ transforms.Lambda(lambda nd: nd.reshape(28, 28, 1)),
+ transforms.ToTensor(),
+ transforms.Normalize((0.1307,), (0.3081,))
+ ])
+ # In addition, the petastorm pytorch DataLoader does not distinguish the notion of
+ # data or target transform, but that actually gives the user more flexibility
+ # to make the desired partial transform, as shown here.
+ result_row = {
+ 'image': transform(mnist_row['image']),
+ 'digit': mnist_row['digit']
+ }
+
+ return result_row
+```
+
+In this example, an Azure Data Lake Storage account is used to store intermediate data. To store this data, you must set up a Linked Service to the storage account and retrieve the following pieces of information: ```remote_url```, ```account_name```, and ```linked_service_name```.
+
+```python
+from petastorm import make_reader
+
+# create sas token for storage account access, use your own adls account info
+remote_url = "abfs://container_name@storage_account_url"
+account_name = "<account name>"
+linked_service_name = '<linked service name>'
+TokenLibrary = spark._jvm.com.microsoft.azure.synapse.tokenlibrary.TokenLibrary
+sas_token = TokenLibrary.getConnectionString(linked_service_name)
+
+# Read Petastorm dataset and apply custom PyTorch transformation functions
+
+device = torch.device('cpu') #For GPU, it will be torch.device('cuda'). More details: https://pytorch.org/docs/stable/tensor_attributes.html#torch-device
+
+model = Net().to(device)
+optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
+
+loop_epochs = 1
+reader_epochs = 1
+
+transform = TransformSpec(_transform_row, removed_fields=['idx'])
+
+for epoch in range(1, loop_epochs + 1):
+ with DataLoader(make_reader('{}/train'.format(remote_url), num_epochs=reader_epochs, transform_spec=transform),batch_size=5) as train_loader:
+ train(model, device, train_loader, 10, optimizer, epoch)
+ with DataLoader(make_reader('{}/test'.format(remote_url), num_epochs=reader_epochs, transform_spec=transform), batch_size=5) as test_loader:
+ test(model, device, test_loader)
+```
+
+## Next steps
+
+* [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning)
+* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
synapse-analytics Apache Spark Gpu Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-gpu-concept.md
Previously updated : 11/10/2021 Last updated : 4/10/2022
-# GPU-accelerated Apache Spark pools in Azure Synapse Analytics
+# GPU-accelerated Apache Spark pools in Azure Synapse Analytics (Preview)
+ Azure Synapse Analytics now supports Apache Spark pools accelerated with graphics processing units (GPUs). By using NVIDIA GPUs, data scientists and engineers can reduce the time necessary to run data integration pipelines, score machine learning models, and more. This article describes how GPU-accelerated pools can be created and used with Azure Synapse Analytics. This article also details the GPU drivers and libraries that are pre-installed as part of the GPU-accelerated runtime.
By using NVIDIA GPUs, data scientists and engineers can reduce the time necessar
> Azure Synapse GPU-enabled pools are currently in Public Preview. ## Create a GPU-accelerated pool+ To simplify the process for creating and managing pools, Azure Synapse takes care of pre-installing low-level libraries and setting up all the complex networking requirements between compute nodes. This integration allows users to get started with GPU- accelerated pools within just a few minutes. To learn more about how to create a GPU-accelerated pool, you can visit the quickstart on how to [create a GPU-accelerated pool](../quickstart-create-apache-gpu-pool-portal.md). > [!NOTE]
To simplify the process for creating and managing pools, Azure Synapse takes car
## GPU-accelerated runtime ### NVIDIA GPU driver, CUDA, and cuDNN+ Azure Synapse Analytics now offers GPU-accelerated Apache Spark pools, which include various NVIDIA libraries and configurations. By default, Azure Synapse Analytics installs the NVIDIA driver and libraries required to use GPUs on Spark driver and worker instances: - CUDA 11.2 - libnccl2=2.8.4
Azure Synapse Analytics now offers GPU-accelerated Apache Spark pools, which inc
> This software contains source code provided by NVIDIA Corporation. Specifically, to support the GPU-accelerated pools, Azure Synapse Apache Spark pools include code from [CUDA Samples](https://docs.nvidia.com/cuda/eula/#nvidia-cuda-samples-preface). ### NVIDIA End User License Agreement (EULA)+ When you select a GPU-accelerated Hardware option in Synapse Spark, you implicitly agree to the terms and conditions outlined in the NVIDIA EULA with respect to: - CUDA 11.2: [EULA :: CUDA Toolkit Documentation (nvidia.com)](https://docs.nvidia.com/cuda/eula/index.html) - libnccl2=2.8.4: [nccl/LICENSE.txt at master · NVIDIA/nccl (github.com)](https://github.com/NVIDIA/nccl/blob/master/LICENSE.txt)
When you select a GPU-accelerated Hardware option in Synapse Spark, you implicit
- The CUDA, NCCL, and cuDNN libraries, and the [NVIDIA End User License Agreement (with NCCL Supplement)](https://docs.nvidia.com/deeplearning/nccl/sla/index.html#overview) for the NCCL library ## Accelerate ETL workloads+ With built-in support for NVIDIA's [RAPIDS Accelerator for Apache Spark](https://nvidia.github.io/spark-rapids/), GPU-accelerated Spark pools in Azure Synapse can provide significant performance improvements compared to standard analytical benchmarks without requiring any code changes. Built on top of NVIDIA CUDA and UCX, NVIDIA RAPIDS enables GPU-accelerated SQL, DataFrame operations, and Spark shuffles. Since there are no code changes required to leverage these accelerations, users can also accelerate their data pipelines that rely on Linux Foundation's Delta Lake or Microsoft's Hyperspace indexing. To learn more about how you can use the NVIDIA RAPIDS Accelerator with your GPU-accelerated pool in Azure Synapse Analytics, visit this guide on how to [improve performance with RAPIDS](apache-spark-rapids-gpu.md).
+## Train deep learning models
+
+Deep learning models are often data and computation intensive. Because of this, organizations often accelerate their training process with GPU-enabled clusters. In Azure Synapse Analytics, organizations can build models using frameworks like Tensorflow and PyTorch. Then, users can scale up their deep learning models with Horovod and Petastorm.
+
+To learn more about how you can train distributed deep learning models, visit the following guides:
+ - [Tutorial: Distributed training with Horovod and Tensorflow](../machine-learning/tutorial-horovod-tensorflow.md)
+ - [Tutorial: Distributed training with Horovod and PyTorch](../machine-learning/tutorial-horovod-pytorch.md)
+ ## Improve machine learning scoring workloads+ Many organizations rely on large batch scoring jobs to frequently execute during narrow windows of time. To achieve improved batch scoring jobs, you can also use GPU-accelerated Spark pools with Microsoft's [Hummingbird library](https://github.com/Microsoft/hummingbird). With Hummingbird, users can take their traditional, tree-based ML models and compile them into tensor computations. Hummingbird allows users to then seamlessly leverage native hardware acceleration and neural network frameworks to accelerate their ML model scoring without needing to rewrite their models. ## Next steps+ - [Azure Synapse Analytics](../overview-what-is.md)
synapse-analytics Apache Spark Rapids Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-rapids-gpu.md
Most Spark jobs can see improved performance through tuning configuration settin
### Workspace level
-Every Azure Synapse workspace comes with a default quota of 50 GPU vCores. In order to increase your quota of GPU cores, send an email to AzureSynapseGPU@microsoft.com with your workspace name, the region, and the total GPU quota required for your workload.
+Every Azure Synapse workspace comes with a default quota of 50 GPU vCores. In order to increase your quota of GPU cores, please [submit a support request through the Azure portal](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-create-support-ticket.md).
## Next steps - [Azure Synapse Analytics](../overview-what-is.md)
synapse-analytics Sql Data Warehouse Manage Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
WHERE
AND blocking.state = 'Granted' ORDER BY ObjectLockRequestTime ASC;
+
+```
## Retrieve query text from waiting and blocking queries
virtual-desktop Data Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/data-locations.md
Azure Virtual Desktop stores various information for service objects, such as ho
## Customer input
-To set up Azure Virtual Desktop, you must create host pools and other service objects. During configuration, you must enter information such as the host pool name, application group name, and so on. This information is considered "customer input." Customer input is stored in the geography associated with the Azure region the resource is created in. The stored data includes all data that you input into the host pool deployment process and any data you add after deployment while making configuration changes to Azure Virtual Desktop objects. Basically, stored data is the same data you can access using the Azure Virtual Desktop portal, PowerShell, or Azure command-line interface (CLI).
+To set up Azure Virtual Desktop, you must create host pools and other service objects. During configuration, you must enter information such as the host pool name, application group name, and so on. This information is considered "customer input." Customer input is stored in the geography associated with the Azure region the resource is created in. The stored data includes all data that you input into the host pool deployment process and any data you add after deployment while making configuration changes to Azure Virtual Desktop objects. Basically, stored data is the same data you can access using the Azure Virtual Desktop portal, PowerShell, or Azure command-line interface (CLI). For example, you can review the [available PowerShell commands](/powershell/module/az.desktopvirtualization/?view=azps-8.0.0&preserve-view=true) to get an idea of what customer input data the Azure Virtual Desktop service stores.
Azure Resource Manager paths to service objects are considered organizational information, so data residency doesn't apply to them. Data about Azure Resource Manager paths is stored outside of the chosen geography.
virtual-machines Av1 Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/av1-series-retirement.md
Title: Av1-series retirement
-description: Retirement information for the Av1 series VM sizes.
-
+description: Retirement information for the Av1 series virtual machine sizes. Before retirement, migrate your workloads to Av2-series virtual machines.
+ - Previously updated : 07/26/2021-+ Last updated : 06/08/2022++ # Av1-series retirement
-On 31 August 2024, we'll retire Basic and Standard A-series VMs. Before that date, you'll need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
+On August 31, 2024, we'll retire Basic and Standard A-series virtual machines (VMs). Before that date, migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
> [!NOTE]
-> In some cases, users must deallocate the VM prior to resizing. This can happen if the new size is not available on the hardware cluster that is currently hosting the VM.
+> In some cases, you must deallocate the VM prior to resizing. This can happen if the new size is not available on the hardware cluster that is currently hosting the VM.
+## Migrate workloads to Av2-series VMs
-## Migrate workloads from Basic and Standard A-series VMs to Av2-series VMs
-
-You can resize your virtual machines to the Av2-series using the [Azure portal, PowerShell, or the CLI](resize-vm.md). Below are examples on how to resize your VM using Azure portal and PowerShell.
+You can resize your virtual machines to the Av2-series using the [Azure portal, PowerShell, or the CLI](resize-vm.md). The following examples show how to resize your VM using the Azure portal and PowerShell.
> [!IMPORTANT]
-> Resizing the virtual machine will result in a restart. It is suggested to perform actions that will result in a restart during off-peak business hours.
+> Resizing a virtual machine results in a restart. We recommend that you perform actions that result in a restart during off-peak business hours.
+
+### Azure portal
-### Azure portal
1. Open the [Azure portal](https://portal.azure.com).
-1. Type **virtual machines** in the search.
+1. Type *virtual machines* in the search.
1. Under **Services**, select **Virtual machines**.
1. In the **Virtual machines** page, select the virtual machine you want to resize.
1. In the left menu, select **size**.
1. Pick a new Av2 size from the list of available sizes and select **Resize**.

### Azure PowerShell
-1. Set the resource group and VM name variables. Replace the values with information of the VM you want to resize.
+
+1. Set the resource group and VM name variables. Replace the values with information of the VM you want to resize.
```powershell
$resourceGroup = "myResourceGroup"
$vmName = "myVM"
```
-2. List the VM sizes that are available on the hardware cluster where the VM is hosted.
+
+1. List the VM sizes that are available on the hardware cluster where the VM is hosted.
```powershell
Get-AzVMSize -ResourceGroupName $resourceGroup -VMName $vmName
```
-3. Resize the VM to the new size.
+1. Resize the VM to the new size.
```powershell
$vm = Get-AzVM -ResourceGroupName $resourceGroup -VMName $vmName
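# The remaining lines are a minimal sketch (the "Standard_A2_v2" target size is an
# illustrative value, not from the original article): set the new size on the VM
# object, then apply the change.
$vm.HardwareProfile.VmSize = "Standard_A2_v2"
Update-AzVM -VM $vm -ResourceGroupName $resourceGroup
```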
You can resize your virtual machines to the Av2-series using the [Azure portal,
## Help and support
-If you have questions, ask community experts in [Microsoft Q&A](/answers/topics/azure-virtual-machines.html). If you have a support plan and need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest):
+If you have questions, ask community experts in [Microsoft Q&A](/answers/topics/azure-virtual-machines.html). If you have a support plan and need technical help, create a support request:
+
+1. In the [Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) page, select **Create a support request**. Follow the **New support request** page instructions. Use the following values:
+ * For **Issue type**, select **Technical**.
+ * For **Service**, select **My services**.
+ * For **Service type**, select **Virtual Machine running Windows/Linux**.
+ * For **Resource**, select your VM.
+ * For **Problem type**, select **Assistance with resizing my VM**.
+ * For **Problem subtype**, select the option that applies to you.
-1. For Issue type, select Technical.
-1. For Subscription, select your subscription.
-1. For Service, click My services.
-1. For Service type, select Virtual Machine running Windows/Linux.
-1. For Summary, enter a summary of your request.
-1. For Problem type, select Assistance with resizing my VM.
-1. For Problem subtype, select the option that applies to you.
+Follow the instructions in the **Solutions** and **Details** tabs, as applicable, and then select **Review + create**.
## Next steps

Learn more about the [Av2-series VMs](av2-series.md)
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared-enable.md
description: Configure an Azure managed disk with shared disks so that you can s
Previously updated : 01/13/2022 Last updated : 06/09/2022
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 06/07/2022 Last updated : 06/09/2022
VMs in the cluster can read or write to their attached disk based on the reserva
Shared managed disks offer shared block storage that can be accessed from multiple VMs; these are exposed as logical unit numbers (LUNs). LUNs are then presented to an initiator (VM) from a target (disk). These LUNs look like direct-attached-storage (DAS) or a local drive to the VM.
-Shared managed disks do not natively offer a fully managed file system that can be accessed using SMB/NFS. You need to use a cluster manager, like Windows Server Failover Cluster (WSFC) or Pacemaker, that handles cluster node communication and write locking.
+Shared managed disks don't natively offer a fully managed file system that can be accessed using SMB/NFS. You need to use a cluster manager, like Windows Server Failover Cluster (WSFC) or Pacemaker, that handles cluster node communication and write locking.
## Limitations
Shared managed disks do not natively offer a fully managed file system that can
Shared disks support several operating systems. See the [Windows](#windows) or [Linux](#linux) sections for the supported operating systems.
+## Billing implications
+
+When you share a disk, your billing could be impacted in two different ways, depending on the type of disk.
+
+For shared premium SSDs, in addition to the cost of the disk's tier, there's an extra charge that increases with each VM the SSD is mounted to. See [managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) for details.
+
+Ultra disks don't have an extra charge for each VM that they're mounted to. They're billed on the total IOPS and MBps that the disk is configured for. Normally, an ultra disk has two performance throttles that determine its total IOPS/MBps. However, when configured as a shared ultra disk, two more performance throttles are exposed, for a total of four. These two additional throttles allow for increased performance at an extra expense, and each throttle has a default value, which raises the performance and cost of the disk.
+
+The four performance throttles a shared ultra disk has are diskIOPSReadWrite, diskMBpsReadWrite, diskIOPSReadOnly, and diskMBpsReadOnly. Each performance throttle can be configured to change the performance of your disk. The performance for a shared ultra disk is calculated as follows: total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly) and total provisioned throughput MBps (diskMBpsReadWrite + diskMBpsReadOnly).
+
+Once you've determined your total provisioned IOPS and total provisioned throughput, you can use them in the [pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=managed-disks) to determine the cost of an ultra shared disk.
+ ## Disk sizes [!INCLUDE [virtual-machines-disks-shared-sizes](../../includes/virtual-machines-disks-shared-sizes.md)]
Linux clusters can use cluster managers such as [Pacemaker](https://wiki.cluster
The following diagram illustrates a sample 2-node clustered database application that uses SCSI PR to enable failover from one node to the other.
-![Two node cluster. An application running on the cluster is handling access to the disk](media/virtual-machines-disks-shared-disks/shared-disk-updated-two-node-cluster-diagram.png)
+![Two node cluster consisting of Azure VM1, VM2, and a disk shared between them. An application running on the cluster handles access to the disk.](media/virtual-machines-disks-shared-disks/shared-disk-updated-two-node-cluster-diagram.png)
The flow is as follows: 1. The clustered application running on both Azure VM1 and VM2 registers its intent to read or write to the disk. 1. The application instance on VM1 then takes exclusive reservation to write to the disk.
-1. This reservation is enforced on your Azure disk and the database can now exclusively write to the disk. Any writes from the application instance on VM2 will not succeed.
+1. This reservation is enforced on your Azure disk and the database can now exclusively write to the disk. Any writes from the application instance on VM2 won't succeed.
1. If the application instance on VM1 goes down, the instance on VM2 can now initiate a database failover and take-over of the disk. 1. This reservation is now enforced on the Azure disk and the disk will no longer accept writes from VM1. It will only accept writes from VM2. 1. The clustered application can complete the database failover and serve requests from VM2.
The flow is as follows:
### Ultra disks reservation flow
-Ultra disks offer an additional throttle, for a total of two throttles. Due to this, ultra disks reservation flow can work as described in the earlier section, or it can throttle and distribute performance more granularly.
+Ultra disks offer two extra throttles, for a total of four throttles. Because of this, the ultra disk reservation flow can work as described in the earlier section, or it can throttle and distribute performance more granularly.
:::image type="content" source="media/virtual-machines-disks-shared-disks/ultra-reservation-table.png" alt-text="An image of a table that depicts the `ReadOnly` or `Read/Write` access for Reservation Holder, Registered, and Others.":::
With premium SSD, the disk IOPS and throughput is fixed, for example, IOPS of a
### Ultra disk performance throttles
-Ultra disks have the unique capability of allowing you to set your performance by exposing modifiable attributes and allowing you to modify them. By default, there are only two modifiable attributes but, shared ultra disks have two additional attributes.
+Ultra disks have the unique capability of letting you set your performance by exposing modifiable attributes. By default, there are only two modifiable attributes, but shared ultra disks have two more attributes.
|Attribute |Description | |||
-|DiskIOPSReadWrite |The total number of IOPS allowed across all VMs mounting the share disk with write access. |
+|DiskIOPSReadWrite |The total number of IOPS allowed across all VMs mounting the shared disk with write access. |
|DiskMBpsReadWrite |The total throughput (MB/s) allowed across all VMs mounting the shared disk with write access. | |DiskIOPSReadOnly* |The total number of IOPS allowed across all VMs mounting the shared disk as `ReadOnly`. | |DiskMBpsReadOnly* |The total throughput (MB/s) allowed across all VMs mounting the shared disk as `ReadOnly`. | \* Applies to shared ultra disks only
-The following formulas explain how the performance attributes can be set, since they are user modifiable:
+The following formulas explain how the performance attributes can be set, since they're user modifiable:
- DiskIOPSReadWrite/DiskIOPSReadOnly: - IOPS limits of 300 IOPS/GiB, up to a maximum of 160 K IOPS per disk
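As a rough illustration of this limit (plain arithmetic only; the 1,024-GiB size is an assumed example value):

```powershell
# 300 IOPS per GiB, capped at 160,000 IOPS per disk
$sizeGiB = 1024
$maxIops = [Math]::Min(300 * $sizeGiB, 160000)
Write-Output "A $sizeGiB GiB ultra disk supports up to $maxIops IOPS"
```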
The following is an example of a 4-node Linux cluster with a single writer and t
:::image type="content" source="media/virtual-machines-disks-shared-disks/ultra-four-node-example.png" alt-text="Four node ultra throttling example":::
-#### Ultra pricing
+##### Ultra pricing
-Ultra shared disks are priced based on provisioned capacity, total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly) and total provisioned Throughput MBps (diskMBpsReadWrite + diskMBpsReadOnly). There is no extra charge for each additional VM mount. For example, an ultra shared disk with the following configuration (diskSizeGB: 1024, DiskIOPSReadWrite: 10000, DiskMBpsReadWrite: 600, DiskIOPSReadOnly: 100, DiskMBpsReadOnly: 1) is charged with 1024 GiB, 10100 IOPS, and 601 MBps regardless of whether it is mounted to two VMs or five VMs.
+Ultra shared disks are priced based on provisioned capacity, total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly) and total provisioned Throughput MBps (diskMBpsReadWrite + diskMBpsReadOnly). There's no extra charge for each additional VM mount. For example, an ultra shared disk with the following configuration (diskSizeGB: 1024, DiskIOPSReadWrite: 10000, DiskMBpsReadWrite: 600, DiskIOPSReadOnly: 100, DiskMBpsReadOnly: 1) is charged with 1024 GiB, 10100 IOPS, and 601 MBps regardless of whether it is mounted to two VMs or five VMs.
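As a quick check of how those totals are derived, here's a minimal sketch that reproduces the billed totals with plain arithmetic, using the example values above (the variable names are illustrative, not API parameters):

```powershell
# Example shared ultra disk configuration from the paragraph above
$diskIOPSReadWrite = 10000
$diskIOPSReadOnly  = 100
$diskMBpsReadWrite = 600
$diskMBpsReadOnly  = 1

# Billing is based on these totals, regardless of how many VMs mount the disk
$totalIOPS = $diskIOPSReadWrite + $diskIOPSReadOnly   # 10100 IOPS
$totalMBps = $diskMBpsReadWrite + $diskMBpsReadOnly   # 601 MBps
Write-Output "Billed for $totalIOPS provisioned IOPS and $totalMBps provisioned MBps"
```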
## Next steps
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md
While it is possible to create custom VM images by hand or by other tools, the p
### Infrastructure As Code - There is no need to manage long-term infrastructure (*like Storage Accounts to hold customization data*) or transient infrastructure (*like temporary Virtual Machine to build the image*). -- Image Builder stores your VM image build specification and customization artifacts as Azure resources removing the need of maintaining offline definitions and the risk of environment drifts caused by accidental deletions or updates.
+- Image Builder stores your VM image build artifacts as Azure resources which removes the need to maintain offline definitions and the risk of environment drifts caused by accidental deletions or updates.
### Security
While it is possible to create custom VM images by hand or by other tools, the p
- You do not have to make your customization artifacts publicly accessible for Image Builder to be able to fetch them. Image Builder can use your [Azure Managed Identity](../active-directory/managed-identities-azure-resources/overview.md) to fetch these resources and you can restrict the privileges of this identity as tightly as required using Azure-RBAC. This not only means you can keep your artifacts secret, but they also cannot be tampered with by unauthorized actors. - Copies of customization artifacts, transient compute & storage resources, and resulting images are all stored securely within your subscription with access controlled by Azure-RBAC. This includes the build VM used to create the customized image and ensuring your customization scripts and files are not being copied to an unknown VM in an unknown subscription. Furthermore, you can achieve a high degree of isolation from other customers' workloads using [Isolated VM offerings](./isolation.md) for the build VM. - You can connect Image Builder to your existing virtual networks so you can communicate with existing configuration servers (DSC, Chef, Puppet, etc.), file shares, or any other routable servers & services.-- You can configure Image Builder to assign your User Assigned Identities to the Image Builder Build VM (*that is created by the Image Builder service in your subscription and is used to build and customize the image*). You can then use these identities at customization time to access Azure resources, including secrets, in your subscription. There is no need to assign Image Builder direct access to those resources.
+- You can configure Image Builder to assign your User Assigned Identities to the Image Builder Build VM. The Image Builder Build VM is created by the Image Builder service in your subscription and is used to build and customize the image. You can then use these identities at customization time to access Azure resources, including secrets, in your subscription. There is no need to assign Image Builder direct access to those resources.
## Regions
The Azure Image Builder Service is available in the following regions: regions.
- East Asia - Korea Central - South Africa North
+- USGov Arizona (Public Preview)
+- USGov Virginia (Public Preview)
+
+> [!IMPORTANT]
+> Register the feature "Microsoft.VirtualMachineImages/FairfaxPublicPreview" to access the Azure Image Builder public preview in Fairfax regions (USGov Arizona and USGov Virginia).
+
+Use the following command to register the feature for Azure Image Builder in Fairfax regions (USGov Arizona and USGov Virginia).
+```azurecli-interactive
+az feature register --namespace Microsoft.VirtualMachineImages --name FairfaxPublicPreview
+```
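If you want to confirm that the registration has completed, one option is to query the feature state. This is a sketch using Az PowerShell (the Az.Resources module) rather than the Azure CLI command above:

```powershell
# Shows the registration state of the FairfaxPublicPreview feature
Get-AzProviderFeature -ProviderNamespace Microsoft.VirtualMachineImages -FeatureName FairfaxPublicPreview
```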
## OS support
Azure Image Builder will support Azure Marketplace base OS images:
- Windows 10 RS5 Enterprise/Enterprise multi-session/Professional - Windows 2016 - Windows 2019
+- CBL-Mariner
>[!IMPORTANT] > Listed operating systems have been tested and now work with Azure Image Builder. However, Azure Image Builder should work with any Linux or Windows image in the marketplace.
virtual-machines Add Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/add-disk.md
Title: Add a data disk to Linux VM using the Azure CLI description: Learn to add a persistent data disk to your Linux VM with the Azure CLI-+ Previously updated : 05/12/2021-- Last updated : 06/08/2022+ + # Add a disk to a Linux VM **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets This article shows you how to attach a persistent disk to your VM so that you can preserve your data - even if your VM is reprovisioned due to maintenance or resizing. - ## Attach a new disk to a VM If you want to add a new, empty data disk on your VM, use the [az vm disk attach](/cli/azure/vm/disk) command with the `--new` parameter. If your VM is in an Availability Zone, the disk is automatically created in the same zone as the VM. For more information, see [Overview of Availability Zones](../../availability-zones/az-overview.md). The following example creates a disk named *myDataDisk* that is 50 Gb in size:
az vm disk attach \
--size-gb 50 ```
+### Lower latency
+
+In select regions, the disk attach latency has been reduced, so you'll see an improvement of up to 15%. This is useful if you have planned/unplanned failovers between VMs, you're scaling your workload, or are running a high scale stateful workload such as Azure Kubernetes Service. However, this improvement is limited to the explicit disk attach command, `az vm disk attach`. You won't see the performance improvement if you call a command that may implicitly perform an attach, like `az vm update`. You don't need to take any action other than calling the explicit attach command to see this improvement.
++ ## Attach an existing disk To attach an existing disk, find the disk ID and pass the ID to the [az vm disk attach](/cli/azure/vm/disk) command. The following example queries for a disk named *myDataDisk* in *myResourceGroup*, then attaches it to the VM named *myVM*:
virtual-machines Detach Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/detach-disk.md
Previously updated : 07/18/2018 Last updated : 06/08/2022
az vm disk detach \
The disk stays in storage but is no longer attached to a virtual machine.
+### Lower latency
+
+In select regions, the disk detach latency has been reduced, so you'll see an improvement of up to 15%. This is useful if you have planned/unplanned failovers between VMs, you're scaling your workload, or are running a high scale stateful workload such as Azure Kubernetes Service. However, this improvement is limited to the explicit disk detach command, `az vm disk detach`. You won't see the performance improvement if you call a command that may implicitly perform a detach, like `az vm update`. You don't need to take any action other than calling the explicit detach command to see this improvement.
++ ## Detach a data disk using the portal
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
The location is the region where the custom image will be created. The following
- East Asia - Korea Central - South Africa North
+- USGov Arizona (Public Preview)
+- USGov Virginia (Public Preview)
+> [!IMPORTANT]
+> Register the feature "Microsoft.VirtualMachineImages/FairfaxPublicPreview" to access the Azure Image Builder public preview in Fairfax regions (USGov Arizona and USGov Virginia).
+
+Use the following command to register the feature for Azure Image Builder in Fairfax regions (USGov Arizona and USGov Virginia).
+```azurecli-interactive
+az feature register --namespace Microsoft.VirtualMachineImages --name FairfaxPublicPreview
+```
```json "location": "<region>",
virtual-machines Using Cloud Init https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/using-cloud-init.md
Once the VM has been provisioned, cloud-init will run through all the modules an
> [!NOTE] > Not every module failure results in a fatal cloud-init overall configuration failure. For example, using the `runcmd` module, if the script fails, cloud-init will still report provisioning succeeded because the runcmd module executed.
-For more details of cloud-init logging, refer to the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/topics/logging.html)
+For more details of cloud-init logging, refer to the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/topics/logging.html)
+## Telemetry
+cloud-init collects usage data and sends it to Microsoft to help improve our products and services. Telemetry is only collected during the provisioning process (first boot of the VM). The data collected helps us investigate provisioning failures and monitor performance and reliability. Data collected does not include any personally identifiable information. Read our [privacy statement](http://go.microsoft.com/fwlink/?LinkId=521839) to learn more. Some examples of telemetry being collected are (this is not an exhaustive list): OS-related information (cloud-init version, distro version, kernel version), performance metrics of essential VM provisioning actions (time to obtain DHCP lease, time to retrieve metadata necessary to configure the VM, etc.), cloud-init log, and dmesg log.
+
+Telemetry collection is currently enabled for a majority of our marketplace images that use cloud-init. It is enabled by specifying KVP telemetry reporter for cloud-init. In a majority of Azure marketplace images, this configuration can be found in the file /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg. Removing this file during image preparation will disable telemetry collection for any VM created from this image.
+
+Sample content of 10-azure-kvp.cfg
+```
+reporting:
+ logging:
+ type: log
+ telemetry:
+ type: hyperv
+```
## Next steps [Troubleshoot issues with cloud-init](cloud-init-troubleshooting.md).
virtual-machines Attach Disk Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/attach-disk-ps.md
Previously updated : 10/16/2018 Last updated : 06/08/2022
First, review these tips:
This article uses PowerShell within the [Azure Cloud Shell](../../cloud-shell/overview.md), which is constantly updated to the latest version. To open the Cloud Shell, select **Try it** from the top of any code block.
+## Lower latency
+
+In select regions, the disk attach latency has been reduced, so you'll see an improvement of up to 15%. This is useful if you have planned/unplanned failovers between VMs, you're scaling your workload, or are running a high scale stateful workload such as Azure Kubernetes Service. However, this improvement is limited to the explicit disk attach command, `Add-AzVMDataDisk`. You won't see the performance improvement if you call a command that may implicitly perform an attach, like `Update-AzVM`. You don't need to take any action other than calling the explicit attach command to see this improvement.
++ ## Add an empty data disk to a virtual machine This example shows how to add an empty data disk to an existing virtual machine.
virtual-machines Detach Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/detach-disk.md
Title: Detach a data disk from a Windows VM - Azure description: Detach a data disk from a virtual machine in Azure using the Resource Manager deployment model.-+ Previously updated : 03/03/2021- Last updated : 06/08/2022+
When you no longer need a data disk that's attached to a virtual machine, you ca
If you want to use the existing data on the disk again, you can reattach it to the same virtual machine, or another one.
-
- ## Detach a data disk using PowerShell You can *hot* remove a data disk using PowerShell, but make sure nothing is actively using the disk before detaching it from the VM.
Update-AzVM `
The disk stays in storage but is no longer attached to a virtual machine.
+### Lower latency
+
+In select regions, the disk detach latency has been reduced, so you'll see an improvement of up to 15%. This is useful if you have planned/unplanned failovers between VMs, you're scaling your workload, or are running a high scale stateful workload such as Azure Kubernetes Service. However, this improvement is limited to the explicit disk detach command, `Remove-AzVMDataDisk`. You won't see the performance improvement if you call a command that may implicitly perform a detach, like `Update-AzVM`. You don't need to take any action other than calling the explicit detach command to see this improvement.
++ ## Detach a data disk using the portal You can *hot* remove a data disk, but make sure nothing is actively using the disk before detaching it from the VM.
You can *hot* remove a data disk, but make sure nothing is actively using the di
1. In the **Disks** pane, to the far right of the data disk that you would like to detach, select the **X** button to detach. 1. Select **Save** on the top of the page to save your changes.
-The disk stays in storage but is no longer attached to a virtual machine. The disk is not deleted.
+The disk stays in storage but is no longer attached to a virtual machine. The disk isn't deleted.
## Next steps
virtual-machines Automation Configure Extra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-extra-disks.md
Title: Add more disks to SAP deployment automation configuration
-description: Configure more disks for your system in the SAP deployment automation framework on Azure. Add extra disks to a new system, or an existing system.
+ Title: Custom disk configurations
+description: Provide custom disk configurations for your system in the SAP deployment automation framework on Azure. Add extra disks to a new system, or an existing system.
Previously updated : 11/17/2021 Last updated : 06/09/2022
The table below shows the default disk configuration for HANA systems.
| M208ms_v2 | Standard_M208ms_v2 | P10 (128 GB) | 4 P40 (2048 GB) | 3 P15 (256 GB) | P30 (1024 GB) | P6 (64 GB) | 3 P40 (2048 GB) | | M416s_v2 | Standard_M416s_v2 | P10 (128 GB) | 4 P40 (2048 GB) | 3 P15 (256 GB) | P30 (1024 GB) | P6 (64 GB) | 3 P40 (2048 GB) | | M416ms_v2 | Standard_M416m_v2 | P10 (128 GB) | 4 P50 (4096 GB) | 3 P15 (256 GB) | P30 (1024 GB) | P6 (64 GB) | 4 P50 (4096 GB) |
+| E20ds_v4 | Standard_E20ds_v4 | P6 (64 GB) | 3 P10 (128 GB) | 1 Ultra (80 GB) | P15 (256 GB) | P6 (64 GB) | 1 P15 (256 GB) |
+| E20ds_v5 | Standard_E20ds_v5 | P6 (64 GB) | 3 P10 (128 GB) | 1 Ultra (80 GB) | P15 (256 GB) | P6 (64 GB) | 1 P15 (256 GB) |
+| E32ds_v4 | Standard_E32ds_v4 | P6 (64 GB) | 3 P10 (128 GB) | 1 Ultra (128 GB) | P15 (256 GB) | P6 (64 GB) | 1 P15 (256 GB) |
+| E32ds_v5 | Standard_E32ds_v5 | P6 (64 GB) | 3 P10 (128 GB) | 1 Ultra (128 GB) | P15 (256 GB) | P6 (64 GB) | 1 P15 (256 GB) |
+| E48ds_v4 | Standard_E48ds_v4 | P6 (64 GB) | 3 P15 (256 GB) | 1 Ultra (192 GB) | P20 (512 GB) | P6 (64 GB) | 1 P15 (256 GB) |
+| E48ds_v5 | Standard_E48ds_v4 | P6 (64 GB) | 3 P15 (256 GB) | 1 Ultra (192 GB) | P20 (512 GB) | P6 (64 GB) | 1 P15 (256 GB) |
+| E64ds_v3 | Standard_E64ds_v3 | P6 (64 GB) | 3 P15 (256 GB) | 1 Ultra (220 GB) | P20 (512 GB) | P6 (64 GB) | 1 P15 (256 GB) |
+| E64ds_v4 | Standard_E64ds_v4 | P6 (64 GB) | 3 P15 (256 GB) | 1 Ultra (256 GB) | P20 (512 GB) | P6 (64 GB) | 1 P15 (256 GB) |
+| E64ds_v5 | Standard_E64ds_v5 | P6 (64 GB) | 3 P15 (256 GB) | 1 Ultra (256 GB) | P20 (512 GB) | P6 (64 GB) | 1 P15 (256 GB) |
+| E96ds_v5 | Standard_E96ds_v4 | P6 (64 GB) | 3 P15 (256 GB) | 1 Ultra (256 GB) | P20 (512 GB) | P6 (64 GB) | 1 P15 (256 GB) |
### AnyDB databases
The table below shows the default disk configuration for AnyDB systems.
## Custom sizing file
-The disk sizing for an SAP system can be defined using a custom sizing file.
+The disk sizing for an SAP system can be defined using a custom sizing JSON file. The file is grouped into four sections: "db", "app", "scs", and "web". Each section contains a list of disk configuration names, for example "M32ts" and "M64s" for the database tier.
-Create a file using the structure shown below and save the file in the same folder as the parameter file for the system, for instance 'XO1_db_sizes.json'. Then, define the parameter `db_disk_sizes_filename` in the parameter file for the database tier. For example, `db_disk_sizes_filename = "XO1_db_sizes.json"`.
+These sections define the default virtual machine size and the list of disks to be deployed for each tier.
-The following sample code is an example configuration for the database tier. It defines three data disks (LUNs 0, 1, and 2), a log disk (LUN 9, using the Ultra SKU) and a backup disk (LUN 13, using the standard SSDN SKU).
+Create a file using the structure shown below and save the file in the same folder as the parameter file for the system, for instance 'XO1_sizes.json'. Then, define the parameter `custom_disk_sizes_filename` in the parameter file. For example, `custom_disk_sizes_filename = "XO1_sizes.json"`.
+
+> [!TIP]
+> The path to the disk configuration needs to be relative to the folder containing the tfvars file.
++
+The following sample code is an example configuration file. It defines three data disks (LUNs 0, 1, and 2), a log disk (LUN 9, using the Ultra SKU), and a backup disk (LUN 13, using the standard SSD SKU). The application tier servers (Application, Central Services, and Web Dispatchers) will be deployed with just a single 'sap' data disk.
```json {
The following sample code is an example configuration for the database tier. It
"lun_start" : 13 }
+ ]
+ }
+ },
+ "app" : {
+ "Default": {
+ "compute": {
+ "vm_size" : "Standard_D4s_v3"
+ },
+ "storage": [
+ {
+ "name" : "os",
+ "count" : 1,
+ "disk_type" : "Premium_LRS",
+ "size_gb" : 128,
+ "caching" : "ReadWrite"
+ },
+ {
+ "name" : "sap",
+ "count" : 1,
+ "disk_type" : "Premium_LRS",
+ "size_gb" : 128,
+ "caching" : "ReadWrite",
+ "write_accelerator" : false,
+ "lun_start" : 0
+ }
+
+ ]
+ }
+ },
+ "scs" : {
+ "Default": {
+ "compute": {
+ "vm_size" : "Standard_D4s_v3"
+ },
+ "storage": [
+ {
+ "name" : "os",
+ "count" : 1,
+ "disk_type" : "Premium_LRS",
+ "size_gb" : 128,
+ "caching" : "ReadWrite"
+ },
+ {
+ "name" : "sap",
+ "count" : 1,
+ "disk_type" : "Premium_LRS",
+ "size_gb" : 128,
+ "caching" : "ReadWrite",
+ "write_accelerator" : false,
+ "lun_start" : 0
+ }
+
+ ]
+ }
+ },
+ "web" : {
+ "Default": {
+ "compute": {
+ "vm_size" : "Standard_D4s_v3"
+ },
+ "storage": [
+ {
+ "name" : "os",
+ "count" : 1,
+ "disk_type" : "Premium_LRS",
+ "size_gb" : 128,
+ "caching" : "ReadWrite"
+ },
+ {
+ "name" : "sap",
+ "count" : 1,
+ "disk_type" : "Premium_LRS",
+ "size_gb" : 128,
+ "caching" : "ReadWrite",
+ "write_accelerator" : false,
+ "lun_start" : 0
+ }
+ ] } }
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
The table below contains the parameters that define the resource group.
> | -- | -- | - | > | `resource_group_name` | Name of the resource group to be created | Optional | > | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |--
+> | `resource_group_tags` | Tags to be associated to the resource group | Optional |
## SAP Virtual Hostname parameters
The database tier defines the infrastructure for the database tier, supported da
> | `database_high_availability` | Defines if the database tier is deployed highly available. | Optional | See [High availability configuration](automation-configure-system.md#high-availability-configuration) | > | `database_server_count` | Defines the number of database servers. | Optional | Default value is 1 | > | `database_vm_zones` | Defines the Availability Zones for the database servers. | Optional | |
-> | `database_size` | Defines the database sizing information. | Required | See [Custom Sizing](automation-configure-extra-disks.md) |
-> | `db_disk_sizes_filename` | Defines the custom database sizing. | Optional | See [Custom Sizing](automation-configure-extra-disks.md) |
+> | `db_sizing_dictionary_key` | Defines the database sizing information. | Required | See [Custom Sizing](automation-configure-extra-disks.md) |
+> | `db_disk_sizes_filename` | Defines the custom database sizing file name. | Optional | See [Custom Sizing](automation-configure-extra-disks.md) |
> | `database_vm_use_DHCP` | Controls if Azure subnet provided IP addresses should be used. | Optional | | > | `database_vm_db_nic_ips` | Defines the IP addresses for the database servers (database subnet). | Optional | | > | `database_vm_db_nic_secondary_ips` | Defines the secondary IP addresses for the database servers (database subnet). | Optional | |
The application tier defines the infrastructure for the application tier, which
> | - | | --| | > | `enable_app_tier_deployment` | Defines if the application tier is deployed | Optional | | > | `sid` | Defines the SAP application SID | Required | |
-> | `app_tier_vm_sizing` | Lookup value defining the VM SKU and the disk layout for tha application tier servers | Optional |
+> | `app_tier_sizing_dictionary_key` | Lookup value defining the VM SKU and the disk layout for the application tier servers | Optional |
> | `app_disk_sizes_filename` | Defines the custom disk size file for the application tier servers | Optional | See [Custom Sizing](automation-configure-extra-disks.md) | > | `app_tier_authentication_type` | Defines the authentication type for the application tier virtual machine(s) | Optional | | > | `app_tier_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) | Optional | |
The table below contains the networking parameters.
\* = Required For brown field deployments.
+## Key Vault Parameters
+
+If you want to use a key vault other than the workload zone key vault, you can specify it in the system's tfvars file.
+
+The table below defines the parameters used for defining the Key Vault information.
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | -- | | | -- |
+> | `user_keyvault_id` | Azure resource identifier for existing system credentials key vault | Optional | |
+> | `spn_keyvault_id` | Azure resource identifier for existing deployment credentials (SPNs) key vault | Optional | |
+> | `enable_purge_control_for_keyvaults` | Disables the purge protection for Azure key vaults. | Optional | Only use this for test environments |
++ ### Anchor virtual machine parameters The SAP deployment automation framework supports having an Anchor virtual machine. The anchor virtual machine will be the first virtual machine to be deployed and is used to anchor the proximity placement group.
By default the SAP System deployment uses the credentials from the SAP Workload
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | - | -- |
+> | Variable | Description | Type |
+> | - | - | -- |
> | `resource_offset` | Provides an offset for resource naming. The offset number for resource naming when creating multiple resources. The default value is 0, which creates a naming pattern of disk0, disk1, and so on. An offset of 1 creates a naming pattern of disk1, disk2, and so on. | Optional | > | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks using customer provided keys | Optional | > | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations | Optional | > | `license_type` | Specifies the license type for the virtual machines. | Possible values are `RHEL_BYOS` and `SLES_BYOS`. For Windows the possible values are `None`, `Windows_Client` and `Windows_Server`. | > | `use_zonal_markers` | Specifies if zonal Virtual Machines will include a zonal identifier. 'xooscs_z1_00l###' vs 'xooscs00l###'| Default value is true. |
+> | `proximityplacementgroup_names` | Specifies the names of the proximity placement groups | |
+> | `proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups| |
## NFS support
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
automation_username = "azureadm"
The table below defines the parameters used for defining the Key Vault information > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | - | - |
-> | `user_keyvault_id` | Azure resource identifier for the system credentials key vault | Optional |
-> | `spn_keyvault_id` | Azure resource identifier for the deployment credentials (SPNs) key vault | Optional |
-
+> | Variable | Description | Type | Notes |
+> | -- | | | -- |
+> | `user_keyvault_id` | Azure resource identifier for existing system credentials key vault | Optional | |
+> | `spn_keyvault_id` | Azure resource identifier for existing deployment credentials (SPNs) key vault | Optional | |
+> | `enable_purge_control_for_keyvaults` | Disables the purge protection for Azure key vaults. | Optional | Only use this for test environments |
## Private DNS
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 06/02/2022 Last updated : 06/08/2022
In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- June 08, 2022: Change in [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) to adjust timeouts when using NFSv4.1 (related to NFSv4.1 lease renewal) for more resilient Pacemaker configuration
- June 02, 2022: Change in the [SAP Deployment Guide](deployment-guide.md) to add a link to RHEL in-place upgrade documentation - June 02, 2022: Change in [HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md), [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) to add sizing considerations - May 11, 2022: Change in [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure](./sap-high-availability-guide-wsfc-shared-disk.md), [Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS](./sap-high-availability-infrastructure-wsfc-shared-disk.md) and [SAP ASCS/SCS instance multi-SID high availability with Windows server failover clustering and Azure shared disk](./sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md) to update instruction about the usage of Azure shared disk for SAP deployment with PPG.
virtual-machines High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files.md
vm-windows Previously updated : 06/02/2022 Last updated : 06/08/2022
Read the following SAP Notes and papers first:
* [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341) * [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure](https://access.redhat.com/articles/3252491) * [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
+* [NetApp NFS Best Practices](https://www.netapp.com/media/10720-tr-4067.pdf)
## Overview
The following items are prefixed with either **[A]** - applicable to all nodes,
# If using NFSv4.1 sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \ directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
- op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
+ op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=105 \
--group g-QAS_ASCS sudo pcs resource create vip_QAS_ASCS IPaddr2 \
The following items are prefixed with either **[A]** - applicable to all nodes,
# If using NFSv4.1 sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \ directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \
- op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
+ op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=105 \
--group g-QAS_AERS sudo pcs resource create vip_QAS_AERS IPaddr2 \
The following items are prefixed with either **[A]** - applicable to all nodes,
If using enqueue server 1 architecture (ENSA1), define the resources as follows: ```
- sudo pcs property set maintenance-mode=true
+ sudo pcs property set maintenance-mode=true
+ # If using NFSv3
sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \
The following items are prefixed with either **[A]** - applicable to all nodes,
op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_ASCS
+ # If using NFSv4.1
+ sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
+ InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
+ op monitor interval=20 on-fail=restart timeout=105 \
+ op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-QAS_ASCS
+ sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000
+ # If using NFSv3
sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \ InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \ AUTOMATIC_RECOVER=false IS_ERS=true \ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_AERS
-
+
+ # If using NFSv4.1
+ sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
+ InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
+ AUTOMATIC_RECOVER=false IS_ERS=true \
+ op monitor interval=20 on-fail=restart timeout=105 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-QAS_AERS
+ sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000 sudo pcs constraint location rsc_sap_QAS_ASCS00 rule score=2000 runs_ers_QAS eq 1 sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false
The following items are prefixed with either **[A]** - applicable to all nodes,
``` sudo pcs property set maintenance-mode=true
+ # If using NFSv3
sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \
The following items are prefixed with either **[A]** - applicable to all nodes,
op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_ASCS
+ # If using NFSv4.1
+ sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
+ InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 \
+ op monitor interval=20 on-fail=restart timeout=105 \
+ op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-QAS_ASCS
+
sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000
+ # If using NFSv3
sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \ InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \ AUTOMATIC_RECOVER=false IS_ERS=true \ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_AERS
-
+
+ # If using NFSv4.1
+ sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
+ InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
+ AUTOMATIC_RECOVER=false IS_ERS=true \
+ op monitor interval=20 on-fail=restart timeout=105 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-QAS_AERS
+ sudo pcs resource meta rsc_sap_QAS_ERS01 resource-stickiness=3000 sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000
The following items are prefixed with either **[A]** - applicable to all nodes,
If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641322](https://launchpad.support.sap.com/#/notes/2641322). > [!NOTE]
+ > The higher timeouts suggested when using NFSv4.1 are necessary due to a protocol-specific pause related to NFSv4.1 lease renewals.
+ > For more information, see [NFS in NetApp Best practices](https://www.netapp.com/media/10720-tr-4067.pdf).
> The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup. Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
virtual-machines High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files.md
vm-windows Previously updated : 06/02/2022 Last updated : 06/08/2022
Read the following SAP Notes and papers first:
The guides contain all required information to set up Netweaver HA and SAP HANA System Replication on-premises. Use these guides as a general baseline. They provide much more detailed information. * [SUSE High Availability Extension 12 SP3 Release Notes][suse-ha-12sp3-relnotes] * [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
+* [NetApp NFS Best Practices](https://www.netapp.com/media/10720-tr-4067.pdf)
## Overview
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo crm configure primitive fs_<b>QAS</b>_ASCS Filesystem device='<b>10.1.0.4</b>:/usrsap<b>qas</b>/usrsap<b>QAS</b>ascs' directory='/usr/sap/<b>QAS</b>/ASCS<b>00</b>' fstype='nfs' options='sec=sys,vers=4.1' \ op start timeout=60s interval=0 \ op stop timeout=60s interval=0 \
- op monitor interval=20s timeout=40s
+ op monitor interval=20s timeout=105s
sudo crm configure primitive vip_<b>QAS</b>_ASCS IPaddr2 \ params ip=<b>10.1.1.20</b> \
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo crm configure primitive fs_<b>QAS</b>_ERS Filesystem device='<b>10.1.0.4</b>:/usrsap<b>qas</b>/usrsap<b>QAS</b>ers' directory='/usr/sap/<b>QAS</b>/ERS<b>01</b>' fstype='nfs' options='sec=sys,vers=4.1' \ op start timeout=60s interval=0 \ op stop timeout=60s interval=0 \
- op monitor interval=20s timeout=40s
+ op monitor interval=20s timeout=105s
sudo crm configure primitive vip_<b>QAS</b>_ERS IPaddr2 \ params ip=<b>10.1.1.21</b> \
The following items are prefixed with either **[A]** - applicable to all nodes,
If using enqueue server 1 architecture (ENSA1), define the resources as follows: <pre><code>sudo crm configure property maintenance-mode="true"
-
+ # If using NFSv3
sudo crm configure primitive rsc_sap_<b>QAS</b>_ASCS<b>00</b> SAPInstance \ operations \$id=rsc_sap_<b>QAS</b>_ASCS<b>00</b>-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=<b>QAS</b>_ASCS<b>00</b>_<b>anftstsapvh</b> START_PROFILE="/sapmnt/<b>QAS</b>/profile/<b>QAS</b>_ASCS<b>00</b>_<b>anftstsapvh</b>" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10
-
+
+ # If using NFSv4.1
+ sudo crm configure primitive rsc_sap_<b>QAS</b>_ASCS<b>00</b> SAPInstance \
+ operations \$id=rsc_sap_<b>QAS</b>_ASCS<b>00</b>-operations \
+ op monitor interval=11 timeout=105 on-fail=restart \
+ params InstanceName=<b>QAS</b>_ASCS<b>00</b>_<b>anftstsapvh</b> START_PROFILE="/sapmnt/<b>QAS</b>/profile/<b>QAS</b>_ASCS<b>00</b>_<b>anftstsapvh</b>" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 failure-timeout=105 migration-threshold=1 priority=10
+
+ # If using NFSv3
sudo crm configure primitive rsc_sap_<b>QAS</b>_ERS<b>01</b> SAPInstance \ operations \$id=rsc_sap_<b>QAS</b>_ERS<b>01</b>-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=<b>QAS</b>_ERS<b>01</b>_<b>anftstsapers</b> START_PROFILE="/sapmnt/<b>QAS</b>/profile/<b>QAS</b>_ERS<b>01</b>_<b>anftstsapers</b>" AUTOMATIC_RECOVER=false IS_ERS=true \ meta priority=1000
-
+
+ # If using NFSv4.1
+ sudo crm configure primitive rsc_sap_<b>QAS</b>_ERS<b>01</b> SAPInstance \
+ operations \$id=rsc_sap_<b>QAS</b>_ERS<b>01</b>-operations \
+ op monitor interval=11 timeout=105 on-fail=restart \
+ params InstanceName=<b>QAS</b>_ERS<b>01</b>_<b>anftstsapers</b> START_PROFILE="/sapmnt/<b>QAS</b>/profile/<b>QAS</b>_ERS<b>01</b>_<b>anftstsapers</b>" AUTOMATIC_RECOVER=false IS_ERS=true \
+ meta priority=1000
+ sudo crm configure modgroup g-<b>QAS</b>_ASCS add rsc_sap_<b>QAS</b>_ASCS<b>00</b> sudo crm configure modgroup g-<b>QAS</b>_ERS add rsc_sap_<b>QAS</b>_ERS<b>01</b>
If using enqueue server 1 architecture (ENSA1), define the resources as follows:
<pre><code>sudo crm configure property maintenance-mode="true"
+ # If using NFSv3
sudo crm configure primitive rsc_sap_<b>QAS</b>_ASCS<b>00</b> SAPInstance \ operations \$id=rsc_sap_<b>QAS</b>_ASCS<b>00</b>-operations \ op monitor interval=11 timeout=60 on-fail=restart \
If using enqueue server 1 architecture (ENSA1), define the resources as follows:
AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000
+ # If using NFSv4.1
+ sudo crm configure primitive rsc_sap_<b>QAS</b>_ASCS<b>00</b> SAPInstance \
+ operations \$id=rsc_sap_<b>QAS</b>_ASCS<b>00</b>-operations \
+ op monitor interval=11 timeout=105 on-fail=restart \
+ params InstanceName=<b>QAS</b>_ASCS<b>00</b>_<b>anftstsapvh</b> START_PROFILE="/sapmnt/<b>QAS</b>/profile/<b>QAS</b>_ASCS<b>00</b>_<b>anftstsapvh</b>" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000
+
+ # If using NFSv3
sudo crm configure primitive rsc_sap_<b>QAS</b>_ERS<b>01</b> SAPInstance \ operations \$id=rsc_sap_<b>QAS</b>_ERS<b>01</b>-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=<b>QAS</b>_ERS<b>01</b>_<b>anftstsapers</b> START_PROFILE="/sapmnt/<b>QAS</b>/profile/<b>QAS</b>_ERS<b>01</b>_<b>anftstsapers</b>" AUTOMATIC_RECOVER=false IS_ERS=true
+ # If using NFSv4.1
+ sudo crm configure primitive rsc_sap_<b>QAS</b>_ERS<b>01</b> SAPInstance \
+ operations \$id=rsc_sap_<b>QAS</b>_ERS<b>01</b>-operations \
+ op monitor interval=11 timeout=105 on-fail=restart \
+ params InstanceName=<b>QAS</b>_ERS<b>01</b>_<b>anftstsapers</b> START_PROFILE="/sapmnt/<b>QAS</b>/profile/<b>QAS</b>_ERS<b>01</b>_<b>anftstsapers</b>" AUTOMATIC_RECOVER=false IS_ERS=true
+
sudo crm configure modgroup g-<b>QAS</b>_ASCS add rsc_sap_<b>QAS</b>_ASCS<b>00</b> sudo crm configure modgroup g-<b>QAS</b>_ERS add rsc_sap_<b>QAS</b>_ERS<b>01</b>
If using enqueue server 1 architecture (ENSA1), define the resources as follows:
If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+ > [!NOTE]
+ > The higher timeouts suggested when using NFSv4.1 are necessary due to a protocol-specific pause related to NFSv4.1 lease renewals.
+ > For more information, see [NFS in NetApp Best practices](https://www.netapp.com/media/10720-tr-4067.pdf).
+ > The timeouts in the above configuration may need to be adapted to the specific SAP setup.
+ Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running. <pre><code>sudo crm_mon -r
virtual-network-manager Create Virtual Network Manager Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-powershell.md
Title: 'Quickstart: Create a mesh network with Azure Virtual Network Manager using Azure PowerShell' description: Use this quickstart to learn how to create a mesh network with Virtual Network Manager using Azure PowerShell.--++ Last updated 11/02/2021
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Make sure you have the latest PowerShell modules, or you can use Azure Cloud Shell in the portal.
+* During preview, the `4.15.1-preview` version of `Az.Network` is required to access the required cmdlets.
* If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+> [!IMPORTANT]
+> Perform this quickstart using PowerShell locally, not through Azure Cloud Shell. The version of `Az.Network` in Azure Cloud Shell does not currently support the Azure Virtual Network Manager cmdlets.
+ ## Register subscription for public preview Use the following command to register your Azure subscription for Public Preview of Azure Virtual Network
Register-AzProviderFeature -FeatureName AllowAzureNetworkManager -ProviderNamesp
Install the latest *Az.Network* Azure PowerShell module using this command: ```azurepowershell-interactive
-Install-Module -Name Az.Network -AllowPrerelease
+ Install-Module -Name Az.Network -RequiredVersion 4.15.1-preview -AllowPrerelease
``` ## Create a resource group
New-AzResourceGroup @rg
1. Define the scope and access type this Azure Virtual Network Manager instance will have. You can choose to create the scope with subscriptions group or management group or a combination of both. Create the scope by using New-AzNetworkManagerScope. ```azurepowershell-interactive
- Import-Module -Name Az.Network -RequiredVersion "4.12.1"
+ Import-Module -Name Az.Network -RequiredVersion "4.15.1"
[System.Collections.Generic.List[string]]$subGroup = @() $subGroup.Add("/subscriptions/abcdef12-3456-7890-abcd-ef1234567890")
virtual-network-manager How To Create Mesh Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-mesh-network-powershell.md
Title: 'Create a mesh network topology with Azure Virtual Network Manager (Preview) - Azure PowerShell' description: Learn how to create a mesh network topology with Azure Virtual Network Manager using Azure PowerShell.--++ Last updated 11/02/2021
This section will help you create a network group containing the virtual network
}' ```
-1. Create the network group using either the static membership group (GroupMember) or the dynamic membership group (ConditionalMembership) define previously using New-AzNetworkManagerGroup.
+1. Create the network group using either the static membership group (GroupMember) or the dynamic membership group (ConditionalMembership) defined previously using New-AzNetworkManagerGroup.
```azurepowershell-interactive $ng = @{
virtual-wan About Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing-preference.md
Let's say there are flows from a virtual network VNET1 connected to Hub_1 to v
| Flow destination route-prefix | HRP of Hub_1 | HRP of Hub_2 | Path used by flow | All possible paths | Explanation | | | | | | ||
-| 10.61.1.5 | AS Path | N/A | 4 | 1,2,3,4 | Paths 1, 4 and 5 have the shortest AS Path but ER takes precedence over VPN, so path 4 is chosen. |
-| 10.61.1.5 | VPN | N/A | 1 | 1,2,3,4 | VPN route is preferred over ER, so paths 1 and 2 are preferred, but path 1 has the shorter AS Path. |
-| 10.61.1.5 | ER | N/A | 4 | 1,2,3,4 | ER routes 3 and 4 are selected, but path 4 has the shorter AS Path. |
+| 10.61.1.5 | AS Path | Any setting | 4 | 1,2,3,4 | Paths 1 and 4 have the shortest AS Path but for local routes ER takes precedence over VPN, so path 4 is chosen. |
+| 10.61.1.5 | VPN | Any setting | 1 | 1,2,3,4 | VPN route is preferred over ER due to HRP setting, so paths 1 and 2 are preferred, but path 1 has the shorter AS Path. |
+| 10.61.1.5 | ER | Any setting | 4 | 1,2,3,4 | ER routes 3 and 4 are preferred, but path 4 has the shorter AS Path. |
**When only remote routes are available:**
virtual-wan Azure Monitor Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/azure-monitor-insights.md
Previously updated : 06/22/2021 Last updated : 06/09/2022
You can select **View detailed metrics** to access the detailed **Metrics** page
## Next steps * To learn more, see [Metrics in Azure Monitor](../azure-monitor/essentials/data-platform-metrics.md).
-* For a full description of all the Virtual WAN metrics, see [Monitoring Virtual WAN](monitor-virtual-wan.md).
+* For a full description of all the Virtual WAN metrics, see [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md).
+* For additional Virtual WAN monitoring information, see [Monitoring Azure Virtual WAN](monitor-virtual-wan.md).
virtual-wan Monitor Bgp Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-bgp-dashboard.md
+
+ Title: 'Monitor S2S VPN BGP routes - BGP dashboard'
+
+description: Learn how to monitor BGP peers for site-to-site VPNs using the BGP dashboard.
+++ Last updated : 06/06/2022++
+# Monitor site-to-site VPN BGP routes using the BGP dashboard
+
+This article helps you monitor Virtual WAN site-to-site VPN BGP information using the **BGP Dashboard**. Using the BGP dashboard, you can monitor BGP peers, advertised routes, and learned routes. The BGP dashboard is available for site-to-site VPNs that are configured to use BGP. The BGP dashboard can be accessed on the page for the site that you want to monitor.
+
+## BGP dashboard
+
+The following steps walk you through one way to navigate to your site and open the BGP dashboard.
+
+1. Go to the **Azure portal -> your virtual WAN**.
+1. On your virtual WAN, in the left pane, under Connectivity, click **VPN sites**. On the VPN sites page, you can see the sites that are connected to your virtual WAN.
+1. Click the site that you want to view.
+1. On the page for the site, click **BGP Dashboard**.
+
+ :::image type="content" source="./media/monitor-bgp-dashboard/bgp-dashboard.png" alt-text="Screenshot shows the overview page for the site with the B G P dashboard highlighted." lightbox="./media/monitor-bgp-dashboard/bgp-dashboard.png":::
+
+## <a name="peers"></a>BGP peers
+
+1. To open the BGP Peers page, go to the **BGP Dashboard**.
+
+1. The **BGP Peers** page is the main view that you see when you open the BGP dashboard.
+
+ :::image type="content" source="./media/monitor-bgp-dashboard/bgp-peers.png" alt-text="Screenshot shows the B G P Peers page." lightbox="./media/monitor-bgp-dashboard/bgp-peers.png":::
+
+1. On the **BGP Peers** page, the following values are available:
+
+ |Value | Description|
+ |||
+ |Peer address| The BGP address of the remote connection. |
+ |Local address | The BGP address of the virtual WAN hub. |
+ | Gateway instance| The instance of the virtual WAN hub. |
+ |ASN| The Autonomous System Number. |
+ |Status | The status the peer is currently in.<br>Available statuses are: Connecting, Connected |
+ |Connected duration |The length of time the peer has been connected, in HH:MM:SS format. |
+ |Routes received |The number of routes received by the remote site. |
+ |Messages sent |The number of messages sent to the remote site. |
+ |Messages received | The number of messages received from the remote site. |
+
+## <a name="advertised"></a>Advertised routes
+
+The **Advertised Routes** page contains the routes that are being advertised to remote sites.
+
+1. On the **BGP Peers** page, click **Routes the site-to-site gateway is advertising** to show the **Advertised Routes** page.
+
+ :::image type="content" source="./media/monitor-bgp-dashboard/routes-advertising.png" alt-text="Screenshot shows B G P peers page with routes the site-to-site gateway is advertising highlighted." lightbox="./media/monitor-bgp-dashboard/routes-advertising.png":::
+
+1. On the **Advertised Routes** page, you can view the top 50 BGP routes. To view all routes, click **Download advertised routes**.
+
+ :::image type="content" source="./media/monitor-bgp-dashboard/advertised-routes.png" alt-text="Screenshot shows the Advertised Routes page with Download advertised routes highlighted." lightbox="./media/monitor-bgp-dashboard/advertised-routes.png":::
+
+1. On the **Advertised Routes** page, the following values are available:
+
+ |Value | Description|
+ |||
+ | Network |The address prefix that is being advertised. |
+ | Link Name | The name of the link. |
+ | Local address | A BGP address of the virtual WAN hub.|
+ | Next hop | The next hop address for the prefix. |
+ |AS Path | The BGP AS path attribute. |
+
+## <a name="learned"></a>Learned routes
+
+The **Learned Routes** page shows the routes that are learned.
+
+1. On the **BGP Peers** page, click **Routes the site-to-site gateway is learning** to show the **Learned Routes** page.
+
+ :::image type="content" source="./media/monitor-bgp-dashboard/routes-learning.png" alt-text="Screenshot shows B G P peers page with routes the site-to-site gateway is learning highlighted." lightbox="./media/monitor-bgp-dashboard/routes-learning.png":::
+
+1. On the **Learned Routes** page, you can view the top 50 BGP routes. To view all routes, click **Download learned routes**.
+
+ :::image type="content" source="./media/monitor-bgp-dashboard/learned-routes.png" alt-text="Screenshot shows the Learned Routes page with Download learned routes highlighted." lightbox="./media/monitor-bgp-dashboard/learned-routes.png":::
+
+1. On the **Learned Routes** page, the following values are available:
+
+ |Value | Description|
+ |||
+ | Network | The address prefix of the learned route. |
+ | Link Name |The name of the link. |
+ |Local address |A BGP address of the virtual WAN hub. |
+ |Source Peer |The address the route is being learned from. |
+ | AS Path | The BGP AS path attribute. |
+
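The BGP dashboard is a portal experience, but related counters (**BGP Peer Status**, **BGP Routes Advertised**, and **BGP Routes Learned**) are also exposed as Azure Monitor metrics on the site-to-site VPN gateway. The following is an illustrative PowerShell sketch; the resource ID is a placeholder, and the exact `-MetricName` string is an assumption that should be confirmed against the `Get-AzMetricDefinition` output:

```azurepowershell-interactive
# Placeholder resource ID for the Virtual WAN site-to-site VPN gateway.
$gatewayId = "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/vpnGateways/<GatewayName>"

# List the BGP-related metric names exposed by the gateway.
Get-AzMetricDefinition -ResourceId $gatewayId |
    Where-Object { $_.Name.Value -like "*Bgp*" } |
    ForEach-Object { $_.Name.Value }

# Query one of them; "BgpRoutesLearned" is assumed here and may differ in your environment.
Get-AzMetric -ResourceId $gatewayId -MetricName "BgpRoutesLearned" -AggregationType Maximum
```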
+## Next steps
+
+For more monitoring information, see [Monitoring Azure Virtual WAN](monitor-virtual-wan.md).
virtual-wan Monitor Virtual Wan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md
+
+ Title: 'Monitoring Azure Virtual WAN - Data reference'
+description: Learn about Azure Virtual WAN logs and metrics using Azure Monitor.
+++ Last updated : 06/08/2022+++++
+# Monitoring Virtual WAN data reference
+
+This article provides a reference of log and metric data collected to analyze the performance and availability of Virtual WAN. See [Monitoring Virtual WAN](monitor-virtual-wan.md) for details on collecting and analyzing monitoring data for Virtual WAN.
+
+## <a name="metrics"></a>Metrics
+
+Metrics in Azure Monitor are numerical values that describe some aspect of a system at a particular time. Metrics are collected every minute, and are useful for alerting because they can be sampled frequently. An alert can be fired quickly with relatively simple logic.
+
+### <a name="hub-router-metrics"></a>Virtual hub router metrics
+
+The following metric is available for virtual hub router within a virtual hub:
+
+| Metric | Description|
+| | |
+| **Virtual Hub Data Processed** | Data in bytes/second on how much traffic traverses the virtual hub router in a given time period. Only the following flows use the virtual hub router: VNet-to-VNet traffic (same hub and inter-hub), and branch-to-VNet inter-hub traffic via VPN or ExpressRoute gateways.|
+
+#### PowerShell steps
+
+To query, use the following example PowerShell commands. The necessary fields are explained below the example.
+
+**Step 1:**
+
+```azurepowershell-interactive
+$MetricInformation = Get-AzMetric -ResourceId "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/VirtualHubs/<VirtualHubName>" -MetricName "VirtualHubDataProcessed" -TimeGrain 00:05:00 -StartTime 2022-2-20T01:00:00Z -EndTime 2022-2-20T01:30:00Z -AggregationType Average
+```
+
+**Step 2:**
+
+```azurepowershell-interactive
+$MetricInformation.Data
+```
+
+* **Resource ID** - Your virtual hub's Resource ID can be found on the Azure portal. Navigate to the virtual hub page within vWAN and select JSON View under Essentials.
+
+* **Metric Name** - Refers to the name of the metric you're querying, which in this case is called 'VirtualHubDataProcessed'. This metric shows all the data that the virtual hub router has processed in the selected time period of the hub.
+
+* **Time Grain** - Refers to the frequency at which you want to see the aggregation. The current command uses a 5-minute aggregation interval. You can select 5M, 15M, 30M, 1H, 6H, 12H, or 1D.
+
+* **Start Time and End Time** - This time is based on UTC, so please ensure that you're entering UTC values when inputting these parameters. If these parameters aren't used, by default the past one hour's worth of data is shown.
+
+* **Aggregation Types** - Average/Minimum/Maximum/Total
+ * Average - Total average of bytes/sec per the selected time period.
+ * Minimum - Minimum bytes that were sent during the selected time grain period.
+ * Maximum - Maximum bytes that were sent during the selected time grain period.
+ * Total - Total bytes/sec that were sent during the selected time grain period.
+
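As a small illustration of working with the returned object (assuming the query above returned data points), the following sketch lists the time-stamped averages and reports the busiest five-minute interval in the window:

```azurepowershell-interactive
# Display each 5-minute average, then report the peak value over the queried window.
$MetricInformation.Data | Select-Object TimeStamp, Average
($MetricInformation.Data | Measure-Object -Property Average -Maximum).Maximum
```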
+### <a name="s2s-metrics"></a>Site-to-site VPN gateway metrics
+
+The following metrics are available for Virtual WAN site-to-site VPN gateways:
+
+#### Tunnel Packet Drop metrics
+
+| Metric | Description|
+| | |
+| **Tunnel Egress Packet Drop Count** | Count of Outgoing packets dropped by tunnel.|
+| **Tunnel Ingress Packet Drop Count** | Count of Incoming packets dropped by tunnel.|
+| **Tunnel NAT Packet Drops** | Number of NATed packets dropped on a tunnel by drop type and NAT rule.|
+| **Tunnel Egress TS Mismatch Packet Drop** | Outgoing packet drop count from traffic selector mismatch of a tunnel.|
+| **Tunnel Ingress TS Mismatch Packet Drop** | Incoming packet drop count from traffic selector mismatch of a tunnel.|
+
+#### IPSec metrics
+
+| Metric | Description|
+| | |
+| **Tunnel MMSA Count** | Number of MMSAs getting created or deleted.|
+| **Tunnel QMSA Count** | Number of IPSEC QMSAs getting created or deleted.|
+
+#### Routing metrics
+
+| Metric | Description|
+| | |
+| **BGP Peer Status** | BGP connectivity status per peer and per instance.|
+| **BGP Routes Advertised** | Number of routes advertised per peer and per instance.|
+| **BGP Routes Learned** | Number of routes learned per peer and per instance.|
+| **VNET Address Prefix Count** | Number of VNET address prefixes that are used/advertised by the gateway.|
+
+You can review per peer and instance metrics by selecting **Apply splitting** and choosing the preferred value.
+
+#### Traffic Flow metrics
+
+| Metric | Description|
+| | |
+| **Gateway Bandwidth** | Average site-to-site aggregate bandwidth of a gateway in bytes per second.|
+| **Tunnel Bandwidth** | Average bandwidth of a tunnel in bytes per second.|
+| **Tunnel Egress Bytes** | Outgoing bytes of a tunnel. |
+| **Tunnel Egress Packets** | Outgoing packet count of a tunnel. |
+| **Tunnel Ingress Bytes** | Incoming bytes of a tunnel.|
+| **Tunnel Ingress Packet** | Incoming packet count of a tunnel.|
+| **Tunnel Peak PPS** | Number of packets per second per link connection in the last minute.|
+| **Tunnel Flow Count** | Number of distinct flows created per link connection.|
+
+### <a name="p2s-metrics"></a>Point-to-site VPN gateway metrics
+
+The following metrics are available for Virtual WAN point-to-site VPN gateways:
+
+| Metric | Description|
+| | |
+| **Gateway P2S Bandwidth** | Average point-to-site aggregate bandwidth of a gateway in bytes per second. |
+| **P2S Connection Count** | Point-to-site connection count of a gateway. To ensure you're viewing accurate metrics in Azure Monitor, select **Sum** as the **Aggregation Type** for **P2S Connection Count**. You may also select **Max** if you also split by **Instance**. |
+| **User VPN Routes Count** | Number of User VPN routes configured on the VPN gateway. This metric can be broken down into **Static** and **Dynamic** routes. |
+
+### <a name="er-metrics"></a>Azure ExpressRoute gateway metrics
+
+The following metrics are available for Azure ExpressRoute gateways:
+
+| Metric | Description|
+| | |
+| **BitsInPerSecond** | Bits per second ingressing Azure via ExpressRoute gateway that can be further split for specific connections. |
+| **BitsOutPerSecond** | Bits per second egressing Azure via ExpressRoute gateway that can be further split for specific connections. |
+| **Bits Received Per Second** | Total Bits received on ExpressRoute gateway per second. |
+| **CPU Utilization** | CPU Utilization of the ExpressRoute gateway.|
+| **Packets per second** | Total Packets received on ExpressRoute gateway per second.|
+| **Count of routes advertised to peer**| Count of Routes Advertised to Peer by ExpressRoute gateway. |
+| **Count of routes learned from peer**| Count of Routes Learned from Peer by ExpressRoute gateway.|
+| **Frequency of routes changed** | Frequency of Route changes in ExpressRoute gateway.|
+| **Number of VMs in Virtual Network**| Number of VMs that use this ExpressRoute gateway.|
+
+### <a name="metrics-steps"></a>View gateway metrics
+
+The following steps help you locate and view metrics:
+
+1. In the portal, navigate to the virtual hub that has the gateway.
+
+1. Select **VPN (Site to site)** to locate a site-to-site gateway, **ExpressRoute** to locate an ExpressRoute gateway, or **User VPN (Point to site)** to locate a point-to-site gateway.
+
+1. Select **Metrics**.
+
+ :::image type="content" source="./media/monitor-virtual-wan-reference/view-metrics.png" alt-text="Screenshot shows a site to site VPN pane with View in Azure Monitor selected." lightbox="./media/monitor-virtual-wan-reference/view-metrics.png":::
+
+1. On the **Metrics** page, you can view the metrics that you're interested in.
+
+ :::image type="content" source="./media/monitor-virtual-wan-reference/metrics-page.png" alt-text="Screenshot that shows the 'Metrics' page with the categories highlighted." lightbox="./media/monitor-virtual-wan-reference/metrics-page.png":::
+
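If you prefer to enumerate the available metrics programmatically before charting them, the following sketch lists the metric definitions for a gateway. The resource ID is a placeholder; adjust the provider segment (`vpnGateways`, `p2sVpnGateways`, or `expressRouteGateways`) for the gateway type you're inspecting:

```azurepowershell-interactive
# Placeholder resource ID; copy the real one from the gateway's JSON view in the portal.
$gatewayId = "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/vpnGateways/<GatewayName>"

# List each metric name with its unit and default aggregation.
Get-AzMetricDefinition -ResourceId $gatewayId |
    Select-Object @{ n = 'Metric'; e = { $_.Name.Value } }, Unit, PrimaryAggregationType
```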
+## <a name="diagnostic"></a>Diagnostic logs
+
+The following diagnostic logs are available, unless otherwise specified.
+
+### <a name="s2s-diagnostic"></a>Site-to-site VPN gateway diagnostics
+
+The following diagnostics are available for Virtual WAN site-to-site VPN gateways:
+
+| Metric | Description|
+| | |
+| **Gateway Diagnostic Logs** | Gateway-specific diagnostics such as health, configuration, service updates, and additional diagnostics.|
+| **Tunnel Diagnostic Logs** | These are IPsec tunnel-related logs such as connect and disconnect events for a site-to-site IPsec tunnel, negotiated SAs, disconnect reasons, and additional diagnostics.|
+| **Route Diagnostic Logs** | These are logs related to events for static routes, BGP, route updates, and additional diagnostics. |
+| **IKE Diagnostic Logs** | IKE-specific diagnostics for IPsec connections. |
+
+### <a name="p2s-diagnostic"></a>Point-to-site VPN gateway diagnostics
+
+The following diagnostics are available for Virtual WAN point-to-site VPN gateways:
+
+| Metric | Description|
+| | |
+| **Gateway Diagnostic Logs** | Gateway-specific diagnostics such as health, configuration, service updates, and other diagnostics. |
+| **IKE Diagnostic Logs** | IKE-specific diagnostics for IPsec connections.|
+| **P2S Diagnostic Logs** | These are User VPN (Point-to-site) P2S configuration and client events. They include client connect/disconnect, VPN client address allocation, and other diagnostics.|
+
+### ExpressRoute gateway diagnostics
+
+Diagnostic logs for ExpressRoute gateways in Azure Virtual WAN aren't supported.
+
+### <a name="view-diagnostic"></a>View diagnostic logs configuration
+
+The following steps help you create, edit, and view diagnostic settings:
+
+1. In the portal, navigate to your Virtual WAN resource, then select **Hubs** in the **Connectivity** group.
+
+ :::image type="content" source="./media/monitor-virtual-wan-reference/select-hub.png" alt-text="Screenshot that shows the Hub selection in the vWAN Portal." lightbox="./media/monitor-virtual-wan-reference/select-hub.png":::
+
+1. Under the **Connectivity** group on the left select the gateway you want to examine the diagnostics:
+
+ :::image type="content" source="./media/monitor-virtual-wan-reference/select-hub-gateway.png" alt-text="Screenshot that shows the Connectivity section for the hub." lightbox="./media/monitor-virtual-wan-reference/select-hub-gateway.png":::
+
+1. On the right side of the page, select the **View in Azure Monitor** link next to **Logs**, and then select an option. You can choose to send the logs to Log Analytics, stream them to an event hub, or archive them to a storage account.
+
+ :::image type="content" source="./media/monitor-virtual-wan-reference/view-hub-gateway-logs.png" alt-text="Screenshot for Select View in Azure Monitor for Logs." lightbox="./media/monitor-virtual-wan-reference/view-hub-gateway-logs.png":::
+
+1. On this page, you can create a new diagnostic setting (**+Add diagnostic setting**) or edit an existing one (**Edit setting**). You can choose to send the diagnostic logs to Log Analytics (as shown in the example below), stream them to an event hub, send them to a third-party solution, or archive them to a storage account.
+
+ :::image type="content" source="./media/monitor-virtual-wan-reference/select-gateway-settings.png" alt-text="Screenshot for Select Diagnostic Log settings." lightbox="./media/monitor-virtual-wan-reference/select-gateway-settings.png":::
+
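If you'd rather script the same configuration, the following is a rough sketch using the Az.Monitor module. The resource ID, workspace ID, and setting name are placeholders, and parameter shapes have changed across Az.Monitor versions, so treat this as illustrative rather than definitive:

```azurepowershell-interactive
# Placeholder IDs; substitute your own site-to-site VPN gateway and Log Analytics workspace.
$gatewayId   = "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/vpnGateways/<GatewayName>"
$workspaceId = "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>"

# Send route and tunnel diagnostics to the Log Analytics workspace.
$logs = "RouteDiagnosticLog", "TunnelDiagnosticLog" |
    ForEach-Object { New-AzDiagnosticSettingLogSettingsObject -Category $_ -Enabled $true }

New-AzDiagnosticSetting -Name "vpn-gateway-diagnostics" -ResourceId $gatewayId `
    -WorkspaceId $workspaceId -Log $logs
```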
+### Log Analytics sample query
+
+If you chose to send diagnostic data to a Log Analytics workspace, you can use Kusto queries, such as the example below, to examine the data. For more information, see [Log Analytics Query Language](/services-hub/health/log_analytics_query_language).
+
+The following example contains a query to obtain site-to-site route diagnostics.
+
+`AzureDiagnostics | where Category == "RouteDiagnosticLog"`
+
+Replace the value after the **==** operator with any of the categories below, as needed, based on the tables in the previous section of this article.
+
+* "GatewayDiagnosticLog"
+* "IKEDiagnosticLog"
+* "P2SDiagnosticLogΓÇ¥
+* "TunnelDiagnosticLog"
+* "RouteDiagnosticLog"
+
+To run the query, open the Log Analytics workspace that you configured to receive the diagnostic logs, and then select **Logs** under **General** on the left side of the pane:
++
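The same query can also be run from a script, which may be convenient for recurring checks. A sketch, assuming the Az.OperationalInsights module is installed and the workspace ID (the workspace's customer GUID) is known:

```azurepowershell-interactive
# Placeholder workspace GUID; find it on the workspace Overview page as "Workspace ID".
$results = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" `
    -Query 'AzureDiagnostics | where Category == "RouteDiagnosticLog" | take 50'

# Show a sample of the returned rows.
$results.Results | Select-Object -First 10
```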
+For Azure Firewall, a [workbook](../firewall/firewall-workbook.md) is provided to make log analysis easier. Using its graphical interface, you can investigate the diagnostic data without manually writing any Log Analytics queries.
+
+## <a name="activity-logs"></a>Activity logs
+
+**Activity log** entries are collected by default and can be viewed in the Azure portal. You can use Azure activity logs (formerly known as *operational logs* and *audit logs*) to view all operations submitted to your Azure subscription.
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
+
+## <a name="schemas"></a>Schemas
+
+For detailed description of the top-level diagnostic logs schema, see [Supported services, schemas, and categories for Azure Diagnostic Logs](../azure-monitor/essentials/resource-logs-schema.md).
+
+When reviewing any metrics through Log Analytics, the output will contain the following columns:
+
+|**Column**|**Type**|**Description**|
+| | | |
+|TimeGrain|string|PT1M (metric values are pushed every minute)|
+|Count|real|Usually equal to 2 (each MSEE pushes a single metric value every minute)|
+|Minimum|real|The minimum of the two metric values pushed by the two MSEEs|
+|Maximum|real|The maximum of the two metric values pushed by the two MSEEs|
+|Average|real|Equal to (Minimum + Maximum)/2|
+|Total|real|Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried)|
+
+## <a name="azure-firewall"></a>Monitoring secured hub (Azure Firewall)
+
+If you have chosen to secure your virtual hub using Azure Firewall, relevant logs and metrics are available here: [Azure Firewall logs and metrics](../firewall/logs-and-metrics.md).
+You can monitor the Secured Hub using Azure Firewall logs and metrics. You can also use activity logs to audit operations on Azure Firewall resources.
+For every Azure Virtual WAN you secure and convert to a Secured Hub, an explicit firewall resource object is created in the resource group where the hub is located.
++
+Diagnostics and logging must be configured from that firewall resource, by accessing the **Diagnostic Setting** tab:
++
+## Next steps
+
+* To learn how to monitor Azure Firewall logs and metrics, see [Tutorial: Monitor Azure Firewall logs](../firewall/firewall-diagnostics.md).
+* For additional information about Virtual WAN monitoring, see [Monitoring Azure Virtual WAN](monitor-virtual-wan.md).
+* To learn more about metrics in Azure Monitor, see [Metrics in Azure Monitor](../azure-monitor/essentials/data-platform-metrics.md).
virtual-wan Monitor Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan.md
Title: 'Monitoring Azure Virtual WAN'
-description: Learn about Azure Virtual WAN logs and metrics using Azure Monitor.
+ Title: Monitoring Virtual WAN
+
+description: Start here to learn how to monitor Virtual WAN.
+ Previously updated : 05/25/2022--+ Last updated : 06/02/2022
-# Monitoring Virtual WAN
-
-You can monitor Azure Virtual WAN using Azure Monitor. Virtual WAN is a networking service that brings together many networking, security, and routing functionalities to provide a single operational interface. Virtual WAN VPN gateways, ExpressRoute gateways, and Azure Firewall have logging and metrics available through Azure Monitor.
-
-This article discusses metrics and diagnostics that are available through the portal. Metrics are lightweight and can support near real-time scenarios, making them useful for alerting and fast issue detection.
-
-### Monitoring Secured Hub (Azure Firewall)
-
-If you have chosen to secure your Virtual Hub using Azure Firewall, relevant logs and metrics are available here: [Azure Firewall logs and metrics](../firewall/logs-and-metrics.md).
-You can monitor the Secured Hub using Azure Firewall logs and metrics. You can also use activity logs to audit operations on Azure Firewall resources.
-For every Azure Virtual WAN you secure and convert to a Secured Hub, an explicit firewall resource object is created in the resource group where the hub is located.
--
-Diagnostics and logging configuration must be done from there accessing the **Diagnostic Setting** tab:
--
-## Metrics
-
-Metrics in Azure Monitor are numerical values that describe some aspect of a system at a particular time. Metrics are collected every minute, and are useful for alerting because they can be sampled frequently. An alert can be fired quickly with relatively simple logic.
-
-### Virtual Hub Router
-
-The following metric is available for Virtual Hub Router within a Virtual Hub:
-
-#### Virtual Hub Router Metric
-
-| Metric | Description|
-| | |
-| **Virtual Hub Data Processed** | Data in bytes/second on how much traffic traverses the Virtual Hub Router in a given time period. Please note only the following flows use the Virtual Hub Router - VNET to VNET same hub and inter hub Branch to VNET interhub via VPN or Express Route Gateways.|
-
-##### PowerShell Commands
-
-To query via PowerShell, use the following commands:
-
-**Step 1:**
-```azurepowershell-interactive
-$MetricInformation = Get-AzMetric -ResourceId "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/VirtualHubs/<VirtualHubName>" -MetricName "VirtualHubDataProcessed" -TimeGrain 00:05:00 -StartTime 2022-2-20T01:00:00Z -EndTime 2022-2-20T01:30:00Z -AggregationType Average
-```
-
-**Step 2:**
-```azurepowershell-interactive
-$MetricInformation.Data
-```
-
-**Resource ID** - Your Virtual Hub's Resource ID can be found on the Azure portal. Navigate to the Virtual Hub page within vWAN and select JSON View under Essentials.
-
-**Metric Name** - Refers to the name of the metric you are querying, which in this case is called 'VirtualHubDataProcessed'. This metric shows all the data that the Virtual Hub Router has processed in the selected time period of the hub.
-
-**Time Grain** - Refers to the frequency at which you want to see the aggregation. In the current command, you will see a selected aggregated unit per 5 mins. You can select ΓÇô 5M/15M/30M/1H/6H/12H and 1D.
-
-**Start Time and End Time** - This time is based on UTC, so please ensure that you are entering UTC values when inputting these parameters. If these parameters are not used, by default the past one hour's worth of data is shown.
-
-**Aggregation Types** - Average/Minimum/Maximum/Total
-* Average - Total average of bytes/sec per the selected time period
-* Minimum ΓÇô Minimum bytes that were sent during the selected time grain period.
-* Maximum ΓÇô Maximum bytes that were sent during the selected time grain period
-* Total ΓÇô Total bytes/sec that were sent during the selected time grain period.
-
-### Site-to-site VPN gateways
-
-The following metrics are available for Azure site-to-site VPN gateways:
-
-#### Tunnel Packet Drop Metrics
-| Metric | Description|
-| | |
-| **Tunnel Egress Packet Drop Count** | Count of Outgoing packets dropped by tunnel.|
-| **Tunnel Ingress Packet Drop Count** | Count of Incoming packets dropped by tunnel.|
-| **Tunnel NAT Packet Drops** | Number of NATed packets dropped on a tunnel by drop type and NAT rule.|
-| **Tunnel Egress TS Mismatch Packet Drop** | Outgoing packet drop count from traffic selector mismatch of a tunnel.|
-| **Tunnel Ingress TS Mismatch Packet Drop** | Incoming packet drop count from traffic selector mismatch of a tunnel.|
-
-#### IPSEC Metrics
-| Metric | Description|
-| | |
-| **Tunnel MMSA Count** | Number of MMSAs getting created or deleted.|
-| **Tunnel QMSA Count** | Number of IPSEC QMSAs getting created or deleted.|
-
-#### Routing Metrics
-| Metric | Description|
-| | |
-| **BGP Peer Status** | BGP connectivity status per peer and per instance.|
-| **BGP Routes Advertised** | Number of routes advertised per peer and per instance.|
-| **BGP Routes Learned** | Number of routes learned per peer and per instance.|
-| **VNET Address Prefix Count** | Number of VNET address prefixes that are used/advertised by the gateway.|
-
-You can review per peer and instance metrics by selecting **Apply splitting** and choosing the preferred value.
-
-#### Traffic Flow Metrics
-| Metric | Description|
-| | |
-| **Gateway Bandwidth** | Average site-to-site aggregate bandwidth of a gateway in bytes per second.|
-| **Tunnel Bandwidth** | Average bandwidth of a tunnel in bytes per second.|
-| **Tunnel Egress Bytes** | Outgoing bytes of a tunnel. |
-| **Tunnel Egress Packets** | Outgoing packet count of a tunnel. |
-| **Tunnel Ingress Bytes** | Incoming bytes of a tunnel.|
-| **Tunnel Ingress Packet** | Incoming packet count of a tunnel.|
-| **Tunnel Peak PPS** | Number of packets per second per link connection in the last minute.|
-| **Tunnel Flow Count** | Number of distinct flows created per link connection.|
-
-### Point-to-site VPN gateways
-
-The following metrics are available for Azure point-to-site VPN gateways:
-
-| Metric | Description|
-| | |
-| **Gateway P2S Bandwidth** | Average point-to-site aggregate bandwidth of a gateway in bytes per second. |
-| **P2S Connection Count** |Point-to-site connection count of a gateway. Point-to-site connection count of a gateway. To ensure you are viewing accurate Metrics in Azure Monitor, select the **Aggregation Type** for **P2S Connection Count** as **Sum**. You may also select **Max** if you also Split By **Instance**. |
-| **User VPN Routes Count** | Number of User VPN Routes configured on the VPN Gateway. This metric can be broken down into **Static** and **Dynamic** Routes.
-
-### Azure ExpressRoute gateways
-
-The following metrics are available for Azure ExpressRoute gateways:
-
-| Metric | Description|
-| | |
-| **BitsInPerSecond** | Bits per second ingressing Azure via ExpressRoute gateway which can be further split for specific connections. |
-| **BitsOutPerSecond** | Bits per second egressing Azure via ExpressRoute gateway which can be further split for specific connection. |
-| **Bits Received Per Second** | Total Bits received on ExpressRoute gateway per second. |
-| **CPU Utilization** | CPU Utilization of the ExpressRoute gateway.|
-| **Packets per second** | Total Packets received on ExpressRoute gateway per second.|
-| **Count of routes advertised to peer**| Count of Routes Advertised to Peer by ExpressRoute gateway. |
-| **Count of routes learned from peer**| Count of Routes Learned from Peer by ExpressRoute gateway.|
-| **Frequency of routes changed** | Frequency of Route changes in ExpressRoute gateway.|
-| **Number of VMs in Virtual Network**| Number of VMs that use this ExpressRoute gateway.|
-
-### <a name="metrics-steps"></a>View gateway metrics
-
-The following steps help you locate and view metrics:
-
-1. In the portal, navigate to the virtual hub that has the gateway.
-
-2. Select **VPN (Site to site)** to locate a site-to-site gateway, **ExpressRoute** to locate an ExpressRoute gateway, or **User VPN (Point to site)** to locate a point-to-site gateway.
-
-3. Select **Metrics**.
-
- :::image type="content" source="./media/monitor-virtual-wan/view-metrics.png" alt-text="Screenshot shows a site to site VPN pane with View in Azure Monitor selected.":::
-
-4. On the **Metrics** page, you can view the metrics that you are interested in.
-
- :::image type="content" source="./media/monitor-virtual-wan/metrics-page.png" alt-text="Screenshot that shows the 'Metrics' page with the categories highlighted.":::
-
-## <a name="diagnostic"></a>Diagnostic logs
-
-### Site-to-site VPN gateways
-
-The following diagnostics are available for Azure site-to-site VPN gateways:
+# Monitoring Azure Virtual WAN
-| Metric | Description|
-| | |
-| **Gateway Diagnostic Logs** | Gateway-specific diagnostics such as health, configuration, service updates, and additional diagnostics.|
-| **Tunnel Diagnostic Logs** | These are IPsec tunnel-related logs such as connect and disconnect events for a site-to-site IPsec tunnel, negotiated SAs, disconnect reasons, and additional diagnostics.|
-| **Route Diagnostic Logs** | These are logs related to events for static routes, BGP, route updates, and additional diagnostics. |
-| **IKE Diagnostic Logs** | IKE-specific diagnostics for IPsec connections. |
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-### Point-to-site VPN gateways
+This article describes the monitoring data generated by Azure Virtual WAN. Virtual WAN uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
-The following diagnostics are available for Azure point-to-site VPN gateways:
+## Virtual WAN Insights
-| Metric | Description|
-| | |
-| **Gateway Diagnostic Logs** | Gateway-specific diagnostics such as health, configuration, service updates, and other diagnostics. |
-| **IKE Diagnostic Logs** | IKE-specific diagnostics for IPsec connections.|
-| **P2S Diagnostic Logs** | These are User VPN (Point-to-site) P2S configuration and client events. They include client connect/disconnect, VPN client address allocation, and other diagnostics.|
+Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "Insights".
-### Express Route gateways
+Virtual WAN uses Network Insights to provide users and operators with the ability to view the state and status of a virtual WAN, presented via an autodiscovered topological map. Resource state and status overlays on the map give you a snapshot view of the overall health of the virtual WAN. You can navigate resources on the map via one-click access to the resource configuration pages of the Virtual WAN portal. For more information, see [Azure Monitor Network Insights for Virtual WAN](azure-monitor-insights.md).
-Diagnostic logs for Express Route gateways in Azure Virtual WAN are not supported.
+## Monitoring data
-### <a name="diagnostic-steps"></a>View diagnostic logs configuration
+Virtual WAN collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md).
-The following steps help you create, edit, and view diagnostic settings:
+See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for detailed information on the metrics and logs created by Virtual WAN.
-1. In the portal, navigate to your Virtual WAN resource, then select **Hubs** in the **Connectivity** group.
+## Collection and routing
- :::image type="content" source="./media/monitor-virtual-wan/select-hub.png" alt-text="Screenshot that shows the Hub selection in the vWAN Portal.":::
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-2. Under the **Connectivity** group on the left select the gateway you want to examine the diagnostics:
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
- :::image type="content" source="./media/monitor-virtual-wan/select-hub-gateway.png" alt-text="Screenshot that shows the Connectivity section for the hub.":::
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Virtual WAN are listed in [Virtual WAN monitoring data reference](monitor-virtual-wan-reference.md).
-3. On the right part of the page, click on **View in Azure Monitor** link right to **Logs** then select an option. You can choose to send to Log Analytics, stream to an event hub, or to simply archive to a storage account.
+> [!IMPORTANT]
+> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
- :::image type="content" source="./media/monitor-virtual-wan/view-hub-gateway-logs.png" alt-text="Screenshot for Select View in Azure Monitor for Logs.":::
+The metrics and logs you can collect are discussed in the following sections.
-4. In this page, you can create new diagnostic setting (**+Add diagnostic setting**) or edit existing one (**Edit setting**). You can choose to send the diagnostic logs to Log Analytics (as shown in the example below), stream to an event hub, send to a 3rd-party solution, or to archive to a storage account.
+## Analyzing metrics
- :::image type="content" source="./media/monitor-virtual-wan/select-gateway-settings.png" alt-text="Screenshot for Select Diagnostic Log settings.":::
+You can analyze metrics for Virtual WAN with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
-### <a name="sample-query"></a>Log Analytics sample query
+For a list of the platform metrics collected for Virtual WAN, see [Monitoring Virtual WAN data reference metrics](monitor-virtual-wan-reference.md#metrics).
-If you selected to send diagnostic data to a Log Analytics Workspace, then you can use SQL-like queries such as the example below to examine the data. For more information, see [Log Analytics Query Language](/services-hub/health/log_analytics_query_language).
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
-The following example contains a query to obtain site-to-site route diagnostics.
+## Analyzing logs
-`AzureDiagnostics | where Category == "RouteDiagnosticLog"`
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-Replace the values below, after the **= =**, as needed based on the tables reported in the previous section of this article.
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md).
-* "GatewayDiagnosticLog"
-* "IKEDiagnosticLog"
-* "P2SDiagnosticLogΓÇ¥
-* "TunnelDiagnosticLog"
-* "RouteDiagnosticLog"
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-In order to execute the query, you have to open the Log Analytics resource you configured to receive the diagnostic logs, and then select **Logs** under the **General** tab on the left side of the pane:
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md).
+To analyze logs, go to your Virtual WAN gateway (User VPN, site-to-site VPN, or ExpressRoute). In the **Essentials** section of the page, select **Logs -> View in Azure Monitor**.
-For additional Log Analytics query samples for Azure VPN Gateway, both Site-to-Site and Point-to-Site, you can visit the page [Troubleshoot Azure VPN Gateway using diagnostic logs](../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md).
-For Azure Firewall, a [workbook](../firewall/firewall-workbook.md) is provided to make log analysis easier. Using its graphical interface, it will be possible to investigate into the diagnostic data without manually writing any Log Analytics query.
+## Alerts
-## <a name="activity-logs"></a>Activity logs
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-types.md#metric-alerts), [logs](../azure-monitor/alerts/alerts-types.md#log-alerts), and the [activity log](../azure-monitor/alerts/alerts-types.md#activity-log-alerts). Different types of alerts have benefits and drawbacks.
-**Activity log** entries are collected by default and can be viewed in the Azure portal. You can use Azure activity logs (formerly known as *operational logs* and *audit logs*) to view all operations submitted to your Azure subscription.
+To create a metric alert, see [Tutorial: Create a metric alert for an Azure resource](../azure-monitor/alerts/tutorial-metric-alert.md).
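As a rough illustration of the PowerShell route for a metric alert (the tutorial above remains the authoritative walkthrough), the following sketch alerts when the **Virtual Hub Data Processed** metric averages above a threshold. The resource IDs, resource group, and threshold are placeholders:

```azurepowershell-interactive
# Placeholder virtual hub resource ID and threshold; substitute your own values.
$hubId = "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/virtualHubs/<VirtualHubName>"

$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "VirtualHubDataProcessed" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 500000000

# Creates the alert rule; attach an action group with -ActionGroupId to be notified.
Add-AzMetricAlertRuleV2 -Name "hub-data-processed-alert" -ResourceGroupName "<ResourceGroupName>" `
    -WindowSize 00:05:00 -Frequency 00:05:00 -TargetResourceId $hubId `
    -Condition $criteria -Severity 3
```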
## Next steps
-* To learn how to monitor Azure Firewall logs and metrics, see [Tutorial: Monitor Azure Firewall logs](../firewall/firewall-diagnostics.md).
-* To learn more about metrics in Azure Monitor, see [Metrics in Azure Monitor](../azure-monitor/essentials/data-platform-metrics.md).
+* See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for a reference of the metrics, logs, and other important values created by Virtual WAN.
+* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
web-application-firewall Web Application Firewall Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-troubleshoot.md
Title: Troubleshoot - Azure Web Application Firewall description: This article provides troubleshooting information for Web Application Firewall (WAF) for Azure Application Gateway -+ Previously updated : 11/14/2019- Last updated : 06/09/2022+
There are a few things you can do if requests that should pass through your Web
First, ensure you've read the [WAF overview](ag-overview.md) and the [WAF configuration](application-gateway-waf-configuration.md) documents. Also, make sure you've enabled [WAF monitoring](../../application-gateway/application-gateway-diagnostics.md). These articles explain how the WAF functions, how the WAF rule sets work, and how to access WAF logs.
-The OWASP rulesets are designed to be very strict out of the box, and to be tuned to suit the specific needs of the application or organization using WAF. It is entirely normal, and actually expected in many cases, to create exclusions, custom rules, and even disable rules that may be causing issues or false positives. Per-site and per-URI policies allow for these changes to only affect specific sites/URIs, so any changes shouldnΓÇÖt have to affect other sites that may not be running into the same issues.
+The OWASP rulesets are designed to be strict out of the box, and to be tuned to suit the specific needs of the application or organization using WAF. It's entirely normal, and expected in many cases, to create exclusions, custom rules, and even disable rules that may be causing issues or false positives. Per-site and per-URI policies allow for these changes to only affect specific sites/URIs. So any changes shouldn't have to affect other sites that may not be running into the same issues.
## Understanding WAF logs
-The purpose of WAF logs is to show every request that is matched or blocked by the WAF. It is a ledger of all evaluated requests that are matched or blocked. If you notice that the WAF blocks a request that it shouldn't (a false positive), you can do a few things. First, narrow down, and find the specific request. Look through the logs to find the specific URI, timestamp, or transaction ID of the request. When you find the associated log entries, you can begin to act on the false positives.
+The purpose of WAF logs is to show every request that is matched or blocked by the WAF. It's a ledger of all evaluated requests that are matched or blocked. If you notice that the WAF blocks a request that it shouldn't (a false positive), you can do a few things. First, narrow down, and find the specific request. Look through the logs to find the specific URI, timestamp, or transaction ID of the request. When you find the associated log entries, you can begin to act on the false positives.
For example, say you have legitimate traffic containing the string *1=1* that you want to pass through your WAF. If you try the request, the WAF blocks traffic that contains your *1=1* string in any parameter or field. This is a string often associated with a SQL injection attack. You can look through the logs and see the timestamp of the request and the rules that blocked/matched.
The final two log entries show the request was blocked because the anomaly score
With this information, and the knowledge that rule 942130 is the one that matched the *1=1* string, you can do a few things to stop this from blocking your traffic: -- Use an Exclusion List
+- Use an exclusion list
+
+ For more information about exclusion lists, see [WAF configuration](application-gateway-waf-configuration.md).
- See [WAF configuration](application-gateway-waf-configuration.md) for more information about exclusion lists.
- Disable the rule. ### Using an exclusion list
-To make an informed decision about handling a false positive, itΓÇÖs important to familiarize yourself with the technologies your application uses. For example, say there isn't a SQL server in your technology stack, and you are getting false positives related to those rules. Disabling those rules doesn't necessarily weaken your security.
+To make an informed decision about handling a false positive, it's important to familiarize yourself with the technologies your application uses. For example, say there isn't a SQL server in your technology stack, and you're getting false positives related to SQL injection rules. Disabling those rules doesn't necessarily weaken your security.
-One benefit of using an exclusion list is that only a specific part of a request is being disabled. However, this means that a specific exclusion is applicable to all traffic passing through your WAF because it is a global setting. For example, this could lead to an issue if *1=1* is a valid request in the body for a certain app, but not for others. Another benefit is that you can choose between body, headers, and cookies to be excluded if a certain condition is met, as opposed to excluding the whole request.
+One benefit of using an exclusion list is that only a specific part of a request is being disabled. However, this means that a specific exclusion is applicable to all traffic passing through your WAF because it's a global setting. For example, this could lead to an issue if *1=1* is a valid request in the body for a certain app, but not for others. Another benefit is that you can choose between body, headers, and cookies to be excluded if a certain condition is met, as opposed to excluding the whole request.
-Occasionally, there are cases where specific parameters get passed into the WAF in a manner that may not be intuitive. For example, there is a token that gets passed when authenticating using Azure Active Directory. This token, *__RequestVerificationToken*, usually get passed in as a Request Cookie. However, in some cases where cookies are disabled, this token is also passed as a request attribute or "arg". If this happens, you need to ensure that *__RequestVerificationToken* is added to the exclusion list as a **Request attribute name** as well.
+Occasionally, there are cases where specific parameters get passed into the WAF in a manner that may not be intuitive. For example, there's a token that gets passed when authenticating using Azure Active Directory. This token, *__RequestVerificationToken*, usually gets passed in as a Request Cookie. However, in some cases where cookies are disabled, this token is also passed as a request attribute or "arg". If this happens, you need to ensure that *__RequestVerificationToken* is added to the exclusion list as a **Request attribute name** as well.
![Exclusions](../media/web-application-firewall-troubleshoot/exclusion-list.png)
In this example, you want to exclude the **Request attribute name** that equals
Another way to get around a false positive is to disable the rule that matched on the input the WAF thought was malicious. Since you've parsed the WAF logs and have narrowed the rule down to 942130, you can disable it in the Azure portal. See [Customize web application firewall rules through the Azure portal](application-gateway-customize-waf-rules-portal.md).
-One benefit of disabling a rule is that if you know all traffic that contains a certain condition that will normally be blocked is valid traffic, you can disable that rule for the entire WAF. However, if itΓÇÖs only valid traffic in a specific use case, you open up a vulnerability by disabling that rule for the entire WAF since it is a global setting.
+One benefit of disabling a rule is that, if you know all traffic matching a certain condition is valid even though it would normally be blocked, you can disable that rule for the entire WAF. However, if the traffic is only valid in a specific use case, you open up a vulnerability by disabling that rule for the entire WAF, because it's a global setting.
If you want to use Azure PowerShell, see [Customize web application firewall rules through PowerShell](application-gateway-customize-waf-rules-powershell.md). If you want to use Azure CLI, see [Customize web application firewall rules through the Azure CLI](application-gateway-customize-waf-rules-cli.md).
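As a rough sketch of what the PowerShell route can look like for a WAF configured directly on the Application Gateway (the linked article remains the authoritative reference, and the gateway name, resource group, and rule set version below are placeholders):

```azurepowershell-interactive
# Disable CRS rule 942130 on an existing WAF-enabled Application Gateway (placeholder names).
$appGw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroup"

$disabledRuleGroup = New-AzApplicationGatewayFirewallDisabledRuleGroupConfig `
    -RuleGroupName "REQUEST-942-APPLICATION-ATTACK-SQLI" -Rules 942130

Set-AzApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $appGw `
    -Enabled $true -FirewallMode "Prevention" -RuleSetType "OWASP" -RuleSetVersion "3.0" `
    -DisabledRuleGroup $disabledRuleGroup

# Commit the updated configuration back to the gateway.
Set-AzApplicationGateway -ApplicationGateway $appGw
```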
Fiddler is a useful tool once again to find request header names. In the followi
:::image type="content" source="../media/web-application-firewall-troubleshoot/fiddler-2.png" alt-text="Screenshot of the Progress Telerik Fiddler Web Debugger. The Raw tab lists request header details like the connection, content-type, and user-agent." border="false":::
-Another way to view request and response headers is to look inside the developer tools of Chrome. You can press F12 or right-click -> **Inspect** -> **Developer Tools**, and select the **Network** tab. Load a web page, and click the request you want to inspect.
+Another way to view request and response headers is to look inside the developer tools of Chrome. You can press F12 or right-click -> **Inspect** -> **Developer Tools**, and select the **Network** tab. Load a web page, and select the request you want to inspect.
![Chrome F12](../media/web-application-firewall-troubleshoot/chrome-f12.png)
If the request contains cookies, the **Cookies** tab can be selected to view the
- Disable request body inspection
- By setting **Inspect request body** to off, the request bodies of all traffic will not be evaluated by your WAF. This may be useful if you know that the request bodies arenΓÇÖt malicious to your application.
+ By setting **Inspect request body** to off, the request bodies of all traffic won't be evaluated by your WAF. This may be useful if you know that the request bodies arenΓÇÖt malicious to your application.
- By disabling this option, only the request body is not inspected. The headers and cookies remain inspected, unless individual ones are excluded using the exclusion list functionality.
+ When you disable this option, only the request body isn't inspected. The headers and cookies remain inspected, unless individual ones are excluded using the exclusion list functionality.
- File size limits