Updates from: 06/10/2022 01:14:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
The **CryptographicKeys** element contains the following attributes:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
| SamlMessageSigning | Yes | The X509 certificate (RSA key set) to use to sign SAML messages. Azure AD B2C uses this key to sign the requests and send them to the identity provider. |
-| SamlAssertionDecryption |No* | The X509 certificate (RSA key set). A SAML identity provider uses the public portion of the certificate to encrypt the assertion of the SAML response. Azure AD B2C uses the private portion of the certificate to decrypt the assertion. <br/><br/> * Required if the external IDP Encryts SAML assertions.|
+| SamlAssertionDecryption |No* | The X509 certificate (RSA key set). A SAML identity provider uses the public portion of the certificate to encrypt the assertion of the SAML response. Azure AD B2C uses the private portion of the certificate to decrypt the assertion. <br/><br/> * Required if the external IDP encrypts SAML assertions.|
| MetadataSigning | No | The X509 certificate (RSA key set) to use to sign SAML metadata. Azure AD B2C uses this key to sign the metadata. |

## Next steps
active-directory Check Status User Account Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/check-status-user-account-provisioning.md
Previously updated : 05/11/2021 Last updated : 05/30/2022
This article describes how to check the status of provisioning jobs after they h
## Overview
-Provisioning connectors are set up and configured using the [Azure portal](https://portal.azure.com), by following the [provided documentation](../saas-apps/tutorial-list.md) for the supported application. Once configured and running, provisioning jobs can be reported on using one of two methods:
+Provisioning connectors are set up and configured using the [Azure portal](https://portal.azure.com), by following the [provided documentation](../saas-apps/tutorial-list.md) for the supported application. Once configured and running, provisioning jobs can be reported on using the following methods:
-* **Azure portal** - This article primarily describes retrieving report information from the [Azure portal](https://portal.azure.com), which provides both a provisioning summary report as well as detailed provisioning audit logs for a given application.
-* **Audit API** - Azure Active Directory also provides an Audit API that enables programmatic retrieval of the detailed provisioning audit logs. See [Azure Active Directory audit API reference](/graph/api/resources/directoryaudit) for documentation specific to using this API. While this article does not specifically cover how to use the API, it does detail the types of provisioning events that are recorded in the audit log.
+- The [Azure portal](https://portal.azure.com)
+
+- Streaming the provisioning logs into [Azure Monitor](../app-provisioning/application-provisioning-log-analytics.md). This method allows for extended data retention and building custom dashboards, alerts, and queries.
+
+- Querying the [Microsoft Graph API](/graph/api/resources/provisioningobjectsummary) for the provisioning logs (a sample request is sketched after this list).
+
+- Downloading the provisioning logs as a CSV or JSON file.
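For the Graph API option, the sketch below shows one way such a query might look with the Microsoft Graph PowerShell SDK. The filter expression and the `LinkedIn Elevate` application name are illustrative assumptions; adjust them to your tenant and to the filters the endpoint actually supports.

```powershell
# Sign in with a permission that can read the provisioning (audit) logs.
Connect-MgGraph -Scopes 'AuditLog.Read.All'

# List provisioning events for one application (the filter shown is an assumption).
$uri = "https://graph.microsoft.com/v1.0/auditLogs/provisioning?`$filter=servicePrincipal/displayName eq 'LinkedIn Elevate'"
$response = Invoke-MgGraphRequest -Method GET -Uri $uri

# Print when each event happened, the action taken, and its result.
$response.value | ForEach-Object { "$($_.activityDateTime) $($_.action) $($_.statusInfo.status)" }
```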
### Definitions
This article uses the following terms, defined below:
## Getting provisioning reports from the Azure portal
-To get provisioning report information for a given application, start by launching the [Azure portal](https://portal.azure.com) and **Azure Active Directory** &gt; **Enterprise Apps** &gt; **Provisioning logs (preview)** in the **Activity** section. You can also browse to the Enterprise Application for which provisioning is configured. For example, if you are provisioning users to LinkedIn Elevate, the navigation path to the application details is:
+To get provisioning report information for a given application, start by launching the [Azure portal](https://portal.azure.com) and **Azure Active Directory** &gt; **Enterprise Apps** &gt; **Provisioning logs** in the **Activity** section. You can also browse to the Enterprise Application for which provisioning is configured. For example, if you are provisioning users to LinkedIn Elevate, the navigation path to the application details is:
**Azure Active Directory > Enterprise Applications > All applications > LinkedIn Elevate**
The **Current Status** should be the first place admins look to check on the ope
 ![Summary report](./media/check-status-user-account-provisioning/provisioning-progress-bar-section.png)
-## Provisioning logs (preview)
+## Provisioning logs
+
+All activities performed by the provisioning service are recorded in the Azure AD [provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context). You can access the provisioning logs in the Azure portal by selecting **Azure Active Directory** &gt; **Enterprise Apps** &gt; **Provisioning logs** in the **Activity** section. You can search the provisioning data based on the name of the user or the identifier in either the source system or the target system. For details, see [Provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context).
-All activities performed by the provisioning service are recorded in the Azure AD [provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context). You can access the provisioning logs in the Azure portal by selecting **Azure Active Directory** &gt; **Enterprise Apps** &gt; **Provisioning logs (preview)** in the **Activity** section. You can search the provisioning data based on the name of the user or the identifier in either the source system or the target system. For details, see [Provisioning logs (preview)](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context).
-Logged activity event types include:
## Troubleshooting
For scenario-based guidance on how to troubleshoot automatic user provisioning,
## Additional Resources

* [Managing user account provisioning for Enterprise Apps](configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
It's common for a security review to be required as part of a deployment. If you
If the automatic user provisioning implementation fails to work as desired in the production environment, the following rollback steps can assist you in reverting to a previous known good state:
-1. Review the [provisioning summary report](../app-provisioning/check-status-user-account-provisioning.md) and [provisioning logs](../app-provisioning/check-status-user-account-provisioning.md#provisioning-logs-preview) to determine what incorrect operations occurred on the affected users and/or groups.
+1. Review the [provisioning logs](../app-provisioning/check-status-user-account-provisioning.md) to determine what incorrect operations occurred on the affected users and/or groups.
1. Use provisioning audit logs to determine the last known good state of the users and/or groups affected. Also review the source systems (Azure AD or AD).
Refer to the following links to troubleshoot any issues that may turn up during
* [Export or import your provisioning configuration by using Microsoft Graph API](../app-provisioning/export-import-provisioning-configuration.md)
-* [Writing expressions for attribute mappings in Azure Active directory](../app-provisioning/functions-for-customizing-application-data.md)
+* [Writing expressions for attribute mappings in Azure Active directory](../app-provisioning/functions-for-customizing-application-data.md)
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
It's common for a security review to be required as part of the deployment of a
The cloud HR user provisioning implementation might fail to work as desired in the production environment. If so, the following rollback steps can assist you in reverting to a previous known good state.
-1. Review the [provisioning summary report](../app-provisioning/check-status-user-account-provisioning.md#getting-provisioning-reports-from-the-azure-portal) and [provisioning logs](../app-provisioning/check-status-user-account-provisioning.md#provisioning-logs-preview) to determine what incorrect operations were performed on the affected users or groups. For more information on the provisioning summary report and logs, see [Manage cloud HR app user provisioning](#manage-your-configuration).
+1. Review the [provisioning logs](../app-provisioning/check-status-user-account-provisioning.md#provisioning-logs) to determine what incorrect operations were performed on the affected users or groups. For more information on the provisioning summary report and logs, see [Manage cloud HR app user provisioning](#manage-your-configuration).
2. The last known good state of the users or groups affected can be determined through the provisioning audit logs or by reviewing the target systems (Azure AD or Active Directory).
3. Work with the app owner to update the users or groups affected directly in the app by using the last known good state values.
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
To collect debug logs for support diagnostics, use the following steps on the NP
```
Mkdir c:\NPS
- Cd NPS
+ Cd c:\NPS
netsh trace start Scenario=NetConnection capture=yes tracefile=c:\NPS\nettrace.etl
logman create trace "NPSExtension" -ow -o c:\NPS\NPSExtension.etl -p {7237ED00-E119-430B-AB0F-C63360C8EE81} 0xffffffffffffffff 0xff -nb 16 16 -bs 1024 -mode Circular -f bincirc -max 4096 -ets
logman update trace "NPSExtension" -p {EC2E6D3A-C958-4C76-8EA4-0262520886FF} 0xffffffffffffffff 0xff -ets
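REM Sketch (assumption, not part of the excerpt above): once the issue has been
REM reproduced, the capture and trace sessions started above are stopped before
REM the logs are collected.
netsh trace stop
logman stop "NPSExtension" -ets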
active-directory Tutorial V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md
The web app sample in this tutorial uses the [express-session](https://www.npmjs
## Add app registration details
-1. Create an *.env* file in the root of your project folder. Then add the following code:
+1. Create a *.env* file in the root of your project folder. Then add the following code:
:::code language="text" source="~/ms-identity-node/App/.env":::
Fill in these details with the values you obtain from Azure app registration por
## Add code for user login and token acquisition
+1. Create a new file named *auth.js* under the *router* folder and add the following code there:
+ :::code language="js" source="~/ms-identity-node/App/routes/auth.js":::

2. Next, update the *index.js* route by replacing the existing code with the following:
Fill in these details with the values you obtain from Azure app registration por
## Add code for calling the Microsoft Graph API
-Create a file named **fetch.js** in the root of your project and add the following code:
+Create a file named *fetch.js* in the root of your project and add the following code:
:::code language="js" source="~/ms-identity-node/App/fetch.js":::
active-directory How To Assign App Role Managed Identity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md
Connect-MgGraph -TenantId $tenantId -Scopes 'Application.Read.All','Application.
# Look up the details about the server app's service principal and app role.
$serverServicePrincipal = (Get-MgServicePrincipal -Filter "DisplayName eq '$serverApplicationName'")
-$serverServicePrincipalObjectId = $serverServicePrincipal.ObjectId
+$serverServicePrincipalObjectId = $serverServicePrincipal.Id
$appRoleId = ($serverServicePrincipal.AppRoles | Where-Object {$_.Value -eq $appRoleName }).Id

# Assign the managed identity access to the app role.
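The excerpt ends at the comment that introduces the assignment step. Below is a minimal sketch of how that assignment is typically completed with the Microsoft Graph PowerShell SDK; `$managedIdentityObjectId` is an assumed variable holding the object ID of the managed identity's service principal, looked up earlier in the script.

```powershell
# Assumption: $managedIdentityObjectId was obtained earlier (for example with
# Get-MgServicePrincipal) and holds the managed identity's service principal object ID.
New-MgServicePrincipalAppRoleAssignment `
    -ServicePrincipalId $managedIdentityObjectId `
    -PrincipalId $managedIdentityObjectId `
    -ResourceId $serverServicePrincipalObjectId `
    -AppRoleId $appRoleId
```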
active-directory Agile Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/agile-provisioning-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Agile Provisioning'
+description: Learn how to configure single sign-on between Azure Active Directory and Agile Provisioning.
+ Last updated : 05/23/2022
+# Tutorial: Azure AD SSO integration with Agile Provisioning
+
+In this tutorial, you'll learn how to integrate Agile Provisioning with Azure Active Directory (Azure AD). When you integrate Agile Provisioning with Azure AD, you can:
+
+* Control in Azure AD who has access to Agile Provisioning.
+* Enable your users to be automatically signed-in to Agile Provisioning with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Agile Provisioning single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Agile Provisioning supports **SP** and **IDP** initiated SSO.
+
+## Add Agile Provisioning from the gallery
+
+To configure the integration of Agile Provisioning into Azure AD, you need to add Agile Provisioning from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Agile Provisioning** in the search box.
+1. Select **Agile Provisioning** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Agile Provisioning
+
+Configure and test Azure AD SSO with Agile Provisioning using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Agile Provisioning.
+
+To configure and test Azure AD SSO with Agile Provisioning, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Agile Provisioning SSO](#configure-agile-provisioning-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Agile Provisioning test user](#create-agile-provisioning-test-user)** - to have a counterpart of B.Simon in Agile Provisioning that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Agile Provisioning** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `<CustomerFullyQualifiedName>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<CustomerFullyQualifiedName>/web-portal/saml/SSO`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CustomerFullyQualifiedName>/web-portal/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Agile Provisioning Client support team](mailto:support@flexcomlabs.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Agile Provisioning.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Agile Provisioning**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Agile Provisioning SSO
+
+To configure single sign-on on **Agile Provisioning** side, you need to send the **App Federation Metadata Url** to [Agile Provisioning support team](mailto:support@flexcomlabs.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Agile Provisioning test user
+
+In this section, you create a user called Britta Simon in Agile Provisioning. Work with [Agile Provisioning support team](mailto:support@flexcomlabs.com) to add the users in the Agile Provisioning platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Agile Provisioning Sign on URL where you can initiate the login flow.
+
+* Go to Agile Provisioning Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Agile Provisioning for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Agile Provisioning tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Agile Provisioning for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Agile Provisioning you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Airwatch Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airwatch-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with AirWatch | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory integration with AirWatch'
description: Learn how to configure single sign-on between Azure Active Directory and AirWatch.
Previously updated : 01/20/2021 Last updated : 06/08/2022
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* AirWatch single sign-on (SSO)-enabled subscription.
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** page, enter the values for the following fields:
- 1. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<subdomain>.awmdm.com/AirWatch/Login?gid=companycode`
-
- 1. In the **Identifier (Entity ID)** text box, type the value as:
+ a. In the **Identifier (Entity ID)** text box, type the value as:
`AirWatch`
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | Reply URL|
+ |--|
+ | `https://<SUBDOMAIN>.awmdm.com/<COMPANY_CODE>` |
+ | `https://<SUBDOMAIN>.airwatchportals.com/<COMPANY_CODE>` |
+ |
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<subdomain>.awmdm.com/AirWatch/Login?gid=companycode`
+ > [!NOTE]
- > This value is not the real. Update this value with the actual Sign-on URL. Contact [AirWatch Client support team](https://www.vmware.com/in/support/acquisitions/airwatch.html) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [AirWatch Client support team](https://www.vmware.com/in/support/acquisitions/airwatch.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. AirWatch application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on application integration page. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **User Attributes** dialog.
active-directory Asccontracts Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asccontracts-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ASC Contracts | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ASC Contracts'
description: Learn how to configure single sign-on between Azure Active Directory and ASC Contracts.
Previously updated : 01/17/2019 Last updated : 06/07/2022
-# Tutorial: Azure Active Directory integration with ASC Contracts
+# Tutorial: Azure AD SSO integration with ASC Contracts
-In this tutorial, you learn how to integrate ASC Contracts with Azure Active Directory (Azure AD).
-Integrating ASC Contracts with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ASC Contracts with Azure Active Directory (Azure AD). When you integrate ASC Contracts with Azure AD, you can:
-* You can control in Azure AD who has access to ASC Contracts.
-* You can enable your users to be automatically signed-in to ASC Contracts (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to ASC Contracts.
+* Enable your users to be automatically signed-in to ASC Contracts with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with ASC Contracts, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* ASC Contracts single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ASC Contracts single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ASC Contracts supports **IDP** initiated SSO
+* ASC Contracts supports **IDP** initiated SSO.
-## Adding ASC Contracts from the gallery
+## Add ASC Contracts from the gallery
To configure the integration of ASC Contracts into Azure AD, you need to add ASC Contracts from the gallery to your list of managed SaaS apps.
-**To add ASC Contracts from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **ASC Contracts**, select **ASC Contracts** from result panel then click **Add** button to add the application.
-
- ![ASC Contracts in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with ASC Contracts based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ASC Contracts needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **ASC Contracts** in the search box.
+1. Select **ASC Contracts** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with ASC Contracts, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for ASC Contracts
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ASC Contracts Single Sign-On](#configure-asc-contracts-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ASC Contracts test user](#create-asc-contracts-test-user)** - to have a counterpart of Britta Simon in ASC Contracts that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with ASC Contracts using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ASC Contracts.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with ASC Contracts, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ASC Contracts SSO](#configure-asc-contracts-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ASC Contracts test user](#create-asc-contracts-test-user)** - to have a counterpart of B.Simon in ASC Contracts that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with ASC Contracts, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **ASC Contracts** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **ASC Contracts** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
-
- ![ASC Contracts Domain and URLs single sign-on information](common/idp-intiated.png)
+1. On the **Basic SAML Configuration** page, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<subdomain>.asccontracts.com/shibboleth`
To configure Azure AD single sign-on with ASC Contracts, perform the following s
> [!NOTE]
> These values are not real. Update these values with the actual Identifier and Reply URL. Contact ASC Networks Inc. (ASC) team at **613.599.6178** to get these values.
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
-6. On the **Set up ASC Contracts** section, copy the appropriate URL(s) as per your requirement.
+1. On the **Set up ASC Contracts** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure ASC Contracts Single Sign-On
-
-To configure single sign-on on **ASC Contracts** side, call ASC Networks Inc. (ASC) support at **613.599.6178** and provide them with the downloaded **Federation Metadata XML**. They set this application up to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ASC Contracts.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ASC Contracts.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ASC Contracts**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ASC Contracts**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure ASC Contracts SSO
-2. In the applications list, select **ASC Contracts**.
-
- ![The ASC Contracts link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **ASC Contracts** side, call ASC Networks Inc. (ASC) support at **613.599.6178** and provide them with the downloaded **Federation Metadata XML**. They set this application up to have the SAML SSO connection set properly on both sides.
### Create ASC Contracts test user

Work with ASC Networks Inc. (ASC) support team at **613.599.6178** to get the users added in the ASC Contracts platform.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the ASC Contracts tile in the Access Panel, you should be automatically signed in to the ASC Contracts for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+In this section, you test your Azure AD single sign-on configuration with following options.
-## Additional Resources
+* Click on Test this application in Azure portal and you should be automatically signed in to the ASC Contracts for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ASC Contracts tile in the My Apps, you should be automatically signed in to the ASC Contracts for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ASC Contracts you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Carlsonwagonlit Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/carlsonwagonlit-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Carlson Wagonlit Travel | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Carlson Wagonlit Travel.
+ Title: 'Tutorial: Azure AD SSO integration with CWT'
+description: Learn how to configure single sign-on between Azure Active Directory and CWT.
Previously updated : 07/21/2021 Last updated : 06/08/2022
-# Tutorial: Azure Active Directory integration with Carlson Wagonlit Travel
+# Tutorial: Azure AD SSO integration with CWT
-In this tutorial, you'll learn how to integrate Carlson Wagonlit Travel with Azure Active Directory (Azure AD). When you integrate Carlson Wagonlit Travel with Azure AD, you can:
+In this tutorial, you'll learn how to integrate CWT with Azure Active Directory (Azure AD). When you integrate CWT with Azure AD, you can:
-* Control in Azure AD who has access to Carlson Wagonlit Travel.
-* Enable your users to be automatically signed-in to Carlson Wagonlit Travel with their Azure AD accounts.
+* Control in Azure AD who has access to CWT.
+* Enable your users to be automatically signed-in to CWT with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate Carlson Wagonlit Travel with Azu
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Carlson Wagonlit Travel single sign-on (SSO) enabled subscription.
+* CWT single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Carlson Wagonlit Travel supports **IDP** initiated SSO.
+* CWT supports **IDP** initiated SSO.
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+## Add CWT from the gallery
-## Add Carlson Wagonlit Travel from the gallery
-
-To configure the integration of Carlson Wagonlit Travel into Azure AD, you need to add Carlson Wagonlit Travel from the gallery to your list of managed SaaS apps.
+To configure the integration of CWT into Azure AD, you need to add CWT from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Carlson Wagonlit Travel** in the search box.
-1. Select **Carlson Wagonlit Travel** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **CWT** in the search box.
+1. Select **CWT** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Carlson Wagonlit Travel
+## Configure and test Azure AD SSO for CWT
-Configure and test Azure AD SSO with Carlson Wagonlit Travel using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Carlson Wagonlit Travel.
+Configure and test Azure AD SSO with CWT using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CWT.
-To configure and test Azure AD SSO with Carlson Wagonlit Travel, perform the following steps:
+To configure and test Azure AD SSO with CWT, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
    1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
    1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Carlson Wagonlit Travel SSO](#configure-carlson-wagonlit-travel-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Carlson Wagonlit Travel test user](#create-carlson-wagonlit-travel-test-user)** - to have a counterpart of B.Simon in Carlson Wagonlit Travel that is linked to the Azure AD representation of user.
+1. **[Configure CWT SSO](#configure-cwt-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create CWT test user](#create-cwt-test-user)** - to have a counterpart of B.Simon in CWT that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Carlson Wagonlit Travel** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **CWT** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following step:
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- In the **Identifier** text box, type the value:
- `cwt-stage`
+1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
-5. On the **Set-up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
+1. On the **Set-up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
-6. On the **Set-up Carlson Wagonlit Travel** section, copy the appropriate URL(s) as per your requirement.
+1. On the **Set-up CWT** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Carlson Wagonlit Travel.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to CWT.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Carlson Wagonlit Travel**.
+1. In the applications list, select **CWT**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Carlson Wagonlit Travel SSO
+## Configure CWT SSO
-To configure single sign-on on **Carlson Wagonlit Travel** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Carlson Wagonlit Travel support team](https://www.mycwt.com/traveler-help/). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **CWT** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CWT support team](https://www.mycwt.com/traveler-help/). They set this setting to have the SAML SSO connection set properly on both sides.
-### Create Carlson Wagonlit Travel test user
+### Create CWT test user
-In this section, you create a user called Britta Simon in Carlson Wagonlit Travel. Work with [Carlson Wagonlit Travel support team](https://www.mycwt.com/traveler-help/) to add the users in the Carlson Wagonlit Travel platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in CWT. Work with [CWT support team](https://www.mycwt.com/traveler-help/) to add the users in the CWT platform. Users must be created and activated before you use single sign-on.
## Test SSO

In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on Test this application in Azure portal and you should be automatically signed in to the Carlson Wagonlit Travel for which you set up the SSO.
+* Click on Test this application in Azure portal and you should be automatically signed in to the CWT for which you set up the SSO.
-* You can use Microsoft My Apps. When you click the Carlson Wagonlit Travel tile in the My Apps, you should be automatically signed in to the Carlson Wagonlit Travel for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the CWT tile in the My Apps, you should be automatically signed in to the CWT for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure Carlson Wagonlit Travel you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure CWT you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Cloud Service Picco Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloud-service-picco-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Cloud Service PICCO | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Cloud Service PICCO'
description: Learn how to configure single sign-on between Azure Active Directory and Cloud Service PICCO.
Previously updated : 12/21/2018 Last updated : 06/07/2022
-# Tutorial: Azure Active Directory integration with Cloud Service PICCO
+# Tutorial: Azure AD SSO integration with Cloud Service PICCO
-In this tutorial, you learn how to integrate Cloud Service PICCO with Azure Active Directory (Azure AD).
-Integrating Cloud Service PICCO with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Cloud Service PICCO with Azure Active Directory (Azure AD). When you integrate Cloud Service PICCO with Azure AD, you can:
-* You can control in Azure AD who has access to Cloud Service PICCO.
-* You can enable your users to be automatically signed-in to Cloud Service PICCO (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Cloud Service PICCO.
+* Enable your users to be automatically signed-in to Cloud Service PICCO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Cloud Service PICCO, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Cloud Service PICCO single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Cloud Service PICCO single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Cloud Service PICCO supports **SP** initiated SSO
-* Cloud Service PICCO supports **Just In Time** user provisioning
+* Cloud Service PICCO supports **SP** initiated SSO.
+* Cloud Service PICCO supports **Just In Time** user provisioning.
-## Adding Cloud Service PICCO from the gallery
+## Add Cloud Service PICCO from the gallery
To configure the integration of Cloud Service PICCO into Azure AD, you need to add Cloud Service PICCO from the gallery to your list of managed SaaS apps.
-**To add Cloud Service PICCO from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Cloud Service PICCO**, select **Cloud Service PICCO** from result panel then click **Add** button to add the application.
-
- ![Cloud Service PICCO in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Cloud Service PICCO** in the search box.
+1. Select **Cloud Service PICCO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Cloud Service PICCO based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Cloud Service PICCO needs to be established.
+## Configure and test Azure AD SSO for Cloud Service PICCO
-To configure and test Azure AD single sign-on with Cloud Service PICCO, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Cloud Service PICCO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cloud Service PICCO.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Cloud Service PICCO Single Sign-On](#configure-cloud-service-picco-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Create Cloud Service PICCO test user](#create-cloud-service-picco-test-user)** - to have a counterpart of Britta Simon in Cloud Service PICCO that is linked to the Azure AD representation of user.
-5. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Cloud Service PICCO, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Cloud Service PICCO SSO](#configure-cloud-service-picco-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Cloud Service PICCO test user](#create-cloud-service-picco-test-user)** - to have a counterpart of B.Simon in Cloud Service PICCO that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Cloud Service PICCO, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Cloud Service PICCO** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Cloud Service PICCO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![Configure single sign-on link](common/select-sso.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![Cloud Service PICCO Domain and URLs single sign-on information](common/sp-identifier-reply.png)
+ a. In the **Identifier** box, type a value using the following pattern:
+ `<SUB DOMAIN>.cloudservicepicco.com`
- a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://<SUB DOMAIN>.cloudservicepicco.com/app`
- b. In the **Identifier** box, type a URL using the following pattern:
- `<SUB DOMAIN>.cloudservicepicco.com`
-
- c. In the **Reply URL** text box, type a URL using the following pattern:
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
`https://<SUB DOMAIN>.cloudservicepicco.com/app` > [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact [Cloud Service PICCO Client support team](mailto:picco.support@est.fujitsu.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Cloud Service PICCO Client support team](mailto:picco.support@est.fujitsu.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-4. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
-
-### Configure Cloud Service PICCO Single Sign-On
-
-To configure single sign-on on **Cloud Service PICCO** side, you need to send the **App Federation Metadata Url** to [Cloud Service PICCO support team](mailto:picco.support@est.fujitsu.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Cloud Service PICCO.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cloud Service PICCO.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Cloud Service PICCO**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Cloud Service PICCO**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Cloud Service PICCO SSO
-2. In the applications list, select **Cloud Service PICCO**.
-
- ![The Cloud Service PICCO link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **Cloud Service PICCO** side, you need to send the **App Federation Metadata Url** to [Cloud Service PICCO support team](mailto:picco.support@est.fujitsu.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Cloud Service PICCO test user In this section, a user called Britta Simon is created in Cloud Service PICCO. Cloud Service PICCO supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Cloud Service PICCO, a new one is created after authentication.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Cloud Service PICCO tile in the Access Panel, you should be automatically signed in to the Cloud Service PICCO for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to Cloud Service PICCO Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to Cloud Service PICCO Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Cloud Service PICCO tile in the My Apps, this will redirect to Cloud Service PICCO Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Cloud Service PICCO, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Guardium Data Protection Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/guardium-data-protection-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Guardium Data Protection'
+description: Learn how to configure single sign-on between Azure Active Directory and Guardium Data Protection.
++++++++ Last updated : 05/31/2022++++
+# Tutorial: Azure AD SSO integration with Guardium Data Protection
+
+In this tutorial, you'll learn how to integrate Guardium Data Protection with Azure Active Directory (Azure AD). When you integrate Guardium Data Protection with Azure AD, you can:
+
+* Control in Azure AD who has access to Guardium Data Protection.
+* Enable your users to be automatically signed-in to Guardium Data Protection with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Guardium Data Protection single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Guardium Data Protection supports **SP** and **IDP** initiated SSO.
+
+## Add Guardium Data Protection from the gallery
+
+To configure the integration of Guardium Data Protection into Azure AD, you need to add Guardium Data Protection from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Guardium Data Protection** in the search box.
+1. Select **Guardium Data Protection** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Guardium Data Protection
+
+Configure and test Azure AD SSO with Guardium Data Protection using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Guardium Data Protection.
+
+To configure and test Azure AD SSO with Guardium Data Protection, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Guardium Data Protection SSO](#configure-guardium-data-protection-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Guardium Data Protection test user](#create-guardium-data-protection-test-user)** - to have a counterpart of B.Simon in Guardium Data Protection that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Guardium Data Protection** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `<hostname>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<hostname>:8443/saml/sso`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<hostname>:8443`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Guardium Data Protection support team](mailto:NA@ibm.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Guardium Data Protection application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the Guardium Data Protection application image.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Guardium Data Protection application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them per your requirements.
+
+ | Name | Source Attribute |
+ |-| |
+ | jobtitle | user.jobtitle |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Guardium Data Protection** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Guardium Data Protection.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Guardium Data Protection**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Guardium Data Protection SSO
+
+To configure single sign-on on **Guardium Data Protection** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Guardium Data Protection support team](mailto:NA@ibm.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Guardium Data Protection test user
+
+In this section, you create a user called Britta Simon in Guardium Data Protection. Work with [Guardium Data Protection support team](mailto:NA@ibm.com) to add the users in the Guardium Data Protection platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Guardium Data Protection Sign-on URL where you can initiate the login flow.
+
+* Go to Guardium Data Protection Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Guardium Data Protection for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Guardium Data Protection tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Guardium Data Protection for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Guardium Data Protection, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Javelo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/javelo-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Javelo'
+description: Learn how to configure single sign-on between Azure Active Directory and Javelo.
++++++++ Last updated : 06/06/2022++++
+# Tutorial: Azure AD SSO integration with Javelo
+
+In this tutorial, you'll learn how to integrate Javelo with Azure Active Directory (Azure AD). When you integrate Javelo with Azure AD, you can:
+
+* Control in Azure AD who has access to Javelo.
+* Enable your users to be automatically signed-in to Javelo with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Javelo single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Javelo supports **SP** initiated SSO.
+* Javelo supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Javelo from the gallery
+
+To configure the integration of Javelo into Azure AD, you need to add Javelo from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Javelo** in the search box.
+1. Select **Javelo** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Javelo
+
+Configure and test Azure AD SSO with Javelo using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Javelo.
+
+To configure and test Azure AD SSO with Javelo, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Javelo SSO](#configure-javelo-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Javelo test user](#create-javelo-test-user)** - to have a counterpart of B.Simon in Javelo that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Javelo** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, upload the **Service Provider metadata file**, which you can download from the [URL](https://api.javelo.io/omniauth/<CustomerSPIdentifier>_saml/metadata), and perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Screenshot shows Basic SAML Configuration with the Upload metadata file link.](common/upload-metadata.png "Folder")
+
+ b. Click on **folder logo** to select the metadata file and click **Upload**.
+
+ ![Screenshot shows a dialog box where you can select and upload a file.](common/browse-upload-metadata.png "Logo")
+
+ c. Once the metadata file is successfully uploaded, the necessary URLs are populated automatically.
+
+ d. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CustomerSubdomain>.javelo.io/auth/login`
+
+ > [!NOTE]
+ > This value is not real. Update this value with the actual Sign-on URL. Contact [Javelo Client support team](mailto:Support@javelo.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Javelo.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Javelo**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Javelo SSO
+
+1. Log in to your Javelo company site as an administrator.
+
+1. Go to **Admin** view and navigate to **SSO** tab > **Azure Active Directory** and click **Configure**.
+
+1. In the **Enable SSO with Azure Active Directory** page, perform the following steps:
+
+ ![Screenshot that shows the Configuration Settings.](./media/javelo-tutorial/settings.png "Configuration")
+
+ a. Enter a valid name in the **Provider** textbox.
+
+ b. In the **Entity ID** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ c. In the **Metadata URL** textbox, paste the **App Federation Metadata Url** which you have copied from the Azure portal.
+
+ d. Click **Test URL**.
+
+ e. Enter a valid domain in the **Email Domains** textbox.
+
+ f. Click **Enable SSO with Azure Active Directory**.
+
+### Create Javelo test user
+
+In this section, a user called B.Simon is created in Javelo. Javelo supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Javelo, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Javelo Sign-on URL where you can initiate the login flow.
+
+* Go to Javelo Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Javelo tile in the My Apps, this will redirect to Javelo Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Javelo, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Paloaltoadmin Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/paloaltoadmin-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with Palo Alto Networks - Admin UI | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Palo Alto Networks - Admin UI'
description: Learn how to configure single sign-on between Azure Active Directory and Palo Alto Networks - Admin UI.
Previously updated : 09/08/2021 Last updated : 06/08/2022 # Tutorial: Azure AD SSO integration with Palo Alto Networks - Admin UI
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Palo Alto Networks - Admin UI single sign-on (SSO) enabled subscription.
+* The service must be publicly available. For more information, see [this](../develop/single-sign-on-saml-protocol.md) page.
## Scenario description
active-directory Snowflake Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-tutorial.md
Previously updated : 12/22/2021 Last updated : 06/03/2022 # Tutorial: Azure AD SSO integration with Snowflake
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Snowflake SSO
-1. In a different web browser window, login to Snowflake as a Security Administrator.
+1. In a different web browser window, log in to Snowflake as a Security Administrator.
1. **Switch Role** to **ACCOUNTADMIN**, by clicking on **profile** on the top right side of page.
CREATE [ OR REPLACE ] SECURITY INTEGRATION [ IF NOT EXISTS ]
[ SAML2_SNOWFLAKE_ACS_URL = '<string_literal>' ] ```
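For illustration, here's a hedged sketch of what such a SAML2 security integration might look like; the integration name is hypothetical, and the issuer, SSO URL, and certificate placeholders stand in for the **Azure AD Identifier**, **Login URL**, and **Certificate (Base64)** values copied from the Azure portal:

```sql
-- Hypothetical example; replace every placeholder with the values from your own Azure portal configuration.
create security integration if not exists azure_ad_saml_integration
  type = saml2
  enabled = true
  saml2_issuer = 'https://sts.windows.net/<Azure-AD-tenant-ID>/'
  saml2_sso_url = 'https://login.microsoftonline.com/<Azure-AD-tenant-ID>/saml2'
  saml2_provider = 'custom'
  saml2_x509_cert = '<Base64-certificate-from-the-Azure-portal>'
  saml2_sp_initiated_login_page_label = 'Azure AD'
  saml2_enable_sp_initiated = true;
```
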
+If you are using a new Snowflake URL with an organization name as the login URL, it is necessary to update the following parameters:
+
+ Alter the integration to add the Snowflake Issuer URL and the SAML2 Snowflake ACS URL. For more information, follow step 6 in [this](https://community.snowflake.com/s/article/HOW-TO-SETUP-SSO-WITH-ADFS-AND-THE-SNOWFLAKE-NEW-URL-FORMAT-OR-PRIVATELINK) article. A consolidated example follows the numbered steps below.
+
+1. [ SAML2_SNOWFLAKE_ISSUER_URL = '<string_literal>' ]
+
+ alter security integration `<your security integration name goes here>` set SAML2_SNOWFLAKE_ISSUER_URL = `https://<organization_name>-<account name>.snowflakecomputing.com`;
+
+2. [ SAML2_SNOWFLAKE_ACS_URL = '<string_literal>' ]
+
+ alter security integration `<your security integration name goes here>` set SAML2_SNOWFLAKE_ACS_URL = `https://<organization_name>-<account name>.snowflakecomputing.com/fed/login`;
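+
+For reference, a consolidated sketch of the two statements above, using a hypothetical integration name and the same placeholder organization and account values:
+
+```sql
+-- Hypothetical integration name; replace it and the placeholders with your own values.
+alter security integration azure_ad_saml_integration
+  set SAML2_SNOWFLAKE_ISSUER_URL = 'https://<organization_name>-<account_name>.snowflakecomputing.com';
+
+alter security integration azure_ad_saml_integration
+  set SAML2_SNOWFLAKE_ACS_URL = 'https://<organization_name>-<account_name>.snowflakecomputing.com/fed/login';
+```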
+ > [!NOTE] > Please follow [this](https://docs.snowflake.com/en/sql-reference/sql/create-security-integration.html) guide to know more about how to create a SAML2 security integration.
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Snowflake Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Snowflake Sign-on URL where you can initiate the login flow.
-* Go to Snowflake Sign-on URL directly and initiate the login flow from there.
+* Go to Snowflake Sign-on URL directly and initiate the login flow from there.
#### IDP initiated: * Click on **Test this application** in Azure portal and you should be automatically signed in to the Snowflake for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Snowflake tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Snowflake for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Snowflake tile in the My Apps, if configured in SP mode you would be redirected to the application Sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Snowflake for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Timeoffmanager Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timeoffmanager-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with TimeOffManager | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with TimeOffManager'
description: Learn how to configure single sign-on between Azure Active Directory and TimeOffManager.
Previously updated : 12/10/2019 Last updated : 06/07/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with TimeOffManager
+# Tutorial: Azure AD SSO integration with TimeOffManager
In this tutorial, you'll learn how to integrate TimeOffManager with Azure Active Directory (Azure AD). When you integrate TimeOffManager with Azure AD, you can:
In this tutorial, you'll learn how to integrate TimeOffManager with Azure Active
* Enable your users to be automatically signed-in to TimeOffManager with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * TimeOffManager single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
+* TimeOffManager supports **IDP** initiated SSO.
-* TimeOffManager supports **IDP** initiated SSO
-
-* TimeOffManager supports **Just In Time** user provisioning
+* TimeOffManager supports **Just In Time** user provisioning.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant. -
-## Adding TimeOffManager from the gallery
+## Add TimeOffManager from the gallery
To configure the integration of TimeOffManager into Azure AD, you need to add TimeOffManager from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **TimeOffManager** in the search box. 1. Select **TimeOffManager** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for TimeOffManager
+## Configure and test Azure AD SSO for TimeOffManager
Configure and test Azure AD SSO with TimeOffManager using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TimeOffManager.
-To configure and test Azure AD SSO with TimeOffManager, complete the following building blocks:
+To configure and test Azure AD SSO with TimeOffManager, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with TimeOffManager, complete the following b
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **TimeOffManager** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **TimeOffManager** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following step:
In the **Reply URL** text box, type a URL using the following pattern: `https://www.timeoffmanager.com/cpanel/sso/consume.aspx?company_id=<companyid>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. TimeOffManager application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/edit-attribute.png)
+ ![Screenshot shows the image of TimeOffManager application.](common/edit-attribute.png "Image")
1. In addition to the above, the TimeOffManager application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
1. On the **Set up TimeOffManager** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **TimeOffManager**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
2. Go to **Account \> Account Options \> Single Sign-On Settings**.
- ![Screenshot shows Single Sign-On Settings selected from Account Options.](./media/timeoffmanager-tutorial/ic795917.png "Single Sign-On Settings")
+ ![Screenshot shows Single Sign-On Settings selected from Account Options.](./media/timeoffmanager-tutorial/account.png "Single Sign-On Settings")
3. In the **Single Sign-On Settings** section, perform the following steps:
- ![Screenshot shows the Single Sign-On Settings section where you can enter the values described.](./media/timeoffmanager-tutorial/ic795918.png "Single Sign-On Settings")
+ ![Screenshot shows the Single Sign-On Settings section where you can enter the values described.](./media/timeoffmanager-tutorial/settings.png "Single Sign-On Settings")
a. Open your base-64 encoded certificate in notepad, copy the content of it into your clipboard, and then paste the entire Certificate into **X.509 Certificate** textbox.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. In **Single Sign on settings** page, copy the value of **Assertion Consumer Service URL** and paste it in the **Reply URL** text box under **Basic SAML Configuration** section in Azure portal.
- ![Screenshot shows the Assertion Consumer Service U R L link.](./media/timeoffmanager-tutorial/ic795915.png "Single Sign-On Settings")
+ ![Screenshot shows the Assertion Consumer Service U R L link.](./media/timeoffmanager-tutorial/values.png "Single Sign-On Settings")
### Create TimeOffManager test user
In this section, a user called Britta Simon is created in TimeOffManager. TimeOf
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the TimeOffManager tile in the Access Panel, you should be automatically signed in to the TimeOffManager for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the TimeOffManager for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the TimeOffManager tile in the My Apps, you should be automatically signed in to the TimeOffManager for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try TimeOffManager with Azure AD](https://aad.portal.azure.com/)
+Once you configure TimeOffManager, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Versal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/versal-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Versal | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Versal'
description: Learn how to configure single sign-on between Azure Active Directory and Versal.
Previously updated : 12/10/2019 Last updated : 06/07/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Versal
+# Tutorial: Azure AD SSO integration with Versal
In this tutorial, you'll learn how to integrate Versal with Azure Active Directory (Azure AD). When you integrate Versal with Azure AD, you can:
In this tutorial, you'll learn how to integrate Versal with Azure Active Directo
* Enable your users to be automatically signed-in to Versal with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Versal single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment. -
-* Versal supports **IDP** initiated SSO
+* Versal supports **IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Versal from the gallery
+## Add Versal from the gallery
To configure the integration of Versal into Azure AD, you need to add Versal from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Versal** in the search box. 1. Select **Versal** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for Versal
+## Configure and test Azure AD SSO for Versal
Configure and test Azure AD SSO with Versal using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Versal.
-To configure and test Azure AD SSO with Versal, complete the following building blocks:
+To configure and test Azure AD SSO with Versal, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Versal, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Versal** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Versal** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** page, perform the following steps:
- a. In the **Identifier** text box, type a URL:
+ a. In the **Identifier** text box, type the value:
`VERSAL` b. In the **Reply URL** text box, type a URL using the following pattern:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Versal application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where **nameidentifier** is mapped with **user.userprincipalname**. Versal application expects **nameidentifier** to be mapped with **user.mail**, so you need to edit the attribute mapping by clicking the **Edit** icon and changing the attribute mapping.
- ![image](common/edit-attribute.png)
+ ![Screenshot shows the image of Versal application.](common/edit-attribute.png "Attributes")
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up Versal** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Versal**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
You will need to create a course, share it with your organization, and publish it.
Please see [Creating a course](https://support.versal.com/hc/articles/203722528-Create-a-course), [Publishing a course](https://support.versal.com/hc/articles/203753398-Publishing-a-course), and [Course and learner management](https://support.versal.com/hc/articles/206029467-Course-and-learner-management) for more information.
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)--- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Versal with Azure AD](https://aad.portal.azure.com/)
+Once you configure Versal you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
With an AKS cluster deployed into your existing virtual network subnet, you can
[express-route]: ../expressroute/expressroute-introduction.md [network-comparisons]: concepts-network.md#compare-network-models [custom-route-table]: ../virtual-network/manage-route-table.md
-[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-mi
+[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-managed-identity
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
For more information on configuring your load balancer in a different subnet, se
## Connect Azure Private Link service to internal load balancer (Preview)
-To attach an Azure Private Link Service to an internal load balancer, create a service manifest named `internal-lb-pls.yaml` with the service type *LoadBalancer* and the *azure-load-balancer-internal* and *azure-pls-create* annotation as shown in the example below. For more options, refer to the [Azure Private Link Service Integration](https://kubernetes-sigs.github.io/cloud-provider-azure/development/design-docs/pls-integration/) design document
+### Before you begin
+
+You must have the following resources installed:
+
+* The Azure CLI
+* The `aks-preview` extension version 0.5.50 or later
+* Kubernetes version 1.22.x or above
+
+#### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Create a Private Link service connection
+
+To attach an Azure Private Link service to an internal load balancer, create a service manifest named `internal-lb-pls.yaml` with the service type *LoadBalancer* and the *azure-load-balancer-internal* and *azure-pls-create* annotations, as shown in the example below. For more options, refer to the [Azure Private Link Service Integration](https://kubernetes-sigs.github.io/cloud-provider-azure/development/design-docs/pls-integration/) design document.
```yaml apiVersion: v1
pls-xyz pls-xyz.abc123-defg-4hij-56kl-789mnop.eastus2.azure.privatelinkservice
```
-### Create a Private Endpoint to the Private Link Service
+### Create a Private Endpoint to the Private Link service
A Private Endpoint allows you to privately connect to your Kubernetes service object via the Private Link Service created above. To do so, follow the example shown below:
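
A minimal Azure CLI sketch of creating such a private endpoint is shown here; the resource group, virtual network, subnet, endpoint names, and the Private Link service resource ID are all placeholders for your own environment:

```azurecli-interactive
# All names below are placeholders; substitute your own resource group, virtual network, subnet,
# and the resource ID of the Private Link service created by the annotated Kubernetes service.
az network private-endpoint create \
    --resource-group myOtherResourceGroup \
    --name myAKSServicePrivateEndpoint \
    --vnet-name myOtherVnet \
    --subnet myOtherSubnet \
    --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/<node-resource-group>/providers/Microsoft.Network/privateLinkServices/<pls-name>" \
    --connection-name myAKSServiceConnection
```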
aks Kubernetes Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-service-principal.md
Title: Service principals for Azure Kubernetes Services (AKS)
-description: Create and manage an Azure Active Directory service principal for a cluster in Azure Kubernetes Service (AKS)
+ Title: Use a service principal with Azure Kubernetes Services (AKS)
+description: Create and manage an Azure Active Directory service principal with a cluster in Azure Kubernetes Service (AKS)
Previously updated : 12/06/2021 Last updated : 06/08/2022 #Customer intent: As a cluster operator, I want to understand how to create a service principal and delegate permissions for AKS to access required resources. In large enterprise environments, the user that deploys the cluster (or CI/CD system), may not have permissions to create this service principal automatically when the cluster is created.
-# Service principals with Azure Kubernetes Service (AKS)
+# Use a service principal with Azure Kubernetes Service (AKS)
-To interact with Azure APIs, an AKS cluster requires either an [Azure Active Directory (AD) service principal][aad-service-principal] or a [managed identity](use-managed-identity.md). A service principal or managed identity is needed to dynamically create and manage other Azure resources such as an Azure load balancer or container registry (ACR).
+To access other Azure resources, an AKS cluster requires either an [Azure Active Directory (AD) service principal][aad-service-principal] or a [managed identity][managed-identity-resources-overview]. A service principal or managed identity is needed to dynamically create and manage other Azure resources such as an Azure load balancer or container registry (ACR).
+
+Managed identities are the recommended way to authenticate with other resources in Azure, and are the default authentication method for your AKS cluster. For more information about using a managed identity with your cluster, see [Use a system-assigned managed identity][use-managed-identity].
This article shows how to create and use a service principal for your AKS clusters. ## Before you begin
-To create an Azure AD service principal, you must have permissions to register an application with your Azure AD tenant, and to assign the application to a role in your subscription. If you don't have the necessary permissions, you might need to ask your Azure AD or subscription administrator to assign the necessary permissions, or pre-create a service principal for you to use with the AKS cluster.
-
-If you are using a service principal from a different Azure AD tenant, there are additional considerations around the permissions available when you deploy the cluster. You may not have the appropriate permissions to read and write directory information. For more information, see [What are the default user permissions in Azure Active Directory?][azure-ad-permissions]
-
-### [Azure CLI](#tab/azure-cli)
-
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-You also need Azure PowerShell version 5.0.0 or later installed. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install the Azure Az PowerShell module][install-the-azure-az-powershell-module].
+To create an Azure AD service principal, you must have permissions to register an application with your Azure AD tenant, and to assign the application to a role in your subscription. If you don't have the necessary permissions, you need to ask your Azure AD or subscription administrator to assign the necessary permissions, or pre-create a service principal for you to use with the AKS cluster.
--
-## Automatically create and use a service principal
-
-### [Azure CLI](#tab/azure-cli)
+If you're using a service principal from a different Azure AD tenant, there are other considerations around the permissions available when you deploy the cluster. You may not have the appropriate permissions to read and write directory information. For more information, see [What are the default user permissions in Azure Active Directory?][azure-ad-permissions]
-When you create an AKS cluster in the Azure portal or using the [az aks create][az-aks-create] command, Azure creates a managed identity.
+## Prerequisites
-In the following Azure CLI example, a service principal is not specified. In this scenario, the Azure CLI creates a managed identity for the AKS cluster.
+Azure CLI version 2.0.59 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-```azurecli
-az aks create --name myAKSCluster --resource-group myResourceGroup
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
+Azure PowerShell version 5.0.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install the Azure Az PowerShell module][install-the-azure-az-powershell-module].
-When you create an AKS cluster in the Azure portal or using the [New-AzAksCluster][new-azakscluster] command, Azure can generate a new managed identity .
-
-In the following Azure PowerShell example, a service principal is not specified. In this scenario, Azure PowerShell creates a managed identity for the AKS cluster.
-
-```azurepowershell-interactive
-New-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup
-```
-
-> [!NOTE]
-> For error "Service principal clientID: 00000000-0000-0000-0000-000000000000 not found in Active Directory tenant 00000000-0000-0000-0000-000000000000", see [Additional considerations](#additional-considerations) to remove the `acsServicePrincipal.json` file.
-- ## Manually create a service principal ### [Azure CLI](#tab/azure-cli)
To manually create a service principal with the Azure CLI, use the [az ad sp cre
az ad sp create-for-rbac --name myAKSClusterServicePrincipal ```
-The output is similar to the following example. Make a note of your own `appId` and `password`. These values are used when you create an AKS cluster in the next section.
+The output is similar to the following example. Copy the values for `appId` and `password`. These values are used when you create an AKS cluster in the next section.
```json {
Id : 559513bd-0c19-4c1a-87cd-851a26afd5fc
Type : ```
-To decrypt the value stored in the **Secret** secure string, you use the following example.
+To decrypt the value stored in the **Secret** secure string, run the following command:
```azurepowershell-interactive $BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sp.Secret)
az aks create \
``` > [!NOTE]
-> If you're using an existing service principal with customized secret, ensure the secret is no longer than 190 bytes.
-
-If you deploy an AKS cluster using the Azure portal, on the *Authentication* page of the **Create Kubernetes cluster** dialog, choose to **Configure service principal**. Select **Use existing**, and specify the following values:
--- **Service principal client ID** is your *appId*-- **Service principal client secret** is the *password* value-
-![Image of browsing to Azure Vote](media/kubernetes-service-principal/portal-configure-service-principal.png)
+> If you're using an existing service principal with customized secret, ensure the secret is not longer than 190 bytes.
### [Azure PowerShell](#tab/azure-powershell)
New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -ServiceP
> [!NOTE] > If you're using an existing service principal with customized secret, ensure the secret is no longer than 190 bytes.
-If you deploy an AKS cluster using the Azure portal, on the *Authentication* page of the **Create Kubernetes cluster** dialog, choose to **Configure service principal**. Select **Use existing**, and specify the following values:
--- **Service principal client ID** is your *ApplicationId*-- **Service principal client secret** is the decrypted *Secret* value-
-![Image of browsing to Azure Vote](media/kubernetes-service-principal/portal-configure-service-principal.png)
- ## Delegate access to other Azure resources
The `Scope` for a resource needs to be a full resource ID, such as */subscriptio
> [!NOTE] > If you have removed the Contributor role assignment from the node resource group, the operations below may fail.
-> Permission grants to clusters using System Managed Identity may take up 60 minutes to populate.
+> Permission granted to a cluster using a system-assigned managed identity may take up to 60 minutes to populate.
-The following sections detail common delegations that you may need to make.
+The following sections detail common delegations that you may need to assign.
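As an illustration, a delegation scoped to a resource group by its full resource ID might look like the following role assignment; the role, scope, and `appId` values are placeholders:

```azurecli
az role assignment create \
    --assignee <appId> \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```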
### Azure Container Registry
If you use Azure Container Registry (ACR) as your container image store, you nee
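One common approach is to assign the built-in *AcrPull* role on the registry to the cluster's service principal, as in this sketch; the registry name and `appId` are placeholders:

```azurecli
# Get the resource ID of the registry to use as the role assignment scope
ACR_ID=$(az acr show --name myContainerRegistry --query id --output tsv)

az role assignment create \
    --assignee <appId> \
    --role AcrPull \
    --scope "$ACR_ID"
```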
### Networking
-You may use advanced networking where the virtual network and subnet or public IP addresses are in another resource group. Assign the [Network Contributor][rbac-network-contributor] built-in role on the subnet within the virtual network. Alternatively, you can create a [custom role][rbac-custom-role] with permissions to access the network resources in that resource group. See [AKS service permissions][aks-permissions] for more details.
+You may use advanced networking where the virtual network and subnet or public IP addresses are in another resource group. Assign the [Network Contributor][rbac-network-contributor] built-in role on the subnet within the virtual network. Alternatively, you can create a [custom role][rbac-custom-role] with permissions to access the network resources in that resource group. For more information, see [AKS service permissions][aks-permissions].
### Storage
-You may need to access existing Disk resources in another resource group. Assign one of the following set of role permissions:
+If you need to access existing disk resources in another resource group, assign one of the following set of role permissions:
- Create a [custom role][rbac-custom-role] and define the following role permissions: - *Microsoft.Compute/disks/read*
You may need to access existing Disk resources in another resource group. Assign
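As a sketch, a custom role limited to disk read access could be defined and assigned as follows; the role name, action list, and scopes are illustrative only:

```azurecli
# Define a minimal custom role with the disk permission listed above (illustrative)
cat > customDiskRole.json <<'EOF'
{
  "Name": "AKS Disk Reader",
  "Description": "Read access to managed disks in a specific resource group",
  "Actions": [ "Microsoft.Compute/disks/read" ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>/resourceGroups/<disk-resource-group>" ]
}
EOF

# Create the role definition, then assign it to the cluster's service principal
az role definition create --role-definition @customDiskRole.json
az role assignment create \
    --assignee <appId> \
    --role "AKS Disk Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<disk-resource-group>"
```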
### Azure Container Instances
-If you use Virtual Kubelet to integrate with AKS and choose to run Azure Container Instances (ACI) in resource group separate to the AKS cluster, the AKS service principal must be granted *Contributor* permissions on the ACI resource group.
+If you use Virtual Kubelet to integrate with AKS and choose to run Azure Container Instances (ACI) in a resource group separate from the AKS cluster, the AKS cluster service principal must be granted *Contributor* permissions on the ACI resource group.
-## Additional considerations
+## Other considerations
### [Azure CLI](#tab/azure-cli)
-When using AKS and Azure AD service principals, keep the following considerations in mind.
+When using AKS and an Azure AD service principal, consider the following:
-- The service principal for Kubernetes is a part of the cluster configuration. However, don't use the identity to deploy the cluster.
+- The service principal for Kubernetes is a part of the cluster configuration. However, don't use this identity to deploy the cluster.
- By default, the service principal credentials are valid for one year. You can [update or rotate the service principal credentials][update-credentials] at any time. - Every service principal is associated with an Azure AD application. The service principal for a Kubernetes cluster can be associated with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint. - When you specify the service principal **Client ID**, use the value of the `appId`. - On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the file `/etc/kubernetes/azure.json` - When you use the [az aks create][az-aks-create] command to generate the service principal automatically, the service principal credentials are written to the file `~/.azure/aksServicePrincipal.json` on the machine used to run the command.-- If you do not specifically pass a service principal in additional AKS CLI commands, the default service principal located at `~/.azure/aksServicePrincipal.json` is used.-- You can also optionally remove the aksServicePrincipal.json file, and AKS will create a new service principal.-- When you delete an AKS cluster that was created by [az aks create][az-aks-create], the service principal that was created automatically is not deleted.
- - To delete the service principal, query for your cluster *servicePrincipalProfile.clientId* and then delete with [az ad sp delete][az-ad-sp-delete]. Replace the following resource group and cluster names with your own values:
+- If you don't specify a service principal with AKS CLI commands, the default service principal located at `~/.azure/aksServicePrincipal.json` is used.
+- You can optionally remove the `aksServicePrincipal.json` file, and AKS creates a new service principal.
+- When you delete an AKS cluster that was created by [az aks create][az-aks-create], the service principal created automatically isn't deleted.
+ - To delete the service principal, query for your cluster's *servicePrincipalProfile.clientId* and then delete it using the [az ad sp delete][az-ad-sp-delete] command. Replace the values for the `-g` parameter for the resource group name, and `-n` parameter for the cluster name:
```azurecli az ad sp delete --id $(az aks show -g myResourceGroup -n myAKSCluster --query servicePrincipalProfile.clientId -o tsv)
When using AKS and Azure AD service principals, keep the following consideration
### [Azure PowerShell](#tab/azure-powershell)
-When using AKS and Azure AD service principals, keep the following considerations in mind.
+When using AKS and an Azure AD service principal, consider the following:
-- The service principal for Kubernetes is a part of the cluster configuration. However, don't use the identity to deploy the cluster.
+- The service principal for Kubernetes is a part of the cluster configuration. However, don't use this identity to deploy the cluster.
- By default, the service principal credentials are valid for one year. You can [update or rotate the service principal credentials][update-credentials] at any time. - Every service principal is associated with an Azure AD application. The service principal for a Kubernetes cluster can be associated with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint. - When you specify the service principal **Client ID**, use the value of the `ApplicationId`. - On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the file `/etc/kubernetes/azure.json` - When you use the [New-AzAksCluster][new-azakscluster] command to generate the service principal automatically, the service principal credentials are written to the file `~/.azure/acsServicePrincipal.json` on the machine used to run the command.-- If you do not specifically pass a service principal in additional AKS PowerShell commands, the default service principal located at `~/.azure/acsServicePrincipal.json` is used.-- You can also optionally remove the acsServicePrincipal.json file, and AKS will create a new service principal.-- When you delete an AKS cluster that was created by [New-AzAksCluster][new-azakscluster], the service principal that was created automatically is not deleted.
- - To delete the service principal, query for your cluster *ServicePrincipalProfile.ClientId* and then delete with [Remove-AzADServicePrincipal][remove-azadserviceprincipal]. Replace the following resource group and cluster names with your own values:
+- If you don't specify a service principal with AKS PowerShell commands, the default service principal located at `~/.azure/acsServicePrincipal.json` is used.
+- You can optionally remove the `acsServicePrincipal.json` file, and AKS creates a new service principal.
+- When you delete an AKS cluster that was created by [New-AzAksCluster][new-azakscluster], the service principal created automatically isn't deleted.
+ - To delete the service principal, query for your cluster's *ServicePrincipalProfile.ClientId* and then delete it using the [Remove-AzADServicePrincipal][remove-azadserviceprincipal] command. Replace the values for the `-ResourceGroupName` parameter for the resource group name, and `-Name` parameter for the cluster name:
```azurepowershell-interactive $ClientId = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster ).ServicePrincipalProfile.ClientId
When using AKS and Azure AD service principals, keep the following consideration
### [Azure CLI](#tab/azure-cli)
-The service principal credentials for an AKS cluster are cached by the Azure CLI. If these credentials have expired, you encounter errors deploying AKS clusters. The following error message when running [az aks create][az-aks-create] may indicate a problem with the cached service principal credentials:
+The service principal credentials for an AKS cluster are cached by the Azure CLI. If these credentials have expired, you encounter errors during deployment of the AKS cluster. The following error message when running [az aks create][az-aks-create] may indicate a problem with the cached service principal credentials:
```console Operation failed with status: 'Bad Request'.
Details: The credentials in ServicePrincipalProfile were invalid. Please see htt
(Details: adal: Refresh request failed. Status Code = '401'. ```
-Check the age of the credentials file using the following command:
+Check the age of the credentials file by running the following command:
```console ls -la $HOME/.azure/aksServicePrincipal.json ```
-The default expiration time for the service principal credentials is one year. If your *aksServicePrincipal.json* file is older than one year, delete the file and try to deploy an AKS cluster again.
+The default expiration time for the service principal credentials is one year. If your *aksServicePrincipal.json* file is older than one year, delete the file and retry deploying the AKS cluster.
### [Azure PowerShell](#tab/azure-powershell)
-The service principal credentials for an AKS cluster are cached by Azure PowerShell. If these credentials have expired, you encounter errors deploying AKS clusters. The following error message when running [New-AzAksCluster][new-azakscluster] may indicate a problem with the cached service principal credentials:
+The service principal credentials for an AKS cluster are cached by Azure PowerShell. If these credentials have expired, you encounter errors during deployment of the AKS cluster. The following error message when running [New-AzAksCluster][new-azakscluster] may indicate a problem with the cached service principal credentials:
```console Operation failed with status: 'Bad Request'.
Details: The credentials in ServicePrincipalProfile were invalid. Please see htt
(Details: adal: Refresh request failed. Status Code = '401'. ```
-Check the age of the credentials file using the following command:
+Check the age of the credentials file by running the following command:
```azurepowershell-interactive Get-ChildItem -Path $HOME/.azure/aksServicePrincipal.json ```
-The default expiration time for the service principal credentials is one year. If your *aksServicePrincipal.json* file is older than one year, delete the file and try to deploy an AKS cluster again.
+The default expiration time for the service principal credentials is one year. If your *aksServicePrincipal.json* file is older than one year, delete the file and retry deploying the AKS cluster.
For information on how to update the credentials, see [Update or rotate the cred
[new-azroleassignment]: /powershell/module/az.resources/new-azroleassignment [set-azakscluster]: /powershell/module/az.aks/set-azakscluster [remove-azadserviceprincipal]: /powershell/module/az.resources/remove-azadserviceprincipal
+[use-managed-identity]: use-managed-identity.md
+[managed-identity-resources-overview]: ..//active-directory/managed-identities-azure-resources/overview.md
aks Quickstart Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-event-grid.md
az group delete --name MyResourceGroup --yes --no-wait
## Next steps
-In this quickstart, you deployed a Kubernetes cluster and then subscribed to AKS events in Azure Event Hub.
+In this quickstart, you deployed a Kubernetes cluster and then subscribed to AKS events in Azure Event Hubs.
To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
To learn more about AKS, and walk through a complete code to deployment example,
[az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register [az-group-delete]: /cli/azure/group#az_group_delete
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
+[sp-delete]: kubernetes-service-principal.md#other-considerations
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md
For more information about using Helm, see the Helm documentation.
[helm-documentation]: https://helm.sh/docs/ [helm-existing]: kubernetes-helm.md [helm-install]: https://helm.sh/docs/intro/install/
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
+[sp-delete]: kubernetes-service-principal.md#other-considerations
[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
For more information on AKS, see [AKS overview][aks-intro]. For guidance on a cr
[az aks upgrade]: /cli/azure/aks#az_aks_upgrade [azure-cli-install]: /cli/azure/install-azure-cli [az-group-delete]: /cli/azure/group#az_group_delete
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
+[sp-delete]: kubernetes-service-principal.md#other-considerations
[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE [azure-powershell-install]: /powershell/azure/install-az-ps [get-azakscluster]: /powershell/module/az.aks/get-azakscluster
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
Title: Use managed identities in Azure Kubernetes Service
-description: Learn how to use managed identities in Azure Kubernetes Service (AKS)
+ Title: Use a managed identity in Azure Kubernetes Service
+description: Learn how to use a system-assigned or user-assigned managed identity in Azure Kubernetes Service (AKS)
Previously updated : 06/01/2022 Last updated : 06/07/2022
-# Use managed identities in Azure Kubernetes Service
+# Use a managed identity in Azure Kubernetes Service
-Currently, an Azure Kubernetes Service (AKS) cluster (specifically, the Kubernetes cloud provider) requires an identity to create additional resources like load balancers and managed disks in Azure. This identity can be either a *managed identity* or a *service principal*. If you use a [service principal](kubernetes-service-principal.md), you must either provide one or AKS creates one on your behalf. If you use managed identity, this will be created for you by AKS automatically. Clusters using service principals eventually reach a state in which the service principal must be renewed to keep the cluster working. Managing service principals adds complexity, which is why it's easier to use managed identities instead. The same permission requirements apply for both service principals and managed identities.
+An Azure Kubernetes Service (AKS) cluster requires an identity to access Azure resources like load balancers and managed disks. This identity can be either a managed identity or a service principal. By default, when you create an AKS cluster, a system-assigned managed identity is automatically created. The identity is managed by the Azure platform and doesn't require you to provision or rotate any secrets. For more information about managed identities in Azure AD, see [Managed identities for Azure resources][managed-identity-resources-overview].
-*Managed identities* are essentially a wrapper around service principals, and make their management simpler. Credential rotation for MI happens automatically every 46 days according to Azure Active Directory default. AKS uses both system-assigned and user-assigned managed identity types. These identities are currently immutable. To learn more, read about [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+To use a [service principal](kubernetes-service-principal.md), you have to create one; AKS doesn't create one automatically. The service principal used by a cluster eventually expires and must be renewed to keep the cluster working. Managing service principals adds complexity, which is why it's easier to use managed identities instead. The same permission requirements apply for both service principals and managed identities.
-## Before you begin
+Managed identities are essentially a wrapper around service principals, and make their management simpler. Managed identities use certificate-based authentication; each managed identity's credential expires after 90 days and is rolled after 45 days. AKS uses both system-assigned and user-assigned managed identity types, and these identities are immutable.
-You must have the following resource installed:
+## Prerequisites
-- The Azure CLI, version 2.23.0 or later-
-> [!NOTE]
-> AKS will create a kubelet MI in the Node resource group if you do not bring your own kubelet MI.
+Azure CLI version 2.23.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Limitations
-* Tenants move / migrate of managed identity enabled clusters isn't supported.
+* Moving or migrating a managed identity-enabled cluster to a different tenant isn't supported.
* If the cluster has `aad-pod-identity` enabled, Node-Managed Identity (NMI) pods modify the nodes' iptables to intercept calls to the Azure Instance Metadata endpoint. This configuration means any request made to the Metadata endpoint is intercepted by NMI even if the pod doesn't use
AKS uses several managed identities for built-in services and add-ons.
| Add-on | Virtual-Node (ACIConnector) | Manages required network resources for Azure Container Instances (ACI) | Contributor role for node resource group | No | OSS project | aad-pod-identity | Enables applications to access cloud resources securely with Azure Active Directory (AAD) | NA | Steps to grant permission at https://github.com/Azure/aad-pod-identity#role-assignment.
-## Create an AKS cluster with managed identities
+> [!NOTE]
+> AKS will create a kubelet managed identity in the Node resource group if you do not specify your own kubelet managed identity.
+
+## Create an AKS cluster using a managed identity
-You can now create an AKS cluster with managed identities by using the following CLI commands.
+You can create an AKS cluster using a system-assigned managed identity by running the following CLI commands.
First, create an Azure resource group:
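A minimal sketch of the resource group and cluster creation, using the `--enable-managed-identity` flag this article describes; the location and node count are illustrative:

```azurecli-interactive
# Create a resource group
az group create --name myResourceGroup --location eastus

# Create an AKS cluster with a system-assigned managed identity
az aks create \
    --resource-group myResourceGroup \
    --name myManagedCluster \
    --enable-managed-identity \
    --node-count 1
```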
Finally, get credentials to access the cluster:
az aks get-credentials --resource-group myResourceGroup --name myManagedCluster ```
-## Update an AKS cluster to managed identities
+## Update an AKS cluster to use a managed identity
-You can now update an AKS cluster currently working with service principals to work with managed identities by using the following CLI commands.
+To update an AKS cluster currently using a service principal to work with a system-assigned managed identity, run the following CLI command.
```azurecli-interactive az aks update -g <RGName> -n <AKSName> --enable-managed-identity ```+ > [!NOTE]
-> An update will only work if there is an actual VHD update to consume. If you are running the latest VHD, you will need to wait till the next VHD is available in order to do the actual update.
+> An update will only work if there is an actual VHD update to consume. If you are running the latest VHD, you'll need to wait until the next VHD is available in order to perform the update.
> > [!NOTE]
-> After updating, your cluster's control plane and addon pods will switch to use managed identity, but kubelet will KEEP USING SERVICE PRINCIPAL until you upgrade your agentpool. Perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to managed identity.
+> After updating, your cluster's control plane and add-on pods use the managed identity, but kubelet continues using a service principal until you upgrade your agent pool. Perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to a managed identity.
>
-> If your cluster was using --attach-acr to pull from image from Azure Container Registry, after updating your cluster to Managed Identity, you need to rerun `az aks update --attach-acr <ACR Resource ID>` to let the newly created kubelet used for managed identity get the permission to pull from ACR. Otherwise you will not be able to pull from ACR after the upgrade.
+> If your cluster was using `--attach-acr` to pull images from Azure Container Registry, after updating your cluster to a managed identity you need to rerun `az aks update --attach-acr <ACR Resource ID>` so the newly created kubelet managed identity gets permission to pull from ACR. Otherwise, you won't be able to pull from ACR after the upgrade.
>
-> The Azure CLI will ensure your addon's permission is correctly set after migrating, if you're not using the Azure CLI to perform the migrating operation, you will need to handle the addon identity's permission by yourself. Here is one example using [ARM](../role-based-access-control/role-assignments-template.md).
+> The Azure CLI will ensure your add-on's permissions are correctly set after migrating. If you're not using the Azure CLI to perform the migration, you'll need to handle the add-on identity's permissions yourself. Here is an example using an [Azure Resource Manager](../role-based-access-control/role-assignments-template.md) template.
> [!WARNING]
-> Nodepool upgrade will cause downtime for your AKS cluster as the nodes in the nodepools will be cordoned/drained and then reimaged.
+> A nodepool upgrade will cause downtime for your AKS cluster as the nodes in the nodepools will be cordoned/drained and then reimaged.
-## Obtain and use the system-assigned managed identity for your AKS cluster
+## Get and use the system-assigned managed identity for your AKS cluster
Confirm your AKS cluster is using managed identity with the following CLI command:
Confirm your AKS cluster is using managed identity with the following CLI comman
az aks show -g <RGName> -n <ClusterName> --query "servicePrincipalProfile" ```
-If the cluster is using managed identities, you will see a `clientId` value of "msi". A cluster using a Service Principal instead will instead show the object ID. For example:
+If the cluster is using a managed identity, the output shows `clientId` with a value of **msi**. A cluster using a service principal shows an object ID. For example:
```output {
If the cluster is using managed identities, you will see a `clientId` value of "
} ```
-After verifying the cluster is using managed identities, you can find the control plane system-assigned identity's object ID with the following command:
+After verifying the cluster is using a managed identity, you can find the control plane system-assigned identity's object ID by running the following command:
```azurecli-interactive az aks show -g <RGName> -n <ClusterName> --query "identity"
az aks show -g <RGName> -n <ClusterName> --query "identity"
``` > [!NOTE]
-> For creating and using your own VNet, static IP address, or attached Azure disk where the resources are outside of the worker node resource group, CLI will add the role assignement automatically. If you are using ARM template or other clients, you need to use the PrincipalID of the cluster System Assigned Managed Identity to perform a role assignment. For more information on role assignment, see [Delegate access to other Azure resources](kubernetes-service-principal.md#delegate-access-to-other-azure-resources).
+> For creating and using your own VNet, static IP address, or attached Azure disk where the resources are outside of the worker node resource group, the CLI will add the role assignment automatically. If you are using an ARM template or other method, you need to use the PrincipalID of the cluster system-assigned managed identity to perform a role assignment. For more information on role assignment, see [Delegate access to other Azure resources](kubernetes-service-principal.md#delegate-access-to-other-azure-resources).
>
-> Permission grants to cluster Managed Identity used by Azure Cloud provider may take up 60 minutes to populate.
--
-## Bring your own control plane MI
-A custom control plane identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as using a custom VNET or outboundType of UDR with a pre-created managed identity.
+> Permission granted to your cluster's managed identity used by Azure may take up to 60 minutes to populate.
+## Bring your own control plane managed identity
-You must have the Azure CLI, version 2.15.1 or later installed.
+A custom control plane managed identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as using a custom VNET or outboundType of UDR with a pre-created managed identity.
-### Limitations
-* USDOD Central, USDOD East, USGov Iowa in Azure Government aren't currently supported.
+> [!NOTE]
+> The USDOD Central, USDOD East, and USGov Iowa regions in the Azure US Government cloud aren't currently supported.
-If you don't have a managed identity yet, you should go ahead and create one for example by using the [az identity][az-identity-create] command.
+If you don't have a managed identity, you should create one by running the [az identity][az-identity-create] command.
```azurecli-interactive az identity create --name myIdentity --resource-group myResourceGroup ```
-Azure CLI will automatically add required role assignment for control plane MI. If you are using ARM template or other clients, you need to create the role assignment manually.
+Azure CLI automatically adds required role assignment for the control plane managed identity. If you are using an ARM template or other method, you need to create the role assignment manually.
+ ```azurecli-interactive az role assignment create --assignee <control-plane-identity-object-id> --role "Managed Identity Operator" --scope <kubelet-identity-resource-id> ```
-If your managed identity is part of your subscription, you can use [az identity CLI command][az-identity-list] to query it.
+If your managed identity is part of your subscription, run the following [az identity CLI command][az-identity-list] command to query it.
```azurecli-interactive az identity list --query "[].{Name:name, Id:id, Location:location}" -o table ```
-Now you can use the following command to create your cluster with your existing identity:
+Run the following command to create a cluster with your existing identity:
```azurecli-interactive az aks create \
az aks create \
--assign-identity <identity-id> ```
-A successful cluster creation using your own managed identities contains this userAssignedIdentities profile information:
+A successful cluster creation using your own managed identity should resemble the following **userAssignedIdentities** profile information:
```output "identity": {
A successful cluster creation using your own managed identities contains this us
}, ```
-## Bring your own kubelet MI
+## Use a pre-created kubelet managed identity
-A Kubelet identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as connection to ACR with a pre-created managed identity.
+A kubelet identity enables access to be granted to an existing identity prior to cluster creation. This feature enables scenarios such as connection to ACR with a pre-created managed identity.
> [!WARNING]
-> Updating kubelet MI will upgrade Nodepool, which causes downtime for your AKS cluster as the nodes in the nodepools will be cordoned/drained and then reimaged.
-
+> Updating kubelet managed identity upgrades Nodepool, which causes downtime for your AKS cluster as the nodes in the nodepools will be cordoned/drained and then reimaged.
### Prerequisites -- You must have the Azure CLI, version 2.26.0 or later installed.
+- Azure CLI version 2.26.0 or later installed. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
### Limitations -- Only works with a User-Assigned Managed cluster.-- China East, China North in Azure China 21Vianet aren't currently supported.
+- Only works with a user-assigned managed cluster.
+- China East and China North regions in Azure China 21Vianet aren't currently supported.
### Create or obtain managed identities
-If you don't have a control plane managed identity yet, you should go ahead and create one. The following example uses the [az identity create][az-identity-create] command:
+If you don't have a control plane managed identity, you can create one by running the following [az identity create][az-identity-create] command:
```azurecli-interactive az identity create --name myIdentity --resource-group myResourceGroup ```
-The result should look like:
+The output should resemble the following:
```output {
The result should look like:
} ```
-If you don't have a kubelet managed identity yet, you should go ahead and create one. The following example uses the [az identity create][az-identity-create] command:
+If you don't have a kubelet managed identity, you can create one by running the following [az identity create][az-identity-create] command:
```azurecli-interactive az identity create --name myKubeletIdentity --resource-group myResourceGroup ```
-The result should look like:
+The output should resemble the following:
```output {
az identity list --query "[].{Name:name, Id:id, Location:location}" -o table
### Create a cluster using kubelet identity
-Now you can use the following command to create your cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
+Now you can use the following command to create your AKS cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
```azurecli-interactive az aks create \
az aks create \
--assign-kubelet-identity <kubelet-identity-resource-id> ```
-A successful cluster creation using your own kubelet managed identity contains the following output:
+A successful AKS cluster creation using your own kubelet managed identity should resemble the following output:
```output "identity": {
A successful cluster creation using your own kubelet managed identity contains t
}, ```
-### Update an existing cluster using kubelet identity
+### Update an existing cluster using kubelet identity
-Update kubelet identity on an existing cluster with your existing identities.
+Update kubelet identity on an existing AKS cluster with your existing identities.
#### Make sure the CLI version is 2.37.0 or later
az version
# Upgrade the version to make sure it is 2.37.0 or later az upgrade ```+ #### Updating your cluster with kubelet identity Now you can use the following command to update your cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
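A sketch of such an update command, combining the parameters described above; the resource group, cluster name, and identity resource IDs are placeholders:

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myManagedCluster \
    --enable-managed-identity \
    --assign-identity <control-plane-identity-resource-id> \
    --assign-kubelet-identity <kubelet-identity-resource-id>
```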
A successful cluster update using your own kubelet managed identity contains the
``` ## Next steps
-* Use [Azure Resource Manager templates ][aks-arm-template] to create Managed Identity enabled clusters.
+
+Use [Azure Resource Manager templates][aks-arm-template] to create a managed identity-enabled cluster.
<!-- LINKS - external --> [aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters
A successful cluster update using your own kubelet managed identity contains the
[az-identity-list]: /cli/azure/identity#az_identity_list [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
+[managed-identity-resources-overview]: ../active-directory/managed-identities-azure-resources/overview.md
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
If you configure policy definitions at more than one scope, multiple policies co
In API Management, determine the policy evaluation order by placement of the `base` element in each section in the policy definition at each scope. The `base` element inherits the policies configured in that section at the next broader (parent) scope. The `base` element is included by default in each policy section. > [!NOTE]
-> To view the effective policies at the current scope, select **Recalculate effective policy** in the policy editor.
+> To view the effective policies at the current scope, select **Calculate effective policy** in the policy editor.
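As an illustration, an API-scope policy definition that runs the inherited (parent) policies before its own `set-header` policy might look like the following sketch; the header name and value are placeholders:

```xml
<policies>
    <inbound>
        <!-- Evaluate product- and global-scope inbound policies first -->
        <base />
        <set-header name="x-example-scope" exists-action="override">
            <value>api</value>
        </set-header>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```

Placing `base` after `set-header` would instead run the API-scope policy before the inherited policies.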
To modify the policy evaluation order using the policy editor:
api-management Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validation-policies.md
documentationcenter: ''
Previously updated : 03/07/2022 Last updated : 06/07/2022 # API Management policies to validate requests and responses
-This article provides a reference for API Management policies to validate REST or SOAP API requests and responses against schemas defined in the API definition or supplementary JSON or XML schemas. Validation policies protect from vulnerabilities such as injection of headers or payload or leaking sensitive data.
+This article provides a reference for API Management policies to validate REST or SOAP API requests and responses against schemas defined in the API definition or supplementary JSON or XML schemas. Validation policies protect from vulnerabilities such as injection of headers or payload or leaking sensitive data. Learn more about common [API vulnerabilities](mitigate-owasp-api-threats.md).
-While not a replacement for a Web Application Firewall, validation policies provide flexibility to respond to an additional class of threats that arenΓÇÖt covered by security products that rely on static, predefined rules.
+While not a replacement for a Web Application Firewall, validation policies provide flexibility to respond to an additional class of threats that aren't covered by security products that rely on static, predefined rules.
[!INCLUDE [api-management-policy-intro-links](../../includes/api-management-policy-intro-links.md)]
The `validate-content` policy validates the size or content of a request or resp
[!INCLUDE [api-management-policy-form-alert](../../includes/api-management-policy-form-alert.md)]
-The following table shows the schema formats and request or response content types that the policy supports. Content type values are case insensitive.
+The following table shows the schema formats and request or response content types that the policy supports. Content type values are case insensitive.
| Format | Content types | |||
The following table shows the schema formats and request or response content typ
|XML | Example: `application/xml` | |SOAP | Allowed values: `application/soap+xml` for SOAP 1.2 APIs<br/>`text/xml` for SOAP 1.1 APIs|
+### What content is validated
+
+The policy validates the following content in the request or response against the schema:
+
+* Presence of all required properties.
+* Absence of additional properties, if the schema has the `additionalProperties` field set to `false`.
+* Types of all properties. For example, if a schema specifies a property as an integer, the request (or response) must include an integer and not another type, such as a string.
+* The format of the properties, if specified in the schema - for example, regex (if the `pattern` keyword is specified), `minimum` for integers, and so on.
+
+> [!TIP]
+> For examples of regex pattern constraints that can be used in schemas, see [OWASP Validation Regex Repository](https://owasp.org/www-community/OWASP_Validation_Regex_Repository).
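For example, a hypothetical JSON schema fragment that exercises these checks - required properties, no additional properties, typed values, and a regex pattern - might look like the following:

```json
{
  "type": "object",
  "required": [ "id", "email" ],
  "additionalProperties": false,
  "properties": {
    "id": { "type": "integer", "minimum": 1 },
    "email": { "type": "string", "pattern": "^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$" }
  }
}
```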
+ ### Policy statement ```xml
After the schema is created, it appears in the list on the **Schemas** page. Sel
> * A schema may cross-reference another schema that is added to the API Management instance. > * Open-source tools to resolve WSDL and XSD schema references and to batch-import generated schemas to API Management are available on [GitHub](https://github.com/Azure-Samples/api-management-schema-import). - ### Usage This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
| Layout | ✓ | ✓ | ✓ | ✓ | ✓ | **Supported paragraph roles**:
-The paragraph roles are best used with unstructured documents, structured documents and forms. Roles help analyze the structure of the extracted content for better semantic search and analysis.
+The paragraph roles are best used with unstructured documents. Paragraph roles help analyze the structure of the extracted content for better semantic search and analysis.
* title * sectionHeading
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
A composed model is created by taking a collection of custom models and assignin
## Model data extraction
- | **Model ID** | **Text extraction** | **Selection Marks** | **Tables** | **Paragraphs** | **Key-Value pairs** | **Fields** |**Entities** |
- |:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-|🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | | | ✓ | | | |
-|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | ✓ | | ✓ | | ✓ | |
-|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | ✓ | ✓ | ✓ | ✓ | | ✓ |
-| [prebuilt-layout](concept-layout.md#data-extraction) | Γ£ô | Γ£ô | Γ£ô | Γ£ô | | | |
-| [prebuilt-invoice](concept-invoice.md#field-extraction) | Γ£ô | Γ£ô | Γ£ô | Γ£ô | Γ£ô | Γ£ô | |
-| [prebuilt-receipt](concept-receipt.md#field-extraction) | Γ£ô | | | Γ£ô | | Γ£ô | |
-| [prebuilt-idDocument](concept-id-document.md#field-extraction) | Γ£ô | | | Γ£ô | | Γ£ô | |
-| [prebuilt-businessCard](concept-business-card.md#field-extraction) | Γ£ô | | | Γ£ô | | Γ£ô | |
-| [Custom](concept-custom.md#compare-model-features) | Γ£ô | Γ£ô | Γ£ô | Γ£ô | | Γ£ô | |
+ | **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** | **Key-Value pairs** | **Fields** |
+ |:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+|🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | |
+|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
+|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | | ✓ | ✓ | ✓ | | ✓ | |
| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
| [prebuilt-idDocument](concept-id-document.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
| [prebuilt-businessCard](concept-business-card.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
| [Custom](concept-custom.md#compare-model-features) | ✓ | | ✓ | ✓ | ✓ | | | ✓ |
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Additionally, the Read API supports Microsoft Word (DOCX), Excel (XLS), PowerPoint (PPT), and HTML files.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed). * The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier. * Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
* The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
applied-ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/encrypt-data-at-rest.md
Last updated 08/28/2020
-#Customer intent: As a user of the Form Recognizer service, I want to learn how encryption at rest works.
+ # Form Recognizer encryption of data at rest
Azure Form Recognizer automatically encrypts your data when persisting it to the
## Next steps * [Form Recognizer Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
applied-ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/resource-customer-stories.md
Last updated 05/25/2022 + # Customer spotlight
applied-ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-azure-function.md
Last updated 03/19/2021 + # Tutorial: Use an Azure Function to process stored documents
In this tutorial, you learned how to use an Azure Function written in Python to
> [Microsoft Power BI](https://powerbi.microsoft.com/integrations/azure-table-storage/) * [What is Form Recognizer?](overview.md)
-* Learn more about the [Layout API](concept-layout.md)
+* Learn more about the [Layout API](concept-layout.md)
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
You can also use an [InlineScript](automation-powershell-workflow.md#use-inlines
Hybrid Runbook Workers on Azure virtual machines can use managed identities to authenticate to Azure resources. Using managed identities for Azure resources instead of Run As accounts provides benefits because you don't need to:
-* Export the Run As certificate and then import it into the Hybrid Runbook Worker.
-* Renew the certificate used by the Run As account.
-* Handle the Run As connection object in your runbook code.
+- Export the Run As certificate and then import it into the Hybrid Runbook Worker.
+- Renew the certificate used by the Run As account.
+- Handle the Run As connection object in your runbook code.
-Follow the next steps to use a managed identity for Azure resources on a Hybrid Runbook Worker:
+There are two ways to use managed identities in Hybrid Runbook Worker scripts.
-1. Create an Azure VM.
-1. Configure managed identities for Azure resources on the VM. See [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm).
-1. Give the VM access to a resource group in Resource Manager. Refer to [Use a Windows VM system-assigned managed identity to access Resource Manager](../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md#grant-your-vm-access-to-a-resource-group-in-resource-manager).
-1. Install the Hybrid Runbook Worker on the VM. See [Deploy a Windows Hybrid Runbook Worker](automation-windows-hrw-install.md) or [Deploy a Linux Hybrid Runbook Worker](automation-linux-hrw-install.md).
-1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
+1. Use the system-assigned Managed Identity for the Automation account:
+
+ 1. [Configure](/enable-managed-identity-for-automation.md#enable-a-system-assigned-managed-identity-for-an-azure-automation-account) a System-assigned Managed Identity for the Automation account.
+ 1. Grant this identity the [required permissions](/enable-managed-identity-for-automation.md#assign-role-to-a-system-assigned-managed-identity) within the Subscription to perform its task.
+ 1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
+
+ ```powershell
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ $AzureContext = (Connect-AzAccount -Identity).context
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+
+ # Get all VM names from the subscription
+ Get-AzVM -DefaultProfile $AzureContext | Select Name
+ ```
+ > [!NOTE]
+ > It is **not** possible to use the Automation account's user-assigned managed identity on a Hybrid Runbook Worker; it must be the Automation account's system-assigned managed identity.
+
+2. Use the VM managed identity for either an Azure VM or an Arc-enabled server running as a Hybrid Runbook Worker.
+ Here, you can use either the **VM's user-assigned managed identity** or the **VM's system-assigned managed identity**.
+
+ > [!NOTE]
+ > This will **not** work in an Automation account that has been configured with an Automation account managed identity. As soon as the Automation account managed identity is enabled, you can't use the VM managed identity. The only available option is to use the Automation account **system-assigned managed identity**, as mentioned in option 1.
+
+ **To use a VM's system-assigned managed identity**:
+
+ 1. [Configure](/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#enable-system-assigned-managed-identity-on-an-existing-vm) a System Managed Identity for the VM.
+ 1. Grant this identity the [required permissions](/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm#grant-your-vm-access-to-a-resource-group-in-resource-manager) within the subscription to perform its tasks.
+ 1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount?view=azps-8.0.0) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
```powershell
- # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave -Scope Process
-
- # Connect to Azure with system-assigned managed identity
- $AzureContext = (Connect-AzAccount -Identity).context
-
- # set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ $AzureContext = (Connect-AzAccount -Identity).context
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+
+ # Get all VM names from the subscription
+ Get-AzVM -DefaultProfile $AzureContext | Select Name
+ ```
+
+ **To use a VM's user-assigned managed identity**:
+ 1. [Configure](/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#user-assigned-managed-identity) a User Managed Identity for the VM.
+ 1. Grant this identity the [required permissions](/active-directory/managed-identities-azure-resources/howto-assign-access-portal) within the Subscription to perform its tasks.
+ 1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount?view=azps-8.0.0) cmdlet with the `Identity` and `AccountId` parameters to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
- # Get all VM names from the subscription
- Get-AzVM -DefaultProfile $AzureContext | Select Name
+ ```powershell
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with a user-assigned managed identity. Replace <ClientId> below with the client ID of the user-assigned managed identity
+ $AzureContext = (Connect-AzAccount -Identity -AccountId <ClientId>).context
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+
+ # Get all VM names from the subscription
+ Get-AzVM -DefaultProfile $AzureContext | Select Name
```
+ > [!NOTE]
+ > You can find the client ID of the user-assigned managed identity in the Azure portal.
+
+ > :::image type="content" source="./media/automation-hrw-run-runbooks/managed-identities-client-id-inline.png" alt-text="Screenshot of client ID in Managed Identities." lightbox="./media/automation-hrw-run-runbooks/managed-identities-client-id-expanded.png":::
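If you prefer the command line, the following sketch shows one way to retrieve the client ID with the Azure CLI; the identity and resource group names are placeholders you must replace:

```azurecli
# Look up the client ID of a user-assigned managed identity (placeholder names)
az identity show --name <identity-name> --resource-group <resource-group-name> --query clientId --output tsv
```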
- If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you run the runbook in an Azure sandbox instead of Hybrid Runbook Worker and you want to use a user-assigned managed identity, then:
- 1. From line 5, remove `$AzureContext = (Connect-AzAccount -Identity).context`,
- 1. Replace it with `$AzureContext = (Connect-AzAccount -Identity -AccountId <ClientId>).context`, and
- 1. Enter the Client ID.
>[!NOTE]
->By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has been authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence?view=azps-7.3.2).
+> By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has been authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence?view=azps-7.3.2).
For instance, a runbook with `Get-AzVM` can return all the VMs in the subscription with no call to `Connect-AzAccount`, and the user would be able to access Azure resources without having to authenticate within that runbook. You can disable context autosave in Azure PowerShell, as detailed [here](/powershell/azure/context-persistence?view=azps-7.3.2#save-azure-contexts-across-powershell-sessions). -
+
### Use runbook authentication with Hybrid Worker Credentials Instead of having your runbook provide its own authentication to local resources, you can specify Hybrid Worker Credentials for a Hybrid Runbook Worker group. To specify Hybrid Worker Credentials, you must define a [credential asset](./shared-resources/credentials.md) that has access to local resources. These resources include certificate stores, and all runbooks run under these credentials on a Hybrid Runbook Worker in the group.
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Users can now restore an Automation account deleted within 30 days. Read [here](
**Type:** New feature
-New scripts are added to the Azure Automation [GitHub repository](https://github.com/azureautomation) to address one of Azure Automation's key scenarios of VM management based on Azure Monitor alert. For more information, see [Trigger runbook from Azure alert](./automation-create-alert-triggered-runbook.md).
+New scripts are added to the Azure Automation [GitHub repository](https://github.com/azureautomation) to address one of Azure Automation's key scenarios of VM management based on Azure Monitor alert. For more information, see [Trigger runbook from Azure alert](./automation-create-alert-triggered-runbook.md#common-azure-vm-management-operations).
- Stop-Azure-VM-On-Alert - Restart-Azure-VM-On-Alert
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
The following steps describe how to assign the App Configuration Data Reader rol
> options.Connect(new Uri(settings["AppConfig:Endpoint"]), new ManagedIdentityCredential("<your_clientId>")) > }); >```
- >As explained in the [Managed Identities for Azure resources FAQs](../active-directory/managed-identities-azure-resources/known-issues.md), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library enforces you to specify the desired identity to avoid posible runtime issues in the future (for instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled). So, you will need to specify the clientId even if only one user-assigned managed identity is defined, and there is no system-assigned managed identity.
+ >As explained in the [Managed Identities for Azure resources FAQs](../active-directory/managed-identities-azure-resources/known-issues.md), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library requires you to specify the desired identity to avoid possible runtime issues in the future (for instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled). So, you will need to specify the clientId even if only one user-assigned managed identity is defined, and there is no system-assigned managed identity.
:::zone-end
In addition to App Service, many other Azure services support managed identities
In this tutorial, you added an Azure managed identity to streamline access to App Configuration and improve credential management for your app. To learn more about how to use App Configuration, continue to the Azure CLI samples. > [!div class="nextstepaction"]
-> [CLI samples](./cli-samples.md)
+> [CLI samples](./cli-samples.md)
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues"
# Previously updated : 03/09/2022 Last updated : 06/07/2022
-description: "Troubleshooting common issues with Azure Arc-enabled Kubernetes clusters and GitOps."
+description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps."
keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux" # Azure Arc-enabled Kubernetes and GitOps troubleshooting
-This document provides troubleshooting guides for issues with Azure Arc-enabled Kubernetes connectivity, permissions, and agents. It also provides troubleshooting guides for Azure GitOps, which can be used in either Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters.
+This document provides troubleshooting guides for issues with Azure Arc-enabled Kubernetes connectivity, permissions, and agents. It also provides troubleshooting guides for Azure GitOps, which can be used in either Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters.
## General troubleshooting
az account show
All agents for Azure Arc-enabled Kubernetes are deployed as pods in the `azure-arc` namespace. All pods should be running and passing their health checks.
-First, verify the Azure Arc helm release:
+First, verify the Azure Arc Helm Chart release:
```console $ helm --namespace default status azure-arc
REVISION: 5
TEST SUITE: None ```
-If the Helm release isn't found or missing, try [connecting the cluster to Azure Arc](./quickstart-connect-cluster.md) again.
+If the Helm Chart release isn't found or is missing, try [connecting the cluster to Azure Arc](./quickstart-connect-cluster.md) again.
-If the Helm release is present with `STATUS: deployed`, check the status of the agents using `kubectl`:
+If the Helm Chart release is present with `STATUS: deployed`, check the status of the agents using `kubectl`:
```console $ kubectl -n azure-arc get deployments,pods
pod/metrics-agent-58b765c8db-n5l7k 2/2 Running 0 16h
pod/resource-sync-agent-5cf85976c7-522p5 3/3 Running 0 16h ```
-All pods should show `STATUS` as `Running` with either `3/3` or `2/2` under the `READY` column. Fetch logs and describe the pods returning an `Error` or `CrashLoopBackOff`. If any pods are stuck in `Pending` state, there might be insufficient resources on cluster nodes. [Scale up your cluster](https://kubernetes.io/docs/tasks/administer-cluster/) can get these pods to transition to `Running` state.
+All pods should show `STATUS` as `Running` with either `3/3` or `2/2` under the `READY` column. Fetch logs and describe the pods returning an `Error` or `CrashLoopBackOff`. If any pods are stuck in `Pending` state, there might be insufficient resources on cluster nodes. [Scaling up your cluster](https://kubernetes.io/docs/tasks/administer-cluster/) can get these pods to transition to `Running` state.
## Connecting Kubernetes clusters to Azure Arc
-Connecting clusters to Azure requires both access to an Azure subscription and `cluster-admin` access to a target cluster. If you cannot reach the cluster or you have insufficient permissions, connecting the cluster to Azure Arc will fail.
+Connecting clusters to Azure Arc requires access to an Azure subscription and `cluster-admin` access to a target cluster. If you can't reach the cluster, or if you have insufficient permissions, connecting the cluster to Azure Arc will fail. Make sure you've met all of the [prerequisites to connect a cluster](quickstart-connect-cluster.md#prerequisites).
### Azure CLI is unable to download Helm chart for Azure Arc agents
-If you are using Helm version >= 3.7.0, you will run into the following error when `az connectedk8s connect` is run to connect the cluster to Azure Arc:
+With Helm version >= 3.7.0, you may run into the following error when using `az connectedk8s connect` to connect the cluster to Azure Arc:
```azurecli az connectedk8s connect -n AzureArcTest -g AzureArcTest
Unable to pull helm chart from the registry 'mcr.microsoft.com/azurearck8s/batch
Run 'helm --help' for usage. ```
-In this case, you'll need to install a prior version of [Helm 3](https://helm.sh/docs/intro/install/), where version &lt; 3.7.0. After this, run the `az connectedk8s connect` command again to connect the cluster to Azure Arc.
+To resolve this issue, you'll need to install a prior version of [Helm 3](https://helm.sh/docs/intro/install/), where the version is less than 3.7.0. After you've installed that version, run the `az connectedk8s connect` command again to connect the cluster to Azure Arc.
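For example, one way to pin an earlier release is the `--version` flag of the official Helm install script; this is only a sketch, and the version shown (v3.6.3) is an illustrative choice of a pre-3.7.0 release:

```bash
# Download the official Helm 3 install script and pin a release earlier than 3.7.0
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh --version v3.6.3

# Confirm the installed version before reconnecting the cluster
helm version --short
```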
### Insufficient cluster permissions
-If the provided kubeconfig file does not have sufficient permissions to install the Azure Arc agents, the Azure CLI command will return an error.
+If the provided kubeconfig file doesn't have sufficient permissions to install the Azure Arc agents, the Azure CLI command will return an error.
```azurecli az connectedk8s connect --resource-group AzureArc --name AzureArcCluster
This operation might take a while...
Error: list: failed to list: secrets is forbidden: User "myuser" cannot list resource "secrets" in API group "" at the cluster scope ```
-The user connecting the cluster to Azure Arc should have `cluster-admin` role assigned to them on the cluster.
+To resolve this issue, the user connecting the cluster to Azure Arc should have the `cluster-admin` role assigned to them on the cluster.
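As an illustration only (the subject name depends on how your cluster authenticates users), a binding that grants `cluster-admin` to a user can be created with `kubectl`:

```console
kubectl create clusterrolebinding arc-onboarding-admin --clusterrole cluster-admin --user myuser
```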
### Unable to connect OpenShift cluster to Azure Arc
-If `az connectedk8s connect` is timing out and failing when connecting an OpenShift cluster to Azure Arc, check the following:
+If `az connectedk8s connect` is timing out and failing when connecting an OpenShift cluster to Azure Arc:
-1. The OpenShift cluster needs to meet the version prerequisites: 4.5.41+ or 4.6.35+ or 4.7.18+.
+1. Ensure that the OpenShift cluster meets the version prerequisites: 4.5.41+ or 4.6.35+ or 4.7.18+.
-1. Before running `az connectedk8s connnect`, the following command needs to be run on the cluster:
+1. Before you run `az connectedk8s connect`, run this command on the cluster:
```console oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa
az connectedk8s connect --resource-group AzureArc --name AzureArcCluster
Ensure that you have the latest helm version installed before proceeding to avoid unexpected errors. This operation might take a while... ```+ ### Helm timeout error
+You may see the following Helm timeout error:
+ ```azurecli az connectedk8s connect -n AzureArcTest -g AzureArcTest ```
az connectedk8s connect -n AzureArcTest -g AzureArcTest
Unable to install helm release: Error: UPGRADE Failed: time out waiting for the condition ```
-If you get the above helm timeout issue, you can troubleshoot as follows:
-
- 1. Run the following command:
-
- ```console
- kubectl get pods -n azure-arc
- ```
- 2. Check if the `clusterconnect-agent` or the `config-agent` pods are showing crashloopbackoff, or not all containers are running:
-
- ```output
- NAME READY STATUS RESTARTS AGE
- cluster-metadata-operator-664bc5f4d-chgkl 2/2 Running 0 4m14s
- clusterconnect-agent-7cb8b565c7-wklsh 2/3 CrashLoopBackOff 0 1m15s
- clusteridentityoperator-76d645d8bf-5qx5c 2/2 Running 0 4m15s
- config-agent-65d5df564f-lffqm 1/2 CrashLoopBackOff 0 1m14s
- ```
- 3. If the below certificate isn't present, the system assigned managed identity didn't get installed.
-
- ```console
- kubectl get secret -n azure-arc -o yaml | grep name:
- ```
-
- ```output
- name: azure-identity-certificate
- ```
- This could be a transient issue. You can try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If you're consistently facing this, it could be an issue with your proxy settings. Please follow [these steps](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) to connect your cluster to Arc via a proxy.
- 4. If the `clusterconnect-agent` and the `config-agent` pods are running, but the `kube-aad-proxy` pod is missing, check your pod security policies. This pod uses the `azure-arc-kube-aad-proxy-sa` service account, which doesn't have admin permissions but requires the permission to mount host path.
-
+To resolve this issue, try the following steps.
+
+1. Run the following command:
+
+ ```console
+ kubectl get pods -n azure-arc
+ ```
+
+2. Check if the `clusterconnect-agent` or the `config-agent` pods are showing `crashloopbackoff`, or if not all containers are running:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ cluster-metadata-operator-664bc5f4d-chgkl 2/2 Running 0 4m14s
+ clusterconnect-agent-7cb8b565c7-wklsh 2/3 CrashLoopBackOff 0 1m15s
+ clusteridentityoperator-76d645d8bf-5qx5c 2/2 Running 0 4m15s
+ config-agent-65d5df564f-lffqm 1/2 CrashLoopBackOff 0 1m14s
+ ```
+
+3. If the certificate below isn't present, the system-assigned managed identity hasn't been installed.
+
+ ```console
+ kubectl get secret -n azure-arc -o yaml | grep name:
+ ```
+
+ ```output
+ name: azure-identity-certificate
+ ```
+
+   To resolve this issue, try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If the issue continues to happen, it could be an issue with your proxy settings. In that case, [connect your cluster to Azure Arc via an outbound proxy server](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server); a sketch of the proxy parameters appears after this list.
+
+4. If the `clusterconnect-agent` and the `config-agent` pods are running, but the `kube-aad-proxy` pod is missing, check your pod security policies. This pod uses the `azure-arc-kube-aad-proxy-sa` service account, which doesn't have admin permissions but requires the permission to mount host path.
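For reference, the following sketch shows the shape of a proxied connection; the proxy endpoints and excluded ranges are placeholders, and your environment may need different values:

```azurecli
az connectedk8s connect --name AzureArcTest --resource-group AzureArcTest \
    --proxy-https https://<proxy-server-ip-or-name>:<port> \
    --proxy-http http://<proxy-server-ip-or-name>:<port> \
    --proxy-skip-range <excludedIP>,<excludedCIDR>,kubernetes.default.svc,.svc.cluster.local,.svc
```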
### Helm validation error
-Helm `v3.3.0-rc.1` version has an [issue](https://github.com/helm/helm/pull/8527) where helm install/upgrade (used by `connectedk8s` CLI extension) results in running of all hooks leading to the following error:
+Helm `v3.3.0-rc.1` version has an [issue](https://github.com/helm/helm/pull/8527) where helm install/upgrade (used by the `connectedk8s` CLI extension) results in running of all hooks leading to the following error:
```azurecli az connectedk8s connect -n AzureArcTest -g AzureArcTest
To recover from this issue, follow these steps:
1. Delete the Azure Arc-enabled Kubernetes resource in the Azure portal. 2. Run the following commands on your machine:
-
- ```console
- kubectl delete ns azure-arc
- kubectl delete clusterrolebinding azure-arc-operator
- kubectl delete secret sh.helm.release.v1.azure-arc.v1
- ```
+
+ ```console
+ kubectl delete ns azure-arc
+ kubectl delete clusterrolebinding azure-arc-operator
+ kubectl delete secret sh.helm.release.v1.azure-arc.v1
+ ```
3. [Install a stable version](https://helm.sh/docs/intro/install/) of Helm 3 on your machine instead of the release candidate version. 4. Run the `az connectedk8s connect` command with the appropriate values to connect the cluster to Azure Arc.
az extension add --name k8s-configuration
### Flux v1 - General
+> [!NOTE]
+> Eventually Azure will stop supporting GitOps with Flux v1, so begin using [Flux v2](./tutorial-use-gitops-flux2.md) as soon as possible.
+ To help troubleshoot issues with `sourceControlConfigurations` resource (Flux v1), run these az commands with `--debug` parameter specified: ```azurecli
For more information, see [How do I resolve `webhook does not support dry run` e
### Flux v2 - Error installing the `microsoft.flux` extension
-The `microsoft.flux` extension installs the Flux controllers and Azure GitOps agents into your Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. If the extension is not already installed in a cluster and you create a GitOps configuration resource for that cluster, the extension will be installed automatically.
+The `microsoft.flux` extension installs the Flux controllers and Azure GitOps agents into your Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. If the extension isn't already installed in a cluster and you create a GitOps configuration resource for that cluster, the extension will be installed automatically.
-If you experience an error during installation or if the extension is in a failed state, you can first run a script to investigate. The cluster-type parameter can be set to `connectedClusters` for an Arc-enabled cluster or `managedClusters` for an AKS cluster. The name of the `microsoft.flux` extension will be "flux" if the extension was installed automatically during creation of a GitOps configuration. Look in the "statuses" object for information.
+If you experience an error during installation, or if the extension is in a failed state, run a script to investigate. The cluster-type parameter can be set to `connectedClusters` for an Arc-enabled cluster or `managedClusters` for an AKS cluster. The name of the `microsoft.flux` extension will be "flux" if the extension was installed automatically during creation of a GitOps configuration. Look in the "statuses" object for information.
One example:
kubectl delete namespaces flux-system
``` Some other aspects to consider:
-
-* For AKS cluster, assure that the subscription has the following feature flag enabled: `Microsoft.ContainerService/AKS-ExtensionManager`.
+
+* For an AKS cluster, ensure that the subscription has the `Microsoft.ContainerService/AKS-ExtensionManager` feature flag enabled. (A quick way to verify the registration is shown after this list.)
```azurecli az feature register --namespace Microsoft.ContainerService --name AKS-ExtensionManager ```
-* Assure that the cluster does not have any policies that restrict creation of the `flux-system` namespace or resources in that namespace.
+* Ensure that the cluster doesn't have any policies that restrict creation of the `flux-system` namespace or resources in that namespace.
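To verify the feature flag registration from the first bullet, a sketch like the following may help; registration can take several minutes to complete:

```azurecli
# Check whether the AKS-ExtensionManager feature flag has finished registering
az feature show --namespace Microsoft.ContainerService --name AKS-ExtensionManager --query properties.state

# Once the state is "Registered", propagate it to the resource provider
az provider register --namespace Microsoft.ContainerService
```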
-With these actions accomplished you can either [re-create a flux configuration](./tutorial-use-gitops-flux2.md) which will install the flux extension automatically or you can re-install the flux extension manually.
+With these actions accomplished, you can either [recreate a flux configuration](./tutorial-use-gitops-flux2.md), which will install the flux extension automatically, or you can reinstall the flux extension manually.
### Flux v2 - Installing the `microsoft.flux` extension in a cluster with Azure AD Pod Identity enabled
The extension status also returns as "Failed".
"{\"status\":\"Failed\",\"error\":{\"code\":\"ResourceOperationFailure\",\"message\":\"The resource operation completed with terminal provisioning state 'Failed'.\",\"details\":[{\"code\":\"ExtensionCreationFailed\",\"message\":\" error: Unable to get the status from the local CRD with the error : {Error : Retry for given duration didn't get any results with err {status not populated}}\"}]}}", ```
-The issue is that the extension-agent pod is trying to get its token from IMDS on the cluster in order to talk to the extension service in Azure; however, this token request is being intercepted by pod identity ([details here](../../aks/use-azure-ad-pod-identity.md)).
+The extension-agent pod is trying to get its token from IMDS on the cluster in order to talk to the extension service in Azure, but the token request is intercepted by [pod identity](../../aks/use-azure-ad-pod-identity.md).
The workaround is to create an `AzurePodIdentityException` that will tell Azure AD Pod Identity to ignore the token requests from flux-extension pods.
spec:
## Monitoring
-Azure Monitor for containers requires its DaemonSet to be run in privileged mode. To successfully set up a Canonical Charmed Kubernetes cluster for monitoring, run the following command:
+Azure Monitor for Containers requires its DaemonSet to run in privileged mode. To successfully set up a Canonical Charmed Kubernetes cluster for monitoring, run the following command:
```console juju config kubernetes-worker allow-privileged=true
juju config kubernetes-worker allow-privileged=true
### Old version of agents used
-Usage of older version of agents where Cluster Connect feature was not yet supported will result in the following error:
+Some older agent versions didn't support the Cluster Connect feature. If you use one of these versions, you may see this error:
```azurecli az connectedk8s proxy -n AzureArcTest -g AzureArcTest
az connectedk8s proxy -n AzureArcTest -g AzureArcTest
Hybrid connection for the target resource does not exist. Agent might not have started successfully. ```
-When this occurs, ensure that you are using `connectedk8s` Azure CLI extension of version >= 1.2.0 and [connect your cluster again](quickstart-connect-cluster.md) to Azure Arc. Also, verify that you've met all the [network prerequisites](quickstart-connect-cluster.md#meet-network-requirements) needed for Arc-enabled Kubernetes. If your cluster is behind an outbound proxy or firewall, verify that websocket connections are enabled for `*.servicebus.windows.net` which is required specifically for the [Cluster Connect](cluster-connect.md) feature.
+Be sure to use the `connectedk8s` Azure CLI extension with version >= 1.2.0, then [connect your cluster again](quickstart-connect-cluster.md) to Azure Arc. Also, verify that you've met all the [network prerequisites](quickstart-connect-cluster.md#meet-network-requirements) needed for Arc-enabled Kubernetes.
+
+If your cluster is behind an outbound proxy or firewall, verify that websocket connections are enabled for `*.servicebus.windows.net`, which is required specifically for the [Cluster Connect](cluster-connect.md) feature.
### Cluster Connect feature disabled
To resolve this error, [enable the Cluster Connect feature](cluster-connect.md#e
## Enable custom locations using service principal
-When you are connecting your cluster to Azure Arc or when you are enabling custom locations feature on an existing cluster, you may observe the following warning:
+When connecting your cluster to Azure Arc or enabling custom locations on an existing cluster, you may see the following warning:
```console Unable to fetch oid of 'custom-locations' app. Proceeding without enabling the feature. Insufficient privileges to complete the operation. ```
-The above warning is observed when you have used a service principal to log into Azure. This is because a service principal doesn't have permissions to get information of the application used by Azure Arc service. To avoid this error, execute the following steps:
+This warning occurs when you use a service principal to sign in to Azure. The service principal doesn't have permissions to get information about the application used by the Azure Arc service. To avoid this error, follow these steps:
-1. Login into Azure CLI using your user account. Fetch the Object ID of the Azure AD application used by Azure Arc service:
+1. Sign in to the Azure CLI using your user account. Fetch the Object ID of the Azure AD application used by the Azure Arc service:
```azurecli az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv ```
-1. Login into Azure CLI using the service principal. Use the `<objectId>` value from above step to enable custom locations feature on the cluster:
- - If you are enabling custom locations feature as part of connecting the cluster to Arc, run the following command:
+1. Sign in to the Azure CLI using the service principal. Use the `<objectId>` value from the previous step to enable custom locations on the cluster:
- ```azurecli
- az connectedk8s connect -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId>
- ```
+ * To enable custom locations when connecting the cluster to Arc, run the following command:
- - If you are enabling custom locations feature on an existing Azure Arc-enabled Kubernetes cluster, run the following command:
+ ```azurecli
+ az connectedk8s connect -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId>
+ ```
+
+ * To enable custom locations on an existing Azure Arc-enabled Kubernetes cluster, run the following command:
- ```azurecli
- az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations
- ```
+ ```azurecli
+ az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations
+ ```
## Azure Arc-enabled Open Service Mesh
-The following troubleshooting steps provide guidance on validating the deployment of all the Open Service Mesh extension components on your cluster.
+The steps below provide guidance on validating the deployment of all the Open Service Mesh (OSM) extension components on your cluster.
### Check OSM Controller **Deployment**+ ```bash kubectl get deployment -n arc-osm-system --selector app=osm-controller ```
-If the OSM Controller is healthy, you will get an output similar to the following output:
-```
+If the OSM Controller is healthy, you'll see output similar to the following:
+
+```output
NAME READY UP-TO-DATE AVAILABLE AGE osm-controller 1/1 1 1 59m ``` ### Check the OSM Controller **Pod**+ ```bash kubectl get pods -n arc-osm-system --selector app=osm-controller ```
-If the OSM Controller is healthy, you will get an output similar to the following output:
-```
+If the OSM Controller is healthy, you'll see output similar to the following:
+
+```output
NAME READY STATUS RESTARTS AGE osm-controller-b5bd66db-wglzl 0/1 Evicted 0 61m osm-controller-b5bd66db-wvl9w 1/1 Running 0 31m ```
-Even though we had one controller _evicted_ at some point, we have another one which is `READY 1/1` and `Running` with `0` restarts.
-If the column `READY` is anything other than `1/1` the service mesh would be in a broken state.
-Column `READY` with `0/1` indicates the control plane container is crashing - we need to get logs. Use the following command to inspect controller logs:
+Even though one controller was _evicted_ at some point, there's another which is `READY 1/1` and `Running` with `0` restarts. If the column `READY` is anything other than `1/1`, the service mesh would be in a broken state. Column `READY` with `0/1` indicates the control plane container is crashing. Use the following command to inspect controller logs:
+ ```bash kubectl logs -n arc-osm-system -l app=osm-controller ```+ Column `READY` with a number higher than 1 after the `/` would indicate that there are sidecars installed. OSM Controller would most likely not work with any sidecars attached to it. ### Check OSM Controller **Service**+ ```bash kubectl get service -n arc-osm-system osm-controller ```
-If the OSM Controller is healthy, you will have the following output:
-```
+If the OSM Controller is healthy, you'll see the following output:
+
+```output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE osm-controller ClusterIP 10.0.31.254 <none> 15128/TCP,9092/TCP 67m ```
osm-controller ClusterIP 10.0.31.254 <none> 15128/TCP,9092/TCP 67
> The `CLUSTER-IP` would be different. The service `NAME` and `PORT(S)` must be the same as seen in the output. ### Check OSM Controller **Endpoints**+ ```bash kubectl get endpoints -n arc-osm-system osm-controller ```
-If the OSM Controller is healthy, you will get an output similar to the following output:
-```
+If the OSM Controller is healthy, you'll see output similar to the following:
+
+```output
NAME ENDPOINTS AGE osm-controller 10.240.1.115:9092,10.240.1.115:15128 69m ```
-If the user's cluster has no `ENDPOINTS` for `osm-controller` this would indicate that the control plane is unhealthy. This may be caused by the OSM Controller pod crashing, or never deployed correctly.
+If the user's cluster has no `ENDPOINTS` for `osm-controller`, the control plane is unhealthy. This unhealthy state may be caused by the OSM Controller pod crashing, or the pod may never have been deployed correctly.
### Check OSM Injector **Deployment**+ ```bash kubectl get deployments -n arc-osm-system osm-injector ```
-If the OSM Injector is healthy, you will get an output similar to the following output:
-```
+If the OSM Injector is healthy, you'll see output similar to the following:
+
+```output
NAME READY UP-TO-DATE AVAILABLE AGE osm-injector 1/1 1 1 73m ``` ### Check OSM Injector **Pod**+ ```bash kubectl get pod -n arc-osm-system --selector app=osm-injector ```
-If the OSM Injector is healthy, you will get an output similar to the following output:
-```
+If the OSM Injector is healthy, you'll see output similar to the following:
+
+```output
NAME READY STATUS RESTARTS AGE osm-injector-5986c57765-vlsdk 1/1 Running 0 73m ```
osm-injector-5986c57765-vlsdk 1/1 Running 0 73m
The `READY` column must be `1/1`. Any other value would indicate an unhealthy osm-injector pod. ### Check OSM Injector **Service**+ ```bash kubectl get service -n arc-osm-system osm-injector ```
-If the OSM Injector is healthy, you will get an output similar to the following output:
-```
+If the OSM Injector is healthy, you'll see output similar to the following:
+
+```output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE osm-injector ClusterIP 10.0.39.54 <none> 9090/TCP 75m ```
osm-injector ClusterIP 10.0.39.54 <none> 9090/TCP 75m
Ensure the port listed for the `osm-injector` service is `9090`. There should be no `EXTERNAL-IP`. ### Check OSM Injector **Endpoints**+ ```bash kubectl get endpoints -n arc-osm-system osm-injector ```
-If the OSM Injector is healthy, you will get an output similar to the following output:
+If the OSM Injector is healthy, you'll see output similar to the following:
+ ``` NAME ENDPOINTS AGE osm-injector 10.240.1.172:9090 75m
osm-injector 10.240.1.172:9090 75m
For OSM to function, there must be at least one endpoint for `osm-injector`. The IP address of your OSM Injector endpoints will be different. The port `9090` must be the same. - ### Check **Validating** and **Mutating** webhooks+ ```bash kubectl get ValidatingWebhookConfiguration --selector app=osm-controller ```
-If the Validating Webhook is healthy, you will get an output similar to the following output:
-```
+If the **Validating** webhook is healthy, you'll see output similar to the following:
+
+```output
NAME WEBHOOKS AGE osm-validator-mesh-osm 1 81m ```
osm-validator-mesh-osm 1 81m
kubectl get MutatingWebhookConfiguration --selector app=osm-injector ```
+If the **Mutating** webhook is healthy, you'll see output similar to the following:
-If the Mutating Webhook is healthy, you will get an output similar to the following output:
-```
+```output
NAME WEBHOOKS AGE arc-osm-webhook-osm 1 102m ```
-Check for the service and the CA bundle of the **Validating** webhook
+Check for the service and the CA bundle of the **Validating** webhook by using the following command:
+ ```bash kubectl get ValidatingWebhookConfiguration osm-validator-mesh-osm -o json | jq '.webhooks[0].clientConfig.service' ```
-A well configured Validating Webhook Configuration would have the following output:
+A well configured **Validating** webhook configuration will have output similar to the following:
+ ```json { "name": "osm-config-validator",
A well configured Validating Webhook Configuration would have the following outp
} ```
-Check for the service and the CA bundle of the **Mutating** webhook
+Check for the service and the CA bundle of the **Mutating** webhook by using the following command:
+ ```bash kubectl get MutatingWebhookConfiguration arc-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service' ```
-A well configured Mutating Webhook Configuration would have the following output:
+A well configured **Mutating** webhook configuration will have output similar to the following:
``` { "name": "osm-injector",
A well configured Mutating Webhook Configuration would have the following output
} ``` -
-Check whether OSM Controller has given the Validating (or Mutating) Webhook a CA Bundle by using the following command:
+Check whether OSM Controller has given the **Validating** (or **Mutating**) webhook a CA Bundle by using the following command:
```bash kubectl get ValidatingWebhookConfiguration osm-validator-mesh-osm -o json | jq -r '.webhooks[0].clientConfig.caBundle' | wc -c
kubectl get MutatingWebhookConfiguration arc-osm-webhook-osm -o json | jq -r '.w
``` Example output:+ ```bash 1845 ```
-The number in the output indicates the number of bytes, or the size of the CA Bundle. If this is empty, 0, or some number under a 1000, it would indicate that the CA Bundle is not correctly provisioned. Without a correct CA Bundle, the ValidatingWebhook would throw an error.
+
+The number in the output indicates the number of bytes, or the size of the CA Bundle. If this is empty, 0, or a number under 1000, the CA Bundle is not correctly provisioned. Without a correct CA Bundle, the `ValidatingWebhook` will throw an error.
### Check the `osm-mesh-config` resource
-Check for the existence:
+Check for the existence of the resource:
```azurecli-interactive kubectl get meshconfig osm-mesh-config -n arc-osm-system ```
-Check the content of the OSM MeshConfig
+Check the content of the OSM MeshConfig:
```azurecli-interactive kubectl get meshconfig osm-mesh-config -n arc-osm-system -o yaml
metadata:
| spec.featureFlags.enableIngressBackendPolicy | bool | `"true"` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableIngressBackendPolicy":"true"}}}' --type=merge` | | spec.featureFlags.enableEnvoyActiveHealthChecks | bool | `"false"` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableEnvoyActiveHealthChecks":"false"}}}' --type=merge` |
-### Check Namespaces
+### Check namespaces
>[!Note]
->The arc-osm-system namespace will never participate in a service mesh and will never be labeled and/or annotated with the key/values below.
+>The arc-osm-system namespace will never participate in a service mesh and will never be labeled or annotated with the key/values below.
-We use the `osm namespace add` command to join namespaces to a given service mesh.
-When a kubernetes namespace is part of the mesh, the following must be true:
+We use the `osm namespace add` command to join namespaces to a given service mesh. When a Kubernetes namespace is part of the mesh, confirm the following:
View the annotations of the namespace `bookbuyer`:+ ```bash kubectl get namespace bookbuyer -o json | jq '.metadata.annotations' ``` The following annotation must be present:+ ``` { "openservicemesh.io/sidecar-injection": "enabled" } ``` - View the labels of the namespace `bookbuyer`: ```bash kubectl get namespace bookbuyer -o json | jq '.metadata.labels' ``` The following label must be present:+ ``` { "openservicemesh.io/monitored-by": "osm" } ```
-Note that if you are not using `osm` CLI, you could also manually add these annotations to your namespaces. If a namespace is not annotated with `"openservicemesh.io/sidecar-injection": "enabled"` or not labeled with `"openservicemesh.io/monitored-by": "osm"` the OSM Injector will not add Envoy sidecars.
+
+If you aren't using the `osm` CLI, you can also add these annotations and labels to your namespaces manually, as shown in the sketch that follows. If a namespace isn't annotated with `"openservicemesh.io/sidecar-injection": "enabled"`, or isn't labeled with `"openservicemesh.io/monitored-by": "osm"`, the OSM Injector will not add Envoy sidecars.
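For example, a sketch of adding them manually with `kubectl`, using the `bookbuyer` namespace from the examples above:

```console
kubectl annotate namespace bookbuyer openservicemesh.io/sidecar-injection=enabled
kubectl label namespace bookbuyer openservicemesh.io/monitored-by=osm
```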
>[!Note] >After `osm namespace add` is called, only **new** pods will be injected with an Envoy sidecar. Existing pods must be restarted with `kubectl rollout restart deployment` command. - ### Verify the SMI CRDs
-Check whether the cluster has the required CRDs:
+
+Check whether the cluster has the required Custom Resource Definitions (CRDs) by using the following command:
+ ```bash kubectl get crds ```
-Ensure that the CRDs correspond to the versions available in the release branch. For example, if you are using OSM-Arc v1.0.0-1, navigate to the [SMI supported versions page](https://docs.openservicemesh.io/docs/overview/smi/) and select v1.0 from the Releases dropdown to check which CRDs versions are in use.
+Ensure that the CRDs correspond to the versions available in the release branch. For example, if you're using OSM-Arc v1.0.0-1, navigate to the [SMI supported versions page](https://docs.openservicemesh.io/docs/overview/smi/) and select v1.0 from the Releases dropdown to check which CRDs versions are in use.
Get the versions of the CRDs installed with the following command:+ ```bash for x in $(kubectl get crds --no-headers | awk '{print $1}' | grep 'smi-spec.io'); do kubectl get crd $x -o json | jq -r '(.metadata.name, "-" , .spec.versions[].name, "\n")' done ```
-If CRDs are missing, use the following commands to install them on the cluster. If you are using a version of OSM-Arc that is not v1.0, ensure that you replace the version in the command (ex: v1.1.0 would be release-v1.1).
+If CRDs are missing, use the following commands to install them on the cluster. If you're using a version of OSM-Arc that's not v1.0, ensure that you replace the version in the command (for example, v1.1.0 would be release-v1.1).
```bash kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_http_route_group.yaml
kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v
kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_traffic_split.yaml ```
-Refer to [OSM release notes](https://github.com/openservicemesh/osm/releases) to see CRD changes between releases.
+To see CRD changes between releases, refer to the [OSM release notes](https://github.com/openservicemesh/osm/releases).
### Troubleshoot certificate management
-Information on how OSM issues and manages certificates to Envoy proxies running on application pods can be found on the [OSM docs site](https://docs.openservicemesh.io/docs/guides/certificates/).
+
+For information on how OSM issues and manages certificates to Envoy proxies running on application pods, see the [OSM docs site](https://docs.openservicemesh.io/docs/guides/certificates/).
### Upgrade Envoy
-When a new pod is created in a namespace monitored by the add-on, OSM will inject an [Envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the envoy version needs to be updated, steps to do so can be found in the [Upgrade Guide](https://docs.openservicemesh.io/docs/guides/upgrade/#envoy) on the OSM docs site.
+
+When a new pod is created in a namespace monitored by the add-on, OSM will inject an [Envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the Envoy version needs to be updated, follow the steps in the [Upgrade Guide](https://docs.openservicemesh.io/docs/guides/upgrade/#envoy) on the OSM docs site.
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
Import an [application repository](./conceptual-gitops-ci-cd.md#application-repo
* **arc-cicd-demo-src** application repository * URL: https://github.com/Azure/arc-cicd-demo-src * Contains the example Azure Vote App that you will deploy using GitOps.
+ * Import the repository with the name `arc-cicd-demo-src`
* **arc-cicd-demo-gitops** GitOps repository * URL: https://github.com/Azure/arc-cicd-demo-gitops * Works as a base for your cluster resources that house the Azure Vote App.
+ * Import the repository with the name `arc-cicd-demo-gitops`
Learn more about [importing Git repositories](/azure/devops/repos/git/import-git-repository).
The CI/CD workflow will populate the manifest directory with extra manifests to
az k8s-configuration flux create \ --name cluster-config \ --cluster-name arc-cicd-cluster \
- --namespace cluster-config \
+ --namespace flux-system \
--resource-group myResourceGroup \
- -u https://dev.azure.com/<Your organization>/<Your project>/arc-cicd-demo-gitops \
+ -u https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \
--https-user <Azure Repos username> \ --https-key <Azure Repos PAT token> \ --scope cluster \
The CI/CD workflow will populate the manifest directory with extra manifests to
1. Check the state of the deployment in Azure portal. * If successful, you'll see both `dev` and `stage` namespaces created in your cluster.
+   * You can also check the `GitOps` tab on the Azure portal page of your Kubernetes cluster to confirm that a `cluster-config` configuration was created.
+ ### Import the CI/CD pipelines
The application repository contains a `.pipeline` folder with the pipelines you'
| Pipeline file name | Description | | - | - |
-| [`.pipelines/az-vote-pr-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-pr-pipeline.yaml) | The application PR pipeline, named **arc-cicd-demo-src PR** |
-| [`.pipelines/az-vote-ci-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) | The application CI pipeline, named **arc-cicd-demo-src CI** |
+| [`.pipelines/az-vote-pr-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-pr-pipeline.yaml) | The application PR pipeline, named **arc-cicd-demo-src PR** |
+| [`.pipelines/az-vote-ci-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-ci-pipeline.yaml) | The application CI pipeline, named **arc-cicd-demo-src CI** |
| [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) | The application CD pipeline, named **arc-cicd-demo-src CD** | ### Connect Azure Container Registry to Azure DevOps
CD pipeline manipulates PRs in the GitOps repository. It needs a Service Connect
--set gitOpsAppURL=https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \ --set orchestratorPAT=<Azure Repos PAT token> ```
+> [!NOTE]
+> The `Azure Repos PAT token` should have `Build: Read & execute` and `Code: Read` permissions.
+ 3. Configure Flux to send notifications to GitOps connector: ```console cat <<EOF | kubectl apply -f -
spec:
eventSeverity: info eventSources: - kind: GitRepository
- name: <Flux GitRepository to watch>
+ name: cluster-config
- kind: Kustomization
- name: <Flux Kustomization to watch>
+ name: cluster-config-cluster-config
providerRef: name: gitops-connector
For the details on installation, refer to the [GitOps Connector](https://github.
You're now ready to deploy to the `dev` and `stage` environments.
+#### Create environments
+
+In the Azure DevOps project, create `Dev` and `Stage` environments. For more information, see [Create and target an environment](/azure/devops/pipelines/process/environments).
+ ### Give more permissions to the build service The CD pipeline uses the security token of the running build to authenticate to the GitOps repository. More permissions are needed for the pipeline to create a new branch, push changes, and create pull requests. 1. Go to `Project settings` from the Azure DevOps project main page. 1. Select `Repos/Repositories`.
-1. Select `<GitOps Repo Name>`.
1. Select `Security`.
-1. For the `<Project Name> Build Service (<Organization Name>)`, allow `Contribute`, `Contribute to pull requests`, and `Create branch`.
+1. For the `<Project Name> Build Service (<Organization Name>)` and the `Project Collection Build Service (<Organization Name>)` (if either doesn't show up, type it in the search field), allow `Contribute`, `Contribute to pull requests`, and `Create branch`.
1. Go to `Pipelines/Settings`
-1. Switch off `Limit job authorization scope to referenced Azure DevOps repositories`
+1. Switch off the `Protect access to repositories in YAML pipelines` option
For more information, see: - [Grant VC Permissions to the Build Service](/azure/devops/pipelines/scripts/git-commands?preserve-view=true&tabs=yaml&view=azure-devops#version-control )
The CI/CD workflow will populate the manifest directory with extra manifests to
--set gitOpsAppURL=https://github.com/<Your organization>/arc-cicd-demo-gitops/commit \ --set orchestratorPAT=<GitHub PAT token> ```+ 3. Configure Flux to send notifications to GitOps connector: ```console cat <<EOF | kubectl apply -f -
spec:
eventSeverity: info eventSources: - kind: GitRepository
- name: <Flux GitRepository to watch>
+ name: cluster-config
- kind: Kustomization
- name: <Flux Kustomization to watch>
+ name: cluster-config-cluster-config
providerRef: name: gitops-connector
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
description: "This tutorial shows how to use GitOps with Flux v2 to manage confi
keywords: "GitOps, Flux, Flux v2, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 06/06/2022 Last updated : 06/08/2022
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
### For Azure Kubernetes Service clusters
-* An AKS cluster that's up and running.
+* An MSI-based AKS cluster that's up and running.
>[!IMPORTANT]
- >Ensure that the AKS cluster is created with MSI (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters.
+ >**Ensure that the AKS cluster is created with MSI** (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters.
+ >For new AKS clusters created with `az aks create`, the cluster is MSI-based by default. To convert an existing SPN-based cluster to MSI, run `az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identity`. For more information, refer to the [managed identity docs](../../aks/use-managed-identity.md).
* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type. * Registration of your subscription with the `AKS-ExtensionManager` feature flag. Use the following command:
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 03/14/2022 Last updated : 06/06/2022
Metadata information about a connected machine is collected after the Connected
* Hardware manufacturer * Hardware model * Cloud provider
-* Amazon Web Services (AWS) account ID, instance ID and region (if running in AWS)
+* Amazon Web Services (AWS) metadata, when running in AWS:
+ * Account ID
+ * Instance ID
+ * Region
+* Google Cloud Platform (GCP) metadata, when running in GCP:
+ * Instance ID
+ * Image
+ * Machine type
+ * OS
+ * Project ID
+ * Project number
+ * Service accounts
+ * Zone
The following metadata information is requested by the agent from Azure:
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 05/24/2022 Last updated : 06/06/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.14 - January 2022
+
+### Fixed
+
+- A state corruption issue in the extension manager that could cause extension operations to get stuck in transient states has been fixed. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+ ## Version 1.13 - November 2021 ### Known issues
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 05/24/2022 Last updated : 06/06/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.19 - June 2022
+
+### New features
+
+- When installed on a Google Compute Engine virtual machine, the agent will now detect and report Google Cloud metadata in the "detected properties" of the Azure Arc-enabled servers resource. [Learn more](agent-overview.md#instance-metadata) about the new metadata.
+
+### Fixed
+
+- An issue that could cause the extension manager to hang during extension installation, update, and removal operations has been resolved.
+- Improved support for TLS 1.3
+ ## Version 1.18 - May 2022 ### New features
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Extended the device login timeout to 5 minutes - Removed resource constraints for Azure Monitor Agent to support high throughput scenarios
-## Version 1.14 - January 2022
-
-### Fixed
--- A state corruption issue in the extension manager that could cause extension operations to get stuck in transient states has been fixed. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
The following image shows the architecture for the Arc-enabled SCVMM:
### Supported VMM versions
-Azure Arc-enabled SCVMM works with VMM 2016, 2019 and 2022 versions.
+Azure Arc-enabled SCVMM works with VMM versions 2016, 2019, and 2022, and supports SCVMM management servers with a maximum of 3500 VMs.
### Supported scenarios
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
This QuickStart shows you how to connect your SCVMM management server to Azure A
| | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. | | **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud that has at least one cluster with minimum free capacity of 16 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> For dynamic IP allocation to appliance VM, DHCP server is required. For static IP allocation, VMM static IP pool is required. |
-| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> This will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. |
+| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be a member of the local Administrators group on the SCVMM server. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. |
| **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you may experience performance issues. | ## Prepare SCVMM management server
azure-cache-for-redis Cache How To Premium Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
Virtual network support is configured on the **New Azure Cache for Redis** pane
1. On the **Networking** tab, select **Virtual Networks** as your connectivity method. To use a new virtual network, create it first by following the steps in [Create a virtual network using the Azure portal](../virtual-network/manage-virtual-network.md#create-a-virtual-network) or [Create a virtual network (classic) by using the Azure portal](/previous-versions/azure/virtual-network/virtual-networks-create-vnet-classic-pportal). Then return to the **New Azure Cache for Redis** pane to create and configure your Premium-tier cache. > [!IMPORTANT]
- > When you deploy Azure Cache for Redis to a Resource Manager virtual network, the cache must be in a dedicated subnet that contains no other resources except for Azure Cache for Redis instances. If you attempt to deploy an Azure Cache for Redis instance to a Resource Manager virtual network subnet that contains other resources, or has a NAT Gateway assigned, the deployment fails.
- >
- >
+ > When you deploy Azure Cache for Redis to a Resource Manager virtual network, the cache must be in a dedicated subnet that contains no other resources except for Azure Cache for Redis instances. If you attempt to deploy an Azure Cache for Redis instance to a Resource Manager virtual network subnet that contains other resources, or has a NAT Gateway assigned, the deployment fails. The failure is because Azure Cache for Redis uses a basic load balancer that is not compatible with a NAT Gateway.
| Setting | Suggested value | Description | | | - | -- |
After the port requirements are configured as described in the previous section,
- [Reboot](cache-administration.md#reboot) all of the cache nodes. The cache won't be able to restart successfully if all of the required cache dependencies can't be reached, as documented in [Inbound port requirements](cache-how-to-premium-vnet.md#inbound-port-requirements) and [Outbound port requirements](cache-how-to-premium-vnet.md#outbound-port-requirements). - After the cache nodes have restarted, as reported by the cache status in the Azure portal, you can do the following tests:
- - Ping the cache endpoint by using port 6380 from a machine that's within the same virtual network as the cache, using [tcping](https://www.elifulkerson.com/projects/tcping.php). For example:
+ - Ping the cache endpoint by using port 6380 from a machine that's within the same virtual network as the cache, using [`tcping`](https://www.elifulkerson.com/projects/tcping.php). For example:
`tcping.exe contosocache.redis.cache.windows.net 6380`
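The reboot step mentioned above can also be scripted. A minimal Azure CLI sketch, with placeholder cache and resource group names, might look like this:

```azurecli
# Reboot all nodes of the cache; names are placeholders.
az redis force-reboot \
  --name contosocache \
  --resource-group ContosoRG \
  --reboot-type AllNodes
```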
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
The `FunctionName` method attribute sets the name of the function, which by defa
1. In the `HttpTrigger` method named `Run`, rename the `FunctionName` method attribute to `HttpExample`.
-Your function definition should now look like the following code:
+Your function definition should now look like the following code, depending on whether your functions run in-process or in an isolated process:
+
+# [In-process](#tab/in-process)
:::code language="csharp" source="~/functions-docs-csharp/http-trigger-template/HttpExample.cs" range="15-18":::
+# [Isolated process](#tab/isolated-process)
++
+
+ Now that you've renamed the function, you can test it on your local computer. ## Run the function locally
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
The following table shows current supported Node.js versions for each major vers
| Functions version | Node version (Windows) | Node Version (Linux) | ||| |
-| 4.x (recommended) | `~16` (preview)<br/>`~14` (recommended) | `node|16` (preview)<br/>`node|14` (recommended) |
+| 4.x (recommended) | `~16`<br/>`~14` | `node|16`<br/>`node|14` |
| 3.x | `~14`<br/>`~12`<br/>`~10` | `node|14`<br/>`node|12`<br/>`node|10` | | 2.x | `~12`<br/>`~10`<br/>`~8` | `node|10`<br/>`node|8` | | 1.x | 6.11.2 (locked by the runtime) | n/a |
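To apply one of the versions from this table, you can set it from the Azure CLI. The sketch below is illustrative; the function app and resource group names are placeholders, and it assumes a Windows app uses the `WEBSITE_NODE_DEFAULT_VERSION` app setting while a Linux app uses the `linuxFxVersion` site setting:

```azurecli
# Windows plan: pin the Node.js version with an app setting (placeholder names).
az functionapp config appsettings set \
  --name contoso-func --resource-group ContosoRG \
  --settings "WEBSITE_NODE_DEFAULT_VERSION=~16"

# Linux plan: set the runtime in the site configuration.
az functionapp config set \
  --name contoso-func-linux --resource-group ContosoRG \
  --linux-fx-version "node|16"
```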
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md
Title: Deploy Start/Stop VMs v2 (preview)
-description: This article tells how to deploy the Start/Stop VMs v2 (preview) feature for your Azure VMs in your Azure subscription.
+ Title: Deploy Start/Stop VMs v2
+description: This article tells how to deploy the Start/Stop VMs v2 feature for your Azure VMs in your Azure subscription.
Previously updated : 06/25/2021 Last updated : 06/08/2022 ms.custom: subject-rbac-steps
-# Deploy Start/Stop VMs v2 (preview)
+# Deploy Start/Stop VMs v2
-Perform the steps in this topic in sequence to install the Start/Stop VMs v2 (preview) feature. After completing the setup process, configure the schedules to customize it to your requirements.
+Perform the steps in this topic in sequence to install the Start/Stop VMs v2 feature. After completing the setup process, configure the schedules to customize it to your requirements.
## Permissions considerations Please keep the following in mind before and during deployment:
Please keep the following in mind before and during deployment:
The deployment is initiated from the Start/Stop VMs v2 GitHub organization [here](https://github.com/microsoft/startstopv2-deployments/blob/main/README.md). While this feature is intended to manage all of your VMs in your subscription across all resource groups from a single deployment within the subscription, you can install another instance of it based on the operations model or requirements of your organization. It also can be configured to centrally manage VMs across multiple subscriptions.
-To simplify management and removal, we recommend you deploy Start/Stop VMs v2 (preview) to a dedicated resource group.
+To simplify management and removal, we recommend you deploy Start/Stop VMs v2 to a dedicated resource group.
> [!NOTE]
-> Currently this preview does not support specifying an existing Storage account or Application Insights resource.
+> Currently this solution does not support specifying an existing Storage account or Application Insights resource.
> [!NOTE]
To simplify management and removal, we recommend you deploy Start/Stop VMs v2 (p
## Enable multiple subscriptions
-After the Start/Stop deployment completes, perform the following steps to enable Start/Stop VMs v2 (preview) to take action across multiple subscriptions.
+After the Start/Stop deployment completes, perform the following steps to enable Start/Stop VMs v2 to take action across multiple subscriptions.
1. Copy the value for the Azure Function App name that you specified during the deployment.
In an environment that includes two or more components on multiple Azure Resourc
## Auto stop scenario
-Start/Stop VMs v2 (preview) can help manage the cost of running Azure Resource Manager and classic VMs in your subscription by evaluating machines that aren't used during non-peak periods, such as after hours, and automatically shutting them down if processor utilization is less than a specified percentage.
+Start/Stop VMs v2 can help manage the cost of running Azure Resource Manager and classic VMs in your subscription by evaluating machines that aren't used during non-peak periods, such as after hours, and automatically shutting them down if processor utilization is less than a specified percentage.
The following metric alert properties in the request body support customization:
To learn more about how Azure Monitor metric alerts work and how to configure th
## Next steps
-To learn how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 (preview) feature and perform other management tasks, see the [Manage Start/Stop VMs](manage.md) article.
+To learn how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 feature and perform other management tasks, see the [Manage Start/Stop VMs](manage.md) article.
azure-functions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/manage.md
Title: Manage Start/Stop VMs v2 (preview)
-description: This article tells how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 (preview) feature and perform other management tasks.
+ Title: Manage Start/Stop VMs v2
+description: This article tells how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 feature and perform other management tasks.
Previously updated : 06/25/2021 Last updated : 06/08/2022
-# How to manage Start/Stop VMs v2 (preview)
+# How to manage Start/Stop VMs v2
## Azure dashboard
-Start/Stop VMs v2 (preview) includes a [dashboard](../../azure-monitor/best-practices-analysis.md#azure-dashboards) to help you understand the management scope and recent operations against your VMs. It is a quick and easy way to verify the status of each operation thatΓÇÖs performed on your Azure VMs. The visualization in each tile is based on a Log query and to see the query, select the **Open in logs blade** option in the right-hand corner of the tile. This opens the [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md#starting-log-analytics) tool in the Azure portal, and from here you can evaluate the query and modify to support your needs, such as custom [log alerts](../../azure-monitor/alerts/alerts-log.md), a custom [workbook](../../azure-monitor/visualize/workbooks-overview.md), etc.
+Start/Stop VMs v2 includes a [dashboard](../../azure-monitor/best-practices-analysis.md#azure-dashboards) to help you understand the management scope and recent operations against your VMs. It is a quick and easy way to verify the status of each operation that's performed on your Azure VMs. The visualization in each tile is based on a log query. To see the query, select the **Open in logs blade** option in the right-hand corner of the tile. This opens the [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md#starting-log-analytics) tool in the Azure portal, where you can evaluate the query and modify it to support your needs, such as custom [log alerts](../../azure-monitor/alerts/alerts-log.md), a custom [workbook](../../azure-monitor/visualize/workbooks-overview.md), and so on.
The log data each tile in the dashboard displays is refreshed every hour, with a manual refresh option on demand by clicking the **Refresh** icon on a given visualization, or by refreshing the full dashboard.
To learn about working with a log-based dashboard, see the following [tutorial](
## Configure email notifications
-To change email notifications after Start/Stop VMs v2 (preview) is deployed, you can modify the action group created during deployment.
+To change email notifications after Start/Stop VMs v2 is deployed, you can modify the action group created during deployment.
1. In the Azure portal, navigate to **Monitor**, then **Alerts**. Select **Action groups**.
The following screenshot is an example email that is sent when the feature shuts
## Next steps
-To handle problems during VM management, see [Troubleshoot Start/Stop VMs v2](troubleshoot.md) (preview) issues.
+To handle problems during VM management, see [Troubleshoot Start/Stop VMs v2](troubleshoot.md) issues.
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md
Title: Start/Stop VMs v2 (preview) overview
-description: This article describes version two of the Start/Stop VMs (preview) feature, which starts or stops Azure Resource Manager and classic VMs on a schedule.
+ Title: Start/Stop VMs v2 overview
+description: This article describes version two of the Start/Stop VMs feature, which starts or stops Azure Resource Manager and classic VMs on a schedule.
Previously updated : 06/25/2021 Last updated : 06/08/2022
-# Start/Stop VMs v2 (preview) overview
+# Start/Stop VMs v2 overview
-The Start/Stop VMs v2 (preview) feature starts or stops Azure virtual machines (VMs) across multiple subscriptions. It starts or stops Azure VMs on user-defined schedules, provides insights through [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md), and send optional notifications by using [action groups](../../azure-monitor/alerts/action-groups.md). The feature can manage both Azure Resource Manager VMs and classic VMs for most scenarios.
+The Start/Stop VMs v2 feature starts or stops Azure virtual machines (VMs) across multiple subscriptions. It starts or stops Azure VMs on user-defined schedules, provides insights through [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md), and sends optional notifications by using [action groups](../../azure-monitor/alerts/action-groups.md). The feature can manage both Azure Resource Manager VMs and classic VMs for most scenarios.
-This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the [original version](../../automation/automation-solution-vm-management.md) available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
+This new version of Start/Stop VMs v2 provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the [original version](../../automation/automation-solution-vm-management.md) available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
> [!NOTE] > We've added a plan (**AZ - Availability Zone**) to our Start/Stop V2 solution to enable a high-availability offering. You can now choose between Consumption and Availability Zone plans before you start your deployment. In most cases, the monthly cost of the Availability Zone plan is higher when compared to the Consumption plan.
This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cos
## Overview
-Start/Stop VMs v2 (preview) is redesigned and it doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [previous version](../../automation/automation-solution-vm-management.md). This version relies on [Azure Functions](../../azure-functions/functions-overview.md) to handle the VM start and stop execution.
+Start/Stop VMs v2 is redesigned and it doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [previous version](../../automation/automation-solution-vm-management.md). This version relies on [Azure Functions](../../azure-functions/functions-overview.md) to handle the VM start and stop execution.
-A managed identity is created in Azure Active Directory (Azure AD) for this Azure Functions application and allows Start/Stop VMs v2 (preview) to easily access other Azure AD-protected resources, such as the logic apps and Azure VMs. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+A managed identity is created in Azure Active Directory (Azure AD) for this Azure Functions application and allows Start/Stop VMs v2 to easily access other Azure AD-protected resources, such as the logic apps and Azure VMs. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
An HTTP trigger endpoint function is created to support the schedule and sequence scenarios included with the feature, as shown in the following table.
The queue-based trigger functions are required in support of this feature. All t
Each Start/Stop action supports assignment of one or more subscriptions, resource groups, or a list of VMs.
-An Azure Storage account, which is required by Functions, is also used by Start/Stop VMs v2 (preview) for two purposes:
+An Azure Storage account, which is required by Functions, is also used by Start/Stop VMs v2 for two purposes:
- Uses Azure Table Storage to store the execution operation metadata (that is, the start/stop VM action).
Email notifications are also sent as a result of the actions performed on the VM
## New releases
-When a new version of Start/Stop VMs v2 (preview) is released, your instance is auto-updated without having to manually redeploy.
+When a new version of Start/Stop VMs v2 is released, your instance is auto-updated without having to manually redeploy.
## Supported scoping options
Specifying a list of VMs can be used when you need to perform the start and stop
- Your account has been granted the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) permission in the subscription. -- Start/Stop VMs v2 (preview) is available in all Azure global and US Government cloud regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=functions) page for Azure Functions.
+- Start/Stop VMs v2 is available in all Azure global and US Government cloud regions that are listed on the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=functions) page for Azure Functions.
## Next steps
-To deploy this feature, see [Deploy Start/Stop VMs](deploy.md) (preview).
+To deploy this feature, see [Deploy Start/Stop VMs](deploy.md).
azure-functions Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/remove.md
Title: Remove Start/Stop VMs v2 (preview) overview
-description: This article describes how to remove the Start/Stop VMs v2 (preview) feature.
+ Title: Remove Start/Stop VMs v2 overview
+description: This article describes how to remove the Start/Stop VMs v2 feature.
Previously updated : 06/25/2021 Last updated : 06/08/2022
-# How to remove Start/Stop VMs v2 (preview)
+# How to remove Start/Stop VMs v2
-After you enable the Start/Stop VMs v2 (preview) feature to manage the running state of your Azure VMs, you may decide to stop using it. Removing this feature can be done by deleting the resource group dedicated to store the following resources in support of Start/Stop VMs v2 (preview):
+After you enable the Start/Stop VMs v2 feature to manage the running state of your Azure VMs, you may decide to stop using it. You can remove the feature by deleting the resource group dedicated to storing the following resources in support of Start/Stop VMs v2:
- The Azure Functions applications - Schedules in Azure Logic Apps
After you enable the Start/Stop VMs v2 (preview) feature to manage the running s
- Azure Storage account > [!NOTE]
-> If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2 (preview), or if you have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this preview version.
+> If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2, or if you have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this version.
## Delete the dedicated resource group
To delete the resource group, follow the steps outlined in the [Azure Resource M
## Next steps
-To re-deploy this feature, see [Deploy Start/Stop v2](deploy.md) (preview).
+To re-deploy this feature, see [Deploy Start/Stop v2](deploy.md).
azure-functions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/troubleshoot.md
Title: Troubleshoot Start/Stop VMs (preview)
-description: This article tells how to troubleshoot issues encountered with the Start/Stop VMs (preview) feature for your Azure VMs.
+ Title: Troubleshoot Start/Stop VMs
+description: This article tells how to troubleshoot issues encountered with the Start/Stop VMs feature for your Azure VMs.
Previously updated : 06/25/2021 Last updated : 06/08/2022
-# Troubleshoot common issues with Start/Stop VMs (preview)
+# Troubleshoot common issues with Start/Stop VMs
-This article provides information on troubleshooting and resolving issues that may occur while attempting to install and configure Start/Stop VMs (preview). For general information, see [Start/Stop VMs overview](overview.md).
+This article provides information on troubleshooting and resolving issues that may occur while attempting to install and configure Start/Stop VMs. For general information, see [Start/Stop VMs overview](overview.md).
## General validation and troubleshooting
This section covers how to troubleshoot general issues with the schedules scenar
### Azure dashboard
-You can start by reviewing the Azure shared dashboard. The Azure shared dashboard deployed as part of Start/Stop VMs v2 (preview) is a quick and easy way to verify the status of each operation that's performed on your VMs. Refer to the **Recently attempted actions on VMs** tile to see all the recent operations executed on your VMs. There is some latency, around five minutes, for data to show up in the report as it pulls data from the Application Insights resource.
+You can start by reviewing the Azure shared dashboard. The Azure shared dashboard deployed as part of Start/Stop VMs v2 is a quick and easy way to verify the status of each operation that's performed on your VMs. Refer to the **Recently attempted actions on VMs** tile to see all the recent operations executed on your VMs. There is some latency, around five minutes, for data to show up in the report as it pulls data from the Application Insights resource.
### Logic Apps
Depending on which Logic Apps you have enabled to support your start/stop scenar
### Azure Storage
-You can review the details for the operations performed on the VMs that are written to the table **requestsstoretable** in the Azure storage account used for Start/Stop VMs v2 (preview). Perform the following steps to view those records.
+You can review the details for the operations performed on the VMs that are written to the table **requestsstoretable** in the Azure storage account used for Start/Stop VMs v2. Perform the following steps to view those records.
-1. Navigate to the storage account in the Azure portal and in the account select **Storage Explorer (preview)** from the left-hand pane.
+1. Navigate to the storage account in the Azure portal and in the account select **Storage Explorer** from the left-hand pane.
1. Select **TABLES** and then select **requeststoretable**. 1. Each record in the table represents the start/stop action performed against an Azure VM based on the target scope defined in the logic app scenario. You can filter the results by any one of the record properties (for example, TIMESTAMP, ACTION, or TARGETTOPLEVELRESOURCENAME).
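As an alternative to Storage Explorer, you can query the same table from the Azure CLI. The following is a sketch only: the storage account name is a placeholder, and it assumes the CLI can authenticate to the account (for example, through a connection string or account key):

```azurecli
# Table name as referenced in this article; confirm the exact name in your storage account.
az storage entity query \
  --account-name contosostartstopsa \
  --table-name requestsstoretable \
  --num-results 20
```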
From the logic app, the **Scheduled** HTTP function is invoked with Payload sche
Perform the following steps to see the invocation details. 1. In the Azure portal, navigate to **Azure Functions**.
-1. Select the Function app for Start/Stop VMs v2 (preview) from the list.
+1. Select the Function app for Start/Stop VMs v2 from the list.
1. Select **Functions** from the left-hand pane. 1. In the list, you see several functions associated for each scenario. Select the **Scheduled** HTTP function. 1. Select **Monitor** from the left-hand pane.
Learn more about monitoring Azure Functions and logic apps:
* [Monitor logic apps](../../logic-apps/monitor-logic-apps.md).
-* If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2 (preview), or if you have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is also available for this preview version.
+* If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2, or if you have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is also available for this version.
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Since the Dependency agent works at the kernel level, support is also dependent
| Distribution | OS version | Kernel version | |:|:|:|
-| Red Hat Linux 8 | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
+| Red Hat Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_64, 4.18.0-348.\*el8.x86_64 |
+| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
| | 8.3 | 4.18.0-240.\*el8_3.x86_64 | | | 8.2 | 4.18.0-193.\*el8_2.x86_64 | | | 8.1 | 4.18.0-147.\*el8_1.x86_64 |
Since the Dependency agent works at the kernel level, support is also dependent
| | 7.4 | 3.10.0-693 | | Red Hat Linux 6 | 6.10 | 2.6.32-754 | | | 6.9 | 2.6.32-696 |
-| CentOS Linux 8 | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
+| CentOS Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_64, 4.18.0-348.\*el8.x86_64 |
+| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
| | 8.3 | 4.18.0-240.\*el8_3.x86_64 | | | 8.2 | 4.18.0-193.\*el8_2.x86_64 | | | 8.1 | 4.18.0-147.\*el8_1.x86_64 |
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
description: Learn about Azure Monitor alerts, alert rules, action processing ru
Previously updated : 04/26/2022 Last updated : 06/09/2022
When the alert is considered resolved, the alert rule sends out a resolved notif
## Manage your alerts programmatically
-You can programmatically query for alerts using:
-You can also use [Resource Graphs](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade). Resource graphs are good for managing alerts across multiple subscriptions.
+You can query your alert instances to create custom views outside of the Azure portal, or to analyze your alerts to identify patterns and trends.
+We recommend that you use [Azure Resource Graph](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade) with the 'AlertsManagementResources' schema to manage alerts across multiple subscriptions. For a sample query, see [Azure Resource Graph sample queries for Azure Monitor](../resource-graph-samples.md).
+
+You can use Azure Resource Graph:
+ - with [Azure PowerShell](/powershell/module/az.monitor/)
+ - with the [Azure CLI](/cli/azure/monitor?view=azure-cli-latest&preserve-view=true)
+ - in the Azure portal
+
+You can also use the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) for lower scale querying or to update fired alerts.
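As an illustration of the Resource Graph approach, the following Azure CLI sketch runs a query against the alerts schema. The projected fields are assumptions made for the example and may need adjusting for your environment:

```azurecli
# Requires the resource-graph CLI extension.
az extension add --name resource-graph

az graph query -q "
alertsmanagementresources
| where type =~ 'microsoft.alertsmanagement/alerts'
| project name, properties.essentials.severity, properties.essentials.monitorCondition
| limit 20"
```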
## Pricing See the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/) for information about pricing.
azure-monitor Alerts Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-resource-move.md
Navigate to Alerts > Alert processing rules (preview) > filter by the containing
### Change scope of a rule using PowerShell
-1. Get the existing rule ([metric alerts](/powershell/module/az.monitor/get-azmetricalertrulev2), [activity log alerts](/powershell/module/az.monitor/get-azactivitylogalert), [alert processing rules](/powershell/module/az.alertsmanagement/get-azactionrule)).
+1. Get the existing rule ([metric alerts](/powershell/module/az.monitor/get-azmetricalertrulev2), [activity log alerts](/powershell/module/az.monitor/get-azactivitylogalert), alert [processing rules](/powershell/module/az.alertsmanagement/get-azalertprocessingrule)).
2. Modify the scope. If needed, split into two rules (relevant for some cases of metric alerts, as noted above).
-3. Redeploy the rule ([metric alerts](/powershell/module/az.monitor/add-azmetricalertrulev2), [activity log alerts](/powershell/module/az.monitor/enable-azactivitylogalert), [alert processing rules](/powershell/module/az.alertsmanagement/set-azactionrule)).
+3. Redeploy the rule ([metric alerts](/powershell/module/az.monitor/add-azmetricalertrulev2), [activity log alerts](/powershell/module/az.monitor/enable-azactivitylogalert), [alert processing rules](/powershell/module/az.alertsmanagement/set-azalertprocessingrule)).
### Change the scope of a rule using Azure CLI
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
Action groups provide a modular and reusable way of triggering actions for Azure
To learn more about action groups, see [Create and manage action groups in the Azure portal](../alerts/action-groups.md). > [!NOTE]
-> If you are using Log Serch alert notice that the query should project a ΓÇ£ComputerΓÇ¥ column with the configurtaion items list in order to have them as a part of the payload.
+> If you are using a log alert, the query results must include a "Computer" column containing the configuration items list.
To add a webhook to an action, follow these instructions for Secure Webhook:
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Below is our step-by-step troubleshooting guide for extension/agent based monito
:::image type="content" source="media/azure-web-apps-net-core/auto-instrumentation-status.png" alt-text="Screenshot displaying auto instrumentation status web page." lightbox="media/azure-web-apps-net-core/auto-instrumentation-status.png":::
-##### No Data
-
-1. List and identify the process that is hosting an app. Navigate to your terminal and on the command line type `ps ax`.
-
- The output should be similar to:
-
- ```bash
- PID TTY STAT TIME COMMAND
-
- 1 ? SNs 0:00 /bin/bash /opt/startup/startup.sh
-
- 19 ? SNs 0:00 /usr/sbin/sshd
-
- 27 ? SNLl 5:52 dotnet dotnet6demo.dll
-
- 50 ? SNs 0:00 sshd: root@pts/0
-
- 53 pts/0 SNs+ 0:00 -bash
-
- 55 ? SNs 0:00 sshd: root@pts/1
-
- 57 pts/1 SNs+ 0:00 -bash
- ```
--
-1. Then list environment variables from app process. On the command line type `cat /proc/27/environ | tr '\0' '\n`.
-
- The output should be similar to:
-
- ```bash
- ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=Microsoft.ApplicationInsights.StartupBootstrapper
-
- DOTNET_STARTUP_HOOKS=/DotNetCoreAgent/2.8.39/StartupHook/Microsoft.ApplicationInsights.StartupHook.dll
-
- APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://westus-0.in.applicationinsights.azure.com/
-
- ```
-
-1. Validate that `ASPNETCORE_HOSTINGSTARTUPASSEMBLIES`, `DOTNET_STARTUP_HOOKS`, and `APPLICATIONINSIGHTS_CONNECTION_STRING` are set.
- #### Default website deployed with web apps doesn't support automatic client-side monitoring
azure-monitor Azure Cli Application Insights Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-cli-application-insights-component.md
- Title: Manage Application Insights components in Azure CLI
-description: Use this sample code to manage components in Application Insights. This feature is part of Azure Monitor.
--- Previously updated : 09/10/2012
-ms.tool: azure-cli
--
-# Manage Application Insights components by using Azure CLI
-
-In Azure Monitor, components are independently deployable parts of your distributed or microservices application. Use these Azure CLI commands to manage components in Application Insights.
-
-The examples in this article do the following management tasks:
--- Create a component.-- Connect a component to a webapp.-- Link a component to a storage account with a component.-- Create a continuous export configuration for a component.--
-## Create a component
-
-If you don't already have a resource group and workspace, create them by using [az group create](/cli/azure/group#az-group-create) and [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create):
-
-```azurecli
-az group create --name ContosoAppInsightRG --location eastus2
-az monitor log-analytics workspace create --resource-group ContosoAppInsightRG \
- --workspace-name AppInWorkspace
-```
-
-To create a component, run the [az monitor app-insights component create](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create) command. The [az monitor app-insights component show](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-show) command displays the component.
-
-```azurecli
-az monitor app-insights component create --resource-group ContosoAppInsightRG \
- --app ContosoApp --location eastus2 --kind web --application-type web \
- --retention-time 120
-az monitor app-insights component show --resource-group ContosoAppInsightRG --app ContosoApp
-```
-
-## Connect a webapp
-
-This example connects your component to a webapp. You can create a webapp by using the [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create) and [az webapp create](/cli/azure/webapp#az-webapp-create) commands:
-
-```azurecli
-az appservice plan create --resource-group ContosoAppInsightRG --name ContosoAppService
-az webapp create --resource-group ContosoAppInsightRG --name ContosoApp \
- --plan ContosoAppService --name ContosoApp8765
-```
-
-Run the [az monitor app-insights component connect-webapp](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-connect-webapp) command to connect your component to the webapp:
-
-```azurecli
-az monitor app-insights component connect-webapp --resource-group ContosoAppInsightRG \
- --app ContosoApp --web-app ContosoApp8765 --enable-debugger false --enable-profiler false
-```
-
-You can instead connect to an Azure function by using the [az monitor app-insights component connect-function](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-connect-function) command.
-
-## Link a component to storage
-
-You can link a component to a storage account. To create a storage account, use the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command:
-
-```azurecli
-az storage account create --resource-group ContosoAppInsightRG \
- --name contosolinkedstorage --location eastus2 --sku Standard_LRS
-```
-
-To link your component to the storage account, run the [az monitor app-insights component linked-storage link](/cli/azure/monitor/app-insights/component/linked-storage#az-monitor-app-insights-component-linked-storage-link) command. You can see the existing links by using the [az monitor app-insights component linked-storage show](/cli/azure/monitor/app-insights/component/linked-storage#az-monitor-app-insights-component-linked-storage-show) command:
--
-```azurecli
-az monitor app-insights component linked-storage link --resource-group ContosoAppInsightRG \
- --app ContosoApp --storage-account contosolinkedstorage
-az monitor app-insights component linked-storage show --resource-group ContosoAppInsightRG \
- --app ContosoApp
-```
-
-To unlink the storage, run the [az monitor app-insights component linked-storage unlink](/cli/azure/monitor/app-insights/component/linked-storage#az-monitor-app-insights-component-linked-storage-unlink) command:
-
-```AzureCLI
-az monitor app-insights component linked-storage unlink \
- --resource-group ContosoAppInsightRG --app ContosoApp
-```
-
-## Set up continuous export
-
-Continuous export saves events from Application Insights portal in a storage container in JSON format.
-
-> [!NOTE]
-> Continuous export is only supported for classic Application Insights resources. [Workspace-based Application Insights resources](../app/create-workspace-resource.md) must use [diagnostic settings](../app/create-workspace-resource.md#export-telemetry).
->
-
-To create a storage container, run the [az storage container create](/cli/azure/storage/container#az-storage-container-create) command.
-
-```azurecli
-az storage container create --name contosostoragecontainer --account-name contosolinkedstorage \
- --public-access blob
-```
-
-You need access for the container to be write only. Run the [az storage container policy create](/cli/azure/storage/container/policy#az-storage-container-policy-create) cmdlet:
-
-```azurecli
-az storage container policy create --container-name contosostoragecontainer \
- --account-name contosolinkedstorage --name WAccessPolicy --permissions w
-```
-
-Create an SAS key by using the [az storage container generate-sas](/cli/azure/storage/container#az-storage-container-generate-sas) command. Be sure to use the `--output tsv` parameter value to save the key without unwanted formatting like quotation marks. For more information, see [Use Azure CLI effectively](/cli/azure/use-cli-effectively).
-
-```azurecli
-containersas=$(az storage container generate-sas --name contosostoragecontainer \
- --account-name contosolinkedstorage --permissions w --output tsv)
-```
-
-To create a continuous export, run the [az monitor app-insights component continues-export create](/cli/azure/monitor/app-insights/component/continues-export#az-monitor-app-insights-component-continues-export-create) command:
-
-```azurecli
-az monitor app-insights component continues-export create --resource-group ContosoAppInsightRG \
- --app ContosoApp --record-types Event --dest-account contosolinkedstorage \
- --dest-container contosostoragecontainer --dest-sub-id 00000000-0000-0000-0000-000000000000 \
- --dest-sas $containersas
-```
-
-You can delete a configured continuous export by using the [az monitor app-insights component continues-export delete](/cli/azure/monitor/app-insights/component/continues-export#az-monitor-app-insights-component-continues-export-delete) command:
-
-```azurecli
-az monitor app-insights component continues-export list \
- --resource-group ContosoAppInsightRG --app ContosoApp
-az monitor app-insights component continues-export delete \
- --resource-group ContosoAppInsightRG --app ContosoApp --id abcdefghijklmnopqrstuvwxyz=
-```
-
-## Clean up deployment
-
-If you created a resource group to test these commands, you can remove the resource group and all its contents by using the [az group delete](/cli/azure/group#az-group-delete) command:
-
-```azurecli
-az group delete --name ContosoAppInsightRG
-```
-
-## Azure CLI commands used in this article
--- [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create)-- [az group create](/cli/azure/group#az-group-create)-- [az group delete](/cli/azure/group#az-group-delete)-- [az monitor app-insights component connect-webapp](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-connect-webapp)-- [az monitor app-insights component continues-export create](/cli/azure/monitor/app-insights/component/continues-export#az-monitor-app-insights-component-continues-export-create)-- [az monitor app-insights component continues-export delete](/cli/azure/monitor/app-insights/component/continues-export#az-monitor-app-insights-component-continues-export-delete)-- [az monitor app-insights component continues-export list](/cli/azure/monitor/app-insights/component/continues-export#az-monitor-app-insights-component-continues-export-list)-- [az monitor app-insights component create](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create)-- [az monitor app-insights component linked-storage link](/cli/azure/monitor/app-insights/component/linked-storage#az-monitor-app-insights-component-linked-storage-link)-- [az monitor app-insights component linked-storage unlink](/cli/azure/monitor/app-insights/component/linked-storage#az-monitor-app-insights-component-linked-storage-unlink)-- [az monitor app-insights component show](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-show)-- [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create)-- [az storage account create](/cli/azure/storage/account#az-storage-account-create)-- [az storage container create](/cli/azure/storage/container#az-storage-container-create)-- [az storage container generate-sas](/cli/azure/storage/container#az-storage-container-generate-sas)-- [az storage container policy create](/cli/azure/storage/container/policy#az-storage-container-policy-create)-- [az webapp create](/cli/azure/webapp#az-webapp-create)-
-## Next steps
-
-[Azure Monitor CLI samples](../cli-samples.md)
-
-[Find and diagnose performance issues](../app/tutorial-performance.md)
-
-[Monitor and alert on application health](../app/tutorial-alert.md)
azure-monitor Solution Targeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-targeting.md
description: Targeting monitoring solutions allows you to limit monitoring solut
Previously updated : 04/27/2017 Last updated : 06/08/2022
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
A Log Analytics workspace is a unique environment for log data from Azure Monito
You can use a single workspace for all your data collection, or you may create multiple workspaces based on a variety of requirements such as the geographic location of the data, access rights that define which users can access data, and configuration settings such as the pricing tier and data retention.
-To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see Design a Log Analytics workspace configuration(workspace-design.md).
+To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see [Design a Log Analytics workspace configuration](./workspace-design.md).
## Data structure
azure-monitor Profiler Cloudservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-cloudservice.md
Title: Profile live Azure Cloud Services with Application Insights | Microsoft Docs
-description: Enable Application Insights Profiler for Azure Cloud Services.
+ Title: Enable Profiler for Azure Cloud Services | Microsoft Docs
+description: Profile live Azure Cloud Services with Application Insights Profiler.
Previously updated : 08/06/2018 Last updated : 05/25/2022
-# Profile live Azure Cloud Services with Application Insights
+# Enable Profiler for Azure Cloud Services
-You can also deploy Application Insights Profiler on these
-* [Azure App Service](profiler.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Service Fabric applications](profiler-servicefabric.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Virtual Machines](profiler-vm.md?toc=/azure/azure-monitor/toc.json)
+Receive performance traces for your [Azure Cloud Service](../../cloud-services-extended-support/overview.md) by enabling the Application Insights Profiler. The Profiler is installed on your Cloud Service via the [Azure Diagnostics extension](../agents/diagnostics-extension-overview.md).
-Application Insights Profiler is installed with the Azure Diagnostics extension. You just need to configure Azure Diagnostics to install Profiler and send profiles to your Application Insights resource.
+In this article, you will:
-## Enable Profiler for Azure Cloud Services
-1. Check to make sure that you're using [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or newer. If you are using OS family 4, you'll need to install .NET Framework 4.6.1 or newer with a [startup task](../../cloud-services/cloud-services-dotnet-install-dotnet.md). OS Family 5 includes a compatible version of .NET Framework by default.
+- Enable your Cloud Service to send diagnostics data to Application Insights.
+- Configure the Azure Diagnostics extension within your solution to install Profiler.
+- Deploy your service and generate traffic to view Profiler traces.
-1. Add [Application Insights SDK to Azure Cloud Services](../app/cloudservices.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+## Prerequisites
- **The bug in the profiler that ships in the WAD for Cloud Services has been fixed.** The latest version of WAD (1.12.2.0) for Cloud Services works with all recent versions of the App Insights SDK. Cloud Service hosts will upgrade WAD automatically, but it isn't immediate. To force an upgrade, you can redeploy your service or reboot the node.
+- Make sure you've [set up diagnostics for Azure Cloud Services](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines).
+- Use [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or newer.
+ - If you're using [OS Family 4](../../cloud-services/cloud-services-guestos-update-matrix.md#family-4-releases), install .NET Framework 4.6.1 or newer with a [startup task](../../cloud-services/cloud-services-dotnet-install-dotnet.md).
+ - [OS Family 5](../../cloud-services/cloud-services-guestos-update-matrix.md#family-5-releases) includes a compatible version of .NET Framework by default.
-1. Track requests with Application Insights:
+## Track requests with Application Insights
- * For ASP.NET web roles, Application Insights can track the requests automatically.
+When publishing your Cloud Service to the Azure portal, add the [Application Insights SDK to Azure Cloud Services](../app/cloudservices.md).
- * For worker roles, [add code to track requests](profiler-trackrequests.md?toc=/azure/azure-monitor/toc.json).
-1. Configure the Azure Diagnostics extension to enable Profiler:
+Once you've added the SDK and published your Cloud Service to the Azure portal, track requests using Application Insights.
- a. Locate the [Azure Diagnostics](../agents/diagnostics-extension-overview.md) *diagnostics.wadcfgx* file for your application role, as shown here:
+- **For ASP.NET web roles**, Application Insights tracks the requests automatically.
+- **For worker roles**, you need to [add code manually to your application to track requests](profiler-trackrequests.md).
- ![Location of the diagnostics config file](./media/profiler-cloudservice/cloud-service-solution-explorer.png)
+## Configure the Azure Diagnostics extension
- If you can't find the file, see [Set up diagnostics for Azure Cloud Services and Virtual Machines](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines).
+Locate the Azure Diagnostics *diagnostics.wadcfgx* file for your application role:
- b. Add the following `SinksConfig` section as a child element of `WadCfg`:
- ```xml
- <WadCfg>
- <DiagnosticMonitorConfiguration>...</DiagnosticMonitorConfiguration>
- <SinksConfig>
- <Sink name="MyApplicationInsightsProfiler">
- <!-- Replace with your own Application Insights instrumentation key. -->
- <ApplicationInsightsProfiler>00000000-0000-0000-0000-000000000000</ApplicationInsightsProfiler>
- </Sink>
- </SinksConfig>
- </WadCfg>
- ```
+Add the following `SinksConfig` section as a child element of `WadCfg`:
- > [!NOTE]
- > If the *diagnostics.wadcfgx* file also contains another sink of type ApplicationInsights, all three of the following instrumentation keys must match:
- > * The key that's used by your application.
- > * The key that's used by the ApplicationInsights sink.
- > * The key that's used by the ApplicationInsightsProfiler sink.
- >
- > You can find the actual instrumentation key value that's used by the `ApplicationInsights` sink in the *ServiceConfiguration.\*.cscfg* files.
- > After the Visual Studio 15.5 Azure SDK release, only the instrumentation keys that are used by the application and the ApplicationInsightsProfiler sink need to match each other.
+```xml
+<WadCfg>
+ <DiagnosticMonitorConfiguration>...</DiagnosticMonitorConfiguration>
+ <SinksConfig>
+ <Sink name="MyApplicationInsightsProfiler">
+ <!-- Replace with your own Application Insights instrumentation key. -->
+ <ApplicationInsightsProfiler>00000000-0000-0000-0000-000000000000</ApplicationInsightsProfiler>
+ </Sink>
+ </SinksConfig>
+</WadCfg>
+```
-1. Deploy your service with the new Diagnostics configuration, and Application Insights Profiler is configured to run on your service.
+> [!NOTE]
+> The instrumentation keys that are used by the application and the ApplicationInsightsProfiler sink need to match each other.
+
+Deploy your service with the new Diagnostics configuration. Application Insights Profiler is now configured to run on your Cloud Service.
+
+## Generate traffic to your service
+
+Now that your Azure Cloud Service is deployed with Profiler, you can generate traffic to view Profiler traces.
+
+Generate traffic to your application by setting up an [availability test](../app/monitor-web-app-availability.md). Wait 10 to 15 minutes for traces to be sent to the Application Insights instance.
+
+Navigate to your Azure Cloud Service's Application Insights resource. In the left side menu, select **Performance**.
++
+Select the **Profiler** for your Cloud Service.
++
+Select **Profile now** to start a profiling session. This process will take a few minutes.
++
+For more instructions on profiling sessions, see the [Profiler overview](./profiler-overview.md#start-a-profiler-on-demand-session).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Next steps
-* Generate traffic to your application (for example, launch an [availability test](../app/monitor-web-app-availability.md)). Then, wait 10 to 15 minutes for traces to start to be sent to the Application Insights instance.
-* See [Profiler traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json) in the Azure portal.
-* To troubleshoot Profiler issues, see [Profiler troubleshooting](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).
-
+- Learn more about [configuring Profiler](./profiler-settings.md).
+- [Troubleshoot Profiler issues](./profiler-troubleshooting.md).
azure-monitor Vminsights Configure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-configure-workspace.md
Previously updated : 12/22/2020 Last updated : 06/07/2022
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-hybrid.md
description: This article describes how you enable VM insights for a hybrid clou
Previously updated : 07/27/2020 Last updated : 06/08/2022
You can download the Dependency agent from these locations:
| File | OS | Version | SHA-256 | |:--|:--|:--|:--|
-| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.13.19190 | 0882504FE5828C4C4BA0A869BD9F6D5B0020A52156DDBD21D55AAADA762923C4 |
-| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.13.19190 | 7D90A2A7C6F1D7FB2BCC274ADC4C5D6C118E832FF8A620971734AED4F446B030 |
+| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.14.20760 | D4DB398FAD36E86FEACCC41D7B8AF46711346A943806769B6CE017F0BF1625FF |
+| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.14.20760 | 3DE3B485BA79B57E74B3DFB60FD277A30C8A5D1BD898455AD77FECF20E0E2610 |
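For example, on a Linux machine you can download the agent, compare its checksum against the table above, and then run the installer. This is a sketch only; confirm the installer switches for your environment:

```bash
# Download the Linux Dependency agent and verify its SHA-256 against the table above.
wget -O InstallDependencyAgent-Linux64.bin https://aka.ms/dependencyagentlinux
sha256sum InstallDependencyAgent-Linux64.bin
# Expected (case-insensitive): 3DE3B485BA79B57E74B3DFB60FD277A30C8A5D1BD898455AD77FECF20E0E2610

# Run the installer silently (-s is the silent-install switch).
sudo sh InstallDependencyAgent-Linux64.bin -s
```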
## Install the Dependency agent on Windows
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
description: Learn how to deploy and configure VM insights. Find out the system
Previously updated : 12/22/2020 Last updated : 06/08/2022
azure-monitor Vminsights Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-policy.md
description: Describes how you enable VM insights for multiple Azure virtual mac
Previously updated : 07/27/2020 Last updated : 06/08/2022
azure-monitor Vminsights Enable Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-portal.md
description: Learn how to enable VM insights on a single Azure virtual machine o
Previously updated : 07/27/2020 Last updated : 06/08/2022
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
description: Describes how to enable VM insights for Azure virtual machines or v
Previously updated : 07/27/2020 Last updated : 06/08/2022
azure-monitor Vminsights Enable Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-resource-manager.md
description: This article describes how you enable VM insights for one or more A
Previously updated : 07/27/2020 Last updated : 06/08/2022
azure-monitor Vminsights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md
+
+ Title: How to Query Logs from VM insights
+description: The VM insights solution collects metrics and log data, and this article describes the records and includes sample queries.
+++ Last updated : 06/08/2022++
+# How to query logs from VM insights
+
+VM insights collects performance and connection metrics, computer and process inventory data, and health state information and forwards it to the Log Analytics workspace in Azure Monitor. This data is available for [query](../logs/log-query-overview.md) in Azure Monitor. You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting.
+
+## Map records
+
+One record is generated per hour for each unique computer and process, in addition to the records that are generated when a process or computer starts or is added to VM insights. The fields and values in the ServiceMapComputer_CL events map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the ServiceMapProcess_CL events map to the fields of the Process resource in the ServiceMap Azure Resource Manager API. The ResourceName_s field matches the name field in the corresponding Resource Manager resource.
+
+There are internally generated properties you can use to identify unique processes and computers:
+
+- Computer: Use *ResourceId* or *ResourceName_s* to uniquely identify a computer within a Log Analytics workspace.
+- Process: Use *ResourceId* to uniquely identify a process within a Log Analytics workspace. *ResourceName_s* is unique within the context of the machine on which the process is running (MachineResourceName_s)
+
+Because multiple records can exist for a specified process and computer in a specified time range, queries can return more than one record for the same computer or process. To include only the most recent record, add `| summarize arg_max(TimeGenerated, *) by ResourceId` to the query.
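For example, the following Azure CLI sketch (the workspace GUID is a placeholder, and the command requires the `log-analytics` CLI extension) returns only the most recent ServiceMapComputer_CL record per computer:

```azurecli
# Requires the log-analytics CLI extension: az extension add --name log-analytics
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "ServiceMapComputer_CL | summarize arg_max(TimeGenerated, *) by ResourceId | project ResourceId, ResourceName_s, TimeGenerated" \
  --timespan P1D
```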
+
+### Connections and ports
+
+The Connection Metrics feature introduces two new tables in Azure Monitor logs - VMConnection and VMBoundPort. These tables provide information about the connections for a machine (inbound and outbound), as well as the server ports that are open/active on them. ConnectionMetrics are also exposed via APIs that provide the means to obtain a specific metric during a time window. TCP connections resulting from *accepting* on a listening socket are inbound, while those created by *connecting* to a given IP and port are outbound. The direction of a connection is represented by the Direction property, which can be set to either **inbound** or **outbound**.
+
+Records in these tables are generated from data reported by the Dependency Agent. Every record represents an observation over a 1-minute time interval. The TimeGenerated property indicates the start of the time interval. Each record contains information to identify the respective entity, that is, connection or port, as well as metrics associated with that entity. Currently, only network activity that occurs using TCP over IPv4 is reported.
+
+#### Common fields and conventions
+
+The following fields and conventions apply to both VMConnection and VMBoundPort:
+
+- Computer: Fully-qualified domain name of reporting machine
+- AgentId: The unique identifier for a machine with the Log Analytics agent
+- Machine: Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It is of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId
+- Process: Name of the Azure Resource Manager resource for the process exposed by ServiceMap. It is of the form *p-{hex string}*. Process is unique within a machine scope and to generate a unique process ID across machines, combine Machine and Process fields.
+- ProcessName: Executable name of the reporting process.
+- All IP addresses are strings in IPv4 canonical format, for example *13.107.3.160*
+
+To manage cost and complexity, connection records do not represent individual physical network connections. Multiple physical network connections are grouped into a logical connection, which is then reflected in the respective table. That is, records in the *VMConnection* table represent a logical grouping and not the individual physical connections being observed. Physical network connections sharing the same values for the following attributes during a given one-minute interval are aggregated into a single logical record in *VMConnection*.
+
+| Property | Description |
+|:--|:--|
+|Direction |Direction of the connection, value is *inbound* or *outbound* |
+|Machine |The computer FQDN |
+|Process |Identity of process or groups of processes, initiating/accepting the connection |
+|SourceIp |IP address of the source |
+|DestinationIp |IP address of the destination |
+|DestinationPort |Port number of the destination |
+|Protocol |Protocol used for the connection. The only value is *tcp*. |
+
+To account for the impact of grouping, information about the number of grouped physical connections is provided in the following properties of the record:
+
+| Property | Description |
+|:--|:--|
+|LinksEstablished |The number of physical network connections that have been established during the reporting time window |
+|LinksTerminated |The number of physical network connections that have been terminated during the reporting time window |
+|LinksFailed |The number of physical network connections that have failed during the reporting time window. This information is currently available only for outbound connections. |
+|LinksLive |The number of physical network connections that were open at the end of the reporting time window|
+
+#### Metrics
+
+In addition to connection count metrics, information about the volume of data sent and received on a given logical connection or network port is also included in the following properties of the record:
+
+| Property | Description |
+|:--|:--|
+|BytesSent |Total number of bytes that have been sent during the reporting time window |
+|BytesReceived |Total number of bytes that have been received during the reporting time window |
+|Responses |The number of responses observed during the reporting time window. |
+|ResponseTimeMax |The largest response time (milliseconds) observed during the reporting time window. If no value, the property is blank.|
+|ResponseTimeMin |The smallest response time (milliseconds) observed during the reporting time window. If no value, the property is blank.|
+|ResponseTimeSum |The sum of all response times (milliseconds) observed during the reporting time window. If no value, the property is blank.|
+
+The third type of data being reported is response time - how long a caller spends waiting for a request sent over a connection to be processed and responded to by the remote endpoint. The response time reported is an estimation of the true response time of the underlying application protocol. It is computed using heuristics based on the observation of the flow of data between the source and destination end of a physical network connection. Conceptually, it is the difference between the time the last byte of a request leaves the sender, and the time when the last byte of the response arrives back to it. These two timestamps are used to delineate request and response events on a given physical connection. The difference between them represents the response time of a single request.
+
+In this first release of this feature, our algorithm is an approximation that may work with varying degrees of success depending on the actual application protocol used for a given network connection. For example, the current approach works well for request-response based protocols such as HTTP(S), but does not work with one-way or message queue-based protocols.
+
+Here are some important points to consider:
+
+1. If a process accepts connections on the same IP address but over multiple network interfaces, a separate record for each interface will be reported.
+2. Records with wildcard IP will contain no activity. They are included to represent the fact that a port on the machine is open to inbound traffic.
+3. To reduce verbosity and data volume, records with wildcard IP will be omitted when there is a matching record (for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the IsWildcardBind property of the record with the specific IP address will be set to *True* to indicate that the port is exposed over every interface of the reporting machine.
+4. Ports that are bound only on a specific interface have IsWildcardBind set to *False*.
+
+#### Naming and classification
+
+For convenience, the IP address of the remote end of a connection is included in the RemoteIp property. For inbound connections, RemoteIp is the same as SourceIp, while for outbound connections, it is the same as DestinationIp. The RemoteDnsCanonicalNames property represents the DNS canonical names reported by the machine for RemoteIp. The RemoteDnsQuestions property represents the DNS questions reported by the machine for RemoteIp. The RemoteClassification property is reserved for future use.
+
+#### Geolocation
+
+*VMConnection* also includes geolocation information for the remote end of each connection record in the following properties of the record:
+
+| Property | Description |
+|:--|:--|
+|RemoteCountry |The name of the country/region hosting RemoteIp. For example, *United States* |
+|RemoteLatitude |The geolocation latitude. For example, *47.68* |
+|RemoteLongitude |The geolocation longitude. For example, *-122.12* |
+
+#### Malicious IP
+
+Every RemoteIp property in the *VMConnection* table is checked against a set of IPs with known malicious activity. If the RemoteIp is identified as malicious, the following properties are populated (they are empty when the IP is not considered malicious):
+
+| Property | Description |
+|:--|:--|
+|MaliciousIp |The RemoteIp address |
+|IndicatorThreatType |The threat indicator detected, one of the following values: *Botnet*, *C2*, *CryptoMining*, *Darknet*, *DDos*, *MaliciousUrl*, *Malware*, *Phishing*, *Proxy*, *PUA*, *Watchlist*. |
+|Description |Description of the observed threat. |
+|TLPLevel |Traffic Light Protocol (TLP) Level is one of the defined values, *White*, *Green*, *Amber*, *Red*. |
+|Confidence |Values are *0 – 100*. |
+|Severity |Values are *0 – 5*, where *5* is the most severe and *0* is not severe at all. Default value is *3*. |
+|FirstReportedDateTime |The first time the provider reported the indicator. |
+|LastReportedDateTime |The last time the indicator was seen by Interflow. |
+|IsActive |Indicates whether indicators are deactivated, with a *True* or *False* value. |
+|ReportReferenceLink |Links to reports related to a given observable. |
+|AdditionalInformation |Provides additional information, if applicable, about the observed threat. |
+
+### Ports
+
+Ports on a machine that actively accept incoming traffic or could potentially accept traffic, but are idle during the reporting time window, are written to the VMBoundPort table.
+
+Every record in VMBoundPort is identified by the following fields:
+
+| Property | Description |
+|:--|:--|
+|Process | Identity of the process (or group of processes) with which the port is associated.|
+|Ip | Port IP address (can be wildcard IP, *0.0.0.0*) |
+|Port |The port number |
+|Protocol | The protocol, for example *tcp* or *udp* (only *tcp* is currently supported).|
+
+The identity of a port is derived from the fields above and is stored in the PortId property. This property can be used to quickly find records for a specific port across time.
+
+#### Metrics
+
+Port records include metrics representing the connections associated with them. Currently, the following metrics are reported (the details for each metric are described in the previous section):
+
+- BytesSent and BytesReceived
+- LinksEstablished, LinksTerminated, LinksLive
+- Responses, ResponseTimeMin, ResponseTimeMax, ResponseTimeSum
+
+Here are some important points to consider:
+
+- If a process accepts connections on the same IP address but over multiple network interfaces, a separate record for each interface will be reported.
+- Records with wildcard IP will contain no activity. They are included to represent the fact that a port on the machine is open to inbound traffic.
+- To reduce verbosity and data volume, records with wildcard IP will be omitted when there is a matching record (for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the *IsWildcardBind* property for the record with the specific IP address will be set to *True*. This indicates the port is exposed over every interface of the reporting machine.
+- Ports that are bound only on a specific interface have IsWildcardBind set to *False*.
+
+### VMComputer records
+
+Records with a type of *VMComputer* have inventory data for servers with the Dependency agent. These records have the properties in the following table:
+
+| Property | Description |
+|:--|:--|
+|TenantId | The unique identifier for the workspace |
+|SourceSystem | *Insights* |
+|TimeGenerated | Timestamp of the record (UTC) |
+|Computer | The computer FQDN |
+|AgentId | The unique ID of the Log Analytics agent |
+|Machine | Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It is of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId. |
+|DisplayName | Display name |
+|FullDisplayName | Full display name |
+|HostName | The name of machine without domain name |
+|BootTime | The machine boot time (UTC) |
+|TimeZone | The normalized time zone |
+|VirtualizationState | *virtual*, *hypervisor*, *physical* |
+|Ipv4Addresses | Array of IPv4 addresses |
+|Ipv4SubnetMasks | Array of IPv4 subnet masks (in the same order as Ipv4Addresses). |
+|Ipv4DefaultGateways | Array of IPv4 gateways |
+|Ipv6Addresses | Array of IPv6 addresses |
+|MacAddresses | Array of MAC addresses |
+|DnsNames | Array of DNS names associated with the machine. |
+|DependencyAgentVersion | The version of the Dependency agent running on the machine. |
+|OperatingSystemFamily | *Linux*, *Windows* |
+|OperatingSystemFullName | The full name of the operating system |
+|PhysicalMemoryMB | The physical memory in megabytes |
+|Cpus | The number of processors |
+|CpuSpeed | The CPU speed in MHz |
+|VirtualMachineType | *hyperv*, *vmware*, *xen* |
+|VirtualMachineNativeId | The VM ID as assigned by its hypervisor |
+|VirtualMachineNativeName | The name of the VM |
+|VirtualMachineHypervisorId | The unique identifier of the hypervisor hosting the VM |
+|HypervisorType | *hyperv* |
+|HypervisorId | The unique ID of the hypervisor |
+|HostingProvider | *azure* |
+|_ResourceId | The unique identifier for an Azure resource |
+|AzureSubscriptionId | A globally unique identifier that identifies your subscription |
+|AzureResourceGroup | The name of the Azure resource group the machine is a member of. |
+|AzureResourceName | The name of the Azure resource |
+|AzureLocation | The location of the Azure resource |
+|AzureUpdateDomain | The name of the Azure update domain |
+|AzureFaultDomain | The name of the Azure fault domain |
+|AzureVmId | The unique identifier of the Azure virtual machine |
+|AzureSize | The size of the Azure VM |
+|AzureImagePublisher | The name of the Azure VM publisher |
+|AzureImageOffering | The name of the Azure VM offer type |
+|AzureImageSku | The SKU of the Azure VM image |
+|AzureImageVersion | The version of the Azure VM image |
+|AzureCloudServiceName | The name of the Azure cloud service |
+|AzureCloudServiceDeployment | Deployment ID for the Cloud Service |
+|AzureCloudServiceRoleName | Cloud Service role name |
+|AzureCloudServiceRoleType | Cloud Service role type: *worker* or *web* |
+|AzureCloudServiceInstanceId | Cloud Service role instance ID |
+|AzureVmScaleSetName | The name of the virtual machine scale set |
+|AzureVmScaleSetDeployment | Virtual machine scale set deployment ID |
+|AzureVmScaleSetResourceId | The unique identifier of the virtual machine scale set resource.|
+|AzureVmScaleSetInstanceId | The unique identifier of the virtual machine scale set |
+|AzureServiceFabricClusterId | The unique identifier of the Azure Service Fabric cluster |
+|AzureServiceFabricClusterName | The name of the Azure Service Fabric cluster |
+
+### VMProcess records
+
+Records with a type of *VMProcess* have inventory data for TCP-connected processes on servers with the Dependency agent. These records have the properties in the following table:
+
+| Property | Description |
+|:--|:--|
+|TenantId | The unique identifier for the workspace |
+|SourceSystem | *Insights* |
+|TimeGenerated | Timestamp of the record (UTC) |
+|Computer | The computer FQDN |
+|AgentId | The unique ID of the Log Analytics agent |
+|Machine | Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It is of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId. |
+|Process | The unique identifier of the Service Map process. It is in the form of *p-{GUID}*. |
+|ExecutableName | The name of the process executable |
+|DisplayName | Process display name |
+|Role | Process role: *webserver*, *appServer*, *databaseServer*, *ldapServer*, *smbServer* |
+|Group | Process group name. Processes in the same group are logically related, e.g., part of the same product or system component. |
+|StartTime | The process pool start time |
+|FirstPid | The first PID in the process pool |
+|Description | The process description |
+|CompanyName | The name of the company |
+|InternalName | The internal name |
+|ProductName | The name of the product |
+|ProductVersion | The version of the product |
+|FileVersion | The version of the file |
+|ExecutablePath |The path of the executable |
+|CommandLine | The command line |
+|WorkingDirectory | The working directory |
+|Services | An array of services under which the process is executing |
+|UserName | The account under which the process is executing |
+|UserDomain | The domain under which the process is executing |
+|_ResourceId | The unique identifier for a process within the workspace |
++
+## Sample map queries
+
+### List all known machines
+
+```kusto
+VMComputer | summarize arg_max(TimeGenerated, *) by _ResourceId
+```
+
+### When was the VM last rebooted
+
+```kusto
+let Today = now(); VMComputer | extend DaysSinceBoot = Today - BootTime | summarize by Computer, DaysSinceBoot, BootTime | sort by BootTime asc
+```
+
+### Summary of Azure VMs by image, location, and SKU
+
+```kusto
+VMComputer | where AzureLocation != "" | summarize by Computer, AzureImageOffering, AzureLocation, AzureImageSku
+```
+
+### List the physical memory capacity of all managed computers
+
+```kusto
+VMComputer | summarize arg_max(TimeGenerated, *) by _ResourceId | project PhysicalMemoryMB, Computer
+```
+
+### List computer name, DNS, IP, and OS
+
+```kusto
+VMComputer | summarize arg_max(TimeGenerated, *) by _ResourceId | project Computer, OperatingSystemFullName, DnsNames, Ipv4Addresses
+```
+
+### Find all processes with "sql" in the command line
+
+```kusto
+VMProcess | where CommandLine contains_cs "sql" | summarize arg_max(TimeGenerated, *) by _ResourceId
+```
+
+### Find a machine (most recent record) by resource name
+
+```kusto
+search in (VMComputer) "m-4b9c93f9-bc37-46df-b43c-899ba829e07b" | summarize arg_max(TimeGenerated, *) by _ResourceId
+```
+
+### Find a machine (most recent record) by IP address
+
+```kusto
+search in (VMComputer) "10.229.243.232" | summarize arg_max(TimeGenerated, *) by _ResourceId
+```
+
+### List all known processes on a specified machine
+
+```kusto
+VMProcess | where Machine == "m-559dbcd8-3130-454d-8d1d-f624e57961bc" | summarize arg_max(TimeGenerated, *) by _ResourceId
+```
+
+### List all computers running SQL Server
+
+```kusto
+VMComputer | where AzureResourceName in ((search in (VMProcess) "*sql*" | distinct Machine)) | distinct Computer
+```
+
+### List all unique product versions of curl in my datacenter
+
+```kusto
+VMProcess | where ExecutableName == "curl" | distinct ProductVersion
+```
+
+### Create a computer group of all computers running CentOS
+
+```kusto
+VMComputer | where OperatingSystemFullName contains_cs "CentOS" | distinct Computer
+```
+
+### Bytes sent and received trends
+
+```kusto
+VMConnection | summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated,1hr), Computer | order by Computer desc | render timechart
+```
+
+### Which Azure VMs are transmitting the most bytes
+
+```kusto
+VMConnection | join kind=fullouter(VMComputer) on $left.Computer == $right.Computer | summarize count(BytesSent) by Computer, AzureVMSize | sort by count_BytesSent desc
+```
+
+### Link status trends
+
+```kusto
+VMConnection | where TimeGenerated >= ago(24hr) | where Computer == "acme-demo" | summarize dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h) | render timechart
+```
+
+### Connection failures trend
+
+```kusto
+VMConnection | where Computer == "acme-demo" | extend bythehour = datetime_part("hour", TimeGenerated) | project bythehour, LinksFailed | summarize failCount = count() by bythehour | sort by bythehour asc | render timechart
+```
+
+### Bound Ports
+
+```kusto
+VMBoundPort
+| where TimeGenerated >= ago(24hr)
+| where Computer == 'admdemo-appsvr'
+| distinct Port, ProcessName
+```
+
+### Number of open ports across machines
+
+```kusto
+VMBoundPort
+| where Ip != "127.0.0.1"
+| summarize by Computer, Machine, Port, Protocol
+| summarize OpenPorts=count() by Computer, Machine
+| order by OpenPorts desc
+```
+
+### Score processes in your workspace by the number of ports they have open
+
+```kusto
+VMBoundPort
+| where Ip != "127.0.0.1"
+| summarize by ProcessName, Port, Protocol
+| summarize OpenPorts=count() by ProcessName
+| order by OpenPorts desc
+```
+
+### Aggregate behavior for each port
+
+This query can then be used to score ports by activity, for example, ports with the most inbound/outbound traffic or ports with the most connections.
+
+```kusto
+//
+VMBoundPort
+| where Ip != "127.0.0.1"
+| summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived), LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated, LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
+| project-away TimeGenerated
+| order by Machine, Computer, Port, Ip, ProcessName
+```
+
+### Summarize the outbound connections from a group of machines
+
+```kusto
+// the machines of interest
+let machines = datatable(m: string) ["m-82412a7a-6a32-45a9-a8d6-538354224a25"];
+// map of ip to monitored machine in the environment
+let ips=materialize(VMComputer
+| summarize ips=makeset(todynamic(Ipv4Addresses)) by MonitoredMachine=AzureResourceName
+| mvexpand ips to typeof(string));
+// all connections to/from the machines of interest
+let out=materialize(VMConnection
+| where Machine in (machines)
+| summarize arg_max(TimeGenerated, *) by ConnectionId);
+// connections to localhost augmented with RemoteMachine
+let local=out
+| where RemoteIp startswith "127."
+| project ConnectionId, Direction, Machine, Process, ProcessName, SourceIp, DestinationIp, DestinationPort, Protocol, RemoteIp, RemoteMachine=Machine;
+// connections not to localhost augmented with RemoteMachine
+let remote=materialize(out
+| where RemoteIp !startswith "127."
+| join kind=leftouter (ips) on $left.RemoteIp == $right.ips
+| summarize by ConnectionId, Direction, Machine, Process, ProcessName, SourceIp, DestinationIp, DestinationPort, Protocol, RemoteIp, RemoteMachine=MonitoredMachine);
+// the remote machines to/from which we have connections
+let remoteMachines = remote | summarize by RemoteMachine;
+// all augmented connections
+(local)
+| union (remote)
+//Take all outbound records but only inbound records that come from either //unmonitored machines or monitored machines not in the set for which we are computing dependencies.
+| where Direction == 'outbound' or (Direction == 'inbound' and RemoteMachine !in (machines))
+| summarize by ConnectionId, Direction, Machine, Process, ProcessName, SourceIp, DestinationIp, DestinationPort, Protocol, RemoteIp, RemoteMachine
+// identify the remote port
+| extend RemotePort=iff(Direction == 'outbound', DestinationPort, 0)
+// construct the join key we'll use to find a matching port
+| extend JoinKey=strcat_delim(':', RemoteMachine, RemoteIp, RemotePort, Protocol)
+// find a matching port
+| join kind=leftouter (VMBoundPort
+| where Machine in (remoteMachines)
+| summarize arg_max(TimeGenerated, *) by PortId
+| extend JoinKey=strcat_delim(':', Machine, Ip, Port, Protocol)) on JoinKey
+// aggregate the remote information
+| summarize Remote=makeset(iff(isempty(RemoteMachine), todynamic('{}'), pack('Machine', RemoteMachine, 'Process', Process1, 'ProcessName', ProcessName1))) by ConnectionId, Direction, Machine, Process, ProcessName, SourceIp, DestinationIp, DestinationPort, Protocol
+```
+
+## Performance records
+Records with a type of *InsightsMetrics* have performance data from the guest operating system of the virtual machine. These records have the properties in the following table:
++
+| Property | Description |
+|:--|:--|
+|TenantId | Unique identifier for the workspace |
+|SourceSystem | *Insights* |
+|TimeGenerated | Time the value was collected (UTC) |
+|Computer | The computer FQDN |
+|Origin | *vm.azm.ms* |
+|Namespace | Category of the performance counter |
+|Name | Name of the performance counter |
+|Val | Collected value |
+|Tags | Related details about the record. See the table below for tags used with different record types. |
+|AgentId | Unique identifier for each computer's agent |
+|Type | *InsightsMetrics* |
+|_ResourceId | Resource ID of the virtual machine |
+
+The performance counters currently collected into the *InsightsMetrics* table are listed in the following table:
+
+| Namespace | Name | Description | Unit | Tags |
+|:|:|:|:|:|
+| Computer | Heartbeat | Computer Heartbeat | | |
+| Memory | AvailableMB | Memory Available Bytes | Megabytes | memorySizeMB - Total memory size|
+| Network | WriteBytesPerSecond | Network Write Bytes Per Second | BytesPerSecond | NetworkDeviceId - Id of the device<br>bytes - Total sent bytes |
+| Network | ReadBytesPerSecond | Network Read Bytes Per Second | BytesPerSecond | networkDeviceId - Id of the device<br>bytes - Total received bytes |
+| Processor | UtilizationPercentage | Processor Utilization Percentage | Percent | totalCpus - Total CPUs |
+| LogicalDisk | WritesPerSecond | Logical Disk Writes Per Second | CountPerSecond | mountId - Mount ID of the device |
+| LogicalDisk | WriteLatencyMs | Logical Disk Write Latency Millisecond | MilliSeconds | mountId - Mount ID of the device |
+| LogicalDisk | WriteBytesPerSecond | Logical Disk Write Bytes Per Second | BytesPerSecond | mountId - Mount ID of the device |
+| LogicalDisk | TransfersPerSecond | Logical Disk Transfers Per Second | CountPerSecond | mountId - Mount ID of the device |
+| LogicalDisk | TransferLatencyMs | Logical Disk Transfer Latency Millisecond | MilliSeconds | mountId - Mount ID of the device |
+| LogicalDisk | ReadsPerSecond | Logical Disk Reads Per Second | CountPerSecond | mountId - Mount ID of the device |
+| LogicalDisk | ReadLatencyMs | Logical Disk Read Latency Millisecond | MilliSeconds | mountId - Mount ID of the device |
+| LogicalDisk | ReadBytesPerSecond | Logical Disk Read Bytes Per Second | BytesPerSecond | mountId - Mount ID of the device |
+| LogicalDisk | FreeSpacePercentage | Logical Disk Free Space Percentage | Percent | mountId - Mount ID of the device |
+| LogicalDisk | FreeSpaceMB | Logical Disk Free Space Bytes | Megabytes | mountId - Mount ID of the device<br>diskSizeMB - Total disk size |
+| LogicalDisk | BytesPerSecond | Logical Disk Bytes Per Second | BytesPerSecond | mountId - Mount ID of the device |
++
+## Next steps
+
+* If you are new to writing log queries in Azure Monitor, review [how to use Log Analytics](../logs/log-analytics-tutorial.md) in the Azure portal to write log queries.
+
+* Learn about [writing search queries](../logs/get-started-queries.md).
azure-monitor Vminsights Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-maps.md
description: Map is a feature of VM insights. It automatically discovers applica
Previously updated : 03/20/2020 Last updated : 06/08/2022
azure-monitor Vminsights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-optout.md
description: This article describes how to stop monitoring your virtual machines
Previously updated : 03/12/2020 Last updated : 06/08/2022
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
description: Overview of VM insights, which monitors the health and performance
Previously updated : 07/22/2020- Last updated : 06/08/2022 # Overview of VM insights VM insights monitors the performance and health of your virtual machines and virtual machine scale sets, including their running processes and dependencies on other resources. It can help deliver predictable performance and availability of vital applications by identifying performance bottlenecks and network issues and can also help you understand whether an issue is related to other dependencies.
+> [!NOTE]
+> VM insights does not currently support [Azure Monitor agent](../agents/azure-monitor-agent-overview.md). You can
+ VM insights supports Windows and Linux operating systems on the following machines: - Azure virtual machines
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
description: Performance is a feature of the VM insights that automatically disc
Previously updated : 05/31/2020 Last updated : 06/08/2022
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-troubleshoot.md
description: Troubleshoot VM insights installation.
Previously updated : 03/15/2021 Last updated : 06/08/2022
azure-netapp-files Convert Nfsv3 Nfsv41 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/convert-nfsv3-nfsv41.md
na Previously updated : 12/14/2021 Last updated : 06/06/2022 # Convert an NFS volume between NFSv3 and NFSv4.1
This section shows you how to convert the NFSv3 volume to NFSv4.1.
2. Convert the NFS version: 1. In the Azure portal, navigate to the NFS volume that you want to convert.
- 2. Click **Edit**.
+ 2. Select **Edit**.
+ 3. In the Edit window that appears, select **NFSv4.1** in the **Protocol type** pulldown. ![screenshot that shows the Edit menu with the Protocol Type field](../media/azure-netapp-files/edit-protocol-type.png)
This section shows you how to convert the NFSv4.1 volume to NFSv3.
> [!IMPORTANT] > Converting a volume from NFSv4.1 to NFSv3 will result in all NFSv4.1 features such as ACLs and file locking to become unavailable.
-1. Before converting the volume, unmount it from the clients in preparation. See [Mount or unmount a volume](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md).
-
- Example:
- `sudo umount /path/to/vol1`
+1. Before converting the volume:
+ 1. Unmount it from the clients in preparation. See [Mount or unmount a volume](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md).
+ Example:
+ `sudo umount /path/to/vol1`
+ 2. Change the export policy to read-only. See [Configure export policy for NFS or dual-protocol volumes](azure-netapp-files-configure-export-policy.md).
2. Convert the NFS version: 1. In the Azure portal, navigate to the NFS volume that you want to convert.
- 2. Click **Edit**.
+ 2. Select **Edit**.
+ 3. In the Edit window that appears, select **NFSv3** in the **Protocol type** pulldown. ![screenshot that shows the Edit menu with the Protocol Type field](../media/azure-netapp-files/edit-protocol-type.png)
This section shows you how to convert the NFSv4.1 volume to NFSv3.
Example: `mount -v | grep /path/to/vol1`
- `vol1:/path/to/vol1 on /path type nfs (rw,intr,tcp,nfsvers=3,rsize=16384,wsize=16384,addr=192.168.1.1)`
+ `vol1:/path/to/vol1 on /path type nfs (rw,intr,tcp,nfsvers=3,rsize=16384,wsize=16384,addr=192.168.1.1)`.
+
+7. Change the read-only export policy back to the original export policy. See [Configure export policy for NFS or dual-protocol volumes](azure-netapp-files-configure-export-policy.md).
-7. Verify access using root and non-root users.
+8. Verify access using root and non-root users.
## Next steps
azure-video-indexer Compare Video Indexer With Media Services Presets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compare-video-indexer-with-media-services-presets.md
# Compare Azure Media Services v3 presets and Azure Video Indexer
-This article compares the capabilities of **Azure Video Indexer (formerly Video Indexer) APIs** and **Media Services v3 APIs**.
+This article compares the capabilities of **Azure Video Indexer APIs** and **Media Services v3 APIs**.
Currently, there is an overlap between features offered by the [Azure Video Indexer APIs](https://api-portal.videoindexer.ai/) and the [Media Services v3 APIs](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01/Encoding.json). The following table offers the current guideline for understanding the differences and similarities.
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
The article also covers [Linking an Azure Video Indexer account to Azure Governm
If the connection to Azure failed, you can attempt to troubleshoot the problem by connecting manually. > [!NOTE]
-> It's mandatory to have the following three accounts in the same region: the Azure Video Indexer account that you're connecting with the Media Services account, as well as the Azure storage account connected to the same Media Services account.
+> It's mandatory to have the following three accounts in the same region: the Azure Video Indexer account that you're connecting with the Media Services account, as well as the Azure storage account connected to the same Media Services account. When you create an Azure Video Indexer account and connect it to Media Services, the media and metadata files are stored in the Azure storage account associated with that Media Services account.
### Create and configure a Media Services account
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
The following schemas are in use by Azure Video Indexer
## Next steps <!-- replace below with the proper link to your main monitoring service article -->-- See [Monitoring Azure Azure Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure Video Indexer.
+- See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure Video Indexer.
- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
Title: How to enable network security
-description: This article gives an overview of the Azure Video Indexer (formerly Video Analyzer for Media) network security options.
+description: This article gives an overview of the Azure Video Indexer network security options.
Last updated 04/11/2022
# NSG service tags for Azure Video Indexer
-Azure Video Indexer (formerly Video Analyzer for Media) is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (i.e AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](/azure/virtual-network/service-tags-overview). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
+Azure Video Indexer is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (i.e AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](/azure/virtual-network/service-tags-overview). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
## Get started with service tags
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
Last updated 12/17/2021
# Index your videos stored on OneDrive
-This article shows how to index videos stored on OneDrive by using the Azure Video Indexer (formerly Azure Azure Video Indexer) website.
+This article shows how to index videos stored on OneDrive by using the Azure Video Indexer website.
## Supported file formats
This parameter specifies the URL of the video or audio file to be indexed. If th
### Code sample
+> [!NOTE]
+> The following sample is intended for Classic accounts only and isn't compatible with ARM accounts. For an updated sample for ARM, see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ApiUsage/ArmBased/Program.cs).
+ The following C# code snippets demonstrate the usage of all the Azure Video Indexer APIs together. ### [Classic account](#tab/With-classic-account/)
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
Azure Video Indexer makes an inference of main topics from transcripts. When pos
} ] },
-` ` `
+ ``` ## Next steps
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Title: What is Azure Video Indexer? description: This article gives an overview of the Azure Video Indexer service. Previously updated : 02/15/2022 Last updated : 06/09/2022
Azure Video Indexer is a cloud application, part of Azure Applied AI Services, built on Azure Media Services and Azure Cognitive Services (such as the Face, Translator, Computer Vision, and Speech). It enables you to extract the insights from your videos using Azure Video Indexer video and audio models.
-To start extracting insights with Azure Video Indexer, you need to create an account and upload videos. When you upload your videos to Azure Video Indexer, it analyses both visuals and audio by running different AI models. As Azure Video Indexer analyzes your video, the insights that are extracted by the AI models.
-
-When you create an Azure Video Indexer account and connect it to Media Services, the media and metadata files are stored in the Azure storage account associated with that Media Services account. For more information, see [Create an Azure Video Indexer account connected to Azure](connect-to-azure.md).
-
-The following diagram is an illustration and not a technical explanation of how Azure Video Indexer works in the backend.
+Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Below is an illustration of the audio and video analysis performed by Azure Video Indexer in the background.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-overview/model-chart.png" alt-text="Azure Video Indexer flow diagram":::
+To start extracting insights with Azure Video Indexer, you need to [create an account](connect-to-azure.md) and upload videos. See the [How can I get started with Azure Video Indexer](#how-can-i-get-started-with-azure-video-indexer) section below.
+ ## Compliance, Privacy and Security As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
To learn about compliance, privacy and security in Azure Video Indexer please vi
Azure Video Indexer's insights can be applied to many scenarios, among them are: * *Deep search*: Use the insights extracted from the video to enhance the search experience across a video library. For example, indexing spoken words and faces can enable the search experience of finding moments in a video where a person spoke certain words or when two people were seen together. Search based on such insights from videos is applicable to news agencies, educational institutes, broadcasters, entertainment content owners, enterprise LOB apps, and in general to any industry that has a video library that users need to search against.
-* *Content creation*: Create trailers, highlight reels, social media content, or news clips based on the insights Azure Video Indexer extracts from your content. Keyframes, scenes markers, and timestamps for the people and label appearances make the creation process much smoother and easier, and allows you to get to the parts of the video you need for the content you're creating.
+* *Content creation*: Create trailers, highlight reels, social media content, or news clips based on the insights Azure Video Indexer extracts from your content. Keyframes, scenes markers, and timestamps of the people and label appearances make the creation process smoother and easier, enabling you to easily get to the parts of the video you need when creating content.
* *Accessibility*: Whether you want to make your content available for people with disabilities or if you want your content to be distributed to different regions using different languages, you can use the transcription and translation provided by Azure Video Indexer in multiple languages. * *Monetization*: Azure Video Indexer can help increase the value of videos. For example, industries that rely on ad revenue (news media, social media, and so on) can deliver relevant ads by using the extracted insights as additional signals to the ad server. * *Content moderation*: Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Before you begin the prerequisites, review the [Performance best practices](#per
## Supported regions
-Azure VMware Solution currently supports the following regions: East US, Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, France Central, Germany West Central, Japan West, North Central US, North Europe, Southeast Asia, Switzerland West, UK South, UK West, US South Central, and West US. The list of supported regions will expand as the preview progresses.
+Azure VMware Solution currently supports the following regions:
+
+**America**: East US, West US, Central US, South Central US, North Central US, Canada East, Canada Central.
+
+**Europe**: North Europe, UK West, UK South, France Central, Switzerland West, Germany West Central.
+
+**Asia**: Southeast Asia, Japan West.
+
+**Australia**: Australia East, Australia Southeast.
+
+**Brazil**: Brazil South.
+
+The list of supported regions will expand as the preview progresses.
## Performance best practices
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
Last updated 05/12/2022
In this article, you'll learn how to enable Public IP to the NSX Edge for your Azure VMware Solution. >[!TIP]
->Before you enable Internet access to your Azure VMware Solution, review the [Internet connectivity design considerations](concepts-design-public-internet-access.md).
+>Before you enable Internet access to your Azure VMware Solution, review the [Internet connectivity design considerations](concepts-design-public-internet-access.md).
Public IP to the NSX Edge is a feature in Azure VMware Solution that enables inbound and outbound internet access for your Azure VMware Solution environment. The Public IP is configured in Azure VMware Solution through the Azure portal and the NSX-T Data center interface within your Azure VMware Solution private cloud. With this capability, you have the following features:
The architecture shows Internet access to and from your Azure VMware Solution pr
:::image type="content" source="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip.png" alt-text="Diagram that shows architecture of Internet access to and from your Azure VMware Solution Private Cloud using a Public IP directly to the NSX Edge." border="false" lightbox="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip.png"::: ## Configure a Public IP in the Azure portal
-1. Log in to the Azure portal.
+1. Log on to the Azure portal.
1. Search for and select Azure VMware Solution. 2. Select the Azure VMware Solution private cloud. 1. In the left navigation, under **Workload Networking**, select **Internet connectivity**. 4. Select the **Connect using Public IP down to the NSX-T Edge** button. >[!TIP]
->Before selecting a Public IP, ensure you understand the implications to your existing environment. For more information, see [Internet connectivity design considerations](concepts-design-public-internet-access.md)
+>Before selecting a Public IP, ensure you understand the implications to your existing environment. For more information, see [Internet connectivity design considerations](concepts-design-public-internet-access.md).
5. Select **Public IP**. :::image type="content" source="media/public-ip-nsx-edge/public-ip-internet-connectivity.png" alt-text="Diagram that shows how to select public IP to the NSX Edge":::
For example, the following rule is set to Match External Address, and this setti
If **Match Internal Address** was specified, the destination would be the internal or private IP address of the VM. For more information on the NSX-T Gateway Firewall see the [NSX-T Gateway Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html)
-The Distributed Firewall may also be used to filter traffic to VMs. This feature is outside the scope of this document. The [NSX-T Distributed Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html) .
+The Distributed Firewall can also be used to filter traffic to VMs. This feature is outside the scope of this document. For more information, see the [NSX-T Distributed Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html).
+
+To enable this feature for your subscription, register the `PIPOnNSXEnabled` flag and follow these steps to [set up the preview feature in your Azure subscription](https://docs.microsoft.com/azure/azure-resource-manager/management/preview-features?tabs=azure-portal).
## Next steps
azure-web-pubsub Reference Rest Api Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-rest-api-data-plane.md
+
+ Title: Azure Web PubSub service data plane REST API reference overview
+description: Describes the REST APIs Azure Web PubSub supports to manage the WebSocket connections and send messages to them.
++++ Last updated : 06/09/2022++
+# Azure Web PubSub service data plane REST API reference
+
+![Diagram showing the Web PubSub service workflow.](./media/concept-service-internals/workflow.png)
+
+As illustrated by the workflow graph above, and described in detail in [internals](./concept-service-internals.md), your app server can send messages to clients or manage the connected clients by using REST APIs exposed by the Web PubSub service. This article describes the REST APIs in detail.
+
+## Using REST API
+
+### Authenticate via Azure Web PubSub Service AccessKey
+
+In each HTTP request, an authorization header with a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is required to authenticate with Azure Web PubSub Service.
+
+<a name="signing"></a>
+#### Signing Algorithm and Signature
+
+`HS256`, namely HMAC-SHA256, is used as the signing algorithm.
+
+You should use the `AccessKey` in Azure Web PubSub Service instance's connection string to sign the generated JWT token.
+
+#### Claims
+
+The following claims are required in the JWT token.
+
+Claim Type | Is Required | Description
+---|---|---
+`aud` | true | Should be the **SAME** as your HTTP request url, trailing slash and query parameters not included. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub`.
+`exp` | true | Epoch time at which this token expires.
+
+Pseudo code in JavaScript, assuming the `jsonwebtoken` npm package (`connectionString` and `request` are placeholders for your own values):
+```js
+// assumes the 'jsonwebtoken' npm package
+const jwt = require("jsonwebtoken");
+
+const bearerToken = jwt.sign({}, connectionString.accessKey, {
+  audience: request.url,
+  expiresIn: "1h",
+  algorithm: "HS256",
+});
+```
+
+### Authenticate via Azure Active Directory Token (Azure AD Token)
+
+Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
+
+**The difference is** that, in this scenario, the JWT token is generated by Azure Active Directory.
+
+[Learn how to generate Azure AD Tokens](/azure/active-directory/develop/reference-v2-libraries)
+
+You could also use **Role Based Access Control (RBAC)** to authorize the request from your server to Azure Web PubSub Service.
+
+[Learn how to configure Role Based Access Control roles for your resource](./howto-authorize-from-application.md#add-role-assignments-on-azure-portal)
+
+## APIs
+
+| Operation Group | Description |
+|--|-|
+|[Service Status](/rest/api/webpubsub/dataplane/health-api)| Provides operations to check the service status |
+|[Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub)| Provides operations to manage the connections and send messages to them. |
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-technical-details.md
These are top scenarios involving combinations of resources, features and Cloud
| Migration of empty Cloud Service (Cloud Service with no deployment) | Not supported. | | Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins are not recommended](./deploy-prerequisite.md#required-service-definition-file-csdef-updates) for use on Cloud Services (extended support).| | Virtual networks with both PaaS and IaaS deployment |Not Supported <br><br> Move either the PaaS or IaaS deployments into a different virtual network. This will cause downtime. |
-Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge). | The migration will complete, but the role sizes will be updated to use modern role sizes. There is no change in cost or SKU properties and virtual machine will not be rebooted for this change. Update all deployment artifacts to reference these new modern role sizes. For more information, see [Available VM sizes](available-sizes.md)|
+| Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge). | The role sizes need to be updated before migration. Update all deployment artifacts to reference these new modern role sizes. For more information, see [Available VM sizes](available-sizes.md). |
| Migration of Cloud Service to different virtual network | Not supported <br><br> 1. Move the deployment to a different classic virtual network before migration. This will cause downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager <br>2. Move the Cloud Service to a new virtual network. This will cause downtime. | | Cloud Service in a virtual network but does not have an explicit subnet assigned | Not supported. Mitigation involves moving the role into a subnet, which requires a role restart (downtime) |
As part of migration, the resource names are changed, and few Cloud Services fea
Validate is designed to be quick. Prepare is longest running and takes some time depending on total number of role instances being migrated. Abort and commit can also take time but will take less time compared to prepare. All operations will time out after 24 hrs. ## Next steps
-For assistance migrating your Cloud Services (classic) deployment to Cloud Services (extended support) see our [Support and troubleshooting](support-help.md) landing page.
+For assistance migrating your Cloud Services (classic) deployment to Cloud Services (extended support) see our [Support and troubleshooting](support-help.md) landing page.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following table lists the released languages and public preview languages.
|English (United Kingdom)|`en-GB`<sup>Public preview</sup> | |English (United States)|`en-US`<sup>Generally available</sup>| |French (France)|`fr-FR`<sup>Public preview</sup> |
+|German (Germany)|`de-DE`<sup>Public preview</sup> |
|Spanish (Spain)|`es-ES`<sup>Public preview</sup> | > [!NOTE]
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/create-sas-tokens.md
Title: Create shared access signature (SAS) tokens for containers and blobs with Microsoft Storage Explorer
+ Title: Create shared access signature (SAS) tokens for storage containers and blobs
description: How to create Shared Access Signature tokens (SAS) for containers and blobs with Microsoft Storage Explorer and the Azure portal.
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
If you try to access the resultUrl directly, you will get a 404 error. You must
```bash curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
- "ImportJobOptions": {"fileUri": "FILE-URI-PATH"}
+ "fileUri": "FILE-URI-PATH"
}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/:import?api-version=2021-10-01&format=tsv' ```
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of features, which are currently available i
| | Place a group call with PSTN participants | ✔️ | | | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | | | Dial-out from a group call as a PSTN participant | ✔️ |
+| | Support for early media | ❌ |
| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | | Device Management | Ask for permission to use audio and/or video | ✔️ | | | Get camera list | ✔️ |
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following list presents the set of features which are currently available in
| | Place a group call with PSTN participants | ✔️ | ✔️ | ✔️ | ✔️ | | | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | ✔️ | ✔️ | ✔️ | | | Dial-out from a group call as a PSTN participant | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Support for early media | ❌ | ✔️ | ✔️ | ✔️ |
| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | ✔️ | ✔️ | ✔️ | | Device Management | Ask for permission to use audio and/or video | ✔️ | ✔️ | ✔️ | ✔️ | | | Get camera list | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Browser Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/browser-support.md
Title: How to verify if your application is running in a web browser supported by Azure Communication Services
+ Title: Verify if a web browser is supported
+ description: Learn how to get current browser environment details using the Azure Communication Services Calling SDK for JavaScript -+ Previously updated : 05/27/2022- ++ Last updated : 06/08/2021++
+#Customer intent: As a developer, I can verify that a browser an end user is trying to do a call on is supported by Azure Communication Services.
+ # How to verify if your application is running in a web browser supported by Azure Communication Services
confidential-computing Confidential Nodes Aks Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-addon.md
The SGX Device plugin implements the Kubernetes device plugin interface for Encl
## PSW with SGX quote helper
-Enclave applications that do remote attestation need to generate a quote. The quote provides cryptographic proof of the identity and the state of the application, along with the enclave's host environment. Quote generation relies on certain trusted software components from Intel, which are part of the SGX Platform Software Components (PSW/DCAP). This PSW is packaged as a daemon set that runs per node. You can use the PSW when requesting attestation quote from enclave apps. Using the AKS provided service helps better maintain the compatibility between the PSW and other SW components in the host. Read the feature details below.
+Enclave applications that do remote attestation need to generate a quote. The quote provides cryptographic proof of the identity and the state of the application, along with the enclave's host environment. Quote generation relies on certain trusted software components from Intel, which are part of the SGX Platform Software Components (PSW/DCAP). This PSW is packaged as a daemon set that runs per node. You can use PSW when requesting attestation quote from enclave apps. Using the AKS provided service helps better maintain the compatibility between the PSW and other SW components in the host. Read the feature details below.
[Enclave applications](confidential-computing-enclaves.md) that do remote attestation require a generated quote. This quote provides cryptographic proof of the application's identity, state, and running environment. The generation requires trusted software components that are part of Intel's PSW.
Enclave applications that do remote attestation need to generate a quote. The qu
> [!NOTE] > This feature is only required for DCsv2/DCsv3 VMs that use specialized Intel SGX hardware.
-Intel supports two attestation modes to run the quote generation. For how to choose which type, see the [attestation type differences](#attestation-type-differences).
+Intel supports two attestation modes to run the quote generation. For how to choose which type, see the [attestation type differences](#attestation-type-differences).
- **in-proc**: hosts the trusted software components inside the enclave application process. This method is useful when you are performing local attestation (between 2 enclave apps in a single VM node) - **out-of-proc**: hosts the trusted software components outside of the enclave application. This is a preferred method when performing remote attestation.
-SGX applications built using Open Enclave SDK by default use in-proc attestation mode. SGX-based applications allow out-of-proc and require extra hosting. These applications expose the required components such as Architectural Enclave Service Manager (AESM), external to the application.
+SGX applications are built using Open Enclave SDK by default use in-proc attestation mode. SGX-based applications allow out-of-proc and require extra hosting. These applications expose the required components such as Architectural Enclave Service Manager (AESM), external to the application.
It's highly recommended to use this feature. This feature enhances uptime for your enclave apps during Intel Platform updates or DCAP driver updates.
It's highly recommended to use this feature. This feature enhances uptime for yo
No updates are required for quote generation components of PSW for each containerized application.
-With out-of-proc, container owners donΓÇÖt need to manage updates within their container. Container owners instead rely on the provided interface that invokes the centralized service outside of the container. The provider update sand manages this service.
+With out-of-proc, container owners don't need to manage updates within their container. Container owners instead rely on the provided interface that invokes the centralized service outside of the container.
-For out-of-proc, there's not a concern of failures because of out-of-date PSW components. The quote generation involves the trusted SW components - Quoting Enclave (QE) & Provisioning Certificate Enclave (PCE), which are part of the trusted computing base (TCB). These SW components must be up to date to maintain the attestation requirements. The provider manages the updates to these components. Customers never have to deal with attestation failures because of out-of-date trusted SW components within their container.
+For out-of-proc, there's no concern of failures because of out-of-date PSW components. The quote generation involves the trusted SW components - Quoting Enclave (QE) & Provisioning Certificate Enclave (PCE), which are part of the trusted computing base (TCB). These SW components must be up to date to maintain the attestation requirements. The provider manages the updates to these components. Customers never have to deal with attestation failures because of out-of-date trusted SW components within their container.
Out-of-proc attestation makes better use of EPC memory. In in-proc attestation mode, each enclave application instantiates its own copy of QE and PCE for remote attestation. With out-of-proc, the container doesn't host those enclaves, and doesn't consume enclave memory from the container quota.
The out-of-proc attestation model works for confidential workloads. The quote re
![Diagram of quote requestor and quote generation interface.](./media/confidential-nodes-out-of-proc-attestation/aesmmanager.png)
-The abstract model applies to confidential workload scenarios. This model uses already available AESM service. AESM is containerized and deployed as a daemon set across the Kubernetes cluster. Kubernetes guarantees a single instance of an AESM service container, wrapped in a pod, to be deployed on each agent node. The new SGX Quote daemon set has a dependency on the `sgx-device-plugin` daemon set, since the AESM service container would request EPC memory from `sgx-device-plugin` for launching QE and PCE enclaves.
+The abstract model applies to confidential workload scenarios. This model uses the already available AESM service. AESM is containerized and deployed as a daemon set across the Kubernetes cluster. Kubernetes guarantees a single instance of an AESM service container, wrapped in a pod, to be deployed on each agent node. The new SGX Quote daemon set has a dependency on the `sgx-device-plugin` daemon set, since the AESM service container would request EPC memory from `sgx-device-plugin` for launching QE and PCE enclaves.
Each container needs to opt in to use out-of-proc quote generation by setting the environment variable `SGX_AESM_ADDR=1` during creation. The container also must include the package `libsgx-quote-ex`, which directs the request to the default Unix domain socket. An application can still use the in-proc attestation as before. However, you can't simultaneously use both in-proc and out-of-proc within an application. The out-of-proc infrastructure is available by default and consumes resources. > [!NOTE]
-> If you are using a Intel SGX wrapper software(OSS/ISV) to run you unmodified containers the attestation interaction with hardware is typically handled for your higher level apps. Please refer to the attestation implementation per provider.
+> If you are using Intel SGX wrapper software (OSS/ISV) to run your unmodified containers, the attestation interaction with hardware is typically handled for your higher-level apps. Refer to the attestation implementation per provider.
### Sample implementation
-The below docker file is a sample for an Open Enclave-based application. Set the `SGX_AESM_ADDR=1` environment variable in the Docker file. Or, set the variable in the deployment file. Follow this sample for the Docker file and deployment YAML details.
+By default, this service isn't enabled for your AKS cluster with the "confcom" add-on. Update the add-on with the following command:
+
+```azurecli
+az aks addon update --addon confcom --name "YourAKSClusterName" --resource-group "YourResourceGroup" --enable-sgxquotehelper
+```
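To confirm that the add-on was updated, you can list the add-ons enabled on the cluster; the cluster and resource group names below are placeholders.

```azurecli
# List the add-ons enabled on the cluster and check that confcom is enabled
az aks addon list --name "YourAKSClusterName" --resource-group "YourResourceGroup" --output table
```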
+Once the service is up, use the Docker sample below for an Open Enclave-based application to validate the flow. Set the `SGX_AESM_ADDR=1` environment variable in the Docker file, or set the variable in the deployment file, as sketched after the sample. Follow this sample for the Docker file and deployment YAML details.
> [!Note]
-> The **libsgx-quote-ex** package from Intel needs to be packaged in the application container for out-of-proc attestation to work properly.
+> The **libsgx-quote-ex** package from Intel needs to be packaged in the application container for out-of-proc attestation to work properly. The instructions below have the details.
```yaml # Refer to Intel_SGX_Installation_Guide_Linux for detail
RUN apt-get update && apt-get install -y \
WORKDIR /opt/openenclave/share/openenclave/samples/remote_attestation RUN . /opt/openenclave/share/openenclave/openenclaverc \ && make build
-# this sets the flag for out of proc attestation mode. alternatively you can set this flag on the deployment files
+# this sets the flag for out of proc attestation mode, alternatively you can set this flag on the deployment files
ENV SGX_AESM_ADDR=1 CMD make run
spec:
path: /var/run/aesmd ```
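If you set the flag in the deployment file rather than the Docker file, a minimal pod spec along these lines should work; the pod name and image reference are placeholders rather than values from the sample.

```yaml
# Minimal sketch: opt a container in to out-of-proc quote generation via the deployment manifest
apiVersion: v1
kind: Pod
metadata:
  name: sgx-remote-attestation-sample
spec:
  containers:
  - name: sgx-app
    image: <your-registry>/<your-enclave-app-image>
    env:
    - name: SGX_AESM_ADDR   # directs quote requests to the AESM service running outside the container
      value: "1"
```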
+The deployment should succeed and allow your apps to perform remote attestation using the SGX Quote Helper service.
++ ## Next Steps - [Set up Confidential Nodes (DCsv2/DCsv3-Series) on AKS](./confidential-enclave-nodes-aks-get-started.md)
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
Previously updated : 06/07/2022 Last updated : 06/09/2022 zone_pivot_groups: azure-cli-or-portal
az containerapp env create `
-> [!NOTE]
-> As you call `az conatinerapp create` to create the container app inside your environment, make sure the value for the `--image` parameter is in lower case.
- The following table describes the parameters used in for `containerapp env create`. | Parameter | Description |
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
Previously updated : 06/07/2022 Last updated : 06/09/2022 zone_pivot_groups: azure-cli-or-portal
az containerapp env create `
-> [!NOTE]
-> As you call `az containerapp create` to create the container app inside your environment, make sure the value for the `--image` parameter is in lower case.
- The following table describes the parameters used in `containerapp env create`. | Parameter | Description |
container-registry Container Registry Import Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-import-images.md
az acr repository show-manifests \
--repository hello-world ```
+To import an artifact by digest without adding a tag:
+
+```azurecli
+az acr import \
+ --name myregistry \
+ --source docker.io/library/hello-world@sha256:abc123 \
+ --repository hello-world
+```
+ If you have a [Docker Hub account](https://www.docker.com/pricing), we recommend that you use the credentials when importing an image from Docker Hub. Pass the Docker Hub user name and the password or a [personal access token](https://docs.docker.com/docker-hub/access-tokens/) as parameters to `az acr import`. The following example imports a public image from the `tensorflow` repository in Docker Hub, using Docker Hub credentials: ```azurecli
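# Sketch only: the registry name is a placeholder, and the Docker Hub user name and token must be your own values
az acr import \
  --name myregistry \
  --source docker.io/tensorflow/tensorflow:latest-gpu \
  --image tensorflow:latest-gpu \
  --username <Docker Hub user name> \
  --password <Docker Hub token>
```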
cosmos-db Cassandra Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-partitioning.md
When data is returned, it is sorted by the clustering key, as expected in Apache
:::image type="content" source="./media/cassandra-partitioning/select-from-pk.png" alt-text="Screenshot that shows the returned data that is sorted by the clustering key."::: > [!WARNING]
-> When querying data, if you want to filter *only* on the partition key value element of a compound primary key (as is the case above), ensure that you *explicitly add a secondary index on the partition key*:
+> When querying data in a table that has a compound primary key, if you want to filter on the partition key *and* any other non-indexed fields aside from the clustering key, ensure that you *explicitly add a secondary index on the partition key*:
> > ```shell > CREATE INDEX ON uprofile.user (user);
cosmos-db Secondary Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/secondary-indexing.md
It's not advised to create an index on a frequently updated column. It is pruden
> - Clustering keys > [!WARNING]
-> If you have a [compound primary key](cassandra-partitioning.md#compound-primary-key) in your table, and you want to filter *only* on the partition key value element of the compound primary key, please ensure that you *explicitly add a secondary index on the partition key*. Azure Cosmos DB Cassandra API does not apply indexes to partition keys by default, and the index in this scenario may significantly improve query performance. Review our article on [partitioning](cassandra-partitioning.md) for more information.
+> Partition keys are not indexed by default in the Cassandra API. If you have a [compound primary key](cassandra-partitioning.md#compound-primary-key) in your table, and you filter on the partition key and clustering key, or on the partition key alone, queries behave as expected. However, if you filter on the partition key and any other non-indexed field aside from the clustering key, the query results in a partition key fan-out - even if the other non-indexed fields have a secondary index. If you have a compound primary key in your table and you want to filter on both the partition key value element of the compound primary key and another field that isn't the partition key or clustering key, make sure that you explicitly add a secondary index on the *partition key*. The index in this scenario should significantly improve query performance, even if the other non-partition key and non-clustering key fields have no index. Review our article on [partitioning](cassandra-partitioning.md) for more information.
## Indexing example
cosmos-db Sql Api Dotnet V2sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v2sdk-samples.md
Title: 'Azure Cosmos DB: .NET examples for the SQL API' description: Find C# .NET examples on GitHub for common tasks using the Azure Cosmos DB SQL API, including CRUD operations.--+++
cosmos-db Sql Api Dotnet V3sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v3sdk-samples.md
Title: 'Azure Cosmos DB: .NET (Microsoft.Azure.Cosmos) examples for the SQL API' description: Find the C# .NET v3 SDK examples on GitHub for common tasks by using the Azure Cosmos DB SQL API.--+++ Last updated 05/02/2020 - # Azure Cosmos DB .NET v3 SDK (Microsoft.Azure.Cosmos) examples for the SQL API
cost-management-billing Consumption Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/consumption-api-overview.md
# Azure consumption API overview
-The Azure Consumption APIs give you programmatic access to cost and usage data for your Azure resources. These APIs currently only support Enterprise Enrollments and Web Direct Subscriptions (with a few exceptions). The APIs are continually updated to support other types of Azure subscriptions.
+The Azure Consumption APIs give you programmatic access to cost and usage data for your Azure resources. These APIs currently only support Enterprise Enrollments, Web Direct Subscriptions (with a few exceptions), and CSP Azure plan subscriptions. The APIs are continually updated to support other types of Azure subscriptions.
Azure Consumption APIs provide access to: - Enterprise and Web Direct Customers
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/subscription-transfer.md
- Title: About transferring billing ownership for an Azure subscription
-description: This article explains the things you need to know before you transfer billing ownership of an Azure subscription to another account.
-keywords: transfer azure subscription, azure transfer subscription, move azure subscription to another account,azure change subscription owner, transfer azure subscription to another account, azure transfer billing
--
-tags: billing,top-support-issue
--- Previously updated : 09/15/2021----
-# About transferring billing ownership for an Azure subscription
-
-This article helps you understand the things you should know before you transfer billing ownership of an Azure subscription to another account.
-
-You might want to transfer billing ownership of your Azure subscription if you're leaving your organization, or you want your subscription to be billed to another account. Transferring billing ownership to another account provides the administrators in the new account permission for billing tasks. They can change the payment method, view charges, and cancel the subscription.
-
-If you want to keep the billing ownership but change the type of your subscription, see [Switch your Azure subscription to another offer](../manage/switch-azure-offer.md). To control who can access resources in the subscription, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
-
-If you're an Enterprise Agreement (EA) customer, your enterprise administrators can transfer billing ownership of your subscriptions between accounts.
-
-Only the billing administrator of an account can transfer ownership of a subscription.
-
-## Determine if you are a billing administrator
-
-<a name="whoisaa"></a>
-
-In effort to do the transfer, locate the person who has access to manage billing for an account. They're authorized to access billing on the [Azure portal](https://portal.azure.com) and do various billing tasks like create subscriptions, view and pay invoices, or update payment methods.
-
-### Check if you have billing access
-
-1. To identify accounts for which you have billing access, visit the [Cost Management + Billing page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/Overview).
-
-2. Select **Billing accounts** from the left-hand menu.
-
-3. The **Billing scope** listing page shows all the subscriptions where you have access to the billing details.
-
-### Check by subscription
-
-1. If you're not sure who the account administrator is for a subscription, visit the [Subscriptions page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade).
-
-2. Select the subscription you want to check.
-
-3. Under the **Settings** heading, select **Properties**. See the **Account Admin** box to understand who is the account administrator of the subscription.
-
- > [!NOTE]
- > Not all subscription types show the Properties.
-
-## Supported subscription types
-
-Subscription transfer in the Azure portal is available for the subscription types listed below. Currently transfer isn't supported for [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/) or [Azure in Open (AIO)](https://azure.microsoft.com/offers/ms-azr-0111p/) subscriptions. For a workaround, see [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). To transfer other subscriptions, like support plans, [contact Azure Support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
--- [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/)<sup>1</sup>-- [Microsoft Partner Network](https://azure.microsoft.com/offers/ms-azr-0025p/) -- [Visual Studio Enterprise (MPN) subscribers](https://azure.microsoft.com/offers/ms-azr-0029p/)-- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/) -- [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0003p/)-- [Pay-As-You-Go Dev/Test](https://azure.microsoft.com/offers/ms-azr-0023p/)-- [Visual Studio Enterprise](https://azure.microsoft.com/offers/ms-azr-0063p/)-- [Visual Studio Enterprise: BizSpark](https://azure.microsoft.com/offers/ms-azr-0064p/)-- [Visual Studio Professional](https://azure.microsoft.com/offers/ms-azr-0059p/)-- [Visual Studio Test Professional](https://azure.microsoft.com/offers/ms-azr-0060p/)-- [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)<sup>2</sup>-
-<sup>1</sup> Using the EA portal.
-
-<sup>2</sup> Only supported for accounts that are created during sign-up on the Azure website.
-
-## Resources transferred with subscriptions
-
-All your resources like VMs, disks, and websites transfer to the new account. However, if you transfer a subscription to an account in another Azure AD tenant, any [administrator roles](../manage/add-change-subscription-administrator.md) and [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) on the subscription don't transfer. Also, [app registrations](../../active-directory/develop/quickstart-register-app.md) and other tenant-specific services don't transfer along with the subscription.
-
-## Transfer account ownership to another country/region
-
-Unfortunately, you can't transfer subscriptions across countries or regions using the Azure portal. However they can get transferred if you open an Azure support request. To create a support request, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
-
-## Transfer a subscription from one account to another
-
-If you're an administrator of two accounts, your can transfer a subscription between your accounts. Your accounts are conceptually considered accounts of two different users so you can transfer subscriptions between your accounts.
-To view the steps needed to transfer your subscription, see [Transfer billing ownership of an Azure subscription](../manage/billing-subscription-transfer.md).
-
-## Transferring a subscription shouldn't create downtime
-
-If you transfer a subscription to an account in the same Azure AD tenant, there's no impact to the resources running in the subscription. However, context information saved in PowerShell isn't updated so you might have to clear it or change settings. If you transfer the subscription to an account in another tenant and decide to move the subscription to the tenant, all users, groups, and service principals who had [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) to access resources in the subscription lose their access. Service downtime might result.
-
-## New account usage and billing history
-
-The only information available to the users for the new account is the last month's cost for your subscription. The rest of the usage and billing history doesn't transfer with the subscription.
-
-## Manually migrate data and services
-
-When you transfer a subscription, its resources stay with it. If you can't transfer subscription ownership, you can manually migrate its resources. For more information, see [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
-
-## Remaining subscription credits
-
-If you have a Visual Studio or Microsoft Partner Network subscription, you get monthly credits. Your credit doesn't carry forward with the subscription in the new account. The user who accepts the transfer request needs to have a Visual Studio license to accept the transfer request. The subscription uses the Visual Studio credit that's available in the user's account. For more information, see [Transferring Visual Studio and Partner Network subscriptions](../manage/billing-subscription-transfer.md#transfer-visual-studio-and-partner-network-subscriptions).
-
-## Users keep access to transferred resources
-
-Keep in mind that users with access to resources in a subscription keep their access when ownership is transferred. However, [administrator roles](../manage/add-change-subscription-administrator.md) and [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) might get removed. Losing access occurs when your account is in an Azure AD tenant other than the subscription's tenant and the user who sent the transfer request moves the subscription to your account's tenant.
-
-You can view the users who have Azure role assignments to access resources in the subscription in the Azure portal. Visit the [Subscription page in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Then select the subscription you want to check, and then select **Access control (IAM)** from the left-hand pane. Next, select **Role assignments** from the top of the page. The role assignments page lists all users who have access on the subscription.
-
-Even if the [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) are removed during transfer, users in the original owner account might continue to have access to the subscription through other security mechanisms, including:
-
-* Management certificates that grant the user admin rights to subscription resources. For more information, see [Create and Upload a Management Certificate for Azure](../../cloud-services/cloud-services-certs-create.md).
-* Access keys for services like Storage. For more information, see [About Azure storage accounts](../../storage/common/storage-account-create.md).
-* Remote Access credentials for services like Azure Virtual Machines.
-
-If the recipient needs to restrict access to resources, they should consider updating any secrets associated with the service. Most resources can be updated. Sign in to the [Azure portal](https://portal.azure.com) and then on the Hub menu, select **All resources**. Next, Select the resource. Then in the resource page, select **Settings**. There you can view and update existing secrets.
-
-## You pay for usage when you receive ownership
-
-Your account is responsible for payment for any usage that is reported from the time of transfer onwards. There may be some usage that took place before transfer but was reported afterwards. The usage is included in your account's bill.
-
-## Use a different payment method
-
-While accepting the transfer request, you can select an existing payment method that's linked to your account or add a new payment method.
-
-## Transfer Enterprise Agreement subscription ownership
-
-The Enterprise Administrator can update account ownership for any account, even after an original account owner is no longer part of the organization. For more information about transferring Azure Enterprise Agreement accounts, see [Azure Enterprise transfers](../manage/ea-transfers.md).
-
-## Need help? Contact us.
-
-If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
-
-## Next steps
--- [Transfer billing ownership of an Azure subscription](../manage/billing-subscription-transfer.md)
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Previously updated : 10/14/2021 Last updated : 06/08/2022 # Automated publishing for continuous integration and delivery
Follow these steps to get started:
- task: NodeTool@0 inputs:
- versionSpec: '10.x'
+ versionSpec: '14.x'
displayName: 'Install Node.js' - task: Npm@1
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Defender for Cloud's supported kill chain intents are based on [version 9 of the
| **LateralMovement** | Lateral movement consists of techniques that enable an adversary to access and control remote systems on a network and could, but does not necessarily, include execution of tools on remote systems. The lateral movement techniques could allow an adversary to gather information from a system without needing additional tools, such as a remote access tool. An adversary can use lateral movement for many purposes, including remote Execution of tools, pivoting to additional systems, access to specific information or files, access to additional credentials, or to cause an effect. | | **Execution** | The execution tactic represents techniques that result in execution of adversary-controlled code on a local or remote system. This tactic is often used in conjunction with lateral movement to expand access to remote systems on a network. | | **Collection** | Collection consists of techniques used to identify and gather information, such as sensitive files, from a target network prior to exfiltration. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. |
-| **Exfiltration** | Exfiltration refers to techniques and attributes that result or aid in the adversary removing files and information from a target network. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. |
| **Command and Control** | The command and control tactic represents how adversaries communicate with systems under their control within a target network. |
+| **Exfiltration** | Exfiltration refers to techniques and attributes that result or aid in the adversary removing files and information from a target network. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. |
| **Impact** | Impact events primarily try to directly reduce the availability or integrity of a system, service, or network; including manipulation of data to impact a business or operational process. This would often refer to techniques such as ransomware, defacement, data manipulation, and others. |
defender-for-cloud Defender For Containers Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-usage.md
Title: How to use Defender for Containers to identify vulnerabilities
+ Title: How to use Defender for Containers to identify vulnerabilities in Microsoft Defender for Cloud
description: Learn how to use Defender for Containers to scan images in your registries Previously updated : 04/28/2022 Last updated : 06/08/2022 # Use Defender for Containers to scan your ACR images for vulnerabilities
-This page explains how to use the built-in vulnerability scanner to scan the container images stored in your Azure Resource Manager-based Azure Container Registry.
+This page explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
-When the scanner, powered by Qualys, reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
+To enable scanning of vulnerabilities in containers, you have to [enable Defender for Containers](defender-for-containers-enable.md). When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
> [!TIP] > You can also scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-containers-cicd.md).
There are four triggers for an image scan:
- **Continuous scan**- This trigger has two modes:
- - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
+ - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
- - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
+ - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
-This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
+This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
Defender for Cloud filters, and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
Defender for Cloud filters, and classifies findings from the scanner. When an im
To enable vulnerability scans of images stored in your Azure Resource Manager-based Azure Container Registry:
-1. Enable **Defender for Containers** for your subscription. Defender for Cloud is now ready to scan images in your registries.
+1. [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
>[!NOTE] > This feature is charged per image.
To create a rule:
## FAQ
-### How does Defender for Cloud scan an image?
-Defender for Cloud pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
+### How does Defender for Containers scan an image?
+
+Defender for Containers pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts. ### Can I get the scan results via REST API?+ Yes. The results are under [Sub-Assessments REST API](/rest/api/securitycenter/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan. ### What registry types are scanned? What types are billed?+ For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](defender-for-container-registries-introduction.md#availability).
-If you connect unsupported registries to your Azure subscription, Defender for Cloud won't scan them and won't bill you for them.
+If you connect unsupported registries to your Azure subscription, Defender for Containers won't scan them and won't bill you for them.
### Can I customize the findings from the vulnerability scanner?+ Yes. If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise. [Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-containers-usage.md#disable-specific-findings). ### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry?+ Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities. ## Next steps
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--|--|
-| Compliance | Docker CIS | VM, VMSS | GA | X | Log Analytics agent | Defender for Servers Plan 2 | |
+| Compliance | Docker CIS | VM, VMSS | GA | X | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | ✓ (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | ✓ (Preview) | Defender profile | Defender for Containers | Commercial clouds | | Hardening | Control plane recommendations | ACR, AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
defender-for-iot Tutorial Splunk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-splunk.md
To address a lack of visibility into the security and resiliency of OT networks,
The application provides SOC analysts with multidimensional visibility into the specialized OT protocols and IIoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior. The application also enables both IT, and OT incident response from within one corporate SOC. This is an important evolution given the ongoing convergence of IT and OT to support new IIoT initiatives, such as smart machines and real-time intelligence.
-The Splunk application can be installed locally or run on a cloud. The Splunk integration along with Defender for IoT supports both deployments.
+The Splunk application can be installed locally ('Splunk Enterprise') or run on a cloud ('Splunk Cloud'). The Splunk integration along with Defender for IoT supports 'Splunk Enterprise' only.
> [!Note] > References to CyberX refer to Microsoft Defender for IoT.
dns Tutorial Alias Pip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-pip.md
Title: 'Tutorial: Create an Azure DNS alias record to refer to an Azure public IP address'
-description: This tutorial shows you how to configure an Azure DNS alias record to reference an Azure public IP address.
+description: In this tutorial, you learn how to configure an Azure DNS alias record to reference an Azure public IP address.
Previously updated : 04/19/2021 Last updated : 06/09/2022 + #Customer intent: As an experienced network administrator, I want to configure an Azure DNS alias record to refer to an Azure public IP address.
-# Tutorial: Configure an alias record to refer to an Azure public IP address
+# Tutorial: Create an alias record to refer to an Azure public IP address
+
+You can create an alias record to reference an Azure resource. An example is an alias record that references an Azure public IP resource.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a network infrastructure.
+> * Create a virtual network and a subnet.
> * Create a web server virtual machine with a public IP. > * Create an alias record that points to the public IP. > * Test the alias record.
In this tutorial, you learn how to:
If you donΓÇÖt have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-You must have a domain name available that you can host in Azure DNS to test with. You must have full control of this domain. Full control includes the ability to set the name server (NS) records for the domain.
-For instructions to host your domain in Azure DNS, see [Tutorial: Host your domain in Azure DNS](dns-delegate-domain-azure-dns.md).
+* An Azure account with an active subscription.
+* A domain name hosted in Azure DNS. If you don't have an Azure DNS zone, you can [create a DNS zone](./dns-delegate-domain-azure-dns.md#create-a-dns-zone), then [delegate your domain](dns-delegate-domain-azure-dns.md#delegate-the-domain) to Azure DNS.
+
+> [!NOTE]
+> In this tutorial, `contoso.com` is used as an example. Replace `contoso.com` with your own domain name.
-The example domain used for this tutorial is contoso.com, but use your own domain name.
+## Sign in to Azure
+
+Sign in to the Azure portal at https://portal.azure.com.
## Create the network infrastructure
-First, create a virtual network and a subnet to place your web servers in.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Create a resource** from the left panel of the Azure portal. Enter *resource group* in the search box, and create a resource group named **RG-DNS-Alias-pip**.
-3. Select **Create a resource** > **Networking** > **Virtual network**.
-4. Create a virtual network named **VNet-Server**. Place it in the **RG-DNS-Alias-pip** resource group, and name the subnet **SN-Web**.
+
+Create a virtual network and a subnet to place your web server in.
+
+1. In the Azure portal, enter *virtual network* in the search box at the top of the portal, and then select **Virtual networks** from the search results.
+1. In **Virtual networks**, select **+ Create**.
+1. In **Create virtual network**, enter or select the following information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ |-||
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **Create new** </br> In **Name**, enter **RG-DNS-Alias-pip** </br> Select **OK** |
+ | **Instance details** | |
+ | Name | Enter **myPIPVNet** |
+ | Region | Select your region |
+
+1. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+1. In the **IP Addresses** tab, enter the following information:
+
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.10.0.0/16** |
+
+1. Select **+ Add subnet**, and enter this information in the **Add subnet**:
+
+ | Setting | Value |
+ |-|-|
+ | Subnet name | Enter **WebSubnet** |
+ | Subnet address range | Enter **10.10.0.0/24** |
+
+1. Select **Add**.
+1. Select the **Review + create** tab or select the **Review + create** button.
+1. Select **Create**.
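If you prefer to script this step, a rough Azure CLI equivalent of the portal flow might look like the following; it assumes the resource names above and the East US region used later for the virtual machine.

```azurecli
# Create the resource group, virtual network, and subnet used by the web server
az group create --name RG-DNS-Alias-pip --location eastus

az network vnet create \
  --resource-group RG-DNS-Alias-pip \
  --name myPIPVNet \
  --address-prefixes 10.10.0.0/16 \
  --subnet-name WebSubnet \
  --subnet-prefixes 10.10.0.0/24
```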
## Create a web server virtual machine
-1. Select **Create a resource** > **Windows Server 2016 VM**.
-2. Enter **Web-01** for the name, and place the VM in the **RG-DNS-Alias-TM** resource group. Enter a username and password, and select **OK**.
-3. For **Size**, select an SKU with 8-GB RAM.
-4. For **Settings**, select the **VNet-Servers** virtual network and the **SN-Web** subnet. For public inbound ports, select **HTTP (80)** > **HTTPS (443)** > **RDP (3389)**, and then select **OK**.
-5. On the **Summary** page, select **Create**.
-This deployment takes a few minutes to complete. The virtual machine will have an attached NIC with a basic dynamic public IP called Web-01-ip. The public IP will change every time the virtual machine is restarted.
+Create a Windows Server virtual machine and then install IIS web server on it.
+
+### Create the virtual machine
+
+Create a Windows Server 2019 virtual machine.
-### Install IIS
+1. In the Azure portal, enter *virtual machine* in the search box at the top of the portal, and then select **Virtual machines** from the search results.
+1. In **Virtual machines**, select **+ Create** and then select **Azure virtual machine**.
+1. In **Create a virtual machine**, enter or select the following information in the **Basics** tab:
-Install IIS on **Web-01**.
+ | **Setting** | **Value** |
+ |||
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **RG-DNS-Alias-pip** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **Web-01** |
+ | Region | Select **(US) East US** |
+ | Availability options | Select **No infrastructure redundancy required** |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2** |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
-1. Connect to **Web-01**, and sign in.
-2. On the **Server Manager** dashboard, select **Add roles and features**.
-3. Select **Next** three times. On the **Server Roles** page, select **Web Server (IIS)**.
-4. Select **Add Features**, and then select **Next**.
-5. Select **Next** four times, and then select **Install**. This procedure takes a few minutes to finish.
-6. After the installation finishes, select **Close**.
-7. Open a web browser. Browse to **localhost** to verify that the default IIS web page appears.
+
+1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+1. In the **Networking** tab, enter or select the following information:
+
+ | Setting | Value |
+ ||-|
+ | **Network interface** | |
+ | Virtual network | **myPIPVNet** |
+ | Subnet | **WebSubnet** |
+ | Public IP | Take the default public IP |
+ | NIC network security group | Select **Basic**|
+ | Public inbound ports | Select **Allow selected ports** |
+ | Select inbound ports | Select **HTTP (80)**, **HTTPS (443)** and **RDP (3389)** |
+
+1. Select **Review + create**.
+1. Review the settings, and then select **Create**.
+
+This deployment may take a few minutes to complete.
+
+> [!NOTE]
+> **Web-01** virtual machine has an attached NIC with a basic dynamic public IP that changes every time the virtual machine is restarted.
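As a scripted alternative to the portal steps above, an Azure CLI sketch along these lines should create a similar virtual machine; the admin credentials are placeholders, and the `az vm open-port` call assumes a CLI version that accepts a comma-separated port list.

```azurecli
# Create the Web-01 VM in the existing virtual network with a Basic (dynamic) public IP
az vm create \
  --resource-group RG-DNS-Alias-pip \
  --name Web-01 \
  --image Win2019Datacenter \
  --vnet-name myPIPVNet \
  --subnet WebSubnet \
  --public-ip-sku Basic \
  --admin-username <username> \
  --admin-password <password>

# Allow inbound HTTP, HTTPS, and RDP traffic to the VM
az vm open-port --resource-group RG-DNS-Alias-pip --name Web-01 --port 80,443,3389
```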
+
+### Install IIS web server
+
+Install IIS web server on **Web-01**.
+
+1. In the **Overview** page of **Web-01**, select **Connect** and then **RDP**.
+1. In the **RDP** page, select **Download RDP File**.
+1. Open *Web-01.rdp*, and select **Connect**.
+1. Enter the username and password entered during virtual machine creation.
+1. On the **Server Manager** dashboard, select **Manage** then **Add Roles and Features**.
+1. Select **Server Roles** or select **Next** three times. On the **Server Roles** page, select **Web Server (IIS)**.
+1. Select **Add Features**, and then select **Next**.
+1. Select **Confirmation** or select **Next** three times, and then select **Install**. The installation process takes a few minutes to finish.
+1. After the installation finishes, select **Close**.
+1. Open a web browser. Browse to **localhost** to verify that the default IIS web page appears.
+
+ :::image type="content" source="./media/tutorial-alias-pip/iis-web-server.png" alt-text="Screenshot of Internet Explorer showing the I I S Web Server Welcome page.":::
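If you prefer not to click through Server Manager, you can also install the role remotely with the Azure CLI run-command feature; this is an alternative sketch that assumes the VM created earlier in this tutorial.

```azurecli
# Run a PowerShell command on Web-01 to install the IIS web server role
az vm run-command invoke \
  --resource-group RG-DNS-Alias-pip \
  --name Web-01 \
  --command-id RunPowerShellScript \
  --scripts "Install-WindowsFeature -Name Web-Server -IncludeManagementTools"
```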
## Create an alias record Create an alias record that points to the public IP address.
-1. Select your Azure DNS zone to open the zone.
-2. Select **Record set**.
-3. In the **Name** text box, select **web01**.
-4. Leave the **Type** as an **A** record.
-5. Select the **Alias Record Set** check box.
-6. Select **Choose Azure service**, and then select the **Web-01-ip** public IP address.
+1. In the Azure portal, enter *contoso.com* in the search box at the top of the portal, and then select **contoso.com** DNS zone from the search results.
+1. In the **Overview** page, select the **+ Record set** button.
+1. In the **Add record set**, enter *web01* in the **Name**.
+1. Select **A** for the **Type**.
+1. Select **Yes** for the **Alias record set**, and then select the **Azure Resource** for the **Alias type**.
+1. Select the **Web-01-ip** public IP address for the **Azure resource**.
+1. Select **OK**.
+
+ :::image type="content" source="./media/tutorial-alias-pip/add-public-ip-alias-inline.png" alt-text="Screenshot of adding an alias record to refer to the Azure public IP of the I I S web server using the Add record set page." lightbox="./media/tutorial-alias-pip/add-public-ip-alias-expanded.png":::
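You can also create the same alias record from the Azure CLI by passing the resource ID of the public IP as the alias target; this sketch assumes the resource names used earlier in this tutorial, and the DNS zone resource group is a placeholder.

```azurecli
# Get the resource ID of the VM's public IP, then create an alias A record that targets it
ipResourceId=$(az network public-ip show --resource-group RG-DNS-Alias-pip --name Web-01-ip --query id --output tsv)

az network dns record-set a create \
  --resource-group <DNS zone resource group> \
  --zone-name contoso.com \
  --name web01 \
  --target-resource $ipResourceId
```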
## Test the alias record
-1. In the **RG-DNS-Alias-pip** resource group, select the **Web-01** virtual machine. Note the public IP address.
-1. From a web browser, browse to the fully qualified domain name for the Web01-01 virtual machine. An example is **web01.contoso.com**. You now see the IIS default web page.
-2. Close the web browser.
-3. Stop the **Web-01** virtual machine, and then restart it.
-4. After the virtual machine restarts, note the new public IP address for the virtual machine.
-5. Open a new browser. Browse again to the fully qualified domain name for the Web01-01 virtual machine. An example is **web01.contoso.com**.
+1. In the Azure portal, enter *virtual machine* in the search box at the top of the portal, and then select **Virtual machines** from the search results.
+1. Select the **Web-01** virtual machine. Note the public IP address in the **Overview** page.
+1. From a web browser, browse to `web01.contoso.com`, which is the fully qualified domain name of the **Web-01** virtual machine. You now see the IIS welcome web page.
+1. Close the web browser.
+1. Stop the **Web-01** virtual machine, and then restart it.
+1. After the virtual machine restarts, note the new public IP address for the virtual machine.
+1. From a web browser, browse again to `web01.contoso.com`.
-This procedure succeeds because you used an alias record to point to the public IP address resource, not a standard A record.
+This procedure succeeds because you used an alias record that points to the public IP resource, not a standard A record that points only to the IP address. The alias record keeps resolving correctly after the public IP address changes.
## Clean up resources
-When you no longer need the resources created for this tutorial, delete the **RG-DNS-Alias-pip** resource group.
-
+When no longer needed, you can delete all resources created in this tutorial by deleting the **RG-DNS-Alias-pip** resource group and the alias record **web01** from the **contoso.com** DNS zone.
## Next steps
-In this tutorial, you created an alias record to refer to an Azure public IP address. To learn about Azure DNS and web apps, continue with the tutorial for web apps.
+In this tutorial, you created an alias record to refer to an Azure public IP address resource. To learn how to create an alias record to support domain name apex with Traffic Manager, continue with the alias records for Traffic Manager tutorial.
> [!div class="nextstepaction"]
-> [Create DNS records for a web app in a custom domain](./dns-web-sites-custom-domain.md)
+> [Create alias records for Traffic Manager](./tutorial-alias-tm.md)
dns Tutorial Alias Rr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-rr.md
Title: 'Tutorial: Create an alias record to refer to a resource record in a zone'
-description: This tutorial shows you how to configure an Azure DNS alias record to reference a resource record within the zone.
-
+description: In this tutorial, you learn how to configure an alias record to reference a resource record within the zone.
+ + Previously updated : 04/19/2021- Last updated : 06/09/2022+ #Customer intent: As an experienced network administrator, I want to configure Azure an DNS alias record to refer to a resource record within the zone.
Alias records can reference other record sets of the same type. For example, you
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create an alias record for a resource record in the zone.
+> * Create a resource record in the zone.
+> * Create an alias record for the resource record.
> * Test the alias record. If you donΓÇÖt have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-You must have a domain name available that you can host in Azure DNS to test with. You must have full control of this domain. Full control includes the ability to set the name server (NS) records for the domain.
-For instructions to host your domain in Azure DNS, see [Tutorial: Host your domain in Azure DNS](dns-delegate-domain-azure-dns.md).
+* An Azure account with an active subscription.
+* A domain name hosted in Azure DNS. If you don't have an Azure DNS zone, you can [create a DNS zone](./dns-delegate-domain-azure-dns.md#create-a-dns-zone), then [delegate your domain](dns-delegate-domain-azure-dns.md#delegate-the-domain) to Azure DNS.
+
+> [!NOTE]
+> In this tutorial, `contoso.com` is used as an example. Replace `contoso.com` with your own domain name.
+
+## Sign in to Azure
+Sign in to the Azure portal at https://portal.azure.com.
## Create an alias record Create an alias record that points to a resource record in the zone. ### Create the target resource record
-1. Select your Azure DNS zone to open the zone.
-2. Select **Record set**.
-3. In the **Name** text box, enter **server**.
-4. For the **Type**, select **A**.
-5. In the **IP ADDRESS** text box, enter **10.10.10.10**.
-6. Select **OK**.
+1. In the Azure portal, enter *contoso.com* in the search box at the top of the portal, and then select **contoso.com** DNS zone from the search results.
+1. In the **Overview** page, select the **+Record set** button.
+1. In the **Add record set**, enter *server* in the **Name**.
+1. Select **A** for the **Type**.
+1. Enter *10.10.10.10* in the **IP address**.
+1. Select **OK**.
+
+ :::image type="content" source="./media/tutorial-alias-rr/add-record-set-inline.png" alt-text="Screenshot of adding the target record set in the Add record set page." lightbox="./media/tutorial-alias-rr/add-record-set-expanded.png":::
### Create the alias record
-1. Select your Azure DNS zone to open the zone.
-2. Select **Record set**.
-3. In the **Name** text box, enter **test**.
-4. For the **Type**, select **A**.
-5. Select **Yes** in the **Alias Record Set** check box. Then select the **Zone record set** option.
-6. For the **Zone record set**, select the **server** record.
-7. Select **OK**.
+1. In the **Overview** page of **contoso.com** DNS zone, select the **+Record set** button.
+1. In the **Add record set**, enter *test* in the **Name**.
+1. Select **A** for the **Type**.
+1. Select **Yes** for the **Alias record set**, and then select the **Zone record set** for the **Alias type**.
+1. Select the **server** record for the **Zone record set**.
+1. Select **OK**.
+
+ :::image type="content" source="./media/tutorial-alias-rr/add-alias-record-set-inline.png" alt-text="Screenshot of adding the alias record set in the Add record set page." lightbox="./media/tutorial-alias-rr/add-alias-record-set-expanded.png":::
## Test the alias record
-1. Start your favorite nslookup tool. One option is to browse to [https://network-tools.com/nslook](https://network-tools.com/nslook).
-2. Set the query type for A records, and look up **test.\<your domain name\>**. The answer is **10.10.10.10**.
-3. In the Azure portal, change the **server** A record to **10.11.11.11**.
-4. Wait a few minutes, and then use nslookup again for the **test** record. The answer is **10.11.11.11**.
+After adding the alias record, you can verify that it's working by using a tool such as *nslookup* to query the `test` A record.
-## Clean up resources
+> [!TIP]
+> You may need to wait at least 10 minutes after you add a record to successfully verify that it's working. It can take a while for changes to propagate through the DNS system.
+
+1. From a command prompt, enter the `nslookup` command:
+
+ ```
+ nslookup test.contoso.com
+ ```
-When you no longer need the resources created for this tutorial, delete the **server** and **test** resource records in your zone.
+1. Verify that the response looks similar to the following output:
+
+ ```
+ Server: UnKnown
+ Address: 40.90.4.1
+
+ Name: test.contoso.com
+ Address: 10.10.10.10
+ ```
+
+1. In the **Overview** page of **contoso.com** DNS zone, select the **server** record, and then enter *10.11.11.11* in the **IP address**.
+
+1. Select **Save**.
+
+1. Wait a few minutes, and then use the `nslookup` command again. Verify the response changed to reflect the new IP address:
++
+ ```
+ Server: UnKnown
+ Address: 40.90.4.1
+
+ Name: test.contoso.com
+ Address: 10.11.11.11
+ ```
+
+## Clean up resources
+When you no longer need the resources created for this tutorial, delete the **server** and **test** records from your zone.
## Next steps
-In this tutorial, you created an alias record to refer to a resource record within the zone. To learn about Azure DNS and web apps, continue with the tutorial for web apps.
+In this tutorial, you learned the basic steps to create an alias record to refer to a resource record within the Azure DNS zone.
-> [!div class="nextstepaction"]
-> [Create DNS records for a web app in a custom domain](./dns-web-sites-custom-domain.md)
+- Learn more about [alias records](dns-alias.md).
+- Learn more about [zones and records](dns-zones-records.md).
event-grid Azure Active Directory Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/azure-active-directory-events.md
+
+ Title: Azure Active Directory events
+description: This article describes Azure AD event types and provides event samples.
+ Last updated : 06/09/2022++
+# Azure Active Directory events
+
+This article provides the properties and schema for Azure Active Directory (Azure AD) events, which are published by Microsoft Graph API. For an introduction to event schemas, see [CloudEvents schema](cloud-event-schema.md).
+
+## Available event types
+These events are triggered when a [User](/graph/api/resources/user) or [Group](/graph/api/resources/group) is created, updated, or deleted in Azure AD, or when you operate on those resources by using the Microsoft Graph API.
+
+ | Event name | Description |
+ | - | -- |
+ | **Microsoft.Graph.UserCreated** | Triggered when a user in Azure AD is created. |
+ | **Microsoft.Graph.UserUpdated** | Triggered when a user in Azure AD is updated. |
+ | **Microsoft.Graph.UserDeleted** | Triggered when a user in Azure AD is deleted. |
+ | **Microsoft.Graph.GroupCreated** | Triggered when a group in Azure AD is created. |
+ | **Microsoft.Graph.GroupUpdated** | Triggered when a group in Azure AD is updated. |
+ | **Microsoft.Graph.GroupDeleted** | Triggered when a group in Azure AD is deleted. |
+
+## Example event
+When an event is triggered, the Event Grid service sends data about that event to subscribing destinations. This section contains an example of what that data would look like for each Azure AD event.
+
+### Microsoft.Graph.UserCreated event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.UserCreated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Users/<user-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "created",
+ "clientState": "<guid>",
+ "resource": "Users/<user-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.User",
+ "@odata.id": "Users/<user-id>",
+ "id": "<user-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
+
+### Microsoft.Graph.UserUpdated event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.UserUpdated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Users/<user-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "updated",
+ "clientState": "<guid>",
+ "resource": "Users/<user-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.User",
+ "@odata.id": "Users/<user-id>",
+ "id": "<user-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
+### Microsoft.Graph.UserDeleted event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.UserDeleted",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Users/<user-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "deleted",
+ "clientState": "<guid>",
+ "resource": "Users/<user-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.User",
+ "@odata.id": "Users/<user-id>",
+ "id": "<user-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
+### Microsoft.Graph.GroupCreated event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.GroupCreated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Groups/<group-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "created",
+ "clientState": "<guid>",
+ "resource": "Groups/<group-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.Group",
+ "@odata.id": "Groups/<group-id>",
+ "id": "<group-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
+### Microsoft.Graph.GroupUpdated event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.GroupUpdated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Groups/<group-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "updated",
+ "clientState": "<guid>",
+ "resource": "Groups/<group-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.Group",
+ "@odata.id": "Groups/<group-id>",
+ "id": "<group-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
+
+### Microsoft.Graph.GroupDeleted event
+
+```json
+[{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.GroupDeleted",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Groups/<group-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "deleted",
+ "clientState": "<guid>",
+ "resource": "Groups/<group-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.Group",
+ "@odata.id": "Groups/<group-id>",
+ "id": "<group-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>
+ }
+}]
+```
++
+## Event properties
+
+An event has the following top-level data:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `source` | string | The tenant event source. This field isn't writeable. Microsoft Graph API provides this value. |
+| `subject` | string | Publisher-defined path to the event subject. |
+| `type` | string | One of the event types for this event source. |
+| `time` | string | The time the event is generated, based on the provider's UTC time. |
+| `id` | string | Unique identifier for the event. |
+| `data` | object | Event payload that provides the data about the resource state change. |
+| `specversion` | string | CloudEvents schema specification version. |
+++
+The data object has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `changeType` | string | The type of resource state change. |
+| `resource` | string | The resource identifier for which the event was raised. |
+| `tenantId` | string | The organization ID where the user or group is kept. |
+| `clientState` | string | A secret provided by the user at the time of the Graph API subscription creation. |
+| `@odata.type` | string | The Graph API change type. |
+| `@odata.id` | string | The Graph API resource identifier for which the event was raised. |
+| `id` | string | The resource identifier for which the event was raised. |
+| `organizationId` | string | The Azure AD tenant identifier. |
+| `eventTime` | string | The time at which the resource state occurred. |
+| `sequenceNumber` | string | A sequence number. |
+| `subscriptionExpirationDateTime` | string | The time in [RFC 3339](https://tools.ietf.org/html/rfc3339) format at which the Graph API subscription expires. |
+| `subscriptionId` | string | The Graph API subscription identifier. |
+| `tenantId` | string | The Azure AD tenant identifier. |
++
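To illustrate how a subscriber might read one of these payloads, here's a minimal PowerShell sketch. It isn't part of the service; it assumes the sample event above has been saved to a local file (a hypothetical `usercreated-sample.json`) with the placeholder values replaced by real data.

```powershell
# Hypothetical: a delivered Microsoft.Graph.UserCreated event saved to disk
$eventJson = Get-Content -Path "./usercreated-sample.json" -Raw

# Events arrive as a JSON array; take the first entry
$event = ($eventJson | ConvertFrom-Json)[0]

# Top-level CloudEvents attributes
Write-Output "Type:    $($event.type)"
Write-Output "Subject: $($event.subject)"

# Data payload: the kind of change, the user, and the tenant
Write-Output "Change:  $($event.data.changeType)"
Write-Output "User:    $($event.data.resourceData.id)"
Write-Output "Tenant:  $($event.data.resourceData.organizationId)"
```
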
+## Next steps
+
+* For an introduction to Azure Event Grid's Partner Events, see [Partner Events overview](partner-events-overview.md)
+* For information on how to subscribe to Microsoft Graph API to receive Azure AD events, see [subscribe to Azure Graph API events](subscribe-to-graph-api-events.md).
+* For information about Azure Event Grid event handlers, see [event handlers](event-handlers.md).
+* For more information about creating an Azure Event Grid subscription, see [create event subscription](subscribe-through-portal.md#create-event-subscriptions) and [Event Grid subscription schema](subscription-creation-schema.md).
+* For information about how to configure an event subscription to select specific events to be delivered, consult [event filtering](event-filtering.md). You may also want to refer to [filter events](how-to-filter-events.md).
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-filtering.md
If you specify multiple different filters, an **AND** operation is done, so each
## CloudEvents For events in the **CloudEvents schema**, use the following values for the key: `eventid`, `source`, `eventtype`, `eventtypeversion`, or event data (like `data.key1`).
-You can also use [extension context attributes in CloudEvents 1.0](https://github.com/cloudevents/spec/blob/v1.0.1/spec.md#extension-context-attributes). In the following example, `comexampleextension1` and `comexampleothervalue` are extension context attributes.
+You can also use [extension context attributes in CloudEvents 1.0](https://github.com/cloudevents/spec/blob/v1.0.1/spec.md#extension-context-attributes). In the following example, `comexampleextension1` and `comexampleothervalue` are extension context attributes.
```json {
Here's an example of using an extension context attribute in a filter.
Advanced filtering has the following limitations:
-* 25 advanced filters and 25 filter values across all the filters per event grid subscription
+* 25 advanced filters and 25 filter values across all the filters per Event Grid subscription
* 512 characters per string value * Keys with **`.` (dot)** character in them. For example: `http://schemas.microsoft.com/claims/authnclassreference` or `john.doe@contoso.com`. Currently, there's no support for escape characters in keys.
event-grid Outlook Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/outlook-events.md
+
+ Title: Outlook events in Azure Event Grid
+description: This article describes Microsoft Outlook events in Azure Event Grid.
+ Last updated : 06/09/2022++
+# Microsoft Outlook events
+
+This article provides the properties and schema for Microsoft Outlook events, which are published by Microsoft Graph API. For an introduction to event schemas, see [CloudEvents schema](cloud-event-schema.md).
+
+## Available event types
+These events are triggered when an Outlook event or an Outlook contact is created, updated, or deleted, for example by operating on those resources through Microsoft Graph API.
+
+ | Event name | Description |
+ | - | -- |
+ | **Microsoft.Graph.EventCreated** | Triggered when an event in Outlook is created. |
+ | **Microsoft.Graph.EventUpdated** | Triggered when an event in Outlook is updated. |
+ | **Microsoft.Graph.EventDeleted** | Triggered when an event in Outlook is deleted. |
+ | **Microsoft.Graph.ContactCreated** | Triggered when a contact in Outlook is created. |
+ | **Microsoft.Graph.ContactUpdated** | Triggered when a contact in Outlook is updated. |
+ | **Microsoft.Graph.ContactDeleted** | Triggered when a contact in Outlook is deleted. |
+
+## Example event
+When an event is triggered, the Event Grid service sends data about that event to subscribing destinations. This section contains an example of what that data would look like for each Outlook event.
+
+### Microsoft.Graph.EventCreated event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.EventCreated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Events/<event-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "created",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<event id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Event",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+
+### Microsoft.Graph.EventUpdated event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.EventUpdated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Events/<event-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "updated",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<event id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Event",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+### Microsoft.Graph.EventDeleted event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.EventDeleted",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Events/<event-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "deleted",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<event id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Events('<event id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Event",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+
+### Microsoft.Graph.ContactCreated event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.ContactCreated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Contacts/<contact-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "created",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<contact id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Contact",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+
+### Microsoft.Graph.ContactUpdated event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.ContactUpdated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Contacts/<contact-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "updated",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<contact id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Contact",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+### Microsoft.Graph.ContactDeleted event
+
+```json
+{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.ContactDeleted",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Contacts/<contact-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "@odata.type": "#Microsoft.OutlookServices.Notification",
+ "Id": null,
+ "SubscriptionExpirationDateTime": "2019-02-14T23:56:30.1307708Z",
+ "ChangeType": "deleted",
+ "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",
+ "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "clientState": "<client state>",
+ "resourceData": {
+ "Id": "<contact id>",
+ "@odata.etag": "<tag id>",
+ "od@ata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Contacts('<contact id>')",
+ "@odata.type": "#Microsoft.OutlookServices.Contact",
+ "OtherResourceData": "<some other resource data>"
+ }
+ }
+}
+```
+
+## Event properties
+
+An event has the following top-level data:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `source` | string | The tenant event source. This field isn't writeable. Microsoft Graph API provides this value. |
+| `subject` | string | Publisher-defined path to the event subject. |
+| `type` | string | One of the event types for this event source. |
+| `time` | string | The time the event is generated, based on the provider's UTC time. |
+| `id` | string | Unique identifier for the event. |
+| `data` | object | Event payload that provides the data about the resource state change. |
+| `specversion` | string | CloudEvents schema specification version. |
+++
+The data object has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `changeType` | string | The type of resource state change. |
+| `resource` | string | The resource identifier for which the event was raised. |
+| `tenantId` | string | The organization ID where the user or contact is kept. |
+| `clientState` | string | A secret provided by the user at the time of the Graph API subscription creation. |
+| `@odata.type` | string | The Graph API change type. |
+| `@odata.id` | string | The Graph API resource identifier for which the event was raised. |
+| `id` | string | The resource identifier for which the event was raised. |
+| `organizationId` | string | The Outlook tenant identifier. |
+| `eventTime` | string | The time at which the resource state occurred. |
+| `sequenceNumber` | string | A sequence number. |
+| `subscriptionExpirationDateTime` | string | The time in [RFC 3339](https://tools.ietf.org/html/rfc3339) format at which the Graph API subscription expires. |
+| `subscriptionId` | string | The Graph API subscription identifier. |
+| `tenantId` | string | The Outlook tenant identifier. |
+| `otherResourceData` | string | Placeholder that represents one or more dynamic properties that may be included in the event. |
++
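Outlook notifications identify the changed item by URL rather than carrying the full item, so a handler typically calls back into the API to read it. The sketch below is illustrative only: it assumes the sample event above has been saved to a hypothetical `eventcreated-sample.json` file and that `$token` already holds an access token with permission to read the resource.

```powershell
# Hypothetical inputs: a delivered event and a separately acquired access token
$event = Get-Content -Path "./eventcreated-sample.json" -Raw | ConvertFrom-Json
$token = "<access-token>"

# The notification carries the URL of the changed Outlook item
$resourceUrl = $event.data.resource
Write-Output "Change '$($event.data.ChangeType)' on $resourceUrl"

# Fetch the full item, because the notification itself only carries identifiers
Invoke-RestMethod -Method Get -Uri $resourceUrl -Headers @{ Authorization = "Bearer $token" }
```
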
+## Next steps
+
+* For an introduction to Azure Event Grid's Partner Events, see [Partner Events overview](partner-events-overview.md)
+* For information on how to subscribe to Microsoft Graph API to receive Outlook events, see [subscribe to Azure Graph API events](subscribe-to-graph-api-events.md).
+* For information about Azure Event Grid event handlers, see [event handlers](event-handlers.md).
+* For more information about creating an Azure Event Grid subscription, see [create event subscription](subscribe-through-portal.md#create-event-subscriptions) and [Event Grid subscription schema](subscription-creation-schema.md).
+* For information about how to configure an event subscription to select specific events to be delivered, consult [event filtering](event-filtering.md). You may also want to refer to [filter events](how-to-filter-events.md).
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
Title: What is Azure Event Grid? description: Send event data from a source to handlers with Azure Event Grid. Build event-based applications, and integrate with Azure services. Previously updated : 03/15/2022 Last updated : 06/09/2022 # What is Azure Event Grid?
-Azure Event Grid allows you to easily build applications with event-based architectures. First, select the Azure resource you would like to subscribe to, and then give the event handler or WebHook endpoint to send the event to. Event Grid has built-in support for events coming from Azure services, like storage blobs and resource groups. Event Grid also has support for your own events, using custom topics.
+Event Grid is a highly scalable, serverless event broker that you can use to integrate applications using events. Events are delivered by Event Grid to subscriber destinations such as applications, Azure services, or any endpoint to which Event Grid has network access. The source of those events can be other applications, SaaS services and Azure services.
-You can use filters to route specific events to different endpoints, multicast to multiple endpoints, and make sure your events are reliably delivered.
+With Event Grid you connect solutions using event-driven architectures. An [event-driven architecture](/azure/architecture/guide/architecture-styles/event-driven) uses events to communicate occurrences in system state changes, for example, to other applications or services. You can use filters to route specific events to different endpoints, multicast to multiple endpoints, and make sure your events are reliably delivered.
Azure Event Grid is deployed to maximize availability by natively spreading across multiple fault domains in every region, and across availability zones (in regions that support them). For a list of regions that are supported by Event Grid, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid&regions=all).
-This article provides an overview of Azure Event Grid. If you want to get started with Event Grid, see [Create and route custom events with Azure Event Grid](custom-event-quickstart.md).
+The event sources and event handlers or destinations are summarized in the following diagram.
:::image type="content" source="./media/overview/functional-model.png" alt-text="Event Grid model of sources and handlers" lightbox="./media/overview/functional-model-big.png":::
This article provides an overview of Azure Event Grid. If you want to get starte
## Event sources
-Currently, the following Azure services support sending events to Event Grid. For more information about a source in the list, select the link.
+Event Grid supports the following event sources:
+1. **Your own service or solution** that publishes events to Event Grid so that your customers can subscribe to them. Event Grid provides two types of resources you can use depending on your requirements; a short publishing sketch for custom topics follows this list.
+ - [Custom Topics](custom-topics.md) or "Topics" for short. Use custom topics if your requirements resemble the following user story:
+
+ "As an owner of a system, I want to communicate my system's state changes by publishing events and routing those events to event handlers, under my control or otherwise, that can process my system's events in a way they see fit."
+
+ - [Domains](event-domains.md). Use domains if you want to deliver events to multiple teams at scale. Your requirements probably are similar to the following one:
+
+ "As an owner of a system, I want to announce my systemΓÇÖs state changes to multiple teams in a single tenant so that they can process my systemΓÇÖs events in a way they see fit."
+2. A **SaaS provider or platform** can publish their events to Event Grid through a feature called [Partner Events](partner-events-overview.md). You can [subscribe to those events](subscribe-to-partner-events.md) and automate tasks, for example. Events from the following partners are currently available:
+ - [Auth0](auth0-overview.md)
+ - [Microsoft Graph API](subscribe-to-graph-api-events.md). Through Microsoft Graph API you can get events from [Microsoft Outlook](outlook-events.md), [Teams](teams-events.md), [Azure AD](azure-active-directory-events.md), SharePoint, Conversations, security alerts, and Universal Print.
+
+3. **An Azure service**. The following Azure services support sending events to Event Grid. For more information about a source in the list, select the link.
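As a quick, hedged illustration of the custom topics scenario in item 1 above, the following Azure PowerShell sketch creates a topic and publishes one event to it. The resource group, topic name, region, and event fields are placeholders for this example.

```powershell
# Create a custom topic, then read back its endpoint and an access key
New-AzEventGridTopic -ResourceGroupName "MyResourceGroup" -Name "my-topic" -Location "westus2"
$endpoint = (Get-AzEventGridTopic -ResourceGroupName "MyResourceGroup" -Name "my-topic").Endpoint
$key = (Get-AzEventGridTopicKey -ResourceGroupName "MyResourceGroup" -Name "my-topic").Key1

# Build a single event in the Event Grid schema
$eventBody = @{
    id          = [guid]::NewGuid().ToString()
    eventType   = "Contoso.Orders.OrderCreated"
    subject     = "orders/12345"
    eventTime   = (Get-Date).ToUniversalTime().ToString("o")
    data        = @{ orderId = "12345" }
    dataVersion = "1.0"
}

# Event Grid expects an array of events in the request body
$body = "[" + ($eventBody | ConvertTo-Json -Depth 5) + "]"

# Publish the event to the custom topic
Invoke-RestMethod -Method Post -Uri $endpoint -Headers @{ "aeg-sas-key" = $key } -Body $body
```
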
+ ## Event handlers
-For full details on the capabilities of each handler as well as related articles, see [event handlers](event-handlers.md). Currently, the following Azure services support handling events from Event Grid:
+For full details on the capabilities of each handler and related articles, see [event handlers](event-handlers.md). Currently, the following Azure services support handling events from Event Grid:
[!INCLUDE [event-handlers.md](includes/event-handlers.md)]
Azure Event Grid uses a pay-per-event pricing model, so you only pay for what yo
A tutorial that uses Azure Functions to stream data from Event Hubs to Azure Synapse Analytics. * [Event Grid REST API reference](/rest/api/eventgrid) Provides reference content for managing Event Subscriptions, routing, and filtering.
+* [Partner Events overview](partner-events-overview.md).
+* [subscribe to partner events](subscribe-to-partner-events.md).
event-grid Partner Events Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-graph-api.md
+
+ Title: Microsoft Graph API events in Azure Event Grid
+description: This article describes events published by Microsoft Graph API.
+ Last updated : 06/09/2022++
+# Microsoft Graph API events
+
+Microsoft Graph API provides a unified programmable model that you can use to receive events about state changes of resources in Microsoft Outlook, Teams, SharePoint, Azure Active Directory, Microsoft Conversations, and security alerts. For every resource in the following table, events for create, update and delete state changes are supported.
+
+## Graph API event sources
+
+|Microsoft event source |Resource(s) | Available event types |
+|: | : | :-|
+|Azure Active Directory| [User](/graph/api/resources/user), [Group](/graph/api/resources/group) | [Azure AD event types](azure-active-directory-events.md) |
+|Microsoft Outlook|[Event](/graph/api/resources/event) (calendar meeting), [Message](/graph/api/resources/message) (email), [Contact](/graph/api/resources/contact) | [Microsoft Outlook event types](outlook-events.md) |
+|Microsoft Teams|[ChatMessage](/graph/api/resources/chatmessage), [CallRecord](/graph/api/resources/callrecords-callrecord) (meeting) | [Microsoft Teams event types](teams-events.md) |
+|Microsoft SharePoint and OneDrive| [DriveItem](/graph/api/resources/driveitem)| |
+|Microsoft SharePoint| [List](/graph/api/resources/list)|
+|Security alerts| [Alert](/graph/api/resources/alert)|
+|Microsoft Conversations| [Conversation](/graph/api/resources/conversation)| |
+
+You create a Microsoft Graph API subscription to enable Graph API events to flow into a partner topic. The partner topic is automatically created for you as part of the Graph API subscription creation. You use that partner topic to [create event subscriptions](event-filtering.md) that send your events to any of the supported [event handlers](event-handlers.md) that best meet your requirements for processing the events.
++
+## Next steps
+
+* [Partner Events overview](partner-events-overview.md).
+* [subscribe to partner events](subscribe-to-partner-events.md), which includes instructions on how to subscribe to Microsoft Graph API events.
event-grid Partner Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview.md
You may want to use the Partner Events feature if you've one or more of the foll
## Available partners A partner must go through an [onboarding process](onboard-partner.md) before a customer can start receiving or sending events to partners. Following is the list of available partners and whether their services were designed to send events to or receive events from Event Grid.
+### Microsoft partners
+| Partner | Sends events to Azure? | Receives events from Azure? |
+| :--|:--:|:-:|
+| Microsoft Graph API* | Yes | N/A |
+
+#### Microsoft Graph API
+Through Microsoft Graph API, you can get events from a diverse set of Microsoft services such as [Azure AD](azure-active-directory-events.md), [Microsoft Outlook](outlook-events.md), [Teams](teams-events.md), **SharePoint**, and so on. For a complete list of event sources, see [Microsoft Graph API's change notifications documentation](/graph/webhooks#supported-resources).
+
+### Non-Microsoft partners
| Partner | Sends events to Azure? | Receives events from Azure? | | : |:--:|:-:| | Auth0 | Yes | N/A | ### Auth0+ [Auth0](https://auth0.com) is a managed authentication platform for businesses to authenticate, authorize, and secure access for applications, devices, and users. You can create an [Auth0 partner topic](auth0-overview.md) to connect your Auth0 and Azure accounts. This integration allows you to react to, log, and monitor Auth0 events in real time. To try it out, see [Integrate Azure Event Grid with Auth0](auth0-how-to.md). ## Verified partners
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
+
+ Title: Azure Event Grid - Subscribe to Microsoft Graph API events
+description: This article explains how to subscribe to events published by Microsoft Graph API.
+ Last updated : 06/09/2022++
+# Subscribe to events published by Microsoft Graph API
+This article describes steps to subscribe to events published by Microsoft Graph API. The following table lists the resources for which events are available through Graph API. For every resource, events for create, update and delete state changes are supported.
+
+|Microsoft event source |Resource(s) | Available event types |
+|: | : | :-|
+|Azure Active Directory| [User](/graph/api/resources/user), [Group](/graph/api/resources/group) | [Azure AD event types](azure-active-directory-events.md) |
+|Microsoft Outlook|[Event](/graph/api/resources/event) (calendar meeting), [Message](/graph/api/resources/message) (email), [Contact](/graph/api/resources/contact) | [Microsoft Outlook event types](outlook-events.md) |
+|Microsoft Teams|[ChatMessage](/graph/api/resources/chatmessage), [CallRecord](/graph/api/resources/callrecords-callrecord) (meeting) | [Microsoft Teams event types](teams-events.md) |
+|Microsoft SharePoint and OneDrive| [DriveItem](/graph/api/resources/driveitem)| |
+|Microsoft SharePoint| [List](/graph/api/resources/list)|
+|Security alerts| [Alert](/graph/api/resources/alert)|
+|Microsoft Conversations| [Conversation](/graph/api/resources/conversation)| |
+
+> [!IMPORTANT]
+>If you aren't familiar with the **Partner Events** feature, see [Partner Events overview](partner-events-overview.md).
++
+## Why should you use Microsoft Graph API with Event Grid as a destination?
+Besides the ability to subscribe to Microsoft Graph API events via Event Grid, you have [other options](/graph/change-notifications-delivery) through which you can receive similar notifications (not events). Consider using Microsoft Graph API to deliver events to Event Grid if you have at least one of the following requirements:
+
+- You're developing an event-driven solution that requires events from Azure Active Directory, Outlook, Teams, etc. to react to resource changes. You require the robust eventing model and publish-subscribe capabilities that Event Grid provides. For an overview of Event Grid, see [Event Grid concepts](concepts.md).
+- You want to use Event Grid to route events to multiple destinations using a single Graph API subscription and you want to avoid managing multiple Graph API subscriptions.
+- You need to route events to different downstream applications, webhooks, or Azure services depending on some of the properties in the event. For example, you may want to route event types such as `Microsoft.Graph.UserCreated` and `Microsoft.Graph.UserDeleted` to a specialized application that processes users' onboarding and off-boarding, and send `Microsoft.Graph.UserUpdated` events to another application that syncs contact information. You can achieve that with a single Graph API subscription when you use Event Grid as the notification destination. For more information, see [event filtering](event-filtering.md) and [event handlers](event-handlers.md).
+- Interoperability is important to you. You want to forward and handle events in a standard way using the CNCF [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) specification, with which Event Grid fully complies.
+- You like the extensibility support that CloudEvents provides. For example, if you want to trace events across compliant systems, you can use the CloudEvents [Distributed Tracing](https://github.com/cloudevents/spec/blob/v1.0.1/extensions/distributed-tracing.md) extension. Learn more about [CloudEvents extensions](https://github.com/cloudevents/spec/blob/v1.0.1/documented-extensions.md).
+- You want to use proven event-driven approaches adopted by the industry.
+
+## High-level steps
+
+The common steps to subscribe to events published by any partner, including Graph API, are described in [subscribe to partner events](subscribe-to-partner-events.md). For quick reference, the steps described in that article are listed here. This article covers step 3: enabling events to flow to a partner topic.
+
+1. Register the Event Grid resource provider with your Azure subscription.
+2. Authorize partner to create a partner topic in your resource group.
+3. [Enable events to flow to a partner topic](#enable-microsoft-graph-api-events-to-flow-to-your-partner-topic)
+4. Activate partner topic so that your events start flowing to your partner topic.
+5. Subscribe to events.
+
+### Enable Microsoft Graph API events to flow to your partner topic
+
+> [!IMPORTANT]
+> Microsoft Graph API's (MGA) ability to send events to Event Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from the [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and [.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) webhook samples to enable the flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask.graph.and.grid@microsoft.com?subject=Please allow my application ID">ask.graph.and.grid@microsoft.com</a> so that the Microsoft Graph API team can add your application ID to the allow list to use this new capability.
+
+You request Microsoft Graph API to send events by creating a Graph API subscription. When you create a Graph API subscription, the HTTP request should look like the following sample:
+
+```json
+POST to https://canary.graph.microsoft.com/testprodbetawebhooks1/subscriptions
+
+Body:
+{
+ "changeType": "Updated,Deleted,Created",
+ "notificationUrl": "EventGrid:?azuresubscriptionid=8A8A8A8A-4B4B-4C4C-4D4D-12E12E12E12E&resourcegroup=yourResourceGroup&partnertopic=youPartnerTopic&location=theAzureRegionFortheTopic",
+ "resource": "users",
+ "expirationDateTime": "2022-04-30T00:00:00Z",
+ "clientState": "mysecret"
+}
+```
+
+Here are some of the key payload properties:
+
+- `changeType`: the kind of resource changes for which you want to receive events. Valid values: `Updated`, `Deleted`, and `Created`. You can specify one or more of these values separated by commas.
+- `notificationUrl`: a URI that conforms to the following pattern: `EventGrid:?azuresubscriptionid=<your-azure-subscription-id>&resourcegroup=<your-resource-group-name>&partnertopic=<the-name-for-your-partner-topic>&location=<the-Azure-region-where-you-want-the-topic-created>`.
+- `resource`: the resource for which you need events announcing state changes.
+- `expirationDateTime`: the time at which the subscription expires and the flow of events stops. It must conform to the format specified in [RFC 3339](https://tools.ietf.org/html/rfc3339). You must specify an expiration time that is within the [maximum subscription length allowable for the resource type](/graph/api/resources/subscription#maximum-length-of-subscription-per-resource-type) used.
+- `clientState`: a value that you set when creating the Graph API subscription. For more information, see [Graph API subscription properties](/graph/api/resources/subscription#properties).
+
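If you'd rather issue the request from PowerShell than from one of the SDK samples, a minimal sketch is shown below. It assumes `$token` already holds an application access token that's valid for the preview endpoint, and that you replace the placeholder values in `notificationUrl` with your own.

```powershell
# Hypothetical: an access token acquired for Microsoft Graph
$token = "<access-token>"

# Request body mirroring the sample above
$body = @{
    changeType         = "Updated,Deleted,Created"
    notificationUrl    = "EventGrid:?azuresubscriptionid=<your-azure-subscription-id>&resourcegroup=<your-resource-group-name>&partnertopic=<your-partner-topic>&location=<azure-region>"
    resource           = "users"
    expirationDateTime = "2022-04-30T00:00:00Z"
    clientState        = "mysecret"
} | ConvertTo-Json

# Create the Graph API subscription against the preview endpoint
Invoke-RestMethod -Method Post `
    -Uri "https://canary.graph.microsoft.com/testprodbetawebhooks1/subscriptions" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body
```
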
+> [!NOTE]
+> Microsoft Graph API's capability to send events to Event Grid is only available in a specific Graph API environment. You will need to update your code so that it uses the following Graph API endpoint `https://canary.graph.microsoft.com/testprodbetawebhooks1`. For example, this is the way you can set the endpoint on your graph client (`com.microsoft.graph.requests.GraphServiceClient`) using the Graph API Java SDK:
+>
+>```java
+>graphClient.setServiceRoot("https://canary.graph.microsoft.com/testprodbetawebhooks1");
+>```
+
+**You can create a Microsoft Graph API subscription by following the instructions in the [Microsoft Graph API webhook samples](https://github.com/microsoftgraph?q=webhooks&type=public&language=&sort=)** that include code samples for [NodeJS](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java (Spring Boot)](https://github.com/microsoftgraph/java-spring-webhooks-sample), and [.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample). There are no samples available for Python, Go and other languages yet, but the [Graph SDK](/graph/sdks/sdks-overview) supports creating Graph API subscriptions using those programming languages.
+
+> [!NOTE]
+> - Partner topic names must be unique within the same Azure region. Each tenant-application ID combination can create up to 10 unique partner topics.
+> - Be mindful of certain [Graph API resources' service limits](/graph/webhooks#azure-ad-resource-limitations) when developing your solution.
+
+#### What happens when you create a Microsoft Graph API subscription?
+
+When you create a Graph API subscription with a `notificationUrl` bound to Event Grid, a partner topic is created in your Azure subscription. For that partner topic, you [configure event subscriptions](event-filtering.md) that send your events to any of the supported [event handlers](event-handlers.md) that best meet your requirements for processing the events.
+
+#### Microsoft Graph API Explorer
+For quick tests and to get to know the API, you can use the [Microsoft Graph API explorer](/graph/graph-explorer/graph-explorer-features). For anything beyond casual tests or learning, use the Graph SDKs as described above.
+
+## Next steps
+
+See the following articles:
+
+- [Azure Event Grid - Partner Events overview](partner-events-overview.md)
+- [Microsoft Graph API webhook samples](https://github.com/microsoftgraph?q=webhooks&type=public&language=&sort=). Use these samples to send events to Event Grid. You just need to provide a suitable ``notificationUrl`` value according to the request example above.
+- [Varied set of resources on Microsoft Graph API](https://developer.microsoft.com/en-us/graph/rest-api).
+- [Microsoft Graph API webhooks](/graph/api/resources/webhooks)
+- [Best practices for working with Microsoft Graph API](/graph/best-practices-concept)
+- [Microsoft Graph API SDKs](/graph/sdks/sdks-overview)
+- [Microsoft Graph API tutorials](/graph/tutorials), which show how to use Graph API in different programming languages. These don't necessarily include examples for sending events to Event Grid.
+
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
Title: Azure Event Grid - Subscribe to partner events description: This article explains how to subscribe to events from a partner using Azure Event Grid. Previously updated : 03/31/2022 Last updated : 06/09/2022 # Subscribe to events published by a partner with Azure Event Grid
Following example shows the way to create a partner configuration resource that
Here's the list of partners and a link to submit a request to enable events flow to a partner topic. - [Auth0](auth0-how-to.md)
+- [Microsoft Graph API](subscribe-to-graph-api-events.md)
## Activate a partner topic
event-grid Teams Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/teams-events.md
+
+ Title: Microsoft Teams events in Azure Event Grid
+description: This article describes Microsoft Teams events in Azure Event Grid.
+ Last updated : 06/06/2022++
+# Microsoft Teams events in Azure Event Grid
+
+This article provides the list of available event types for Microsoft Teams events, which are published by Microsoft Graph API. For an introduction to event schemas, see [CloudEvents schema](cloud-event-schema.md).
+
+## Available event types
+These events are triggered when a call record is created or updated, or when a chat message is created, updated, or deleted, for example by operating on those resources through Microsoft Graph API.
+
+ | Event name | Description |
+ | - | -- |
+ | **Microsoft.Graph.CallRecordCreated** | Triggered when a call or meeting is produced in Microsoft Teams. |
+ | **Microsoft.Graph.CallRecordUpdated** | Triggered when a call or meeting is updated in Microsoft Teams. |
+ | **Microsoft.Graph.ChatMessageCreated** | Triggered when a chat message is sent via teams or channels in Microsoft Teams. |
+ | **Microsoft.Graph.ChatMessageUpdated** | Triggered when a chat message is edited via teams or channels in Microsoft Teams. |
+ | **Microsoft.Graph.ChatMessageDeleted** | Triggered when a chat message is deleted via Teams or channels in Teams. |
++
+## Next steps
+
+* For an introduction to Azure Event Grid's Partner Events, see [Partner Events overview](partner-events-overview.md)
+* For information on how to subscribe to Microsoft Graph API events, see [subscribe to Azure Graph API events](subscribe-to-graph-api-events.md).
+* For information about Azure Event Grid event handlers, see [event handlers](event-handlers.md).
+* For more information about creating an Azure Event Grid subscription, see [create event subscription](subscribe-through-portal.md#create-event-subscriptions) and [Event Grid subscription schema](subscription-creation-schema.md).
+* For information about how to configure an event subscription to select specific events to be delivered, consult [event filtering](event-filtering.md). You may also want to refer to [filter events](how-to-filter-events.md).
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
Previously updated : 12/14/2020 Last updated : 06/09/2022
ExpressRoute Direct gives you the ability to directly connect to Microsoft's glo
## Before you begin
-Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, please do the following via Azure PowerShell:
+Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, run the following via Azure PowerShell:
1. Sign in to Azure and select the subscription you wish to enroll. ```azurepowershell-interactive
Once enrolled, verify that the **Microsoft.Network** resource provider is regist
## <a name="authorization"></a>Generate the Letter of Authorization (LOA)
-Reference the recently created ExpressRoute Direct resource, input a customer name to write the LOA to and (optionally) define a file location to store the document. If a file path is not referenced, the document will download to the current directory.
+Reference the recently created ExpressRoute Direct resource, input a customer name to write the LOA to and (optionally) define a file location to store the document. If a file path isn't referenced, the document will download to the current directory.
### Azure PowerShell
This process should be used to conduct a Layer 1 test, ensuring that each cross-
## <a name="circuit"></a>Create a circuit
-By default, you can create 10 circuits in the subscription where the ExpressRoute Direct resource is. This limit can be increased by support. You are responsible for tracking both Provisioned and Utilized Bandwidth. Provisioned bandwidth is the sum of bandwidth of all circuits on the ExpressRoute Direct resource and utilized bandwidth is the physical usage of the underlying physical interfaces.
+By default, you can create 10 circuits in the subscription where the ExpressRoute Direct resource is. This limit can be increased by support. You're responsible for tracking both Provisioned and Utilized Bandwidth. Provisioned bandwidth is the sum of bandwidth of all circuits on the ExpressRoute Direct resource and utilized bandwidth is the physical usage of the underlying physical interfaces.
-There are additional circuit bandwidths that can be utilized on ExpressRoute Direct to support only the scenarios outlined above. These bandwidths are 40 Gbps and 100 Gbps.
+There are more circuit bandwidths that can be utilized on ExpressRoute Direct to support only the scenarios outlined above. These bandwidths are 40 Gbps and 100 Gbps.
**SkuTier** can be Local, Standard, or Premium.
-**SkuFamily** can only be MeteredData. Unlimited is not supported on ExpressRoute Direct.
+**SkuFamily** can only be MeteredData. Unlimited isn't supported on ExpressRoute Direct.
Create a circuit on the ExpressRoute Direct resource.
You can delete the ExpressRoute Direct resource by running the following command
```powershell Remove-AzExpressRoutePort -Name $Name -ResourceGroupName $ResourceGroupName ```+
+## Public Preview
+
+The following scenario is in public preview:
+
+ExpressRoute Direct and ExpressRoute circuit(s) in different subscriptions or Azure Active Directory tenants. You'll create an authorization for your ExpressRoute Direct resource, and redeem the authorization to create an ExpressRoute circuit in a different subscription or Azure Active Directory tenant.
+
+### Enable ExpressRoute Direct and circuits in different subscriptions
+
+1. To enroll in the preview, send an e-mail to ExpressRouteDirect@microsoft.com with the ExpressRoute Direct and target ExpressRoute circuit Azure subscription IDs. You'll receive an e-mail once the feature is enabled for your subscriptions.
+
+1. Create the ExpressRoute Direct authorization by running the following commands in PowerShell:
+
+ ```powershell
+ Add-AzExpressRoutePortAuthorization -Name $Name -ExpressRoutePort $ERPort
+ Set-AzExpressRoutePort -ExpressRoutePort $ERPort
+ ```
+
+1. Verify that the authorization was created successfully, and store the ExpressRoute Direct resource, which now includes the authorization, in a variable:
+
+ ```powershell
+ $ERDirect = Get-AzExpressRoutePort -Name $Name -ResourceGroupName $ResourceGroupName
+ $ERDirect
+ ```
+
+1. Redeem the authorization to create the ExpressRoute Direct circuit with the following command:
+
+ ```powershell
+ New-AzExpressRouteCircuit -Name $Name -ResourceGroupName $RGName -ExpressRoutePort $ERDirect -Location $Location -SkuTier $SkuTier -SkuFamily $SkuFamily -BandwidthInGbps $BandwidthInGbps -Authorization $ERDirect.Authorization
+ ```
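After redeeming the authorization, you can confirm from the circuit's subscription that it was created against the ExpressRoute Direct port. This is a short, optional check; the subscription ID below is a placeholder.

```powershell
# Switch to the subscription that holds the new circuit, then inspect it
Select-AzSubscription -SubscriptionId "<circuit-subscription-id>"
Get-AzExpressRouteCircuit -Name $Name -ResourceGroupName $RGName
```
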
## Next steps For more information about ExpressRoute Direct, see the [Overview](expressroute-erdirect-about.md).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | Interxion | | **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | Supported | Aryaka Networks, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Intercloud, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Orange, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo| | **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite, Megaport, PacketFabric, Zayo |
+| **Doha2** | [Ooredoo](https://www.ooredoo.qa/portal/OoredooQatar/b2b-data-centre) | 3 | Qatar Central | Supported | |
| **Dubai** | [PCCS](https://www.pacificcontrols.net/cloudservices/https://docsupdatetracker.net/index.html) | 3 | UAE North | n/a | Etisalat UAE | | **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom | | **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | Supported | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport, Zayo|
firewall-manager Secured Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secured-virtual-hub.md
Previously updated : 10/12/2020 Last updated : 06/09/2022
You can choose the required security providers to protect and govern your networ
Using Firewall Manager in the Azure portal, you can either create a new secured virtual hub, or convert an existing virtual hub that you previously created using Azure Virtual WAN.
-## Gated public preview
+## Public preview features
-The below features are currently in gated public preview.
+The following features are in public preview:
| Feature | Description | | - | |
-| Routing Intent and Policies enabling Inter-hub security | This feature allows customers to configure internet-bound, private or inter-hub traffic flow through the Azure Firewall. Please review [Routing Intent and Policies](../virtual-wan/how-to-routing-policies.md) to learn more. |
+| Routing Intent and Policies enabling Inter-hub security | This feature allows you to configure internet-bound, private or inter-hub traffic flow through Azure Firewall. For more information, see [Routing Intent and Policies](../virtual-wan/how-to-routing-policies.md). |
## Next steps
firewall-manager Threat Intelligence Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/threat-intelligence-settings.md
Previously updated : 06/30/2020 Last updated : 06/09/2022
You can configure threat intelligence in one of the three modes that are describ
|Mode |Description | |||
-|`Off` | The threat intelligence feature is not enabled for your firewall. |
-|`Alert only` | You will receive high-confidence alerts for traffic going through your firewall to or from known malicious IP addresses and domains. |
-|`Alert and deny` | Traffic is blocked and you will receive high-confidence alerts when traffic is detected attempting to go through your firewall to or from known malicious IP addresses and domains. |
+|`Off` | The threat intelligence feature isn't enabled for your firewall. |
+|`Alert only` | You'll receive high-confidence alerts for traffic going through your firewall to or from known malicious IP addresses and domains. |
+|`Alert and deny` | Traffic is blocked and you'll receive high-confidence alerts when traffic is detected attempting to go through your firewall to or from known malicious IP addresses and domains. |
> [!NOTE] > Threat intelligence mode is inherited from parent policies to child policies. A child policy must be configured with the same or a stricter mode than the parent policy.
The following log excerpt shows a triggered rule for outbound traffic to a malic
## Testing -- **Outbound testing** - Outbound traffic alerts should be a rare occurrence, as it means that your environment has been compromised. To help test outbound alerts are working, a test FQDN has been created that triggers an alert. Use **testmaliciousdomain.eastus.cloudapp.azure.com** for your outbound tests.
+- **Outbound testing** - Outbound traffic alerts should be a rare occurrence, as they mean that your environment has been compromised. To help you test that outbound alerts are working, the following FQDNs have been created to trigger an alert. Use them for your outbound tests; a test sketch follows this list:
+<br><br>
+
+ - `documentos-001.brazilsouth.cloudapp.azure.com`
+ - `itaucardiupp.centralus.cloudapp.azure.com`
+ - `azure-c.online`
+ - `www.azureadsec.com`
+ - `azurein360.co`
+
+ > [!NOTE]
+ > These FQDNs are subject to change, so they are not guaranteed to always work. Any changes will be documented here.
+ - **Inbound testing** - You can expect to see alerts on incoming traffic if DNAT rules are configured on the firewall. This is true even if only specific sources are allowed on the DNAT rule and traffic is otherwise denied. Azure Firewall doesn't alert on all known port scanners; only on scanners that are known to also engage in malicious activity.
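The following is a small, illustrative PowerShell sketch of the outbound test. Run it from a VM whose outbound traffic routes through the firewall; the chosen FQDN comes from the list above, and the request is expected to fail when the policy mode is **Alert and deny**.

```powershell
# Attempt an outbound connection to one of the test FQDNs listed above.
# In "Alert only" mode the request may succeed but still raise an alert;
# in "Alert and deny" mode the connection should be blocked.
try {
    Invoke-WebRequest -Uri "http://www.azureadsec.com" -UseBasicParsing -TimeoutSec 15
}
catch {
    Write-Output "Request blocked or failed: $($_.Exception.Message)"
}
```
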
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In PowerShell, run the following command: ```azurepowershell-interactive
- New-AzADServicePrincipal -ApplicationId '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8' -Role Contributor
+ New-AzADServicePrincipal -ApplicationId '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8'
``` ##### Azure CLI
hdinsight Apache Hadoop Use Hive Ambari View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-ambari-view.md
description: Learn how to use the Hive View from your web browser to submit Hive
Previously updated : 04/23/2020 Last updated : 06/09/2022 # Use Apache Ambari Hive View with Apache Hadoop in HDInsight
hdinsight Hdinsight Hadoop Manage Ambari Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-manage-ambari-rest-api.md
description: Learn how to use Ambari to monitor and manage Hadoop clusters in Az
Previously updated : 04/29/2020 Last updated : 06/09/2022 # Manage HDInsight clusters by using the Apache Ambari REST API
hdinsight Hdinsight Linux Ambari Ssh Tunnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-linux-ambari-ssh-tunnel.md
description: Learn how to use an SSH tunnel to securely browse web resources hos
Previously updated : 04/14/2020 Last updated : 06/09/2022 # Use SSH tunneling to access Apache Ambari web UI, JobHistory, NameNode, Apache Oozie, and other UIs
hdinsight Hdinsight Scaling Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-scaling-best-practices.md
Previously updated : 04/29/2020 Last updated : 06/09/2022 # Manually scale Azure HDInsight clusters
hdinsight Hdinsight Troubleshoot Failed Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-failed-cluster.md
description: Diagnose and troubleshoot a slow or failing job on an Azure HDInsig
Previously updated : 08/15/2019 Last updated : 06/09/2022 # Troubleshoot a slow or failing job on a HDInsight cluster
To help diagnose the source of a cluster error, start a new cluster with the sam
* [Analyze HDInsight Logs](./hdinsight-troubleshoot-guide.md) * [Access Apache Hadoop YARN application sign in Linux-based HDInsight](hdinsight-hadoop-access-yarn-app-logs-linux.md) * [Enable heap dumps for Apache Hadoop services on Linux-based HDInsight](hdinsight-hadoop-collect-debug-heap-dump-linux.md)
-* [Known Issues for Apache Spark cluster on HDInsight](./spark/apache-spark-known-issues.md)
+* [Known Issues for Apache Spark cluster on HDInsight](./spark/apache-spark-known-issues.md)
hdinsight Hdinsight Use Oozie Linux Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-oozie-linux-mac.md
description: Use Hadoop Oozie in Linux-based HDInsight. Learn how to define an O
Previously updated : 04/27/2020 Last updated : 05/09/2022 # Use Apache Oozie with Apache Hadoop to define and run a workflow on Linux-based Azure HDInsight
In this article, you learned how to define an Oozie workflow and how to run an O
* [Upload data for Apache Hadoop jobs in HDInsight](hdinsight-upload-data.md) * [Use Apache Sqoop with Apache Hadoop in HDInsight](hadoop/apache-hadoop-use-sqoop-mac-linux.md) * [Use Apache Hive with Apache Hadoop on HDInsight](hadoop/hdinsight-use-hive.md)
-* [Troubleshoot Apache Oozie](./troubleshoot-oozie.md)
+* [Troubleshoot Apache Oozie](./troubleshoot-oozie.md)
hdinsight Machine Learning Services Quickstart Job Rconsole https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/machine-learning-services-quickstart-job-rconsole.md
- Title: 'Quickstart: R script on ML Services & R console - Azure HDInsight'
-description: In the quickstart, you execute an R script on an ML Services cluster in Azure HDInsight using R console.
-- Previously updated : 06/19/2019--
-#Customer intent: I want to learn how to execute an R script using ML Services in Azure HDInsight for R console.
--
-# Quickstart: Execute an R script on an ML Services cluster in Azure HDInsight using R console
--
-ML Services on Azure HDInsight allows R scripts to use Apache Spark and Apache Hadoop MapReduce to run distributed computations. ML Services controls how calls are executed by setting the compute context. The edge node of a cluster provides a convenient place to connect to the cluster and to run your R scripts. With an edge node, you have the option of running the parallelized distributed functions of RevoScaleR across the cores of the edge node server. You can also run them across the nodes of the cluster by using RevoScaleR's Hadoop Map Reduce or Apache Spark compute contexts.
-
-In this quickstart, you learn how to run an R script with R console that demonstrates using Spark for distributed R computations. You will define a compute context to perform computations locally on an edge node, and again distributed across the nodes in the HDInsight cluster.
-
-## Prerequisites
-
-* An ML Services cluster on HDInsight. See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) and select **ML Services** for **Cluster type**.
-
-* An SSH client. For more information, see [Connect to HDInsight (Apache Hadoop) using SSH](../hdinsight-hadoop-linux-use-ssh-unix.md).
--
-## Connect to R console
-
-1. Connect to the edge node of an ML Services HDInsight cluster using SSH. Edit the command below by replacing `CLUSTERNAME` with the name of your cluster, and then enter the command:
-
- ```cmd
- ssh sshuser@CLUSTERNAME-ed-ssh.azurehdinsight.net
- ```
-
-1. From the SSH session, use the following command to start the R console:
-
- ```
- R
- ```
-
- You should see an output with the version of ML Server, in addition to other information.
--
-## Use a compute context
-
-1. From the `>` prompt, you can enter R code. Use the following code to load example data into the default storage for HDInsight:
-
- ```R
- # Set the HDFS (WASB) location of example data
- bigDataDirRoot <- "/example/data"
-
- # create a local folder for storing data temporarily
- source <- "/tmp/AirOnTimeCSV2012"
- dir.create(source)
-
- # Download data to the tmp folder
- remoteDir <- "https://packages.revolutionanalytics.com/datasets/AirOnTimeCSV2012"
- download.file(file.path(remoteDir, "airOT201201.csv"), file.path(source, "airOT201201.csv"))
- download.file(file.path(remoteDir, "airOT201202.csv"), file.path(source, "airOT201202.csv"))
- download.file(file.path(remoteDir, "airOT201203.csv"), file.path(source, "airOT201203.csv"))
- download.file(file.path(remoteDir, "airOT201204.csv"), file.path(source, "airOT201204.csv"))
- download.file(file.path(remoteDir, "airOT201205.csv"), file.path(source, "airOT201205.csv"))
- download.file(file.path(remoteDir, "airOT201206.csv"), file.path(source, "airOT201206.csv"))
- download.file(file.path(remoteDir, "airOT201207.csv"), file.path(source, "airOT201207.csv"))
- download.file(file.path(remoteDir, "airOT201208.csv"), file.path(source, "airOT201208.csv"))
- download.file(file.path(remoteDir, "airOT201209.csv"), file.path(source, "airOT201209.csv"))
- download.file(file.path(remoteDir, "airOT201210.csv"), file.path(source, "airOT201210.csv"))
- download.file(file.path(remoteDir, "airOT201211.csv"), file.path(source, "airOT201211.csv"))
- download.file(file.path(remoteDir, "airOT201212.csv"), file.path(source, "airOT201212.csv"))
-
- # Set directory in bigDataDirRoot to load the data into
- inputDir <- file.path(bigDataDirRoot,"AirOnTimeCSV2012")
-
- # Make the directory
- rxHadoopMakeDir(inputDir)
-
- # Copy the data from source to input
- rxHadoopCopyFromLocal(source, bigDataDirRoot)
- ```
-
- This step may take around 10 minutes to complete.
-
-1. Create some data info and define two data sources. Enter the following code in the R console:
-
- ```R
- # Define the HDFS (WASB) file system
- hdfsFS <- RxHdfsFileSystem()
-
- # Create info list for the airline data
- airlineColInfo <- list(
- DAY_OF_WEEK = list(type = "factor"),
- ORIGIN = list(type = "factor"),
- DEST = list(type = "factor"),
- DEP_TIME = list(type = "integer"),
- ARR_DEL15 = list(type = "logical"))
-
- # get all the column names
- varNames <- names(airlineColInfo)
-
- # Define the text data source in hdfs
- airOnTimeData <- RxTextData(inputDir, colInfo = airlineColInfo, varsToKeep = varNames, fileSystem = hdfsFS)
-
- # Define the text data source in local system
- airOnTimeDataLocal <- RxTextData(source, colInfo = airlineColInfo, varsToKeep = varNames)
-
- # formula to use
- formula = "ARR_DEL15 ~ ORIGIN + DAY_OF_WEEK + DEP_TIME + DEST"
- ```
-
-1. Run a logistic regression over the data using the **local** compute context. Enter the following code in the R console:
-
- ```R
- # Set a local compute context
- rxSetComputeContext("local")
-
- # Run a logistic regression
- system.time(
- modelLocal <- rxLogit(formula, data = airOnTimeDataLocal)
- )
-
- # Display a summary
- summary(modelLocal)
- ```
-
- The computations should complete in about 7 minutes. You should see output that ends with lines similar to the following snippet:
-
- ```output
- Data: airOnTimeDataLocal (RxTextData Data Source)
- File name: /tmp/AirOnTimeCSV2012
- Dependent variable(s): ARR_DEL15
- Total independent variables: 634 (Including number dropped: 3)
- Number of valid observations: 6005381
- Number of missing observations: 91381
- -2*LogLikelihood: 5143814.1504 (Residual deviance on 6004750 degrees of freedom)
-
- Coefficients:
- Estimate Std. Error z value Pr(>|z|)
- (Intercept) -3.370e+00 1.051e+00 -3.208 0.00134 **
- ORIGIN=JFK 4.549e-01 7.915e-01 0.575 0.56548
- ORIGIN=LAX 5.265e-01 7.915e-01 0.665 0.50590
- ......
- DEST=SHD 5.975e-01 9.371e-01 0.638 0.52377
- DEST=TTN 4.563e-01 9.520e-01 0.479 0.63172
- DEST=LAR -1.270e+00 7.575e-01 -1.676 0.09364 .
- DEST=BPT Dropped Dropped Dropped Dropped
-
-
-
- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-
- Condition number of final variance-covariance matrix: 11904202
- Number of iterations: 7
- ```
-
-1. Run the same logistic regression using the **Spark** context. The Spark context distributes the processing over all the worker nodes in the HDInsight cluster. Enter the following code in the R console:
-
- ```R
- # Define the Spark compute context
- mySparkCluster <- RxSpark()
-
- # Set the compute context
- rxSetComputeContext(mySparkCluster)
-
- # Run a logistic regression
- system.time(
- modelSpark <- rxLogit(formula, data = airOnTimeData)
- )
-
- # Display a summary
- summary(modelSpark)
- ```
-
- The computations should complete in about 5 minutes.
-
-1. To quit the R console, use the following command:
-
- ```R
- quit()
- ```
-
-## Clean up resources
-
-After you complete the quickstart, you may want to delete the cluster. With HDInsight, your data is stored in Azure Storage, so you can safely delete a cluster when it is not in use. You are also charged for an HDInsight cluster, even when it is not in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they are not in use.
-
-To delete a cluster, see [Delete an HDInsight cluster using your browser, PowerShell, or the Azure CLI](../hdinsight-delete-cluster.md).
-
-## Next steps
-
-In this quickstart, you learned how to run an R script with R console that demonstrated using Spark for distributed R computations. Advance to the next article to learn the options that are available to specify whether and how execution is parallelized across cores of the edge node or HDInsight cluster.
-
-> [!div class="nextstepaction"]
->[Compute context options for ML Services on HDInsight](./r-server-compute-contexts.md)
hdinsight Machine Learning Services Quickstart Job Rstudio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/machine-learning-services-quickstart-job-rstudio.md
- Title: 'Quickstart: RStudio Server & ML Services for R - Azure HDInsight'
-description: In the quickstart, you execute an R script on an ML Services cluster in Azure HDInsight using RStudio Server.
-- Previously updated : 06/19/2019--
-#Customer intent: I want to learn how to execute an R script using ML Services in Azure HDInsight for RStudio Server.
--
-# Quickstart: Execute an R script on an ML Services cluster in Azure HDInsight using RStudio Server
--
-ML Services on Azure HDInsight allows R scripts to use Apache Spark and Apache Hadoop MapReduce to run distributed computations. ML Services controls how calls are executed by setting the compute context. The edge node of a cluster provides a convenient place to connect to the cluster and to run your R scripts. With an edge node, you have the option of running the parallelized distributed functions of RevoScaleR across the cores of the edge node server. You can also run them across the nodes of the cluster by using RevoScaleR's Hadoop Map Reduce or Apache Spark compute contexts.
-
-In this quickstart, you learn how to run an R script with RStudio Server that demonstrates using Spark for distributed R computations. You will define a compute context to perform computations locally on an edge node, and again distributed across the nodes in the HDInsight cluster.
-
-## Prerequisite
-
-An ML Services cluster on HDInsight. See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) and select **ML Services** for **Cluster type**.
-
-## Connect to RStudio Server
-
-RStudio Server runs on the cluster's edge node. Go to the following URL where `CLUSTERNAME` is the name of the ML Services cluster you created:
-
-```
-https://CLUSTERNAME.azurehdinsight.net/rstudio/
-```
-
-The first time you sign in, you need to authenticate twice. At the first authentication prompt, provide the cluster admin username and password (the default is `admin`). At the second authentication prompt, provide the SSH username and password (the default is `sshuser`). Subsequent sign-ins require only the SSH credentials.
-
-Once you are connected, your screen should resemble the following screenshot:
--
-## Use a compute context
-
-1. From RStudio Server, use the following code to load example data into the default storage for HDInsight:
-
- ```RStudio
- # Set the HDFS (WASB) location of example data
- bigDataDirRoot <- "/example/data"
-
- # create a local folder for storing data temporarily
- source <- "/tmp/AirOnTimeCSV2012"
- dir.create(source)
-
- # Download data to the tmp folder
- remoteDir <- "https://packages.revolutionanalytics.com/datasets/AirOnTimeCSV2012"
- download.file(file.path(remoteDir, "airOT201201.csv"), file.path(source, "airOT201201.csv"))
- download.file(file.path(remoteDir, "airOT201202.csv"), file.path(source, "airOT201202.csv"))
- download.file(file.path(remoteDir, "airOT201203.csv"), file.path(source, "airOT201203.csv"))
- download.file(file.path(remoteDir, "airOT201204.csv"), file.path(source, "airOT201204.csv"))
- download.file(file.path(remoteDir, "airOT201205.csv"), file.path(source, "airOT201205.csv"))
- download.file(file.path(remoteDir, "airOT201206.csv"), file.path(source, "airOT201206.csv"))
- download.file(file.path(remoteDir, "airOT201207.csv"), file.path(source, "airOT201207.csv"))
- download.file(file.path(remoteDir, "airOT201208.csv"), file.path(source, "airOT201208.csv"))
- download.file(file.path(remoteDir, "airOT201209.csv"), file.path(source, "airOT201209.csv"))
- download.file(file.path(remoteDir, "airOT201210.csv"), file.path(source, "airOT201210.csv"))
- download.file(file.path(remoteDir, "airOT201211.csv"), file.path(source, "airOT201211.csv"))
- download.file(file.path(remoteDir, "airOT201212.csv"), file.path(source, "airOT201212.csv"))
-
- # Set directory in bigDataDirRoot to load the data into
- inputDir <- file.path(bigDataDirRoot,"AirOnTimeCSV2012")
-
- # Make the directory
- rxHadoopMakeDir(inputDir)
-
- # Copy the data from source to input
- rxHadoopCopyFromLocal(source, bigDataDirRoot)
- ```
-
- This step may take around 8 minutes to complete.
-
-1. Create some data info and define two data sources. Enter the following code in RStudio:
-
- ```RStudio
- # Define the HDFS (WASB) file system
- hdfsFS <- RxHdfsFileSystem()
-
- # Create info list for the airline data
- airlineColInfo <- list(
- DAY_OF_WEEK = list(type = "factor"),
- ORIGIN = list(type = "factor"),
- DEST = list(type = "factor"),
- DEP_TIME = list(type = "integer"),
- ARR_DEL15 = list(type = "logical"))
-
- # get all the column names
- varNames <- names(airlineColInfo)
-
- # Define the text data source in hdfs
- airOnTimeData <- RxTextData(inputDir, colInfo = airlineColInfo, varsToKeep = varNames, fileSystem = hdfsFS)
-
- # Define the text data source in local system
- airOnTimeDataLocal <- RxTextData(source, colInfo = airlineColInfo, varsToKeep = varNames)
-
- # formula to use
- formula = "ARR_DEL15 ~ ORIGIN + DAY_OF_WEEK + DEP_TIME + DEST"
- ```
-
-1. Run a logistic regression over the data using the **local** compute context. Enter the following code in RStudio:
-
- ```RStudio
- # Set a local compute context
- rxSetComputeContext("local")
-
- # Run a logistic regression
- system.time(
- modelLocal <- rxLogit(formula, data = airOnTimeDataLocal)
- )
-
- # Display a summary
- summary(modelLocal)
- ```
-
- The computations should complete in about 7 minutes. You should see output that ends with lines similar to the following snippet:
-
- ```output
- Data: airOnTimeDataLocal (RxTextData Data Source)
- File name: /tmp/AirOnTimeCSV2012
- Dependent variable(s): ARR_DEL15
- Total independent variables: 634 (Including number dropped: 3)
- Number of valid observations: 6005381
- Number of missing observations: 91381
- -2*LogLikelihood: 5143814.1504 (Residual deviance on 6004750 degrees of freedom)
-
- Coefficients:
- Estimate Std. Error z value Pr(>|z|)
- (Intercept) -3.370e+00 1.051e+00 -3.208 0.00134 **
- ORIGIN=JFK 4.549e-01 7.915e-01 0.575 0.56548
- ORIGIN=LAX 5.265e-01 7.915e-01 0.665 0.50590
- ......
- DEST=SHD 5.975e-01 9.371e-01 0.638 0.52377
- DEST=TTN 4.563e-01 9.520e-01 0.479 0.63172
- DEST=LAR -1.270e+00 7.575e-01 -1.676 0.09364 .
- DEST=BPT Dropped Dropped Dropped Dropped
-
-
-
- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-
- Condition number of final variance-covariance matrix: 11904202
- Number of iterations: 7
- ```
-
-1. Run the same logistic regression using the **Spark** context. The Spark context distributes the processing over all the worker nodes in the HDInsight cluster. Enter the following code in RStudio:
-
- ```RStudio
- # Define the Spark compute context
- mySparkCluster <- RxSpark()
-
- # Set the compute context
- rxSetComputeContext(mySparkCluster)
-
- # Run a logistic regression
- system.time(
- modelSpark <- rxLogit(formula, data = airOnTimeData)
- )
-
- # Display a summary
- summary(modelSpark)
- ```
-
- The computations should complete in about 5 minutes.
-
-## Clean up resources
-
-After you complete the quickstart, you may want to delete the cluster. With HDInsight, your data is stored in Azure Storage, so you can safely delete a cluster when it is not in use. You are also charged for an HDInsight cluster, even when it is not in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they are not in use.
-
-To delete a cluster, see [Delete an HDInsight cluster using your browser, PowerShell, or the Azure CLI](../hdinsight-delete-cluster.md).
-
-## Next steps
-
-In this quickstart, you learned how to run an R script with RStudio Server that demonstrated using Spark for distributed R computations. Advance to the next article to learn the options that are available to specify whether and how execution is parallelized across cores of the edge node or HDInsight cluster.
-
-> [!div class="nextstepaction"]
->[Compute context options for ML Services on HDInsight](./r-server-compute-contexts.md)
-
-> [!NOTE]
-> This page describes features of RStudio software. Microsoft Azure HDInsight is not affiliated with RStudio, Inc.
hdinsight Ml Services Tutorial Spark Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/ml-services-tutorial-spark-compute.md
- Title: 'Tutorial: Use R in a Spark compute context in Azure HDInsight'
-description: Tutorial - Get started with R and Spark on an Azure HDInsight Machine Learning services cluster.
-- Previously updated : 06/21/2019-
-#Customer intent: As a developer, I need to understand the Spark compute context for Machine Learning services.
--
-# Tutorial: Use R in a Spark compute context in Azure HDInsight
--
-This tutorial provides a step-by-step introduction to using the R functions in Apache Spark that run on an Azure HDInsight Machine Learning services cluster.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Download the sample data to local storage
-> * Copy the data to default storage
-> * Set up a dataset
-> * Create data sources
-> * Create a compute context for Spark
-> * Fit a linear model
-> * Use composite XDF files
-> * Convert XDF to CSV
-
-## Prerequisites
-
-* An Azure HDInsight Machine Learning services cluster. Go to [Create Apache Hadoop clusters by using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) and, for **Cluster type**, select **ML Services**.
-
-## Connect to RStudio Server
-
-RStudio Server runs on the cluster's edge node. Go to the following site (where *CLUSTERNAME* in the URL is the name of the HDInsight Machine Learning services cluster you created):
-
-```
-https://CLUSTERNAME.azurehdinsight.net/rstudio/
-```
-
-The first time you sign in, you authenticate twice. At the first authentication prompt, provide the cluster admin username and password (the default is *admin*). At the second authentication prompt, provide the SSH username and password (the default is *sshuser*). Subsequent sign-ins require only the SSH credentials.
-
-## Download the sample data to local storage
-
-The *Airline 2012 On-Time Data Set* consists of 12 comma-separated files that contain flight arrival and departure details for all commercial flights within the US for the year 2012. This dataset is large, with over 6 million observations.
-
-1. Initialize a few environment variables. In the RStudio Server console, enter the following code:
-
- ```R
- bigDataDirRoot <- "/tutorial/data" # root directory on cluster default storage
- localDir <- "/tmp/AirOnTimeCSV2012" # directory on edge node
- remoteDir <- "https://packages.revolutionanalytics.com/datasets/AirOnTimeCSV2012" # location of data
- ```
-
-1. In the right pane, select the **Environment** tab. The variables are displayed under **Values**.
-
- :::image type="content" source="./media/ml-services-tutorial-spark-compute/hdinsight-rstudio-image.png" alt-text="HDInsight R studio web console" border="true":::
-
-1. Create a local directory, and download the sample data. In RStudio, enter the following code:
-
- ```R
- # Create local directory
- dir.create(localDir)
-
- # Download data to the tmp folder(local)
- download.file(file.path(remoteDir, "airOT201201.csv"), file.path(localDir, "airOT201201.csv"))
- download.file(file.path(remoteDir, "airOT201202.csv"), file.path(localDir, "airOT201202.csv"))
- download.file(file.path(remoteDir, "airOT201203.csv"), file.path(localDir, "airOT201203.csv"))
- download.file(file.path(remoteDir, "airOT201204.csv"), file.path(localDir, "airOT201204.csv"))
- download.file(file.path(remoteDir, "airOT201205.csv"), file.path(localDir, "airOT201205.csv"))
- download.file(file.path(remoteDir, "airOT201206.csv"), file.path(localDir, "airOT201206.csv"))
- download.file(file.path(remoteDir, "airOT201207.csv"), file.path(localDir, "airOT201207.csv"))
- download.file(file.path(remoteDir, "airOT201208.csv"), file.path(localDir, "airOT201208.csv"))
- download.file(file.path(remoteDir, "airOT201209.csv"), file.path(localDir, "airOT201209.csv"))
- download.file(file.path(remoteDir, "airOT201210.csv"), file.path(localDir, "airOT201210.csv"))
- download.file(file.path(remoteDir, "airOT201211.csv"), file.path(localDir, "airOT201211.csv"))
- download.file(file.path(remoteDir, "airOT201212.csv"), file.path(localDir, "airOT201212.csv"))
- ```
-
- The download should be complete in about 9.5 minutes.
-
-## Copy the data to default storage
-
-The Hadoop Distributed File System (HDFS) location is specified with the `airDataDir` variable. In RStudio, enter the following code:
-
-```R
-# Set directory in bigDataDirRoot to load the data into
-airDataDir <- file.path(bigDataDirRoot,"AirOnTimeCSV2012")
-
-# Create directory (default storage)
-rxHadoopMakeDir(airDataDir)
-
-# Copy data from local storage to default storage
-rxHadoopCopyFromLocal(localDir, bigDataDirRoot)
-
-# Optional. Verify files
-rxHadoopListFiles(airDataDir)
-```
-
-The step should be complete in about 10 seconds.
-
-## Set up a dataset
-
-1. Create a file system object that uses the default values. In RStudio, enter the following code:
-
- ```R
- # Define the HDFS (WASB) file system
- hdfsFS <- RxHdfsFileSystem()
- ```
-
-1. Because the original CSV files have rather unwieldy variable names, you supply a *colInfo* list to make them more manageable. In RStudio, enter the following code:
-
- ```R
- airlineColInfo <- list(
- MONTH = list(newName = "Month", type = "integer"),
- DAY_OF_WEEK = list(newName = "DayOfWeek", type = "factor",
- levels = as.character(1:7),
- newLevels = c("Mon", "Tues", "Wed", "Thur", "Fri", "Sat",
- "Sun")),
- UNIQUE_CARRIER = list(newName = "UniqueCarrier", type =
- "factor"),
- ORIGIN = list(newName = "Origin", type = "factor"),
- DEST = list(newName = "Dest", type = "factor"),
- CRS_DEP_TIME = list(newName = "CRSDepTime", type = "integer"),
- DEP_TIME = list(newName = "DepTime", type = "integer"),
- DEP_DELAY = list(newName = "DepDelay", type = "integer"),
- DEP_DELAY_NEW = list(newName = "DepDelayMinutes", type =
- "integer"),
- DEP_DEL15 = list(newName = "DepDel15", type = "logical"),
- DEP_DELAY_GROUP = list(newName = "DepDelayGroups", type =
- "factor",
- levels = as.character(-2:12),
- newLevels = c("< -15", "-15 to -1","0 to 14", "15 to 29",
- "30 to 44", "45 to 59", "60 to 74",
- "75 to 89", "90 to 104", "105 to 119",
- "120 to 134", "135 to 149", "150 to 164",
- "165 to 179", ">= 180")),
- ARR_DELAY = list(newName = "ArrDelay", type = "integer"),
- ARR_DELAY_NEW = list(newName = "ArrDelayMinutes", type =
- "integer"),
- ARR_DEL15 = list(newName = "ArrDel15", type = "logical"),
- AIR_TIME = list(newName = "AirTime", type = "integer"),
- DISTANCE = list(newName = "Distance", type = "integer"),
- DISTANCE_GROUP = list(newName = "DistanceGroup", type =
- "factor",
- levels = as.character(1:11),
- newLevels = c("< 250", "250-499", "500-749", "750-999",
- "1000-1249", "1250-1499", "1500-1749", "1750-1999",
- "2000-2249", "2250-2499", ">= 2500")))
-
- varNames <- names(airlineColInfo)
- ```
-
-## Create data sources
-
-In a Spark compute context, you can create data sources by using the following functions:
-
-|Function | Description |
-||-|
-|`RxTextData` | A comma-delimited text data source. |
-|`RxXdfData` | Data in the XDF data file format. In RevoScaleR, the XDF file format is modified for Hadoop to store data in a composite set of files rather than a single file. |
-|`RxHiveData` | Generates a Hive Data Source object.|
-|`RxParquetData` | Generates a Parquet Data Source object.|
-|`RxOrcData` | Generates an Orc Data Source object.|
-
-Create an [RxTextData](/machine-learning-server/r-reference/revoscaler/rxtextdata) object by using the files you copied to HDFS. In RStudio, enter the following code:
-
-```R
-airDS <- RxTextData( airDataDir,
- colInfo = airlineColInfo,
- varsToKeep = varNames,
- fileSystem = hdfsFS )
-```
-
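For comparison, the other data sources from the table can be constructed in a similar way. A minimal sketch, assuming a Hive sample table and HDFS paths that are placeholders and aren't used later in this tutorial:

```R
# Hive query result as a data source (loaded through Spark SQL).
hiveDS <- RxHiveData("select * from hivesampletable")

# Parquet and ORC directories in HDFS (placeholder paths).
parquetDS <- RxParquetData("/share/claimsParquet")
orcDS     <- RxOrcData("/share/claimsOrc")

# Composite XDF stored in HDFS.
xdfDS <- RxXdfData(file.path(bigDataDirRoot, "AirOnTimeXDF2012"), fileSystem = hdfsFS)
```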
-## Create a compute context for Spark
-
-To load data and run analyses on worker nodes, you set the compute context in your script to [RxSpark](/machine-learning-server/r-reference/revoscaler/rxspark). In this context, R functions automatically distribute the workload across all the worker nodes, with no built-in requirement for managing jobs or the queue. You establish the Spark compute context through `RxSpark` or `rxSparkConnect()`, and you use `rxSparkDisconnect()` to return to a local compute context (see the sketch after the following code). In RStudio, enter the following code:
-
-```R
-# Define the Spark compute context
-mySparkCluster <- RxSpark()
-
-# Set the compute context
-rxSetComputeContext(mySparkCluster)
-```
-
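A minimal sketch of the `rxSparkConnect()` alternative mentioned above, assuming the cluster defaults and no extra connection options:

```R
# rxSparkConnect creates and sets the Spark compute context in one call.
cc <- rxSparkConnect(reset = TRUE)

# ... run RevoScaleR analyses here, for example rxLinMod or rxLogit ...

# Return to the local compute context when finished.
rxSparkDisconnect(cc)
```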
-## Fit a linear model
-
-1. Use the [rxLinMod](/machine-learning-server/r-reference/revoscaler/rxlinmod) function to fit a linear model using your `airDS` data source. In RStudio, enter the following code:
-
- ```R
- system.time(
- delayArr <- rxLinMod(ArrDelay ~ DayOfWeek, data = airDS,
- cube = TRUE)
- )
- ```
-
- This step should be complete in 2 to 3 minutes.
-
-1. View the results. In RStudio, enter the following code:
-
- ```R
- summary(delayArr)
- ```
-
- You should see the following results:
-
- ```output
- Call:
- rxLinMod(formula = ArrDelay ~ DayOfWeek, data = airDS, cube = TRUE)
-
- Cube Linear Regression Results for: ArrDelay ~ DayOfWeek
- Data: airDataXdf (RxXdfData Data Source)
- File name: /tutorial/data/AirOnTimeCSV2012
- Dependent variable(s): ArrDelay
- Total independent variables: 7
- Number of valid observations: 6005381
- Number of missing observations: 91381
-
- Coefficients:
- Estimate Std. Error t value Pr(>|t|) | Counts
- DayOfWeek=Mon 3.54210 0.03736 94.80 2.22e-16 *** | 901592
- DayOfWeek=Tues 1.80696 0.03835 47.12 2.22e-16 *** | 855805
- DayOfWeek=Wed 2.19424 0.03807 57.64 2.22e-16 *** | 868505
- DayOfWeek=Thur 4.65502 0.03757 123.90 2.22e-16 *** | 891674
- DayOfWeek=Fri 5.64402 0.03747 150.62 2.22e-16 *** | 896495
- DayOfWeek=Sat 0.91008 0.04144 21.96 2.22e-16 *** | 732944
- DayOfWeek=Sun 2.82780 0.03829 73.84 2.22e-16 *** | 858366
-
- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-
- Residual standard error: 35.48 on 6005374 degrees of freedom
- Multiple R-squared: 0.001827 (as if intercept included)
- Adjusted R-squared: 0.001826
- F-statistic: 1832 on 6 and 6005374 DF, p-value: < 2.2e-16
- Condition number: 1
- ```
-
- The results indicate that you've processed all the data, 6 million observations, using all the CSV files in the specified directory. Because you specified `cube = TRUE`, you have an estimated coefficient for each day of the week (and not the intercept).
-
-## Use composite XDF files
-
-As you've seen, you can analyze CSV files directly with R on Hadoop. But you can do the analysis more quickly if you store the data in a more efficient format. The R XDF file format is efficient, but it's modified somewhat for HDFS so that individual files remain within a single HDFS block. (The HDFS block size varies from installation to installation but is typically either 64 MB or 128 MB.)
-
-When you use [rxImport](/machine-learning-server/r-reference/revoscaler/rximport) on Hadoop to create a set of composite XDF files, you specify an `RxTextData` data source such as `airDS` as the `inData` argument and an `RxXdfData` data source with `fileSystem` set to an HDFS file system as the `outFile` argument. You can then use the `RxXdfData` object as the data argument in subsequent R analyses.
-
-1. Define an `RxXdfData` object. In RStudio, enter the following code:
-
- ```R
- airDataXdfDir <- file.path(bigDataDirRoot,"AirOnTimeXDF2012")
-
- airDataXdf <- RxXdfData( airDataXdfDir,
- fileSystem = hdfsFS )
- ```
-
-1. Set a block size of 250,000 rows and specify that all rows should be read. In RStudio, enter the following code:
-
- ```R
- blockSize <- 250000
- numRowsToRead = -1
- ```
-
-1. Import the data using `rxImport`. In RStudio, enter the following code:
-
- ```R
- rxImport(inData = airDS,
- outFile = airDataXdf,
- rowsPerRead = blockSize,
- overwrite = TRUE,
- numRows = numRowsToRead )
- ```
-
- This step should be complete in a few minutes.
-
-1. Re-estimate the same linear model, using the new, faster data source. In RStudio, enter the following code:
-
- ```R
- system.time(
- delayArr <- rxLinMod(ArrDelay ~ DayOfWeek, data = airDataXdf,
- cube = TRUE)
- )
- ```
-
- The step should be complete in less than a minute.
-
-1. View the results. The results should be the same as from the CSV files. In RStudio, enter the following code:
-
- ```R
- summary(delayArr)
- ```
-
-## Convert XDF to CSV
-
-### In a Spark context
-
-If you converted your CSV files to XDF file format for greater efficiency while running the analyses, but now want to convert your data back to CSV, you can do so by using [rxDataStep](/machine-learning-server/r-reference/revoscaler/rxdatastep).
-
-To create a folder of CSV files, first create an `RxTextData` object by using a directory name as the file argument. This object represents the folder in which to create the CSV files. This directory is created when you run the `rxDataStep`. Then, point to this `RxTextData` object in the `outFile` argument of the `rxDataStep`. Each CSV that's created is named based on the directory name and followed by a number.
-
-Suppose that you want to write out a folder of CSV files in HDFS from your `airDataXdf` composite XDF after you perform the logistic regression and prediction, so that the new CSV files contain the predicted values and residuals. In RStudio, enter the following code:
-
-```R
-airDataCsvDir <- file.path(bigDataDirRoot,"AirDataCSV2012")
-airDataCsvDS <- RxTextData(airDataCsvDir,fileSystem=hdfsFS)
-rxDataStep(inData=airDataXdf, outFile=airDataCsvDS)
-```
-
-This step should be complete in about 2.5 minutes.
-
-The `rxDataStep` wrote out one CSV file for every XDFD file in the input composite XDF file. This is the default behavior for writing CSV files from composite XDF files to HDFS when the compute context is set to `RxSpark`.
-
-### In a local context
-
-Alternatively, when you're done performing your analyses, you could switch your compute context back to `local` to take advantage of two arguments within `RxTextData` that give you slightly more control when you write out CSV files to HDFS: `createFileSet` and `rowsPerOutFile`. When you set `createFileSet` to `TRUE`, a folder of CSV files is written to the directory that you specify. When you set `createFileSet` to `FALSE`, a single CSV file is written. You can set the second argument, `rowsPerOutFile`, to an integer to indicate how many rows to write to each CSV file when `createFileSet` is `TRUE`.
-
-In RStudio, enter the following code:
-
-```R
-rxSetComputeContext("local")
-airDataCsvRowsDir <- file.path(bigDataDirRoot,"AirDataCSVRows2012")
-airDataCsvRowsDS <- RxTextData(airDataCsvRowsDir, fileSystem=hdfsFS, createFileSet=TRUE, rowsPerOutFile=1000000)
-rxDataStep(inData=airDataXdf, outFile=airDataCsvRowsDS)
-```
-
-This step should be complete in about 10 minutes.
-
-When you use an `RxSpark` compute context, `createFileSet` defaults to `TRUE` and `rowsPerOutFile` has no effect. Therefore, if you want to create a single CSV or customize the number of rows per file, perform `rxDataStep` in a `local` compute context (the data can still be in HDFS).
-
-## Final steps
-
-1. Clean up the data. In RStudio, enter the following code:
-
- ```R
- rxHadoopRemoveDir(airDataDir)
- rxHadoopRemoveDir(airDataXdfDir)
- rxHadoopRemoveDir(airDataCsvDir)
- rxHadoopRemoveDir(airDataCsvRowsDir)
- rxHadoopRemoveDir(bigDataDirRoot)
- ```
-
-1. Stop the remote Spark application. In RStudio, enter the following code:
-
- ```R
- rxStopEngine(mySparkCluster)
- ```
-
-1. Quit the R session. In RStudio, enter the following code:
-
- ```R
- quit()
- ```
-
-## Clean up resources
-
-After you complete the tutorial, you might want to delete the cluster. With HDInsight, your data is stored in Azure Storage, so you can safely delete a cluster when it's not in use. You're also charged for an HDInsight cluster, even when it's not in use. Because the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they're not in use.
-
-To delete a cluster, see [Delete an HDInsight cluster by using your browser, PowerShell, or the Azure CLI](../hdinsight-delete-cluster.md).
-
-## Next steps
-
-In this tutorial, you learned how to use R functions in Apache Spark that are running on an HDInsight Machine Learning services cluster. For more information, see the following articles:
-
-* [Compute context options for an Azure HDInsight Machine Learning services cluster](r-server-compute-contexts.md)
-* [R Functions for Spark on Hadoop](/machine-learning-server/r-reference/revoscaler/revoscaler-hadoop-functions)
hdinsight Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/quickstart-resource-manager-template.md
- Title: 'Quickstart: Create ML Services cluster using template - Azure HDInsight'
-description: This quickstart shows how to use Resource Manager template to create an ML Services cluster in Azure HDInsight.
--- Previously updated : 03/13/2020-
-#Customer intent: As a developer new to ML Services on Azure, I need to see how to create an ML Services cluster.
--
-# Quickstart: Create ML Services cluster in Azure HDInsight using ARM template
--
-In this quickstart, you use an Azure Resource Manager template (ARM template) to create an [ML Services](./r-server-overview.md) cluster in Azure HDInsight. Microsoft Machine Learning Server is available as a deployment option when you create HDInsight clusters in Azure. The cluster type that provides this option is called ML Services. This capability provides data scientists, statisticians, and R programmers with on-demand access to scalable, distributed methods of analytics on HDInsight.
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-
-[:::image type="icon" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.hdinsight%2Fhdinsight-rserver%2Fazuredeploy.json)
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Review the template
-
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/hdinsight-rserver/).
--
-Two Azure resources are defined in the template:
-
-* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage Account.
-* [Microsoft.HDInsight/cluster](/azure/templates/microsoft.hdinsight/clusters): create an HDInsight cluster.
-
-## Deploy the template
-
-1. Select the **Deploy to Azure** button below to sign in to Azure and open the ARM template.
-
- [:::image type="icon" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.hdinsight%2Fhdinsight-rserver%2Fazuredeploy.json)
-
-1. Enter or select the following values:
-
- |Property |Description |
- |||
- |Subscription|From the drop-down list, select the Azure subscription that's used for the cluster.|
- |Resource group|From the drop-down list, select your existing resource group, or select **Create new**.|
- |Location|The value will autopopulate with the location used for the resource group.|
- |Cluster Name|Enter a globally unique name. For this template, use only lowercase letters, and numbers.|
- |Cluster Login User Name|Provide the username, default is **admin**.|
- |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase letter, one lowercase letter, and one non-alphanumeric character (except the characters ' " ` ). |
- |Ssh User Name|Provide the username, default is sshuser|
- |Ssh Password|Provide the password.|
-
- :::image type="content" source="./media/quickstart-resource-manager-template/resource-manager-template-rserver.png" alt-text="Deploy Resource Manager template HBase" border="true":::
-
-1. Review the **TERMS AND CONDITIONS**. Then select **I agree to the terms and conditions stated above**, then **Purchase**. You'll receive a notification that your deployment is in progress. It takes about 20 minutes to create a cluster.
-
-## Review deployed resources
-
-Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page lists your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Blob Storage](../hdinsight-hadoop-use-blob-storage.md) account, an [Azure Data Lake Storage Gen1](../hdinsight-hadoop-use-data-lake-storage-gen1.md) account, or an [Azure Data Lake Storage Gen2](../hdinsight-hadoop-use-data-lake-storage-gen2.md) account as a dependency. It's referred to as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting the cluster doesn't delete the storage account.
-
-## Clean up resources
-
-After you complete the quickstart, you may want to delete the cluster. With HDInsight, your data is stored in Azure Storage, so you can safely delete a cluster when it isn't in use. You're also charged for an HDInsight cluster, even when it isn't in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they aren't in use.
-
-From the Azure portal, navigate to your cluster, and select **Delete**.
-
-[Delete Resource Manager template HBase](./media/quickstart-resource-manager-template/azure-portal-delete-rserver.png)
-
-You can also select the resource group name to open the resource group page, and then select **Delete resource group**. By deleting the resource group, you delete both the HDInsight cluster, and the default storage account.
-
-## Next steps
-
-In this quickstart, you learned how to create an ML Services cluster in HDInsight using an ARM template. In the next article, you learn how to run an R script with RStudio Server that demonstrates using Spark for distributed R computations.
-
-> [!div class="nextstepaction"]
-> [Execute an R script on an ML Services cluster in Azure HDInsight using RStudio Server](./machine-learning-services-quickstart-job-rstudio.md)
hdinsight R Server Compute Contexts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-compute-contexts.md
- Title: Compute context options for ML Services on HDInsight - Azure
-description: Learn about the different compute context options available to users with ML Services on HDInsight
-- Previously updated : 01/02/2020---
-# Compute context options for ML Services on HDInsight
--
-ML Services on Azure HDInsight controls how calls are executed by setting the compute context. This article outlines the options that are available to specify whether and how execution is parallelized across cores of the edge node or HDInsight cluster.
-
-The edge node of a cluster provides a convenient place to connect to the cluster and to run your R scripts. With an edge node, you have the option of running the parallelized distributed functions of RevoScaleR across the cores of the edge node server. You can also run them across the nodes of the cluster by using RevoScaleR's Hadoop Map Reduce or Apache Spark compute contexts.
-
-## ML Services on Azure HDInsight
-
-[ML Services on Azure HDInsight](r-server-overview.md) provides the latest capabilities for R-based analytics. It can use data that is stored in an Apache Hadoop HDFS container in your [Azure Blob](../../storage/common/storage-introduction.md "Azure Blob storage") storage account, a Data Lake Store, or the local Linux file system. Since ML Services is built on open-source R, the R-based applications you build can apply any of the 8000+ open-source R packages. They can also use the routines in [RevoScaleR](/machine-learning-server/r-reference/revoscaler/revoscaler), Microsoft's big data analytics package that is included with ML Services.
-
-## Compute contexts for an edge node
-
-In general, an R script that's run in an ML Services cluster on the edge node runs within the R interpreter on that node. The exceptions are those steps that call a RevoScaleR function. The RevoScaleR calls run in a compute environment that is determined by how you set the RevoScaleR compute context. When you run your R script from an edge node, the possible values of the compute context are:
-- local sequential (*local*)
-- local parallel (*localpar*)
-- Map Reduce
-- Spark
-
-The *local* and *localpar* options differ only in how **rxExec** calls are executed. They both execute other rx-function calls in a parallel manner across all available cores unless specified otherwise through use of the RevoScaleR **numCoresToUse** option, for example `rxOptions(numCoresToUse=6)`. Parallel execution options offer optimal performance.
-
-The following table summarizes the various compute context options to set how calls are executed:
-
-| Compute context | How to set | Execution context |
-| - | - | - |
-| Local sequential | rxSetComputeContext('local') | Parallelized execution across the cores of the edge node server, except for rxExec calls, which are executed serially |
-| Local parallel | rxSetComputeContext('localpar') | Parallelized execution across the cores of the edge node server |
-| Spark | RxSpark() | Parallelized distributed execution via Spark across the nodes of the HDI cluster |
-| Map Reduce | RxHadoopMR() | Parallelized distributed execution via Map Reduce across the nodes of the HDI cluster |
-
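As a compact illustration of the table above, a minimal sketch (the `numCoresToUse` value is only an example, as in the paragraph before the table):

```R
# Cap how many cores the parallel rx functions use on the edge node.
rxOptions(numCoresToUse = 6)

rxSetComputeContext("local")        # local sequential: rxExec calls run serially
rxSetComputeContext("localpar")     # local parallel across the edge node cores
rxSetComputeContext(RxSpark())      # distributed across the cluster via Spark
rxSetComputeContext(RxHadoopMR())   # distributed across the cluster via Map Reduce
```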
-## Guidelines for deciding on a compute context
-
-Which of the three parallelized execution options you choose depends on the nature of your analytics work and on the size and location of your data. There's no simple formula that tells you which compute context to use. There are, however, some guiding principles that can help you make the right choice, or at least help you narrow down your choices before you run a benchmark. These guiding principles include:
-- The local Linux file system is faster than HDFS.
-- Repeated analyses are faster if the data is local, and if it's in XDF.
-- It's preferable to stream small amounts of data from a text data source. If the amount of data is larger, convert it to XDF before analysis.
-- The overhead of copying or streaming the data to the edge node for analysis becomes unmanageable for very large amounts of data.
-- Apache Spark is faster than Map Reduce for analysis in Hadoop.
-
-Given these principles, the following sections offer some general rules of thumb for selecting a compute context.
-
-### Local
-
-- If the amount of data to analyze is small and doesn't require repeated analysis, then stream it directly into the analysis routine using *local* or *localpar*.
-- If the amount of data to analyze is small or medium-sized and requires repeated analysis, then copy it to the local file system, import it to XDF, and analyze it via *local* or *localpar* (see the sketch after these guidelines).
-
-### Apache Spark
-
-- If the amount of data to analyze is large, then import it to a Spark DataFrame using **RxHiveData** or **RxParquetData**, or to XDF in HDFS (unless storage is an issue), and analyze it using the Spark compute context.
-
-### Apache Hadoop Map Reduce
-
-- Use the Map Reduce compute context only if you come across an insurmountable problem with the Spark compute context since it's generally slower.
-
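As an illustration of the second *local* guideline, a minimal sketch, assuming a small CSV already copied to the edge node (the paths are placeholders, not files from this article):

```R
# Parallel execution on the edge node; import once, then reuse the XDF.
rxSetComputeContext("localpar")

csvSource <- RxTextData("/tmp/mydata.csv")   # placeholder CSV on the local file system
localXdf  <- RxXdfData("/tmp/mydata.xdf")    # local XDF for faster repeated analyses

rxImport(inData = csvSource, outFile = localXdf, overwrite = TRUE)
rxSummary(~ ., data = localXdf)              # later analyses read the XDF directly
```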
-## Inline help on rxSetComputeContext
-For more information and examples of RevoScaleR compute contexts, see the inline help in R on the rxSetComputeContext method, for example:
-
-```console
-> ?rxSetComputeContext
-```
-
-You can also refer to the [Distributed computing overview](/machine-learning-server/r/how-to-revoscaler-distributed-computing) in [Machine Learning Server documentation](/machine-learning-server/).
-
-## Next steps
-
-In this article, you learned about the options that are available to specify whether and how execution is parallelized across cores of the edge node or HDInsight cluster. To learn more about how to use ML Services with HDInsight clusters, see the following topics:
-
-- [Overview of ML Services for Apache Hadoop](r-server-overview.md)
-- [Azure Storage options for ML Services on HDInsight](r-server-storage.md)
hdinsight R Server Hdinsight Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-hdinsight-manage.md
- Title: Manage ML Services cluster on HDInsight - Azure
-description: Learn how to manage various tasks on ML Services cluster in Azure HDInsight.
-- Previously updated : 06/19/2019---
-# Manage ML Services cluster on Azure HDInsight
--
-In this article, you learn how to manage an existing ML Services cluster on Azure HDInsight to perform tasks like adding multiple concurrent users, connecting remotely to an ML Services cluster, changing compute context, etc.
-
-## Prerequisites
-
-* An ML Services cluster on HDInsight. See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) and select **ML Services** for **Cluster type**.
-
-* A Secure Shell (SSH) client: An SSH client is used to remotely connect to the HDInsight cluster and run commands directly on the cluster. For more information, see [Use SSH with HDInsight.](../hdinsight-hadoop-linux-use-ssh-unix.md).
-
-## Enable multiple concurrent users
-
-You can enable multiple concurrent users for the ML Services cluster on HDInsight by adding more users on the edge node on which the RStudio Community version runs. When you create an HDInsight cluster, you must provide two users, an HTTP user and an SSH user:
-
-- **Cluster login username**: an HTTP user for authentication through the HDInsight gateway that is used to protect the HDInsight clusters you created. This HTTP user is used to access the Apache Ambari UI, Apache Hadoop YARN UI, as well as other UI components.
-- **Secure Shell (SSH) username**: an SSH user to access the cluster through secure shell. This user is a user in the Linux system for all the head nodes, worker nodes, and edge nodes. So you can use secure shell to access any of the nodes in a remote cluster.
-
-The RStudio Server Community version used in the ML Services cluster on HDInsight accepts only a Linux username and password as a sign-in mechanism. It does not support passing tokens. So, when you try to access RStudio for the first time on an ML Services cluster, you need to sign in twice.
-- First sign in using the HTTP user credentials through the HDInsight Gateway.
-
-- Then use the SSH user credentials to sign in to RStudio.
-
-Currently, only one SSH user account can be created when provisioning an HDInsight cluster. So to enable multiple users to access the ML Services cluster on HDInsight, you must create additional users in the Linux system.
-
-Because RStudio runs on the cluster's edge node, there are several steps here:
-
-1. Use the existing SSH user to sign in to the edge node
-2. Add more Linux users in edge node
-3. Use RStudio Community version with the user created
-
-### Step 1: Use the created SSH user to sign in to the edge node
-
-Follow the instructions at [Connect to HDInsight (Apache Hadoop) using SSH](../hdinsight-hadoop-linux-use-ssh-unix.md) to access the edge node. The edge node address for ML Services cluster on HDInsight is `CLUSTERNAME-ed-ssh.azurehdinsight.net`.
-
-### Step 2: Add more Linux users in edge node
-
-To add a user to the edge node, execute the commands:
-
-```bash
-# Add a user
-sudo useradd <yournewusername> -m
-
-# Set password for the new user
-sudo passwd <yournewusername>
-```
-
-The following screenshot shows the outputs.
--
-When prompted for "Current Kerberos password:", just press **Enter** to ignore it. The `-m` option in `useradd` command indicates that the system will create a home folder for the user, which is required for RStudio Community version.
-
-### Step 3: Use RStudio Community version with the user created
-
-Access RStudio from `https://CLUSTERNAME.azurehdinsight.net/rstudio/`. If you are logging in for the first time after creating the cluster, enter the cluster admin credentials followed by the SSH user credentials you created. If this is not your first login, only enter the credentials for the SSH user you created.
-
-You can also sign in using the original credentials (by default, it is *sshuser*) concurrently from another browser window.
-
-Note also that the newly added users do not have root privileges in the Linux system, but they do have the same access to all the files in the remote HDFS and WASB storage.
-
-## Connect remotely to Microsoft ML Services
-
-You can set up access to the HDInsight Spark compute context from a remote instance of ML Client running on your desktop. To do so, you must specify the options (hdfsShareDir, shareDir, sshUsername, sshHostname, sshSwitches, and sshProfileScript) when defining the RxSpark compute context on your desktop. For example:
-
-```r
-myNameNode <- "default"
-myPort <- 0
-
-mySshHostname <- '<clustername>-ed-ssh.azurehdinsight.net' # HDI secure shell hostname
-mySshUsername <- '<sshuser>'# HDI SSH username
-mySshSwitches <- '-i /cygdrive/c/Data/R/davec' # HDI SSH private key
-
-myhdfsShareDir <- paste("/user/RevoShare", mySshUsername, sep="/")
-myShareDir <- paste("/var/RevoShare" , mySshUsername, sep="/")
-
-mySparkCluster <- RxSpark(
- hdfsShareDir = myhdfsShareDir,
- shareDir = myShareDir,
- sshUsername = mySshUsername,
- sshHostname = mySshHostname,
- sshSwitches = mySshSwitches,
- sshProfileScript = '/etc/profile',
- nameNode = myNameNode,
- port = myPort,
- consoleOutput= TRUE
-)
-```
-
-For more information, see the "Using Microsoft Machine Learning Server as an Apache Hadoop Client" section in [How to use RevoScaleR in an Apache Spark compute context](/machine-learning-server/r/how-to-revoscaler-spark#more-spark-scenarios)
-
-## Use a compute context
-
-A compute context allows you to control whether computation is performed locally on the edge node or distributed across the nodes in the HDInsight cluster. For an example of setting a compute context with RStudio Server, see [Execute an R script on an ML Services cluster in Azure HDInsight using RStudio Server](machine-learning-services-quickstart-job-rstudio.md).
-
-## Distribute R code to multiple nodes
-
-With ML Services on HDInsight, you can take existing R code and run it across multiple nodes in the cluster by using `rxExec`. This function is useful when doing a parameter sweep or simulations. The following code is an example of how to use `rxExec`:
-
-```r
-rxExec( function() {Sys.info()["nodename"]}, timesToRun = 4 )
-```
-
-If you are still using the Spark context, this command returns the nodename value for the worker nodes that the code `(Sys.info()["nodename"])` is run on. For example, on a four node cluster, you expect to receive output similar to the following snippet:
-
-```r
-$rxElem1
- nodename
-"wn3-mymlser"
-
-$rxElem2
- nodename
-"wn0-mymlser"
-
-$rxElem3
- nodename
-"wn3-mymlser"
-
-$rxElem4
- nodename
-"wn3-mymlser"
-```
-
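Because `rxExec` is often used for parameter sweeps, here is a minimal sketch of passing a different argument value to each parallel run with `rxElemArg` (the function and values are placeholders):

```R
# Each element of 1:4 is handed to a separate parallel execution of the function.
sweepResults <- rxExec(function(x) { x^2 }, x = rxElemArg(1:4))
sweepResults   # a list with one result per run
```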
-## Access data in Apache Hive and Parquet
-
-HDInsight ML Services allows direct access to data in Hive and Parquet for use by ScaleR functions in the Spark compute context. These capabilities are available through the ScaleR data source functions RxHiveData and RxParquetData, which use Spark SQL to load data directly into a Spark DataFrame for analysis by ScaleR.
-
-The following code shows how to use the new functions:
-
-```r
-#Create a Spark compute context:
-myHadoopCluster <- rxSparkConnect(reset = TRUE)
-
-#Retrieve some sample data from Hive and run a model:
-hiveData <- RxHiveData("select * from hivesampletable",
- colInfo = list(devicemake = list(type = "factor")))
-rxGetInfo(hiveData, getVarInfo = TRUE)
-
-rxLinMod(querydwelltime ~ devicemake, data=hiveData)
-
-#Retrieve some sample data from Parquet and run a model:
-rxHadoopMakeDir('/share')
-rxHadoopCopyFromLocal(file.path(rxGetOption('sampleDataDir'), 'claimsParquet/'), '/share/')
-pqData <- RxParquetData('/share/claimsParquet',
- colInfo = list(
- age = list(type = "factor"),
- car.age = list(type = "factor"),
- type = list(type = "factor")
- ) )
-rxGetInfo(pqData, getVarInfo = TRUE)
-
-rxNaiveBayes(type ~ age + cost, data = pqData)
-
-#Check on Spark data objects, cleanup, and close the Spark session:
-lsObj <- rxSparkListData() # two data objs are cached
-lsObj
-rxSparkRemoveData(lsObj)
-rxSparkListData() # it should show empty list
-rxSparkDisconnect(myHadoopCluster)
-```
-
-For more information on these new functions, see the online help in ML Services by using the `?RxHiveData` and `?RxParquetData` commands.
-
-## Install additional R packages on the cluster
-
-### To install R packages on the edge node
-
-If you want to install additional R packages on the edge node, you can use `install.packages()` directly from within the R console, once connected to the edge node through SSH.
-
-### To install R packages on the worker node
-
-To install R packages on the worker nodes of the cluster, you must use a Script Action. Script Actions are Bash scripts that are used to make configuration changes to the HDInsight cluster or to install additional software, such as additional R packages.
-
-> [!IMPORTANT]
-> Using Script Actions to install additional R packages can only be used after the cluster has been created. Do not use this procedure during cluster creation, as the script relies on ML Services being completely configured.
-
-1. Follow the steps at [Customize clusters using Script Action](../hdinsight-hadoop-customize-cluster-linux.md).
-
-3. For **Submit script action**, provide the following information:
-
- * For **Script type**, select **Custom**.
-
- * For **Name**, provide a name for the script action.
-
- * For **Bash script URI**, enter `https://mrsactionscripts.blob.core.windows.net/rpackages-v01/InstallRPackages.sh`. This is the script that installs additional R packages on the worker node
-
- * Select the check box only for **Worker**.
-
- * **Parameters**: The R packages to be installed. For example, `bitops stringr arules`
-
- * Select the check box to **Persist this script action**.
-
- > [!NOTE]
- > 1. By default, all R packages are installed from a snapshot of the Microsoft MRAN repository consistent with the version of ML Server that has been installed. If you want to install newer versions of packages, then there is some risk of incompatibility. However this kind of install is possible by specifying `useCRAN` as the first element of the package list, for example `useCRAN bitops, stringr, arules`.
- > 2. Some R packages require additional Linux system libraries. For convenience, the HDInsight ML Services comes pre-installed with the dependencies needed by the top 100 most popular R packages. However, if the R package(s) you install require libraries beyond these then you must download the base script used here and add steps to install the system libraries. You must then upload the modified script to a public blob container in Azure storage and use the modified script to install the packages.
- > For more information on developing Script Actions, see [Script Action development](../hdinsight-hadoop-script-actions-linux.md).
-
- :::image type="content" source="./media/r-server-hdinsight-manage/submit-script-action.png" alt-text="Azure portal submit script action" border="true":::
-
-4. Select **Create** to run the script. Once the script completes, the R packages are available on all worker nodes.
-
-## Next steps
-
-* [Operationalize ML Services cluster on HDInsight](r-server-operationalize.md)
-* [Compute context options for ML Service cluster on HDInsight](r-server-compute-contexts.md)
-* [Azure Storage options for ML Services cluster on HDInsight](r-server-storage.md)
hdinsight R Server Operationalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-operationalize.md
- Title: Operationalize ML Services on HDInsight - Azure
-description: Learn how to operationalize your data model to make predictions with ML Services in Azure HDInsight.
-- Previously updated : 06/27/2018---
-# Operationalize ML Services cluster on Azure HDInsight
--
-After you have used ML Services cluster in HDInsight to complete your data modeling, you can operationalize the model to make predictions. This article provides instructions on how to perform this task.
-
-## Prerequisites
-
-* An ML Services cluster on HDInsight. See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) and select **ML Services** for **Cluster type**.
-
-* A Secure Shell (SSH) client: An SSH client is used to remotely connect to the HDInsight cluster and run commands directly on the cluster. For more information, see [Use SSH with HDInsight](../hdinsight-hadoop-linux-use-ssh-unix.md).
-
-## Operationalize ML Services cluster with one-box configuration
-
-> [!NOTE]
-> The steps below are applicable to R Server 9.0 and ML Server 9.1. For ML Server 9.3, refer to [Use the administration tool to manage the operationalization configuration](/machine-learning-server/operationalize/configure-admin-cli-launch).
-
-1. SSH into the edge node.
-
- ```bash
- ssh USERNAME@CLUSTERNAME-ed-ssh.azurehdinsight.net
- ```
-
-    For instructions on how to use SSH with Azure HDInsight, see [Use SSH with HDInsight](../hdinsight-hadoop-linux-use-ssh-unix.md).
-
-1. Change to the directory for the relevant version, and use sudo to run the .NET admin utility DLL:
-
- - For Microsoft ML Server 9.1:
-
- ```bash
- cd /usr/lib64/microsoft-r/rserver/o16n/9.1.0
- sudo dotnet Microsoft.RServer.Utils.AdminUtil/Microsoft.RServer.Utils.AdminUtil.dll
- ```
-
- - For Microsoft R Server 9.0:
-
- ```bash
- cd /usr/lib64/microsoft-deployr/9.0.1
- sudo dotnet Microsoft.DeployR.Utils.AdminUtil/Microsoft.DeployR.Utils.AdminUtil.dll
- ```
-
-1. You are presented with the options to choose from. Choose the first option, as shown in the following screenshot, to **Configure ML Server for Operationalization**.
-
- :::image type="content" source="./media/r-server-operationalize/admin-util-one-box-1.png" alt-text="R server Administration utility select" border="true":::
-
-1. You are now presented with the option to choose how you want to operationalize ML Server. From the presented options, choose the first one by entering **A**.
-
- :::image type="content" source="./media/r-server-operationalize/admin-util-one-box-2.png" alt-text="R server Administration utility operationalize" border="true":::
-
-1. When prompted, enter and reenter the password for a local admin user.
-
-1. You should see outputs suggesting that the operation was successful. You are also prompted to select another option from the menu. Select **E** to go back to the main menu.
-
- :::image type="content" source="./media/r-server-operationalize/admin-util-one-box-3.png" alt-text="R server Administration utility success" border="true":::
-
-1. Optionally, you can perform diagnostic checks by running a diagnostic test as follows:
-
- a. From the main menu, select **6** to run diagnostic tests.
-
- :::image type="content" source="./media/r-server-operationalize/hdinsight-diagnostic1.png" alt-text="R server Administration utility diagnostic" border="true":::
-
- b. From the Diagnostic Tests menu, select **A**. When prompted, enter the password that you provided for the local admin user.
-
- :::image type="content" source="./media/r-server-operationalize/hdinsight-diagnostic2.png" alt-text="R server Administration utility test" border="true":::
-
- c. Verify that the output shows that overall health is a pass.
-
- :::image type="content" source="./media/r-server-operationalize/hdinsight-diagnostic3.png" alt-text="R server Administration utility pass" border="true":::
-
- d. From the menu options presented, enter **E** to return to the main menu and then enter **8** to exit the admin utility.
-
-### Long delays when consuming web service on Apache Spark
-
-If you encounter long delays when trying to consume a web service created with mrsdeploy functions in an Apache Spark compute context, you may need to add some missing folders. The Spark application belongs to a user called '*rserve2*' whenever it is invoked from a web service using mrsdeploy functions. To work around this issue:
-
-```bash
-# Create these required folders for user 'rserve2' in local and hdfs:
-hadoop fs -mkdir /user/RevoShare/rserve2
-hadoop fs -chmod 777 /user/RevoShare/rserve2
-
-mkdir /var/RevoShare/rserve2
-chmod 777 /var/RevoShare/rserve2
-```
-
-```r
-# Next, create a new Spark compute context:
-rxSparkConnect(reset = TRUE)
-```
-
-At this stage, the configuration for operationalization is complete. Now you can use the `mrsdeploy` package on your RClient to connect to the operationalization on edge node and start using its features like [remote execution](/machine-learning-server/r/how-to-execute-code-remotely) and [web-services](/machine-learning-server/operationalize/concept-what-are-web-services). Depending on whether your cluster is set up on a virtual network or not, you may need to set up port forward tunneling through SSH login. The following sections explain how to set up this tunnel.
-
-### ML Services cluster on virtual network
-
-Make sure you allow traffic through port 12800 to the edge node. That way, you can use the edge node to connect to the Operationalization feature.
-
-```r
-library(mrsdeploy)
-
-remoteLogin(
- deployr_endpoint = "http://[your-cluster-name]-ed-ssh.azurehdinsight.net:12800",
- username = "admin",
- password = "xxxxxxx"
-)
-```
-
-If the `remoteLogin()` cannot connect to the edge node, but you can SSH to the edge node, verify that the rule to allow traffic on port 12800 has been set properly. If you continue to face the issue, you can work around it by setting up port forward tunneling through SSH. For instructions, see the following section.
-
-### ML Services cluster not set up on virtual network
-
-If your cluster isn't set up in a virtual network, or if you're having trouble with connectivity through the virtual network, you can use SSH port forward tunneling:
-
-```bash
-ssh -L localhost:12800:localhost:12800 USERNAME@CLUSTERNAME-ed-ssh.azurehdinsight.net
-```
-
-Once your SSH session is active, traffic from your local machine's port 12800 is forwarded to the edge node's port 12800 through the SSH session. Make sure you use `127.0.0.1:12800` in your `remoteLogin()` method. This logs in to the edge node's operationalization feature through port forwarding.
-
-```r
-library(mrsdeploy)
-
-remoteLogin(
- deployr_endpoint = "http://127.0.0.1:12800",
- username = "admin",
- password = "xxxxxxx"
-)
-```
-
-## Scale operationalized compute nodes on HDInsight worker nodes
-
-To scale the compute nodes, you first decommission the worker nodes and then configure compute nodes on the decommissioned worker nodes.
-
-### Step 1: Decommission the worker nodes
-
-The ML Services cluster isn't managed through [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html). If the worker nodes aren't decommissioned, the YARN Resource Manager doesn't work as expected, because it isn't aware of the resources being taken up by the server. To avoid this situation, we recommend decommissioning the worker nodes before you scale out the compute nodes.
-
-Follow these steps to decommission worker nodes:
-
-1. Log in to the cluster's Ambari console and click on **Hosts** tab.
-
-1. Select worker nodes (to be decommissioned).
-
-1. Click **Actions** > **Selected Hosts** > **Hosts** > **Turn ON Maintenance Mode**. For example, in the following image we have selected wn3 and wn4 to decommission.
-
- :::image type="content" source="./media/r-server-operationalize/get-started-operationalization.png" alt-text="Apache Ambari Turn On Maintenance Mode" border="true":::
-
-* Select **Actions** > **Selected Hosts** > **DataNodes** > **Decommission**.
-* Select **Actions** > **Selected Hosts** > **NodeManagers** > **Decommission**.
-* Select **Actions** > **Selected Hosts** > **DataNodes** > **Stop**.
-* Select **Actions** > **Selected Hosts** > **NodeManagers** > **Stop**.
-* Select **Actions** > **Selected Hosts** > **Hosts** > **Stop All Components**.
-* Unselect the worker nodes and select the head nodes.
-* Select **Actions** > **Selected Hosts** > **Hosts** > **Restart All Components**.
-
-### Step 2: Configure compute nodes on each decommissioned worker node(s)
-
-1. SSH into each decommissioned worker node.
-
-1. Run the admin utility by using the DLL for the version installed on your cluster. The following example uses the R Server 9.0 path; for ML Server 9.1, use the corresponding 9.1.0 path shown earlier:
-
- ```bash
- dotnet /usr/lib64/microsoft-deployr/9.0.1/Microsoft.DeployR.Utils.AdminUtil/Microsoft.DeployR.Utils.AdminUtil.dll
- ```
-
-1. Enter **1** to select option **Configure ML Server for Operationalization**.
-
-1. Enter **C** to select option `C. Compute node`. This configures the compute node on the worker node.
-
-1. Exit the Admin Utility.
-
-### Step 3: Add compute nodes details on web node
-
-Once all the decommissioned worker nodes are configured to run the compute node role, come back to the edge node and add the decommissioned worker nodes' IP addresses to the ML Server web node's configuration:
-
-1. SSH into the edge node.
-
-1. Run `vi /usr/lib64/microsoft-deployr/9.0.1/Microsoft.DeployR.Server.WebAPI/appsettings.json`.
-
-1. Look for the "Uris" section, and add each worker node's IP address and port details.
-
- ```json
- "Uris": {
- "Description": "Update 'Values' section to point to your backend machines. Using HTTPS is highly recommended",
- "Values": [
-      "http://localhost:12805", "http://[worker-node1-ip]:12805", "http://[worker-node2-ip]:12805"
- ]
- }
- ```
-
-## Next steps
-
-* [Manage ML Services cluster on HDInsight](r-server-hdinsight-manage.md)
-* [Compute context options for ML Services cluster on HDInsight](r-server-compute-contexts.md)
-* [Azure Storage options for ML Services cluster on HDInsight](r-server-storage.md)
hdinsight R Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-overview.md
- Title: Introduction to ML Services on Azure HDInsight
-description: Learn how to use ML Services on HDInsight to create applications for big data analysis.
-- Previously updated : 04/20/2020-
-#Customer intent: As a developer I want to have a basic understanding of Microsoft's implementation of machine learning in Azure HDInsight so I can decide if I want to use it rather than build my own cluster.
--
-# What is ML Services in Azure HDInsight
--
-Microsoft Machine Learning Server is available as a deployment option when you create HDInsight clusters in Azure. The cluster type that provides this option is called **ML Services**. This capability provides on-demand access to adaptable, distributed methods of analytics on HDInsight.
-
-ML Services on HDInsight provides the latest capabilities for R-based analytics on datasets of virtually any size. The datasets can be loaded to either Azure Blob or Data Lake storage. Your R-based applications can use the 8000+ open-source R packages. The routines in ScaleR, Microsoft's big data analytics package, are also available.
-
-The edge node provides a convenient place to connect to the cluster and run your R scripts. The edge node allows running the ScaleR parallelized distributed functions across the cores of the server. You can also run them across the nodes of the cluster by using ScaleR's Hadoop Map Reduce. You can also use Apache Spark compute contexts.
-
-The models or predictions that result from analysis can be downloaded for on-premises use. They can also be `operationalized` elsewhere in Azure, in particular through [Azure Machine Learning Studio (classic)](https://studio.azureml.net) and a [web service](../../machine-learning/classic/deploy-a-machine-learning-web-service.md).
-
-## Get started with ML Services on HDInsight
-
-To create an ML Services cluster in HDInsight, select the **ML Services** cluster type. The ML Services cluster type includes ML Server on the data nodes, and edge node. The edge node serves as a landing zone for ML Services-based analytics. See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-create-linux-clusters-portal.md) for a walkthrough on how to create the cluster.
-
-## Why choose ML Services in HDInsight?
-
-ML Services in HDInsight provides the following benefits:
-
-### AI innovation from Microsoft and open-source
-
- ML Services includes a highly adaptable, distributed set of algorithms such as [RevoscaleR](/machine-learning-server/r-reference/revoscaler/revoscaler), [revoscalepy](/machine-learning-server/python-reference/revoscalepy/revoscalepy-package), and [microsoftML](/machine-learning-server/python-reference/microsoftml/microsoftml-package). These algorithms can work on data sizes larger than the size of physical memory. They also run on a wide variety of platforms in a distributed manner. Learn more about the collection of Microsoft's custom [R packages](/machine-learning-server/r-reference/introducing-r-server-r-package-reference) and [Python packages](/machine-learning-server/python-reference/introducing-python-package-reference) included with the product.
-
- ML Services bridges these Microsoft innovations and contributions coming from the open-source community (R, Python, and AI toolkits), all on top of a single enterprise-grade platform. Any R or Python open-source machine learning package can work side by side with any proprietary innovation from Microsoft.
-
-### Simple, secure, and high-scale operationalization and administration
-
- Enterprises relying on traditional paradigms and environments invest much time and effort towards operationalization. This approach results in inflated costs and delays, including the time needed to translate models, iterate to keep them valid and current, gain regulatory approval, and manage permissions.
-
- ML Services offers enterprise grade [operationalization](/machine-learning-server/what-is-operationalization). After a machine learning model completes, it takes just a few clicks to generate web services APIs. These [web services](/machine-learning-server/operationalize/concept-what-are-web-services) are hosted on a server grid in the cloud and can be integrated with line-of-business applications. The ability to deploy to an elastic grid lets you scale seamlessly with the needs of your business, both for batch and real-time scoring. For instructions, see [Operationalize ML Services on HDInsight](r-server-operationalize.md).
-
-<!
-* **Deep ecosystem engagements to deliver customer success with optimal total cost of ownership**
-
- Individuals embarking on the journey of making their applications intelligent or simply wanting to learn the new world of AI and machine learning, need the right resources to help them get started. In addition to this documentation, Microsoft provides several learning resources and has engaged several training partners to help you ramp up and become productive quickly.
->
-
-> [!NOTE]
-> The ML Services cluster type on HDInsight is supported only on HDInsight 3.6. HDInsight 3.6 is scheduled to retire on December 31, 2020.
-
-## Key features of ML Services on HDInsight
-
-The following features are included in ML Services on HDInsight.
-
-| Feature category | Description |
-||-|
-| R-enabled | [R packages](/machine-learning-server/r-reference/introducing-r-server-r-package-reference) for solutions written in R, with an open-source distribution of R, and run-time infrastructure for script execution. |
-| Python-enabled | [Python modules](/machine-learning-server/python-reference/introducing-python-package-reference) for solutions written in Python, with an open-source distribution of Python, and run-time infrastructure for script execution. |
-| [Pre-trained models](/machine-learning-server/install/microsoftml-install-pretrained-models) | For visual analysis and text sentiment analysis, ready to score data you provide. |
-| [Deploy and consume](r-server-operationalize.md) | `Operationalize` your server and deploy solutions as a web service. |
-| [Remote execution](r-server-hdinsight-manage.md#connect-remotely-to-microsoft-ml-services) | Start remote sessions on ML Services cluster on your network from your client workstation. |
-
-## Data storage options for ML Services on HDInsight
-
-Default storage for the HDFS file system can be an Azure Storage account or Azure Data Lake Storage. Data uploaded to cluster storage during analysis is persisted, so it remains available even after the cluster is deleted. Various tools can handle the data transfer to storage, including the portal-based upload facility of the storage account and the AzCopy utility.
-
-You can enable access to additional Blob and Data Lake stores during cluster creation, so you aren't limited to the primary storage option. To learn more about using multiple storage accounts, see the [Azure Storage options for ML Services on HDInsight](./r-server-storage.md) article.
-
-You can also use Azure Files as a storage option for use on the edge node. Azure Files enables you to mount a file share created in Azure Storage to the Linux file system. For more information, see [Azure Storage options for ML Services on HDInsight](r-server-storage.md).
-
-## Access ML Services edge node
-
-You can connect to Microsoft ML Server on the edge node using a browser, or SSH/PuTTY. The R console is installed by default during cluster creation.
-
-## Develop and run R scripts
-
-Your R scripts can use any of the 8000+ open-source R packages. You can also use the parallelized and distributed routines from the ScaleR library. Scripts run on the edge node within the R interpreter on that node, except for steps that call ScaleR functions with a MapReduce (RxHadoopMR) or Spark (RxSpark) compute context; those functions run in a distributed fashion across the data nodes that are associated with the data. For more information about context options, see [Compute context options for ML Services on HDInsight](r-server-compute-contexts.md).
-
-## `Operationalize` a model
-
-When your data modeling is complete, `operationalize` the model to make predictions for new data either from Azure or on-premises. This process is known as scoring. Scoring can be done in HDInsight, Azure Machine Learning, or on-premises.
-
-### Score in HDInsight
-
-To score in HDInsight, write an R function. The function calls your model to make predictions for a new data file that you've loaded to your storage account. Then, save the predictions back to the storage account. You can run this routine on-demand on the edge node of your cluster or by using a scheduled job.
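-
-As an illustration only, the following sketch shows what such a routine might look like with RevoScaleR; the model object, file paths, and folder layout are hypothetical:
-
-```r
-# Hypothetical scoring routine run on the edge node (model, paths, and data layout are illustrative)
-scoreNewData <- function(model, inputCsv, outputXdf) {
-    hdfsFS  <- RxHdfsFileSystem()                          # cluster storage (HDFS-compatible)
-    newData <- RxTextData(inputCsv, fileSystem = hdfsFS)   # new data file in the storage account
-    output  <- RxXdfData(outputXdf, fileSystem = hdfsFS)   # destination for the predictions
-    rxPredict(model, data = newData, outData = output)     # write predictions back to storage
-}
-
-# scoreNewData(myModel, "/share/newdata.csv", "/share/predictions.xdf")
-```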
-
-### Score in Azure Machine Learning (AML)
-
-To score using Azure Machine Learning, use the open-source Azure Machine Learning R package known as [AzureML](https://cran.r-project.org/src/contrib/Archive/AzureML/) to publish your model as an Azure web service. For convenience, this package is pre-installed on the edge node. Next, use the facilities in Azure Machine Learning to create a user interface for the web service, and then call the web service as needed for scoring. Then convert ScaleR model objects to equivalent open-source model objects for use with the web service. Use ScaleR coercion functions, such as `as.randomForest()` for ensemble-based models, for this conversion.
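-
-As a sketch of the coercion step only (the model, formula, and data names are hypothetical, and the web-service publishing step is omitted):
-
-```r
-library(randomForest)   # provides the open-source model class produced by the coercion
-
-# Train a ScaleR ensemble model, then convert it to its open-source equivalent
-rfScaleR     <- rxDForest(outcome ~ x1 + x2, data = trainingData)   # illustrative formula and data
-rfOpenSource <- as.randomForest(rfScaleR)
-
-# The open-source object can now be scored with predict() and published with the AzureML package
-predictions <- predict(rfOpenSource, newdata = newData)
-```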
-
-### Score on-premises
-
-To score on-premises after creating your model: serialize the model in R, download it, de-serialize it, then use it for scoring new data. You can score new data by using the approach described earlier in Score in HDInsight or by using [web services](/machine-learning-server/operationalize/concept-what-are-web-services).
-
-## Maintain the cluster
-
-### Install and maintain R packages
-
-Most of the R packages that you use are required on the edge node since most steps of your R scripts run there. To install additional R packages on the edge node, you can use the `install.packages()` method in R.
-
-If you're just using ScaleR library routines, you don't usually need additional R packages. You might need additional packages for **rxExec** or **RxDataStep** execution on the data nodes.
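-
-For example, a minimal sketch that uses `rxExec` to check that a package is available on the nodes, assuming it runs on the cluster's edge node (the package name is illustrative):
-
-```r
-# Run a small function on the cluster nodes; here it just reports a package version
-rxSetComputeContext(RxSpark())
-rxExec(function() as.character(packageVersion("stringr")), timesToRun = 4)
-```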
-
-The additional packages can be installed with a script action after you create the cluster. For more information, see [Manage ML Services in HDInsight cluster](r-server-hdinsight-manage.md).
-
-### Change Apache Hadoop MapReduce memory settings
-
-Available memory to ML Services can be modified when it's running a MapReduce job. To modify a cluster, use the Apache Ambari UI for your cluster. For Ambari UI instructions, see [Manage HDInsight clusters using the Ambari Web UI](../hdinsight-hadoop-manage-ambari.md).
-
-Available memory to ML Services can be changed by using Hadoop switches in the call to **RxHadoopMR**:
-
-```r
-hadoopSwitches = "-libjars /etc/hadoop/conf -Dmapred.job.map.memory.mb=6656"
-```
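-
-For example, a minimal sketch that passes the switches when creating the MapReduce compute context (the memory value is illustrative):
-
-```r
-# Create a MapReduce compute context with custom Hadoop switches and make it current
-myHadoopCluster <- RxHadoopMR(
-    hadoopSwitches = "-libjars /etc/hadoop/conf -Dmapred.job.map.memory.mb=6656",
-    consoleOutput  = TRUE
-)
-rxSetComputeContext(myHadoopCluster)
-```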
-
-### Scale your cluster
-
-An existing ML Services cluster on HDInsight can be scaled up or down through the portal. By scaling up, you gain additional capacity for larger processing tasks. You can scale back a cluster when it's idle. For instructions about how to scale a cluster, see [Manage HDInsight clusters](../hdinsight-administer-use-portal-linux.md).
-
-### Maintain the system
-
-OS Maintenance is done on the underlying Linux VMs in an HDInsight cluster during off-hours. Typically, maintenance is done at 3:30 AM (VM's local time) every Monday and Thursday. Updates don't impact more than a quarter of the cluster at a time.
-
-Running jobs might slow down during maintenance. However, they should still run to completion. Any custom software or local data that you have is preserved across these maintenance events unless a catastrophic failure occurs that requires a cluster rebuild.
-
-## IDE options for ML Services on HDInsight
-
-The Linux edge node of an HDInsight cluster is the landing zone for R-based analysis. Recent versions of HDInsight provide RStudio Server, a browser-based IDE, on the edge node. RStudio Server is more productive than the R console for development and execution.
-
-A desktop IDE can access the cluster through a remote MapReduce or Spark compute context. Options include: Microsoft's [R Tools for Visual Studio](https://marketplace.visualstudio.com/items?itemName=MikhailArkhipov007.RTVS2019) (RTVS), RStudio, and Walware's Eclipse-based StatET.
-
-Access the R console on the edge node by typing **R** at the command prompt. When using the console interface, it's convenient to develop R script in a text editor. Then cut and paste sections of your script into the R console as needed.
-
-## Pricing
-
-The prices associated with an ML Services HDInsight cluster are structured similarly to other HDInsight cluster types. They're based on the sizing of the underlying VMs across the name, data, and edge nodes, plus core-hour uplifts. For more information, see [HDInsight pricing](https://azure.microsoft.com/pricing/details/hdinsight/).
-
-## Next steps
-
-To learn more about how to use ML Services on HDInsight clusters, see the following articles:
-
-* [Execute an R script on an ML Services cluster in Azure HDInsight using RStudio Server](machine-learning-services-quickstart-job-rstudio.md)
-* [Compute context options for ML Services cluster on HDInsight](r-server-compute-contexts.md)
-* [Storage options for ML Services cluster on HDInsight](r-server-storage.md)
hdinsight R Server Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-storage.md
- Title: Azure storage solutions for ML Services on HDInsight - Azure
-description: Learn about the different storage options available with ML Services on HDInsight
-- Previously updated : 01/02/2020---
-# Azure storage solutions for ML Services on Azure HDInsight
--
-ML Services on HDInsight can use different storage solutions to persist data, code, or objects that contain results from analysis. These solutions include the following options:
-- [Azure Blob storage](https://azure.microsoft.com/services/storage/blobs/)
-- [Azure Data Lake Storage Gen1](https://azure.microsoft.com/services/storage/data-lake-storage/)
-- [Azure Files](https://azure.microsoft.com/services/storage/files/)
-
-You also have the option of accessing multiple Azure storage accounts or containers with your HDInsight cluster. Azure Files is a convenient data storage option for use on the edge node that enables you to mount an Azure file share to, for example, the Linux file system. But Azure file shares can be mounted and used by any system that has a supported operating system such as Windows or Linux.
-
-When you create an Apache Hadoop cluster in HDInsight, you specify either an **Azure Blob storage** account or **Data Lake Storage Gen1**. A specific storage container from that account holds the file system for the cluster that you create (for example, the Hadoop Distributed File System). For more information and guidance, see:
-- [Use Azure Blob storage with HDInsight](../hdinsight-hadoop-use-blob-storage.md)
-- [Use Data Lake Storage Gen1 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen1.md)
-
-## Use Azure Blob storage accounts with ML Services cluster
-
-If you specified more than one storage account when creating your ML Services cluster, the following instructions explain how to use a secondary account for data access and operations on an ML Services cluster. Assume the following storage accounts and containers: **storage1** with a default container called **container1**, and **storage2** with **container2**.
-
-> [!WARNING]
-> For performance purposes, the HDInsight cluster is created in the same data center as the primary storage account that you specify. Using a storage account in a different location than the HDInsight cluster is not supported.
-
-### Use the default storage with ML Services on HDInsight
-
-1. Using an SSH client, connect to the edge node of your cluster. For information on using SSH with HDInsight clusters, see [Use SSH with HDInsight](../hdinsight-hadoop-linux-use-ssh-unix.md).
-
-2. Copy a sample file, mysamplefile.csv, to the /share directory.
-
-    ```bash
-    hadoop fs -mkdir /share
-    hadoop fs -copyFromLocal mysamplefile.csv /share
-    ```
-
-3. Switch to RStudio or another R console, and write R code to set the name node to **default** and the location of the file you want to access.
-
- ```R
- myNameNode <- "default"
- myPort <- 0
-
- #Location of the data:
- bigDataDirRoot <- "/share"
-
- #Define Spark compute context:
- mySparkCluster <- RxSpark(nameNode=myNameNode, consoleOutput=TRUE)
-
- #Set compute context:
- rxSetComputeContext(mySparkCluster)
-
- #Define the Hadoop Distributed File System (HDFS) file system:
- hdfsFS <- RxHdfsFileSystem(hostName=myNameNode, port=myPort)
-
- #Specify the input file to analyze in HDFS:
- inputFile <-file.path(bigDataDirRoot,"mysamplefile.csv")
- ```
-
-All the directory and file references point to the storage account `wasbs://container1@storage1.blob.core.windows.net`. This is the **default storage account** that's associated with the HDInsight cluster.
-
-### Use the additional storage with ML Services on HDInsight
-
-Now, suppose you want to process a file called mysamplefile1.csv that's located in the /private directory of **container2** in **storage2**.
-
-In your R code, point the name node reference to the **storage2** storage account.
-
-```R
-myNameNode <- "wasbs://container2@storage2.blob.core.windows.net"
-myPort <- 0
-
-#Location of the data:
-bigDataDirRoot <- "/private"
-
-#Define Spark compute context:
-mySparkCluster <- RxSpark(consoleOutput=TRUE, nameNode=myNameNode, port=myPort)
-
-#Set compute context:
-rxSetComputeContext(mySparkCluster)
-
-#Define HDFS file system:
-hdfsFS <- RxHdfsFileSystem(hostName=myNameNode, port=myPort)
-
-#Specify the input file to analyze in HDFS:
-inputFile <-file.path(bigDataDirRoot,"mysamplefile1.csv")
-```
-
-All of the directory and file references now point to the storage account `wasbs://container2@storage2.blob.core.windows.net`. This is the **Name Node** that you've specified.
-
-Configure the `/user/RevoShare/<SSH username>` directory on **storage2** as follows:
-
-```bash
-hadoop fs -mkdir wasbs://container2@storage2.blob.core.windows.net/user
-hadoop fs -mkdir wasbs://container2@storage2.blob.core.windows.net/user/RevoShare
-hadoop fs -mkdir wasbs://container2@storage2.blob.core.windows.net/user/RevoShare/<SSH username>
-```
-
-## Use Azure Data Lake Storage Gen1 with ML Services cluster
-
-To use Data Lake Storage Gen1 with your HDInsight cluster, you need to give your cluster access to each Azure Data Lake Storage Gen1 that you want to use. For instructions on how to use the Azure portal to create a HDInsight cluster with an Azure Data Lake Storage Gen1 as the default storage or as additional storage, see [Create an HDInsight cluster with Data Lake Storage Gen1 using Azure portal](../../data-lake-store/data-lake-store-hdinsight-hadoop-use-portal.md).
-
-You then use the storage in your R script much like you did a secondary Azure storage account as described in the previous procedure.
-
-### Add cluster access to your Azure Data Lake Storage Gen1
-
-You access Data Lake Storage Gen1 by using an Azure Active Directory (Azure AD) Service Principal that's associated with your HDInsight cluster.
-
-1. When you create your HDInsight cluster, select **Cluster Azure AD Identity** from the **Data Source** tab.
-
-2. In the **Cluster Azure AD Identity** dialog box, under **Select AD Service Principal**, select **Create new**.
-
-After you give the Service Principal a name and create a password for it, click **Manage ADLS Access** to associate the Service Principal with your Data Lake Storage.
-
-It's also possible to add cluster access to one or more Data Lake storage Gen1 accounts following cluster creation. Open the Azure portal entry for a Data Lake Storage Gen1 and go to **Data Explorer > Access > Add**.
-
-### How to access Data Lake Storage Gen1 from ML Services on HDInsight
-
-Once you've given access to Data Lake Storage Gen1, you can use the storage in ML Services cluster on HDInsight the way you would a secondary Azure storage account. The only difference is that the prefix **wasbs://** changes to **adl://** as follows:
-
-```R
-# Point to the ADL Storage (e.g. ADLtest)
-myNameNode <- "adl://rkadl1.azuredatalakestore.net"
-myPort <- 0
-
-# Location of the data (assumes a /share directory on the ADL account)
-bigDataDirRoot <- "/share"
-
-# Define Spark compute context
-mySparkCluster <- RxSpark(consoleOutput=TRUE, nameNode=myNameNode, port=myPort)
-
-# Set compute context
-rxSetComputeContext(mySparkCluster)
-
-# Define HDFS file system
-hdfsFS <- RxHdfsFileSystem(hostName=myNameNode, port=myPort)
-
-# Specify the input file in HDFS to analyze
-inputFile <-file.path(bigDataDirRoot,"mysamplefile.csv")
-```
-
-The following commands are used to configure the Data Lake Storage Gen1 with the RevoShare directory and add the sample .csv file from the previous example:
-
-```bash
-hadoop fs -mkdir adl://rkadl1.azuredatalakestore.net/user
-hadoop fs -mkdir adl://rkadl1.azuredatalakestore.net/user/RevoShare
-hadoop fs -mkdir adl://rkadl1.azuredatalakestore.net/user/RevoShare/<user>
-
-hadoop fs -mkdir adl://rkadl1.azuredatalakestore.net/share
-
-hadoop fs -copyFromLocal /usr/lib64/R Server-7.4.1/library/RevoScaleR/SampleData/mysamplefile.csv adl://rkadl1.azuredatalakestore.net/share
-
-hadoop fs -ls adl://rkadl1.azuredatalakestore.net/share
-```
-
-## Use Azure Files with ML Services on HDInsight
-
-There's also a convenient data storage option for use on the edge node called [Azure Files](https://azure.microsoft.com/services/storage/files/). It enables you to mount an Azure Storage file share to the Linux file system. This option can be handy for storing data files, R scripts, and result objects that might be needed later, especially when it makes sense to use the native file system on the edge node rather than HDFS.
-
-A major benefit of Azure Files is that the file shares can be mounted and used by any system that has a supported OS such as Windows or Linux. For example, it can be used by another HDInsight cluster that you or someone on your team has, by an Azure VM, or even by an on-premises system. For more information, see:
-- [How to use Azure Files with Linux](../../storage/files/storage-how-to-use-files-linux.md)
-- [How to use Azure Files on Windows](../../storage/files/storage-dotnet-how-to-use-files.md)
-
-## Next steps
-- [Overview of ML Services cluster on HDInsight](r-server-overview.md)
-- [Compute context options for ML Services cluster on HDInsight](r-server-compute-contexts.md)
-- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen2.md)
hdinsight R Server Submit Jobs R Tools Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/r-server/r-server-submit-jobs-r-tools-vs.md
- Title: Submit jobs from R Tools for Visual Studio - Azure HDInsight
-description: Submit R jobs from your local Visual Studio machine to an HDInsight cluster.
-- Previously updated : 06/19/2019---
-# Submit jobs from R Tools for Visual Studio
--
-[R Tools for Visual Studio](https://marketplace.visualstudio.com/items?itemName=MikhailArkhipov007.RTVS2019) (RTVS) is a free, open-source extension for the Community (free), Professional, and Enterprise editions of both [Visual Studio 2017](https://www.visualstudio.com/downloads/), and [Visual Studio 2015 Update 3](https://go.microsoft.com/fwlink/?LinkId=691129) or higher. RTVS is not available for [Visual Studio 2019](/visualstudio/porting/port-migrate-and-upgrade-visual-studio-projects?preserve-view=true&view=vs-2019).
-
-RTVS enhances your R workflow by offering tools such as the [R Interactive window](/visualstudio/rtvs/interactive-repl) (REPL), intellisense (code completion), [plot visualization](/visualstudio/rtvs/visualizing-data) through R libraries such as ggplot2 and ggviz, [R code debugging](/visualstudio/rtvs/debugging), and more.
-
-## Set up your environment
-
-1. Install [R Tools for Visual Studio](/visualstudio/rtvs/installing-r-tools-for-visual-studio).
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/install-r-tools-for-vs.png" alt-text="Installing RTVS in Visual Studio 2017" border="true":::
-
-2. Select the *Data science and analytical applications* workload, then select the **R language support**, **Runtime support for R development**, and **Microsoft R Client** options.
-
-3. You need to have public and private keys for SSH authentication.
- <!-- {TODO tbd, no such file yet}[use SSH with HDInsight](hdinsight-hadoop-linux-use-ssh-windows.md) -->
-
-4. Install [ML Server](/previous-versions/machine-learning-server/install/r-server-install-windows) on your machine. ML Server provides the [`RevoScaleR`](/machine-learning-server/r-reference/revoscaler/revoscaler) and `RxSpark` functions.
-
-5. Install [PuTTY](https://www.putty.org/) to provide a compute context to run `RevoScaleR` functions from your local client to your HDInsight cluster.
-
-6. You have the option to apply the Data Science Settings to your Visual Studio environment, which provides a new layout for your workspace for the R tools.
- 1. To save your current Visual Studio settings, use the **Tools > Import and Export Settings** command, then select **Export selected environment settings** and specify a file name. To restore those settings, use the same command and select **Import selected environment settings**.
-
- 2. Go to the **R Tools** menu item, then select **Data Science Settings...**.
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/data-science-settings.png" alt-text="Visual Studio Data Science Settings" border="true":::
-
- > [!NOTE]
- > Using the approach in step 1, you can also save and restore your personalized data scientist layout, rather than repeating the **Data Science Settings** command.
-
-## Execute local R methods
-
-1. Create your HDInsight ML Services cluster.
-2. Install the [RTVS extension](/visualstudio/rtvs/installation).
-3. Download the [samples zip file](https://github.com/Microsoft/RTVS-docs/archive/master.zip).
-4. Open `examples/Examples.sln` to launch the solution in Visual Studio.
-5. Open the `1-Getting Started with R.R` file in the `A first look at R` solution folder.
-6. Starting at the top of the file, press Ctrl+Enter to send each line, one at a time, to the R Interactive window. Some lines might take a while as they install packages.
- * Alternatively, you can select all lines in the R file (Ctrl+A), then either execute all (Ctrl+Enter), or select the Execute Interactive icon on the toolbar.
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/execute-interactive1.png" alt-text="Visual Studio execute interactive" border="true":::
-
-7. After running all the lines in the script, you should see an output similar to this:
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/visual-studio-workspace.png" alt-text="Visual Studio workspace R tools" border="true":::
-
-## Submit jobs to an HDInsight ML Services cluster
-
-Using a Microsoft ML Server/Microsoft R Client from a Windows computer equipped with PuTTY, you can create a compute context that will run distributed `RevoScaleR` functions from your local client to your HDInsight cluster. Use `RxSpark` to create the compute context, specifying your username, the Apache Hadoop cluster's edge node, SSH switches, and so forth.
-
-1. The ML Services edge node address on HDInsight is `CLUSTERNAME-ed-ssh.azurehdinsight.net` where `CLUSTERNAME` is the name of your ML Services cluster.
-
-1. Paste the following code into the R Interactive window in Visual Studio, altering the values of the setup variables to match your environment.
-
- ```R
- # Setup variables that connect the compute context to your HDInsight cluster
-    mySshHostname <- 'r-cluster-ed-ssh.azurehdinsight.net' # HDI secure shell hostname
- mySshUsername <- 'sshuser' # HDI SSH username
- mySshClientDir <- "C:\\Program Files (x86)\\PuTTY"
- mySshSwitches <- '-i C:\\Users\\azureuser\\r.ppk' # Path to your private ssh key
- myHdfsShareDir <- paste("/user/RevoShare", mySshUsername, sep = "/")
- myShareDir <- paste("/var/RevoShare", mySshUsername, sep = "/")
- mySshProfileScript <- "/usr/lib64/microsoft-r/3.3/hadoop/RevoHadoopEnvVars.site"
-
- # Create the Spark Cluster compute context
- mySparkCluster <- RxSpark(
- sshUsername = mySshUsername,
- sshHostname = mySshHostname,
- sshSwitches = mySshSwitches,
- sshProfileScript = mySshProfileScript,
- consoleOutput = TRUE,
- hdfsShareDir = myHdfsShareDir,
- shareDir = myShareDir,
- sshClientDir = mySshClientDir
- )
-
- # Set the current compute context as the Spark compute context defined above
- rxSetComputeContext(mySparkCluster)
- ```
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/apache-spark-context.png" alt-text="apache spark setting the context" border="true":::
-
-1. Execute the following commands in the R Interactive window:
-
- ```R
- rxHadoopCommand("version") # should return version information
- rxHadoopMakeDir("/user/RevoShare/newUser") # creates a new folder in your storage account
- rxHadoopCopy("/example/data/people.json", "/user/RevoShare/newUser") # copies file to new folder
- ```
-
- You should see an output similar to the following:
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/successful-rx-commands.png" alt-text="Successful rx command execution" border="true":::
-1. Verify that the `rxHadoopCopy` successfully copied the `people.json` file from the example data folder to the newly created `/user/RevoShare/newUser` folder:
-
- 1. From your HDInsight ML Services cluster pane in Azure, select **Storage accounts** from the left-hand menu.
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/hdinsight-storage-accounts.png" alt-text="Azure HDInsight Storage accounts" border="true":::
-
- 2. Select the default storage account for your cluster, making note of the container/directory name.
-
- 3. Select **Containers** from the left-hand menu on your storage account pane.
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/hdi-storage-containers.png" alt-text="Azure HDInsight Storage containers" border="true":::
-
- 4. Select your cluster's container name, browse to the **user** folder (you might have to click *Load more* at the bottom of the list), then select *RevoShare*, then **newUser**. The `people.json` file should be displayed in the `newUser` folder.
-
- :::image type="content" source="./media/r-server-submit-jobs-r-tools-vs/hdinsight-copied-file.png" alt-text="HDInsight copied file folder location" border="true":::
-
-1. After you are finished using the current Apache Spark context, you must stop it. You cannot run multiple contexts at once.
-
- ```R
- rxStopEngine(mySparkCluster)
- ```
-
-## Next steps
-
-* [Compute context options for ML Services on HDInsight](r-server-compute-contexts.md)
-* [Combining ScaleR and SparkR](../hdinsight-hadoop-r-scaler-sparkr.md) provides an example of airline flight delay predictions.
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
Title: Architectural concepts in Azure IoT Central | Microsoft Docs
description: This article introduces key concepts relating the architecture of Azure IoT Central Previously updated : 08/31/2021 Last updated : 06/03/2022
iot-central Concepts Device Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-authentication.md
To connect a device with device SAS token to your application:
> [!NOTE] > To use existing SAS keys in your enrollment groups, disable the **Auto generate keys** toggle and manually enter your SAS keys.
+If you use the default **SAS-IoT-Devices** enrollment group, IoT Central generates the individual device keys for you. To access these keys, select **Connect** on the device details page. This page displays the **ID Scope**, **Device ID**, **Primary key**, and **Secondary key** that you use in your device code. This page also displays a QR code that contains the same data.
+ ## Individual enrollment Typically, devices connect by using credentials derived from an enrollment group X.509 certificate or SAS key. However, if your devices each have their own credentials, you can use individual enrollments. An individual enrollment is an entry for a single device that's allowed to connect. Individual enrollments can use either X.509 leaf certificates or SAS tokens (from a physical or virtual trusted platform module) as attestation mechanisms. For more information, see [DPS individual enrollment](../../iot-dps/concepts-service.md#individual-enrollment).
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md
Title: What are device templates in Azure IoT Central | Microsoft Docs
description: Azure IoT Central device templates let you specify the behavior of the devices connected to your application. A device template specifies the telemetry, properties, and commands the device must implement. A device template also defines the UI for the device in IoT Central such as the forms and views an operator uses. Previously updated : 08/24/2021 Last updated : 06/03/2022
iot-central Concepts Faq Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md
Title: Move from IoT Central to a PaaS solution | Microsoft Docs
description: How do I move between aPaaS and PaaS solution approaches? Previously updated : 01/25/2022 Last updated : 06/09/2022
iot-central Concepts Faq Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-extend.md
Title: Extend IoT Central | Microsoft Docs
description: How do I extend IoT Central if it's missing something I need? Previously updated : 01/05/2022 Last updated : 06/09/2022
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iot-edge.md
Title: Azure IoT Edge and Azure IoT Central | Microsoft Docs
description: Understand how to use Azure IoT Edge with an IoT Central application. Previously updated : 01/18/2022 Last updated : 06/08/2022
iot-central Concepts Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-quotas-limits.md
Title: Azure IoT Central quotas and limits | Microsoft Docs
description: This article lists the key quotas and limits that apply to an IoT Central application. Previously updated : 12/15/2021 Last updated : 06/07/2022
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
Title: Telemetry, property, and command payloads in Azure IoT Central | Microsof
description: Azure IoT Central device templates let you specify the telemetry, properties, and commands of a device must implement. Understand the format of the data a device can exchange with IoT Central. Previously updated : 12/27/2021 Last updated : 06/08/2022
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md
instanceOf: .device.templateId,
properties: .device.properties.reported | map({ key: .name, value: .value }) | from_entries ```
-Device templates: If you're currently using legacy data exports with the device templates data type, then you can obtain the same data using the [Device Templates - Get API call](/rest/api/iotcentral/1.0dataplane/device-templates/get).
+Device templates: If you're currently using legacy data exports with the device templates data type, then you can obtain the same data using the [Device Templates - Get API call](/rest/api/iotcentral/2022-05-31dataplane/device-templates/get).
### Destination migration considerations
This example snapshot shows a message that contains device and properties data i
If you have an existing data export in your preview application with the *Devices* and *Device templates* streams turned on, update your export by **30 June 2020**. This requirement applies to exports to Azure Blob storage, Azure Event Hubs, and Azure Service Bus.
-Starting 3 February 2020, all new exports in applications with Devices and Device templates enabled will have the data format described above. All exports created before this date remain on the old data format until 30 June 2020, at which time these exports will automatically be migrated to the new data format. The new data format matches the [device](/rest/api/iotcentral/1.0dataplane/devices/get), [device property](/rest/api/iotcentral/1.0dataplane/devices/get-properties), and [device template](/rest/api/iotcentral/1.0dataplane/device-templates/get) objects in the IoT Central public API.
+Starting 3 February 2020, all new exports in applications with Devices and Device templates enabled will have the data format described above. All exports created before this date remain on the old data format until 30 June 2020, at which time these exports will automatically be migrated to the new data format. The new data format matches the [device](/rest/api/iotcentral/2022-05-31dataplane/devices/get), [device property](/rest/api/iotcentral/2022-05-31dataplane/devices/get-properties), and [device template](/rest/api/iotcentral/2022-05-31dataplane/device-templates/get) objects in the IoT Central public API.
For **Devices**, notable differences between the old data format and the new data format include: - `@id` for device is removed, `deviceId` is renamed to `id`
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md
When a device connects to your IoT Central application, its device status change
- A new real device is added on the **Devices** page.
- A set of devices is added using **Import** on the **Devices** page.
-1. The device status changes to **Provisioned** when the device that connected to your IoT Central application with valid credentials completes the provisioning step. In this step, the device uses DPS to automatically retrieve a connection string from the IoT Hub used by your IoT Central application. The device can now connect to IoT Central and start sending data.
+1. The device status changes to **Provisioned** when the device that connected to your IoT Central application with valid credentials completes the provisioning step. In this step, the device uses DPS to automatically retrieve a connection string from the IoT Hub used by your IoT Central application. The device can now connect to IoT Central and start sending data.
1. An operator can block a device. When a device is blocked, it can't send data to your IoT Central application. Blocked devices have a status of **Blocked**. An operator must reset the device before it can resume sending data. When an operator unblocks a device the status returns to its previous value, **Registered** or **Provisioned**.
When a device connects to your IoT Central application, its device status change
### Device connection status
-When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events for the device are generated. These events are not sent by the device, they are generated internally by IoT Central.
+When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events for the device are generated. These events aren't sent by the device, they're generated internally by IoT Central.
The following diagram shows how, when a device connects, the connection is registered at the end of a time window. If multiple connection and disconnection events occur, IoT Central registers the one that's closest to the end of the time window. For example, if a device disconnects and reconnects within the time window, IoT Central registers the connection event. Currently, the time window is approximately one minute.
To add a device to your Azure IoT Central application:
1. This device now appears in your device list for this template. Select the device to see the device details page that contains all views for the device.
+## Get device connection information
+
+When a device provisions and connects to IoT Central, it needs connection information from your IoT Central application:
+
+- The *ID Scope* that identifies the application to DPS.
+- The *Device ID* that was used to register the device.
+- Either a SAS key or X.509 certificate.
+
+To find these values:
+
+1. Choose **Devices** on the left pane.
+
+1. Click on the device in the device list to see the device details.
+
+1. Select **Connect** to view the connection information. The QR code encodes a JSON document that includes the **ID Scope**, **Device ID**, and **Primary key** derived from the default **SAS-IoT-Devices** device connection group.
+
+> [!NOTE]
+> If the authentication type is **Shared access signature**, the keys displayed are derived from the default **SAS-IoT-Devices** device connection group.
+ ## Change organization To move a device to a different organization, you must have access to both the source and destination organizations. To move a device:
To move a device to a different organization, you must have access to both the s
1. Select the device to move in the device list.
-1. Select **Manage Device** and **Organization** from the drop down menu.
+1. Select **Manage Device** and **Organization** from the drop-down menu.
1. Select the new organization for the device:
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
The response to this request looks like the following example. The role value id
} ```
-You can also add a service principal user which is useful if you need to use service principal authentication for REST API calls. To learn more, see [Add or update a service principal user](/rest/api/iotcentral/1.0dataplane/users/create#add-or-update-a-service-principal-user).
+You can also add a service principal user which is useful if you need to use service principal authentication for REST API calls. To learn more, see [Add or update a service principal user](/rest/api/iotcentral/2022-05-31dataplane/users/create#add-or-update-a-service-principal-user).
### Change the role of a user
iot-central Iot Central Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/iot-central-customer-data-requests.md
Title: Customer data request featuresΓÇï in Azure IoT Central | Microsoft Docs
description: This article describes identifying, deleting, and exporting customer data in Azure IoT Central application. Previously updated : 12/28/2021 Last updated : 06/03/2022
iot-central Iot Central Customer Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/iot-central-customer-data-residency.md
Title: Customer data residency in Azure IoT Central | Microsoft Docs
description: This article describes customer data residency in Azure IoT Central applications. Previously updated : 12/09/2021 Last updated : 06/07/2022
iot-central Iot Central Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/iot-central-supported-browsers.md
Title: Supported browsers for Azure IoT Central | Microsoft Docs
description: Azure IoT Central can be accessed across modern desktops, tablets and browsers. This article outlines the list of supported browsers. Previously updated : 12/21/2021 Last updated : 06/08/2022
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
Title: Azure IoT Central application administration guide
description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to administer your IoT Central application. Application administration includes users, organization, security, and automated deployments. Previously updated : 01/04/2022 Last updated : 06/08/2022
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-api-tour.md
This article introduces you to Azure IoT Central REST API. Use the API to create
The REST API operations are grouped into the:

-- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/1.0dataplane/api-tokens) and [preview](/rest/api/iotcentral/1.2-previewdataplane/api-tokens) versions of the data plane API.
+- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/2022-05-31dataplane/api-tokens) and [preview](/rest/api/iotcentral/1.2-previewdataplane/api-tokens) versions of the data plane API.
- *Control plane* operations that let you work with the Azure resources associated with IoT Central applications. Control plane operations let you automate tasks that can also be completed in the Azure portal.

## Data plane operations
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
When you register a device with IoT Central, you're telling IoT Central the ID o
There are three ways to register a device in an IoT Central application:

-- Use the **Devices** page in your IoT Central application to register devices individually. To learn more, see [Add a device](howto-manage-devices-individually.md#add-a-device).
-- Add devices in bulk from a CSV file. To learn more, see [Import devices](howto-manage-devices-in-bulk.md#import-devices).
- Automatically register devices when they first try to connect. This scenario enables OEMs to mass manufacture devices that can connect without first being registered. To learn more, see [Automatically register devices](concepts-device-authentication.md#automatically-register-devices).
+- Add devices in bulk from a CSV file. To learn more, see [Import devices](howto-manage-devices-in-bulk.md#import-devices).
+- Use the **Devices** page in your IoT Central application to register devices individually. To learn more, see [Add a device](howto-manage-devices-individually.md#add-a-device).
Optionally, you can require an operator to approve the device before it starts sending data.
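Individual registration can also be scripted. The following is a minimal sketch, assuming the Azure CLI with the azure-iot extension installed and placeholder values for the IoT Central app ID and device ID; it performs the same registration as the **Devices** page:

```azurecli
# Register a single device in an IoT Central application (azure-iot extension required).
# Replace the placeholders with your IoT Central app ID and a device ID of your choice.
az extension add --name azure-iot
az iot central device create --app-id {your app id} --device-id {your device id}
```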
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Title: Azure IoT Central data integration guide | Microsoft Docs
description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to integrate your IoT Central application with other services to extend its capabilities. Previously updated : 01/04/2022 Last updated : 06/03/2022
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
To register your device:
Keep this page open. In the next section, you scan this QR code using the smartphone app to connect it to IoT Central.
+> [!TIP]
+> The QR code contains the information, such as the registered device ID, that your device needs to establish a connection to your IoT Central application. It saves you from the need to enter the connection information manually.
+ ## Connect your device To get you started quickly, this article uses the **IoT Plug and Play** smartphone app as an IoT device. The app sends telemetry collected from the smartphone's sensors, responds to commands invoked from IoT Central, and reports property values to IoT Central.
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
Keep Termite open to monitor device output in the following steps.
* IAR Embedded Workbench for ARM (EW for ARM). You can download and install a [14-day free trial of IAR EW for ARM](https://www.iar.com/products/architectures/arm/iar-embedded-workbench-for-arm/).
-* Download the Microchip ATSAME54-XPRO IAR sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the Microchip ATSAME54-XPRO IAR sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory.
+ > [!IMPORTANT]
+ > Choose a directory with a short path to avoid compiler errors when you build. For example, use *C:\atsame54*.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
Keep Termite open to monitor device output in the following steps.
* [MPLAB XC32/32++ Compiler 2.4.0 or later](https://www.microchip.com/mplab/compilers).
-* Download the Microchip ATSAME54-XPRO MPLab sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the Microchip ATSAME54-XPRO MPLab sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory.
+ > [!IMPORTANT]
+ > Choose a directory with a short path to avoid compiler errors when you build. For example, use *C:\atsame54*.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
This checklist is a starting point for firewall rules:
| FQDN (\* = wildcard) | Outbound TCP Ports | Usage | | -- | -- | -- | | `mcr.microsoft.com` | 443 | Microsoft Container Registry |
+ | `\*.data.mcr.microsoft.com` | 443 | Data endpoint providing content delivery. |
| `global.azure-devices-provisioning.net` | 443 | [Device Provisioning Service](../iot-dps/about-iot-dps.md) access (optional) | | `\*.azurecr.io` | 443 | Personal and third-party container registries | | `\*.blob.core.windows.net` | 443 | Download Azure Container Registry image deltas from blob storage |
iot-hub Tutorial Message Enrichments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-message-enrichments.md
Previously updated : 03/16/2022 Last updated : 06/08/2022 # Customer intent: As a customer using Azure IoT Hub, I want to add information to the messages that come through my IoT hub and are sent to another endpoint. For example, I'd like to pass the IoT hub name to the application that reads the messages from the final endpoint, such as Azure Storage. # Tutorial: Use Azure IoT Hub message enrichments
-*Message enrichments* describes the ability of Azure IoT Hub to *stamp* messages with additional information before the messages are sent to the designated endpoint. One reason to use message enrichments is to include data that can be used to simplify downstream processing. For example, enriching device telemetry messages with a device twin tag can reduce load on customers to make device twin API calls for this information. For more information, see [Overview of message enrichments](iot-hub-message-enrichments-overview.md).
+*Message enrichments* are the ability of Azure IoT Hub to stamp messages with additional information before the messages are sent to the designated endpoint. One reason to use message enrichments is to include data that can be used to simplify downstream processing. For example, enriching device messages with a device twin tag can reduce load on customers to make device twin API calls for this information. For more information, see [Overview of message enrichments](iot-hub-message-enrichments-overview.md).
In this tutorial, you see two ways to create and configure the resources that are needed to test the message enrichments for an IoT hub. The resources include one storage account with two storage containers. One container holds the enriched messages, and another container holds the original messages. Also included is an IoT hub to receive the messages and route them to the appropriate storage container based on whether they're enriched or not.
-* The first method is to use the Azure CLI to create the resources and configure the message routing. Then you define the enrichments manually by using the [Azure portal](https://portal.azure.com).
+* The first method is to use the Azure CLI to create the resources and configure the message routing. Then you define the message enrichments in the Azure portal.
-* The second method is to use an Azure Resource Manager template to create both the resources *and* the configurations for the message routing and message enrichments.
+* The second method is to use an Azure Resource Manager template to create both the resources and configure both the message routing and message enrichments.
After the configurations for the message routing and message enrichments are finished, you use an application to send messages to the IoT hub. The hub then routes them to both storage containers. Only the messages sent to the endpoint for the **enriched** storage container are enriched.
-Here are the tasks you perform to complete this tutorial:
+In this tutorial, you perform the following tasks:
-**Use IoT Hub message enrichments**
> [!div class="checklist"]
-> * First method: Create resources and configure message routing by using the Azure CLI. Configure the message enrichments manually by using the [Azure portal](https://portal.azure.com).
-> * Second method: Create resources and configure message routing and message enrichments by using a Resource Manager template.
+>
+> * First method: Create resources and configure message routing using the Azure CLI. Configure the message enrichments in the Azure portal.
+> * Second method: Create resources and configure message routing and message enrichments using a Resource Manager template.
> * Run an app that simulates an IoT device sending messages to the hub.
-> * View the results, and verify that the message enrichments are working as expected.
+> * View the results, and verify that the message enrichments are being applied to the targeted messages.
## Prerequisites -- You must have an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.--- Install [Visual Studio](https://www.visualstudio.com/).
+* You must have an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)] ## Retrieve the IoT C# samples repository
-Download the [IoT C# samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip) from GitHub and unzip them. This repository has several applications, scripts, and Resource Manager templates in it. The ones to be used for this tutorial are as follows:
+Download or clone the [IoT C# samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) from GitHub. Follow the directions in **README.md** to set up the prerequisites for running C# samples.
+
+This repository has several applications, scripts, and Resource Manager templates in it. The ones to be used for this tutorial are as follows:
-* For the manual method, there's a CLI script that's used to create the resources. This script is in /azure-iot-samples-csharp/iot-hub/Tutorials/Routing/SimulatedDevice/resources/iothub_msgenrichment_cli.azcli. This script creates the resources and configures the message routing. After you run this script, create the message enrichments manually by using the [Azure portal](https://portal.azure.com).
-* For the automated method, there's an Azure Resource Manager template. The template is in /azure-iot-samples-csharp/iot-hub/Tutorials/Routing/SimulatedDevice/resources/template_msgenrichments.json. This template creates the resources, configures the message routing, and then configures the message enrichments.
-* The third application you use is the Device Simulation app, which you use to send messages to the IoT hub and test the message enrichments.
+* For the manual method, there's a CLI script that creates the cloud resources. This script is in `/azure-iot-samples-csharp/iot-hub/Tutorials/Routing/SimulatedDevice/resources/iothub_msgenrichment_cli.azcli`. This script creates the resources and configures the message routing. After you run this script, create the message enrichments manually by using the Azure portal.
+* For the automated method, there's an Azure Resource Manager template. The template is in `/azure-iot-samples-csharp/iot-hub/Tutorials/Routing/SimulatedDevice/resources/template_msgenrichments.json`. This template creates the resources, configures the message routing, and then configures the message enrichments.
+* The third application you use is the device simulation app, which you use to send messages to the IoT hub and test the message enrichments.
-## Manually set up and configure by using the Azure CLI
+## Create and configure resources using the Azure CLI
-In addition to creating the necessary resources, the Azure CLI script also configures the two routes to the endpoints that are separate storage containers. For more information on how to configure the message routing, see the [Routing tutorial](tutorial-routing.md). After the resources are set up, use the [Azure portal](https://portal.azure.com) to configure message enrichments for each endpoint. Then continue on to the testing step.
+In addition to creating the necessary resources, the Azure CLI script also configures the two routes to the endpoints that are separate storage containers. For more information on how to configure message routing, see the [routing tutorial](tutorial-routing.md). After the resources are set up, use the [Azure portal](https://portal.azure.com) to configure message enrichments for each endpoint. Then continue on to the testing step.
> [!NOTE] > All messages are routed to both endpoints, but only the messages going to the endpoint with configured message enrichments will be enriched.
->
You can use the script that follows, or you can open the script in the /resources folder of the downloaded repository. The script performs the following steps: * Create an IoT hub. * Create a storage account. * Create two containers in the storage account. One container is for the enriched messages, and another container is for messages that aren't enriched.
-* Set up routing for the two different storage accounts:
- * Create an endpoint for each storage account container.
- * Create a route to each of the storage account container endpoints.
+* Set up routing for the two different storage containers:
+ * Create an endpoint for each storage account container.
+ * Create a route to each of the storage account container endpoints.
There are several resource names that must be globally unique, such as the IoT hub name and the storage account name. To make running the script easier, those resource names are appended with a random alphanumeric value called *randomValue*. The random value is generated once at the top of the script. It's appended to the resource names as needed throughout the script. If you don't want the value to be random, you can set it to an empty string or to a specific value.
Here are the resources created by the script. *Enriched* means that the resource
| Name | Value | |--|--| | resourceGroup | ContosoResourcesMsgEn |
-| container name | original |
-| container name | enriched |
| IoT device name | Contoso-Test-Device | | IoT Hub name | ContosoTestHubMsgEn | | storage Account Name | contosostorage |
+| container name 1 | original |
+| container name 2 | enriched |
| endpoint Name 1 | ContosoStorageEndpointOriginal | | endpoint Name 2 | ContosoStorageEndpointEnriched| | route Name 1 | ContosoStorageRouteOriginal |
subscriptionID=$(az account show --query id -o tsv)
# This retrieves a random value. randomValue=$RANDOM
-# This command installs the IOT Extension for Azure CLI.
+# This command installs the IoT Extension for Azure CLI.
# You only need to install this the first time. # You need it to create the device identity. az extension add --name azure-iot
az iot hub route create \
At this point, the resources are all set up and the message routing is configured. You can view the message routing configuration in the portal and set up the message enrichments for messages going to the **enriched** storage container.
-### Manually configure the message enrichments by using the Azure portal
+### Configure the message enrichments using the Azure portal
+
+1. In the [Azure portal](https://portal.azure.com), go to your IoT hub by selecting **Resource groups**. Then select the resource group set up for this tutorial (**ContosoResourcesMsgEn**). Find the IoT hub in the list, and select it.
-1. Go to your IoT hub by selecting **Resource groups**. Then select the resource group set up for this tutorial (**ContosoResourcesMsgEn**). Find the IoT hub in the list, and select it. Select **Message routing** for the IoT hub.
+2. Select **Message routing** for the IoT hub.
:::image type="content" source="./media/tutorial-message-enrichments/select-iot-hub.png" alt-text="Screenshot that shows how to select message routing." border="true":::
- The message routing pane has three tabs labeled **Routes**, **Custom endpoints**, and **Enrich messages**. Browse the first two tabs to see the configuration set up by the script. Use the third tab to add message enrichments. Let's enrich messages going to the endpoint for the storage container called **enriched**. Fill in the name and value, and then select the endpoint **ContosoStorageEndpointEnriched** from the drop-down list. Here's an example of how to set up an enrichment that adds the IoT hub name to the message:
+ The message routing pane has three tabs labeled **Routes**, **Custom endpoints**, and **Enrich messages**. Browse the first two tabs to see the configuration set up by the script.
+
+3. Select the **Enrich messages** tab to add three message enrichments for the messages going to the endpoint for the storage container called **enriched**.
+
+4. For each message enrichment, fill in the name and value, and then select the endpoint **ContosoStorageEndpointEnriched** from the drop-down list. Here's an example of how to set up an enrichment that adds the IoT hub name to the message:
![Add first enrichment](./media/tutorial-message-enrichments/add-message-enrichments.png)
-2. Add these values to the list for the ContosoStorageEndpointEnriched endpoint.
+ Add these values to the list for the ContosoStorageEndpointEnriched endpoint:
- | Key | Value | Endpoint (drop-down list) |
- | - | -- | -|
- | myIotHub | $iothubname | AzureStorageContainers > ContosoStorageEndpointEnriched |
- | DeviceLocation | $twin.tags.location (assumes that the device twin has a location tag) | AzureStorageContainers > ContosoStorageEndpointEnriched |
- |customerID | 6ce345b8-1e4a-411e-9398-d34587459a3a | AzureStorageContainers > ContosoStorageEndpointEnriched |
+ | Name | Value | Endpoint |
+ | - | -- | -- |
+ | myIotHub | `$iothubname` | ContosoStorageEndpointEnriched |
+ | DeviceLocation | `$twin.tags.location` (assumes that the device twin has a location tag) | ContosoStorageEndpointEnriched |
+   | customerID | `6ce345b8-1e4a-411e-9398-d34587459a3a` | ContosoStorageEndpointEnriched |
-3. When you're finished, your pane should look similar to this image:
+ When you're finished, your pane should look similar to this image:
![Table with all enrichments added](./media/tutorial-message-enrichments/all-message-enrichments.png)
-4. Select **Apply** to save the changes. Skip to the [Test message enrichments](#test-message-enrichments) section.
+5. Select **Apply** to save the changes.
+
+You now have message enrichments set up for all messages routed to the **enriched** endpoint. Skip to the [Test message enrichments](#test-message-enrichments) section to continue the tutorial.
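As an optional alternative to the portal steps above, the same enrichments can be added with the Azure CLI. This is a sketch only; it assumes the IoT hub name created by the script (ContosoTestHubMsgEn plus the random value) and uses the `az iot hub message-enrichment create` command:

```azurecli
# Add the three enrichments to the ContosoStorageEndpointEnriched endpoint.
# Replace {your hub name} with the IoT hub name created by the script.
az iot hub message-enrichment create --name {your hub name} --key myIotHub --value '$iothubname' --endpoints ContosoStorageEndpointEnriched
az iot hub message-enrichment create --name {your hub name} --key DeviceLocation --value '$twin.tags.location' --endpoints ContosoStorageEndpointEnriched
az iot hub message-enrichment create --name {your hub name} --key customerID --value '6ce345b8-1e4a-411e-9398-d34587459a3a' --endpoints ContosoStorageEndpointEnriched
```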
-## Create and configure by using a Resource Manager template
+## Create and configure resources using a Resource Manager template
You can use a Resource Manager template to create and configure the resources, message routing, and message enrichments.
-1. Sign in to the Azure portal. Select **+ Create a Resource** to bring up a search box. Enter *template deployment*, and search for it. In the results pane, select **Template deployment (deploy using custom template)**.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **+ Create a Resource** to bring up a search box. Enter *template deployment*, and search for it. In the results pane, select **Template deployment (deploy using custom template)**.
![Template deployment in the Azure portal](./media/tutorial-message-enrichments/template-select-deployment.png)
You can use a Resource Manager template to create and configure the resources, m
1. In the **Custom deployment** pane, select **Build your own template in the editor**.
-1. In the **Edit template** pane, select **Load file**. Windows Explorer appears. Locate the **template_messageenrichments.json** file in the unzipped repo file in **/iot-hub/Tutorials/Routing/SimulatedDevice/resources**.
+1. In the **Edit template** pane, select **Load file**. Windows Explorer appears. Locate the **template_messageenrichments.json** file in the unzipped repo file in the **/iot-hub/Tutorials/Routing/SimulatedDevice/resources** directory.
![Select template from local machine](./media/tutorial-message-enrichments/template-select.png)
You can use a Resource Manager template to create and configure the resources, m
| Name | Value | |--|--|
- | resourceGroup | ContosoResourcesMsgEn |
- | container name | original |
- | container name | enriched |
- | IoT device name | Contoso-Test-Device |
| IoT Hub name | ContosoTestHubMsgEn | | storage Account Name | contosostorage |
+ | container name 1 | original |
+ | container name 2 | enriched |
| endpoint Name 1 | ContosoStorageEndpointOriginal | | endpoint Name 2 | ContosoStorageEndpointEnriched| | route Name 1 | ContosoStorageRouteOriginal |
You can use a Resource Manager template to create and configure the resources, m
![Top half of Custom deployment pane](./media/tutorial-message-enrichments/template-deployment-top.png)
-1. Here's the bottom half of the **Custom deployment** pane. You can see the rest of the parameters and the terms and conditions.
+1. Here's the bottom half of the **Custom deployment** pane. You can see the rest of the parameters and the terms and conditions.
![Bottom half of Custom deployment pane](./media/tutorial-message-enrichments/template-deployment-bottom.png) 1. Select the check box to agree to the terms and conditions. Then select **Purchase** to continue with the template deployment.
-1. Wait for the template to be fully deployed. Select the bell icon at the top of the screen to check on the progress. When it's finished, continue to the [Test message enrichments](#test-message-enrichments) section.
+1. Wait for the template to be fully deployed. Select the bell icon at the top of the screen to check on the progress.
+
+### Register a device in the portal
+
+1. Once your resources are deployed, select the IoT hub in your resource group.
+1. Select **Devices** from the **Device management** section of the navigation menu.
+1. Select **Add Device** to register a new device in your hub.
+1. Provide a device ID. The sample application used later in this tutorial defaults to a device named `Contoso-Test-Device`, but you can use any ID. Select **Save**.
+1. Once the device is created in your hub, select its name from the list of devices. You may need to refresh the list.
+1. Copy the **Primary key** value and have it available to use in the testing section of this article.
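If you prefer the command line to the portal, the registration above can also be done with the Azure CLI. This sketch assumes the azure-iot extension and the hub deployed by the template; the second command returns the primary key you need later:

```azurecli
# Register the test device, then print its primary symmetric key.
az iot hub device-identity create --hub-name {your hub name} --device-id Contoso-Test-Device
az iot hub device-identity show --hub-name {your hub name} --device-id Contoso-Test-Device \
    --query authentication.symmetricKey.primaryKey --output tsv
```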
## Add location tag to the device twin
One of the message enrichments configured on your IoT hub specifies a key of Dev
Follow these steps to add a location tag to your device's twin with the portal.
-1. Go to your IoT hub by selecting **Resource groups**. Then select the resource group set up for this tutorial (**ContosoResourcesMsgEn**). Find the IoT hub in the list, and select it. Select **Devices** on the left-pane of the IoT hub, then select your device (**Contoso-Test-Device**).
+1. Navigate to your IoT hub in the Azure portal.
+
+1. Select **Devices** on the left-pane of the IoT hub, then select your device.
1. Select the **Device twin** tab at the top of the device page and add the following line just before the closing brace at the bottom of the device twin. Then select **Save**. ```json , "tags": {"location": "Plant 43"}- ``` :::image type="content" source="./media/tutorial-message-enrichments/add-location-tag-to-device-twin.png" alt-text="Screenshot of adding location tag to device twin in Azure portal":::
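As an alternative to editing the twin in the portal, you can set the same tag from the Azure CLI. This is a sketch that assumes the azure-iot extension and the device ID used earlier; the `--tags` argument takes inline JSON:

```azurecli
# Add a location tag to the device twin (azure-iot extension required).
az iot hub device-twin update --hub-name {your hub name} --device-id Contoso-Test-Device \
    --tags '{"location": "Plant 43"}'
```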
To learn more about how device twin paths are handled with message enrichments,
To view the message enrichments, select **Resource groups**. Then select the resource group you're using for this tutorial. Select the IoT hub from the list of resources, and go to **Messaging**. The message routing configuration and the configured enrichments appear.
-Now that the message enrichments are configured for the endpoint, run the Simulated Device application to send messages to the IoT hub. The hub was set up with settings that accomplish the following tasks:
+Now that the message enrichments are configured for the **enriched** endpoint, run the simulated device application to send messages to the IoT hub. The hub was set up with settings that accomplish the following tasks:
-* Messages routed to the storage endpoint ContosoStorageEndpointOriginal won't be enriched and will be stored in the storage container `original`.
+* Messages routed to the storage endpoint ContosoStorageEndpointOriginal won't be enriched and will be stored in the storage container **original**.
-* Messages routed to the storage endpoint ContosoStorageEndpointEnriched will be enriched and stored in the storage container `enriched`.
+* Messages routed to the storage endpoint ContosoStorageEndpointEnriched will be enriched and stored in the storage container **enriched**.
-The Simulated Device application is one of the applications in the unzipped download. The application sends messages for each of the different message routing methods in the [Routing tutorial](tutorial-routing.md), which includes Azure Storage.
+The simulated device application is one of the applications in the azure-iot-samples-csharp repository. The application sends messages with a randomized value for the property `level`. Only messages that have `storage` set as the message's level property will be routed to the two endpoints.
-Double-click the solution file **IoT_SimulatedDevice.sln** to open the code in Visual Studio, and then open **Program.cs**. Substitute the IoT hub name for the marker `{your hub name}`. The format of the IoT hub host name is **{your hub name}.azure-devices.net**. For this tutorial, the hub host name is ContosoTestHubMsgEn.azure-devices.net. Next, substitute the device key you saved earlier when you ran the script to create the resources for the marker `{your device key}`.
+1. Open the file **Program.cs** from the **SimulatedDevice** directory in your preferred code editor.
-If you don't have the device key, you can retrieve it from the portal. After you sign in, go to **Resource groups**, select your resource group, and then select your IoT hub. Look under **IoT Devices** for your test device, and select your device. Select the copy icon next to **Primary key** to copy it to the clipboard.
+1. Replace the placeholder text with your own resource information. Substitute the IoT hub name for the marker `{your hub name}`. The format of the IoT hub host name is **{your hub name}.azure-devices.net**. Next, substitute the device key you saved earlier when you ran the script to create the resources for the marker `{your device key}`.
+
+ If you don't have the device key, you can retrieve it from the portal. After you sign in, go to **Resource groups**, select your resource group, and then select your IoT hub. Look under **IoT Devices** for your test device, and select your device. Select the copy icon next to **Primary key** to copy it to the clipboard.
```csharp
- private readonly static string s_myDeviceId = "Contoso-Test-Device";
- private readonly static string s_iotHubUri = "ContosoTestHubMsgEn.azure-devices.net";
- // This is the primary key for the device. This is in the portal.
- // Find your IoT hub in the portal > IoT devices > select your device > copy the key.
- private readonly static string s_deviceKey = "{your device key}";
+ private readonly static string s_myDeviceId = "Contoso-Test-Device";
+ private readonly static string s_iotHubUri = "{your hub name}.azure-devices.net";
+ // This is the primary key for the device. This is in the portal.
+ // Find your IoT hub in the portal > IoT devices > select your device > copy the key.
+ private readonly static string s_deviceKey = "{your device key}";
``` ### Run and test
-Run the console application for a few minutes. The messages that are being sent are displayed on the console screen of the application.
+Run the console application for a few minutes.
-The app sends a new device-to-cloud message to the IoT hub every second. The message contains a JSON-serialized object with the device ID, temperature, humidity, and message level, which defaults to `normal`. It randomly assigns a level of `critical` or `storage`, which causes the message to be routed to the storage account or to the default endpoint. The messages sent to the **enriched** container in the storage account will be enriched.
+In a command line window, run the following commands from the **SimulatedDevice** directory:
-After several storage messages are sent, view the data.
+```console
+dotnet restore
+dotnet run
+```
-1. Select **Resource groups**. Find your resource group, **ContosoResourcesMsgEn**, and select it.
+The app sends a new device-to-cloud message to the IoT hub every second. The messages that are being sent are displayed on the console screen of the application. The message contains a JSON-serialized object with the device ID, temperature, humidity, and message level, which defaults to `normal`. The sample program randomly changes the message level to either `critical` or `storage`. Messages labeled for storage are routed to the storage account, and the rest go to the default endpoint. The messages sent to the **enriched** container in the storage account will be enriched.
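If you only want to exercise the routes and enrichments without running the C# app, you can also send a few test messages from the Azure CLI. This is a sketch, not part of the tutorial's sample; it assumes the azure-iot extension and sets the `level` application property to `storage` so the messages are routed to both storage endpoints:

```azurecli
# Send ten simulated device-to-cloud messages with the level property set to storage.
az iot device simulate --hub-name {your hub name} --device-id Contoso-Test-Device \
    --msg-count 10 --properties "level=storage" \
    --data '{"pointInfo": "This is a storage message."}'
```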
-2. Select your storage account, which is **contosostorage**. Then select **Storage Explorer (preview)** in the left pane.
+After several storage messages are sent, view the data.
- ![Select Storage Explorer](./media/tutorial-message-enrichments/select-storage-explorer.png)
+1. Select **Resource groups**. Find your resource group, **ContosoResourcesMsgEn**, and select it.
- Select **BLOB CONTAINERS** to see the two containers that can be used.
+2. Select your storage account, which begins with **contosostorage**. Then select **Storage browser (preview)** from the navigation menu. Select **Blob containers** to see the two containers that you created.
- ![See the containers in the storage account](./media/tutorial-message-enrichments/show-blob-containers.png)
+ :::image type="content" source="./media/tutorial-message-enrichments/show-blob-containers.png" alt-text="See the containers in the storage account.":::
-The messages in the container called **enriched** have the message enrichments included in the messages. The messages in the container called **original** have the raw messages with no enrichments. Drill down into one of the containers until you get to the bottom, and open the most recent message file. Then do the same for the other container to verify that there are no enrichments added to messages in that container.
+The messages in the container called **enriched** have the message enrichments included in the messages. The messages in the container called **original** have the raw messages with no enrichments. Drill down into one of the containers until you get to the bottom, and open the most recent message file. Then do the same for the other container to verify that one is enriched and the other isn't.
When you look at messages that have been enriched, you should see the `myIotHub` key with the hub name, along with the location and the customer ID, like this: ```json
-{"EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z","Properties":{"level":"storage","myIotHub":"contosotesthubmsgen3276","DeviceLocation":"Plant 43","customerID":"6ce345b8-1e4a-411e-9398-d34587459a3a"},"SystemProperties":{"connectionDeviceId":"Contoso-Test-Device","connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"636930642531278483","enqueuedTime":"2019-05-10T06:06:32.7220000Z"},"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"}
+{
+ "EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z",
+ "Properties":
+ {
+ "level":"storage",
+ "myIotHub":"contosotesthubmsgen3276",
+ "DeviceLocation":"Plant 43",
+ "customerID":"6ce345b8-1e4a-411e-9398-d34587459a3a"
+ },
+ "SystemProperties":
+ {
+ "connectionDeviceId":"Contoso-Test-Device",
+ "connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
+ "connectionDeviceGenerationId":"636930642531278483",
+ "enqueuedTime":"2019-05-10T06:06:32.7220000Z"
+ },"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"
+}
```
-Here's an unenriched message. Notice that "my IoT Hub," "devicelocation," and "customerID" don't show up here because these fields are added by the enrichments. This endpoint has no enrichments.
+Here's an unenriched message. Notice that `myIotHub`, `DeviceLocation`, and `customerID` don't show up here because these fields are added by the enrichments. This endpoint has no enrichments.
```json
-{"EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z","Properties":{"level":"storage"},"SystemProperties":{"connectionDeviceId":"Contoso-Test-Device","connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"636930642531278483","enqueuedTime":"2019-05-10T06:06:32.7220000Z"},"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"}
+{
+ "EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z",
+ "Properties":
+ {
+ "level":"storage"
+ },
+ "SystemProperties":
+ {
+ "connectionDeviceId":"Contoso-Test-Device",
+ "connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
+ "connectionDeviceGenerationId":"636930642531278483",
+ "enqueuedTime":"2019-05-10T06:06:32.7220000Z"
+ },"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"
+}
``` ## Clean up resources
az group delete --name $resourceGroup
## Next steps
-In this tutorial, you configured and tested adding message enrichments to IoT Hub messages by using the following steps:
-
-**Use IoT Hub message enrichments**
-
-> [!div class="checklist"]
-> * First method: Create resources and configure message routing by using the Azure CLI. Configure the message enrichments manually by using the [Azure portal](https://portal.azure.com).
-> * Second method: Create resources and configure message routing and message enrichments by using an Azure Resource Manager template.
-> * Run an app that simulates an IoT device sending messages to the hub.
-> * View the results, and verify that the message enrichments are working as expected.
+In this tutorial, you configured and tested message enrichments for IoT Hub messages as they are routed to an endpoint.
For more information about message enrichments, see [Overview of message enrichments](iot-hub-message-enrichments-overview.md).
-For more information about message routing, see these articles:
-
-> [!div class="nextstepaction"]
-> [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](iot-hub-devguide-messages-d2c.md)
+To learn more about IoT Hub, continue to the next tutorial.
> [!div class="nextstepaction"]
-> [Tutorial: IoT Hub routing](tutorial-routing.md)
+> [Tutorial: Set up and use metrics and logs with an IoT hub](tutorial-use-metrics-and-diags.md)
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
For full details, see [Assign Azure roles using Azure PowerShell](../../role-bas
1. Validate adding new secret without "Key Vault Secrets Officer" role on key vault level.
-Go to key vault Access control (IAM) tab and remove "Key Vault Secrets Officer" role assignment for this resource.
+ 1. Go to key vault Access control (IAM) tab and remove "Key Vault Secrets Officer" role assignment for this resource.
-![Remove assignment - key vault](../media/rbac/image-9.png)
+ ![Remove assignment - key vault](../media/rbac/image-9.png)
-Navigate to previously created secret. You can see all secret properties.
+ 1. Navigate to previously created secret. You can see all secret properties.
-![Secret view with access](../media/rbac/image-10.png)
+ ![Secret view with access](../media/rbac/image-10.png)
-Create new secret ( Secrets \> +Generate/Import) should show below error:
+    1. Creating a new secret (Secrets \> +Generate/Import) should show the following error:
- ![Create new secret](../media/rbac/image-11.png)
+ ![Create new secret](../media/rbac/image-11.png)
-2. Validate secret editing without "Key Vault Secret Officer" role on secret level.
+1. Validate secret editing without "Key Vault Secrets Officer" role on secret level.
-- Go to previously created secret Access Control (IAM) tab
+ 1. Go to previously created secret Access Control (IAM) tab
and remove "Key Vault Secrets Officer" role assignment for this resource. -- Navigate to previously created secret. You can see secret properties.
+ 1. Navigate to previously created secret. You can see secret properties.
- ![Secret view without access](../media/rbac/image-12.png)
+ ![Secret view without access](../media/rbac/image-12.png)
-3. Validate secrets read without reader role on key vault level.
+1. Validate secrets read without reader role on key vault level.
-- Go to key vault resource group Access control (IAM) tab and remove "Key Vault Reader" role assignment.
+ 1. Go to key vault resource group Access control (IAM) tab and remove "Key Vault Reader" role assignment.
-- Navigating to key vault's Secrets tab should show below error:
+    1. Navigating to the key vault's Secrets tab should show the following error:
- ![Secret tab - error](../media/rbac/image-13.png)
+ ![Secret tab - error](../media/rbac/image-13.png)
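The role assignment changes used in the validation steps above can also be scripted. The following is a minimal sketch using the Azure CLI; the subscription, resource group, vault, and user values are placeholders:

```azurecli
# Remove, then re-add, the Key Vault Secrets Officer assignment at the vault scope.
# Replace the placeholders with your own subscription, resource group, vault, and user.
scope="/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.KeyVault/vaults/{vault-name}"

az role assignment delete --assignee "user@example.com" --role "Key Vault Secrets Officer" --scope "$scope"
az role assignment create --assignee "user@example.com" --role "Key Vault Secrets Officer" --scope "$scope"
```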
### Creating custom roles
lighthouse Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/architecture.md
Title: Azure Lighthouse architecture description: Learn about the relationship between tenants in Azure Lighthouse, and the resources created in the customer's tenant that enable that relationship. Previously updated : 09/13/2021 Last updated : 06/09/2022
lighthouse Cloud Solution Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/cloud-solution-provider.md
Title: Cloud Solution Provider program considerations description: For CSP partners, Azure delegated resource management helps improve security and control by enabling granular permissions. Previously updated : 11/18/2021 Last updated : 06/09/2022
Azure Lighthouse helps improve security by limiting unnecessary access to your c
To further minimize the number of permanent assignments, you can [create eligible authorizations](../how-to/create-eligible-authorizations.md) (currently in public preview) to grant additional permissions to your users on a just-in-time basis.
-Onboarding a subscription that you created through the CSP program follows the steps described in [Onboard a subscription to Azure Lighthouse](../how-to/onboard-customer.md). Any user who has the Admin Agent role in your tenant can perform this onboarding.
+Onboarding a subscription that you created through the CSP program follows the steps described in [Onboard a subscription to Azure Lighthouse](../how-to/onboard-customer.md). Any user who has the Admin Agent role in the customer's tenant can perform this onboarding.
> [!TIP]
-> [Managed Service offers](managed-services-offers.md) with private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program. You can onboard these subscriptions to Azure Lighthouse by [using Azure Resource Manager templates](../how-to/onboard-customer.md).
+> [Managed Service offers](managed-services-offers.md) with private plans aren't supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program. Instead, you can onboard these subscriptions to Azure Lighthouse by [using Azure Resource Manager templates](../how-to/onboard-customer.md).
> [!NOTE] > The [**My customers** page in the Azure portal](../how-to/view-manage-customers.md) now includes a **Cloud Solution Provider (Preview)** section, which displays billing info and resources for CSP customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started). For more info, see [Get started with your Microsoft Partner Agreement billing account](../../cost-management-billing/understand/mpa-overview.md).
lighthouse Cross Tenant Management Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/cross-tenant-management-experience.md
Title: Cross-tenant management experiences description: Azure Lighthouse enables and enhances cross-tenant experiences in many Azure services. Previously updated : 12/01/2021 Last updated : 06/09/2022 # Cross-tenant management experiences
-As a service provider, you can use [Azure Lighthouse](../overview.md) to manage resources for multiple customers from within your own Azure Active Directory (Azure AD) tenant. Many tasks and services can be performed across managed tenants by using [Azure delegated resource management](../concepts/architecture.md).
+As a service provider, you can use [Azure Lighthouse](../overview.md) to manage your customers' Azure resources from within your own Azure Active Directory (Azure AD) tenant. Many common tasks and services can be performed across these managed tenants.
> [!TIP] > Azure Lighthouse can also be used [within an enterprise which has multiple Azure AD tenants of its own](enterprise.md) to simplify cross-tenant administration. ## Understanding tenants and delegation
-An Azure AD tenant is a representation of an organization. It's a dedicated instance of Azure AD that an organization receives when they create a relationship with Microsoft by signing up for Azure, Microsoft 365, or other services. Each Azure AD tenant is distinct and separate from other Azure AD tenants, and has its own tenant ID (a GUID). For more info, see [What is Azure Active Directory?](../../active-directory/fundamentals/active-directory-whatis.md)
+An Azure AD tenant is a representation of an organization. It's a dedicated instance of Azure AD that an organization receives when they create a relationship with Microsoft by signing up for Azure, Microsoft 365, or other services. Each Azure AD tenant is distinct and separate from other Azure AD tenants, and has its own tenant ID (a GUID). For more information, see [What is Azure Active Directory?](../../active-directory/fundamentals/active-directory-whatis.md)
-Typically, in order to manage Azure resources for a customer, service providers would have to sign in to the Azure portal using an account associated with that customer's tenant, requiring an administrator in the customer's tenant to create and manage user accounts for the service provider.
+Typically, in order to manage Azure resources for a customer, service providers would have to sign in to the Azure portal using an account associated with that customer's tenant. In this scenario, an administrator in the customer's tenant must create and manage user accounts for the service provider.
-With Azure Lighthouse, the onboarding process specifies users within the service provider's tenant who will be able to work on delegated subscriptions and resource groups in the customer's tenant. These users can then sign in to the Azure portal using their own credentials. Within the Azure portal, they can manage resources belonging to all customers to which they have access. This can be done by visiting the [My customers](../how-to/view-manage-customers.md) page in the Azure portal, or by working directly within the context of that customer's subscription, either in the Azure portal or via APIs.
+With Azure Lighthouse, the onboarding process specifies users in the service provider's tenant who will be able to work on delegated subscriptions and resource groups in the customer's tenant. These users can then sign in to the Azure portal, using their own credentials, and work on resources belonging to all of the customers to which they have access. Users in the managing tenant can see all of these customers by visiting the [My customers](../how-to/view-manage-customers.md) page in the Azure portal. They can also work on resources directly within the context of that customer's subscription, either in the Azure portal or via APIs.
-Azure Lighthouse allows greater flexibility to manage resources for multiple customers without having to sign in to different accounts in different tenants. For example, a service provider may have two customers with different responsibilities and access levels. Using Azure Lighthouse, authorized users can sign in to the service provider's tenant to access these resources.
+Azure Lighthouse provides flexibility to manage resources for multiple customers without having to sign in to different accounts in different tenants. For example, a service provider may have two customers with different responsibilities and access levels. Using Azure Lighthouse, authorized users can sign in to the service provider's tenant and access all of the delegated resources across these customers.
![Diagram showing customer resources managed through one service provider tenant.](../media/azure-delegated-resource-management-service-provider-tenant.jpg) ## APIs and management tool support
-You can perform management tasks on delegated resources either directly in the portal or by using APIs and management tools (such as Azure CLI and Azure PowerShell). All existing APIs can be used when working with delegated resources, as long as the functionality is supported for cross-tenant management and the user has the appropriate permissions.
+You can perform management tasks on delegated resources in the Azure portal, or you can use APIs and management tools such as Azure CLI and Azure PowerShell. All existing APIs can be used on delegated resources, as long as the functionality is supported for cross-tenant management and the user has the appropriate permissions.
The Azure PowerShell [Get-AzSubscription cmdlet](/powershell/module/Az.Accounts/Get-AzSubscription) will show the `TenantId` for the managing tenant by default. You can use the `HomeTenantId` and `ManagedByTenantIds` attributes for each subscription, allowing you to identify whether a returned subscription belongs to a managed tenant or to your managing tenant.
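A similar check can be done with the Azure CLI. This is a minimal sketch, assuming the `homeTenantId` and `managedByTenants` fields that `az account list` returns for each subscription:

```azurecli
# List subscriptions with their home tenant and any managing (Azure Lighthouse) tenants.
az account list --query "[].{name:name, homeTenantId:homeTenantId, managedByTenants:managedByTenants[].tenantId}" --output json
```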
Most tasks and services can be performed on delegated resources across managed t
- Manage hybrid servers at scale - [Azure Arc-enabled servers](../../azure-arc/servers/overview.md): - [Manage Windows Server or Linux machines outside Azure that are connected](../../azure-arc/servers/onboard-portal.md) to delegated subscriptions and/or resource groups in Azure - Manage connected machines using Azure constructs, such as Azure Policy and tagging
- - Ensure the same set of policies are applied across customers' hybrid environments
- - Use Microsoft Defender for Cloud to monitor compliance across customers' hybrid environments
+ - Ensure the same set of [policies are applied](../../azure-arc/servers/learn/tutorial-assign-policy-portal.md) across customers' hybrid environments
+ - Use Microsoft Defender for Cloud to [monitor compliance across customers' hybrid environments](../../defender-for-cloud/quickstart-onboard-machines.md?pivots=azure-arc)
- Manage hybrid Kubernetes clusters at scale - [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md): - [Manage Kubernetes clusters that are connected](../../azure-arc/kubernetes/quickstart-connect-cluster.md) to delegated subscriptions and/or resource groups in Azure
- - [Use GitOps](../../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md) for connected clusters
- - Enforce policies across connected clusters
+ - [Use GitOps](../../azure-arc/kubernetes/tutorial-use-gitops-flux2.md) for connected clusters
+ - [Enforce policies across connected clusters](../../governance/policy/concepts/policy-for-kubernetes.md#install-azure-policy-extension-for-azure-arc-enabled-kubernetes)
[Azure Automation](../../automation/index.yml):
Most tasks and services can be performed on delegated resources across managed t
[Azure Backup](../../backup/index.yml): - Back up and restore customer data [from on-premises workloads, Azure VMs, Azure file shares, and more](../..//backup/backup-overview.md#what-can-i-back-up)-- View data for all delegated customer resources in [Backup Center](../../backup/backup-center-overview.md)
+- View data for all delegated customer resources in [Backup center](../../backup/backup-center-overview.md)
- Use the [Backup Explorer](../../backup/monitor-azure-backup-with-backup-explorer.md) to help view operational information of backup items (including Azure resources not yet configured for backup) and monitoring information (jobs and alerts) for delegated subscriptions. The Backup Explorer is currently available only for Azure VM data.-- Use [Backup Reports](../../backup/configure-reports.md) across delegated subscriptions to track historical trends, analyze backup storage consumption, and audit backups and restores.
+- Use [Backup reports](../../backup/configure-reports.md) across delegated subscriptions to track historical trends, analyze backup storage consumption, and audit backups and restores.
[Azure Blueprints](../../governance/blueprints/index.yml):
Most tasks and services can be performed on delegated resources across managed t
- Manage hosted Kubernetes environments and deploy and manage containerized applications within customer tenants - Deploy and manage clusters in customer tenants-- Use Azure Monitor for containers to monitor performance across customer tenants
+- [Use Azure Monitor for containers](../../aks/monitor-aks.md) to monitor performance across customer tenants
[Azure Migrate](../../migrate/index.yml):
Most tasks and services can be performed on delegated resources across managed t
- Deploy and manage [Azure Virtual Network](../../virtual-network/index.yml) and virtual network interface cards (vNICs) within managed tenants - Deploy and configure [Azure Firewall](../../firewall/overview.md) to protect customers' Virtual Network resources-- Manage connectivity services such as [Azure Virtual WAN](../../virtual-wan/virtual-wan-about.md), [ExpressRoute](../../expressroute/expressroute-introduction.md), and [VPN Gateways](../../vpn-gateway/vpn-gateway-about-vpngateways.md)
+- Manage connectivity services such as [Azure Virtual WAN](../../virtual-wan/virtual-wan-about.md), [Azure ExpressRoute](../../expressroute/expressroute-introduction.md), and [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md)
- Use Azure Lighthouse to support key scenarios for the [Azure Networking MSP Program](../../networking/networking-partners-msp.md) [Azure Policy](../../governance/policy/index.yml):
With all scenarios, please be aware of the following current limitations:
- Onboard your customers to Azure Lighthouse, either by [using Azure Resource Manager templates](../how-to/onboard-customer.md) or by [publishing a private or public managed services offer to Azure Marketplace](../how-to/publish-managed-services-offers.md). - [View and manage customers](../how-to/view-manage-customers.md) by going to **My customers** in the Azure portal.
+- Learn more about [Azure Lighthouse architecture](architecture.md).
lighthouse Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/enterprise.md
Title: Azure Lighthouse in enterprise scenarios description: The capabilities of Azure Lighthouse can be used to simplify cross-tenant management within an enterprise which uses multiple Azure AD tenants. Previously updated : 02/18/2022 Last updated : 06/09/2022
lighthouse Isv Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/isv-scenarios.md
Title: Azure Lighthouse in ISV scenarios description: The capabilities of Azure Lighthouse can be used by ISVs for more flexibility with customer offerings. Previously updated : 09/08/2021 Last updated : 06/09/2022
lighthouse Managed Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/managed-applications.md
Title: Azure Lighthouse and Azure managed applications description: Understand how Azure Lighthouse and Azure managed applications can be used together. Previously updated : 09/08/2021 Last updated : 06/09/2022
lighthouse Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/managed-services-offers.md
Title: Managed Service offers in Azure Marketplace description: Offer your Azure Lighthouse management services to customers through Managed Services offers in Azure Marketplace. Previously updated : 02/02/2022 Last updated : 06/09/2022
lighthouse Recommended Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/recommended-security-practices.md
Title: Recommended security practices description: When using Azure Lighthouse, it's important to consider security and access control. Previously updated : 09/08/2021 Last updated : 06/09/2022
lighthouse Tenants Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/tenants-users-roles.md
Title: Tenants, users, and roles in Azure Lighthouse scenarios description: Understand how Azure Active Directory tenants, users, and roles can be used in Azure Lighthouse scenarios. Previously updated : 12/16/2021 Last updated : 06/09/2022
When defining an authorization, each user account must be assigned one of the [A
All [built-in roles](../../role-based-access-control/built-in-roles.md) are currently supported with Azure Lighthouse, with the following exceptions: - The [Owner](../../role-based-access-control/built-in-roles.md#owner) role is not supported.-- Any built-in roles with [DataActions](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported.
+- Any built-in roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported.
- The [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) built-in role is supported, but only for the limited purpose of [assigning roles to a managed identity in the customer tenant](../how-to/deploy-policy-remediation.md#create-a-user-who-can-assign-roles-to-a-managed-identity-in-the-customer-tenant). No other permissions typically granted by this role will apply. If you define a user with this role, you must also specify the built-in role(s) that this user can assign to managed identities. > [!NOTE]
load-testing How To Create Manage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test.md
If you've previously created a quick test, you can edit the test plan at any tim
### Split CSV input data across test engines
-By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. You don't have to make any modifications to the JMX test script.
+By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. If you have multiple CSV files, each file will be split evenly.
For example, if you have a large customer CSV input file, and the load test runs on 10 parallel test engines, then each instance will process 1/10th of the customers.
-If you have multiple CSV files, each file will be split evenly.
+Azure Load Testing doesn't preserve the header row when it splits a CSV file. For more information about how to configure your JMeter script and CSV file, see [Read data from a CSV file](./how-to-read-csv-data.md).
-To configure your load test:
-
-1. Go to the **Test plan** page for your load test.
-1. Select **Split CSV evenly between Test engines**.
-
- :::image type="content" source="media/how-to-create-manage-test/configure-test-split-csv.png" alt-text="Screenshot that shows the checkbox to enable splitting input C S V files when configuring a test in the Azure portal.":::
## Parameters
Configure the number of test engine instances, and Azure Load Testing automatica
## Test criteria
-You can specify test failure criteria based on a number of client metrics. When a load test surpasses the threshold for a metric, the load test has a **Failed** status. For more information, see [Configure test failure criteria](./how-to-define-test-criteria.md).
+You can specify test failure criteria based on client metrics. When a load test surpasses the threshold for a metric, the load test has a **Failed** status. For more information, see [Configure test failure criteria](./how-to-define-test-criteria.md).
You can use the following client metrics:
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
To edit your JMeter script by using the Apache JMeter GUI:
1. Select the **CSV Data Set Config** element in your test plan. 1. Update the **Filename** information and remove any file path reference.
+
+ 1. Optionally, enter the CSV field names in **Variable Names**, when you split the CSV file across test engines.
+
+ Azure Load Testing doesn't preserve the header row when splitting your CSV file. Provide the variable names in the **CSV Data Set Config** element instead of using a header row.
- :::image type="content" source="media/how-to-read-csv-data/update-csv-data-set-config.png" alt-text="Screenshot that shows the test runs to compare.":::
+ :::image type="content" source="media/how-to-read-csv-data/update-csv-data-set-config.png" alt-text="Screenshot that shows the JMeter UI to configure a C S V Data Set Config element.":::
- 1. Repeat the previous steps for every CSV Data Set Config element.
+ 1. Repeat the previous steps for every **CSV Data Set Config** element in the script.
- 1. Save the JMeter script.
+ 1. Save the JMeter script and add it to your [test plan](./how-to-create-manage-test.md#test-plan).
To edit your JMeter script by using Visual Studio Code or your editor of preference: 1. Open the JMX file in Visual Studio Code.
- 1. For each `CSVDataSet`, update the `filename` element and remove any file path reference.
+ 1. For each `CSVDataSet`:
+
+ 1. Update the `filename` element and remove any file path reference.
+
+ 1. Add the CSV field names as a comma-separated list in `variableNames`.
```xml <CSVDataSet guiclass="TestBeanGUI" testclass="CSVDataSet" testname="Search parameters" enabled="true">
To edit your JMeter script by using Visual Studio Code or your editor of prefere
</CSVDataSet> ```
- 1. Save the JMeter script.
+ 1. Save the JMeter script and add it to your [test plan](./how-to-create-manage-test.md#test-plan).
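If you'd rather script this edit than change the JMX file by hand, the following Python sketch shows one possible approach. It assumes the standard `stringProp` layout that the JMeter GUI writes for **CSV Data Set Config** elements; the file name `test-plan.jmx` and the variable names are placeholders for your own values.

```python
import os
import xml.etree.ElementTree as ET

JMX_PATH = "test-plan.jmx"            # placeholder: your JMeter script
VARIABLE_NAMES = "customer_id,email"  # placeholder: your CSV field names

tree = ET.parse(JMX_PATH)
for dataset in tree.getroot().iter("CSVDataSet"):
    for prop in dataset.findall("stringProp"):
        if prop.get("name") == "filename":
            # Keep only the file name; all files are copied to one folder per engine.
            prop.text = os.path.basename(prop.text or "")
        elif prop.get("name") == "variableNames":
            # The header row isn't preserved when the file is split, so name the fields here.
            prop.text = VARIABLE_NAMES

tree.write(JMX_PATH, encoding="utf-8", xml_declaration=True)
```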
## Add a CSV file to your load test When you reference an external file in your JMeter script, upload this file to your load test. When the load starts, Azure Load Testing copies all files to a single folder on each of the test engines instances.
+> [!IMPORTANT]
+> Azure Load Testing doesn't preserve the header row when splitting your CSV file. Before you add the CSV file to the load test, remove the header row from the file.
+ ::: zone pivot="experience-azp" To add a CSV file to your load test by using the Azure portal:
To add a CSV file to your load test:
## Split CSV input data across test engines
-By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. You don't have to make any modifications to the JMX test script.
+By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. If you have multiple CSV files, each file will be split evenly.
For example, if you have a large customer CSV input file, and the load test runs on 10 parallel test engines, then each instance will process 1/10th of the customers.
-If you have multiple CSV files, each file will be split evenly.
+> [!IMPORTANT]
+> Azure Load Testing doesn't preserve the header row when splitting your CSV file.
+> 1. [Configure your JMeter script](#configure-your-jmeter-script) to use variable names when reading the CSV file.
+> 1. Remove the header row from the CSV file before you add it to the load test.
To configure your load test to split input CSV files:
logic-apps Logic Apps Workflow Actions Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-workflow-actions-triggers.md
The Logic Apps engine checks access to the trigger you want to call, so make sur
| <*trigger-name*> | String | The name for the trigger in the nested logic app you want to call | | <*Azure-subscription-ID*> | String | The Azure subscription ID for the nested logic app | | <*Azure-resource-group*> | String | The Azure resource group name for the nested logic app |
-| <*nested-logic-app-name*> | String | The name for the logic app you want to call |
|||| *Optional*
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
---+++ Last updated 05/11/2022 #Customer intent: As an experienced Python developer, I need to securely access my data in my Azure storage solutions and use it to accomplish my machine learning tasks.
machine-learning How To Use Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-data.md
description: 'Learn to how work with data using the Python SDK v2 preview for Azure Machine Learning.' --++ Last updated 05/10/2022
returned_job.services["Studio"].endpoint
## Table
-An MLTable is primarily an abstraction over tabular data, but it can also be used for some advanced scenarios involving multiple paths. The following YAML describes an MLTable:
+An [MLTable](concept-data.md#mltable) is primarily an abstraction over tabular data, but it can also be used for some advanced scenarios involving multiple paths. The following YAML describes an MLTable:
```yaml paths:
tbl = mltable.load("./sample_data")
df = tbl.to_pandas_dataframe() ```
-For a full example of using an MLTable, see the [Working with MLTable notebook].
+For more information on the YAML file format, see [the MLTable file](how-to-create-register-data-assets.md#the-mltable-file).
+
+<!-- Commenting until notebook is published. For a full example of using an MLTable, see the [Working with MLTable notebook]. -->
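As a minimal, self-contained sketch of the snippet above (assuming the `mltable` package is installed and `./sample_data` contains the MLTable file):

```python
import mltable

# Load the MLTable definition from the folder that contains the MLTable file.
tbl = mltable.load("./sample_data")

# Materialize the table as a pandas DataFrame for local exploration.
df = tbl.to_pandas_dataframe()
print(df.head())
```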
## Consuming V1 dataset assets in V2
machine-learning How To Version Track Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-version-track-datasets.md
description: Learn how to version machine learning datasets and how versioning w
--++ Last updated 10/21/2021
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
To change permissions for a specific resource, follow these steps:
1. Select **Access Control (IAM)**. 1. Under **Grant access to this resource**, select **Add role assignment**.
- :::image type="content" source="media/managed-grafana-how-to-permissions-iam.png" alt-text="Screenshot of the Azure platform to add role assignment in App Insights.":::
+ :::image type="content" source="./media/permissions/permissions-iam.png" alt-text="Screenshot of the Azure platform to add role assignment in App Insights.":::
1. The portal lists various roles you can give to your Managed Grafana resource. Select a role. For instance, **Monitoring Reader**. Select this role. 1. Click **Next**.
- :::image type="content" source="media/managed-grafana-how-to-permissions-role.png" alt-text="Screenshot of the Azure platform and choose Monitor Reader.":::
+ :::image type="content" source="./media/permissions/permissions-role.png" alt-text="Screenshot of the Azure platform and choose Monitor Reader.":::
1. For **Assign access to**, select **Managed Identity**. 1. Click **Select members**.
- :::image type="content" source="media/managed-grafana-how-to-permissions-members.png" alt-text="Screenshot of the Azure platform selecting members.":::
+ :::image type="content" source="media/permissions/permissions-members.png" alt-text="Screenshot of the Azure platform selecting members.":::
1. Select the **Subscription** containing your Managed Grafana workspace 1. Select a **Managed identity** from the options in the dropdown list 1. Select your Managed Grafana workspace from the list. 1. Click **Select** to confirm
- :::image type="content" source="media/managed-grafana-how-to-permissions-identity.png" alt-text="Screenshot of the Azure platform selecting the workspace.":::
+ :::image type="content" source="media/permissions/permissions-managed-identities.png" alt-text="Screenshot of the Azure platform selecting the workspace.":::
1. Click **Next**, then **Review + assign** to confirm the application of the new permission
+For more information about how to use Managed Grafana with Azure Monitor, go to [Monitor your Azure services in Grafana](../azure-monitor/visualize/grafana-plugin.md).
+ ## Next steps > [!div class="nextstepaction"]
managed-instance-apache-cassandra Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/faq.md
Azure Managed Instance for Apache Cassandra is delivered by the Azure Cosmos DB
No, there's no architectural dependency between Azure Managed Instance for Apache Cassandra and the Azure Cosmos DB backend.
+### What versions of Apache Cassandra does the service support?
+
+The service currently supports Cassandra versions 3.11 and 4.0. By default, version 3.11 is deployed, as version 4.0 is currently in public preview. See our [Azure CLI Quickstart](create-cluster-cli.md) (step 5) for specifying the Cassandra version during cluster deployment.
+ ### Does Azure Managed Instance for Apache Cassandra have an SLA? Yes, the SLA is published [here](https://azure.microsoft.com/support/legal/sla/managed-instance-apache-cassandra/v1_0/).
-#### Can I deploy Azure Managed Instance for Apache Cassandra in any region?
+### Can I deploy Azure Managed Instance for Apache Cassandra in any region?
Currently the managed instance is available in a limited number of regions.
managed-instance-apache-cassandra Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/introduction.md
You can use this service to easily place managed instances of Apache Cassandra d
- **Simplified deployment:** After the hybrid connectivity is established, deployment of new data centers in Azure is easy through [simple commands](manage-resources-cli.md#create-datacenter). - **Metrics:** each datacenter node provisioned by the service emits metrics using [Metric Collector for Apache Cassandra](https://github.com/datastax/metric-collector-for-apache-cassandra). The metrics can be [visualized in Prometheus or Grafana](visualize-prometheus-grafana.md). The service is also integrated with [Azure Monitor for metrics and diagnostic logging](monitor-clusters.md).
+>[!NOTE]
+> The service currently supports Cassandra versions 3.11 and 4.0. By default, version 3.11 is deployed, as version 4.0 is currently in public preview. See our [Azure CLI Quickstart](create-cluster-cli.md) (step 5) for specifying the Cassandra version during cluster deployment.
+ ### Simplified scaling In the managed instance, scaling up and scaling down nodes in a datacenter is fully managed. You select the number of nodes you need, and with a [simple command](manage-resources-cli.md#update-datacenter), the scaling orchestrator takes care of establishing their operation within the Cassandra ring.
managed-instance-apache-cassandra Management Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/management-operations.md
Azure Managed Instance for Apache Cassandra provides automated deployment and sc
## Compaction
-* The system currently does not perform a major compaction.
+* The system currently doesn't perform a major compaction.
* Repair (see [Maintenance](#maintenance)) performs a Merkle tree compaction, which is a special kind of compaction. * Depending on the compaction strategy on the keyspace, Cassandra automatically compacts when the keyspace reaches a specific size. We recommend that you carefully select a compaction strategy for your workload, and don't do any manual compactions outside the strategy.
Azure Managed Instance for Apache Cassandra provides automated deployment and sc
* Apache Cassandra software-level patches are done when security vulnerabilities are identified. The patching cadence may vary.
-* During patching, machines are rebooted one rack at a time. You should not experience any degradation at the application side as long as **quorum ALL setting is not being used**, and the replication factor is **3 or higher**.
+* During patching, machines are rebooted one rack at a time. You shouldn't experience any degradation at the application side as long as **quorum ALL setting is not being used**, and the replication factor is **3 or higher**.
-* The version in Apache Cassandra is in the format `X.Y.Z`. You can control the deployment of major (X) and minor (Y) versions manually via service tools. Whereas the Cassandra patches (Z) that may be required for that major/minor version combination are done automatically.
+* The version in Apache Cassandra is in the format `X.Y.Z`. You can control the deployment of major (X) and minor (Y) versions manually via service tools, whereas the Cassandra patches (Z) that may be required for that major/minor version combination are applied automatically.
+
+>[!NOTE]
+> The service currently supports Cassandra versions 3.11 and 4.0. By default, version 3.11 is deployed, as version 4.0 is currently in public preview. See our [Azure CLI Quickstart](create-cluster-cli.md) (step 5) for specifying the Cassandra version during cluster deployment.
## Maintenance
Azure Managed Instance for Apache Cassandra provides an [SLA](https://azure.micr
## Backup and restore
-Snapshot backups are enabled by default and taken every 4 hours with [Medusa](https://github.com/thelastpickle/cassandra-medusa). Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There is no cost for backups. To restore from a backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+Snapshot backups are enabled by default and taken every 4 hours with [Medusa](https://github.com/thelastpickle/cassandra-medusa). Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There's no cost for backups. To restore from a backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
> [!WARNING] > Backups can be restored to the same VNet/subnet as your existing cluster, but they cannot be restored to the *same cluster*. Backups can only be restored to **new clusters**. Backups are intended for accidental deletion scenarios, and are not geo-redundant. They are therefore not recommended for use as a disaster recovery (DR) strategy in case of a total regional outage. To safeguard against region-wide outages, we recommend a multi-region deployment. Take a look at our [quickstart for multi-region deployments](create-multi-region-cluster.md).
For more information on security features, see our article [here](security.md).
## Hybrid support
-When a [hybrid](configure-hybrid-cluster.md) cluster is configured, automated reaper operations running in the service will benefit the whole cluster. This includes data centers that are not provisioned by the service. Outside this, it is your responsibility to maintain your on-premise or externally hosted data center.
+When a [hybrid](configure-hybrid-cluster.md) cluster is configured, automated reaper operations running in the service will benefit the whole cluster. This includes data centers that aren't provisioned by the service. Beyond this, it's your responsibility to maintain your on-premises or externally hosted data center.
## Next steps
mysql Tutorial Deploy Springboot On Aks Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-springboot-on-aks-vnet.md
az group delete --name rg-mysqlaksdemo
``` > [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#additional-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#other-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
## Next steps
mysql Tutorial Deploy Wordpress On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-wordpress-on-aks.md
Download the [latest WordPress](https://wordpress.org/download/) version. Create
```
-Rename ```wp-config-sample.php``` to ```wp-config.php``` and replace lines from beginingin of ```// ** MySQL settings - You can get this info from your web host ** //``` until the line ```define( 'DB_COLLATE', '' );``` with the code snippet below. The code below is reading the database host , username and password from the Kubernetes manifest file.
+Rename ```wp-config-sample.php``` to ```wp-config.php``` and replace the lines from the beginning of ```// ** MySQL settings - You can get this info from your web host ** //``` until the line ```define( 'DB_COLLATE', '' );``` with the code snippet below. The code below reads the database host, username, and password from the Kubernetes manifest file.
```php //Using environment variables for DB connection information
az group delete --name wordpress-project --yes --no-wait
``` > [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#additional-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#other-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
## Next steps
openshift Howto Create Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-service-principal.md
If you're using the Azure CLI, you'll need Azure CLI version 2.0.59 or later
## Create a resource group - Azure CLI
-Run the following Azure CLI command to create a resource group.
+Run the following Azure CLI command to create a resource group in which your Azure Red Hat OpenShift cluster will reside.
```azurecli-interactive AZ_RG=$(az group create -n test-aro-rg -l eastus2 --query name -o tsv)
The output is similar to the following example.
} ```
-> [!NOTE]
-> This service principal only allows a contributor over the resource group the Azure Red Hat OpenShift cluster is located in. If your VNet is in another resource group, you need to assign the service principal contributor role to that resource group as well.
+> [!IMPORTANT]
+> This service principal only allows a contributor over the resource group the Azure Red Hat OpenShift cluster is located in. If your VNet is in another resource group, you need to assign the service principal contributor role to that resource group as well. You also need to create your Azure Red Hat OpenShift cluster in the resource group you created above.
To grant permissions to an existing service principal with the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md#configure-access-policies-on-resources).
-## Use the service principal to deploy an Azure Red Hat OpenShift cluster - Azure CLI
-
-Using the service principal that you created when you created the Azure Red Hat OpenShift cluster, use the `az aro create` command to deploy the Azure Red Hat OpenShift cluster. Use the `--client-id` and `--client-secret` parameters to specify the appId and password from the output of the `az ad sp create-for-rbac` command, as shown in the following command.
-
-```azure-cli
-az aro create \
-
- --resource-group myResourceGroup \
-
- --name myAROCluster \
-
- --client-id <appID> \
-
- --client-secret <password>
-```
-
-> [!IMPORTANT]
-> If you're using an existing service principal with a customized secret, ensure the secret doesn't exceed 190 bytes.
- ::: zone-end ::: zone pivot="aro-azureportal" ## Create a service principal with the Azure portal
-The following sections explain how to use the Azure portal to create a service principal for your Azure Red Hat OpenShift cluster.
-
-## Prerequisite - Azure portal
-
-Create a service principal, as explained in [Use the portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). **Be sure to save the client ID and the appID.**
-
-## To use the service principal to deploy an Azure Red Hat OpenShift cluster - Azure portal
+This section explains how to use the Azure portal to create a service principal for your Azure Red Hat OpenShift cluster.
-To use the service principal you created to deploy a cluster, complete the following steps.
+To create a service principal, see [Use the portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). **Be sure to save the client ID (appId) and the client secret.**
-1. On the Create Azure Red Hat OpenShift **Basics** tab, create a resource group for your subscription, as shown in the following example.
- :::image type="content" source="./media/basics-openshift-sp.png" alt-text="Screenshot that shows how to use the Azure Red Hat service principal with Azure portal to create a cluster." lightbox="./media/basics-openshift-sp.png":::
-
-2. Select **Next: Authentication** to configure the service principal on the **Authentication** page of the **Azure Red Hat OpenShift** dialog.
-
- :::image type="content" source="./media/openshift-service-principal-portal.png" alt-text="Screenshot that shows how to use the Authentication tab with Azure portal to create a service principal." lightbox="./media/openshift-service-principal-portal.png":::
-
-In the **Service principal information** section:
--- **Service principal client ID** is your appId. -- **Service principal client secret** is the service principal's decrypted Secret value.-
-In the **Cluster pull secret** section:
--- **Pull secret** is your cluster's pull secret's decrypted value. If you don't have a pull secret, leave this field blank.-
-After completing this tab, select **Next: Networking** to continue deploying your cluster. Select **Review + Create** when you complete the remaining tabs.
-
-> [!NOTE]
-> This service principal only allows a contributor over the resource group the Azure Red Hat OpenShift cluster is located in. If your VNet is in another resource group, you need to assign the service principal contributor role to that resource group as well.
-
-## Grant permissions to the service principal - Azure portal
-
-To grant permissions to an existing service principal with the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md#configure-access-policies-on-resources).
::: zone-end
openshift Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-portal.md
Create a service principal, as explained in [Use the portal to create an Azure A
- **Service principal client ID** is your appId. - **Service principal client secret** is the service principal's decrypted Secret value.
+ If you need to create a service principal, see [Creating and using a service principal with an Azure Red Hat OpenShift cluster](howto-create-service-principal.md).
+
In the **Cluster pull secret** section: - **Pull secret** is your cluster's pull secret's decrypted value. If you don't have a pull secret, leave this field blank.
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/create.md
Title: Create Datadog - Azure partner solutions description: This article describes how to use the Azure portal to create an instance of Datadog. Previously updated : 05/28/2021 Last updated : 06/08/2022
Use the Azure portal to find Datadog.
1. If you've visited the **Marketplace** in a recent session, select the icon from the available options. Otherwise, search for _Marketplace_.
- :::image type="content" source="media/create/marketplace.png" alt-text="Marketplace icon.":::
+ :::image type="content" source="media/create/marketplace.png" alt-text="Screenshot of the Azure Marketplace icon.":::
1. In the Marketplace, search for **Datadog**.
-1. In the plan overview screen, select **Set up + subscribe**.
+1. In the plan overview screen, select **Subscribe**.
- :::image type="content" source="media/create/datadog-app-2.png" alt-text="Datadog application in Azure Marketplace.":::
+ :::image type="content" source="media/create/datadog-app-2.png" alt-text="Screenshot of the Datadog application in Azure Marketplace.":::
## Create a Datadog resource in Azure The portal displays a selection asking whether you would like to create a Datadog organization or link Azure subscription to an existing Datadog organization.
-If you are creating a new Datadog organization, select **Create** under the **Create a new Datadog organization**
+If you're creating a new Datadog organization, select **Create** under the **Create a new Datadog organization**
The portal displays a form for creating the Datadog resource. Provide the following values.
Use Azure resource tags to configure which metrics and logs are sent to Datadog.
Tag rules for sending **metrics** are: - By default, metrics are collected for all resources, except virtual machines, virtual machine scale sets, and app service plans.-- Virtual machines, virtual machine scale sets, and app service plans with *Include* tags send metrics to Datadog.-- Virtual machines, virtual machine scale sets, and app service plans with *Exclude* tags don't send metrics to Datadog.-- If there's a conflict between inclusion and exclusion rules, exclusion takes priority
+- Virtual machines, virtual machine scale sets, and app service plans with _Include_ tags send metrics to Datadog.
+- Virtual machines, virtual machine scale sets, and app service plans with _Exclude_ tags don't send metrics to Datadog.
+- If there's a conflict between inclusion and exclusion rules, exclusion takes priority.
Tag rules for sending **logs** are: - By default, logs are collected for all resources.-- Azure resources with *Include* tags send logs to Datadog.-- Azure resources with *Exclude* tags don't send logs to Datadog.
+- Azure resources with _Include_ tags send logs to Datadog.
+- Azure resources with _Exclude_ tags don't send logs to Datadog.
- If there's a conflict between inclusion and exclusion rules, exclusion takes priority.
-For example, the screenshot below shows a tag rule where only those virtual machines, virtual machine scale sets, and app service plans tagged as *Datadog = True* send metrics to Datadog.
+For example, the following screenshot shows a tag rule where only those virtual machines, virtual machine scale sets, and app service plans tagged as _Datadog = True_ send metrics to Datadog.
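As an illustration only (this isn't part of the Datadog integration), the following Python sketch models the metric tag-rule semantics described above: exclusion always wins, and virtual machines, virtual machine scale sets, and app service plans send metrics only when an _Include_ tag matches.

```python
# Illustrative model of the metric tag rules; resource-type names are hypothetical.
OPT_IN_TYPES = {"virtual_machine", "vm_scale_set", "app_service_plan"}

def sends_metrics(resource_type, tags, include_tags, exclude_tags):
    matches_include = any(tags.get(k) == v for k, v in include_tags.items())
    matches_exclude = any(tags.get(k) == v for k, v in exclude_tags.items())
    if matches_exclude:            # exclusion takes priority over inclusion
        return False
    if resource_type in OPT_IN_TYPES:
        return matches_include     # these types only send metrics when included
    return True                    # all other resources send metrics by default

# Matches the screenshot: only resources tagged Datadog = True send metrics.
print(sends_metrics("virtual_machine", {"Datadog": "True"}, {"Datadog": "True"}, {}))  # True
print(sends_metrics("virtual_machine", {}, {"Datadog": "True"}, {}))                   # False
```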
-There are two types of logs that can be emitted from Azure to Datadog.
+There are three types of logs that can be sent from Azure to Datadog.
1. **Subscription level logs** - Provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription. 1. **Azure resource logs** - Provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+1. **Azure Active Directory logs** - As an IT administrator, you want to monitor your IT environment. The information about your system's health enables you to assess potential issues and decide how to respond.
+
+The Azure Active Directory portal gives you access to three activity logs:
+
+- [Sign-in](../../active-directory/reports-monitoring/concept-sign-ins.md) – Information about sign-ins and how your resources are used by your users.
+- [Audit](../../active-directory/reports-monitoring/concept-audit-logs.md) – Information about changes applied to your tenant such as users and group management or updates applied to your tenant's resources.
+- [Provisioning](../../active-directory/reports-monitoring/concept-provisioning-logs.md) – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
+ To send subscription level logs to Datadog, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Datadog.
-To send Azure resource logs to Datadog, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). To filter the set of Azure resources sending logs to Datadog, use Azure resource tags.
+To send Azure resource logs to Datadog, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). To filter the set of Azure resources sending logs to Datadog, use Azure resource tags.
+
+You can request your IT Administrator to route Azure Active Directory Logs to Datadog. For more information, see [Azure AD activity logs in Azure Monitor](../../active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md).
The logs sent to Datadog will be charged by Azure. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
The Azure portal retrieves the appropriate Datadog application from Azure Active
Select the Datadog app name. Select **Next: Tags**.
You can specify custom tags for the new Datadog resource. Provide name and value pairs for the tags to apply to the Datadog resource. When you've finished adding tags, select **Next: Review+Create**.
Review your selections and the terms of use. After validation completes, select **Create**. Azure deploys the Datadog resource. When the process completes, select **Go to Resource** to see the Datadog resource. ## Next steps
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
Title: Create Dynatrace application - Azure partner solutions
+ Title: Create Dynatrace for Azure (preview) resource - Azure partner solutions
description: This article describes how to use the Azure portal to create an instance of Dynatrace.
Last updated 06/07/2022
# QuickStart: Get started with Dynatrace
-In this quickstart, you create a new instance of Dynatrace. You can either create a new Dynatrace environment or [link to an existing Dynatrace environment](dynatrace-link-to-existing.md#link-to-existing-dynatrace-environment).
+In this quickstart, you create a new instance of Dynatrace for Azure (preview). You can either create a new Dynatrace environment or [link to an existing Dynatrace environment](dynatrace-link-to-existing.md#link-to-existing-dynatrace-environment).
When you use the integrated Dynatrace experience in Azure portal, the following entities are created and mapped for monitoring and billing purposes. - **Dynatrace resource in Azure** - Using the Dynatrace resource, you can manage the Dynatrace environment in Azure. The resource is created in the Azure subscription and resource group that you select during the create or linking process. - **Dynatrace environment** - This is the Dynatrace environment on Dynatrace SaaS. When you choose to create a new environment, the environment on Dynatrace SaaS is automatically created, in addition to the Dynatrace resource in Azure. The resource is created in the Azure subscription and resource group that you selected when you created the environment or linked to an existing environment.
partner-solutions Dynatrace How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-configure-prereqs.md
Title: Configure pre-deployment to use Dynatrace with Azure.
+ Title: Configure pre-deployment to use Dynatrace with Azure (preview) - Azure partner solutions
description: This article describes how to complete the prerequisites for Dynatrace on the Azure portal.
This article describes the prerequisites that must be completed before you creat
## Access control
-To set up the Azure Dynatrace integration, you must have **Owner** or **Contributor** access on the Azure subscription. [Confirm that you have the appropriate access](/azure/role-based-access-control/check-access) before starting the setup.
+To set up Dynatrace for Azure (preview), you must have **Owner** or **Contributor** access on the Azure subscription. [Confirm that you have the appropriate access](/azure/role-based-access-control/check-access) before starting the setup.
## Add enterprise application
partner-solutions Dynatrace How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md
Title: Manage your Dynatrace for Azure integration
+ Title: Manage your Dynatrace for Azure (preview) integration - Azure partner solutions
description: This article describes how to manage Dynatrace on the Azure portal.
Last updated 06/07/2022
# Manage the Dynatrace integration with Azure
-This article describes how to manage the settings for your Azure integration with Dynatrace.
+This article describes how to manage the settings for your Dynatrace for Azure (preview) integration.
## Resource overview
partner-solutions Dynatrace Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md
Title: Linking to an existing Dynatrace for Azure resource
+ Title: Linking to an existing Dynatrace for Azure (preview) resource - Azure partner solutions
description: This article describes how to use the Azure portal to link to an instance of Dynatrace.
Last updated 06/07/2022
In this quickstart, you link an Azure subscription to an existing Dynatrace environment. After you link to the Dynatrace environment, you can monitor the linked Azure subscription and the resources in that subscription using the Dynatrace environment.
-When you use the integrated experience for Dynatrace in the Azure portal, your billing and monitoring for the following entities is tracked in the portal.
+When you use the integrated experience for Dynatrace for Azure (preview) in the Azure portal, your billing and monitoring for the following entities is tracked in the portal.
:::image type="content" source="media/dynatrace-link-to-existing/dynatrace-entities-linking.png" alt-text="Flowchart showing three entities: subscription 1 connected to subscription 1 and Dynatrace S A A S.":::
partner-solutions Dynatrace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md
Title: Dynatrace integration overview - Azure partner solutions
+ Title: Dynatrace for Azure (preview) overview - Azure partner solutions
description: Learn about using the Dynatrace Cloud-Native Observability Platform in the Azure Marketplace.
Last updated 06/07/2022
# What is Dynatrace integration with Azure?
-Dynatrace is a popular monitoring solution that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities in Azure.
+Dynatrace is a monitoring solution that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities in Azure.
-Dynatrace for Azure offering in the Azure Marketplace enables you to create and manage Dynatrace environments using the Azure portal with a seamlessly integrated experience. This enables you to use Dynatrace as a monitoring solution for your Azure workloads through a streamlined workflow, starting from procurement, all the way to configuration and management.
+The Dynatrace for Azure (preview) offering in the Azure Marketplace enables you to create and manage Dynatrace environments using the Azure portal with a seamlessly integrated experience. This enables you to use Dynatrace as a monitoring solution for your Azure workloads through a streamlined workflow, starting from procurement, all the way to configuration and management.
You can create and manage the Dynatrace resources using the Azure portal through a resource provider named `Dynatrace.Observability`. Dynatrace owns and runs the software as a service (SaaS) application including the Dynatrace environments created through this experience.
Dynatrace for Azure provides the following capabilities:
- **Manage Dynatrace OneAgent on VMs and App Services** - Provides a single experience to install and uninstall Dynatrace OneAgent on virtual machines and App Services.
-## Dynatrace Links
+## Dynatrace links
For more help using Dynatrace for Azure service, see the [Dynatrace](https://aka.ms/partners/Dynatrace/PartnerDocs) documentation.
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
Title: Troubleshooting Dynatrace - Azure partner solutions
-description: This article provides information about troubleshooting Dynatrace integration with Azure
+ Title: Troubleshooting Dynatrace for Azure (preview) - Azure partner solutions
+description: This article provides information about troubleshooting Dynatrace for Azure
Last updated 06/07/2022
# Troubleshoot Dynatrace for Azure
-This article describes how to contact support when working with a Dynatrace resource. Before contacting support, see [Fix common errors](#fix-common-errors).
+This article describes how to contact support when working with a Dynatrace for Azure (preview) resource. Before contacting support, see [Fix common errors](#fix-common-errors).
## Contact support
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
Partner solutions are available through the Marketplace.
| [Datadog](./datadog/overview.md) | Monitor your servers, clouds, metrics, and apps in one place. | | [Elastic](./elastic/overview.md) | Monitor the health and performance of your Azure environment. | | [Logz.io](./logzio/overview.md) | Monitor the health and performance of your Azure environment. |
-| [Dynatrace for Azure](./dynatrace/dynatrace-overview.md) | Use Dyntrace for Azure to create and manage Dynatrace environments using the Azure portal. |
+| [Dynatrace for Azure (preview)](./dynatrace/dynatrace-overview.md) | Use Dynatrace for Azure (preview) to monitor your Azure workloads using the Azure portal. |
| [NGINX for Azure (preview)](./nginx/nginx-overview.md) | Use NGINX for Azure (preview) as a reverse proxy within your Azure environment. |
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-aks-database.md
az group delete --name django-project --yes --no-wait
``` > [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#additional-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#other-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
## Next steps
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-audit.md
Last updated 08/03/2021
# Audit logging in Azure Database for PostgreSQL - Hyperscale (Citus) + > [!IMPORTANT] > The pgAudit extension in Hyperscale (Citus) is currently in preview. This > preview version is provided without a service level agreement, and it's not
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-backup.md
Last updated 04/14/2021
# Backup and restore in Azure Database for PostgreSQL - Hyperscale (Citus) + Azure Database for PostgreSQL – Hyperscale (Citus) automatically creates backups of each node and stores them in locally redundant storage. Backups can be used to restore your Hyperscale (Citus) server group to a specified time.
postgresql Concepts Colocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-colocation.md
Last updated 05/06/2019
# Table colocation in Azure Database for PostgreSQL – Hyperscale (Citus) + Colocation means storing related information together on the same nodes. Queries can go fast when all the necessary data is available without any network traffic. Colocating related data on different nodes allows queries to run efficiently in parallel on each node. ## Data colocation for hash-distributed tables
postgresql Concepts Columnar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-columnar.md
Last updated 08/03/2021
# Columnar table storage + Azure Database for PostgreSQL - Hyperscale (Citus) supports append-only columnar table storage for analytic and data warehousing workloads. When columns (rather than rows) are stored contiguously on disk, data becomes more
postgresql Concepts Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-connection-pool.md
Last updated 05/31/2022
# Azure Database for PostgreSQL – Hyperscale (Citus) connection pooling + Establishing new connections takes time. That works against most applications, which request many short-lived connections. We recommend using a connection pooler, both to reduce idle transactions and reuse existing connections. To
postgresql Concepts Distributed Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-distributed-data.md
Last updated 05/06/2019
# Distributed data in Azure Database for PostgreSQL – Hyperscale (Citus) + This article outlines the three table types in Azure Database for PostgreSQL – Hyperscale (Citus). It shows how distributed tables are stored as shards, and the way that shards are placed on nodes.
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-firewall-rules.md
Last updated 10/15/2021
# Public access in Azure Database for PostgreSQL - Hyperscale (Citus) + [!INCLUDE [azure-postgresql-hyperscale-access](../../../includes/azure-postgresql-hyperscale-access.md)] This page describes the public access option. For private access, see
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-high-availability.md
Last updated 01/12/2022
# High availability in Azure Database for PostgreSQL – Hyperscale (Citus) + High availability (HA) avoids database downtime by maintaining standby replicas of every node in a server group. If a node goes down, Hyperscale (Citus) switches incoming connections from the failed node to its standby. Failover happens
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-maintenance.md
Last updated 02/14/2022
# Scheduled maintenance in Azure Database for PostgreSQL – Hyperscale (Citus) + Azure Database for PostgreSQL - Hyperscale (Citus) does periodic maintenance to keep your managed database secure, stable, and up-to-date. During maintenance, all nodes in the server group get new features, updates, and patches.
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-monitoring.md
Last updated 12/06/2021
# Monitor and tune Azure Database for PostgreSQL - Hyperscale (Citus) + Monitoring data about your servers helps you troubleshoot and optimize for your workload. Hyperscale (Citus) provides various monitoring options to provide insight into the behavior of nodes in a server group.
postgresql Concepts Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-nodes.md
Last updated 07/28/2019
# Nodes and tables in Azure Database for PostgreSQL – Hyperscale (Citus) + ## Nodes The Hyperscale (Citus) hosting type allows Azure Database for PostgreSQL
postgresql Concepts Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-private-access.md
Last updated 10/15/2021
# Private access in Azure Database for PostgreSQL - Hyperscale (Citus) + This page describes the private access option. For public access, see [here](concepts-firewall-rules.md).
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-read-replicas.md
Last updated 02/03/2022
# Read replicas in Azure Database for PostgreSQL - Hyperscale (Citus) + The read replica feature allows you to replicate data from a Hyperscale (Citus) server group to a read-only server group. Replicas are updated **asynchronously** with PostgreSQL physical replication technology. You can
postgresql Concepts Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-security-overview.md
Last updated 01/14/2022
# Security in Azure Database for PostgreSQL – Hyperscale (Citus) + This page outlines the multiple layers of security available to protect the data in your Hyperscale (Citus) server group.
postgresql Concepts Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-server-group.md
Last updated 01/13/2022
# Hyperscale (Citus) server group + ## Nodes The Azure Database for PostgreSQL - Hyperscale (Citus) deployment option allows
postgresql Howto Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-alert-on-metric.md
Last updated 3/16/2020
# Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Hyperscale (Citus) + This article shows you how to set up Azure Database for PostgreSQL alerts using the Azure portal. You can receive an alert based on [monitoring metrics](concepts-monitoring.md) for your Azure services. We'll set up an alert to trigger when the value of a specified metric crosses a threshold. The alert triggers when the condition is first met, and continues to trigger afterwards.
postgresql Howto App Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-type.md
Last updated 07/17/2020
# Determining Application Type + Running efficient queries on a Hyperscale (Citus) server group requires that tables be properly distributed across servers. The recommended distribution varies by the type of application and its query patterns.
postgresql Howto Build Scalable Apps Classify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-classify.md
Last updated 04/28/2022
# Classify application workload + Here are common characteristics of the workloads that are the best fit for Hyperscale (Citus).
postgresql Howto Build Scalable Apps Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-concepts.md
Last updated 04/28/2022
# Fundamental concepts for scaling + Before we investigate the steps of building a new app, it's helpful to see a quick overview of the terms and concepts involved.
postgresql Howto Build Scalable Apps Model High Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-model-high-throughput.md
Last updated 04/28/2022
# Model high-throughput transactional apps + ## Common filter as shard key To pick the shard key for a high-throughput transactional application, follow
postgresql Howto Build Scalable Apps Model Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-model-multi-tenant.md
Last updated 04/28/2022
# Model multi-tenant SaaS apps + ## Tenant ID as the shard key The tenant ID is the column at the root of the workload, or the top of the
postgresql Howto Build Scalable Apps Model Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-model-real-time.md
Last updated 04/28/2022
# Model real-time analytics apps + ## Colocate large tables with shard key To pick the shard key for a real-time operational analytics application, follow
postgresql Howto Build Scalable Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-overview.md
Last updated 04/28/2022
# Build scalable apps + > [!NOTE] > This article is for you if: >
postgresql Howto Choose Distribution Column https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-choose-distribution-column.md
Last updated 02/28/2022
# Choose distribution columns in Azure Database for PostgreSQL – Hyperscale (Citus) + Choosing each table's distribution column is one of the most important modeling decisions you'll make. Azure Database for PostgreSQL – Hyperscale (Citus) stores rows in shards based on the value of the rows' distribution column.
postgresql Howto Compute Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-compute-quota.md
Last updated 12/10/2021
# Change compute quotas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal + Azure enforces a vCore quota per subscription per region. There are two independently adjustable limits: vCores for coordinator nodes, and vCores for worker nodes.
postgresql Howto Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-connect.md
Last updated 05/25/2022
# Connect to a server group + Choose your database client below to learn how to configure it to connect to Hyperscale (Citus). # [pgAdmin](#tab/pgadmin) + [pgAdmin](https://www.pgadmin.org/) is a popular and feature-rich open source administration and development platform for PostgreSQL.
administration and development platform for PostgreSQL.
# [psql](#tab/psql) + The [psql utility](https://www.postgresql.org/docs/current/app-psql.html) is a terminal-based front-end to PostgreSQL. It enables you to type in queries interactively, issue them to PostgreSQL, and see the query results.
postgresql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-create-users.md
Last updated 1/8/2019
# Create users in Azure Database for PostgreSQL - Hyperscale (Citus) + ## The server admin account The PostgreSQL engine uses
postgresql Howto High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-high-availability.md
Last updated 07/27/2020
# Configure Hyperscale (Citus) high availability + Azure Database for PostgreSQL - Hyperscale (Citus) provides high availability (HA) to avoid database downtime. With HA enabled, every node in a server group will get a standby. If the original node becomes unhealthy, its standby will be
postgresql Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-logging.md
Last updated 9/13/2021
# Logs in Azure Database for PostgreSQL - Hyperscale (Citus) + PostgreSQL database server logs are available for every node of a Hyperscale (Citus) server group. You can ship logs to a storage server, or to an analytics service. The logs can be used to identify, troubleshoot, and repair
postgresql Howto Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-maintenance.md
Last updated 04/07/2021
# Manage scheduled maintenance settings for Azure Database for PostgreSQL – Hyperscale (Citus) + You can specify maintenance options for each Hyperscale (Citus) server group in your Azure subscription. Options include the maintenance schedule and notification settings for upcoming and finished maintenance events.
postgresql Howto Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-manage-firewall-using-portal.md
Last updated 11/16/2021
# Manage public access for Azure Database for PostgreSQL - Hyperscale (Citus) + Server-level firewall rules can be used to manage [public access](concepts-firewall-rules.md) to a Hyperscale (Citus) coordinator node from a specified IP address (or range of IP addresses) in the
postgresql Howto Modify Distributed Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-modify-distributed-tables.md
Last updated 8/10/2020
# Distribute and modify tables + ## Distributing tables To create a distributed table, you need to first define the table schema. To do
postgresql Howto Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-monitoring.md
Last updated 10/05/2021
# How to view metrics in Azure Database for PostgreSQL - Hyperscale (Citus) + Resource metrics are available for every node of a Hyperscale (Citus) server group, and in aggregate across the nodes.
postgresql Howto Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-private-access.md
Last updated 01/14/2022
# Private access in Azure Database for PostgreSQL Hyperscale (Citus) + [Private access](concepts-private-access.md) allows resources in an Azure virtual network to connect securely and privately to nodes in a Hyperscale (Citus) server group. This how-to assumes you've already created a virtual
postgresql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-read-replicas-portal.md
Last updated 08/03/2021
# Create and manage read replicas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal + In this article, you learn how to create and manage read replicas in Hyperscale (Citus) from the Azure portal. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
postgresql Howto Restart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-restart.md
Last updated 05/06/2022
# Restart Azure Database for PostgreSQL - Hyperscale (Citus) + You can restart your Hyperscale (Citus) server group from the Azure portal. Restarting the server group applies to all nodes; you can't selectively restart individual nodes. The restart applies to all PostgreSQL server processes in the nodes. Any applications attempting to use the database will experience connectivity downtime while the restart happens.
postgresql Howto Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-restore-portal.md
Last updated 07/09/2021
# Point-in-time restore of a Hyperscale (Citus) server group + This article provides step-by-step procedures to perform [point-in-time recoveries](concepts-backup.md#restore) for a Hyperscale (Citus) server group using backups. You can restore either to the earliest backup or to
postgresql Howto Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-grow.md
Last updated 12/10/2021
# Scale a Hyperscale (Citus) server group + Azure Database for PostgreSQL - Hyperscale (Citus) provides self-service scaling to deal with increased load. The Azure portal makes it easy to add new worker nodes, and to increase the vCores of existing nodes. Adding nodes causes
postgresql Howto Scale Initial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-initial.md
Last updated 08/03/2021
# Pick initial size for Hyperscale (Citus) server group + The size of a server group, both number of nodes and their hardware capacity, is [easy to change](howto-scale-grow.md). However, you still need to choose an initial size for a new server group. Here are some tips for a
postgresql Howto Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-rebalance.md
Last updated 07/20/2021
# Rebalance shards in Hyperscale (Citus) server group + To take advantage of newly added nodes, rebalance distributed table [shards](concepts-distributed-data.md#shards). Rebalancing moves shards from existing nodes to the new ones. Hyperscale (Citus) offers zero-downtime rebalancing, meaning queries continue without interruption during
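Rebalancing is typically started with the Citus rebalancer function shown below, run on the coordinator; the exact entry point can vary by Citus version, so treat this as a sketch rather than the definitive command.

```sql
-- Move shards from existing worker nodes onto newly added ones
SELECT rebalance_table_shards();

-- Optionally check how shard placements are spread across primary workers afterwards
SELECT n.nodename, count(*) AS shard_count
  FROM pg_dist_placement p
  JOIN pg_dist_node n ON p.groupid = n.groupid
 WHERE n.noderole = 'primary'
 GROUP BY n.nodename;
```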
postgresql Howto Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-ssl-connection-security.md
Last updated 07/16/2020
# Configure TLS in Azure Database for PostgreSQL - Hyperscale (Citus) + The Hyperscale (Citus) coordinator node requires client applications to connect with Transport Layer Security (TLS). Enforcing TLS between the database server and client applications helps keep data confidential in transit. Extra verification settings described below also protect against "man-in-the-middle" attacks. ## Enforcing TLS connections
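To verify from a client session that TLS is actually in use, you can query the standard `pg_stat_ssl` view; this is a generic PostgreSQL check offered as a sketch, not guidance taken from the article.

```sql
-- The ssl column is true when the current connection is TLS-protected
SELECT ssl, version, cipher
  FROM pg_stat_ssl
 WHERE pid = pg_backend_pid();
```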
postgresql Howto Table Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-table-size.md
Last updated 12/06/2021
# Determine table and relation size + The usual way to find table sizes in PostgreSQL, `pg_total_relation_size`, drastically under-reports the size of distributed tables on Hyperscale (Citus). All this function does on a Hyperscale (Citus) server group is to reveal the size
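Citus ships size functions that account for shards on the worker nodes; a brief example is below, using a hypothetical table name.

```sql
-- Citus-aware size functions sum the sizes of a distributed table's shards
SELECT pg_size_pretty(citus_relation_size('github_events'));        -- main data fork only
SELECT pg_size_pretty(citus_table_size('github_events'));           -- plus free space map, etc.
SELECT pg_size_pretty(citus_total_relation_size('github_events'));  -- plus indexes and TOAST
```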
postgresql Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-troubleshoot-common-connection-issues.md
Last updated 12/17/2021
# Troubleshoot connection issues to Azure Database for PostgreSQL - Hyperscale (Citus) + Connection problems may be caused by several things, such as: * Firewall settings
postgresql Howto Troubleshoot Read Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-troubleshoot-read-only.md
Last updated 08/03/2021
# Troubleshoot read-only access to Azure Database for PostgreSQL - Hyperscale (Citus) + PostgreSQL can't run on a machine without some free disk space. To maintain access to PostgreSQL servers, it's necessary to prevent the disk space from running out.
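A quick way to check whether sessions are being forced into read-only mode is the standard PostgreSQL setting below; this is a generic check, shown here only as a sketch.

```sql
-- 'on' means new transactions default to read-only (typical when disk space is nearly exhausted)
SHOW default_transaction_read_only;
```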
postgresql Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-upgrade.md
Last updated 4/5/2021
# Upgrade Hyperscale (Citus) server group + These instructions describe how to upgrade to a new major version of PostgreSQL on all server group nodes.
postgresql Howto Useful Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-useful-diagnostic-queries.md
Last updated 8/23/2021
# Useful Diagnostic Queries + ## Finding which node contains data for a specific tenant In the multi-tenant use case, we can determine which worker node contains the
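A hedged sketch of that lookup, using the Citus helper `get_shard_id_for_distribution_column` together with the `pg_dist_placement` and `pg_dist_node` metadata; the table name and tenant value are hypothetical.

```sql
-- Which shard holds tenant 42 of the hypothetical distributed table github_events?
SELECT get_shard_id_for_distribution_column('github_events', 42) AS shardid;

-- Which primary worker node hosts that shard?
SELECT n.nodename, n.nodeport
  FROM pg_dist_placement p
  JOIN pg_dist_node n ON p.groupid = n.groupid
 WHERE p.shardid = get_shard_id_for_distribution_column('github_events', 42)
   AND n.noderole = 'primary';
```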
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/overview.md
Last updated 04/20/2022
# What is Hyperscale (Citus)? + ## The superpower of distributed tables Hyperscale (Citus) is PostgreSQL extended with the superpower of "distributed
postgresql Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/product-updates.md
Last updated 10/15/2021
# Product updates for PostgreSQL - Hyperscale (Citus) + ## Updates feed The Microsoft Azure website lists newly available features per product, plus
postgresql Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-connect-psql.md
Last updated 05/05/2022
# Connect to a Hyperscale (Citus) server group with psql + ## Prerequisites To follow this quickstart, you'll first need to:
postgresql Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-create-portal.md
Last updated 05/05/2022
# Create a Hyperscale (Citus) server group in the Azure portal + Azure Database for PostgreSQL - Hyperscale (Citus) is a managed service that allows you to run horizontally scalable PostgreSQL databases in the cloud.
Let's get started!
# [Direct link](#tab/direct) + Visit [Create Hyperscale (Citus) server group](https://portal.azure.com/#create/Microsoft.PostgreSQLServerGroup) in the Azure portal. # [Via portal search](#tab/portal-search) + 1. Visit the [Azure portal](https://portal.azure.com/) and search for **citus**. Select **Azure Database for PostgreSQL Hyperscale (Citus)**. ![search for citus](../media/quickstart-hyperscale-create-portal/portal-search.png)
postgresql Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-distribute-tables.md
Last updated 05/05/2022
# Model and load data + In this example, we'll use Hyperscale (Citus) to store and query events recorded from GitHub open source contributors.
postgresql Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-run-queries.md
Last updated 05/05/2022
# Run queries + ## Prerequisites To follow this quickstart, you'll first need to:
postgresql Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-extensions.md
Last updated 02/24/2022
# PostgreSQL extensions in Azure Database for PostgreSQL – Hyperscale (Citus) + PostgreSQL provides the ability to extend the functionality of your database by using extensions. Extensions allow for bundling multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions can function like built-in features. For more information on PostgreSQL extensions, see [Package related objects into an extension](https://www.postgresql.org/docs/current/static/extend-extensions.html). ## Use PostgreSQL extensions
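For example, loading an extension from the server's available list is a single SQL command; `hstore` is used here purely to illustrate the pattern.

```sql
-- See which extensions the server makes available, then load one into the current database
SELECT name, default_version FROM pg_available_extensions ORDER BY name;
CREATE EXTENSION IF NOT EXISTS hstore;
```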
postgresql Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-functions.md
Last updated 02/24/2022
# Functions in the Hyperscale (Citus) SQL API + This section contains reference information for the user-defined functions provided by Hyperscale (Citus). These functions help in providing distributed functionality to Hyperscale (Citus).
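Two commonly documented examples of such user-defined functions are `create_distributed_table` and `create_reference_table`; the sketch below assumes a hypothetical lookup table.

```sql
-- Reference tables are replicated in full to every worker node,
-- which keeps joins against small lookup tables local.
CREATE TABLE countries (code text PRIMARY KEY, name text);
SELECT create_reference_table('countries');
```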
postgresql Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-limits.md
Last updated 02/25/2022
# Azure Database for PostgreSQL – Hyperscale (Citus) limits and limitations + The following section describes capacity and functional limits in the Hyperscale (Citus) service.
postgresql Reference Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-metadata.md
Last updated 02/18/2022
# System tables and views + Hyperscale (Citus) creates and maintains special tables that contain information about distributed data in the server group. The coordinator node consults these tables when planning how to run queries across the worker nodes.
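Two of those metadata tables, `pg_dist_node` and `pg_dist_shard`, can be queried directly from the coordinator; a brief illustrative look:

```sql
-- Nodes the coordinator knows about
SELECT nodename, nodeport, noderole FROM pg_dist_node;

-- Shards of distributed tables and their hash ranges
SELECT logicalrelid, shardid, shardminvalue, shardmaxvalue
  FROM pg_dist_shard
 LIMIT 10;
```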
postgresql Reference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-overview.md
Last updated 02/24/2022
# The Hyperscale (Citus) SQL API + Azure Database for PostgreSQL - Hyperscale (Citus) includes features beyond standard PostgreSQL. Below is a categorized reference of functions and configuration options for:
postgresql Reference Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-parameters.md
Last updated 02/18/2022
# Server parameters + There are various server parameters that affect the behavior of Hyperscale (Citus), both from standard PostgreSQL, and specific to Hyperscale (Citus). These parameters can be set in the Azure portal for a Hyperscale (Citus) server
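Although parameters are changed through the Azure portal, their current values can be inspected from any SQL session; `citus.shard_count` is shown as one example of a Citus-specific setting.

```sql
-- Inspect one standard and one Citus-specific parameter for the current session
SHOW work_mem;
SHOW citus.shard_count;
```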
postgresql Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-versions.md
Last updated 10/01/2021
# Supported database versions in Azure Database for PostgreSQL ΓÇô Hyperscale (Citus) + ## PostgreSQL versions The version of PostgreSQL running in a Hyperscale (Citus) server group is
postgresql Resources Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-compute.md
Last updated 05/10/2022
# Azure Database for PostgreSQL – Hyperscale (Citus) compute and storage + You can select the compute and storage settings independently for worker nodes and the coordinator node in a Hyperscale (Citus) server
postgresql Resources Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-pricing.md
Last updated 02/23/2022
# Pricing for Azure Database for PostgreSQL – Hyperscale (Citus) + ## General pricing For the most up-to-date pricing information, see the service
postgresql Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-regions.md
Last updated 02/23/2022
# Regional availability for Azure Database for PostgreSQL – Hyperscale (Citus) + Hyperscale (Citus) server groups are available in the following Azure regions: * Americas:
postgresql Tutorial Design Database Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-design-database-multi-tenant.md
Last updated 05/14/2019
# Tutorial: design a multi-tenant database by using Azure Database for PostgreSQL – Hyperscale (Citus) + In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to: > [!div class="checklist"]
postgresql Tutorial Design Database Realtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-design-database-realtime.md
Last updated 05/14/2019
# Tutorial: Design a real-time analytics dashboard by using Azure Database for PostgreSQL – Hyperscale (Citus) + In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to: > [!div class="checklist"]