Updates from: 04/29/2022 01:05:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Recoverability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recoverability-overview.md
There are several Azure Monitor workbooks that can help you to monitor configura
- Directory role and group membership updates for service principals - Modified federation settings
-The [Cross-tenant access activity workbook ](../reports-monitoring/workbook-cross-tenant-access-activity.md)can help you monitor which applications in external tenants your users are accessing, and which applications I your tenant external users are accessing. Use this workbook to look for anomalous changes in either inbound or outbound application access across tenants.
+The [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) can help you monitor which applications in external tenants your users are accessing, and which applications in your tenant external users are accessing. Use this workbook to look for anomalous changes in either inbound or outbound application access across tenants.
## Operational security
active-directory Forcepoint Cloud Security Gateway Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/forcepoint-cloud-security-gateway-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Forcepoint Cloud Security Gateway - User Authentication'
+description: Learn how to configure single sign-on between Azure Active Directory and Forcepoint Cloud Security Gateway - User Authentication.
+Last updated: 04/19/2022
+# Tutorial: Azure AD SSO integration with Forcepoint Cloud Security Gateway - User Authentication
+
+In this tutorial, you'll learn how to integrate Forcepoint Cloud Security Gateway - User Authentication with Azure Active Directory (Azure AD). When you integrate Forcepoint Cloud Security Gateway - User Authentication with Azure AD, you can:
+
+* Control in Azure AD who has access to Forcepoint Cloud Security Gateway - User Authentication.
+* Enable your users to be automatically signed-in to Forcepoint Cloud Security Gateway - User Authentication with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Forcepoint Cloud Security Gateway - User Authentication single sign-on (SSO) enabled subscription.
+* A user with the Cloud Application Administrator or Application Administrator role; either role can add and manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Forcepoint Cloud Security Gateway - User Authentication supports **SP** initiated SSO.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Forcepoint Cloud Security Gateway - User Authentication from the gallery
+
+To configure the integration of Forcepoint Cloud Security Gateway - User Authentication into Azure AD, you need to add Forcepoint Cloud Security Gateway - User Authentication from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Forcepoint Cloud Security Gateway - User Authentication** in the search box.
+1. Select **Forcepoint Cloud Security Gateway - User Authentication** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Forcepoint Cloud Security Gateway - User Authentication
+
+Configure and test Azure AD SSO with Forcepoint Cloud Security Gateway - User Authentication using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Forcepoint Cloud Security Gateway - User Authentication.
+
+To configure and test Azure AD SSO with Forcepoint Cloud Security Gateway - User Authentication, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Forcepoint Cloud Security Gateway - User Authentication SSO](#configure-forcepoint-cloud-security-gatewayuser-authentication-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Forcepoint Cloud Security Gateway - User Authentication test user](#create-forcepoint-cloud-security-gatewayuser-authentication-test-user)** - to have a counterpart of B.Simon in Forcepoint Cloud Security Gateway - User Authentication that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Forcepoint Cloud Security Gateway - User Authentication** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://mailcontrol.com/sp_metadata.xml`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://proxy-login.blackspider.com/`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://mailcontrol.com`
+
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Forcepoint Cloud Security Gateway - User Authentication** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
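+
+Before uploading the certificate you downloaded in the steps above to Forcepoint, you can optionally sanity-check it locally with OpenSSL. This is a hedged sketch; the file name is an assumption based on where you saved the download.
+
+```console
+# Optional check: print the subject, issuer, and validity window of the
+# downloaded Base64 (PEM) certificate. The file name is a placeholder.
+openssl x509 -in downloaded-certificate.cer -noout -subject -issuer -dates
+```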
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
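+
+As a hedged alternative to the portal steps above, you can create the same test user with the Azure CLI; the display name, UPN, and password shown are placeholders.
+
+```azurecli
+# Create the B.Simon test user (values are examples; use your own domain and a strong password).
+az ad user create --display-name "B.Simon" \
+    --user-principal-name "B.Simon@contoso.com" \
+    --password "<your-strong-password>"
+```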
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Forcepoint Cloud Security Gateway - User Authentication.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Forcepoint Cloud Security Gateway - User Authentication**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Forcepoint Cloud Security Gateway - User Authentication SSO
+
+1. Log in to your Forcepoint Cloud Security Gateway - User Authentication company site as an administrator.
+
+1. Go to **Web** > **SETTINGS** and click **Single Sign-On**.
+
+1. In the **Single Sign-On** page, perform the following steps:
+
+ ![Screenshot that shows the Single Sign-On Configuration.](./media/forcepoint-cloud-security-gateway-tutorial/general.png "Configuration")
+
+ a. Select the **Use identity provider for single sign-on** checkbox.
+
+ b. Select **Identity provider** from the dropdown.
+
+ c. Open the downloaded **Certificate (Base64)** from the Azure portal and upload the file into the **File upload** textbox by clicking the **Browse** option.
+
+ d. Click **Save**.
+
+### Create Forcepoint Cloud Security Gateway - User Authentication test user
+
+In this section, you create a user called Britta Simon in Forcepoint Cloud Security Gateway - User Authentication. Work with the [Forcepoint Cloud Security Gateway - User Authentication support team](mailto:support@forcepoint.com) to add the users to the Forcepoint Cloud Security Gateway - User Authentication platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Forcepoint Cloud Security Gateway - User Authentication Sign-on URL, where you can initiate the login flow.
+
+* Go to the Forcepoint Cloud Security Gateway - User Authentication Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Forcepoint Cloud Security Gateway - User Authentication tile in My Apps, you will be redirected to the Forcepoint Cloud Security Gateway - User Authentication Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Forcepoint Cloud Security Gateway - User Authentication, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Goalquest Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/goalquest-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with GoalQuest'
+description: Learn how to configure single sign-on between Azure Active Directory and GoalQuest.
+Last updated: 04/13/2022
+# Tutorial: Azure AD SSO integration with GoalQuest
+
+In this tutorial, you'll learn how to integrate GoalQuest with Azure Active Directory (Azure AD). When you integrate GoalQuest with Azure AD, you can:
+
+* Control in Azure AD who has access to GoalQuest.
+* Enable your users to be automatically signed-in to GoalQuest with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* GoalQuest single sign-on (SSO) enabled subscription.
+* A user with the Cloud Application Administrator or Application Administrator role; either role can add and manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* GoalQuest supports **IDP** initiated SSO.
+
+## Add GoalQuest from the gallery
+
+To configure the integration of GoalQuest into Azure AD, you need to add GoalQuest from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **GoalQuest** in the search box.
+1. Select **GoalQuest** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for GoalQuest
+
+Configure and test Azure AD SSO with GoalQuest using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in GoalQuest.
+
+To configure and test Azure AD SSO with GoalQuest, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure GoalQuest SSO](#configure-goalquest-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create GoalQuest test user](#create-goalquest-test-user)** - to have a counterpart of B.Simon in GoalQuest that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **GoalQuest** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot that shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated in Azure. Save the configuration by clicking the **Save** button.
+
+1. The GoalQuest application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot that shows attributes configuration image.](common/default-attributes.png "Image")
+
+1. In addition to the above, the GoalQuest application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | - | |
+ | employee ID | user.employeeid |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
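+
+If you'd like to confirm the copied metadata URL is reachable before sending it to the GoalQuest support team, a quick hedged check is to fetch it; the tenant and app ID placeholders below are hypothetical, since the exact URL comes from your portal copy.
+
+```console
+# Fetch the federation metadata XML; substitute the URL you copied from the portal.
+curl -s "https://login.microsoftonline.com/<TENANT_ID>/federationmetadata/2007-06/federationmetadata.xml?appid=<APP_ID>" | head
+```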
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to GoalQuest.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **GoalQuest**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure GoalQuest SSO
+
+To configure single sign-on on the **GoalQuest** side, you need to send the **App Federation Metadata Url** to the [GoalQuest support team](mailto:goalquest@biworldwide.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create GoalQuest test user
+
+In this section, you create a user called Britta Simon in GoalQuest. Work with the [GoalQuest support team](mailto:goalquest@biworldwide.com) to add the users to the GoalQuest platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the GoalQuest instance for which you set up SSO.
+
+* You can use Microsoft My Apps. When you click the GoalQuest tile in My Apps, you should be automatically signed in to the GoalQuest instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure GoalQuest, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Phenom Txm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/phenom-txm-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Phenom TXM'
+description: Learn how to configure single sign-on between Azure Active Directory and Phenom TXM.
+Last updated: 04/19/2022
+# Tutorial: Azure AD SSO integration with Phenom TXM
+
+In this tutorial, you'll learn how to integrate Phenom TXM with Azure Active Directory (Azure AD). When you integrate Phenom TXM with Azure AD, you can:
+
+* Control in Azure AD who has access to Phenom TXM.
+* Enable your users to be automatically signed-in to Phenom TXM with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Phenom TXM single sign-on (SSO) enabled subscription.
+* A user with the Cloud Application Administrator or Application Administrator role; either role can add and manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Phenom TXM supports **SP** and **IDP** initiated SSO.
+
+## Add Phenom TXM from the gallery
+
+To configure the integration of Phenom TXM into Azure AD, you need to add Phenom TXM from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Phenom TXM** in the search box.
+1. Select **Phenom TXM** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Phenom TXM
+
+Configure and test Azure AD SSO with Phenom TXM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Phenom TXM.
+
+To configure and test Azure AD SSO with Phenom TXM, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Phenom TXM SSO](#configure-phenom-txm-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Phenom TXM test user](#create-phenom-txm-test-user)** - to have a counterpart of B.Simon in Phenom TXM that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Phenom TXM** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ |--|
+ | `https://<SUBDOMAIN>.phenompro.com/auth/realms/<ID>` |
+ | `https://<SUBDOMAIN>.phenom.com/auth/realms/<ID>` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | Reply URL |
+ |--|
+ | `https://<SUBDOMAIN>.phenompro.com/auth/<ID>` |
+ | `https://<SUBDOMAIN>.phenom.com/auth/<ID>` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | Sign-on URL |
+ |--|
+ | `https://<SUBDOMAIN>.phenompro.com` |
+ | `https://<SUBDOMAIN>.phenom.com` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Phenom TXM Client support team](mailto:support@phenompeople.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Phenom TXM.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Phenom TXM**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Phenom TXM SSO
+
+1. Log in to your Phenom TXM company site as an administrator.
+
+1. Go to the **Settings** tab > **Identity Provider**.
+
+1. In the **Identity Provider** section, perform the following steps:
+
+ ![Screenshot that shows the Configuration Settings.](./media/phenom-txm-tutorial/input.png "Configuration")
+
+ ![Screenshot that shows the Identity Provider Metadata.](./media/phenom-txm-tutorial/certificate.png "Metadata")
+
+ a. Enter a valid name in the **Display Name** textbox.
+
+ b. In the **Single SignOn URL** textbox, paste the **Login URL** value that you copied from the Azure portal.
+
+ c. In the **Meta data URL** textbox, paste the **App Federation Metadata Url** value that you copied from the Azure portal.
+
+ d. Click **Save Changes**.
+
+ e. Copy the **Entity ID** value and paste it into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ f. Copy the **Redirect URI (ACS URL)** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+### Create Phenom TXM test user
+
+1. In a different web browser window, log into your Phenom TXM website as an administrator.
+
+1. Go to the **Users** tab and click **Create Users** > **Create single new User**.
+
+1. In the **Create User** page, perform the following steps:
+
+ a. In the **User Information** section, enter a valid **First Name**, **Last Name** and **Work Email** in the textboxes and click **Continue**.
+
+ ![Screenshot that shows the User Information fields.](./media/phenom-txm-tutorial/name.png "User Information")
+
+ b. In the **Assign Tenants** section, **Select Tenants** and click **Continue**.
+
+ ![Screenshot that shows the Tenants Information fields.](./media/phenom-txm-tutorial/details.png "Tenants")
+
+ c. In the **Assign Roles** section, **Select roles** from the dropdown and click **Continue**.
+
+ ![Screenshot that shows the Roles Mapping for Users.](./media/phenom-txm-tutorial/role.png "Mapping")
+
+ d. In the **Summary** section, review your selections and click **Finish** to create a user.
+
+ ![Screenshot that shows the Phenom TXM Summary section.](./media/phenom-txm-tutorial/finish.png "Summary")
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Phenom TXM Sign-on URL, where you can initiate the login flow.
+
+* Go to the Phenom TXM Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Phenom TXM instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Phenom TXM tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Phenom TXM instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Phenom TXM, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
Previously updated: 04/26/2022 Last updated: 04/28/2022 # Customer intent: As a developer, I am looking for information on how to enable my users to control their own information
Adjust the API scopes used in your application
For the Request API, the new scope for your application or Postman is now:
-```3db474b9-6a0c-96ac-1fceb342124f/.default```
+```3db474b9-6a0c-4840-96ac-1fceb342124f/.default```
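+
+As a quick, hedged way to verify the new scope works for your account, you can request a token for the Request Service API resource with the Azure CLI (shown with the resource GUID from this article; your application may instead acquire tokens through its own MSAL flow):
+
+```azurecli
+# Request an access token for the Verifiable Credentials Request Service API.
+az account get-access-token --resource 3db474b9-6a0c-4840-96ac-1fceb342124f
+```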
### How do I reset the Azure AD Verifiable credentials service?
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
This article lists the latest features, improvements, and changes in the Azure A
## April
-Verifiable Credentials service Administrators must perform a small configuration change before **May 4, 2022** following [these steps](verifiable-credentials-faq.md?#updating-the-vc-service-configuration) to avoid service disruptions. On May 4, 2022 we'll roll out updates on our service that will result in errors on issuance and presentation on those tenants that haven't applied the changes.
-
+Starting next month, we are rolling out exciting changes to the subscription requirements for the Verifiable Credentials service. Administrators must perform a small configuration change before **May 4, 2022** to avoid service disruptions. Follow [these steps](verifiable-credentials-faq.md?#updating-the-vc-service-configuration) to apply the required configuration changes.
>[!IMPORTANT]
-> When the configuration on your tenant has not been updated, . [Service configuration instructions](verifiable-credentials-faq.md?#updating-the-vc-service-configuration).
+> If changes are not applied before **May 4, 2022**, you will experience errors on issuance and presentation for your application or service using the Azure AD Verifiable Credentials Service. [Update service configuration instructions](verifiable-credentials-faq.md?#updating-the-vc-service-configuration).
## March 2022 - Azure AD Verifiable Credentials customers can now change the [domain linked](how-to-dnsbind.md) to their DID easily from the Azure portal.
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
For AKS to automatically rotate non-CA certificates, the cluster must have [TLS
> [!Note] > If you have an existing cluster, you have to upgrade that cluster to enable Certificate Auto-Rotation.
-For any AKS clusters created or upgraded after March 2022 Azure Kubernetes Service will automatically rotate non-ca certificates on both the control plane and agent nodes within 80% of the client certificate valid time, before they expire with no downtime for the cluster.
+For any AKS clusters created or upgraded after March 2022, Azure Kubernetes Service will automatically rotate non-CA certificates on both the control plane and agent nodes within 80% of the client certificate valid time, before they expire with no downtime for the cluster.
-#### How to check whether current agent node pool is TLS Bootstrapping enabled?
-To verify if TLS Bootstrapping is enabled on your cluster browse to the following paths. On a Linux node: /var/lib/kubelet/bootstrap-kubeconfig, on a Windows node, it's c:\k\bootstrap-config.
+### How to check whether current agent node pool is TLS Bootstrapping enabled?
+
+To verify if TLS Bootstrapping is enabled on your cluster, browse to the following paths:
+
+* On a Linux node: */var/lib/kubelet/bootstrap-kubeconfig*
+* On a Windows node: *C:\k\bootstrap-config*
+
+To access agent nodes, see [Connect to Azure Kubernetes Service cluster nodes for maintenance or troubleshooting][aks-node-access] for more information.
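+
+For example, after connecting to a Linux node you can check for the file directly; this is a hedged sketch assuming a shell session on the node:
+
+```console
+# If this file exists and is non-empty, TLS Bootstrapping is enabled on the node.
+ls -l /var/lib/kubelet/bootstrap-kubeconfig
+```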
> [!Note]
-> The file path may change as k8s version evolves in the future.
+> The file path may change as Kubernetes version evolves in the future.
-> [!IMPORTANT]
->Once a region is configured either create a new cluster or upgrade 'az aks upgrade -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME' an existing cluster to set that cluster for auto-cert rotation.
+Once a region is configured, create a new cluster or upgrade an existing cluster with `az aks upgrade` to set that cluster for auto-certificate rotation. A control plane and node pool upgrade is needed to enable this feature.
+
+```azurecli
+az aks upgrade -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME
+```
### Limitation
-Auto cert rotation won't be enabled on non-rbac cluster.
+Auto certificate rotation won't be enabled on a non-RBAC cluster.
## Manually rotate your cluster certificates
az aks rotate-certs -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME
Verify that the old certificates are no longer valid by running a `kubectl` command. Since you have not updated the certificates used by `kubectl`, you will see an error. For example: ```console
-$ kubectl get no
+$ kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ca") ```
az aks get-credentials -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME --overwrite-exis
Verify the certificates have been updated by running a `kubectl` command, which will now succeed. For example: ```console
-kubectl get no
+kubectl get nodes
``` > [!NOTE]
This article showed you how to automatically rotate your cluster's certificates,
[az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update [aks-best-practices-security-upgrades]: operator-best-practices-cluster-security.md
+[aks-node-access]: ./node-access.md
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Microsoft provides guidance on additional actions you can take to secure your wo
* [New large-scale campaign targets Kubeflow](https://techcommunity.microsoft.com/t5/azure-security-center/new-large-scale-campaign-targets-kubeflow/ba-p/2425750) - June 8, 2021
+## How does the managed Control Plane communicate with my Nodes?
+
+AKS uses secure tunnel communication to allow the api-server and individual node kubelets to communicate, even on separate virtual networks. The tunnel is secured through TLS encryption. The current main tunnel used by AKS is [Konnectivity, previously known as apiserver-network-proxy](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/). Please ensure that all network rules follow the [Azure required network rules and FQDNs](limit-egress-traffic.md).
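+
+As a hedged way to see the tunnel component on your own cluster, you can look for the tunnel pods in the kube-system namespace (pod names vary by cluster version and tunnel type):
+
+```console
+# Lists konnectivity-agent (or tunnelfront/aks-link on older clusters) pods.
+kubectl get pods -n kube-system | grep -E 'konnectivity|tunnelfront|aks-link'
+```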
+ ## Why are two resource groups created with AKS? AKS builds upon a number of Azure infrastructure resources, including virtual machine scale sets, virtual networks, and managed disks. This enables you to leverage many of the core capabilities of the Azure platform within the managed Kubernetes environment provided by AKS. For example, most Azure virtual machine types can be used directly with AKS and Azure Reservations can be used to receive discounts on those resources automatically.
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
The required network rules and IP address dependencies are:
| **`*:9000`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:9000`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:9000`** <br/> *Or* <br/> **`APIServerPublicIP:9000`** `(only known after cluster creation)` | TCP | 9000 | For tunneled secure communication between the nodes and the control plane. This is not required for [private clusters][aks-private-clusters], or for clusters with the *konnectivity-agent* enabled. | | **`*:123`** or **`ntp.ubuntu.com:123`** (if using Azure Firewall network rules) | UDP | 123 | Required for Network Time Protocol (NTP) time synchronization on Linux nodes. This is not required for nodes provisioned after March 2021. | | **`CustomDNSIP:53`** `(if using custom DNS servers)` | UDP | 53 | If you're using custom DNS servers, you must ensure they're accessible by the cluster nodes. |
-| **`APIServerPublicIP:443`** `(if running pods/deployments that access the API Server)` | TCP | 443 | Required if running pods/deployments that access the API Server, those pods/deployments would use the API IP. This is not required for [private clusters][aks-private-clusters]. |
+| **`APIServerPublicIP:443`** `(if running pods/deployments that access the API Server)` | TCP | 443 | Required if running pods/deployments that access the API Server, those pods/deployments would use the API IP. This port is not required for [private clusters][aks-private-clusters]. |
### Azure Global required FQDN / application rules
The following FQDN / application rules are required:
| Destination FQDN | Port | Use | |-|--|-|
-| **`*.hcp.<location>.azmk8s.io`** | **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. This is not required for [private clusters][aks-private-clusters]. |
+| **`*.hcp.<location>.azmk8s.io`** | **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. This is required for clusters with *konnectivity-agent* enabled. Konnectivity also uses Application-Layer Protocol Negotiation (ALPN) to communicate between agent and server. Blocking or rewriting the ALPN extension will cause a failure. This is not required for [private clusters][aks-private-clusters]. |
| **`mcr.microsoft.com`** | **`HTTPS:443`** | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images/charts (for example, coreDNS, etc.). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. | | **`*.data.mcr.microsoft.com`** | **`HTTPS:443`** | Required for MCR storage backed by the Azure content delivery network (CDN). | | **`management.azure.com`** | **`HTTPS:443`** | Required for Kubernetes operations against the Azure API. |
If you want to restrict how pods communicate between themselves and East-West tr
[aks-upgrade]: upgrade-cluster.md [aks-support-policies]: support-policies.md [aks-faq]: faq.md
-[aks-private-clusters]: private-clusters.md
+[aks-private-clusters]: private-clusters.md
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshooting.md
There might be various reasons for the pod being stuck in that mode. You might l
For more information about how to troubleshoot pod problems, see [Debugging Pods](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods) in the Kubernetes documentation. ## I'm receiving `TCP timeouts` when using `kubectl` or other third-party tools connecting to the API server
-AKS has HA control planes that scale vertically according to the number of cores to ensure its Service Level Objectives (SLOs) and Service Level Agreements (SLAs). If you're experiencing connections timing out, check the below:
+AKS has HA control planes that scale vertically and horizontally according to the number of cores to ensure its Service Level Objectives (SLOs) and Service Level Agreements (SLAs). If you're experiencing connections timing out, check the below:
-- **Are all your API commands timing out consistently or only a few?** If it's only a few, your `tunnelfront` pod or `aks-link` pod, responsible for node -> control plane communication, might not be in a running state. Make sure the nodes hosting this pod aren't over-utilized or under stress. Consider moving them to their own [`system` node pool](use-system-pools.md).-- **Have you opened all required ports, FQDNs, and IPs noted on the [AKS restrict egress traffic docs](limit-egress-traffic.md)?** Otherwise several commands calls can fail.
+- **Are all your API commands timing out consistently or only a few?** If it's only a few, your `konnectivity-agent` pod, `tunnelfront` pod or `aks-link` pod, responsible for node -> control plane communication, might not be in a running state. Make sure the nodes hosting this pod aren't over-utilized or under stress. Consider moving them to their own [`system` node pool](use-system-pools.md).
+- **Have you opened all required ports, FQDNs, and IPs noted on the [AKS restrict egress traffic docs](limit-egress-traffic.md)?** Otherwise several commands calls can fail. The AKS secure, tunneled communication between api-server and kubelet (through the *konnectivity-agent*) will require some of these to work.
+- **Have you blocked the Application-Layer Protocol Negotiation TLS extension?** *konnectivity-agent* requires this extension to establish a connection between the control plane and nodes.
- **Is your current IP covered by [API IP Authorized Ranges](api-server-authorized-ip-ranges.md)?** If you're using this feature and your IP is not included in the ranges, your calls will be blocked. - **Do you have a client or application leaking calls to the API server?** Make sure to use watches instead of frequent get calls and that your third-party applications aren't leaking such calls. For example, a bug in the Istio mixer causes a new API Server watch connection to be created every time a secret is read internally. Because this behavior happens at a regular interval, watch connections quickly accumulate, and eventually cause the API Server to become overloaded no matter the scaling pattern. https://github.com/istio/istio/issues/19481 - **Do you have many releases in your helm deployments?** This scenario can cause tiller to use too much memory on the nodes, as well as create a large number of `configmaps`, which can cause unnecessary spikes on the API server. Consider configuring `--history-max` at `helm init` and leveraging the new Helm 3. More details on the following issues:
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
To use Application Insights, [create an instance of the Application Insights ser
:::image type="content" source="media/api-management-howto-app-insights/apim-app-insights-logger-1.png" alt-text="Screenshot that shows where to add a new connection"::: 1. Select the **Application Insights** instance you created earlier and provide a short description. 1. To enable [availability monitoring](../azure-monitor/app/monitor-web-app-availability.md) of your API Management instance in Application Insights, select the **Add availability monitor** checkbox.
- * This setting regularly validates whether the API Management service endpoint is responding.
+ * This setting regularly validates whether the API Management gateway endpoint is responding.
* Results appear in the **Availability** pane of the Application Insights instance. 1. Select **Create**. 1. Check that the new Application Insights logger with an instrumentation key now appears in the list.
api-management Api Management Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-kubernetes.md
Pros:
* No change on the AKS side if Services are already exposed publicly and authentication logic already exists in microservices Cons:
-* Potential security risk due to public visibility of Service endpoints
+* Potential security risk due to public visibility of endpoints
* No single-entry point for inbound cluster traffic * Complicates microservices with duplicate authentication logic
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
# Connect to a virtual network in internal mode using Azure API Management With Azure virtual networks (VNets), Azure API Management can manage internet-inaccessible APIs using several VPN technologies to make the connection. For VNet connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
-This article explains how to set up VNet connectivity for your API Management instance in the *internal* mode. In this mode, you can only access the following service endpoints within a VNet whose access you control.
+This article explains how to set up VNet connectivity for your API Management instance in the *internal* mode. In this mode, you can only access the following API Management endpoints within a VNet whose access you control.
* The API gateway * The developer portal * Direct management * Git > [!NOTE]
-> None of the service endpoints are registered on the public DNS. The service endpoints remain inaccessible until you [configure DNS](#dns-configuration) for the VNet.
+> None of the API Management endpoints are registered on the public DNS. The endpoints remain inaccessible until you [configure DNS](#dns-configuration) for the VNet.
Use API Management in internal mode to:
:::image type="content" source="media/api-management-using-with-internal-vnet/api-management-vnet-internal.png" alt-text="Connect to internal VNet":::
-For configurations specific to the *external* mode, where the service endpoints are accessible from the public internet, and backend services are located in the network, see [Connect to a virtual network using Azure API Management](api-management-using-with-vnet.md).
+For configurations specific to the *external* mode, where the API Management endpoints are accessible from the public internet, and backend services are located in the network, see [Connect to a virtual network using Azure API Management](api-management-using-with-vnet.md).
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
After successful deployment, you should see your API Management service's **priv
## DNS configuration
-In internal VNet mode, you have to manage your own DNS to enable inbound access to your API Management service endpoints.
+In internal VNet mode, you have to manage your own DNS to enable inbound access to your API Management endpoints.
We recommend:
Learn how to [set up a private zone in Azure DNS](../dns/private-dns-getstarted-
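For example, a minimal Azure CLI sketch of that private-zone recommendation might look like the following; the resource group, VNet name, and the 10.1.0.5 VIP are assumptions taken from this article's examples:

```azurecli
# Create a private DNS zone for the default API Management domain and link it to the VNet.
az network private-dns zone create -g MyResourceGroup -n azure-api.net
az network private-dns link vnet create -g MyResourceGroup -n MyDnsLink \
    -z azure-api.net -v MyVNet -e false
# Point the service's default hostname at its private VIP.
az network private-dns record-set a add-record -g MyResourceGroup \
    -z azure-api.net -n contosointernalvnet -a 10.1.0.5
```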
> [!NOTE]
-> The API Management service does not listen to requests on its IP addresses. It only responds to requests to the host name configured on its service endpoints. These endpoints include:
+> The API Management service does not listen to requests on its IP addresses. It only responds to requests to the host name configured on its endpoints. These endpoints include:
> * API gateway > * The Azure portal > * The developer portal
> * Git ### Access on default host names
-When you create an API Management service (`contosointernalvnet`, for example), the following service endpoints are configured by default:
+When you create an API Management service (`contosointernalvnet`, for example), the following endpoints are configured by default:
| Endpoint | Endpoint configuration | | -- | -- |
When you create an API Management service (`contosointernalvnet`, for example),
| Direct management endpoint | `contosointernalvnet.management.azure-api.net` | | Git | `contosointernalvnet.scm.azure-api.net` |
-To access these API Management service endpoints, you can create a virtual machine in a subnet connected to the VNet in which API Management is deployed. Assuming the [private virtual IP address](#routing) for your service is 10.1.0.5, you can map the hosts file as follows. The hosts mapping file is at `%SystemDrive%\drivers\etc\hosts` (Windows) or `/etc/hosts` (Linux, macOS).
+To access these API Management endpoints, you can create a virtual machine in a subnet connected to the VNet in which API Management is deployed. Assuming the [private virtual IP address](#routing) for your service is 10.1.0.5, you can map the hosts file as follows. The hosts mapping file is at `%SystemRoot%\System32\drivers\etc\hosts` (Windows) or `/etc/hosts` (Linux, macOS).
| Internal virtual IP address | Endpoint configuration | | -- | -- |
To access these API Management service endpoints, you can create a virtual machi
| 10.1.0.5 | `contosointernalvnet.management.azure-api.net` | | 10.1.0.5 | `contosointernalvnet.scm.azure-api.net` |
-You can then access all the service endpoints from the virtual machine you created.
+You can then access all the API Management endpoints from the virtual machine you created.
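+
+For instance, a hedged sketch of the resulting hosts-file entries, assuming the 10.1.0.5 private VIP used in this article; the portal and developer-portal hostnames are assumptions based on API Management's default endpoint patterns:
+
+```
+# Hosts-file entries mapping the default endpoints to the private VIP.
+10.1.0.5 contosointernalvnet.azure-api.net
+10.1.0.5 contosointernalvnet.portal.azure-api.net
+10.1.0.5 contosointernalvnet.developer.azure-api.net
+10.1.0.5 contosointernalvnet.management.azure-api.net
+10.1.0.5 contosointernalvnet.scm.azure-api.net
+```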
### Access on custom domain names If you don't want to access the API Management service with the default host names:
-1. Set up [custom domain names](configure-custom-domain.md) for all your service endpoints, as shown in the following image:
+1. Set up [custom domain names](configure-custom-domain.md) for all your endpoints, as shown in the following image:
:::image type="content" source="media/api-management-using-with-internal-vnet/api-management-custom-domain-name.png" alt-text="Set up custom domain name":::
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-vnet.md
The API Management service depends on several Azure services. When API Managemen
## Routing
-+ A load-balanced public IP address (VIP) is reserved to provide access to all service endpoints and resources outside the VNet.
++ A load-balanced public IP address (VIP) is reserved to provide access to the API Management endpoints and resources outside the VNet. + The public VIP can be found on the **Overview/Essentials** blade in the Azure portal. For more information and considerations, see [IP addresses of Azure API Management](api-management-howto-ip-addresses.md#ip-addresses-of-api-management-service-in-vnet).
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
When you create an Azure API Management service instance in the Azure cloud, Azu
## Endpoints for custom domains
-There are several API Management service endpoints to which you can assign a custom domain name. Currently, the following endpoints are available:
+There are several API Management endpoints to which you can assign a custom domain name. Currently, the following endpoints are available:
| Endpoint | Default | | -- | -- |
application-gateway Application Gateway Configure Listener Specific Ssl Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-listener-specific-ssl-policy.md
Before you proceed, here are some important points related to listener-specific
- We recommend using TLS 1.2 as this version will be mandated in the future. - You don't have to configure client authentication on an SSL profile to associate it to a listener. You can have only client authentication or listener-specific SSL policy configured, or both configured in your SSL profile.-- Using a new Predefined or Customv2 policy enhances SSL security and performance for the entire gateway (SSL Policy and SSL Profile). Therefore, you cannot have different listeners on both old as well as new SSL (predefined or custom) policies. Consider this example,
+- Using a new Predefined or Customv2 policy enhances SSL security and performance for the entire gateway (SSL Policy and SSL Profile). Therefore, you cannot have different listeners on both old as well as new SSL (predefined or custom) policies.
- You are currently using SSL Policy and SSL Profile with &#34;older&#34; policies/ciphers. Selecting a &#34;new&#34; Predefined or Customv2 policy for any one of them will automatically apply the same new policy for the other configuration too. However, you can customize a specific one later within the realm of the new policies such that only the new
-predefined policies, or customv2 policy, or combination of these co-exist on a gateway.
+ Consider this example, you are currently using SSL Policy and SSL Profile with &#34;older&#34; policies/ciphers. To use a &#34;new&#34; Predefined or Customv2 policy for any one of them will also require you to upgrade the other configuration. You may use the new predefined policies, or customv2 policy, or combination of these across the gateway.
To set up a listener-specific SSL policy, you'll need to first go to the **SSL settings** tab in the Portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **SSL Policy** tab is to configure a listener-specific SSL policy. The **Client Authentication** tab is where to upload a client certificate(s) for mutual authentication - for more information, check out [Configuring a mutual authentication](./mutual-authentication-portal.md).
application-gateway Application Gateway Ssl Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ssl-policy-overview.md
Application Gateway supports the following cipher suites from which you can choo
- TLS_RSA_WITH_3DES_EDE_CBC_SHA - TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
-## Known issue
-Application Gateway v2 does not support the following DHE ciphers. These won't be used for the TLS connections with clients even though they are mentioned in the predefined policies. Instead of DHE ciphers, secure and faster ECDHE ciphers are recommended.
--- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256-- TLS_DHE_RSA_WITH_AES_128_CBC_SHA-- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384-- TLS_DHE_RSA_WITH_AES_256_CBC_SHA-- TLS_DHE_DSS_WITH_AES_128_CBC_SHA256-- TLS_DHE_DSS_WITH_AES_128_CBC_SHA-- TLS_DHE_DSS_WITH_AES_256_CBC_SHA256-- TLS_DHE_DSS_WITH_AES_256_CBC_SHA
+## Limitations
+
+- Connections to backend servers always use a minimum protocol of TLS v1.0, up to TLS v1.2. Therefore, only TLS versions 1.0, 1.1, and 1.2 are supported to establish a secured connection with backend servers.
+- As of now, the TLS 1.3 implementation is not enabled with the "Zero Round Trip Time (0-RTT)" feature.
+- Portal support for the new policies and TLS 1.3 is currently unavailable.
+- Application Gateway v2 does not support the following DHE ciphers. These won't be used for the TLS connections with clients even though they are mentioned in the predefined policies. Instead of DHE ciphers, secure and faster ECDHE ciphers are recommended.
+ - TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
+ - TLS_DHE_RSA_WITH_AES_128_CBC_SHA
+ - TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_DHE_RSA_WITH_AES_256_CBC_SHA
+ - TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
+ - TLS_DHE_DSS_WITH_AES_128_CBC_SHA
+ - TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
+ - TLS_DHE_DSS_WITH_AES_256_CBC_SHA
## Next steps
attestation Claim Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-sets.md
Azure Attestation includes the below claims in the attestation token for all att
- **x-ms-attestation-type**: String value representing attestation type - **x-ms-policy-hash**: Hash of Azure Attestation evaluation policy computed as BASE64URL(SHA256(UTF8(BASE64URL(UTF8(policy text))))) - **x-ms-policy-signer**: JSON object with a "jwk" member representing the key a customer used to sign their policy. This is applicable when a customer uploads a signed policy
+- **x-ms-runtime**: JSON object containing "claims" that are defined and generated within the attested environment. This is a specialization of the "enclave held data" concept, where the "enclave held data" is specifically formatted as a UTF-8 encoding of well-formed JSON
+- **x-ms-inittime**: JSON object containing "claims" that are defined and enforced at secure environment initialization time
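As a purely illustrative sketch of these two claims, the member names inside each object are assumptions, not part of the contract; the only requirement described above is that the contents are well-formed JSON:

```json
{
  "x-ms-runtime": {
    "client-payload": { "nonce": "example-nonce" },
    "keys": [ { "kid": "example-key", "kty": "RSA" } ]
  },
  "x-ms-inittime": {
    "example-setting": "enforced-at-initialization"
  }
}
```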
Below claim names are used from [IETF JWT specification](https://tools.ietf.org/html/rfc7519)
$maa-attestationcollateral | x-ms-sgx-collateral
The following claims are additionally supported by the SevSnpVm attestation type: -- **x-ms-runtime**: JSON object containing "claims" that are defined and generated within the attested environment. This is a specialization of the "enclave held data" concept, where the "enclave held data" is specifically formatted as a UTF-8 encoding of well-formed JSON - **x-ms-sevsnpvm-authorkeydigest**: SHA384 hash of the author signing key - **x-ms-sevsnpvm-bootloader-svn**: AMD boot loader security version number (SVN) - **x-ms-sevsnpvm-familyId**: HCL family identification string
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md
These Azure services can work with Automation job and runbook resources using an
## Pricing for Azure Automation
+Process automation includes runbook jobs and watchers. Billing for jobs is based on the number of job run time minutes used in the month; for watchers, it's based on the number of hours used in a month. The charges for process automation are incurred whenever a [job](/azure/automation/start-runbooks) or [watcher](/azure/automation/automation-scenario-using-watcher-task) runs.
+You create Automation accounts with a Basic SKU, which includes the first 500 job run time minutes free per subscription each month. You're billed only for minutes that exceed the 500 free included units. For example, if your runbooks consume 2,000 job run time minutes in a month, you're billed for 1,500 minutes.
+ You can review the prices associated with Azure Automation on the [pricing](https://azure.microsoft.com/pricing/details/automation/) page. ## Next steps
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
This tutorial describes how to use GitOps in a Kubernetes cluster. Before you di
General availability of Azure Arc-enabled Kubernetes includes GitOps with Flux v1. The public preview of GitOps with Flux v2, documented here, is available in both AKS and Azure Arc-enabled Kubernetes. Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible. >[!IMPORTANT]
->GitOps with Flux v2 is in public preview. In preparation for general availability, features are still being added to the preview. One recently-released feature, multi-tenancy, could affect some users. To understand how to work with multi-tenancy, [please review these details](#multi-tenancy).
+>GitOps with Flux v2 is in preview. In preparation for general availability, features are still being added to the preview. One recently-released feature, multi-tenancy, could affect some users. To understand how to work with multi-tenancy, [please review these details](#multi-tenancy).
> >The `microsoft.flux` extension released major version 1.0.0. This includes the multi-tenancy feature. If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension you can upgrade to the latest extension manually using the Azure CLI: "az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>" (use "-t connectedClusters" for Arc clusters and "-t managedClusters" for AKS clusters). - ## Prerequisites To manage GitOps through the Azure CLI or the Azure portal, you need the following items.
False whl k8s-extension C:\Users\somename\.azure\c
Use the `k8s-configuration` Azure CLI extension (or the Azure portal) to enable GitOps in an AKS or Arc-enabled Kubernetes cluster. For a demonstration, use the public [gitops-flux2-kustomize-helm-mt](https://github.com/Azure/gitops-flux2-kustomize-helm-mt) repository.
+>[!IMPORTANT]
+>The demonstration repo is designed to simplify your use of this tutorial and illustrate some key principles. To stay current, the repo occasionally gets breaking changes from version upgrades. These changes won't affect your new application of this tutorial, only previous tutorial applications that haven't been deleted. To learn how to handle these changes, see the [breaking change disclaimer](https://github.com/Azure/gitops-flux2-kustomize-helm-mt#breaking-change-disclaimer-%EF%B8%8F).
+ In the following example: * The resource group that contains the cluster is `flux-demo-rg`.
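A hedged sketch of enabling a Flux configuration against that repository with the `k8s-configuration` extension; the cluster name, configuration name, and kustomization layout are placeholder assumptions:

```azurecli
az k8s-configuration flux create \
  --resource-group flux-demo-rg \
  --cluster-name flux-demo-arc \
  --cluster-type connectedClusters \
  --name cluster-config \
  --namespace cluster-config \
  --url https://github.com/Azure/gitops-flux2-kustomize-helm-mt \
  --branch main \
  --kustomization name=infra path=./infrastructure prune=true
```

For an AKS cluster, `--cluster-type managedClusters` would be used instead.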
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
When you connect your machine to Azure Arc-enabled servers, you can perform many
* Protect non-Azure servers with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint), included through [Microsoft Defender for Cloud](../../security-center/defender-for-servers-introduction.md), for threat detection, for vulnerability management, and to proactively monitor for potential security threats. Microsoft Defender for Cloud presents the alerts and remediation suggestions from the threats detected. * Use [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) to collect security-related events and correlate them with other data sources. * **Configure**:
- * Use Azure Automation for frequent and time-consuming management tasks using PowerShell and Python [runbooks](../../automation/automation-runbook-execution.md). Assess configuration changes for installed software, Microsoft services, Windows registry and files, and Linux daemons using [Change Tracking and Inventory](../../automation/change-tracking/overview.md)
+ * Use [Azure Automation](../../automation/extension-based-hybrid-runbook-worker-install.md?tabs=windows) for frequent and time-consuming management tasks using PowerShell and Python [runbooks](../../automation/automation-runbook-execution.md). Assess configuration changes for installed software, Microsoft services, Windows registry and files, and Linux daemons using [Change Tracking and Inventory](../../automation/change-tracking/overview.md)
* Use [Update Management](../../automation/update-management/overview.md) to manage operating system updates for your Windows and Linux servers. Automate onboarding and configuration of a set of Azure services when you use [Azure Automanage (preview)](../../automanage/automanage-arc.md). * Perform post-deployment configuration and automation tasks using supported [Arc-enabled servers VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. * **Monitor**:
azure-maps Zoom Levels And Tile Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/zoom-levels-and-tile-grid.md
The following table provides the full list of values for zoom levels where the t
| 16 | 2.3887 | 611.496 | | 17 | 1.1943 | 305.748 | | 18 | 0.5972 | 152.874 |
-| 19 | 0.14929 | 76.437 |
+| 19 | 0.2986 | 76.437 |
| 20 | 0.14929 | 38.2185 | | 21 | 0.074646 | 19.10926 | | 22 | 0.037323 | 9.55463 |
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
Last updated 04/15/2022
# Collect text and IIS logs with Azure Monitor agent (preview) This article describes how to configure the collection of file-based text logs, including logs generated by IIS on Windows computers, with the [Azure Monitor agent](azure-monitor-agent-overview.md). Many applications log information to text files instead of standard logging services such as Windows Event log or Syslog.
-> [!NOTE]
-> This feature is currently in public preview and isn't completely implemented in the Azure portal. This tutorial uses Azure Resource Manager templates for steps that can't yet be performed with the portal.
+>[!IMPORTANT]
+> This feature is currently in preview. You must submit a request for it to be enabled in your subscriptions at [Azure Monitor Logs: DCR-based Custom Logs Preview Signup](https://aka.ms/CustomLogsOnboard).
## Prerequisites To complete this procedure, you need the following:
The steps to configure log collection are as follows. The detailed steps for eac
3. Create a data collection rule to define the structure of the log file and destination of the collected data. 4. Create association between the data collection rule and the agent collecting the log file.
+> [!NOTE]
+> This feature is currently in public preview and isn't completely implemented in the Azure portal. This tutorial uses Azure Resource Manager templates for steps that can't yet be performed with the portal.
+ ## Create new table in Log Analytics workspace The custom table must be created before you can send data to it. When you create the table, you provide its name and a definition for each of its columns.
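As a hedged sketch, a simple text log often needs only a timestamp and the raw text; the table name and column set below are assumptions for illustration, in the shape the Log Analytics Tables API expects:

```json
{
  "properties": {
    "schema": {
      "name": "MyTable_CL",
      "columns": [
        { "name": "TimeGenerated", "type": "datetime" },
        { "name": "RawData", "type": "string" }
      ]
    }
  }
}
```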
azure-monitor Data Sources Windows Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-windows-events.md
As you type the name of an event log, Azure Monitor provides suggestions of comm
[![Configure Windows events](media/data-sources-windows-events/configure.png)](media/data-sources-windows-events/configure.png#lightbox) > [!IMPORTANT]
-> You can't configure collection of security events from the workspace. You must use [Microsoft Defender for Cloud](../../security-center/security-center-enable-data-collection.md) or [Microsoft Sentinel](../../sentinel/connect-windows-security-events.md) to collect security events.
+> You can't configure collection of security events from the workspace using the Log Analytics agent. You must use [Microsoft Defender for Cloud](../../security-center/security-center-enable-data-collection.md) or [Microsoft Sentinel](../../sentinel/connect-windows-security-events.md) to collect security events. The [Azure Monitor agent](azure-monitor-agent-overview.md) can also be used to collect security events.
> [!NOTE]
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
The Log Analytics gateway is an HTTP forward proxy that supports HTTP tunneling
The Log Analytics gateway supports: * Reporting up to the same Log Analytics workspaces configured on each agent behind it and that are configured with Azure Automation Hybrid Runbook Workers.
-* Windows computers on which either the [Azure Monitor Agent](./azure-monitor-agent-overview.md) or the legacy Microsoft Monitoring Agent is directly connected to a Log Analytics workspace in Azure Monitor.
+* Windows computers on which either the [Azure Monitor Agent](./azure-monitor-agent-overview.md) or the legacy Microsoft Monitoring Agent is directly connected to a Log Analytics workspace in Azure Monitor. Both the source and the gateway server must be running the same agent. You can't stream events from a server running Azure Monitor agent through a server running the gateway with the Log Analytics agent.
* Linux computers on which either the [Azure Monitor Agent](./azure-monitor-agent-overview.md) or the legacy Log Analytics agent for Linux is directly connected to a Log Analytics workspace in Azure Monitor. * System Center Operations Manager 2012 SP1 with UR7, Operations Manager 2012 R2 with UR3, or a management group in Operations Manager 2016 or later that is integrated with Log Analytics.
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
appInsights.defaultClient.context.tags[appInsights.defaultClient.context.keys.cl
appInsights.start(); ```
+### Automatic web snippet injection (Preview)
+
+Automatic web snippet injection allows you to enable [Application Insights Usage Experiences](usage-overview.md) and Browser Diagnostic Experiences with a simple configuration. It provides an easier alternative to manually adding the JavaScript snippet or NPM package to your JavaScript web code. For a Node.js server, set `enableAutoWebSnippetInjection` to `true` in the configuration, or alternatively set the environment variable `APPLICATIONINSIGHTS_WEB_SNIPPET_ENABLED` to `true`. Automatic web snippet injection is available in Application Insights Node.js SDK version 2.3.0 or greater. For more information, see the [Application Insights Node.js GitHub README](https://github.com/microsoft/ApplicationInsights-node.js#automatic-web-snippet-injectionpreview).
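A minimal sketch of opting in at startup via the environment-variable route noted above; the connection string is a placeholder:

```javascript
// Set before the SDK starts; requires Application Insights Node.js SDK >= 2.3.0.
process.env.APPLICATIONINSIGHTS_WEB_SNIPPET_ENABLED = "true";

const appInsights = require("applicationinsights");
appInsights.setup("<your-connection-string>").start();
```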
+ ### Automatic third-party instrumentation In order to track context across asynchronous calls, some changes are required in third party libraries such as MongoDB and Redis. By default, Application Insights will use [`diagnostic-channel-publishers`](https://github.com/Microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers) to monkey-patch some of these libraries. This feature can be disabled by setting the `APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL` environment variable.
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
To use the HTTP Data Collector API, you create a POST request that includes the
| Log-Type |Specify the record type of the data that's being submitted. It can contain only letters, numbers, and the underscore (_) character, and it can't exceed 100 characters. | | x-ms-date |The date that the request was processed, in RFC 7234 format. | | x-ms-AzureResourceId | The resource ID of the Azure resource that the data should be associated with. It populates the [_ResourceId](./log-standard-columns.md#_resourceid) property and allows the data to be included in [resource-context](./design-logs-deployment.md#access-mode) queries. If this field isn't specified, the data won't be included in resource-context queries. |
-| time-generated-field | The name of a field in the data that contains the timestamp of the data item. If you specify a field, its contents are used for **TimeGenerated**. If you don't specify this field, the default for **TimeGenerated** is the time that the message is ingested. The contents of the message field should follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. |
+| time-generated-field | The name of a field in the data that contains the timestamp of the data item. If you specify a field, its contents are used for **TimeGenerated**. If you don't specify this field, the default for **TimeGenerated** is the time that the message is ingested. The contents of the message field should follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. Note: The **TimeGenerated** value can't be more than three days older than the received time, or the row will be dropped.|
| | | ## Authorization
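Each request is signed with HMAC-SHA256 over a canonical string built from the method, content length, content type, date, and resource path, then passed as `Authorization: SharedKey {WorkspaceId}:{Signature}`. A minimal Python sketch of building that header; the workspace ID and shared key are placeholders:

```python
import base64
import datetime
import hashlib
import hmac

def build_signature(workspace_id, shared_key, content_length):
    # x-ms-date must be an RFC 1123 date, for example "Mon, 25 Apr 2022 10:00:00 GMT".
    rfc1123_date = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{rfc1123_date}\n/api/logs"
    )
    # The shared key is base64-encoded; decode it before computing the HMAC.
    digest = hmac.new(base64.b64decode(shared_key),
                      string_to_sign.encode('utf-8'),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode()
    return f"SharedKey {workspace_id}:{signature}", rfc1123_date
```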
azure-monitor Data Ingestion Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-ingestion-time.md
Ingestion time may vary for different resources under different circumstances. Y
| Step | Property or Function | Comments | |:|:|:|
-| Record created at data source | [TimeGenerated](./log-standard-columns.md#timegenerated) <br>If the data source doesn't set this value, then it will be set to the same time as _TimeReceived. |
+| Record created at data source | [TimeGenerated](./log-standard-columns.md#timegenerated) <br>If the data source doesn't set this value, then it will be set to the same time as _TimeReceived. | If, at processing time, the **TimeGenerated** value is older than three days, the row is dropped. |
| Record received by Azure Monitor ingestion endpoint | [_TimeReceived](./log-standard-columns.md#_timereceived) | This field is not optimized for mass processing and should not be used to filter large datasets. | | Record stored in workspace and available for queries | [ingestion_time()](/azure/kusto/query/ingestiontimefunction) | It is recommended to use ingestion_time() if there is a need to filter only records that were ingested in a certain time window. In such case, it is recommended to add also TimeGenerated filter with a larger range. |
azure-monitor Grafana Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/grafana-plugin.md
Title: Monitor Azure services and applications using Grafana description: Route Azure Monitor and Application Insights data so you can view them in Grafana. Previously updated : 10/10/2021 Last updated : 04/22/2022 # Monitor your Azure services in Grafana
-You can monitor Azure services and applications using [Grafana](https://grafana.com/) and its included [Azure Monitor data source plugin](https://grafana.com/docs/grafana/latest/datasources/azuremonitor/). The plugin retrieves data from three Azure
-- Azure Monitor Metrics for numeric time series data from data from Azure resources. -- Azure Monitor Logs for log and performance data from Azure resources that enables you to query using the Kusto Query Language (KQL).-- Azure Resource Graph to quickly query and identify Azure resources across subscriptions.
+You can monitor Azure services and applications using [Grafana](https://grafana.com/) and the included [Azure Monitor data source plugin](https://grafana.com/docs/grafana/latest/datasources/azuremonitor/). The plugin retrieves data from three Azure services:
+- Azure Monitor Metrics for numeric time series data from Azure resources.
+- Azure Monitor Logs for log and performance data from Azure resources that enables you to query using the powerful Kusto Query Language (KQL).
+- Azure Resource Graph to quickly query and identify Azure resources across subscriptions.
-You can then display this performance and availability data on your Grafana dashboards.
+You can then display this performance and availability data on your Grafana dashboard.
-Use the following steps to set up a Grafana server, use out of the box Azure Monitor dashboards, and build custom dashboards with metrics and logs from Azure Monitor.
+Use the following steps to set up a Grafana server and build dashboards for metrics and logs from Azure Monitor.
-## Set up a Grafana server
+## Set up Grafana
-### Set up Grafana locally
-To set up a local Grafana server, [download and install Grafana in your local environment](https://grafana.com/grafana/download).
-
-### Set up Grafana on Azure through the Azure Marketplace
-1. Go to Azure Marketplace and pick Grafana by Grafana Labs.
-
-2. Fill in the names and details. Create a new resource group. Keep track of the values you choose for the VM username, VM password, and Grafana server admin password.
-
-3. Choose VM size and a storage account.
-
-4. Configure the network configuration settings.
-
-5. View the summary and select **Create** after accepting the terms of use.
-
-6. After the deployment completes, select **Go to Resource Group**. You see a list of newly created resources.
+### Set up Azure Managed Grafana (Preview)
+Azure Managed Grafana is optimized for the Azure environment and works seamlessly with Azure Monitor, enabling you to:
- ![Grafana resource group objects](media/grafana-plugin/grafana-resource-group.png)
+- Manage user authentication and access control using Azure Active Directory identities
+- Pin charts from the Azure portal directly to Azure Managed Grafana dashboards
- If you select the network security group (*grafana-nsg* in this case), you can see that port 3000 is used to access Grafana server.
+Use this [quickstart guide](../../managed-grafan) to create an Azure Managed Grafana workspace using the Azure portal.
-7. Get the public IP address of your Grafana server - go back to the list of resources and select **Public IP address**.
+### Set up Grafana locally
+To set up a local Grafana server, [download and install Grafana in your local environment](https://grafana.com/grafana/download).
## Sign in to Grafana > [!IMPORTANT]
-> The Internet Explorer browser and older Microsoft Edge browsers are not compatible with Grafana, you must use a chromium-based browser including the current version of Microsoft Edge. See [supported browsers for Grafana](https://grafana.com/docs/grafana/latest/installation/requirements/#supported-web-browsers).
+> The Internet Explorer browser and older Microsoft Edge browsers aren't compatible with Grafana. You must use a Chromium-based browser, including Microsoft Edge. See [supported browsers for Grafana](https://grafana.com/docs/grafana/latest/installation/requirements/#supported-web-browsers).
-1. Using the IP address of your server, open the Login page at *http://\<IP address\>:3000* or the *\<DNSName>\:3000* in your browser. While 3000 is the default port, note you might have selected a different port during setup. You should see a login page for the Grafana server you built.
+- Log in to Grafana using the endpoint URL of your Azure Managed Grafana workspace or your server's IP address.
- ![Grafana login screen](./media/grafana-plugin/login-screen.png)
+## Configure Azure Monitor data source plugin
-2. Sign in with the user name *admin* and the Grafana server admin password you created earlier. If you're using a local setup, the default password would be *admin*, and you'd be requested to change it on your first login.
+Azure Managed Grafana includes an Azure Monitor data source plugin. By default, the plugin is pre-configured with a managed identity that can query and visualize monitoring data from all resources in the subscription in which the Grafana workspace was deployed. Skip ahead to [Build a Grafana dashboard](#build-a-grafana-dashboard).
-## Configure data source plugin
+![Screenshot of Azure Managed Grafana homepage.](./media/grafana-plugin/azure-managed-grafana.png)
-Once successfully logged in, you should see the option to add your first data source.
+You can expand the resources that can be viewed by your Azure Managed Grafana workspace by [configuring additional permissions](../../managed-grafan) on other subscriptions or resources.
-![Add Data Source](./media/grafana-plugin/add-data-source.png)
+ If you're using an instance that isn't Azure Managed Grafana, you have to set up an Azure Monitor data source.
1. Select **Add data source**, filter by name *Azure* and select the **Azure Monitor** data source.
-![Azure Monitor Data Source](./media/grafana-plugin/azure-monitor-data-source-list.png)
+ ![Screenshot of Azure Monitor Data Source selection.](./media/grafana-plugin/azure-monitor-data-source-list.png)
2. Pick a name for the data source and choose between Managed Identity or App Registration for authentication.
-If your Grafana instance is hosted on an Azure VM with managed identity enabled, you may use this approach for authentication. However, if your Grafana instance is not hosted on Azure or does not have managed identity enabled, you will need to use App Registration with an Azure service principal to setup authentication.
+If you're hosting Grafana on your own Azure VM or Azure App Service with managed identity enabled, you may use this approach for authentication. However, if your Grafana instance isn't hosted on Azure or doesn't have managed identity enabled, you'll need to use App Registration with an Azure service principal to set up authentication.
### Use Managed Identity
-1. Enable managed identity on your VM and change the Grafana server managed identity support setting to true.
- * The managed identity of your hosting VM needs to have the [Monitoring reader role](../roles-permissions-security.md) assigned for the subscription, resource group or resources that you will visualize in Grafana.
+1. Enable managed identity on your VM or App Service and change the Grafana server managed identity support setting to true.
+ * The managed identity of your hosting VM or App Service needs to have the [Monitoring reader role](../roles-permissions-security.md) assigned for the subscription, resource group or resources of interest.
* Additionally, you will need to update the setting 'managed_identity_enabled = true' in the Grafana server config. See [Grafana Configuration](https://grafana.com/docs/grafana/latest/administration/configuration/) for details. Once both steps are complete, you can then save and test access. 2. Select **Save & test**, and Grafana will test the credentials. You should see a message similar to the following one.
- ![Grafana data source managed identity config approved](./media/grafana-plugin/managed-identity.png)
+ ![Screenshot of Azure Monitor datasource with config approved MI.](./media/grafana-plugin/managed-identity.png)
### Or use App Registration 1. Create a service principal - Grafana uses an Azure Active Directory service principal to connect to Azure Monitor APIs and collect data. You must create, or use an existing service principal, to manage access to your Azure resources. * See [these instructions](../../active-directory/develop/howto-create-service-principal-portal.md) to create a service principal. Copy and save your tenant ID (Directory ID), client ID (Application ID) and client secret (Application key value).
- * See [Assign application to role](../../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application) to assign the [Monitoring reader role](../roles-permissions-security.md) to the Azure Active Directory application on the subscription, resource group or resource you want to monitor.
+ * See [Assign application to role](../../active-directory/develop/howto-create-service-principal-portal.md) to assign the [Monitoring reader role](../roles-permissions-security.md) to the Azure Active Directory application on the subscription, resource group or resource you want to monitor.
2. Provide the connection details you'd like to use. * When configuring the plugin, you can indicate which Azure Cloud you would like the plugin to monitor (Public, Azure US Government, Azure Germany, or Azure China).
If your Grafana instance is hosted on an Azure VM with managed identity enabled,
3. Select **Save & test**, and Grafana will test the credentials. You should see a message similar to the following one.
- ![Grafana data source app registration config approved](./media/grafana-plugin/app-registration.png)
+ ![Screenshot of Azure Monitor datasource config with approved App Reg.](./media/grafana-plugin/app-registration.png)
-## Use Azure Monitor data source dashboards
+## Build a Grafana dashboard
-The Azure Monitor plugin includes several out of the box dashboards that you may import to get started.
+1. Go to the Grafana Home page, and select **New Dashboard**.
-1. Click on the **Dashboards** tab of the Azure Monitor plugin to see a list of available dashboards.
+2. In the new dashboard, select **Graph**. You can try other charting options, but this article uses *Graph* as an example.
- ![Azure Monitor Data Source Dashboards](./media/grafana-plugin/azure-data-source-dashboards.png)
+3. A blank graph shows up on your dashboard. Click on the panel title and select **Edit** to enter the details of the data you want to plot in this graph.
+ ![Screenshot Grafana new panel dropdown options.](./media/grafana-plugin/grafana-new-graph-dark.png)
+
+4. Select the Azure Monitor data source you've configured.
+ * Visualizing Azure Monitor metrics - select **Azure Monitor** in the service dropdown. A list of selectors shows up, where you can select the resources and metric to monitor in this chart. To collect metrics from a VM, use the namespace **Microsoft.Compute/VirtualMachines**. Once you have selected VMs and metrics, you can start viewing their data in the dashboard.
+ ![Screenshot of Grafana panel config for Azure Monitor metrics.](./media/grafana-plugin/grafana-graph-config-for-azure-monitor-dark.png)
+ * Visualizing Azure Monitor log data - select **Azure Log Analytics** in the service dropdown. Select the workspace you'd like to query and set the query text. You can copy any log query you already have or create a new one. As you type in your query, IntelliSense will show up and suggest autocomplete options. Select the visualization type, **Time series** or **Table**, and run the query.
+
+ > [!NOTE]
+ >
+ > The default query provided with the plugin uses two macros: "$__timeFilter()" and "$__interval".
+ > These macros allow Grafana to dynamically calculate the time range and time grain, when you zoom in on part of a chart. You can remove these macros and use a standard time filter, such as *TimeGenerated > ago(1h)*, but that means the graph would not support the zoom in feature.
+
+ ![Screenshot of Grafana panel config for Azure Monitor logs.](./media/grafana-plugin/grafana-graph-config-for-azure-log-analytics-dark.png)
-2. Click on **Import** to download a dashboard.
+5. Following is a simple dashboard with two charts. The one on the left shows the CPU percentage of two VMs. The chart on the right shows the transactions in an Azure Storage account, broken down by the Transaction API type.
+ ![Screenshot of Grafana dashboards with two panels.](media/grafana-plugin/grafana6.png)
-3. Click on the name of the imported dashboard to open it.
+## Pin charts from the Azure portal to Azure Managed Grafana
-4. Use the drop-down selectors at the top of the dashboard to choose the subscription, resource group and resource of interest.
+In addition to building panels in Grafana, you can quickly pin Azure Monitor visualizations from the Azure portal to new or existing Grafana dashboards. Navigate to Metrics for your resource, create a chart, and click **Save to dashboard**, followed by **Pin to Grafana**. Choose the workspace and dashboard and click **Pin** to complete the operation.
- ![Storage Insights Dashboards](./media/grafana-plugin/storage-insights-dashboard.png)
+[ ![Screenshot Pin to Grafana option in Azure Monitor metrics explorer.](media/grafana-plugin/grafana-pin-to.png) ](media/grafana-plugin/grafana-pin-to-expanded.png#lightbox)
-## Build Grafana dashboards
+## Optional: Monitor your custom metrics in the same Grafana server
-1. Go to the Grafana Home page, and select **Create your first dashboard**.
+You can also install Telegraf and InfluxDB to collect and plot both custom and agent-based metrics on the same Grafana instance. There are many data source plugins that you can use to bring these metrics together in a dashboard.
-2. In the new dashboard, select **Add an empty panel**.
+You can also reuse this setup to include metrics from your Prometheus server. Use the Prometheus data source plugin in Grafana's plugin gallery.
-3. An empty *Time series* panel appears on your dashboard with a query editor shown below. Select the Azure Monitor data source you've configured.
- * Collecting Azure Monitor metrics - select **Metrics** in the service dropdown. A list of selectors shows up, where you can select the resources and metric to monitor in this chart. To collect metrics from a VM, use the namespace **Microsoft.Compute/VirtualMachines**. Once you have selected VMs and metrics, you can start viewing their data in the dashboard.
- ![Grafana graph config for Azure Monitor metrics](./media/grafana-plugin/metrics-config.png)
- * Collecting Azure Monitor log data - select **Logs** in the service dropdown. Select the resource or Log Analytics workspace you'd like to query and set the query text. Note that The Azure Monitor plugin allows you to query the logs for specific resources or from a Log Analytics workspace. In the query editor below, you can copy any log query you already have or create a new one. As you type in your query, IntelliSense will show up and suggest autocomplete options. Finally, select the visualization type, **Time series**, and run the query.
-
- > [!NOTE]
- >
- > The default query provided with the plugin uses two macros: "$__timeFilter() and $__interval.
- > These macros allow Grafana to dynamically calculate the time range and time grain, when you zoom in on part of a chart. You can remove these macros and use a standard time filter, such as *TimeGenerated > ago(1h)*, but that means the graph would not support the zoom in feature.
-
- The following example shows a query being run on an Application Insights resource for the average response time for all requests.
+Here are good reference articles on how to use Telegraf, InfluxDB, Prometheus, and Docker:
+ - [How To Monitor System Metrics with the TICK Stack on Ubuntu 16.04](https://www.digitalocean.com/community/tutorials/how-to-monitor-system-metrics-with-the-tick-stack-on-ubuntu-16-04)
- ![Grafana graph config for Azure Log Analytics](./media/grafana-plugin/logs-config.png)
+ - [A monitoring solution for Docker hosts, containers, and containerized services](https://stefanprodan.com/2016/a-monitoring-solution-for-docker-hosts-containers-and-containerized-services/)
- * In addition to the metric and log queries shown above, the Azure Monitor plugin supports [Azure Resource Graph](../../governance/resource-graph/concepts/explore-resources.md) queries.
+Here is an image of a full Grafana dashboard that has metrics from Azure Monitor and Application Insights.
+![Screenshot of Grafana dashboard with multiple panels.](media/grafana-plugin/grafana8.png)
## Advanced Grafana features ### Variables
-Some resource and query values can be selected by the dashboard user through UI dropdowns, and updated in the resource or the query.
-Consider the following query that shows the usage of a Log Analytics workspace as an example:
+Some query values can be selected through UI dropdowns, and updated in the query.
+Consider the following query as an example:
``` Usage | where $__timeFilter(TimeGenerated)
Usage
| sort by TimeGenerated ```
-You can configure a variable that will list all available **workspaces**, and then update the resource that is queried based on a user selection.
+You can configure a variable that will list all available **Solution** values, and then update your query to use it.
To create a new variable, click the dashboard's Settings button in the top right area, select **Variables**, and then **New**. On the variable page, define the data source and query to run in order to get the list of values.
+![Grafana configure variable](./media/grafana-plugin/grafana-configure-variable-dark.png)
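A sketch of a variable query that lists the distinct **Solution** values, assuming the variable is named *Solutions* as in the query shown further below:

```
Usage
| distinct Solution
```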
-![Grafana define variable](./media/grafana-plugin/define-variable.png)
-
-Once created, change the resource for your query to use the selected value(s) and your charts will respond accordingly:
-
-![Query with variable](./media/grafana-plugin/query-with-variable.png)
-
-See the full list of the [template variables](https://grafana.com/docs/grafana/latest/datasources/azuremonitor/template-variables/) available in the Azure Monitor plugin.
+Once created, adjust the query to use the selected value(s) and your charts will respond accordingly:
+```
+Usage
+| where $__timeFilter(TimeGenerated) and Solution in ($Solutions)
+| summarize total_KBytes=sum(Quantity)*1024 by bin(TimeGenerated, $__interval)
+| sort by TimeGenerated
+```
+
+![Grafana use variables](./media/grafana-plugin/grafana-use-variables-dark.png)
### Create dashboard playlists
-One of the many useful features of Grafana is the dashboard playlist. You can create multiple dashboards and add them to a playlist configuring an interval for each dashboard to show. Navigate to the Dashboards menu item and select **Playlists** to create a playlist of existing dashboards to cycle through. You may want to display them on a large wall monitor to provide a status board for your group.
+One of the many useful features of Grafana is the dashboard playlist. You can create multiple dashboards and add them to a playlist, configuring an interval for each dashboard to show. Select **Play** to see the dashboards cycle through. You may want to display them on a large wall monitor to provide a status board for your group.
-![Grafana Playlist Example](./media/grafana-plugin/playlist.png)
+![Grafana Playlist Example](./media/grafana-plugin/grafana7.png)
## Clean up resources
-If you've setup a Grafana environment on Azure, you are charged when VMs are running whether you are using them or not. To avoid incurring additional charges, clean up the resource group created in this article.
+If you've set up a Grafana environment on Azure, you're charged when resources are running, whether you're using them or not. To avoid incurring additional charges, clean up the resource group created in this article.
1. From the left-hand menu in the Azure portal, click **Resource groups** and then click **Grafana**. 2. On your resource group page, click **Delete**, type **Grafana** in the text box, and then click **Delete**. ## Next steps
-* [Compare Azure Monitor metrics and logs](../data-platform.md)
+* [Overview of Azure Monitor Metrics](../data-platform.md)
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 04/26/2022 Last updated : 04/27/2022 # Azure subscription and service limits, quotas, and constraints
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise s
The latest values for Microsoft Purview quotas can be found in the [Microsoft Purview quota page](../../purview/how-to-manage-quotas.md).
+## Microsoft Sentinel limits
+
+This section lists the most common service limits you might encounter as you use Microsoft Sentinel.
+
+### Analytics rule limits
++
+### Incident limits
++
+### Machine learning-based limits
++
+### Notebook limits
++
+### Threat intelligence limits
++
+### Watchlist limits
++
+### User and Entity Behavior Analytics (UEBA) limits
++ ## Service Bus limits [!INCLUDE [azure-servicebus-limits](../../../includes/service-bus-quotas-table.md)]
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/data-types.md
description: Describes the data types that are available in Azure Resource Manag
Previously updated : 06/24/2021 Last updated : 04/27/2022 # Data types in ARM templates
Within an ARM template, you can use these data types:
* int * object * secureObject
-* secureString
+* securestring
* string ## Arrays
The following example shows two secure parameters.
```json "parameters": { "password": {
- "type": "secureString"
+ "type": "securestring"
}, "configValues": { "type": "secureObject" } } ```
-> [!NOTE]
-> Secure strings and objects aren't recommended to be used as an output type because they're not stored in the deployment history.
+> [!NOTE]
+> Don't use secure strings or objects as output values. If you include a secure value as an output value, the value isn't displayed in the deployment history and can't be retrieved from another template. Instead, save the secure value in a key vault, and [pass as a parameter from the key vault](key-vault-parameter.md).
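A minimal sketch of that pattern in a parameter file; the vault resource ID and secret name are placeholders:

```json
"adminPassword": {
  "reference": {
    "keyVault": {
      "id": "/subscriptions/<subscription-id>/resourceGroups/examplegroup/providers/Microsoft.KeyVault/vaults/examplevault"
    },
    "secretName": "examplesecret"
  }
}
```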
## Next steps
azure-signalr Signalr Concept Serverless Development Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-serverless-development-config.md
description: Details on how to develop and configure serverless real-time applic
Previously updated : 03/01/2019 Last updated : 04/20/2022 ms.devlang: csharp, javascript
# Azure Functions development and configuration with Azure SignalR Service
-Azure Functions applications can leverage the [Azure SignalR Service bindings](../azure-functions/functions-bindings-signalr-service.md) to add real-time capabilities. Client applications use client SDKs available in several languages to connect to Azure SignalR Service and receive real-time messages.
+Azure Functions applications can use the [Azure SignalR Service bindings](../azure-functions/functions-bindings-signalr-service.md) to add real-time capabilities. Client applications use client SDKs available in several languages to connect to Azure SignalR Service and receive real-time messages.
This article describes the concepts for developing and configuring an Azure Function app that is integrated with SignalR Service. ## SignalR Service configuration
-Azure SignalR Service can be configured in different modes. When used with Azure Functions, the service must be configured in *Serverless* mode.
+Azure SignalR Service can be configured in [different modes](concept-service-mode.md). When used with Azure Functions, the service must be configured in **Serverless** mode.
-In the Azure portal, locate the *Settings* page of your SignalR Service resource. Set the *Service mode* to *Serverless*.
+In the Azure portal, locate the **Settings** page of your SignalR Service resource. Set the **Service mode** to **Serverless**.
![SignalR Service Mode](media/signalr-concept-azure-functions/signalr-service-mode.png) ## Azure Functions development
-A serverless real-time application built with Azure Functions and Azure SignalR Service typically requires two Azure Functions:
+A serverless real-time application built with Azure Functions and Azure SignalR Service requires at least two Azure Functions:
-* A "negotiate" function that the client calls to obtain a valid SignalR Service access token and service endpoint URL
-* One or more functions that handle messages from SignalR Service and send messages or manage group membership
+* A `negotiate` function that the client calls to obtain a valid SignalR Service access token and endpoint URL.
+* One or more functions that handle messages sent from SignalR Service to clients.
### negotiate function
-A client application requires a valid access token to connect to Azure SignalR Service. An access token can be anonymous or authenticated to a given user ID. Serverless SignalR Service applications require an HTTP endpoint named "negotiate" to obtain a token and other connection information, such as the SignalR Service endpoint URL.
+A client application requires a valid access token to connect to Azure SignalR Service. An access token can be anonymous or authenticated to a user ID. Serverless SignalR Service applications require an HTTP endpoint named `negotiate` to obtain a token and other connection information, such as the SignalR Service endpoint URL.
-Use an HTTP triggered Azure Function and the *SignalRConnectionInfo* input binding to generate the connection information object. The function must have an HTTP route that ends in `/negotiate`.
+Use an HTTP-triggered Azure Function and the `SignalRConnectionInfo` input binding to generate the connection information object. The function must have an HTTP route that ends in `/negotiate`.
-With [class based model](#class-based-model) in C#, you don't need *SignalRConnectionInfo* input binding and can add custom claims much easier. See [Negotiate experience in class based model](#negotiate-experience-in-class-based-model)
+With [class-based model](#class-based-model) in C#, you don't need the `SignalRConnectionInfo` input binding and can add custom claims much more easily. For more information, see [Negotiate experience in class-based model](#negotiate-experience-in-class-based-model).
-For more information on how to create the negotiate function, see the [*SignalRConnectionInfo* input binding reference](../azure-functions/functions-bindings-signalr-service-input.md).
+For more information about the `negotiate` function, see [Azure Functions development](#negotiate-function).
-To learn about how to create an authenticated token, refer to [Using App Service Authentication](#using-app-service-authentication).
+To learn how to create an authenticated token, refer to [Using App Service Authentication](#using-app-service-authentication).
### Handle messages sent from SignalR Service
-Use the *SignalR Trigger* binding to handle messages sent from SignalR Service. You can get notified when clients send messages or clients get connected or disconnected.
+Use the `SignalRTrigger` binding to handle messages sent from SignalR Service. You can get notified when clients send messages or clients get connected or disconnected.
-For more information, see the [*SignalR trigger* binding reference](../azure-functions/functions-bindings-signalr-service-trigger.md).
+For more information, see the [SignalR Service trigger binding reference](../azure-functions/functions-bindings-signalr-service-trigger.md).
-You also need to configure your function endpoint as an upstream so that service will trigger the function when there is message from client. For more information about how to configure upstream, please refer to this [doc](concept-upstream.md).
+You also need to configure your function endpoint as an upstream so that the service will trigger the function when there's a message from a client. For more information about how to configure upstream, see [Upstream settings in Azure SignalR Service](concept-upstream.md).
> [!NOTE]
-> StreamInvocation from client is not supported in Serverless Mode.
+> SignalR Service doesn't support the `StreamInvocation` message from a client in Serverless Mode.
### Sending messages and managing group membership
-Use the *SignalR* output binding to send messages to clients connected to Azure SignalR Service. You can broadcast messages to all clients, or you can send them to a subset of clients that are authenticated with a specific user ID or have been added to a specific group.
+Use the `SignalR` output binding to send messages to clients connected to Azure SignalR Service. You can broadcast messages to all clients, or you can send them to a subset of clients. For example, only send messages to clients authenticated with a specific user ID, or only to a specific group.
-Users can be added to one or more groups. You can also use the *SignalR* output binding to add or remove users to/from groups.
+Users can be added to one or more groups. You can also use the `SignalR` output binding to add or remove users to/from groups.
-For more information, see the [*SignalR* output binding reference](../azure-functions/functions-bindings-signalr-service-output.md).
+For more information, see the [`SignalR` output binding reference](../azure-functions/functions-bindings-signalr-service-output.md).
### SignalR Hubs
-SignalR has a concept of "hubs". Each client connection and each message sent from Azure Functions is scoped to a specific hub. You can use hubs as a way to separate your connections and messages into logical namespaces.
+SignalR has a concept of *hubs*. Each client connection and each message sent from Azure Functions is scoped to a specific hub. You can use hubs as a way to separate your connections and messages into logical namespaces.
-## Class based model
+## Class-based model
-The class based model is dedicated for C#. With class based model can have a consistent SignalR server-side programming experience. It has the following features.
+The class-based model is dedicated to C#. It provides a consistent SignalR server-side programming experience, with the following features:
* Less configuration work: The class name is used as `HubName`, the method name is used as `Event` and the `Category` is decided automatically according to method name.
-* Auto parameter binding: Neither `ParameterNames` nor attribute `[SignalRParameter]` is needed. Parameters are auto bound to arguments of Azure Function method in order.
+* Auto parameter binding: `ParameterNames` and attribute `[SignalRParameter]` aren't needed. Parameters are automatically bound to arguments of Azure Function methods in order.
* Convenient output and negotiate experience.
-The following codes demonstrate these features:
+The following code demonstrates these features:
```cs public class SignalRTestHub : ServerlessHub
public class SignalRTestHub : ServerlessHub
} ```
-All functions that want to leverage class based model need to be the method of class that inherits from **ServerlessHub**. The class name `SignalRTestHub` in the sample is the hub name.
+All functions that want to use the class-based model need to be a method of the class that inherits from **ServerlessHub**. The class name `SignalRTestHub` in the sample is the hub name.
### Define hub method
-All the hub methods **must** have an argument of `InvocationContext` decorated by `[SignalRTrigger]` attribute and use parameterless constructor. Then the **method name** is treated as parameter **event**.
+All the hub methods **must** have an argument of `InvocationContext` decorated by the `[SignalRTrigger]` attribute and use a parameterless constructor. Then the **method name** is treated as the parameter **event**.
By default, `category=messages` except when the method name is one of the following names (a short sketch follows this list):
-* **OnConnected**: Treated as `category=connections, event=connected`
-* **OnDisconnected**: Treated as `category=connections, event=disconnected`
+* `OnConnected`: Treated as `category=connections, event=connected`
+* `OnDisconnected`: Treated as `category=connections, event=disconnected`
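A minimal sketch of a hub method under these conventions; the broadcast target name `newConnection` is just an example:

```cs
[FunctionName(nameof(OnConnected))]
public async Task OnConnected([SignalRTrigger] InvocationContext invocationContext, ILogger logger)
{
    // The method name "OnConnected" maps to category=connections, event=connected.
    await Clients.All.SendAsync("newConnection", invocationContext.ConnectionId);
    logger.LogInformation($"{invocationContext.ConnectionId} has connected");
}
```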
### Parameter binding experience
-In class based model, `[SignalRParameter]` is unnecessary because all the arguments are marked as `[SignalRParameter]` by default except it is one of the following situations:
+In the class-based model, `[SignalRParameter]` is unnecessary because all the arguments are marked as `[SignalRParameter]` by default except in one of the following situations:
-* The argument is decorated by a binding attribute.
+* The argument is decorated by a binding attribute
* The argument's type is `ILogger` or `CancellationToken` * The argument is decorated by attribute `[SignalRIgnore]`
-### Negotiate experience in class based model
+### Negotiate experience in class-based model
-Instead of using SignalR input binding `[SignalR]`, negotiation in class based model can be more flexible. Base class `ServerlessHub` has a method
+Instead of using the SignalR input binding `[SignalR]`, negotiation in the class-based model can be more flexible. Base class `ServerlessHub` has a method
```cs SignalRConnectionInfo Negotiate(string userId = null, IList<Claim> claims = null, TimeSpan? lifeTime = null)
internal class FunctionAuthorizeAttribute: SignalRFilterAttribute
} ```
-Leverage the attribute to authorize the function.
+Use the attribute to authorize the function.
```cs [FunctionAuthorize]
public async Task Broadcast([SignalRTrigger]InvocationContext invocationContext,
## Client development
-SignalR client applications can leverage the SignalR client SDK in one of several languages to easily connect to and receive messages from Azure SignalR Service.
+SignalR client applications can use the SignalR client SDK in one of several languages to easily connect to and receive messages from Azure SignalR Service.
### Configuring a client connection To connect to SignalR Service, a client must complete a successful connection negotiation that consists of these steps:
-1. Make a request to the *negotiate* HTTP endpoint discussed above to obtain valid connection information
-1. Connect to SignalR Service using the service endpoint URL and access token obtained from the *negotiate* endpoint
+1. Make a request to the `negotiate` HTTP endpoint discussed above to obtain valid connection information
+1. Connect to SignalR Service using the service endpoint URL and access token obtained from the `negotiate` endpoint
-SignalR client SDKs already contain the logic required to perform the negotiation handshake. Pass the negotiate endpoint's URL, minus the `negotiate` segment, to the SDK's `HubConnectionBuilder`. Here is an example in JavaScript:
+SignalR client SDKs already contain the logic required to perform the negotiation handshake. Pass the negotiate endpoint's URL, minus the `negotiate` segment, to the SDK's `HubConnectionBuilder`. Here's an example in JavaScript:
```javascript const connection = new signalR.HubConnectionBuilder()
By convention, the SDK automatically appends `/negotiate` to the URL and uses it
> [!NOTE] > If you are using the JavaScript/TypeScript SDK in a browser, you need to [enable cross-origin resource sharing (CORS)](#enabling-cors) on your Function App.
-For more information on how to use the SignalR client SDK, refer to the documentation for your language:
+For more information on how to use the SignalR client SDK, see the documentation for your language:
* [.NET Standard](/aspnet/core/signalr/dotnet-client) * [JavaScript](/aspnet/core/signalr/javascript-client)
For more information on how to use the SignalR client SDK, refer to the document
### Sending messages from a client to the service
-If you have [upstream](concept-upstream.md) configured for your SignalR resource, you can send messages from client to your Azure Functions using any SignalR client. Here is an example in JavaScript:
+If you've configured [upstream](concept-upstream.md) for your SignalR resource, you can send messages from a client to your Azure Functions using any SignalR client. Here's an example in JavaScript:
```javascript connection.send('method1', 'arg1', 'arg2');
However, there are a couple of special considerations for apps that use the Sign
### Enabling CORS
-The JavaScript/TypeScript client makes HTTP requests to the negotiate function to initiate the connection negotiation. When the client application is hosted on a different domain than the Azure Function app, cross-origin resource sharing (CORS) must be enabled on the Function app or the browser will block the requests.
+The JavaScript/TypeScript client makes HTTP requests to the negotiate function to initiate the connection negotiation. When the client application is hosted on a different domain than the Azure Function app, cross-origin resource sharing (CORS) must be enabled on the function app or the browser will block the requests.
#### Localhost
Example:
#### Cloud - Azure Functions CORS
-To enable CORS on an Azure Function app, go to the CORS configuration screen under the *Platform features* tab of your Function app in the Azure portal.
+To enable CORS on an Azure Function app, go to the CORS configuration screen under the **Platform features** tab of your Function app in the Azure portal.
> [!NOTE] > CORS configuration is not yet available in Azure Functions Linux Consumption plan. Use [Azure API Management](#cloudazure-api-management) to enable CORS. CORS with Access-Control-Allow-Credentials must be enabled for the SignalR client to call the negotiate function. Select the checkbox to enable it.
-In the *Allowed origins* section, add an entry with the origin base URL of your web application.
+In the **Allowed origins** section, add an entry with the origin base URL of your web application.
![Configuring CORS](media/signalr-concept-serverless-development-config/cors-settings.png)
Configure your SignalR clients to use the API Management URL.
### Using App Service Authentication
-Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Azure Active Directory. This feature can be integrated with the *SignalRConnectionInfo* binding to create connections to Azure SignalR Service that have been authenticated to a user ID. Your application can send messages using the *SignalR* output binding that are targeted to that user ID.
+Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Azure Active Directory. This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that have been authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID.
In the Azure portal, in your Function app's *Platform features* tab, open the *Authentication/authorization* settings window. Follow the documentation for [App Service Authentication](../app-service/overview-authentication-authorization.md) to configure authentication using an identity provider of your choice. Once configured, authenticated HTTP requests will include `x-ms-client-principal-name` and `x-ms-client-principal-id` headers containing the authenticated identity's username and user ID, respectively.
-You can use these headers in your *SignalRConnectionInfo* binding configuration to create authenticated connections. Here is an example C# negotiate function that uses the `x-ms-client-principal-id` header.
+You can use these headers in your `SignalRConnectionInfo` binding configuration to create authenticated connections. Here's an example C# negotiate function that uses the `x-ms-client-principal-id` header.
```csharp [FunctionName("negotiate")]
For information on other languages, see the [Azure SignalR Service bindings](../
## Next steps
-In this article, you have learned how to develop and configure serverless SignalR Service applications using Azure Functions. Try creating an application yourself using one of the quick starts or tutorials on the [SignalR Service overview page](index.yml).
+In this article, you've learned how to develop and configure serverless SignalR Service applications using Azure Functions. Try creating an application yourself using one of the quick starts or tutorials on the [SignalR Service overview page](index.yml).
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/resource-limits.md
Support for the premium-series hardware (public preview) is currently available
| Region | **Premium-series** | **Memory optimized premium-series** | |: |: |: |
-| Australia East | Yes | Yes |
+| Australia Central | Yes | |
+| Australia East | Yes | |
| Canada Central | Yes | | | Canada East | Yes | | | Central US | Yes | Yes |
+| East US | Yes | Yes |
| Germany West Central | Yes | Yes | | Japan East | Yes | |
+| Japan West | Yes | |
| Korea Central | Yes | | | North Central US | Yes | Yes |
-| North Europe | Yes | |
+| North Europe | Yes | Yes |
+| Norway East | Yes | |
+| South Africa West | Yes | |
| South Central US | Yes | Yes | | Southeast Asia | Yes | |
+| Sweden Central | | Yes |
+| Switzerland North | Yes | |
+| Switzerland West | Yes | |
+| UAE North | Yes | |
| UK South | Yes | Yes |
-| West Europe | | Yes |
+| UK West | Yes | |
+| West Central US | Yes | |
+| West Europe | Yes | Yes |
| West US | Yes | | | West US 2 | Yes | Yes | | West US 3 | Yes | Yes |
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md
Azure file shares backup is available in all regions, **except** for Germany Cen
| Setting | Limit | | | - | | Maximum number of restores per day | 10 |
-| Maximum number of files per restore | 99 |
+| Maximum number of files per restore, in the case of item-level recovery (ILR) | 99 |
| Maximum recommended restore size per restore for large file shares | 15 TiB | ## Retention limits
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md
Title: Back up an SAP HANA database to Azure with Azure Backup description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service. Previously updated : 04/01/2022 Last updated : 04/28/2022
You can also use the following FQDNs to allow access to the required services fr
##### Using an HTTP proxy server for AAD traffic 1. Go to the "opt/msawb/bin" folder
-2. Create a new JSON file named "ExtensionSettingOverrides.JSON"
+2. Create a new JSON file named "ExtensionSettingsOverrides.json"
3. Add key-value pairs to the JSON file as follows: ```json
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-database-azure-vms.md
Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Previously updated : 01/27/2022 Last updated : 04/28/2022
The following table lists the various alternatives you can use for establishing
| Allow access to service FQDNs/IPs | No additional costs <br><br> Works with all network security appliances and firewalls | Access to a broad set of IPs or FQDNs may be required | | Use an HTTP proxy | Single point of internet access to VMs | Additional costs to run a VM with the proxy software |
-More details around using these options are shared below:
+The following sections provide more details around using these options.
+
+>[!Note]
+>You can use the [Azure Backup connectivity test scripts](https://github.com/Azure/Azure-Workload-Backup-Troubleshooting-Scripts/releases/download/v1.0.0/AzureBackupConnectivityTestScriptsForWindows.zip) to self-diagnose network connectivity issues in a Windows environment.
#### Private endpoints
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 09/21/2021 Last updated : 04/28/2022
Azure Backup has added the Cross Region Restore feature to strengthen data avail
| Backup Management type | Supported | Supported Regions | | - | | -- |
-| Azure VM | Supported for Azure VMs (including encrypted Azure VMs) with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions, except for UG IOWA and UG Virginia. |
-| SQL /SAP HANA | Available | Available in all Azure public regions and sovereign regions, except for France Central, UG IOWA, and UG Virginia. |
+| Azure VM | Supported for Azure VMs (including encrypted Azure VMs) with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions, except for UG IOWA. |
+| SQL /SAP HANA | Available | Available in all Azure public regions and sovereign regions, except for France Central and UG IOWA. |
| MARS Agent/On premises | No | N/A | | AFS (Azure file shares) | No | N/A |
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
To improve accuracy, customization is available for some languages and baseline
| Arabic (Tunisia) | `ar-TN` | | Arabic (United Arab Emirates) | `ar-AE` | | Arabic (Yemen) | `ar-YE` |
+| Bengali (India) | `bn-IN` |
| Bulgarian (Bulgaria) | `bg-BG` | | Burmese (Myanmar) | `my-MM` | | Catalan (Spain) | `ca-ES` |
cosmos-db Manage With Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-cli.md
The following sections demonstrate how to manage the Azure Cosmos account, inclu
* [Add or remove regions](#add-or-remove-regions) * [Enable multi-region writes](#enable-multiple-write-regions) * [Set regional failover priority](#set-failover-priority)
-* [Enable service-managed failover](#enable-automatic-failover)
+* [Enable service-managed failover](#enable-service-managed-failover)
* [Trigger manual failover](#trigger-manual-failover) * [List account keys](#list-account-keys) * [List read-only account keys](#list-read-only-account-keys)
cost-management-billing Billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-subscription-transfer.md
Title: Transfer billing ownership of an Azure subscription
-description: Describes how to transfer billing ownership of an Azure subscription to another account.
+ Title: Transfer billing ownership of an MOSP Azure subscription
+description: Describes how to transfer billing ownership of an MOSP Azure subscription to another account.
keywords: transfer azure subscription, azure transfer subscription, move azure subscription to another account,azure change subscription owner, transfer azure subscription to another account, azure transfer billing
tags: billing,top-support-issue
Previously updated : 04/07/2022 Last updated : 04/27/2022
-# Transfer billing ownership of an Azure subscription to another account
+# Transfer billing ownership of an MOSP Azure subscription to another account
-This article shows the steps needed to transfer billing ownership of an Azure subscription to another account. Before you transfer billing ownership for a subscription, read [Azure subscription and reservation transfer hub](subscription-transfer.md) to ensure that your transfer type is supported.
+This article shows the steps needed to transfer billing ownership of a Microsoft Online Services Program (MOSP) Azure subscription, also referred to as pay-as-you-go, to another account. Before you transfer billing ownership for a subscription, read [Azure subscription and reservation transfer hub](subscription-transfer.md) to ensure that your transfer type is supported.
If you want to keep your billing ownership but change subscription type, see [Switch your Azure subscription to another offer](switch-azure-offer.md). To control who can access resources in the subscription, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
When you send or accept a transfer request, you agree to terms and conditions. F
## Transfer billing ownership of an Azure subscription 1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator of the billing account that has the subscription that you want to transfer. If you're not sure if you're an administrator, or if you need to determine who is, see [Determine account billing administrator](add-change-subscription-administrator.md#whoisaa).
-1. Search for **Cost Management + Billing**.
- ![Screenshot that shows Azure portal search](./media/billing-subscription-transfer/billing-search-cost-management-billing.png)
-1. Select **Subscriptions** from the left-hand pane. Depending on your access, you may need to select a billing scope and then select **Subscriptions** or **Azure subscriptions**.
-1. Select **Transfer billing ownership** for the subscription that you want to transfer.
- ![Select subscription to transfer](./media/billing-subscription-transfer/billing-select-subscription-to-transfer.png)
-1. Enter the email address of a user who's a billing administrator of the account that will be the new owner for the subscription.
-1. If you're transferring your subscription to an account in another Azure AD tenant, select if you want to move the subscription to the new account's tenant. For more information, see [Transferring subscription to an account in another Azure AD tenant](#transfer-a-subscription-to-another-azure-ad-tenant-account).
+1. Navigate to **Subscriptions** and then select the one that you want to transfer.
+ :::image type="content" source="./media/billing-subscription-transfer/navigate-subscriptions.png" alt-text="Screenshot showing navigation to the Subscriptions page." lightbox="./media/billing-subscription-transfer/navigate-subscriptions.png" :::
+1. At the top of the page, select **Transfer billing ownership**.
+ :::image type="content" source="./media/billing-subscription-transfer/select-transfer-billing-ownership.png" alt-text="Screenshot showing the Transfer billing ownership option." lightbox="./media/billing-subscription-transfer/select-transfer-billing-ownership.png" :::
+1. On the Transfer billing ownership page, enter the email address of a user who is a billing administrator of the account that will be the new owner for the subscription.
+ :::image type="content" source="./media/billing-subscription-transfer/transfer-billing-ownership-page.png" alt-text="Screenshot showing the Transfer billing ownership page." lightbox="./media/billing-subscription-transfer/transfer-billing-ownership-page.png" :::
+1. If you're transferring your subscription to an account in another Azure AD tenant, select **Move subscription tenant** to move the subscription to the new account's tenant. For more information, see [Transferring subscription to an account in another Azure AD tenant](#transfer-a-subscription-to-another-azure-ad-tenant-account).
> [!IMPORTANT]
- > If you choose to move the subscription to the new account's Azure AD tenant, all [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) to access resources in the subscription are permanently removed. Only the user in the new account who accepts your transfer request will have access to manage resources in the subscription. Alternatively, you can clear the **Subscription Azure AD tenant** option to transfer billing ownership without moving the subscription to the new account's tenant. If you do so, existing Azure role assignments to access Azure resources will be maintained.
- ![Send transfer page](./media/billing-subscription-transfer/billing-send-transfer-request.png)
+ > If you choose to move the subscription to the new account's Azure AD tenant, all [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) to access resources in the subscription are permanently removed. Only the user in the new account who accepts your transfer request will have access to manage resources in the subscription. Alternatively, you can clear the **Move subscription tenant** option to transfer billing ownership without moving the subscription to the new account's tenant. If you do so, existing Azure role assignments to access Azure resources will be maintained.
1. Select **Send transfer request**. 1. The user gets an email with instructions to review your transfer request. ![Subscription transfer email sent to the recipient](./media/billing-subscription-transfer/billing-receiver-email.png)
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery.md
Previously updated : 09/09/2021 Last updated : 04/26/2022 # Copy data from Google BigQuery using Azure Data Factory or Synapse Analytics
Set "authenticationType" property to **UserAuthentication**, and specify the fol
| clientSecret | Secret of the application used to generate the refresh token. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No | | refreshToken | The refresh token obtained from Google used to authorize access to BigQuery. Learn how to get one from [Obtaining OAuth 2.0 access tokens](https://developers.google.com/identity/protocols/OAuth2WebServer#obtainingaccesstokens) and [this community blog](https://jpd.ms/getting-your-bigquery-refresh-token-for-azure-datafactory-f884ff815a59). Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
+The minimum scope required to obtain an OAuth 2.0 refresh token is `https://www.googleapis.com/auth/bigquery.readonly`. If you plan to run a query that might return large results, a broader scope might be required. For more information, see this [article](https://cloud.google.com/bigquery/docs/writing-results#large-results).
+ **Example:** ```json
data-factory Data Flow Expressions Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expressions-usage.md
___
Given two or more inputs, returns the first not null item. This function is equivalent to coalesce. * ``iifNull(10, 20) -> 10`` * ``iifNull(null, 20, 40) -> 20``
-* ``iifNull('azure', 'data', 'factory') -> 'factory'``
+* ``iifNull('azure', 'data', 'factory') -> 'azure'``
* ``iifNull(null, 'data', 'factory') -> 'data'`` ___
defender-for-cloud Alerts Suppression Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md
There are a few ways you can create rules to suppress unwanted security alerts:
- To suppress alerts at the management group level, use Azure Policy - To suppress alerts at the subscription level, you can use the Azure portal or the REST API as explained below
-Suppression rules can only dismiss alerts that have already been triggered on the selected subscriptions.
+> [!NOTE]
+> Suppression rules don't work retroactively - they'll only suppress alerts triggered _after_ the rule is created. Also, if a specific alert type has never been generated on a specific subscription, future alerts of that type won't be suppressed. For a rule to suppress an alert on a specific subscription, that alert type has to have been triggered at least once before the rule is created.
To create a rule directly in the Azure portal:
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers++ -- Previously updated : 04/07/2022 Last updated : 04/28/2022 # Overview of Microsoft Defender for Containers
On this page, you'll learn how you can use Defender for Containers to improve, m
| Release state: | General availability (GA)<br> Certain features are in preview, for a full list see the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section. | | Feature availability | Refer to the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section for additional information on feature release state and availability.| | Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-| Required roles and permissions: | • To auto provision the required components, [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor), or [Azure Kubernetes Service Contributor Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
+| Required roles and permissions: | • To auto provision the required components, [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor), or [Azure Kubernetes Service Contributor Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role). See also the [permissions for each of the components](enable-data-collection.md?tabs=autoprovision-containers)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
| Clouds: | **Azure**:<br>:::image type="icon" source="./medi#defender-for-containers-feature-availability). |
When reviewing the outstanding recommendations for your container-related resour
### Kubernetes data plane hardening
-For a bundle of recommendations to protect the workloads of your Kubernetes containers, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma). By default, auto provisioning is enabled when you enable Defender for Containers.
+For a bundle of recommendations to protect the workloads of your Kubernetes containers, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma).
With the add-on on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure the add-on to **enforce** the best practices and mandate them for future workloads.
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
description: This article describes how to set up auto provisioning of the Log A
Previously updated : 01/17/2022 Last updated : 04/28/2022 # Configure auto provisioning for agents and extensions from Microsoft Defender for Cloud [!INCLUDE [Banner for top of topics](./includes/banner.md)]
-Microsoft Defender for Cloud collects data from your resources using the relevant agent or extensions for that resource and the type of data collection you've enabled. Use the procedures below to ensure your resources have the necessary agents and extensions used by Defender for Cloud.
+Microsoft Defender for Cloud collects data from your resources using the relevant agent or extensions for that resource and the type of data collection you've enabled. Use the procedures below to automatically provision the necessary agents and extensions used by Defender for Cloud to your resources.
:::image type="content" source="media/enable-data-collection/auto-provisioning-list-of-extensions.png" alt-text="Screenshot of Microsoft Defender for Cloud's extensions that can be auto provisioned.":::
This table shows the availability details for the auto provisioning **feature**
### [**Defender for Containers**](#tab/autoprovision-containers)
-This table shows the availability details for the various components that can be auto provisioned to provide the protections offered by [Microsoft Defender for Containers](defender-for-containers-introduction.md).
+This table shows the availability details for the components that are required for auto provisioning to provide the protections offered by [Microsoft Defender for Containers](defender-for-containers-introduction.md).
+
+By default, auto provisioning is enabled when you enable Defender for Containers from the Azure portal.
| Aspect | Azure Kubernetes Service clusters | Azure Arc-enabled Kubernetes clusters | ||-||
-| Release state: | • Defender profile is in preview<br> • Azure Policy add-on is generally available (GA) | • Defender extension is in preview<br> • Azure Policy extension for Azure Arc is in preview |
+| Release state: | • Defender profile is in preview<br> • Azure Policy add-on: Generally available (GA) | • Defender extension: Preview<br> • Azure Policy extension: Preview |
| Relevant Defender plan: | [Microsoft Defender for Containers](defender-for-containers-introduction.md) | [Microsoft Defender for Containers](defender-for-containers-introduction.md) |
-| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
-| Supported destinations: | Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers <br><br> The AKS Defender profile doesn't support AKS clusters that don't have RBAC enabled. | Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | The AKS Defender profile only supports [AKS clusters that have RBAC enabled](../aks/concepts-identity.md#kubernetes-rbac). | [Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters](defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks#microsoft-defender-for-containers-plan-availability) |
| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | | Clouds: | **Defender profile**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy add-on**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet|**Defender extension**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy extension for Azure Arc**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet|
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment.++ Previously updated : 03/27/2022 Last updated : 04/28/2022
The **tabs** below show the features that are available, by environment, for Mic
| Aspect | Details | |--|--|
-| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br>**Unsupported**<br> • Azure Kubernetes Service (AKS) Clusters without [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> |
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
<sup><a name="footnote1"></a>1</sup>Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.<br> <sup><a name="footnote2"></a>2</sup>To get [Microsoft Defender for Containers](../azure-arc/kubernetes/overview.md) protection, you should onboard to [Azure Arc-enabled Kubernetes](https://mseng.visualstudio.com/TechnicalContent/_workitems/recentlyupdated/) and enable Defender for Containers as an Arc extension.
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
If you're setting up network monitoring for enterprise IoT systems, you can skip
- Research your own network architecture and monitor bandwidth. Check requirements for creating certificates and other network details, and clarify the sensor appliances you'll need for your own network load.
- Calculate the approximate number of devices you'll be monitoring. Devices can be added in intervals of **1,000**, such as **1000**, **2000**, **3000**. The numbers of monitored devices are called *committed devices*.
+ Calculate the approximate number of devices you'll be monitoring. Devices can be added in intervals of **100**, such as **100**, **200**, **300**. The numbers of monitored devices are called *committed devices*.
Microsoft Defender for IoT supports both physical and virtual deployments. For physical deployments, you'll be able to purchase certified, preconfigured appliances, or download software to install yourself.
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
The following example shows another version of the Home model, with a property f
Semantic types make it possible to express a value with a unit. Properties and telemetry can be represented with any of the semantic types that are supported by DTDL. For more information on semantic types in DTDL and what values are supported, see [Semantic types in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#semantic-types).
-The following example shows a Sensor model with a semantic-type telemetry for Temperature, and a semantic-type property for Humidity.
+The following example shows a Sensor model with a semantic-type telemetry for Temperature, and a semantic-type property for Humidity.
:::code language="json" source="~/digital-twins-docs-samples-getting-started/models/advanced-home-example/ISensor.json" highlight="7-18":::
+> [!NOTE]
+> Currently, the "Property" or "Telemetry" type must be the first element of the array, followed by the semantic type. Otherwise, the field may not be visible in Azure Digital Twins Explorer.
+ ## Relationships This section goes into more detail about *relationships* in DTDL models.
event-grid Event Schema Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-key-vault.md
An Azure Key Vault account generates the following event types:
| Microsoft.KeyVault.CertificateNearExpiry | Certificate Near Expiry | Triggered when the current version of certificate is about to expire. (The event is triggered 30 days before the expiration date.) | | Microsoft.KeyVault.CertificateExpired | Certificate Expired | Triggered when the current version of a certificate is expired. | | Microsoft.KeyVault.KeyNewVersionCreated | Key New Version Created | Triggered when a new key or new key version is created. |
-| Microsoft.KeyVault.KeyNearExpiry | Key Near Expiry | Triggered when the current version of a key is about to expire. The event time can be configured using [key rotation policy(preview)](../key-vault/keys/how-to-configure-key-rotation.md) |
+| Microsoft.KeyVault.KeyNearExpiry | Key Near Expiry | Triggered when the current version of a key is about to expire. The event time can be configured using [key rotation policy](../key-vault/keys/how-to-configure-key-rotation.md) |
| Microsoft.KeyVault.KeyExpired | Key Expired | Triggered when the current version of a key is expired. | | Microsoft.KeyVault.SecretNewVersionCreated | Secret New Version Created | Triggered when a new secret or new secret version is created. | | Microsoft.KeyVault.SecretNearExpiry | Secret Near Expiry | Triggered when the current version of a secret is about to expire. (The event is triggered 30 days before the expiration date.) |
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
Title: Partner Events overview for system owners who desire to become partners description: Provides an overview of the concepts and general steps to become a partner. Previously updated : 03/31/2021 Last updated : 04/28/2022 # Partner Events overview for partners - Azure Event Grid (preview)
A verified partner is a partner organization whose identity has been validated b
Customers authorize you to create partner topics or partner destinations on their Azure subscription. The authorization is granted for a given resource group in a customer Azure subscription and it's time bound. You must create the channel before the expiration date set by the customer. Your documentation should suggest to the customer an adequate window of time for configuring your system to send or receive events and to create the channel before the authorization expires. If you attempt to create a channel without authorization or after it has expired, the channel creation will fail and no resource will be created on the customer's Azure subscription.
+> [!NOTE]
+> Event Grid will start **requiring authorizations to create partner topics or partner destinations** around June 30th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
+ >[!IMPORTANT]
->A verified partner is not an authorized partner. Even if a partner has been vetted by Microsoft, you still need to be authorized before you can create a partner topic or partner destination on the customer's Azure subscription.
+> **A verified partner is not an authorized partner**. Even if a partner has been vetted by Microsoft, you still need to be authorized before you can create a partner topic or partner destination on the customer's Azure subscription.
## Partner topic and partner destination activation
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
You must grant your consent to the partner to create partner topics in a resourc
> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. > [!NOTE]
-> At the time of the release of this feature on March 31st, 2022, requiring your (subscriber's) authorization for a partner to create resources on your Azure subscription is an **optional** feature. We encourage you to opt-in to use this feature and try it using in non-production Azure subscriptions before it's a mandatory step by around June, 2022. To opt-in to this feature, reach out to [mailto:askgrid@microsoft.com](mailto:askgrid@microsoft.com) using the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription(s) in the email.
+> Event Grid will start requiring authorizations to create partner topics or partner destinations around June 30th, 2022. Meanwhile, requiring your (subscriber's) authorization for a partner to create resources on your Azure subscription is an **optional** feature. We encourage you to opt in to this feature and try it in non-production Azure subscriptions before it becomes a mandatory step around June 30th, 2022. To opt in to this feature, reach out to [mailto:askgrid@microsoft.com](mailto:askgrid@microsoft.com) using the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription(s) in the email.
The following example shows how to create a partner configuration resource that contains the partner authorization. You must identify the partner by providing either its **partner registration ID** or the **partner name**. Both can be obtained from your partner, but only one of them is required. For your convenience, the following examples leave a sample expiration time in the UTC format.
frontdoor How To Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-logs.md
Azure Front Door currently provides individual API requests with each entry havi
| SecurityProtocol | The TLS/SSL protocol version used by the request or null if no encryption. Possible values include: SSLv3, TLSv1, TLSv1.1, TLSv1.2 | | SecurityCipher | When the value for Request Protocol is HTTPS, this field indicates the TLS/SSL cipher negotiated by the client and AFD for encryption. | | Endpoint | The domain name of AFD endpoint, for example, contoso.z01.azurefd.net |
-| HttpStatusCode | The HTTP status code returned from AFD. |
+| HttpStatusCode | The HTTP status code returned from AFD. If a request to the origin times out, the value for HttpStatusCode is set to "0".|
| Pop | The edge pop, which responded to the user request. | | Cache Status | Provides the status code of how the request gets handled by the CDN service when it comes to caching. Possible values are **HIT**: The HTTP request was served from AFD edge POP cache. <br> **MISS**: The HTTP request was served from origin. <br/> **PARTIAL_HIT**: Some of the bytes from a request got served from AFD edge POP cache while some of the bytes got served from origin for object chunking scenarios. <br> **CACHE_NOCONFIG**: Forwarding requests without caching settings, including bypass scenario. <br/> **PRIVATE_NOSTORE**: No cache configured in caching settings by customers. <br> **REMOTE_HIT**: The request was served by parent node cache. <br/> **N/A**: Request that was denied by Signed URL and Rules Set. | | MatchedRulesSetName | The names of the rules that were processed. |
governance Assign Policy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-terraform.md
for Azure Policy use the
1. Create `main.tf` with the following code:
- ```hcl
- provider "azurerm" {
- features {}
- }
-
- terraform {
- required_providers {
- azurerm = {
- source = "hashicorp/azurerm"
- version = ">= 2.96.0"
- }
+ ```hcl
+ provider "azurerm" {
+ features {}
}
- }
-
- resource "azurerm_resource_policy_assignment" "auditvms" {
- name = "audit-vm-manageddisks"
- resource_id = var.cust_scope
- policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d"
- description = "Shows all virtual machines not using managed disks"
- display_name = "Audit VMs without managed disks assignment"
- }
- ```
+
+ terraform {
+ required_providers {
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = ">= 2.96.0"
+ }
+ }
+ }
+
+ resource "azurerm_resource_policy_assignment" "auditvms" {
+ name = "audit-vm-manageddisks"
+ resource_id = var.cust_scope
+ policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d"
+ description = "Shows all virtual machines not using managed disks"
+ display_name = "Audit VMs without managed disks assignment"
+ }
+ ```
1. Create `variables.tf` with the following code:
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-windows.md
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | ||||| |Audit PNP Activity<br /><sub>(AZ-WIN-00182)</sub> |**Description**: This policy setting allows you to audit when plug and play detects an external device. The recommended state for this setting is: `Success`. **Note:** A Windows 10, Server 2016 or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9248-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to process. Refer to Microsoft Knowledge Base article 947226: [Description of security events in Windows Vista and in Windows Server 2008](https://support.microsoft.com/en-us/kb/947226) for the most recent information about this setting. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to process. For the most recent information about this setting, see [Threats and Countermeasures Guide: Security Settings in Windows Server 2008 and Windows Vista](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd349791(v=ws.10)?redirectedfrom=MSDN). The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
## System Audit Policies - Logon-Logoff
hdinsight Apache Ambari Troubleshoot Metricservice Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-metricservice-issues.md
This article describes troubleshooting steps and possible resolutions for issues
## Scenario: OutOfMemoryError or Unresponsive Apache Ambari Metrics Collector
+### Background
+
+The Ambari Metrics Collector is a daemon that runs on a specific host in the cluster and receives data from the registered publishers (the Monitors and Sinks).
++ ### Issue * You could receive a critical **"Metrics Collector Process"** alert in the Ambari UI, with a message similar to the following: `Connection failed: timed out to <headnode fqdn>:6188` * Ambari Metrics Collector may be getting restarted in the headnode frequently
-* Some Apache Ambari metrics may not show up. For example, **NAMENODE** shows **Started** instead of **Active/Standby** status
+* Some Apache Ambari metrics may not show up in the Ambari UI or Grafana. For example, **NAMENODE** shows **Started** instead of **Active/Standby** status. The **'No Data Available'** message might appear in the Ambari Dashboard.
### Cause
java.lang.OutOfMemoryError: Java heap space
To avoid these issues, consider using one of the following options:
-1. Increase the heap memory of Apache Ambari Metrics Collector from **Ambari** > **Ambari Metric Collector** > **Configuration** > **Metrics Collector Heap Size**
+1. Increase the heap memory of Apache Ambari Metrics Collector from **Ambari** > **Ambari Metrics** > **CONFIGS** > **Advanced ams-env** > **Metrics Collector Heap Size**
:::image type="content" source="./media/apache-ambari-troubleshoot-ams-issues/editing-ams-configuration-ambari.png" alt-text="Screenshot of editing Ambari Metric Service configuration properties." border="true":::
To avoid these issues, consider using one of the following options:
`tar czf /mnt/backupof-ambari-metrics-collector-hbase-$(date +%Y%m%d-%H%M%S).tar.gz /mnt/data/ambari-metrics-collector/hbase` 3. Restart AMS using Ambari.
+For Kafka clusters, if the above solutions don't help, consider the following options.
+
+- Ambari Metrics Service needs to deal with lots of Kafka metrics, so it's a good idea to enable only the metrics in the allowlist. Go to **Ambari** > **Ambari Metrics** > **CONFIGS** > **Advanced ams-env**, and set the property shown below to true. After this modification, restart the impacted services in the Ambari UI as required.
+
+ :::image type="content" source="./media/apache-ambari-troubleshoot-ams-issues/editing-allowed-metrics-ambari.png" alt-text="Screenshot of editing Ambari Metric Service allowlisted metrics properties." border="true":::
+
+- Handling lots of metrics for standalone HBase with limited memory would impact HBase response time, and metrics would become unavailable. If the Kafka cluster has many topics and still generates a lot of allowed metrics, increase the heap memory for HMaster and RegionServer in Ambari Metrics Service. Go to **Ambari** > **Ambari Metrics** > **CONFIGS** > **Advanced hbase-env** > **HBase Master Maximum Memory** and **HBase RegionServer Maximum Memory** and increase the values. Restart the required services in the Ambari UI.
+
+ :::image type="content" source="./media/apache-ambari-troubleshoot-ams-issues/editing-hbase-memory-ambari.png" alt-text="Screenshot of editing Ambari Metric Service hbase memory properties." border="true":::
## Next steps
hdinsight Apache Hadoop Emulator Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-emulator-get-started.md
keywords: hadoop emulator,hadoop sandbox
Previously updated : 05/29/2019 Last updated : 04/28/2022 # Get started with an Apache Hadoop sandbox, an emulator on a virtual machine
To download an older HDP version sandbox, see the links under **Older Versions**
* [Learning the ropes of the Hortonworks Sandbox](https://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/)
-* [Hadoop tutorial - Getting started with HDP](https://hortonworks.com/hadoop-tutorial/hello-world-an-introduction-to-hadoop-hcatalog-hive-and-pig/)
+* [Hadoop tutorial - Getting started with HDP](https://hortonworks.com/hadoop-tutorial/hello-world-an-introduction-to-hadoop-hcatalog-hive-and-pig/)
hdinsight Apache Hadoop On Premises Migration Motivation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-motivation.md
Previously updated : 11/15/2019 Last updated : 04/28/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - motivation and benefits
hdinsight Hdinsight High Availability Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-high-availability-components.md
Title: High availability components in Azure HDInsight
description: Overview of the various high availability components used by HDInsight clusters. Previously updated : 10/07/2020 Last updated : 04/28/2022 # High availability services supported by Azure HDInsight
HDInsight HBase clusters support HBase Master high availability. Unlike other HA
## Next steps - [Availability and reliability of Apache Hadoop clusters in HDInsight](./hdinsight-business-continuity.md)-- [Azure HDInsight virtual network architecture](hdinsight-virtual-network-architecture.md)
+- [Azure HDInsight virtual network architecture](hdinsight-virtual-network-architecture.md)
hdinsight Hdinsight Key Scenarios To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-key-scenarios-to-monitor.md
description: How to monitor health and performance of Apache Hadoop clusters in
Previously updated : 03/09/2020 Last updated : 04/28/2022 # Monitor cluster performance in Azure HDInsight
Visit the following links for more information about troubleshooting and monitor
* [Analyze HDInsight logs](./hdinsight-troubleshoot-guide.md) * [Debug apps with Apache Hadoop YARN logs](hdinsight-hadoop-access-yarn-app-logs-linux.md)
-* [Enable heap dumps for Apache Hadoop services on Linux-based HDInsight](hdinsight-hadoop-collect-debug-heap-dump-linux.md)
+* [Enable heap dumps for Apache Hadoop services on Linux-based HDInsight](hdinsight-hadoop-collect-debug-heap-dump-linux.md)
hdinsight Hdinsight Log Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-log-management.md
description: Determine the types, sizes, and retention policies for HDInsight ac
Previously updated : 02/05/2020 Last updated : 04/28/2022 # Manage logs for an HDInsight cluster
hdinsight Hdinsight Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-managed-identities.md
description: Provides an overview of the implementation of managed identities in
Previously updated : 04/15/2020 Last updated : 04/28/2022 # Managed identities in Azure HDInsight
hdinsight Apache Kafka High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-high-availability.md
description: Learn how to ensure high availability with Apache Kafka on Azure HD
Previously updated : 12/09/2019 Last updated : 04/28/2022 # High availability of your data with Apache Kafka on HDInsight
For more information on connecting to HDInsight using SSH, see the
## Next steps * [Scalability of Apache Kafka on HDInsight](apache-kafka-scalability.md)
-* [Mirroring with Apache Kafka on HDInsight](apache-kafka-mirroring.md)
+* [Mirroring with Apache Kafka on HDInsight](apache-kafka-mirroring.md)
hdinsight Apache Spark Intellij Tool Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-plugin.md
description: Use the Azure Toolkit for IntelliJ to develop Spark applications wr
Previously updated : 04/14/2020 Last updated : 04/28/2022 # Use Azure Toolkit for IntelliJ to create Apache Spark applications for HDInsight cluster
hdinsight Apache Spark Streaming Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-streaming-overview.md
description: How to use Apache Spark Streaming applications on HDInsight Spark c
Previously updated : 04/23/2020 Last updated : 04/28/2022 # Overview of Apache Spark Streaming
hdinsight Apache Storm Develop Java Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/storm/apache-storm-develop-java-topology.md
description: Learn how to create Apache Storm topologies in Java by creating an
Previously updated : 04/27/2020 Last updated : 04/28/2022 # Create an Apache Storm topology in Java
You've learned how to create an Apache Storm topology by using Java. Now learn h
* [Develop topologies using Python](apache-storm-develop-python-topology.md)
-You can find more example Apache Storm topologies by visiting [Example topologies for Apache Storm on HDInsight](apache-storm-example-topology.md).
+You can find more example Apache Storm topologies by visiting [Example topologies for Apache Storm on HDInsight](apache-storm-example-topology.md).
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
Use the optional **importMode** property in the import serialization data for ea
> [!NOTE] > If the serialization data does not explicitly define an **importMode** flag for a device, it defaults to **createOrUpdate** during the import operation.
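For instance, here's a hedged C# sketch (assuming the Microsoft.Azure.Devices service SDK and Newtonsoft.Json; the device ID is illustrative) of serializing one import entry with an explicit **importMode** instead of relying on the default:

```csharp
using System;
using Microsoft.Azure.Devices;
using Newtonsoft.Json;

// Build one device entry for the import blob, setting importMode explicitly
// so the operation doesn't fall back to the createOrUpdate default.
var entry = new ExportImportDevice
{
    Id = "myImportedDevice",        // illustrative device ID
    ImportMode = ImportMode.Create, // fail if the device already exists
    Status = DeviceStatus.Enabled
};

// The import blob contains one serialized JSON object per line.
string line = JsonConvert.SerializeObject(entry);
Console.WriteLine(line);
```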
+## Import troubleshooting
+
+Using an import job to create devices may fail with a quota issue when the number of registered devices is close to the device count limit of the IoT hub. This can happen even if the total device count is still lower than the quota limit. The **IotHubQuotaExceeded (403002)** error is returned with the following error message: "Total number of devices on IotHub exceeded the allocated quota."
+
+If you get this error, you can use the following query to return the total number of devices registered on your IoT hub:
+
+```sql
+SELECT COUNT() as totalNumberOfDevices FROM devices
+```
+
+For information about the total number of devices that can be registered to an IoT hub, see [IoT Hub limits](iot-hub-devguide-quotas-throttling.md#other-limits).
+
+If there's still quota available, you can examine the job output blob for devices that failed with the **IotHubQuotaExceeded (403002)** error. You can then try adding these devices individually to the IoT hub. For example, you can use the **AddDeviceAsync** or **AddDeviceWithTwinAsync** methods. Don't try to add the devices using another job as you'll likely encounter the same error.
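As a sketch of that workflow, assuming the Microsoft.Azure.Devices service SDK, a service connection string in an environment variable, and an illustrative device ID:

```csharp
using System;
using Microsoft.Azure.Devices;

// Check how many devices are registered, then retry one failed device individually.
string connectionString = Environment.GetEnvironmentVariable("IOTHUB_CONNECTION_STRING");
using var registryManager = RegistryManager.CreateFromConnectionString(connectionString);

// Run the count query shown above to confirm there's quota headroom.
IQuery countQuery = registryManager.CreateQuery("SELECT COUNT() as totalNumberOfDevices FROM devices");
while (countQuery.HasMoreResults)
{
    foreach (string json in await countQuery.GetNextAsJsonAsync())
    {
        Console.WriteLine(json); // e.g. {"totalNumberOfDevices":42}
    }
}

// Add a device that failed in the import job, one at a time rather than in another job.
Device added = await registryManager.AddDeviceAsync(new Device("myFailedDeviceId"));
Console.WriteLine($"Registered {added.Id}");
```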
## Import devices example – bulk device provisioning The following C# code sample illustrates how to generate multiple device identities that:
iot-hub Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-error-codes.md
In general, the error message presented should explain how to fix the error. If
You may see requests to IoT Hub fail with the error **403002 IoTHubQuotaExceeded**. And in Azure portal, the IoT hub device list doesn't load.
-This error occurs when the daily message quota for the IoT hub is exceeded.
-
-To resolve this error:
+This error typically occurs when the daily message quota for the IoT hub is exceeded. To resolve this error:
* [Upgrade or increase the number of units on the IoT hub](iot-hub-upgrade.md) or wait for the next UTC day for the daily quota to refresh. * To understand how operations are counted toward the quota, such as twin queries and direct methods, see [Understand IoT Hub pricing](iot-hub-devguide-pricing.md#charges-per-operation). * To set up monitoring for daily quota usage, set up an alert with the metric *Total number of messages used*. For step-by-step instructions, see [Set up metrics and alerts with IoT Hub](tutorial-use-metrics-and-diags.md#set-up-metrics).
+This error may also be returned by a bulk import job when the number of devices registered to your IoT hub approaches or exceeds the quota limit for an IoT Hub. To learn more, see [Troubleshoot import jobs](iot-hub-bulk-identity-mgmt.md#import-troubleshooting).
+ ## 403004 DeviceMaximumQueueDepthExceeded When trying to send a cloud-to-device message, you may see that the request fails with the error **403004** or **DeviceMaximumQueueDepthExceeded**.
key-vault Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/whats-new.md
Here's what's new with Azure Key Vault. New features and improvements are also announced on the [Azure updates Key Vault channel](https://azure.microsoft.com/updates/?category=security&query=Key%20vault).
+## April 2022
+
+Automated encryption key rotation in Key Vault is now generally available.
+
+For more information, see [Configure key auto-rotation in Key Vault](../keys/how-to-configure-key-rotation.md).
+ ## January 2022 Azure Key Vault service throughput limits have been increased to serve double its previous quota for each vault to help ensure high performance for applications. That is, for secret GET and RSA 2,048-bit software keys, you'll receive 4,000 GET transactions per 10 seconds vs 2,000 per 10 seconds previously. The service quotas are specific to operation type and the entire list can be accessed in [Azure Key Vault Service Limits](./service-limits.md).
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/how-to-configure-key-rotation.md
Last updated 11/24/2021
-# Configure key auto-rotation in Azure Key Vault (preview)
+# Configure key auto-rotation in Azure Key Vault
## Overview
key. Our recommendation is to rotate encryption keys at least every two years to
This feature enables end-to-end zero-touch rotation for encryption at rest for Azure services with customer-managed key (CMK) stored in Azure Key Vault. Please refer to specific Azure service documentation to see if the service covers end-to-end rotation.
-## Pricing (Preview)
+## Pricing
-Key rotation feature is free during preview. Additional cost will occur when a key is automatically rotated once the feature GA. For more information, see [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/)
+There's an additional cost per scheduled key rotation. For more information, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/).
## Permissions required
-Key Vault key rotation feature requires key management permissions. You can assign a "Key Vault Administrator" role to manage rotation policy and on-demand rotation.
+The Key Vault key rotation feature requires key management permissions. You can assign the "Key Vault Crypto Officer" role to manage rotation policy and on-demand rotation.
-For more information on how to use RBAC permission model and assign Azure roles, see:
+For more information on how to use the Key Vault RBAC permission model and assign Azure roles, see:
[Use an Azure RBAC to control access to keys, certificates and secrets](../general/rbac-guide.md) > [!NOTE]
For more information on how to use RBAC permission model and assign Azure roles,
## Key rotation policy
-The key rotation policy allows users to configure rotation interval, expiration interval for rotated keys, and near expiry notification period for monitoring expiration using event grid notifications.
+The key rotation policy allows users to configure rotation and Event Grid notifications for the key near expiry event.
Key rotation policy settings: -- Expiry time: key expiration interval. It is used to set expiration date on newly rotated key. It does not affect a current key.
+- Expiry time: key expiration interval. It's used to set the expiration date on a newly rotated key. It doesn't affect the current key.
- Enabled/disabled: flag to enable or disable rotation for the key - Rotation types: - Automatically renew at a given time after creation (default) - Automatically renew at a given time before expiry. It requires 'Expiry Time' set on rotation policy and 'Expiration Date' set on the key.-- Rotation time: key rotation interval, the minimum value is 7 days from creation and 7 days from expiration time-- Notification time: key near expiry event interval for event grid notification. It requires 'Expiry Time' set on rotation policy and 'Expiration Date' set on the key.
+- Rotation time: key rotation interval; the minimum value is seven days from creation and seven days from expiration time.
+- Notification time: key near expiry event interval for Event Grid notification. It requires 'Expiry Time' set on rotation policy and 'Expiration Date' set on the key.
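+
+Taken together, these settings form a rotation policy document. A minimal sketch with illustrative intervals: rotate 90 days after creation, notify 30 days before expiry, and set a two-year expiry on each rotated key.
+
+```json
+{
+  "lifetimeActions": [
+    {
+      "trigger": { "timeAfterCreate": "P90D" },
+      "action": { "type": "Rotate" }
+    },
+    {
+      "trigger": { "timeBeforeExpiry": "P30D" },
+      "action": { "type": "Notify" }
+    }
+  ],
+  "attributes": { "expiryTime": "P2Y" }
+}
+```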
:::image type="content" source="../media/keys/key-rotation/key-rotation-1.png" alt-text="Rotation policy configuration":::
az keyvault key rotate --vault-name <vault-name> --name <key-name>
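
A policy like the one sketched above can also be applied from a file. A sketch, assuming a recent Azure CLI that includes the `az keyvault key rotation-policy` commands:

```azurecli
# Apply the rotation policy in rotation-policy.json to an existing key
az keyvault key rotation-policy update --vault-name <vault-name> --name <key-name> --value ./rotation-policy.json
```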
## Configure key near expiry notification
-Configuration of expiry notification for event grid key near expiry event. You can configure notification with days, months and years before expiry to trigger near expiry event.
+Configure the expiry notification for the Event Grid key near expiry event. You can configure the notification in days, months, and years before expiry to trigger the near expiry event.
:::image type="content" source="../media/keys/key-rotation/key-rotation-5.png" alt-text="Configure Notification":::
-For more information about event grid notifications in Key Vault, see
+For more information about Event Grid notifications in Key Vault, see
[Azure Key Vault as Event Grid source](../../event-grid/event-schema-key-vault.md?tabs=event-grid-event-schema) ## Configure key rotation with ARM template
Key rotation policy can also be configured using ARM templates.
"defaultValue": "P30D", "type": "String", "metadata": {
- "description": "Near expiry event grid notification. i.e. P30D"
+ "description": "Near expiry Event Grid notification. i.e. P30D"
} }
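
The parameter above typically feeds a key's rotation policy elsewhere in the template. As a rough sketch of where it lands, the resource type is real, but the apiVersion and parameter wiring here are illustrative assumptions, not taken from the commit:

```json
{
  "type": "Microsoft.KeyVault/vaults/keys",
  "apiVersion": "2021-06-01-preview",
  "name": "[concat(parameters('vaultName'), '/', parameters('keyName'))]",
  "properties": {
    "rotationPolicy": {
      "lifetimeActions": [
        { "trigger": { "timeAfterCreate": "P90D" }, "action": { "type": "Rotate" } },
        { "trigger": { "timeBeforeExpiry": "[parameters('notifyTime')]" }, "action": { "type": "Notify" } }
      ],
      "attributes": { "expiryTime": "P2Y" }
    }
  }
}
```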
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## April 26, 2022
+[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
+
+Version: 22.04.21
+
+Main changes:
+
+- Plotly RStudio extension patch.
+- Update Rscript env path to support the latest RStudio version 4.1.3.
+ ## April 14, 2022 New DSVM offering for [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview) is currently live in the marketplace.
Data Science Virtual Machine images for [Ubuntu 18.04](https://azuremarketplace.
### Default Browser for Windows updated
-Earlier, the default browser was set to Internet Explorer. Users are now prompted to choose a default browser when they first sign in.
+Earlier, the default browser was set to Internet Explorer. Users are now prompted to choose a default browser when they first sign in.
machine-learning How To Kubernetes Instance Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-kubernetes-instance-type.md
items:
limits: cpu: "100m" nvidia.com/gpu: 0
- memory: "10Mi"
+ memory: "1Gi"
requests: cpu: "100m"
- memory: "1Gi"
+ memory: "10Mi"
- metadata: name: defaultinstancetype
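
The corrected values satisfy the Kubernetes requirement that a container's resource request not exceed its limit (memory request 10Mi, limit 1Gi). For orientation, a sketch of a complete instance type using those values; the API group and version shown are an assumption based on the Azure Machine Learning Kubernetes extension:

```yaml
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceType
metadata:
  name: defaultinstancetype
spec:
  resources:
    limits:
      cpu: "100m"
      nvidia.com/gpu: 0
      memory: "1Gi"      # limit must be >= request
    requests:
      cpu: "100m"
      memory: "10Mi"
```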
marketplace Summary Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/summary-dashboard.md
To view the data source and the data refresh details, such as the frequency of t
### Got feedback?
-To provide instant feedback about the report/dashboard, select the ellipsis (three dots), and then select the Got feedback? link.
+To provide instant feedback about the report/dashboard, select the ellipsis (three dots), and then select the **Got feedback?** link.
:::image type="content" source="./media/summary-dashboard/feedback.png" alt-text="Screenshot of the Got feedback link.":::
openshift Howto Create Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-service-principal.md
+
+ Title: Creating and using a service principal with an Azure Red Hat OpenShift cluster
+description: In this how-to article, learn how to create a service principal with an Azure Red Hat OpenShift cluster using Azure CLI or the Azure portal.
++++ Last updated : 03/21/2022
+topic: how-to
+keywords: azure, openshift, aro, red hat, azure CLI, azure portal
+#Customer intent: I need to create and use an Azure service principal to restrict permissions to my Azure Red Hat OpenShift cluster.
+zone_pivot_groups: azure-red-hat-openshift-service-principal
++
+# Create and use a service principal with an Azure Red Hat OpenShift cluster
+
+To interact with Azure APIs, an Azure Red Hat OpenShift cluster requires an Azure Active Directory (AD) service principal. This service principal is used to dynamically create, manage, or access other Azure resources, such as an Azure load balancer or an Azure Container Registry (ACR). For more information, see [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md).
+
+This article explains how to create and use a service principal for your Azure Red Hat OpenShift clusters using the Azure command-line interface (Azure CLI) or the Azure portal.
+
+## Before you begin
+
+The user creating an Azure AD service principal must have permissions to register an application with your Azure AD tenant and to assign the application to a role in your subscription. You need **User Access Administrator** and **Contributor** permissions at the resource-group level to create service principals.
+
+Use the following Azure CLI commands to add these permissions.
+
+```azurecli-interactive
+az role assignment create \
+ --role 'User Access Administrator' \
+ --assignee-object-id $SP_OBJECT_ID \
+ --resource-group $RESOURCEGROUP \
+ --assignee-principal-type 'ServicePrincipal'
+
+az role assignment create \
+ --role 'Contributor' \
+ --assignee-object-id $SP_OBJECT_ID \
+ --resource-group $RESOURCEGROUP \
+ --assignee-principal-type 'ServicePrincipal'
+```
+
+If you don't have the required permissions, you can ask your Azure AD or subscription administrator to assign them. Alternatively, your Azure AD or subscription administrator can create a service principal in advance for you to use with the Azure Red Hat OpenShift cluster.
+
+If you're using a service principal from a different Azure AD tenant, there are more considerations regarding the permissions available when you deploy the cluster. For example, you may not have the appropriate permissions to read and write directory information.
+
+For more information on user roles and permissions, see [What are the default user permissions in Azure Active Directory?](../active-directory/fundamentals/users-default-permissions.md).
+
+> [!NOTE]
+> Service principals expire in one year unless configured for longer periods. For information on extending your service principal expiration period, see [Rotate service principal credentials for your Azure Red Hat OpenShift (ARO) Cluster](howto-service-principal-credential-rotation.md).
++
+## Create a service principal with Azure CLI
+
+The following sections explain how to use the Azure CLI to create a service principal for your Azure Red Hat OpenShift cluster.
+
+## Prerequisite
+
+If you're using the Azure CLI, you'll need Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+
+## Create a service principal - Azure CLI
+
+ To create a service principal with the Azure CLI, run the `az ad sp create-for-rbac` command.
+
+> [!NOTE]
+> When using a service principal to create a new cluster, you may need to assign a Contributor role here.
+
+```azurecli
+az ad sp create-for-rbac --name myAROClusterServicePrincipal
+```
+
+The output is similar to the following example.
+
+```json
+{
+  "appId": "",
+  "displayName": "myAROClusterServicePrincipal",
+  "name": "http://myAROClusterServicePrincipal",
+  "password": "",
+  "tenant": ""
+}
+```
+
+Retain your `appId` and `password`. These values are used when you create an Azure Red Hat OpenShift cluster below.
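+
+For example, a sketch that captures both values in shell variables for the later steps; it assumes a bash shell and the `jq` utility:
+
+```azurecli
+# Create the service principal once and keep the full JSON reply
+SP=$(az ad sp create-for-rbac --name myAROClusterServicePrincipal -o json)
+
+# Extract the client ID (appId) and client secret (password)
+SP_CLIENT_ID=$(echo "$SP" | jq -r '.appId')
+SP_CLIENT_SECRET=$(echo "$SP" | jq -r '.password')
+```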
+
+## Grant permissions to the service principal - Azure CLI
+
+Grant permissions to an existing service principal with Azure CLI, as shown in the following command.
+
+```azurecli-interactive
+az role assignment create \
+ --role 'Contributor' \
+ --assignee-object-id $SP_OBJECT_ID \
+ --resource-group $RESOURCEGROUP \
+ --assignee-principal-type 'ServicePrincipal'
+```
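+
+Note that `$SP_OBJECT_ID` is the service principal's object ID, not its appId. A sketch of how to resolve it, assuming the `SP_CLIENT_ID` variable from the earlier step; on current CLI versions backed by Microsoft Graph the field is `id`, while older versions exposed it as `objectId`:
+
+```azurecli
+# Look up the service principal's object ID from its client ID (appId)
+SP_OBJECT_ID=$(az ad sp show --id "$SP_CLIENT_ID" --query id -o tsv)
+```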
+
+## Use the service principal to create a cluster - Azure CLI
+
+To use an existing service principal when you create an Azure Red Hat OpenShift cluster using the `az aro create` command, use the `--client-id` and `--client-secret` parameters to specify the appId and password from the output of the `az ad sp create-for-rbac` command:
+
+```azurecli
+az aro create \
+  --resource-group myResourceGroup \
+  --name myAROCluster \
+  --client-id <appID> \
+  --client-secret <password>
+```
+
+> [!IMPORTANT]
+> If you're using an existing service principal with a customized secret, ensure the secret doesn't exceed 190 bytes.
+++
+## Create a service principal with the Azure portal
+
+The following sections explain how to use the Azure portal to create a service principal for your Azure Red Hat OpenShift cluster.
+
+## Create a service principal - Azure portal
+
+To create a service principal using the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md).
+
+## Grant permissions to the service principal - Azure portal
+
+To grant permissions to an existing service principal with the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md#configure-access-policies-on-resources).
+
+## Use the service principal - Azure portal
+
+When deploying an Azure Red Hat OpenShift cluster using the Azure portal, configure the service principal on the **Authentication** page of the **Azure Red Hat OpenShift** dialog.
++
+Specify the following values, and then select **Review + Create**.
+
+In the **Service principal information** section:
+
+- **Service principal client ID** is your appId.
+- **Service principal client secret** is the service principal's decrypted secret value.
+
+In the **Cluster pull secret** section:
+
+- **Pull secret** is the decrypted value of your cluster's pull secret.
search Cognitive Search Concept Image Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-image-scenarios.md
Previously updated : 12/31/2021 Last updated : 04/26/2022
Through [AI enrichment](cognitive-search-concept-intro.md), Azure Cognitive Sear
Through OCR, you can extract text from photos or pictures containing alphanumeric text, such as the word "STOP" in a stop sign. Through image analysis, you can generate a text representation of an image, such as "dandelion" for a photo of a dandelion, or the color "yellow". You can also extract metadata about the image, such as its size.
-This article explains how to work with images in an AI enrichment pipeline, and describes several common scenarios such as working with embedded images, custom skills, and overlaying visualizations on original images.
+This article covers the fundamentals of working with images, and also describes several common scenarios, such as working with embedded images, custom skills, and overlaying visualizations on original images.
To work with image content in a skillset, you'll need:
Image processing is indexer-driven, which means that the raw inputs must be a su
+ Image analysis supports JPEG, PNG, GIF, and BMP + OCR supports JPEG, PNG, GIF, BMP, and TIF
-Images can be binary files or embedded in documents (PDF, RTF, and Microsoft application files). A maximum of 1000 images will be extracted from a given document. If there are more than 1000 images in a document, the first 1000 will be extracted and a warning will be generated.
+Images are either standalone binary files or embedded in documents (PDF, RTF, and Microsoft application files). A maximum of 1000 images will be extracted from a given document. If there are more than 1000 images in a document, the first 1000 will be extracted and a warning will be generated.
-Azure Blob Storage is the most frequently used storage for image processing in Cognitive Search.
+Azure Blob Storage is the most frequently used storage for image processing in Cognitive Search. There are three main tasks related to retrieving images from the source:
-- [Create a data source](/rest/api/searchservice/create-data-source) of type "azureblob" that connects to the blob container storing your files.++ Access rights on the container. If you're using a full access connection string that includes a key, the key gives you access to the content. Alternatively, you can [authenticate using Azure Active Directory (Azure AD)](search-howto-managed-identities-data-sources.md) or [connect as a trusted service](search-indexer-howto-access-trusted-service-exception.md). -- Optionally, [set file type criteria](search-blob-storage-integration.md#PartsOfBlobToIndex) if the workload targets a specific file type. Blob indexer configuration includes file inclusion and exclusion settings so that you can filter out files you don't want.++ [Create a data source](search-howto-indexing-azure-blob-storage.md) of type "azureblob" that connects to the blob container storing your files.+++ Optionally, [set file type criteria](search-blob-storage-integration.md#PartsOfBlobToIndex) if the workload targets a specific file type. Blob indexer configuration includes file inclusion and exclusion settings. You can filter out files you don't want. <a name="get-normalized-images"></a> ## Configure indexers for image processing
-Image processing requires image normalization to make images more uniform for downstream processing. This step occurs automatically and is internal to indexer processing. As a developer, your only action is to enable image normalization by setting the `"imageAction"` parameter in indexer configuration.
+Image extraction is the first step of indexer processing. Extracted images are queued for image processing. Extracted text is queued for text processing, if applicable.
+
+Image processing requires image normalization to make images more uniform for downstream processing. This step occurs automatically and is internal to indexer processing. As a developer, you enable image normalization by setting the `"imageAction"` parameter in indexer configuration.
Image normalization includes the following operations: + Large images are resized to a maximum height and width to make them uniform and consumable during skillset processing.
-+ For images that have metadata on orientation, image rotation is adjusted for vertical loading. Metadata adjustments are captured in a complex type created for each image.
++ For images that have metadata on orientation, image rotation is adjusted for vertical loading.
-You cannot turn off image normalization. Skills that iterate over images, such as OCR and image analysis, expect normalized images.
+Metadata adjustments are captured in a complex type created for each image. You cannot turn off image normalization. Skills that iterate over images, such as OCR and image analysis, expect normalized images.
1. [Create or Update an indexer](/rest/api/searchservice/create-indexer) to set the configuration properties:
When "imageAction" is set to a value other than "none", the new *normalized_imag
## Define skillsets for image processing
-This section supplements the [skill reference](cognitive-search-predefined-skills.md) articles by providing a holistic introduction to skill inputs, outputs, and patterns, as they relate to image processing.
+This section supplements the [skill reference](cognitive-search-predefined-skills.md) articles by providing context for working with skill inputs, outputs, and patterns, as they relate to image processing.
1. [Create or update a skillset](/rest/api/searchservice/create-skillset) to add skills.
-1. Add templates for OCR and Image Analysis from the portal, or copy the definitions from the [skill reference](cognitive-search-predefined-skills.md) documentation.
+1. Add templates for OCR and Image Analysis from the portal, or copy the definitions from the [skill reference](cognitive-search-predefined-skills.md) documentation. Insert them into the skills array of your skillset definition.
-1. If necessary, [include multi-service key](cognitive-search-attach-cognitive-services.md) in the Cognitive Services property of the skillset. Cognitive Search makes calls to a billable Azure Cognitive Services resource for OCR and image analysis. Your skillset will need to access a Cognitive Services resource in the same region as your Cognitive Search service.
+1. If necessary, [include a multi-service key](cognitive-search-attach-cognitive-services.md) in the Cognitive Services property of the skillset. Cognitive Search makes calls to a billable Azure Cognitive Services resource for OCR and image analysis for transactions that exceed the free limit (20 per indexer per day). Cognitive Services must be in the same region as your search service.
+
+Once the basic framework of your skillset is created and Cognitive Services is configured, you can focus on each individual image skill, defining inputs and source context, and mapping outputs to fields in either an index or knowledge store.
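+
+For reference, a sketch of what the skillset's Cognitive Services property looks like when attached by key (the description and key are placeholders):
+
+```json
+"cognitiveServices": {
+  "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
+  "description": "my multi-service cognitive services resource",
+  "key": "<your-multi-service-key>"
+}
+```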
> [!NOTE] > See [REST Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md) for an example skillset that combines image processing with downstream natural language processing. It shows how to feed skill imaging output into entity recognition and key phrase extraction.
Whether you're using OCR and image analysis in the same, inputs have virtually t
## Map outputs to search fields
-Output is always text, represented as nodes in an internal enriched document tree, which must be mapped to fields in a search index or projections in a knowledge store to make the content available in your app.
+Cognitive Search is a full text search and knowledge mining solution, so Image Analysis and OCR skill output is always text. Output text is represented as nodes in an internal enriched document tree, and each node must be mapped to fields in a search index or projections in a knowledge store to make the content available in your app.
1. In the skillset, review the "outputs" section of each skill to determine which nodes exist in the enriched document:
Output is always text, represented as nodes in an internal enriched document tre
1. [Create or update a search index](/rest/api/searchservice/create-index) to add fields to accept the skill outputs.
- In the fields collection below, "content" is blob content. "Metadata_storage_name" returns the name of the file (make sure it is "retrievable"). "Merged_content" is output from Text Merge (useful when images are embedded). "Text" and "layoutText" are skill outputs and must be a string collection in order to the capture all of the OCR-generated output for the entire document.
+ In the fields collection below, "content" is blob content. "Metadata_storage_name" contains the name of the file (make sure it is "retrievable"). "Metadata_storage_path" is the unique path of the blob and is the default document key. "Merged_content" is output from Text Merge (useful when images are embedded).
+
+ "Text" and "layoutText" are OCR skill outputs and must be a string collection in order to the capture all of the OCR-generated output for the entire document.
```json "fields": [
Output is always text, represented as nodes in an internal enriched document tre
1. [Update the indexer](/rest/api/searchservice/update-indexer) to map skillset output (nodes in an enrichment tree) to index fields.
- Enriched documents are internal. To "externalize" the nodes, an output field mapping specifies the data structure that is accessed by your app. Below is an example of a "text" node (OCR output) mapped to a "text" field in a search index.
+ Enriched documents are internal. To "externalize" the nodes, an output field mapping specifies the data structure that receives node content. This is the data that will be accessed by your app. Below is an example of a "text" node (OCR output) mapped to a "text" field in a search index.
```json "outputFieldMappings": [
POST /indexes/[index name]/docs/search?api-version=[api-version]
} ```
-OCR fields ("text" and "layoutText") will be empty if source documents are pure text or pure imagery. Similarly, image analysis fields ("imageCaption" and "imageTags") will be empty for source documents that are strictly text. If you later add downstream skills that process imaging output, your index may begin to emit warnings if imaging inputs are empty. Such warnings are to be expected when nodes are unpopulated in the enriched document. Recall that blob indexing lets you include or exclude file types if you want to work with content types in isolation.
+OCR recognizes text in image files. This means that OCR fields ("text" and "layoutText") will be empty if source documents are pure text or pure imagery. Similarly, image analysis fields ("imageCaption" and "imageTags") will be empty if source document inputs are strictly text. Indexer execution will emit warnings if imaging inputs are empty. Such warnings are to be expected when nodes are unpopulated in the enriched document. Recall that blob indexing lets you include or exclude file types if you want to work with content types in isolation. You can use these settings to reduce noise during indexer runs.
-An alternate query for checking results might include the "content" and "merged_content" fields. Notice that those fields will be the same on files where there was no image processing.
+An alternate query for checking results might include the "content" and "merged_content" fields. Notice that those fields will include content for any blob file, even those where there was no image processing performed.
### About skill outputs
Skill outputs include "text" (OCR), "layoutText" (OCR), "merged_content", "capti
+ "imageTags" stores tags about an image as a collection of keywords, one collection for all images in the source document.
-The following screenshot illustrates descriptions and tags for the following embedded images in a PDF. Image analysis detected three images. Other text in the example (including titles, headings, and body text) is text in the source document as well, and therefore were not included in OCR or image processing.
+The following screenshot is an illustration of a PDF that includes text and embedded images. Document cracking detected three embedded images: a flock of seagulls, a map, and an eagle. Other text in the example (including titles, headings, and body text) was extracted as text and excluded from image processing.
:::image type="content" source="media/cognitive-search-concept-image-scenarios/state-of-birds-screenshot.png" alt-text="Screenshot of three images in a PDF" border="true":::
-Image analysis output is illustrated in the JSON below (search result). In this exercise, the skill specifies tags and description, but [more features](cognitive-search-skill-image-analysis.md#skill-parameters) are available.
+Image analysis output is illustrated in the JSON below (search result). The skill definition allows you to specify which [visual features](cognitive-search-skill-image-analysis.md#skill-parameters) are of interest. For this example, tags and descriptions were produced, but there are more outputs to choose from.
-+ "imageCaption" is an array of descriptions, one per image, with tags and longer text that describes the image. For each image, the longer "text" consists of "a flock of seagulls are swimming in the water", "map", and "a close up of a bird".
++ "imageCaption" output is an array of descriptions, one per image, denoted by "tags" consisting of single words and longer phrases that describe the image. Notice the tags consisting of "a flock of seagulls are swimming in the water", or "a close up of a bird".
-+ "imageTags" is an array of tags, listed in the order of creation. Notice that tags will repeat. There is no aggregation or grouping.
++ "imageTags" output is an array of single tags, listed in the order of creation. Notice that tags will repeat. There is no aggregation or grouping. ```json "imageCaption": [
public static Point GetOriginalCoordinates(Point normalized,
## Scenario: Custom image skills
-Images can also be passed into and returned from custom skills. The skillset base64-encodes the image being passed into the custom skill. To use the image within the custom skill, set `"/document/normalized_images/*/data"` as the input to the custom skill. Within your custom skill code, base64-decode the string before converting it to an image. To return an image to the skillset, base64-encode the image before returning it to the skillset.
+Images can also be passed into and returned from custom skills. A skillset base64-encodes the image being passed into the custom skill. To use the image within the custom skill, set `"/document/normalized_images/*/data"` as the input to the custom skill. Within your custom skill code, base64-decode the string before converting it to an image. To return an image to the skillset, base64-encode the image before returning it to the skillset.
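+
+For instance, a sketch of the JSON payload a custom Web API skill receives when `"/document/normalized_images/*/data"` is mapped to an input named `image`; the base64 value is truncated and illustrative:
+
+```json
+{
+  "values": [
+    {
+      "recordId": "0",
+      "data": {
+        "image": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
+      }
+    }
+  ]
+}
+```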
The image is returned as an object with the following properties.
search Cognitive Search Skill Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-image-analysis.md
Previously updated : 08/12/2021 Last updated : 04/27/2022 + # Image Analysis cognitive skill
-The **Image Analysis** skill extracts a rich set of visual features based on the image content. For example, you can generate a caption from an image, generate tags, or identify celebrities and landmarks. This skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) in Cognitive Services.
+The **Image Analysis** skill extracts a rich set of visual features based on the image content. For example, you can generate a caption from an image, generate tags, or identify celebrities and landmarks. This article is the reference documentation for the **Image Analysis** skill. See [Extract text and information from images](cognitive-search-concept-image-scenarios.md) for usage instructions.
-**Image Analysis** works on images that meet the following requirements:
+This skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) in Cognitive Services. **Image Analysis** works on images that meet the following requirements:
+ The image must be presented in JPEG, PNG, GIF, or BMP format + The file size of the image must be less than 4 megabytes (MB)
Parameters are case-sensitive.
## Skill inputs
-| Input name | Description |
+| Input name | Description |
||| | `image` | Complex Type. Currently only works with "/document/normalized_images" field, produced by the Azure Blob indexer when ```imageAction``` is set to a value other than ```none```. See the [sample](#sample-output) for more information.|
-## Sample skill definition
+<!-- ## Skill outputs
+
+| Output name | Description |
+||-|
+| `categories` | Complex type that ... |
+| `tags` | Complex type that ... |
+| `description` | Complex type that ... |
+| `faces` | Complex type that ... |
+| `brands` | Complex type that ... | -->
+
+## Sample skill definition
```json {
Parameters are case-sensitive.
{ "fields": [ {
- "name": "id",
+ "name": "metadata_storage_name",
"type": "Edm.String", "key": true, "searchable": true,
Parameters are case-sensitive.
"sortable": true }, {
- "name": "blob_uri",
+ "name": "metadata_storage_path",
"type": "Edm.String", "searchable": true, "filterable": false,
You can define output field mappings to lower-level properties, such as just lan
} ```
-## Sample input
+## Sample input
```json {
You can define output field mappings to lower-level properties, such as just lan
} ```
-## Sample output
+## Sample output
```json {
You can define output field mappings to lower-level properties, such as just lan
} ``` - ## Error cases+ In the following error cases, no elements are extracted. | Error Code | Description |
If you get the error similar to `"One or more skills are invalid. Details: Error
+ [What is Image Analysis?](../cognitive-services/computer-vision/overview-image-analysis.md) + [Built-in skills](cognitive-search-predefined-skills.md) + [How to define a skillset](cognitive-search-defining-skillset.md)++ [Extract text and information from images](cognitive-search-concept-image-scenarios.md) + [Create Indexer (REST)](/rest/api/searchservice/create-indexer)
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-ocr.md
Previously updated : 08/12/2021 Last updated : 04/27/2022 # OCR cognitive skill
-The **Optical character recognition (OCR)** skill recognizes printed and handwritten text in image files. This skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) API [v3.0](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-ga/operations/5d986960601faab4bf452005) in Cognitive Services. The **OCR** skill maps to the following functionality:
+The **Optical character recognition (OCR)** skill recognizes printed and handwritten text in image files. This article is the reference documentation for the OCR skill. See [Extract text from images](cognitive-search-concept-image-scenarios.md) for usage instructions.
+
+An OCR skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) API [v3.0](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-ga/operations/5d986960601faab4bf452005) in Cognitive Services. The **OCR** skill maps to the following functionality:
+ For English, Spanish, German, French, Italian, Portuguese, and Dutch, the new ["Read"](../cognitive-services/computer-vision/overview-ocr.md#read-api) API is used. + For all other languages, the [legacy OCR](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-ga/operations/56f91f2e778daf14a499f20d) API is used.
The **OCR** skill extracts text from image files. Supported file formats include
> [!NOTE] > This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
->
+>
> In addition, image extraction is [billable by Azure Cognitive Search](https://azure.microsoft.com/pricing/details/search/). >
Previously, there was a parameter called "textExtractionAlgorithm" for specifyin
||| | `image` | Complex Type. Currently only works with "/document/normalized_images" field, produced by the Azure Blob indexer when ```imageAction``` is set to a value other than ```none```. See the [sample](#sample-output) for more information.| - ## Skill outputs+ | Output name | Description | ||-| | `text` | Plain text extracted from the image. | | `layoutText` | Complex type that describes the extracted text and the location where the text was found.| - ## Sample definition ```json
Previously, there was a parameter called "textExtractionAlgorithm" for specifyin
] } ```+ <a name="sample-output"></a> ## Sample text and layoutText output
A common use case for Text Merger is the ability to merge the textual representa
The following example skillset creates a *merged_text* field. This field contains the textual content of your document and the OCRed text from each of the images embedded in that document. #### Request Body Syntax+ ```json { "description": "Extract text from images and merge with content text to produce merged_text",
The following example skillset creates a *merged_text* field. This field contain
] } ```+ The above skillset example assumes that a normalized-images field exists. To generate this field, set the *imageAction* configuration in your indexer definition to *generateNormalizedImages* as shown below: ```json
The above skillset example assumes that a normalized-images field exists. To gen
+ [Built-in skills](cognitive-search-predefined-skills.md) + [TextMerger skill](cognitive-search-skill-textmerger.md) + [How to define a skillset](cognitive-search-defining-skillset.md)++ [Extract text and information from images](cognitive-search-concept-image-scenarios.md) + [Create Indexer (REST)](/rest/api/searchservice/create-indexer)
storage Blob Upload Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger.md
To create the storage account and container, we can run the CLI commands seen be
```azurecli-interactive az group create --location eastus --name msdocs-storage-function
-az storage account create --name msdocsstorageaccount -resource-group msdocs-storage-function -l eastus --sku Standard_LRS \
+az storage account create --name msdocsstorageaccount --resource-group msdocs-storage-function -l eastus --sku Standard_LRS
az storage container create --name imageanalysis --account-name msdocsstorageaccount --resource-group msdocs-storage-function ```
You may need to wait a few moments for Azure to provision these resources.
After the commands complete, we also need to retrieve the connection string for the storage account. The connection string will be used later to connect our Azure Function to the storage account. ```azurecli-interactive
-az storage account show-connection-string -g msdocs-storage-function -n msdocsstoragefunction
+az storage account show-connection-string -g msdocs-storage-function -n msdocsstorageaccount
``` Copy the value of the `connectionString` property and save it for later. You'll also want to make a note of the storage account name `msdocsstorageaccount`.
storage Files Smb Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md
protocolSettings=$(az storage account file-service-properties show \
--query "${query}") # Replace returned values if null with default values
-protocolSettings="${protocolSettings/$replaceSmbProtocolVersion/$defaultSmbProtocolVersion}"
+protocolSettings="${protocolSettings/$replaceSmbProtocolVersion/$defaultSmbProtocolVersions}"
protocolSettings="${protocolSettings/$replaceSmbChannelEncryption/$defaultSmbChannelEncryption}" protocolSettings="${protocolSettings/$replaceSmbAuthenticationMethods/$defaultSmbAuthenticationMethods}" protocolSettings="${protocolSettings/$replaceSmbKerberosTicketEncryption/$defaultSmbKerberosTicketEncryption}"
stream-analytics Azure Database Explorer Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-database-explorer-output.md
Previously updated : 12/13/2021 Last updated : 04/27/2022 # Azure Data Explorer output from Azure Stream Analytics (Preview)
For more information about Azure Data Explorer, visit the [What is Azure Data Ex
To learn more about how to create an Azure Data Explorer and cluster by using the Azure portal, visit: [Quickstart: Create an Azure Data Explorer cluster and database](/azure/data-explorer/create-cluster-database-portal/) -
-> [!NOTE]
-> Test connection is currently not supported on multi-tenant clusters.
- ## Output configuration The following table lists the property names and their description for creating an Azure Data Explorer output:
You can significantly grow the scope of real-time analytics by leveraging ASA an
## Limitation
-For Ingestion to successfully work, you need to make sure that:
- * The number of columns in Azure Stream Analytics job query should match with Azure Data Explorer table and should be in the same order. * The name of the columns & data type should match between Azure Stream Analytics SQL query and Azure Data Explorer table. * Azure Data Explorer has an aggregation (batching) policy for data ingestion, designed to optimize the ingestion process. The policy is configured to 5 minutes, 1000 items or 1 GB of data by default, so you may experience a latency. See [batching policy](/azure/data-explorer/kusto/management/batchingpolicy) for aggregation options.
+* Test connection to Azure Data Explorer is not supported in jobs running in a shared multi-tenant environment.
## Next steps
stream-analytics Postgresql Database Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/postgresql-database-output.md
Previously updated : 11/19/2021 Last updated : 04/27/2022 # Azure Database for PostgreSQL output from Azure Stream Analytics (Preview)
The following table lists the property names and their description for creating
Partitioning needs to be enabled and is based on the PARTITION BY clause in the query. When the Inherit Partitioning option is enabled, it follows the input partitioning for [fully parallelizable queries](stream-analytics-scale-jobs.md).
+## Limitation
+Test connection functionality to Azure Database for PostgreSQL is not supported during the preview.
## Next steps
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
The error *Invalid object name 'table name'* indicates that you are using an obj
  - The table has some column types that cannot be represented in serverless SQL.   - The table has a format that is not supported in serverless SQL pool (Delta, ORC, etc.)
+### Unclosed quotation mark after the character string
+
+In some rare cases, where you use the `LIKE` operator on a string column or compare a string column with string literals, you might get the following error:
+
+```
+Msg 105, Level 15, State 1, Line 88
+Unclosed quotation mark after the character string
+```
+
+This error can happen if the column uses the `Latin1_General_100_BIN2_UTF8` collation. To resolve the issue, set the `Latin1_General_100_CI_AS_SC_UTF8` collation on the column instead. If the error is still returned, raise a support request through the Azure portal.
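+
+For example, a sketch of an ad hoc query that sets the collation explicitly in the `WITH` clause; the storage path and column name are placeholders:
+
+```sql
+SELECT TOP 10 *
+FROM OPENROWSET(
+        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/*.parquet',
+        FORMAT = 'PARQUET'
+    )
+    WITH (
+        stringColumn varchar(1000) COLLATE Latin1_General_100_CI_AS_SC_UTF8
+    ) AS rows
+WHERE stringColumn LIKE '%some value%';
+```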
+ ### Could not allocate tempdb space while transferring data from one distribution to another The error *Could not allocate tempdb space while transferring data from one distribution to another* is returned when the query execution engine cannot process data and transfer it between the nodes that are executing the query.
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
As a content publisher, you might want to share a gallery to the community:
- You want greater control over the number of versions, regions, and the duration of image availability. -- You have daily/nightly builds to share publicly with your customers and donΓÇÖt want to deal with the overhead that comes with publishing on Azure Marketplace - - You want to quickly share daily or nightly builds with your customers.
+- You don't want to deal with the complexity of multi-tenant authentication when sharing with multiple tenants on Azure.
+ ### How sharing with the community works You [create a gallery resource](create-gallery.md#create-a-community-gallery-preview) under `Microsoft.Compute/Galleries` and choose `community` as a sharing option.
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks.md
Key differences between persistent and ephemeral OS disks:
| **OS disk resize**| Supported during VM creation and after VM is stop-deallocated| Supported during VM creation only| | **Resizing to a new VM size**| OS disk data is preserved| Data on the OS disk is deleted, OS is reprovisioned | | **Redeploy** | OS disk data is preserved | Data on the OS disk is deleted, OS is reprovisioned |
-| **Stop/ Start of VM** | OS disk data is preserved | Data on the OS disk is deleted, OS is reprovisioned |
+| **Stop/ Start of VM** | OS disk data is preserved | VMs and scale set instances cannot be stopped. |
| **Page file placement**| For Windows, page file is stored on the resource disk| For Windows, page file is stored on the OS disk (for both OS cache placement and Temp disk placement).|
virtual-machines Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide.md
[virtual-machines-windows-classic-ps-sql-int-listener]:./../../windows/sqlclassic/virtual-machines-windows-classic-ps-sql-int-listener.md [virtual-machines-sql-server-high-availability-and-disaster-recovery-solutions]:./../../windows/sql/virtual-machines-windows-sql-high-availability-dr.md [virtual-machines-sql-server-infrastructure-services]:./../../windows/sql/virtual-machines-windows-sql-server-iaas-overview.md
-[virtual-machines-sql-server-performance-best-practices]:./../../windows/sql/virtual-machines-windows-sql-performance.md
+[virtual-machines-sql-server-performance-best-practices]:/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist
[virtual-machines-upload-image-windows-resource-manager]:../../virtual-machines-windows-upload-image.md [virtual-machines-windows-tutorial]:../../virtual-machines-windows-hero-tutorial.md [virtual-machines-workload-template-sql-alwayson]:https://azure.microsoft.com/documentation/templates/sql-server-2014-alwayson-dsc/
virtual-network Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/resource-health.md
This article provides guidance on how to use Azure Resource Health to monitor an
[Azure Resource Health](/azure/service-health/overview) provides information about the health of your NAT gateway resource. You can use resource health and Azure monitor notifications to keep you informed on the availability and health status of your NAT gateway resource. Resource health can help you quickly assess whether an issue is due to a problem in your Azure infrastructure or because of an Azure platform event. The resource health of your NAT gateway is evaluated by measuring the data-path availability of your NAT gateway endpoint.
+> [!IMPORTANT]
+> When you first create your NAT gateway resource and attach it to a subnet and public IP address/prefix, there is no available data as of yet to determine the health status of your NAT gateway resource. In the first few minutes after your NAT gateway is created, you may see the health status of your NAT gateway change from Unavailable to Degraded and then to Available as health data is generated. This is an expected behavior. If you have only a public IP address or only a subnet attached to your NAT gateway resource upon deployment, the health status will immmediately show as Unknown.
+ You can view the status of your NAT gatewayΓÇÖs health status on the **Resource Health** page, found under **Support + troubleshooting** for your NAT gateway resource. The health of your NAT gateway resource is displayed as one of the following statuses: