Updates from: 05/12/2022 06:02:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Custom Assign Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-assign-powershell.md
Previously updated : 09/07/2021 Last updated : 05/10/2022
To assign the role to a service principal instead of a user, use the [Get-AzureA
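A minimal sketch of the service-principal variant, assuming the AzureAD PowerShell module is installed; the display names are placeholders:

```powershell
Connect-AzureAD

# Placeholder display names; replace with your role and service principal.
$roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Application Support Administrator'"
$servicePrincipal = Get-AzureADServicePrincipal -Filter "displayName eq 'MyServicePrincipal'"

# Assign the custom role to the service principal at tenant scope ('/').
New-AzureADMSRoleAssignment -RoleDefinitionId $roleDefinition.Id -PrincipalId $servicePrincipal.ObjectId -DirectoryScopeId '/'
```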
## Role definitions
-Role definition objects contain the definition of the built-in or custom role, along with the permissions that are granted by that role assignment. This resource displays both custom role definitions and built-in directory roles (which are displayed in roleDefinition equivalent form). Today, an Azure AD organization can have a maximum of 30 unique custom role definitions defined.
+Role definition objects contain the definition of the built-in or custom role, along with the permissions that are granted by that role assignment. This resource displays both custom role definitions and built-in directory roles (which are displayed in roleDefinition equivalent form). For information about the maximum number of custom roles that can be created in an Azure AD organization, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md).
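For illustration, a hedged way to list what this resource displays, using the AzureAD PowerShell module; the display name in the filter is illustrative:

```powershell
# Assumes an existing Connect-AzureAD session.
# List every role definition in the tenant: built-in directory roles and custom roles.
Get-AzureADMSRoleDefinition

# Fetch a single definition by display name.
Get-AzureADMSRoleDefinition -Filter "displayName eq 'Helpdesk Administrator'"
```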
### Create a role definition
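A hedged sketch of the creation step, assuming an existing Connect-AzureAD session; the display name, description, and permitted action are illustrative values:

```powershell
# Basic role information (illustrative values).
$displayName = "Application Support Administrator"
$description = "Can manage basic aspects of application registrations."
$templateId = (New-Guid).Guid

# Actions the role is allowed to perform.
$allowedResourceAction = @(
    "microsoft.directory/applications/basic/update"
)
$rolePermissions = @{ 'allowedResourceActions' = $allowedResourceAction }

# Create the custom role definition.
$customRole = New-AzureADMSRoleDefinition -RolePermissions $rolePermissions -DisplayName $displayName -Description $description -TemplateId $templateId -IsEnabled $true
```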
active-directory Adobe Identity Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both Adobe Identity Man
## Capabilities supported > [!div class="checklist"] > * Create users in Adobe Identity Management
-> * Disable users in Adobe Identity Management when they do not require access anymore
+> * Remove users in Adobe Identity Management when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and Adobe Identity Management > * Provision groups and group memberships in Adobe Identity Management
-> * [Single sign-on](adobe-identity-management-tutorial.md) to Adobe Identity Management (recommended)
+> * Single sign-on to Adobe Identity Management (recommended)
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-1. Determine what data to [map between Azure AD and Adobe Identity Management](../app-provisioning/customize-application-attributes.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Adobe Identity Management](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Adobe Identity Management to support provisioning with Azure AD 1. Log in to the [Adobe Admin Console](https://adminconsole.adobe.com/). Navigate to **Settings > Directory Details > Sync**.
-1. Click **Add Sync**.
+2. Click **Add Sync**.
![Add](media/adobe-identity-management-provisioning-tutorial/add-sync.png)
-1. Select **Sync users from Microsoft Azure** and click **Next**.
+3. Select **Sync users from Microsoft Azure** and click **Next**.
![Screenshot that shows 'Sync users from Microsoft Azure Active Directory' selected.](media/adobe-identity-management-provisioning-tutorial/sync-users.png)
-1. Copy and save the **Tenant URL** and the **Secret token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Adobe Identity Management application in the Azure portal.
+4. Copy and save the **Tenant URL** and the **Secret token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Adobe Identity Management application in the Azure portal.
![Sync](media/adobe-identity-management-provisioning-tutorial/token.png)
The Azure AD provisioning service allows you to scope who will be provisioned ba
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Adobe Identity Management based on user and/or group assignments in Azure AD.
+> [!VIDEO https://www.youtube.com/embed/k2_fk7BY8Ow]
+ ### To configure automatic user provisioning for Adobe Identity Management in Azure AD: 1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ![Enterprise applications blade](common/enterprise-applications.png)
-1. In the applications list, select **Adobe Identity Management**.
+2. In the applications list, select **Adobe Identity Management**.
![The Adobe Identity Management link in the Applications list](common/all-applications.png)
-1. Select the **Provisioning** tab.
+3. Select the **Provisioning** tab.
![Provisioning tab](common/provisioning.png)
-1. Set the **Provisioning Mode** to **Automatic**.
+4. Set the **Provisioning Mode** to **Automatic**.
![Provisioning tab automatic](common/provisioning-automatic.png)
-1. Under the **Admin Credentials** section, input your Adobe Identity Management Tenant URL and Secret Token retrieved earlier from Step 2. Click **Test Connection** to ensure Azure AD can connect to Adobe Identity Management. If the connection fails, ensure your Adobe Identity Management account has Admin permissions and try again.
+5. Under the **Admin Credentials** section, input your Adobe Identity Management Tenant URL and Secret Token retrieved earlier from Step 2. Click **Test Connection** to ensure Azure AD can connect to Adobe Identity Management. If the connection fails, ensure your Adobe Identity Management account has Admin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
-1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
![Notification Email](common/provisioning-notification-email.png)
-1. Select **Save**.
+7. Select **Save**.
-1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Adobe Identity Management**.
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Adobe Identity Management**.
-1. Review the user attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Adobe Identity Management for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Adobe Identity Management API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Adobe Identity Management for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Adobe Identity Management API supports filtering users based on that attribute (see the sketch after the table below). Select the **Save** button to commit any changes.
- |Attribute|Type|Supported for filtering|Required by Adobe Identity Management|
- |---|---|---|---|
- |userName|String|✓|✓|
- |active|Boolean|||
- |emails[type eq "work"].value|String|||
- |addresses[type eq "work"].country|String|||
- |name.givenName|String|||
- |name.familyName|String|||
- |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String|||
+ |Attribute|Type|
+ |---|---|
+ |userName|String|
+ |emails[type eq "work"].value|String|
+ |active|Boolean|
+ |addresses[type eq "work"].country|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String|
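If you do change the matching attribute, one hedged way to sanity-check filter support is to query the SCIM endpoint directly. This is a sketch only, assuming the Tenant URL and Secret token saved in Step 2; the user value is illustrative:

```powershell
# <tenant-url> and <secret-token> are the values saved in Step 2.
$headers = @{ Authorization = "Bearer <secret-token>" }
$uri = '<tenant-url>/Users?filter=userName eq "b.simon@contoso.com"'

# A successful response listing matching resources suggests the attribute supports filtering.
Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
```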
-1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management**.
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management**.
-1. Review the group attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Adobe Identity Management for update operations. Select the **Save** button to commit any changes.
+11. Review the group attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Adobe Identity Management for update operations. Select the **Save** button to commit any changes.
- |Attribute|Type|Supported for filtering|Required by Adobe Identity Management|
- |---|---|---|---|
- |displayName|String|✓|✓|
- |members|Reference|||
+ |Attribute|Type|
+ |---|---|
+ |displayName|String|
+ |members|Reference|
-1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-1. To enable the Azure AD provisioning service for Adobe Identity Management, change the **Provisioning Status** to **On** in the **Settings** section.
+13. To enable the Azure AD provisioning service for Adobe Identity Management, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-1. Define the users and/or groups that you would like to provision to Adobe Identity Management by choosing the desired values in **Scope** in the **Settings** section.
+14. Define the users and/or groups that you would like to provision to Adobe Identity Management by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
-1. When you are ready to provision, click **Save**.
+15. When you are ready to provision, click **Save**.
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment:
-* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
-## More resources
+## Additional resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Cyberark Saml Authentication Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cyberark-saml-authentication-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with CyberArk SAML Authentication | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with CyberArk SAML Authentication'
description: Learn how to configure single sign-on between Azure Active Directory and CyberArk SAML Authentication.
Previously updated : 02/09/2021 Last updated : 05/11/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with CyberArk SAML Authentication
+# Tutorial: Azure AD SSO integration with CyberArk SAML Authentication
In this tutorial, you'll learn how to integrate CyberArk SAML Authentication with Azure Active Directory (Azure AD). When you integrate CyberArk SAML Authentication with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * CyberArk SAML Authentication single sign-on (SSO) enabled subscription.
+* In addition to the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
* CyberArk SAML Authentication supports **SP and IDP** initiated SSO.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+ ## Add CyberArk SAML Authentication from the gallery To configure the integration of CyberArk SAML Authentication into Azure AD, you need to add CyberArk SAML Authentication from the gallery to your list of managed SaaS apps.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following step:
In the **Reply URL** text box, type a URL using the following pattern: `https://<PVWA DNS or IP>/passwordvault/api/auth/saml/logon`
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<PVWA DNS or IP>/PasswordVault/v10/logon/saml` > [!NOTE]
- > These values are not real. Update these values with the actual Reply URL and Sign-On URL. Contact [CyberArk SAML Authentication Client support team](mailto:bizdevtech@cyberark.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL and Sign-On URL. Contact your CyberArk Administration team to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
1. On the **Set up CyberArk SAML Authentication** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure CyberArk SAML Authentication SSO
-To configure single sign-on on **CyberArk SAML Authentication** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [CyberArk SAML Authentication support team](mailto:support@cyberark.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **CyberArk SAML Authentication** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to your CyberArk Administration team. They set this setting to have the SAML SSO connection set properly on both sides.
### Create CyberArk SAML Authentication test user
-In this section, you create a user called B.Simon in CyberArk SAML Authentication. Work with [CyberArk SAML Authentication support team](mailto:support@cyberark.com) to add the users in the CyberArk SAML Authentication platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in CyberArk SAML Authentication. Work with your CyberArk Administration team to add the users in the CyberArk SAML Authentication platform. Users must be created and activated before you use single sign-on.
## Test SSO
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure CyberArk SAML Authentication you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure CyberArk SAML Authentication, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Gamba Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gamba-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with gamba!'
+description: Learn how to configure single sign-on between Azure Active Directory and gamba!.
+ Last updated : 05/08/2022
+# Tutorial: Azure AD SSO integration with gamba!
+
+In this tutorial, you'll learn how to integrate gamba! with Azure Active Directory (Azure AD). When you integrate gamba! with Azure AD, you can:
+
+* Control in Azure AD who has access to gamba!.
+* Enable your users to be automatically signed in to gamba! with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* gamba! single sign-on (SSO) enabled subscription.
+* In addition to the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* gamba! supports **SP** and **IDP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add gamba! from the gallery
+
+To configure the integration of gamba! into Azure AD, you need to add gamba! from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **gamba!** in the search box.
+1. Select **gamba!** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for gamba!
+
+Configure and test Azure AD SSO with gamba! using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in gamba!.
+
+To configure and test Azure AD SSO with gamba!, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure gamba! SSO](#configure-gamba-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create gamba! test user](#create-gamba-test-user)** - to have a counterpart of B.Simon in gamba! that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **gamba!** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://www.getgamba.com/n/#/login`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to gamba!.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **gamba!**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure gamba! SSO
+
+To configure single sign-on on **gamba!** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to [gamba! support team](mailto:customers@getgamba.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create gamba! test user
+
+In this section, you create a user called Britta Simon in gamba!. Work with [gamba! support team](mailto:customers@getgamba.com) to add the users in the gamba! platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect you to the gamba! Sign-on URL, where you can initiate the login flow.
+
+* Go to the gamba! Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the gamba! for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the gamba! tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the gamba! for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure gamba!, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Lyve Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lyve-cloud-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Lyve Cloud'
+description: Learn how to configure single sign-on between Azure Active Directory and Lyve Cloud.
+ Last updated : 05/05/2022
+# Tutorial: Azure AD SSO integration with Lyve Cloud
+
+In this tutorial, you'll learn how to integrate Lyve Cloud with Azure Active Directory (Azure AD). When you integrate Lyve Cloud with Azure AD, you can:
+
+* Control in Azure AD who has access to Lyve Cloud.
+* Enable your users to be automatically signed in to Lyve Cloud with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Lyve Cloud single sign-on (SSO) enabled subscription.
+* In addition to the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Lyve Cloud supports **IDP** initiated SSO.
+
+## Add Lyve Cloud from the gallery
+
+To configure the integration of Lyve Cloud into Azure AD, you need to add Lyve Cloud from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Lyve Cloud** in the search box.
+1. Select **Lyve Cloud** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Lyve Cloud
+
+Configure and test Azure AD SSO with Lyve Cloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Lyve Cloud.
+
+To configure and test Azure AD SSO with Lyve Cloud, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Lyve Cloud SSO](#configure-lyve-cloud-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Lyve Cloud test user](#create-lyve-cloud-test-user)** - to have a counterpart of B.Simon in Lyve Cloud that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Lyve Cloud** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<account_id>.console.lyvecloud.seagate.com`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<account_id>.console.lyvecloud.seagate.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Lyve Cloud Client support team](mailto:lyvecloud.support@seagate.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Lyve Cloud** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Lyve Cloud.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Lyve Cloud**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Lyve Cloud SSO
+
+To configure single sign-on on **Lyve Cloud** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Lyve Cloud support team](mailto:lyvecloud.support@seagate.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Lyve Cloud test user
+
+In this section, you create a user called Britta Simon in Lyve Cloud. Work with [Lyve Cloud support team](mailto:lyvecloud.support@seagate.com) to add the users in the Lyve Cloud platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Lyve Cloud for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the Lyve Cloud tile in the My Apps, you should be automatically signed in to the Lyve Cloud for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Lyve Cloud, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Skopenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skopenow-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Skopenow'
+description: Learn how to configure single sign-on between Azure Active Directory and Skopenow.
+ Last updated : 05/08/2022
+# Tutorial: Azure AD SSO integration with Skopenow
+
+In this tutorial, you'll learn how to integrate Skopenow with Azure Active Directory (Azure AD). When you integrate Skopenow with Azure AD, you can:
+
+* Control in Azure AD who has access to Skopenow.
+* Enable your users to be automatically signed in to Skopenow with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Skopenow single sign-on (SSO) enabled subscription.
+* In addition to the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Skopenow supports **SP and IDP** initiated SSO.
+* Skopenow supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Skopenow from the gallery
+
+To configure the integration of Skopenow into Azure AD, you need to add Skopenow from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Skopenow** in the search box.
+1. Select **Skopenow** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Skopenow
+
+Configure and test Azure AD SSO with Skopenow using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Skopenow.
+
+To configure and test Azure AD SSO with Skopenow, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Skopenow SSO](#configure-skopenow-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Skopenow test user](#create-skopenow-test-user)** - to have a counterpart of B.Simon in Skopenow that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Skopenow** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://app.skopenow.com/saml/module.php/saml/sp/metadata.php/microsoft`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://app.skopenow.com/saml/module.php/saml/sp/saml2-acs.php/microsoft`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.skopenow.com/login/sso?account=microsoft`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Skopenow** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Skopenow.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Skopenow**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Skopenow SSO
+
+To configure single sign-on on **Skopenow** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Skopenow support team](mailto:support@skopenow.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Skopenow test user
+
+In this section, a user called B.Simon is created in Skopenow. Skopenow supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Skopenow, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect you to the Skopenow Sign-on URL, where you can initiate the login flow.
+
+* Go to the Skopenow Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Skopenow for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Skopenow tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Skopenow for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Skopenow, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Travelperk Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/travelperk-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with TravelPerk | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with TravelPerk'
description: Learn how to configure single sign-on between Azure Active Directory and TravelPerk.
Previously updated : 09/02/2021 Last updated : 05/11/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with TravelPerk
+# Tutorial: Azure AD SSO integration with TravelPerk
In this tutorial, you'll learn how to integrate TravelPerk with Azure Active Directory (Azure AD). When you integrate TravelPerk with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * A TravelPerk account with Premium subscription.
+* In addition to the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, perform the following steps:
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<COMPANY>.travelperk.com/` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier,Reply URL and Sign on URL. The values can be found inside your TravelPerk account: go to **Company Settings** > **Integrations** > **Single Sign On**. For assistance, visit the [TravelPerk helpcenter](https://support.travelperk.com/hc/articles/360052450271-How-can-I-setup-SSO-for-Azure-SAML).
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. The values can be found inside your TravelPerk account: go to **Company Settings** > **Integrations** > **Single Sign On**. For assistance, visit the [TravelPerk helpcenter](https://support.travelperk.com/hc/articles/360052450271-How-can-I-setup-SSO-for-Azure-SAML).
1. Your TravelPerk application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. In the default mapping, **emailaddress** is mapped with **user.mail**. However, the TravelPerk application expects **emailaddress** to be mapped with **user.userprincipalname**. For TravelPerk, you must edit the attribute mapping: click the **Edit** icon, and then change the attribute mapping. To edit an attribute, just click the attribute to open edit mode.
- ![image](common/default-attributes.png)
+ ![Screenshot shows the image of TravelPerk application.](common/default-attributes.png "Attributes")
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up TravelPerk** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Configuration")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure TravelPerk SSO
-To configure single sign-on on **TravelPerk** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [TravelPerk support team](mailto:trex@travelperk.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **TravelPerk** side, you need to set up the integration in the TravelPerk app.
+
+1. Go to https://app.travelperk.com as an Admin user, and under **Account Settings** > **Integrations** open **Single sign-on (SSO)**.
+
+1. Select **SAML** as the option, and click **New Integration** then perform the following steps:
+
+ a. In the **IdP entity ID** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ b. In the **IdP SSO service URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ c. In the **IdP x509 cert** textbox, paste the certificate value from the **Federation Metadata XML** file (without the X509Certificate tags) that you downloaded from the Azure portal (see the sketch after these steps).
+
+ d. Save and proceed with the testing.
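For step c, only the base64 value between the X509Certificate tags is needed. A minimal sketch for pulling it out of the downloaded metadata, assuming the file is saved locally as `FederationMetadata.xml` (PowerShell's dot notation ignores the XML namespaces):

```powershell
# Extract the base64 certificate value so only the value, not the
# surrounding X509Certificate tags, is pasted into IdP x509 cert.
[xml]$metadata = Get-Content -Raw .\FederationMetadata.xml   # assumed file name

# If both signing and encryption certificates are listed, use the signing one.
$metadata.EntityDescriptor.IDPSSODescriptor.KeyDescriptor.KeyInfo.X509Data.X509Certificate
```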
### Create TravelPerk test user
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure TravelPerk you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure TravelPerk, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Zscaler Two Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-two-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png)
+> [!VIDEO https://www.youtube.com/embed/7SU5S0WtNNk]
+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
app-service Quickstart Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html.md
adobe-target-content: ./quickstart-html-uiex
[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This quickstart shows how to deploy a basic HTML+CSS site to Azure App Service. You'll complete this quickstart in [Cloud Shell](../cloud-shell/overview.md), but you can also run these commands locally with [Azure CLI](/cli/azure/install-azure-cli).
+> [!NOTE]
+> For information about hosting static HTML files in a serverless environment, see [Static Web Apps](../static-web-apps/overview.md).
+ :::image type="content" source="media/quickstart-html/hello-world-in-browser.png" alt-text="Home page of sample app"::: [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
zone_pivot_groups: app-service-platform-windows-linux
# Create a PHP web app in Azure App Service ::: zone pivot="platform-windows"
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This quickstart tutorial shows how to deploy a PHP app to Azure App Service on Windows.
::: zone-end ::: zone pivot="platform-linux"
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This quickstart tutorial shows how to deploy a PHP app to Azure App Service on Linux.
+[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This quickstart shows how to deploy a PHP app to Azure App Service on Linux.
-You create the web app using the [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell, and you use Git to deploy sample PHP code to the web app.
+You create and deploy the web app using [Azure CLI](/cli/azure/get-started-with-azure-cli).
![Sample app running in Azure](media/quickstart-php/hello-world-in-browser.png) You can follow the steps here using a Mac, Windows, or Linux machine. Once the prerequisites are installed, it takes about five minutes to complete the steps.
-## Prerequisites
-
-To complete this quickstart:
+To complete this quickstart, you need:
-* <a href="https://git-scm.com/" target="_blank">Install Git</a>
-* <a href="https://php.net/manual/install.php" target="_blank">Install PHP</a>
+1. An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+1. <a href="https://git-scm.com/" target="_blank">Git</a>
+1. <a href="https://php.net/manual/install.php" target="_blank">PHP</a>
+1. <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> to run commands in any shell to provision and configure Azure resources.
-## Download the sample locally
+## 1 - Set up the sample application
1. In a terminal window, run the following commands. They clone the sample application to your local machine and navigate to the directory containing the sample code.
To complete this quickstart:
git clone https://github.com/Azure-Samples/php-docs-hello-world cd php-docs-hello-world ```
-
-1. Make sure the default branch is `main`.
-
- ```bash
- git branch -m main
- ```
-
- > [!TIP]
- > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main`, this quickstart also shows you how to deploy a repository from `main`.
-
-## Run the app locally
-1. Run the application locally so that you see how it should look when you deploy it to Azure. Open a terminal window and use the `php` command to launch the built-in PHP web server.
+1. To run the application locally, use the `php` command to launch the built-in PHP web server.
```bash php -S localhost:8080 ```
-1. Open a web browser, and navigate to the sample app at `http://localhost:8080`.
-
- You see the **Hello World!** message from the sample app displayed in the page.
+1. Browse to the sample application at `http://localhost:8080` in a web browser.
![Sample app running locally](media/quickstart-php/localhost-hello-world-in-browser.png) 1. In your terminal window, press **Ctrl+C** to exit the web server.
+## 2 - Deploy your application code to Azure
+Azure CLI has a command [`az webapp up`](/cli/azure/webapp#az_webapp_up) that will create the necessary resources and deploy your application in a single step.
---
-## Create a web app
-
-1. In the Cloud Shell, create a web app in the `myAppServicePlan` App Service plan with the [`az webapp create`](/cli/azure/webapp#az-webapp-create) command.
-
- In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.4`. To see all supported runtimes, run [`az webapp list-runtimes`](/cli/azure/webapp#az-webapp-list-runtimes).
-
- ```azurecli-interactive
- az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'PHP:8.0' --deployment-local-git
- ```
-
- When the web app has been created, the Azure CLI shows output similar to the following example:
-
- <pre>
- Local git is configured with url of 'https://&lt;username&gt;@&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git'
- {
- "availabilityState": "Normal",
- "clientAffinityEnabled": true,
- "clientCertEnabled": false,
- "cloningInfo": null,
- "containerSize": 0,
- "dailyMemoryTimeQuota": 0,
- "defaultHostName": "&lt;app-name&gt;.azurewebsites.net",
- "enabled": true,
- &lt; JSON data removed for brevity. &gt;
- }
- </pre>
-
- You've created an empty new web app, with git deployment enabled.
-
- > [!NOTE]
- > The URL of the Git remote is shown in the `deploymentLocalGitUrl` property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL as you need it later.
- >
-
-1. Browse to your newly created web app. Replace _&lt;app-name>_ with your unique app name created in the prior step.
-
- ```bash
- http://<app-name>.azurewebsites.net
- ```
-
- Here's what your new web app should look like:
-
- ![Empty web app page](media/quickstart-php/app-service-web-service-created.png)
--
- <pre>
- Counting objects: 2, done.
- Delta compression using up to 4 threads.
- Compressing objects: 100% (2/2), done.
- Writing objects: 100% (2/2), 352 bytes | 0 bytes/s, done.
- Total 2 (delta 1), reused 0 (delta 0)
- remote: Updating branch 'main'.
- remote: Updating submodules.
- remote: Preparing deployment for commit id '25f18051e9'.
- remote: Generating deployment script.
- remote: Running deployment command...
- remote: Handling Basic Web Site deployment.
- remote: Kudu sync from: '/home/site/repository' to: '/home/site/wwwroot'
- remote: Copying file: '.gitignore'
- remote: Copying file: 'LICENSE'
- remote: Copying file: 'README.md'
- remote: Copying file: 'index.php'
- remote: Ignoring: .git
- remote: Finished successfully.
- remote: Running post deployment command(s)...
- remote: Deployment successful.
- To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
- cc39b1e..25f1805 main -> main
- </pre>
-
-## Browse to the app
-
-Browse to the deployed application using your web browser.
+In the terminal, deploy the code in your local folder using the [`az webapp up`](/cli/azure/webapp#az_webapp_up) command:
-```
-http://<app-name>.azurewebsites.net
+```azurecli
+az webapp up \
+ --sku F1 \
+ --logs
```
-The PHP sample code is running in an Azure App Service web app.
-
-![Sample app running in Azure](media/quickstart-php/hello-world-in-browser.png)
-
-**Congratulations!** You've deployed your first PHP app to App Service.
-
-## Update locally and redeploy the code
+- If the `az` command isn't recognized, be sure you have <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> installed.
+
+- The `--sku F1` argument creates the web app on the Free pricing tier, which incurs no cost.
+- The `--logs` flag configures default logging so that you can view the log stream immediately after the web app launches.
+- You can optionally specify a name with the argument `--name <app-name>`. If you don't provide one, a name is automatically generated.
+- You can optionally include the argument `--location <location-name>` where `<location-name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/account#az_account_list_locations) command. (A sketch combining these optional arguments follows this list.)
+- If you see the error, "Could not auto-detect the runtime stack of your app," make sure you're running the command in the code directory (See [Troubleshooting auto-detect issues with az webapp up](https://github.com/Azure/app-service-linux-docs/blob/master/AzWebAppUP/runtime_detection.md)).
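For example, a combined sketch of the optional arguments described above (the name and location values are placeholders to replace with your own):

```azurecli
az webapp up --sku F1 --logs --name <app-name> --location <location-name>
```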
+
+The command may take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives the message, "You can launch the app at http://&lt;app-name&gt;.azurewebsites.net", which is the app's URL on Azure.
+
+<pre>
+The webapp '&lt;app-name>' doesn't exist
+Creating Resource group '&lt;group-name>' ...
+Resource group creation complete
+Creating AppServicePlan '&lt;app-service-plan-name>' ...
+Creating webapp '&lt;app-name>' ...
+Configuring default logging for the app, if not already enabled
+Creating zip with contents of dir /home/msangapu/myPhpApp ...
+Getting scm site credentials for zip deployment
+Starting zip deployment. This operation can take a while to complete ...
+Deployment endpoint responded with status code 202
+You can launch the app at http://&lt;app-name>.azurewebsites.net
+{
+ "URL": "http://&lt;app-name>.azurewebsites.net",
+ "appserviceplan": "&lt;app-service-plan-name>",
+ "location": "centralus",
+ "name": "&lt;app-name>",
+ "os": "linux",
+ "resourcegroup": "&lt;group-name>",
+ "runtime_version": "php|8.0",
+ "runtime_version_detected": "0.0",
+ "sku": "FREE",
+ "src_path": "//home//msangapu//myPhpApp"
+}
+</pre>
++
+## 3 - Browse to the app
+
+Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`.
+
+![Sample app running in Azure](media/quickstart-php/hello-world-in-browser.png)
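If you prefer to stay in the terminal, a small optional sketch: the [`az webapp browse`](/cli/azure/webapp#az-webapp-browse) command opens the deployed site in your default browser (placeholder names assumed):

```azurecli
az webapp browse --name <app-name> --resource-group <group-name>
```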
+
+## 4 - Redeploy updates
1. Using a local text editor, open the `index.php` file within the PHP app, and make a small change to the text within the string next to `echo`:
The PHP sample code is running in an Azure App Service web app.
echo "Hello Azure!"; ```
-1. In the local terminal window, commit your changes in Git, and then push the code changes to Azure.
+1. Save your changes, then redeploy the app using the [az webapp up](/cli/azure/webapp#az-webapp-up) command again with no arguments:
- ```bash
- git commit -am "updated output"
- git push azure main
+ ```azurecli
+ az webapp up
    ```

1. Once deployment has completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page.

    ![Updated sample app running in Azure](media/quickstart-php/hello-azure-in-browser.png)
-## Manage your new Azure app
+## 5 - Manage your new Azure app
1. Go to the <a href="https://portal.azure.com" target="_blank">Azure portal</a> to manage the web app you created. Search for and select **App Services**.
The PHP sample code is running in an Azure App Service web app.
The web app menu provides different options for configuring your app.
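As an optional sketch, you can also inspect the same app from the Azure CLI instead of the portal (placeholder names assumed):

```azurecli
az webapp show --name <app-name> --resource-group <group-name> --query state
```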
+## Clean up resources
+
+When you're finished with the sample app, you can remove all of the resources for the app from Azure. Doing so avoids extra charges and keeps your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app.
+
+Delete the resource group by using the [az group delete](/cli/azure/group#az-group-delete) command, replacing `<group-name>` with the resource group name shown in the `az webapp up` output.
+
+```azurecli-interactive
+az group delete --name <group-name>
+```
+
+This command may take a minute to run.
## Next steps
The PHP sample code is running in an Azure App Service web app.
> [!div class="nextstepaction"]
> [Configure PHP app](configure-language-php.md)
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
az sql server firewall-rule create --resource-group msdocs-core-sql --server <yo
-Next, update the appsettings.json file in our local app code with the Connection String of our Azure SQL Database. The update allows us to run migrations locally against our database hosted in Azure. Replace the username and password placeholders with the values you chose when creating your database.
+Next, update the appsettings.json file in our local app code with the [Connection String of our Azure SQL Database](#5connect-the-app-to-the-database). The update allows us to run migrations locally against our database hosted in Azure. Replace the username and password placeholders with the values you chose when creating your database.
```json "ConnectionStrings": {
- "MyDbConnection": "Server=tcp:coredbserver456.database.windows.net,1433;
+ "MyDbConnection": "Server=tcp:<your-server-name>.database.windows.net,1433;
Initial Catalog=coredb; Persist Security Info=False; User ID=<username>;Password=<password>;
app-service Tutorial Troubleshoot Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-troubleshoot-monitor.md
Diagnostic settings can be used to collect metrics for certain Azure services in
You run the following commands to create diagnostic settings for AppServiceConsoleLogs (standard output/error) and AppServiceHTTPLogs (web server logs). Replace _\<app-name>_ and _\<workspace-name>_ with your values.

> [!NOTE]
-> The first two commands, `resourceID` and `workspaceID`, are variables to be used in the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. See [Create diagnostic settings using Azure CLI](../azure-monitor/essentials/diagnostic-settings.md#create-using-azure-cli) for more information on this command.
+> The first two commands, `resourceID` and `workspaceID`, are variables to be used in the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. See [Create diagnostic settings using Azure CLI](../azure-monitor/essentials/diagnostic-settings.md?tabs=cli#create-diagnostic-settings) for more information on this command.
> ```azurecli
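As a sketch, those two variable assignments can look like the following (the app, group, and workspace names are placeholders):

```azurecli
resourceID=$(az webapp show --name <app-name> --resource-group <resource-group> --query id --output tsv)
workspaceID=$(az monitor log-analytics workspace show --resource-group <resource-group> --workspace-name <workspace-name> --query id --output tsv)
```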
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
| `https://mcr.microsoft.com`, `https://*.data.mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
| `https://gbl.his.arc.azure.com` (for Azure Cloud), `https://gbl.his.arc.azure.us` (for Azure US Government) | Required to get the regional endpoint for pulling system-assigned Managed Identity certificates. |
| `https://*.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Identity certificates. |
-|`*.servicebus.windows.net`, `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
+|`*.servicebus.windows.net`, `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net`, `https://k8sconnectcsp.azureedge.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
|`https://k8connecthelm.azureedge.net` | `az connectedk8s connect` uses Helm 3 to deploy Azure Arc agents on the Kubernetes cluster. This endpoint is needed for Helm client download to facilitate deployment of the agent helm chart. |

## Create a resource group
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 04/15/2022 Last updated : 05/11/2022
This parameter specifies a resource in Azure Resource Manager to delete from Azu
To disconnect using a service principal, run the following command:
-`azcmagent disconnect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword> --tenant-id <tenantID>`
+`azcmagent disconnect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword>`
To disconnect using an access token, run the following command:
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
To configure these values at App settings level (and avoid redeployment on just
| Host.json path | App setting | |-|-| | logging.logLevel.default | AzureFunctionsJobHost__logging__logLevel__default |
-| logging.logLeve.Host.Aggregator | AzureFunctionsJobHost__logging__logLevel__Host__Aggregator |
+| logging.logLevel.Host.Aggregator | AzureFunctionsJobHost__logging__logLevel__Host__Aggregator |
| logging.logLevel.Function | AzureFunctionsJobHost__logging__logLevel__Function |
-| logging.logLevel.Function.Function1 | AzureFunctionsJobHost__logging__logLevel__Function1 |
-| logging.logLevel.Function.Function1.User | AzureFunctionsJobHost__logging__logLevel__Function1.User |
+| logging.logLevel.Function.Function1 | AzureFunctionsJobHost__logging__logLevel__Function.Function1 |
+| logging.logLevel.Function.Function1.User | AzureFunctionsJobHost__logging__logLevel__Function.Function1.User |
You can override the settings directly at the Azure portal Function App Configuration blade or by using an Azure CLI or PowerShell script.
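For example, a sketch of overriding the `Host.Aggregator` level from the Azure CLI (the app and group names are placeholders):

```azurecli
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> \
    --settings "AzureFunctionsJobHost__logging__logLevel__Host__Aggregator=Information"
```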
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
The [ConfigureFunctionsWorkerDefaults] extension method has an overload that let
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/CustomMiddleware/Program.cs" id="docsnippet_middleware_register" :::
-For a more complete example of using custom middleware in your function app, see the [custom middleware reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/CustomMiddleware).
+The `UseWhen` extension method can be used to register a middleware that executes conditionally. You pass a predicate that returns a boolean value to this method, and the middleware participates in the invocation processing pipeline when the predicate returns true.
+
+The following extension methods on [FunctionContext] make it easier to work with middleware in the isolated model.
+
+| Method | Description |
+| - | - |
+| **`GetHttpRequestDataAsync`** | Gets the `HttpRequestData` instance when called by an HTTP trigger. This method returns an instance of `ValueTask<HttpRequestData?>`, which is useful when you want to read message data, such as request headers and cookies. |
+| **`GetHttpResponseData`** | Gets the `HttpResponseData` instance when called by an HTTP trigger. |
+| **`GetInvocationResult`** | Gets an instance of `InvocationResult`, which represents the result of the current function execution. Use the `Value` property to get or set the value as needed. |
+| **`GetOutputBindings`** | Gets the output binding entries for the current function execution. Each entry in the result of this method is of type `OutputBindingData`. You can use the `Value` property to get or set the value as needed. |
+| **`BindInputAsync`** | Binds an input binding item for the requested `BindingMetadata` instance. For example, you can use this method when you have a function with a `BlobInput` input binding that needs to be accessed or updated by your middleware. |
+
+The following is an example of a middleware implementation that reads the `HttpRequestData` instance and updates the `HttpResponseData` instance during function execution. This middleware checks for the presence of a specific request header (x-correlationId), and when present uses the header value to stamp a response header. Otherwise, it generates a new GUID value and uses that to stamp the response header, as sketched below.
+
+
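The sample snippet itself isn't reproduced here; the following is a minimal sketch consistent with that description (the class name, header casing, and registration predicate are illustrative, not the reference implementation):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Middleware;

internal sealed class StampHttpHeaderMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        // Read the incoming HttpRequestData (null when not an HTTP trigger).
        var requestData = await context.GetHttpRequestDataAsync();

        // Reuse the x-correlationId header when present; otherwise generate a new GUID.
        string correlationId =
            requestData != null && requestData.Headers.TryGetValues("x-correlationId", out var values)
                ? values.First()
                : Guid.NewGuid().ToString();

        // Run the rest of the invocation pipeline (including the function itself).
        await next(context);

        // Stamp the correlation id onto the outgoing response, if there is one.
        context.GetHttpResponseData()?.Headers.Add("x-correlationId", correlationId);
    }
}

// Registration sketch for Program.cs: UseWhen runs this middleware only for HTTP triggers.
// builder.UseWhen<StampHttpHeaderMiddleware>(context =>
//     context.FunctionDefinition.InputBindings.Values
//         .First(a => a.Type.EndsWith("Trigger")).Type == "httpTrigger");
```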
+For a more complete example of using custom middleware in your function app, see the [custom middleware reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/CustomMiddleware).
## Execution context
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
A Blob Storage SAS URL for a second storage account used for key storage. By def
|Key|Sample value| |--|--|
-|AzureWebJobsSecretStorageSa| `<BLOB_SAS_URL>` |
+|AzureWebJobsSecretStorageSas| `<BLOB_SAS_URL>` |
## AzureWebJobsSecretStorageType
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md
The `func new` action supports the following options:
| Option | Description | | | -- |
-| **`--authLevel`** | Lets you set the authorization level for an HTTP trigger. Supported values are: `function`, `anonymous`, `admin`. Authorization isn't enforced when running locally. For more information, see the [HTTP binding article](functions-bindings-http-webhook-trigger.md#authorization-keys). |
+| **`--authlevel`** | Lets you set the authorization level for an HTTP trigger. Supported values are: `function`, `anonymous`, `admin`. Authorization isn't enforced when running locally. For more information, see the [HTTP binding article](functions-bindings-http-webhook-trigger.md#authorization-keys). |
| **`--csx`** | (Version 2.x and later versions.) Generates the same C# script (.csx) templates used in version 1.x and in the portal. |
| **`--language`**, **`-l`**| The template programming language, such as C#, F#, or JavaScript. This option is required in version 1.x. In version 2.x and later versions, you don't use this option because the language is defined by the worker runtime. |
| **`--name`**, **`-n`** | The function name. |
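For instance, a sketch of scaffolding an anonymous HTTP-triggered function with this option (the function name is a placeholder):

```bash
func new --template "HTTP trigger" --name MyHttpTrigger --authlevel anonymous
```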
azure-monitor Agent Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md
description: Data sources define the log data that Azure Monitor collects from a
Previously updated : 03/31/2022 Last updated : 05/10/2022 # Log Analytics agent data sources in Azure Monitor
-The data that Azure Monitor collects from virtual machines with the [Log Analytics](./log-analytics-agent.md) agent is defined by the data sources that you configure on the [Log Analytics workspace](../logs/data-platform-logs.md). Each data source creates records of a particular type with each type having its own set of properties.
+The data that Azure Monitor collects from virtual machines with the legacy [Log Analytics](./log-analytics-agent.md) agent is defined by the data sources that you configure on the [Log Analytics workspace](../logs/data-platform-logs.md). Each data source creates records of a particular type with each type having its own set of properties.
> [!IMPORTANT]
-> This article covers data sources for the legacy [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. This agent **will be deprecated by August, 2024**. Please plan to [migrate to Azure Monitor agent](./azure-monitor-agent-migration.md) before that. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](agents-overview.md) for a list of the available agents and the data they can collect.
+> This article only covers data sources for the legacy [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. This agent **will be deprecated by August, 2024**. Please plan to [migrate to Azure Monitor agent](./azure-monitor-agent-migration.md) before that. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](agents-overview.md) for a list of the available agents and the data they can collect.
![Log data collection](media/agent-data-sources/overview.png)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Previously updated : 04/23/2022 Last updated : 05/11/2022 # Overview of Azure Monitor agents
-Virtual machines and other compute resources require an agent to collect monitoring data required to measure the performance and availability of their guest operating system and workloads. This article describes the agents used by Azure Monitor and helps you determine which you need to meet the requirements for your particular environment.
+Virtual machines and other compute resources require an agent to collect monitoring data required to measure the performance and availability of their guest operating system and workloads. Many legacy agents exist today for this purpose, all of which will eventually be replaced by the new consolidated [Azure Monitor agent](./azure-monitor-agent-overview.md). This article describes both the legacy agents and the new Azure Monitor agent.
-> [!NOTE]
-> Azure Monitor recently launched a new agent, the [Azure Monitor agent](./azure-monitor-agent-overview.md), that provides all capabilities necessary to collect guest operating system monitoring data. **Use this new agent if you are not bound by [these limitations](./azure-monitor-agent-overview.md#current-limitations)**, as it consolidates the features of all the legacy agents listed below and provides additional benefits. If you do require the limitations today, you may continue using the other legacy agents listed below until **August 2024**. [Learn more](./azure-monitor-agent-overview.md)
+The general recommendation is to use the Azure Monitor agent if you are not bound by [these limitations](./azure-monitor-agent-overview.md#current-limitations), as it consolidates the features of all the legacy agents listed below and provides these [additional benefits](#azure-monitor-agent).
+If you are affected by these limitations today, you may continue using the other legacy agents listed below until **August 2024**. [Learn more](./azure-monitor-agent-overview.md)
## Summary of agents
The following tables provide a quick comparison of the telemetry agents for Wind
The [Azure Monitor agent](azure-monitor-agent-overview.md) is meant to replace the Log Analytics agent, Azure Diagnostic extension and Telegraf agent for both Windows and Linux machines. It can send data to both Azure Monitor Logs and Azure Monitor Metrics and uses [Data Collection Rules (DCR)](../essentials/data-collection-rule-overview.md) which provide a more scalable method of configuring data collection and destinations for each agent.
-Use the Azure Monitor agent if you need to:
+Use the Azure Monitor agent to gain these benefits:
- Collect guest logs and metrics from any machine in Azure, in other clouds, or on-premises. ([Azure Arc-enabled servers](../../azure-arc/servers/overview.md) required for machines outside of Azure.)
+- **Cost savings:**
+ - Granular targeting via [Data Collection Rules](../essentials/data-collection-rule-overview.md) to collect specific data types from specific machines, as compared to the "all or nothing" mode that Log Analytics agent supports
+  - Use XPath queries to filter Windows events that get collected, which helps further reduce ingestion and storage costs (see the sketch after this list).
+- **Centrally configure** collection for different sets of data from different sets of VMs.
+- **Simplified management of data collection:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces (i.e., "multi-homing") and/or other [supported destinations](./azure-monitor-agent-overview.md#data-sources-and-destinations). Additionally, every action across the data collection lifecycle, from onboarding to deployment to updates, is significantly easier, scalable, and centralized (in Azure) using data collection rules.
+- **Management of dependent solutions or
+- **Security and performance** - For authentication and security, it uses Managed Identity (for virtual machines) and AAD device tokens (for clients), which are both much more secure and 'hack proof' than the certificates or workspace keys that legacy agents use. This agent performs better at higher EPS (events per second upload rate) compared to legacy agents.
- Manage data collection configuration centrally, using [data collection rules](../essentials/data-collection-rule-overview.md), and use Azure Resource Manager (ARM) templates or policies for management overall.
- Send data to Azure Monitor Logs and Azure Monitor Metrics (preview) for analysis with Azure Monitor.
- Use Windows event filtering or multi-homing for logs on Windows and Linux.
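As a sketch of the XPath filtering mentioned above, a data collection rule carries the queries in its `windowsEventLogs` data source; the data source name and query below are illustrative only (collect Application log entries at Critical, Error, or Warning level):

```json
"dataSources": {
  "windowsEventLogs": [
    {
      "name": "appEventsDataSource",
      "streams": [ "Microsoft-Event" ],
      "xPathQueries": [ "Application!*[System[(Level=1 or Level=2 or Level=3)]]" ]
    }
  ]
}
```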
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
description: Options for managing the Azure Monitor agent (AMA) on Azure virtual
Previously updated : 03/09/2022 Last updated : 05/10/2022
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
In future, it will also consolidate features from the Diagnostic extensions.
In addition to consolidating this functionality into a single agent, the Azure Monitor agent provides the following benefits over the existing agents:
-- **Scope of monitoring:** Centrally configure collection for different sets of data from different sets of VMs.
-- **Linux multi-homing:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces (i.e. "multi-homing") and/or other [supported destinations](#data-sources-and-destinations).
-- **Windows event filtering:** Use XPATH queries to filter which Windows events are collected.
-- **Improved extension management:** The Azure Monitor agent uses a new method of handling extensibility that's more transparent and controllable than management packs and Linux plug-ins in the current Log Analytics agents.
+- **Cost savings:**
+ - Granular targeting via [Data Collection Rules](../essentials/data-collection-rule-overview.md) to collect specific data types from specific machines, as compared to the "all or nothing" mode that Log Analytics agent supports
+ - Use XPath queries to filter Windows events that get collected. This helps further reduce ingestion and storage costs.
+- **Simplified management of data collection:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces (i.e., "multi-homing") and/or other [supported destinations](#data-sources-and-destinations). Additionally, every action across the data collection lifecycle, from onboarding to deployment to updates, is significantly easier, scalable, and centralized (in Azure) using data collection rules.
+- **Management of dependent solutions or
+- **Security and performance** - For authentication and security, it uses Managed Identity (for virtual machines) and AAD device tokens (for clients), which are both much more secure and 'hack proof' than the certificates or workspace keys that legacy agents use. This agent performs better at higher EPS (events per second upload rate) compared to legacy agents.
+
### Current limitations
Not all Log Analytics solutions are supported yet. [View supported features and services](#supported-services-and-features).
azure-monitor Azure Monitor Agent Troubleshoot Windows Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-arc.md
description: Guidance for troubleshooting issues on Windows Arc-enabled server w
Previously updated : 5/3/2022 Last updated : 5/9/2022
[!INCLUDE [azure-monitor-agent-architecture](../../../includes/azure-monitor-agent/azure-monitor-agent-architecture-include.md)]
-## Basic troubleshooting steps
+## Basic troubleshooting steps (installation, agent not running, configuration issues)
Follow the steps below to troubleshoot the latest version of the Azure Monitor agent running on your Windows Arc-enabled server:
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).**
2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
   1. Open Azure portal > select your Arc-enabled server > Open **Settings** : **Extensions** blade from left menu > 'AzureMonitorWindowsAgent' should show up with Status: 'Succeeded'
- 2. If not, check if machine can reach Azure and find the extension to install using the command below:
+ 2. If not, check if the Arc agent (Connected Machine Agent) is able to connect to Azure and the extension service is running.
```azurecli
- az vm extension image list-versions --location <machine-region> --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor
- ```
+ azcmagent show
+ ```
+ If you see `Agent Status: Disconnected`, [file a ticket](#file-a-ticket) with **Summary** as 'Arc agent or extensions service not working' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
   3. Wait for 10-15 minutes, as the extension may be in a transitioning status. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
   4. If not, check if you see any errors in extension logs located at `C:\ProgramData\GuestConfig\extension_logs\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine.
   5. If none of the above works, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
azure-monitor Azure Monitor Agent Troubleshoot Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-vm.md
description: Guidance for troubleshooting issues on Windows virtual machines, sc
Previously updated : 5/3/2022 Last updated : 5/10/2022
[!INCLUDE [azure-monitor-agent-architecture](../../../includes/azure-monitor-agent/azure-monitor-agent-architecture-include.md)]
-## Basic troubleshooting steps
+## Basic troubleshooting steps (installation, agent not running, configuration issues)
Follow the steps below to troubleshoot the latest version of the Azure Monitor agent running on your Windows virtual machine:
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).**
azure-monitor Resource Manager Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-agent.md
description: Sample Azure Resource Manager templates to deploy and configure vir
Previously updated : 02/07/2022 Last updated : 04/26/2022
This article includes sample [Azure Resource Manager templates](../../azure-reso
[!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)]
-
## Azure Monitor agent
-The samples in this section install the Azure Monitor agent on Windows and Linux virtual machines and Azure Arc-enabled servers. To configure data collection for these agents, you must also deploy [Resource Manager templates data collection rules and associations](./resource-manager-data-collection-rules.md).
+The samples in this section install the Azure Monitor agent on Windows and Linux virtual machines and Azure Arc-enabled servers. To configure data collection for these agents, you must also deploy [Resource Manager templates data collection rules and associations](./resource-manager-data-collection-rules.md).
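Each sample pairs a template with a parameter file; as a deployment sketch (the file and resource group names are placeholders), you can apply a pair with [`az deployment group create`](/cli/azure/deployment/group#az-deployment-group-create):

```azurecli
az deployment group create --resource-group <resource-group> \
    --template-file azure-monitor-agent.json --parameters azure-monitor-agent.parameters.json
```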
## Permissions required
-| Built-in Role | Scope(s) | Reason |
-|:|:|:|
+| Built-in Role | Scope(s) | Reason |
+|:|:|:|
| <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, virtual machine scale sets</li><li>Arc-enabled servers</li></ul> | To deploy agent extension |
| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing data collection rule</li></ul> | To deploy ARM templates |
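As a sketch of granting one of these roles with the Azure CLI (the assignee and scope values are placeholders):

```azurecli
az role assignment create --assignee <user-or-principal-id> \
    --role "Virtual Machine Contributor" --scope <vm-or-resource-group-id>
```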
-### Windows Azure virtual machine
-The following sample installs the Azure Monitor agent on a Windows Azure virtual machine.
+### Azure virtual machine
+
+The following sample installs the Azure Monitor agent on an Azure virtual machine.
#### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+param vmName string
+param location string
+
+resource windowsAgent 'Microsoft.Compute/virtualMachines/extensions@2021-11-01' = {
+ name: '${vmName}/AzureMonitorWindowsAgent'
+ location: location
+ properties: {
+ publisher: 'Microsoft.Azure.Monitor'
+ type: 'AzureMonitorWindowsAgent'
+ typeHandlerVersion: '1.0'
+ autoUpgradeMinorVersion: true
+ enableAutomaticUpgrade: true
+ }
+}
+```
+
+# [JSON](#tab/json)
+
```json
{
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "vmName": {
- "type": "string"
- },
- "location": {
- "type": "string"
- }
+ "vmName": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ }
}, "resources": [
- {
- "name": "[concat(parameters('vmName'),'/AzureMonitorWindowsAgent')]",
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "location": "[parameters('location')]",
- "apiVersion": "2020-06-01",
- "properties": {
- "publisher": "Microsoft.Azure.Monitor",
- "type": "AzureMonitorWindowsAgent",
- "typeHandlerVersion": "1.0",
- "autoUpgradeMinorVersion": true,
- "enableAutomaticUpgrade":true
- }
+ {
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2021-11-01",
+ "name": "[format('{0}/AzureMonitorWindowsAgent', parameters('vmName'))]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "publisher": "Microsoft.Azure.Monitor",
+ "type": "AzureMonitorWindowsAgent",
+ "typeHandlerVersion": "1.0",
+ "autoUpgradeMinorVersion": true,
+ "enableAutomaticUpgrade": true
}
+ }
]
}
```

++

#### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "vmName": {
- "value": "my-windows-vm"
- },
- "location": {
- "value": "eastus"
- }
+ "vmName": {
+ "value": "my-windows-vm"
+ },
+ "location": {
+ "value": "eastus"
+ }
}
}
```

### Linux Azure virtual machine
+
The following sample installs the Azure Monitor agent on a Linux Azure virtual machine.

#### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+param vmName string
+param location string
+
+resource linuxAgent 'Microsoft.Compute/virtualMachines/extensions@2021-11-01' = {
+ name: '${vmName}/AzureMonitorLinuxAgent'
+ location: location
+ properties: {
+ publisher: 'Microsoft.Azure.Monitor'
+ type: 'AzureMonitorLinuxAgent'
+ typeHandlerVersion: '1.5'
+ autoUpgradeMinorVersion: true
+ enableAutomaticUpgrade: true
+ }
+}
+```
+
+# [JSON](#tab/json)
+
```json
{
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "vmName": {
- "type": "string"
- },
- "location": {
- "type": "string"
- }
+ "vmName": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ }
}, "resources": [
- {
- "name": "[concat(parameters('vmName'),'/AzureMonitorLinuxAgent')]",
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "location": "[parameters('location')]",
- "apiVersion": "2020-06-01",
- "properties": {
- "publisher": "Microsoft.Azure.Monitor",
- "type": "AzureMonitorLinuxAgent",
- "typeHandlerVersion": "1.5",
- "autoUpgradeMinorVersion": true,
- "enableAutomaticUpgrade":true
- }
+ {
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2021-11-01",
+ "name": "[format('{0}/AzureMonitorLinuxAgent', parameters('vmName'))]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "publisher": "Microsoft.Azure.Monitor",
+ "type": "AzureMonitorLinuxAgent",
+ "typeHandlerVersion": "1.5",
+ "autoUpgradeMinorVersion": true,
+ "enableAutomaticUpgrade": true
}
+ }
]
}
```

++

#### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "vmName": {
- "value": "my-linux-vm"
- },
- "location": {
- "value": "eastus"
- }
+ "vmName": {
+ "value": "my-linux-vm"
+ },
+ "location": {
+ "value": "eastus"
+ }
}
}
```
-### Windows Azure Arc-enabled server
-The following sample installs the Azure Monitor agent on a Windows Azure Arc-enabled server.
+### Azure Arc-enabled server
+
+The following sample installs the Azure Monitor agent on an Azure Arc-enabled server.
#### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+param vmName string
+param location string
+
+resource windowsAgent 'Microsoft.HybridCompute/machines/extensions@2021-12-10-preview' = {
+ name: '${vmName}/AzureMonitorWindowsAgent'
+ location: location
+ properties: {
+ publisher: 'Microsoft.Azure.Monitor'
+ type: 'AzureMonitorWindowsAgent'
+ autoUpgradeMinorVersion: true
+ }
+}
+```
+
+# [JSON](#tab/json)
+
```json
{
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "vmName": {
- "type": "string"
- },
- "location": {
- "type": "string"
- }
+ "vmName": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ }
}, "resources": [
- {
- "name": "[concat(parameters('vmName'),'/AzureMonitorWindowsAgent')]",
- "type": "Microsoft.HybridCompute/machines/extensions",
- "location": "[parameters('location')]",
- "apiVersion": "2019-08-02-preview",
- "properties": {
- "publisher": "Microsoft.Azure.Monitor",
- "type": "AzureMonitorWindowsAgent",
- "autoUpgradeMinorVersion": true
- }
+ {
+ "type": "Microsoft.HybridCompute/machines/extensions",
+ "apiVersion": "2021-12-10-preview",
+ "name": "[format('{0}/AzureMonitorWindowsAgent', parameters('vmName'))]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "publisher": "Microsoft.Azure.Monitor",
+ "type": "AzureMonitorWindowsAgent",
+ "autoUpgradeMinorVersion": true
}
+ }
]
}
```

++

#### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "vmName": {
The following sample installs the Azure Monitor agent on a Windows Azure Arc-ena
```

### Linux Azure Arc-enabled server
+
The following sample installs the Azure Monitor agent on a Linux Azure Arc-enabled server.

#### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+param vmName string
+param location string
+
+resource linuxAgent 'Microsoft.HybridCompute/machines/extensions@2021-12-10-preview' = {
+  name: '${vmName}/AzureMonitorLinuxAgent'
+  location: location
+  properties: {
+    publisher: 'Microsoft.Azure.Monitor'
+    type: 'AzureMonitorLinuxAgent'
+ autoUpgradeMinorVersion: true
+ }
+}
+```
+
+# [JSON](#tab/json)
+
```json
{
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "vmName": {
- "type": "string"
- },
- "location": {
- "type": "string"
- }
+ "vmName": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ }
}, "resources": [
- {
- "name": "[concat(parameters('vmName'),'/AzureMonitorLinuxAgent')]",
- "type": "Microsoft.HybridCompute/machines/extensions",
- "location": "[parameters('location')]",
- "apiVersion": "2019-08-02-preview",
- "properties": {
- "publisher": "Microsoft.Azure.Monitor",
- "type": "AzureMonitorLinuxAgent",
- "autoUpgradeMinorVersion": true
- }
+ {
+ "type": "Microsoft.HybridCompute/machines/extensions",
+ "apiVersion": "2021-12-10-preview",
+ "name": "[format('{0}/AzureMonitorWindowsAgent', parameters('vmName'))]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "publisher": "Microsoft.Azure.Monitor",
+ "type": "AzureMonitorWindowsAgent",
+ "autoUpgradeMinorVersion": true
}
+ }
]
}
```

++

#### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "vmName": {
- "value": "my-linux-vm"
- },
- "location": {
- "value": "eastus"
- }
+ "vmName": {
+ "value": "my-linux-vm"
+ },
+ "location": {
+ "value": "eastus"
+ }
}
}
```

## Log Analytics agent
+
The samples in this section install the legacy Log Analytics agent on Windows and Linux virtual machines in Azure and connect it to a Log Analytics workspace.

### Windows
-The following sample installs the Log Analytics agent on a Windows Azure virtual machine. This is done by enabling the [Log Analytics virtual machine extension for Windows](../../virtual-machines/extensions/oms-windows.md).
+
+The following sample installs the Log Analytics agent on an Azure virtual machine. This is done by enabling the [Log Analytics virtual machine extension for Windows](../../virtual-machines/extensions/oms-windows.md).
#### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the virtual machine.')
+param vmName string
+
+@description('Location of the virtual machine')
+param location string = resourceGroup().location
+
+@description('Id of the workspace.')
+param workspaceId string
+
+@description('Primary or secondary workspace key.')
+param workspaceKey string
+
+resource vm 'Microsoft.Compute/virtualMachines@2021-11-01' = {
+ name: vmName
+ location: location
+ properties:{}
+}
+
+resource logAnalyticsAgent 'Microsoft.Compute/virtualMachines/extensions@2021-11-01' = {
+ parent: vm
+ name: 'Microsoft.Insights.LogAnalyticsAgent'
+ location: location
+ properties: {
+ publisher: 'Microsoft.EnterpriseCloud.Monitoring'
+ type: 'MicrosoftMonitoringAgent'
+ typeHandlerVersion: '1.0'
+ autoUpgradeMinorVersion: true
+ settings: {
+ workspaceId: workspaceId
+ }
+ protectedSettings: {
+ workspaceKey: workspaceKey
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
The following sample installs the Log Analytics agent on a Windows Azure virtual
"resources": [ { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2018-10-01",
+ "apiVersion": "2021-11-01",
"name": "[parameters('vmName')]", "location": "[parameters('location')]",
- "resources": [
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "[concat(parameters('vmName'), '/Microsoft.Insights.LogAnalyticsAgent')]",
- "apiVersion": "2015-06-15",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
- ],
- "properties": {
- "publisher": "Microsoft.EnterpriseCloud.Monitoring",
- "type": "MicrosoftMonitoringAgent",
- "typeHandlerVersion": "1.0",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "workspaceId": "[parameters('workspaceId')]"
- },
- "protectedSettings": {
- "workspaceKey": "[parameters('workspaceKey')]"
- }
- }
+ "properties": {}
+ },
+ {
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2021-11-01",
+ "name": "[format('{0}/{1}', parameters('vmName'), 'Microsoft.Insights.LogAnalyticsAgent')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "publisher": "Microsoft.EnterpriseCloud.Monitoring",
+ "type": "MicrosoftMonitoringAgent",
+ "typeHandlerVersion": "1.0",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "workspaceId": "[parameters('workspaceId')]"
+ },
+ "protectedSettings": {
+ "workspaceKey": "[parameters('workspaceKey')]"
}
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
] } ] }- ``` ++ #### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "vmName": {
- "value": "my-windows-vm"
- },
- "location": {
- "value": "westus"
- },
- "workspaceId": {
- "value": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- },
- "workspaceKey": {
- "value": "Tse-gj9CemT6A80urYa2hwtjvA5axv1xobXgKR17kbVdtacU6cEf+SNo2TdHGVKTsZHZd1W9QKRXfh+$fVY9dA=="
- }
+ "vmName": {
+ "value": "my-windows-vm"
+ },
+ "location": {
+ "value": "westus"
+ },
+ "workspaceId": {
+ "value": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ },
+ "workspaceKey": {
+ "value": "Tse-gj9CemT6A80urYa2hwtjvA5axv1xobXgKR17kbVdtacU6cEf+SNo2TdHGVKTsZHZd1W9QKRXfh+$fVY9dA=="
+ }
}
}
```
-
### Linux
+
The following sample installs the Log Analytics agent on a Linux Azure virtual machine. This is done by enabling the [Log Analytics virtual machine extension for Linux](../../virtual-machines/extensions/oms-linux.md).

#### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the virtual machine.')
+param vmName string
+
+@description('Location of the virtual machine')
+param location string = resourceGroup().location
+
+@description('Id of the workspace.')
+param workspaceId string
+
+@description('Primary or secondary workspace key.')
+param workspaceKey string
+
+resource vm 'Microsoft.Compute/virtualMachines@2021-11-01' = {
+ name: vmName
+ location: location
+}
+
+resource logAnalyticsAgent 'Microsoft.Compute/virtualMachines/extensions@2021-11-01' = {
+ parent: vm
+ name: 'Microsoft.Insights.LogAnalyticsAgent'
+ location: location
+ properties: {
+ publisher: 'Microsoft.EnterpriseCloud.Monitoring'
+ type: 'OmsAgentForLinux'
+ typeHandlerVersion: '1.7'
+ autoUpgradeMinorVersion: true
+ settings: {
+ workspaceId: workspaceId
+ }
+ protectedSettings: {
+ workspaceKey: workspaceKey
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
The following sample installs the Log Analytics agent on a Linux Azure virtual m
"resources": [ { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2018-10-01",
+ "apiVersion": "2021-11-01",
"name": "[parameters('vmName')]",
+ "location": "[parameters('location')]"
+ },
+ {
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2021-11-01",
+ "name": "[format('{0}/{1}', parameters('vmName'), 'Microsoft.Insights.LogAnalyticsAgent')]",
"location": "[parameters('location')]",
- "resources": [
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "[concat(parameters('vmName'), '/Microsoft.Insights.LogAnalyticsAgent')]",
- "apiVersion": "2015-06-15",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
- ],
- "properties": {
- "publisher": "Microsoft.EnterpriseCloud.Monitoring",
- "type": "OmsAgentForLinux",
- "typeHandlerVersion": "1.7",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "workspaceId": "[parameters('workspaceId')]"
- },
- "protectedSettings": {
- "workspaceKey": "[parameters('workspaceKey')]"
- }
- }
+ "properties": {
+ "publisher": "Microsoft.EnterpriseCloud.Monitoring",
+ "type": "OmsAgentForLinux",
+ "typeHandlerVersion": "1.7",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "workspaceId": "[parameters('workspaceId')]"
+ },
+ "protectedSettings": {
+ "workspaceKey": "[parameters('workspaceKey')]"
}
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
] } ] } ``` ++ #### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "vmName": {
- "value": "my-linux-vm"
- },
- "location": {
- "value": "westus"
- },
- "workspaceId": {
- "value": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- },
- "workspaceKey": {
- "value": "Tse-gj9CemT6A80urYa2hwtjvA5axv1xobXgKR17kbVdtacU6cEf+SNo2TdHGVKTsZHZd1W9QKRXfh+$fVY9dA=="
- }
+ "vmName": {
+ "value": "my-linux-vm"
+ },
+ "location": {
+ "value": "westus"
+ },
+ "workspaceId": {
+ "value": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ },
+ "workspaceKey": {
+ "value": "Tse-gj9CemT6A80urYa2hwtjvA5axv1xobXgKR17kbVdtacU6cEf+SNo2TdHGVKTsZHZd1W9QKRXfh+$fVY9dA=="
+ }
}
}
```
--
## Diagnostic extension
+
The samples in this section install the diagnostic extension on Windows and Linux virtual machines in Azure and configure it for data collection.

### Windows
-The following sample enables and configures the diagnostic extension on a Windows Azure virtual machine. For details on the configuration, see [Windows diagnostics extension schema](./diagnostics-extension-schema-windows.md).
+
+The following sample enables and configures the diagnostic extension on an Azure virtual machine. For details on the configuration, see [Windows diagnostics extension schema](./diagnostics-extension-schema-windows.md).
#### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the virtual machine.')
+param vmName string
+
+@description('Location for the virtual machine.')
+param location string = resourceGroup().location
+
+@description('Name of the storage account.')
+param storageAccountName string
+
+@description('Resource ID of the storage account.')
+param storageAccountId string
+
+@description('Resource ID of the workspace.')
+param workspaceResourceId string
+
+resource vm 'Microsoft.Compute/virtualMachines@2021-11-01' = {
+ name: vmName
+ location: location
+}
+
+resource vmDiagnosticsSettings 'Microsoft.Compute/virtualMachines/extensions@2021-11-01' = {
+ parent: vm
+ name: 'Microsoft.Insights.VMDiagnosticsSettings'
+ location: location
+ properties: {
+ publisher: 'Microsoft.Azure.Diagnostics'
+ type: 'IaaSDiagnostics'
+ typeHandlerVersion: '1.5'
+ autoUpgradeMinorVersion: true
+ settings: {
+ WadCfg: {
+ DiagnosticMonitorConfiguration: {
+ overallQuotaInMB: 10000
+ DiagnosticInfrastructureLogs: {
+ scheduledTransferLogLevelFilter: 'Error'
+ }
+ PerformanceCounters: {
+ scheduledTransferPeriod: 'PT1M'
+ sinks: 'AzureMonitorSink'
+ PerformanceCounterConfiguration: [
+ {
+ counterSpecifier: '\\Processor(_Total)\\% Processor Time'
+ sampleRate: 'PT1M'
+ unit: 'percent'
+ }
+ ]
+ }
+ WindowsEventLog: {
+ scheduledTransferPeriod: 'PT5M'
+ DataSource: [
+ {
+ name: 'System!*[System[Provider[@Name=\'Microsoft Antimalware\']]]'
+ }
+ {
+ name: 'System!*[System[Provider[@Name=\'NTFS\'] and (EventID=55)]]'
+ }
+ {
+ name: 'System!*[System[Provider[@Name=\'disk\'] and (EventID=7 or EventID=52 or EventID=55)]]'
+ }
+ ]
+ }
+ }
+ SinksConfig: {
+ Sink: [
+ {
+ name: 'AzureMonitorSink'
+ AzureMonitor: {
+ ResourceId: workspaceResourceId
+ }
+ }
+ ]
+ }
+ }
+ storageAccount: storageAccountName
+ }
+ protectedSettings: {
+ storageAccountName: storageAccountName
+ storageAccountKey: listkeys(storageAccountId, '2021-08-01').key1
+ storageAccountEndPoint: 'https://${environment().suffixes.storage}'
+ }
+ }
+}
+
+resource managedIdentity 'Microsoft.Compute/virtualMachines/extensions@2021-11-01' = {
+ parent: vm
+ name: 'ManagedIdentityExtensionForWindows'
+ location: location
+ properties: {
+ publisher: 'Microsoft.ManagedIdentity'
+ type: 'ManagedIdentityExtensionForWindows'
+ typeHandlerVersion: '1.0'
+ autoUpgradeMinorVersion: true
+ settings: {
+ port: 50342
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
The following sample enables and configures the diagnostic extension on a Window
"resources": [ { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2018-10-01",
+ "apiVersion": "2021-11-01",
"name": "[parameters('vmName')]",
+ "location": "[parameters('location')]"
+ },
+ {
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2021-11-01",
+ "name": "[format('{0}/{1}', parameters('vmName'), 'Microsoft.Insights.VMDiagnosticsSettings')]",
"location": "[parameters('location')]",
- "resources": [
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "[concat(parameters('vmName'), '/Microsoft.Insights.VMDiagnosticsSettings')]",
- "apiVersion": "2015-06-15",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
- ],
- "properties": {
- "publisher": "Microsoft.Azure.Diagnostics",
- "type": "IaaSDiagnostics",
- "typeHandlerVersion": "1.5",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "WadCfg": {
- "DiagnosticMonitorConfiguration": {
- "overallQuotaInMB": 10000,
- "DiagnosticInfrastructureLogs": {
- "scheduledTransferLogLevelFilter": "Error"
- },
- "PerformanceCounters": {
- "scheduledTransferPeriod": "PT1M",
- "sinks": "AzureMonitorSink",
- "PerformanceCounterConfiguration": [
- {
- "counterSpecifier": "\\Processor(_Total)\\% Processor Time",
- "sampleRate": "PT1M",
- "unit": "percent"
- }
- ]
- },
- "WindowsEventLog": {
- "scheduledTransferPeriod": "PT5M",
- "DataSource": [
- {
- "name": "System!*[System[Provider[@Name='Microsoft Antimalware']]]"
- },
- {
- "name": "System!*[System[Provider[@Name='NTFS'] and (EventID=55)]]"
- },
- {
- "name": "System!*[System[Provider[@Name='disk'] and (EventID=7 or EventID=52 or EventID=55)]]"
- }
- ]
- }
- },
- "SinksConfig": {
- "Sink": [
- {
- "name": "AzureMonitorSink",
- "AzureMonitor":
- {
- "ResourceId": "[parameters('workspaceResourceId')]"
- }
- }
- ]
- }
- },
- "storageAccount": "[parameters('storageAccountName')]"
- },
- "protectedSettings": {
- "storageAccountName": "[parameters('storageAccountName')]",
- "storageAccountKey": "[listkeys(parameters('storageAccountId'), '2015-05-01-preview').key1]",
- "storageAccountEndPoint": "https://core.windows.net" }
+ "properties": {
+ "publisher": "Microsoft.Azure.Diagnostics",
+ "type": "IaaSDiagnostics",
+ "typeHandlerVersion": "1.5",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "WadCfg": {
+ "DiagnosticMonitorConfiguration": {
+ "overallQuotaInMB": 10000,
+ "DiagnosticInfrastructureLogs": {
+ "scheduledTransferLogLevelFilter": "Error"
+ },
+ "PerformanceCounters": {
+ "scheduledTransferPeriod": "PT1M",
+ "sinks": "AzureMonitorSink",
+ "PerformanceCounterConfiguration": [
+ {
+ "counterSpecifier": "\\Processor(_Total)\\% Processor Time",
+ "sampleRate": "PT1M",
+ "unit": "percent"
+ }
+ ]
+ },
+ "WindowsEventLog": {
+ "scheduledTransferPeriod": "PT5M",
+ "DataSource": [
+ {
+ "name": "System!*[System[Provider[@Name='Microsoft Antimalware']]]"
+ },
+ {
+ "name": "System!*[System[Provider[@Name='NTFS'] and (EventID=55)]]"
+ },
+ {
+ "name": "System!*[System[Provider[@Name='disk'] and (EventID=7 or EventID=52 or EventID=55)]]"
+ }
+ ]
+ }
+ },
+ "SinksConfig": {
+ "Sink": [
+ {
+ "name": "AzureMonitorSink",
+ "AzureMonitor": {
+ "ResourceId": "[parameters('workspaceResourceId')]"
+ }
}
+ ]
}
- ]
+ },
+ "storageAccount": "[parameters('storageAccountName')]"
+ },
+ "protectedSettings": {
+ "storageAccountName": "[parameters('storageAccountName')]",
+ "storageAccountKey": "[listkeys(parameters('storageAccountId'), '2021-08-01').key1]",
+ "storageAccountEndPoint": "[format('https://{0}', environment().suffixes.storage)]"
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
+ ]
}, {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "[concat(parameters('vmName'),'/ManagedIdentityExtensionForWindows')]",
- "apiVersion": "2018-06-01",
- "location": "[parameters('location')]",
- "properties": {
- "publisher": "Microsoft.ManagedIdentity",
- "type": "ManagedIdentityExtensionForWindows",
- "typeHandlerVersion": "1.0",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "port": 50342
- }
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2021-11-01",
+ "name": "[format('{0}/{1}', parameters('vmName'), 'ManagedIdentityExtensionForWindows')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "publisher": "Microsoft.ManagedIdentity",
+ "type": "ManagedIdentityExtensionForWindows",
+ "typeHandlerVersion": "1.0",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "port": 50342
}
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
+ ]
} ] } ``` ++ #### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "vmName": {
The following sample enables and configures the diagnostic extension on a Window
```

### Linux
+
The following sample enables and configures the diagnostic extension on a Linux Azure virtual machine. For details on the configuration, see [Linux diagnostics extension schema](../../virtual-machines/extensions/diagnostics-linux.md).

#### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the virtual machine.')
+param vmName string
+
+@description('Resource ID of the virtual machine.')
+param vmId string
+
+@description('Location for the virtual machine.')
+param location string = resourceGroup().location
+
+@description('Name of the storage account.')
+param storageAccountName string
+
+@description('Resource ID of the storage account.')
+param storageSasToken string
+
+@description('URL of the event hub.')
+param eventHubUrl string
+
+resource vm 'Microsoft.Compute/virtualMachines@2021-11-01' = {
+ name: vmName
+ location: location
+}
+
+resource vmDiagnosticsSettings 'Microsoft.Compute/virtualMachines/extensions@2021-11-01' = {
+ parent: vm
+ name: 'Microsoft.Insights.VMDiagnosticsSettings'
+ location: location
+ properties: {
+ publisher: 'Microsoft.Azure.Diagnostics'
+ type: 'LinuxDiagnostic'
+ typeHandlerVersion: '3.0'
+ autoUpgradeMinorVersion: true
+ settings: {
+ StorageAccount: storageAccountName
+ ladCfg: {
+ sampleRateInSeconds: 15
+ diagnosticMonitorConfiguration: {
+ performanceCounters: {
+ sinks: 'MyMetricEventHub,MyJsonMetricsBlob'
+ performanceCounterConfiguration: [
+ {
+ unit: 'Percent'
+ type: 'builtin'
+ counter: 'PercentProcessorTime'
+ counterSpecifier: '/builtin/Processor/PercentProcessorTime'
+ annotation: [
+ {
+ locale: 'en-us'
+ displayName: 'Aggregate CPU %utilization'
+ }
+ ]
+ condition: 'IsAggregate=TRUE'
+ class: 'Processor'
+ }
+ {
+ unit: 'Bytes'
+ type: 'builtin'
+ counter: 'UsedSpace'
+ counterSpecifier: '/builtin/FileSystem/UsedSpace'
+ annotation: [
+ {
+ locale: 'en-us'
+ displayName: 'Used disk space on /'
+ }
+ ]
+ condition: 'Name="/"'
+ class: 'Filesystem'
+ }
+ ]
+ }
+ metrics: {
+ metricAggregation: [
+ {
+ scheduledTransferPeriod: 'PT1H'
+ }
+ {
+ scheduledTransferPeriod: 'PT1M'
+ }
+ ]
+ resourceId: vmId
+ }
+ eventVolume: 'Large'
+ syslogEvents: {
+ sinks: 'MySyslogJsonBlob,MyLoggingEventHub'
+ syslogEventConfiguration: {
+ LOG_USER: 'LOG_INFO'
+ }
+ }
+ }
+ }
+ perfCfg: [
+ {
+ query: 'SELECT PercentProcessorTime, PercentIdleTime FROM SCX_ProcessorStatisticalInformation WHERE Name=\'_TOTAL\''
+ table: 'LinuxCpu'
+ frequency: 60
+ sinks: 'MyLinuxCpuJsonBlob,MyLinuxCpuEventHub'
+ }
+ ]
+ fileLogs: [
+ {
+ file: '/var/log/myladtestlog'
+ table: 'MyLadTestLog'
+ sinks: 'MyFilelogJsonBlob,MyLoggingEventHub'
+ }
+ ]
+ }
+ protectedSettings: {
+ storageAccountName: 'yourdiagstgacct'
+ storageAccountSasToken: storageSasToken
+ sinksConfig: {
+ sink: [
+ {
+ name: 'MySyslogJsonBlob'
+ type: 'JsonBlob'
+ }
+ {
+ name: 'MyFilelogJsonBlob'
+ type: 'JsonBlob'
+ }
+ {
+ name: 'MyLinuxCpuJsonBlob'
+ type: 'JsonBlob'
+ }
+ {
+ name: 'MyJsonMetricsBlob'
+ type: 'JsonBlob'
+ }
+ {
+ name: 'MyLinuxCpuEventHub'
+ type: 'EventHub'
+ sasURL: eventHubUrl
+ }
+ {
+ name: 'MyMetricEventHub'
+ type: 'EventHub'
+ sasURL: eventHubUrl
+ }
+ {
+ name: 'MyLoggingEventHub'
+ type: 'EventHub'
+ sasURL: eventHubUrl
+ }
+ ]
+ }
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
The following sample enables and configures the diagnostic extension on a Linux
"resources": [ { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2018-10-01",
+ "apiVersion": "2021-11-01",
"name": "[parameters('vmName')]",
+ "location": "[parameters('location')]"
+ },
+ {
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2021-11-01",
+ "name": "[format('{0}/{1}', parameters('vmName'), 'Microsoft.Insights.VMDiagnosticsSettings')]",
"location": "[parameters('location')]",
- "resources": [
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "[concat(parameters('vmName'), '/Microsoft.Insights.VMDiagnosticsSettings')]",
- "apiVersion": "2015-06-15",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
- ],
- "properties": {
- "publisher": "Microsoft.Azure.Diagnostics",
- "type": "LinuxDiagnostic",
- "typeHandlerVersion": "3.0",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "StorageAccount": "[parameters('storageAccountName')]",
- "ladCfg": {
- "sampleRateInSeconds": 15,
- "diagnosticMonitorConfiguration":
- {
- "performanceCounters":
- {
- "sinks": "MyMetricEventHub,MyJsonMetricsBlob",
- "performanceCounterConfiguration": [
- {
- "unit": "Percent",
- "type": "builtin",
- "counter": "PercentProcessorTime",
- "counterSpecifier": "/builtin/Processor/PercentProcessorTime",
- "annotation": [
- {
- "locale": "en-us",
- "displayName": "Aggregate CPU %utilization"
- }
- ],
- "condition": "IsAggregate=TRUE",
- "class": "Processor"
- },
- {
- "unit": "Bytes",
- "type": "builtin",
- "counter": "UsedSpace",
- "counterSpecifier": "/builtin/FileSystem/UsedSpace",
- "annotation": [
- {
- "locale": "en-us",
- "displayName": "Used disk space on /"
- }
- ],
- "condition": "Name=\"/\"",
- "class": "Filesystem"
- }
- ]
- },
- "metrics": {
- "metricAggregation": [
- {
- "scheduledTransferPeriod": "PT1H"
- },
- {
- "scheduledTransferPeriod": "PT1M"
- }
- ],
- "resourceId": "[parameters('vmId')]"
- },
- "eventVolume": "Large",
- "syslogEvents": {
- "sinks": "MySyslogJsonBlob,MyLoggingEventHub",
- "syslogEventConfiguration": {
- "LOG_USER": "LOG_INFO"
- }
- }
- }
- },
- "perfCfg": [
+ "properties": {
+ "publisher": "Microsoft.Azure.Diagnostics",
+ "type": "LinuxDiagnostic",
+ "typeHandlerVersion": "3.0",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "StorageAccount": "[parameters('storageAccountName')]",
+ "ladCfg": {
+ "sampleRateInSeconds": 15,
+ "diagnosticMonitorConfiguration": {
+ "performanceCounters": {
+ "sinks": "MyMetricEventHub,MyJsonMetricsBlob",
+ "performanceCounterConfiguration": [
+ {
+ "unit": "Percent",
+ "type": "builtin",
+ "counter": "PercentProcessorTime",
+ "counterSpecifier": "/builtin/Processor/PercentProcessorTime",
+ "annotation": [
{
- "query": "SELECT PercentProcessorTime, PercentIdleTime FROM SCX_ProcessorStatisticalInformation WHERE Name='_TOTAL'",
- "table": "LinuxCpu",
- "frequency": 60,
- "sinks": "MyLinuxCpuJsonBlob,MyLinuxCpuEventHub"
+ "locale": "en-us",
+ "displayName": "Aggregate CPU %utilization"
} ],
- "fileLogs": [
+ "condition": "IsAggregate=TRUE",
+ "class": "Processor"
+ },
+ {
+ "unit": "Bytes",
+ "type": "builtin",
+ "counter": "UsedSpace",
+ "counterSpecifier": "/builtin/FileSystem/UsedSpace",
+ "annotation": [
{
- "file": "/var/log/myladtestlog",
- "table": "MyLadTestLog",
- "sinks": "MyFilelogJsonBlob,MyLoggingEventHub"
+ "locale": "en-us",
+ "displayName": "Used disk space on /"
}
- ]
+ ],
+ "condition": "Name=\"/\"",
+ "class": "Filesystem"
+ }
+ ]
+ },
+ "metrics": {
+ "metricAggregation": [
+ {
+ "scheduledTransferPeriod": "PT1H"
},
- "protectedSettings": {
- "storageAccountName": "yourdiagstgacct",
- "storageAccountSasToken": "[parameters('storageSasToken')]",
- "sinksConfig": {
- "sink": [
- {
- "name": "MySyslogJsonBlob",
- "type": "JsonBlob"
- },
- {
- "name": "MyFilelogJsonBlob",
- "type": "JsonBlob"
- },
- {
- "name": "MyLinuxCpuJsonBlob",
- "type": "JsonBlob"
- },
- {
- "name": "MyJsonMetricsBlob",
- "type": "JsonBlob"
- },
- {
- "name": "MyLinuxCpuEventHub",
- "type": "EventHub",
- "sasURL": "[parameters('eventHubUrl')]"
- },
- {
- "name": "MyMetricEventHub",
- "type": "EventHub",
- "sasURL": "[parameters('eventHubUrl')]"
- },
- {
- "name": "MyLoggingEventHub",
- "type": "EventHub",
- "sasURL": "[parameters('eventHubUrl')]"
- }
- ]
- }
+ {
+ "scheduledTransferPeriod": "PT1M"
}
+ ],
+ "resourceId": "[parameters('vmId')]"
+ },
+ "eventVolume": "Large",
+ "syslogEvents": {
+ "sinks": "MySyslogJsonBlob,MyLoggingEventHub",
+ "syslogEventConfiguration": {
+ "LOG_USER": "LOG_INFO"
+ }
+ }
+ }
+ },
+ "perfCfg": [
+ {
+ "query": "SELECT PercentProcessorTime, PercentIdleTime FROM SCX_ProcessorStatisticalInformation WHERE Name='_TOTAL'",
+ "table": "LinuxCpu",
+ "frequency": 60,
+ "sinks": "MyLinuxCpuJsonBlob,MyLinuxCpuEventHub"
+ }
+ ],
+ "fileLogs": [
+ {
+ "file": "/var/log/myladtestlog",
+ "table": "MyLadTestLog",
+ "sinks": "MyFilelogJsonBlob,MyLoggingEventHub"
+ }
+ ]
+ },
+ "protectedSettings": {
+ "storageAccountName": "yourdiagstgacct",
+ "storageAccountSasToken": "[parameters('storageSasToken')]",
+ "sinksConfig": {
+ "sink": [
+ {
+ "name": "MySyslogJsonBlob",
+ "type": "JsonBlob"
+ },
+ {
+ "name": "MyFilelogJsonBlob",
+ "type": "JsonBlob"
+ },
+ {
+ "name": "MyLinuxCpuJsonBlob",
+ "type": "JsonBlob"
+ },
+ {
+ "name": "MyJsonMetricsBlob",
+ "type": "JsonBlob"
+ },
+ {
+ "name": "MyLinuxCpuEventHub",
+ "type": "EventHub",
+ "sasURL": "[parameters('eventHubUrl')]"
+ },
+ {
+ "name": "MyMetricEventHub",
+ "type": "EventHub",
+ "sasURL": "[parameters('eventHubUrl')]"
+ },
+ {
+ "name": "MyLoggingEventHub",
+ "type": "EventHub",
+ "sasURL": "[parameters('eventHubUrl')]"
}
+ ]
}
- ]
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
+ ]
} ] } ``` ++ #### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "vmName": {
- "value": "my-linux-vm"
- },
- "vmId": {
- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-linux-vm"
- },
- "location": {
- "value": "westus"
- },
- "storageAccountName": {
- "value": "mystorageaccount"
- },
- "storageSasToken": {
- "value": "?sv=2019-10-10&ss=bfqt&srt=sco&sp=rwdlacupx&se=2020-04-26T23:06:44Z&st=2020-04-26T15:06:44Z&spr=https&sig=1QpoTvrrEW6VN2taweUq1BsaGkhDMnFGTfWakucZl4%3D"
- },
- "eventHubUrl": {
- "value": "https://my-eventhub-namespace.servicebus.windows.net/my-eventhub?sr=my-eventhub-namespace.servicebus.windows.net%2fmy-eventhub&sig=4VEGPTg8jxUAbTcyeF2kwOr8XZdfgTdMWEQWnVaMSqw=&skn=manage"
- }
+ "vmName": {
+ "value": "my-linux-vm"
+ },
+ "vmId": {
+ "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-linux-vm"
+ },
+ "location": {
+ "value": "westus"
+ },
+ "storageAccountName": {
+ "value": "mystorageaccount"
+ },
+ "storageSasToken": {
+ "value": "?sv=2019-10-10&ss=bfqt&srt=sco&sp=rwdlacupx&se=2020-04-26T23:06:44Z&st=2020-04-26T15:06:44Z&spr=https&sig=1QpoTvrrEW6VN2taweUq1BsaGkhDMnFGTfWakucZl4%3D"
+ },
+ "eventHubUrl": {
+ "value": "https://my-eventhub-namespace.servicebus.windows.net/my-eventhub?sr=my-eventhub-namespace.servicebus.windows.net%2fmy-eventhub&sig=4VEGPTg8jxUAbTcyeF2kwOr8XZdfgTdMWEQWnVaMSqw=&skn=manage"
+ }
} } ```
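To try a sample like this one, deploy the template together with its parameter file into a resource group. A minimal Azure PowerShell sketch, assuming the files are saved locally as `diagnostics.json` and `diagnostics.parameters.json` (both file names are placeholders):

```azurepowershell
# Deploy the diagnostic extension template with its parameter file
# into an existing resource group.
New-AzResourceGroupDeployment `
  -ResourceGroupName "my-resource-group" `
  -TemplateFile ".\diagnostics.json" `
  -TemplateParameterFile ".\diagnostics.parameters.json"
```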
azure-monitor Resource Manager Data Collection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-data-collection-rules.md
The following sample creates an association between an Azure virtual machine and
"vmName": { "value": "my-windows-vm" },
- "location": {
- "value": "eastus"
+ "associationName": {
+ "value": "my-windows-vm-my-dcr"
+ },
+ "dataCollectionRuleId": {
+ "value": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/microsoft.insights/datacollectionrules/my-dcr"
}
- }
+ }
} ```
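The parameter files in this section feed an association template that isn't shown in this excerpt. A minimal Bicep sketch of such an association, assuming parameter names that match the values above (`vmName`, `associationName`, `dataCollectionRuleId`):

```bicep
param vmName string
param associationName string
param dataCollectionRuleId string

// Reference the existing virtual machine that the rule should be associated with.
resource vm 'Microsoft.Compute/virtualMachines@2021-11-01' existing = {
  name: vmName
}

// The association is an extension resource scoped to the virtual machine.
resource association 'Microsoft.Insights/dataCollectionRuleAssociations@2021-09-01-preview' = {
  name: associationName
  scope: vm
  properties: {
    dataCollectionRuleId: dataCollectionRuleId
  }
}
```

The Azure Arc-enabled server sample that follows differs mainly in scope: the association targets a `Microsoft.HybridCompute/machines` resource instead of a virtual machine.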
The following sample creates an association between an Azure Arc-enabled server
"vmName": { "value": "my-windows-vm" },
- "location": {
- "value": "eastus"
+ "associationName": {
+ "value": "my-windows-vm-my-dcr"
+ },
+ "dataCollectionRuleId": {
+ "value": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/microsoft.insights/datacollectionrules/my-dcr"
}
- }
+ }
} ```
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Previously updated : 2/23/2022 Last updated : 5/11/2022 # Supported resources for metric alerts in Azure Monitor
Here's the full list of Azure Monitor metric sources supported by the newer aler
|||--|-| |Microsoft.Aadiam/azureADMetrics | Yes | No | Azure Active Directory (metrics in private preview) | |Microsoft.ApiManagement/service | Yes | No | [API Management](../essentials/metrics-supported.md#microsoftapimanagementservice) |
+|Microsoft.App/containerApps | Yes | No | Container Apps |
|Microsoft.AppConfiguration/configurationStores |Yes | No | [App Configuration](../essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | |Microsoft.AppPlatform/spring | Yes | No | [Azure Spring Cloud](../essentials/metrics-supported.md#microsoftappplatformspring) | |Microsoft.Automation/automationAccounts | Yes| No | [Automation Accounts](../essentials/metrics-supported.md#microsoftautomationautomationaccounts) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.DataBoxEdge/dataBoxEdgeDevices | Yes | Yes | [Data Box](../essentials/metrics-supported.md#microsoftdataboxedgedataboxedgedevices) | |Microsoft.DataFactory/datafactories| Yes| No | [Data Factories V1](../essentials/metrics-supported.md#microsoftdatafactorydatafactories) | |Microsoft.DataFactory/factories |Yes | No | [Data Factories V2](../essentials/metrics-supported.md#microsoftdatafactoryfactories) |
+|Microsoft.DataProtection/backupVaults | Yes | Yes | Backup Vaults |
|Microsoft.DataShare/accounts | Yes | No | [Data Shares](../essentials/metrics-supported.md#microsoftdatashareaccounts) | |Microsoft.DBforMariaDB/servers | No | No | [DB for MariaDB](../essentials/metrics-supported.md#microsoftdbformariadbservers) | |Microsoft.DBforMySQL/servers | No | No |[DB for MySQL](../essentials/metrics-supported.md#microsoftdbformysqlservers)|
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Network/azurefirewalls | Yes | No | [Firewalls](../essentials/metrics-supported.md#microsoftnetworkazurefirewalls) | |Microsoft.Network/dnsZones | No | No | [DNS Zones](../essentials/metrics-supported.md#microsoftnetworkdnszones) | |Microsoft.Network/expressRouteCircuits | Yes | No |[ExpressRoute Circuits](../essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) |
+|Microsoft.Network/expressRouteGateways | Yes | No |[ExpressRoute Gateways](../essentials/metrics-supported.md#microsoftnetworkexpressroutegateways) |
|Microsoft.Network/expressRoutePorts | Yes | No |[ExpressRoute Direct](../essentials/metrics-supported.md#microsoftnetworkexpressrouteports) | |Microsoft.Network/loadBalancers (only for Standard SKUs)| Yes| No | [Load Balancers](../essentials/metrics-supported.md#microsoftnetworkloadbalancers) | |Microsoft.Network/natGateways| No | No | [NAT Gateways](../essentials/metrics-supported.md#microsoftnetworknatgateways) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Peering/peerings | Yes | No | [Peerings](../essentials/metrics-supported.md#microsoftpeeringpeerings) | |Microsoft.Peering/peeringServices | Yes | No | [Peering Services](../essentials/metrics-supported.md#microsoftpeeringpeeringservices) | |Microsoft.PowerBIDedicated/capacities | No | No | [Capacities](../essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) |
+|Microsoft.Purview/accounts | Yes | No | [Purview Accounts](../essentials/metrics-supported.md#microsoftpurviewaccounts) |
|Microsoft.RecoveryServices/vaults | Yes | Yes | [Recovery Services vaults](../essentials/metrics-supported.md#microsoftrecoveryservicesvaults) | |Microsoft.Relay/namespaces | Yes | No | [Relays](../essentials/metrics-supported.md#microsoftrelaynamespaces) | |Microsoft.Search/searchServices | No | No | [Search services](../essentials/metrics-supported.md#microsoftsearchsearchservices) | |Microsoft.ServiceBus/namespaces | Yes | No | [Service Bus](../essentials/metrics-supported.md#microsoftservicebusnamespaces) |
+|Microsoft.SignalRService/WebPubSub | Yes | No | [Web PubSub Service](../essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) |
|Microsoft.Sql/managedInstances | No | Yes | [SQL Managed Instances](../essentials/metrics-supported.md#microsoftsqlmanagedinstances) | |Microsoft.Sql/servers/databases | No | Yes | [SQL Databases](../essentials/metrics-supported.md#microsoftsqlserversdatabases) | |Microsoft.Sql/servers/elasticPools | No | Yes | [SQL Elastic Pools](../essentials/metrics-supported.md#microsoftsqlserverselasticpools) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Synapse/workspaces/bigDataPools | Yes | No | [Synapse Analytics Apache Spark Pools](../essentials/metrics-supported.md#microsoftsynapseworkspacesbigdatapools) | |Microsoft.Synapse/workspaces/sqlPools | Yes | No | [Synapse Analytics SQL Pools](../essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) | |Microsoft.VMWareCloudSimple/virtualMachines | Yes | No | [CloudSimple Virtual Machines](../essentials/metrics-supported.md#microsoftvmwarecloudsimplevirtualmachines) |
+|Microsoft.Web/containerApps | Yes | No | Container Apps |
|Microsoft.Web/hostingEnvironments/multiRolePools | Yes | No | [App Service Environment Multi-Role Pools](../essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools)| |Microsoft.Web/hostingEnvironments/workerPools | Yes | No | [App Service Environment Worker Pools](../essentials/metrics-supported.md#microsoftwebhostingenvironmentsworkerpools)| |Microsoft.Web/serverfarms | Yes | No | [App Service Plans](../essentials/metrics-supported.md#microsoftwebserverfarms)|
azure-monitor Resource Manager Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-log.md
description: Sample Azure Resource Manager templates to deploy Azure Monitor log
Previously updated : 2/23/2022 Last updated : 05/11/2022 # Resource Manager template samples for log alert rules in Azure Monitor+ This article includes samples of [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create and configure log query alerts in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template. [!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)] ## Template for all resource types (from version 2021-08-01)+ The following sample creates a rule that can target any resource.
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of the alert')
+@minLength(1)
+param alertName string
+
+@description('Location of the alert')
+@minLength(1)
+param location string
+
+@description('Description of alert')
+param alertDescription string = 'This is a log alert'
+
+@description('Severity of alert {0,1,2,3,4}')
+@allowed([
+ 0
+ 1
+ 2
+ 3
+ 4
+])
+param alertSeverity int = 3
+
+@description('Specifies whether the alert is enabled')
+param isEnabled bool = true
+
+@description('Specifies whether the alert will automatically resolve')
+param autoMitigate bool = true
+
+@description('Specifies whether to check linked storage and fail creation if the storage was not found')
+param checkWorkspaceAlertsStorageConfigured bool = false
+
+@description('Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz')
+@minLength(1)
+param resourceId string
+
+@description('The log query to run for the alert evaluation.')
+@minLength(1)
+param query string
+
+@description('Name of the measure column used in the alert evaluation.')
+param metricMeasureColumn string
+
+@description('Name of the resource ID column used to target the alerts.')
+param resourceIdColumn string
+
+@description('Operator comparing the current value with the threshold value.')
+@allowed([
+ 'Equals'
+ 'GreaterThan'
+ 'GreaterThanOrEqual'
+ 'LessThan'
+ 'LessThanOrEqual'
+])
+param operator string = 'GreaterThan'
+
+@description('The threshold value at which the alert is activated.')
+param threshold int = 0
+
+@description('The number of periods to check in the alert evaluation.')
+param numberOfEvaluationPeriods int = 1
+
+@description('The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods).')
+param minFailingPeriodsToAlert int = 1
+
+@description('How the data that is collected should be combined over time.')
+@allowed([
+ 'Average'
+ 'Minimum'
+ 'Maximum'
+ 'Total'
+ 'Count'
+])
+param timeAggregation string = 'Average'
+
+@description('Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format.')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+ 'PT6H'
+ 'PT12H'
+ 'PT24H'
+])
+param windowSize string = 'PT5M'
+
+@description('How often the alert is evaluated, represented in ISO 8601 duration format.')
+@allowed([
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+])
+param evaluationFrequency string = 'PT5M'
+
+@description('Mute actions for the chosen period of time (in ISO 8601 duration format) after the alert is fired.')
+@allowed([
+ 'PT1M'
+ 'PT5M'
+ 'PT15M'
+ 'PT30M'
+ 'PT1H'
+ 'PT6H'
+ 'PT12H'
+ 'PT24H'
+])
+param muteActionsDuration string
+
+@description('The ID of the action group that is triggered when the alert is activated or deactivated')
+param actionGroupId string = ''
+
+resource alert 'Microsoft.Insights/scheduledQueryRules@2021-08-01' = {
+ name: alertName
+ location: location
+ tags: {}
+ properties: {
+ description: alertDescription
+ severity: alertSeverity
+ enabled: isEnabled
+ scopes: [
+ resourceId
+ ]
+ evaluationFrequency: evaluationFrequency
+ windowSize: windowSize
+ criteria: {
+ allOf: [
+ {
+ query: query
+ metricMeasureColumn: metricMeasureColumn
+ resourceIdColumn: resourceIdColumn
+ dimensions: []
+ operator: operator
+ threshold: threshold
+ timeAggregation: timeAggregation
+ failingPeriods: {
+ numberOfEvaluationPeriods: numberOfEvaluationPeriods
+ minFailingPeriodsToAlert: minFailingPeriodsToAlert
+ }
+ }
+ ]
+ }
+ muteActionsDuration: muteActionsDuration
+ autoMitigate: autoMitigate
+ checkWorkspaceAlertsStorageConfigured: checkWorkspaceAlertsStorageConfigured
+ actions: {
+ actionGroups: [
+ actionGroupId
+ ]
+ customProperties: {
+ key1: 'value1'
+ key2: 'value2'
+ }
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the alert"
- }
- },
- "location": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Location of the alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "This is a metric alert",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "alertSeverity": {
- "type": "int",
- "defaultValue": 3,
- "allowedValues": [
- 0,
- 1,
- 2,
- 3,
- 4
- ],
- "metadata": {
- "description": "Severity of alert {0,1,2,3,4}"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert is enabled"
- }
- },
- "autoMitigate": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether the alert will automatically resolve"
- }
- },
- "checkWorkspaceAlertsStorageConfigured": {
- "type": "bool",
- "defaultValue": false,
- "metadata": {
- "description": "Specifies whether to check linked storage and fail creation if the storage was not found"
- }
- },
- "resourceId": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz"
- }
- },
- "query": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "metricMeasureColumn": {
- "type": "string",
- "metadata": {
- "description": "Name of the measure column used in the alert evaluation."
- }
- },
- "resourceIdColumn": {
- "type": "string",
- "metadata": {
- "description": "Name of the resource ID column used in the alert targeting the alerts."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterThan",
- "allowedValues": [
- "Equals",
- "NotEquals",
- "GreaterThan",
- "GreaterThanOrEqual",
- "LessThan",
- "LessThanOrEqual"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "threshold": {
- "type": "string",
- "defaultValue": "0",
- "metadata": {
- "description": "The threshold value at which the alert is activated."
- }
- },
- "numberOfEvaluationPeriods": {
- "type": "string",
- "defaultValue": "1",
- "metadata": {
- "description": "The number of periods to check in the alert evaluation."
- }
- },
- "minFailingPeriodsToAlert": {
- "type": "string",
- "defaultValue": "1",
- "metadata": {
- "description": "The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods)."
- }
- },
- "timeAggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Minimum",
- "Maximum",
- "Total",
- "Count"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": null,
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H",
- "PT6H",
- "PT12H",
- "PT24H"
- ],
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
- }
- },
- "evaluationFrequency": {
- "type": "string",
- "defaultValue": "PT5M",
- "allowedValues": [
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H"
- ],
- "metadata": {
- "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
- }
- },
- "muteActionsDuration": {
- "type": "string",
- "allowedValues": [
- "PT1M",
- "PT5M",
- "PT15M",
- "PT30M",
- "PT1H",
- "PT6H",
- "PT12H",
- "PT24H"
- ],
- "metadata": {
- "description": "Mute actions for the chosen period of time (in ISO 8601 duration format) after the alert is fired."
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the alert"
+ }
},
- "variables": { },
- "resources": [
- {
- "name": "[parameters('alertName')]",
- "type": "Microsoft.Insights/scheduledQueryRules",
- "location": "[parameters('location')]",
- "apiVersion": "2021-08-01",
- "tags": {},
- "properties": {
- "description": "[parameters('alertDescription')]",
- "severity": "[parameters('alertSeverity')]",
- "enabled": "[parameters('isEnabled')]",
- "scopes": ["[parameters('resourceId')]"],
- "evaluationFrequency":"[parameters('evaluationFrequency')]",
- "windowSize": "[parameters('windowSize')]",
- "criteria": {
- "allOf": [
- {
- "query": "[parameters('query')]",
- "metricMeasureColumn": "[parameters('metricMeasureColumn')]",
- "resourceIdColumn": "[parameters('resourceIdColumn')]",
- "dimensions":[],
- "operator": "[parameters('operator')]",
- "threshold" : "[parameters('threshold')]",
- "timeAggregation": "[parameters('timeAggregation')]",
- "failingPeriods": {
- "numberOfEvaluationPeriods": "[parameters('numberOfEvaluationPeriods')]",
- "minFailingPeriodsToAlert": "[parameters('minFailingPeriodsToAlert')]"
- }
- }
- ]
- },
- "muteActionsDuration": "[parameters('muteActionsDuration')]",
- "autoMitigate": "[parameters('autoMitigate')]",
- "checkWorkspaceAlertsStorageConfigured": "[parameters('checkWorkspaceAlertsStorageConfigured')]",
- "actions": {
- "actionGroups": "[parameters('actionGroupId')]",
- "customProperties": {
- "key1": "value1",
- "key2": "value2"
- }
- }
+ "location": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Location of the alert"
+ }
+ },
+ "alertDescription": {
+ "type": "string",
+ "defaultValue": "This is a metric alert",
+ "metadata": {
+ "description": "Description of alert"
+ }
+ },
+ "alertSeverity": {
+ "type": "int",
+ "defaultValue": 3,
+ "allowedValues": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4
+ ],
+ "metadata": {
+ "description": "Severity of alert {0,1,2,3,4}"
+ }
+ },
+ "isEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert is enabled"
+ }
+ },
+ "autoMitigate": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert will automatically resolve"
+ }
+ },
+ "checkWorkspaceAlertsStorageConfigured": {
+ "type": "bool",
+ "defaultValue": false,
+ "metadata": {
+ "description": "Specifies whether to check linked storage and fail creation if the storage was not found"
+ }
+ },
+ "resourceId": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Full Resource ID of the resource emitting the metric that will be used for the comparison. For example /subscriptions/00000000-0000-0000-0000-0000-00000000/resourceGroups/ResourceGroupName/providers/Microsoft.compute/virtualMachines/VM_xyz"
+ }
+ },
+ "query": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "Name of the metric used in the comparison to activate the alert."
+ }
+ },
+ "metricMeasureColumn": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the measure column used in the alert evaluation."
+ }
+ },
+ "resourceIdColumn": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the resource ID column used in the alert targeting the alerts."
+ }
+ },
+ "operator": {
+ "type": "string",
+ "defaultValue": "GreaterThan",
+ "allowedValues": [
+ "Equals",
+ "GreaterThan",
+ "GreaterThanOrEqual",
+ "LessThan",
+ "LessThanOrEqual"
+ ],
+ "metadata": {
+ "description": "Operator comparing the current value with the threshold value."
+ }
+ },
+ "threshold": {
+ "type": "int",
+ "defaultValue": 0,
+ "metadata": {
+ "description": "The threshold value at which the alert is activated."
+ }
+ },
+ "numberOfEvaluationPeriods": {
+ "type": "int",
+ "defaultValue": 1,
+ "metadata": {
+ "description": "The number of periods to check in the alert evaluation."
+ }
+ },
+ "minFailingPeriodsToAlert": {
+ "type": "int",
+ "defaultValue": 1,
+ "metadata": {
+ "description": "The number of unhealthy periods to alert on (must be lower or equal to numberOfEvaluationPeriods)."
+ }
+ },
+ "timeAggregation": {
+ "type": "string",
+ "defaultValue": "Average",
+ "allowedValues": [
+ "Average",
+ "Minimum",
+ "Maximum",
+ "Total",
+ "Count"
+ ],
+ "metadata": {
+ "description": "How the data that is collected should be combined over time."
+ }
+ },
+ "windowSize": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H",
+ "PT6H",
+ "PT12H",
+ "PT24H"
+ ],
+ "metadata": {
+ "description": "Period of time used to monitor alert activity based on the threshold. Must be between one minute and one day. ISO 8601 duration format."
+ }
+ },
+ "evaluationFrequency": {
+ "type": "string",
+ "defaultValue": "PT5M",
+ "allowedValues": [
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H"
+ ],
+ "metadata": {
+ "description": "how often the metric alert is evaluated represented in ISO 8601 duration format"
+ }
+ },
+ "muteActionsDuration": {
+ "type": "string",
+ "allowedValues": [
+ "PT1M",
+ "PT5M",
+ "PT15M",
+ "PT30M",
+ "PT1H",
+ "PT6H",
+ "PT12H",
+ "PT24H"
+ ],
+ "metadata": {
+ "description": "Mute actions for the chosen period of time (in ISO 8601 duration format) after the alert is fired."
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated or deactivated"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/scheduledQueryRules",
+ "apiVersion": "2021-08-01",
+ "name": "[parameters('alertName')]",
+ "location": "[parameters('location')]",
+ "tags": {},
+ "properties": {
+ "description": "[parameters('alertDescription')]",
+ "severity": "[parameters('alertSeverity')]",
+ "enabled": "[parameters('isEnabled')]",
+ "scopes": [
+ "[parameters('resourceId')]"
+ ],
+ "evaluationFrequency": "[parameters('evaluationFrequency')]",
+ "windowSize": "[parameters('windowSize')]",
+ "criteria": {
+ "allOf": [
+ {
+ "query": "[parameters('query')]",
+ "metricMeasureColumn": "[parameters('metricMeasureColumn')]",
+ "resourceIdColumn": "[parameters('resourceIdColumn')]",
+ "dimensions": [],
+ "operator": "[parameters('operator')]",
+ "threshold": "[parameters('threshold')]",
+ "timeAggregation": "[parameters('timeAggregation')]",
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": "[parameters('numberOfEvaluationPeriods')]",
+ "minFailingPeriodsToAlert": "[parameters('minFailingPeriodsToAlert')]"
+ }
}
+ ]
+ },
+ "muteActionsDuration": "[parameters('muteActionsDuration')]",
+ "autoMitigate": "[parameters('autoMitigate')]",
+ "checkWorkspaceAlertsStorageConfigured": "[parameters('checkWorkspaceAlertsStorageConfigured')]",
+ "actions": {
+ "actionGroups": [
+ "[parameters('actionGroupId')]"
+ ],
+ "customProperties": {
+ "key1": "value1",
+ "key2": "value2"
+ }
}
- ]
+ }
+ }
+ ]
} ``` ++ ### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "value": "New Alert"
- },
- "location": {
- "value": "eastus"
- },
- "alertDescription": {
- "value": "New alert created via template"
- },
- "alertSeverity": {
- "value":3
- },
- "isEnabled": {
- "value": true
- },
- "resourceId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourceGroup-name/providers/Microsoft.Compute/virtualMachines/replace-with-resource-name"
- },
- "query": {
- "value": "Perf | where ObjectName == \"Processor\" and CounterName == \"% Processor Time\""
- },
- "metricMeasureColumn": {
- "value": "AggregatedValue"
- },
- "operator": {
- "value": "GreaterThan"
- },
- "threshold": {
- "value": "80"
- },
- "timeAggregation": {
- "value": "Average"
- },
- "actionGroupId": {
- "value": "/subscriptions/replace-with-subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "alertName": {
+ "value": "New Alert"
+ },
+ "location": {
+ "value": "eastus"
+ },
+ "alertDescription": {
+ "value": "New alert created via template"
+ },
+ "alertSeverity": {
+ "value":3
+ },
+ "isEnabled": {
+ "value": true
+ },
+ "resourceId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/replace-with-resourceGroup-name/providers/Microsoft.Compute/virtualMachines/replace-with-resource-name"
+ },
+ "query": {
+ "value": "Perf | where ObjectName == \"Processor\" and CounterName == \"% Processor Time\""
+ },
+ "metricMeasureColumn": {
+ "value": "AggregatedValue"
+ },
+ "operator": {
+ "value": "GreaterThan"
+ },
+ "threshold": {
+ "value": "80"
+ },
+ "timeAggregation": {
+ "value": "Average"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/replace-with-subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Insights/actionGroups/replace-with-action-group"
}
+ }
} ``` ## Number of results template (up to version 2018-04-16)+ The following sample creates a [number of results alert rule](../alerts/alerts-unified-log.md#result-count). ### Notes
The following sample creates a [number of results alert rule](../alerts/alerts-u
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Resource ID of the Log Analytics workspace.')
+param sourceId string = ''
+
+@description('Location for the alert. Must be the same location as the workspace.')
+param location string = ''
+
+@description('The ID of the action group that is triggered when the alert is activated.')
+param actionGroupId string = ''
+
+resource logQueryAlert 'Microsoft.Insights/scheduledQueryRules@2018-04-16' = {
+ name: 'Sample log query alert'
+ location: location
+ properties: {
+ description: 'Sample log query alert'
+ enabled: 'true'
+ source: {
+ query: 'Event | where EventLevelName == "Error" | summarize count() by Computer'
+ dataSourceId: sourceId
+ queryType: 'ResultCount'
+ }
+ schedule: {
+ frequencyInMinutes: 15
+ timeWindowInMinutes: 60
+ }
+ action: {
+ 'odata.type': 'Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.Microsoft.AppInsights.Nexus.DataContracts.Resources.ScheduledQueryRules.AlertingAction'
+ severity: '4'
+ aznsAction: {
+ actionGroup: array(actionGroupId)
+ emailSubject: 'Alert mail subject'
+ customWebhookPayload: '{ "alertname":"#alertrulename", "IncludeSearchResults":true }'
+ }
+ trigger: {
+ thresholdOperator: 'GreaterThan'
+ threshold: 1
+ }
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "sourceId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Resource ID of the Log Analytics workspace."
- }
- },
- "location": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Location for the alert. Must be the same location as the workspace."
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated."
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "sourceId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "Resource ID of the Log Analytics workspace."
+ }
},
- "resources":[
- {
- "type":"Microsoft.Insights/scheduledQueryRules",
- "name":"Sample log query alert",
- "apiVersion": "2018-04-16",
- "location": "[parameters('location')]",
- "properties":{
- "description": "Sample log query alert",
- "enabled": "true",
- "source": {
- "query": "Event | where EventLevelName == \"Error\" | summarize count() by Computer",
- "dataSourceId": "[parameters('sourceId')]",
- "queryType":"ResultCount"
- },
- "schedule":{
- "frequencyInMinutes": 15,
- "timeWindowInMinutes": 60
- },
- "action":{
- "odata.type": "Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.Microsoft.AppInsights.Nexus.DataContracts.Resources.ScheduledQueryRules.AlertingAction",
- "severity": "4",
- "aznsAction":{
- "actionGroup": "[array(parameters('actionGroupId'))]",
- "emailSubject": "Alert mail subject",
- "customWebhookPayload":"{ \"alertname\":\"#alertrulename\", \"IncludeSearchResults\":true }"
- },
- "trigger":{
- "thresholdOperator": "GreaterThan",
- "threshold": 1
- }
- }
- }
+ "location": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "Location for the alert. Must be the same location as the workspace."
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/scheduledQueryRules",
+ "apiVersion": "2018-04-16",
+ "name": "Sample log query alert",
+ "location": "[parameters('location')]",
+ "properties": {
+ "description": "Sample log query alert",
+ "enabled": "true",
+ "source": {
+ "query": "Event | where EventLevelName == \"Error\" | summarize count() by Computer",
+ "dataSourceId": "[parameters('sourceId')]",
+ "queryType": "ResultCount"
+ },
+ "schedule": {
+ "frequencyInMinutes": 15,
+ "timeWindowInMinutes": 60
+ },
+ "action": {
+ "odata.type": "Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.Microsoft.AppInsights.Nexus.DataContracts.Resources.ScheduledQueryRules.AlertingAction",
+ "severity": "4",
+ "aznsAction": {
+ "actionGroup": "[array(parameters('actionGroupId'))]",
+ "emailSubject": "Alert mail subject",
+ "customWebhookPayload": "{ \"alertname\":\"#alertrulename\", \"IncludeSearchResults\":true }"
+ },
+ "trigger": {
+ "thresholdOperator": "GreaterThan",
+ "threshold": 1
+ }
}
- ]
+ }
+ }
+ ]
} ``` ++ ### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "sourceId": {
- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/bw-samples-arm/providers/microsoft.operationalinsights/workspaces/bw-arm-01"
- },
- "location": {
- "value": "westus"
- },
- "actionGroupId": {
- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/bw-samples-arm/providers/microsoft.insights/actionGroups/ARM samples group 01"
- }
+ "sourceId": {
+ "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/bw-samples-arm/providers/microsoft.operationalinsights/workspaces/bw-arm-01"
+ },
+ "location": {
+ "value": "westus"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/bw-samples-arm/providers/microsoft.insights/actionGroups/ARM samples group 01"
+ }
} } ``` ## Metric measurement template (up to version 2018-04-16)+ The following sample creates a [metric measurement alert rule](../alerts/alerts-unified-log.md#calculation-of-a-value). ### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Resource ID of the Log Analytics workspace.')
+param sourceId string = ''
+
+@description('Location for the alert. Must be the same location as the workspace.')
+param location string = ''
+
+@description('The ID of the action group that is triggered when the alert is activated.')
+param actionGroupId string = ''
+
+resource metricMeasurementLogQueryAlert 'Microsoft.Insights/scheduledQueryRules@2018-04-16' = {
+ name: 'Sample metric measurement log query alert'
+ location: location
+ properties: {
+ description: 'Sample metric measurement query alert rule'
+ enabled: 'true'
+ source: {
+ query: 'Event | where EventLevelName == "Error" | summarize AggregatedValue = count() by bin(TimeGenerated,1h), Computer'
+ dataSourceId: sourceId
+ queryType: 'ResultCount'
+ }
+ schedule: {
+ frequencyInMinutes: 15
+ timeWindowInMinutes: 60
+ }
+ action: {
+ 'odata.type': 'Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.Microsoft.AppInsights.Nexus.DataContracts.Resources.ScheduledQueryRules.AlertingAction'
+ severity: '4'
+ aznsAction: {
+ actionGroup: array(actionGroupId)
+ emailSubject: 'Alert mail subject'
+ }
+ trigger: {
+ thresholdOperator: 'GreaterThan'
+ threshold: 10
+ metricTrigger: {
+ thresholdOperator: 'Equal'
+ threshold: 1
+ metricTriggerType: 'Consecutive'
+ metricColumn: 'Computer'
+ }
+ }
+ }
+ }
+}
+
+```
+
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "sourceId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Resource ID of the Log Analytics workspace."
- }
- },
- "location": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Location for the alert. Must be the same location as the workspace."
- }
- },
- "actionGroupId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The ID of the action group that is triggered when the alert is activated."
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "sourceId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "Resource ID of the Log Analytics workspace."
+ }
},
- "resources":[
- {
- "type":"Microsoft.Insights/scheduledQueryRules",
- "name":"Sample metric measurement log query alert",
- "apiVersion": "2018-04-16",
- "location": "[parameters('location')]",
- "properties":{
- "description": "Sample metric measurement query alert rule",
- "enabled": "true",
- "source": {
- "query": "Event | where EventLevelName == \"Error\" | summarize AggregatedValue = count() by bin(TimeGenerated,1h), Computer",
- "dataSourceId": "[parameters('sourceId')]",
- "queryType":"ResultCount"
- },
- "schedule":{
- "frequencyInMinutes": 15,
- "timeWindowInMinutes": 60
- },
- "action":{
- "odata.type": "Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.Microsoft.AppInsights.Nexus.DataContracts.Resources.ScheduledQueryRules.AlertingAction",
- "severity": "4",
- "aznsAction":{
- "actionGroup": "[array(parameters('actionGroupId'))]",
- "emailSubject": "Alert mail subject"
- },
- "trigger":{
- "thresholdOperator": "GreaterThan",
- "threshold": 10,
- "metricTrigger":{
- "thresholdOperator": "Equal",
- "threshold": 1,
- "metricTriggerType": "Consecutive",
- "metricColumn": "Computer"
- }
- }
- }
+ "location": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "Location for the alert. Must be the same location as the workspace."
+ }
+ },
+ "actionGroupId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The ID of the action group that is triggered when the alert is activated."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/scheduledQueryRules",
+ "apiVersion": "2018-04-16",
+ "name": "Sample metric measurement log query alert",
+ "location": "[parameters('location')]",
+ "properties": {
+ "description": "Sample metric measurement query alert rule",
+ "enabled": "true",
+ "source": {
+ "query": "Event | where EventLevelName == \"Error\" | summarize AggregatedValue = count() by bin(TimeGenerated,1h), Computer",
+ "dataSourceId": "[parameters('sourceId')]",
+ "queryType": "ResultCount"
+ },
+ "schedule": {
+ "frequencyInMinutes": 15,
+ "timeWindowInMinutes": 60
+ },
+ "action": {
+ "odata.type": "Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.Microsoft.AppInsights.Nexus.DataContracts.Resources.ScheduledQueryRules.AlertingAction",
+ "severity": "4",
+ "aznsAction": {
+ "actionGroup": "[array(parameters('actionGroupId'))]",
+ "emailSubject": "Alert mail subject"
+ },
+ "trigger": {
+ "thresholdOperator": "GreaterThan",
+ "threshold": 10,
+ "metricTrigger": {
+ "thresholdOperator": "Equal",
+ "threshold": 1,
+ "metricTriggerType": "Consecutive",
+ "metricColumn": "Computer"
}
+ }
}
- ]
+ }
+ }
+ ]
} ``` ++ ### Parameter file ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "sourceId": {
- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/bw-samples-arm/providers/microsoft.operationalinsights/workspaces/bw-arm-01"
- },
- "location": {
- "value": "westus"
- },
- "actionGroupId": {
- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/bw-samples-arm/providers/microsoft.insights/actionGroups/ARM samples group 01"
- }
+ "sourceId": {
+ "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/bw-samples-arm/providers/microsoft.operationalinsights/workspaces/bw-arm-01"
+ },
+ "location": {
+ "value": "westus"
+ },
+ "actionGroupId": {
+ "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/bw-samples-arm/providers/microsoft.insights/actionGroups/ARM samples group 01"
+ }
} } ``` ## Next steps
-* [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
-* [Learn more about alert rules](./alerts-overview.md).
+- [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
+- [Learn more about alert rules](./alerts-overview.md).
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
When you migrate to a workspace-based resource, no data is transferred from your
Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). The migration process is **permanent, and cannot be reversed**. Once you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. However, after migration you can change the target workspace as often as needed.
-<!-- This note duplicates information in pricing.md. Understanding workspace-based usage and costs has been added as a migration prerequisite.
-> [!NOTE]
-> Data ingestion and retention for workspace-based Application Insights resources are [billed through the Log Analytics workspace](../logs/manage-cost-storage.md) where the data is located. Pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights. If you've selected data retention greater than 90 days on data ingested into the Classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until that data exceeds the retention period. [Learn more](./pricing.md#workspace-based-application-insights) about billing for workspace-based Application Insights resources.
>- If you don't need to migrate an existing resource, and instead want to create a new workspace-based Application Insights resource use the [workspace-based resource creation guide](create-workspace-resource.md). ## Pre-requisites
If you don't need to migrate an existing resource, and instead want to create a
Once the migration is complete, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to configure data archiving to a storage account or streaming to Azure Event Hubs. > [!CAUTION]
- > Diagnostics settings uses a different export format/schema than continuous export, migrating will break any existing integrations with Stream Analytics.
+ > * Diagnostic settings use a different export format/schema than continuous export; migrating will break any existing integrations with Stream Analytics.
+ > * Diagnostic settings export may increase costs. ([more information](export-telemetry.md#diagnostic-settings-based-export))
- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored once you migrate your Application Insights resource.
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
From within the Application Insights resource pane, select **Properties** > **Ch
The legacy continuous export functionality is not supported for workspace-based resources. Instead, select **Diagnostic settings** > **add diagnostic setting** from within your Application Insights resource. You can select all tables, or a subset of tables to archive to a storage account, or to stream to an Azure Event Hub. > [!NOTE]
-> There are currently no additional charges for the telemetry export. Pricing information for this feature will be available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). Prior to the start of billing, notifications will be sent. Should you choose to continue using telemetry export after the notice period, you will be billed at the applicable rate.
+> * Diagnostic settings export may increase costs. ([more information](export-telemetry.md#diagnostic-settings-based-export))
+> * Pricing information for this feature will be available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). Prior to the start of billing, notifications will be sent. Should you choose to continue using telemetry export after the notice period, you will be billed at the applicable rate.
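A diagnostic setting for telemetry export can also be deployed as an extension resource instead of through the portal. A minimal Bicep sketch, assuming an existing workspace-based resource named `my-app-insights` and a `storageAccountId` parameter (both placeholders):

```bicep
@description('Resource ID of the destination storage account.')
param storageAccountId string

// Reference the existing workspace-based Application Insights resource.
resource appInsights 'Microsoft.Insights/components@2020-02-02' existing = {
  name: 'my-app-insights'
}

// Archive all log categories emitted by the resource to a storage account.
resource exportSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'export-telemetry'
  scope: appInsights
  properties: {
    storageAccountId: storageAccountId
    logs: [
      {
        categoryGroup: 'allLogs'
        enabled: true
      }
    ]
  }
}
```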
## Next steps
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
Want to keep your telemetry for longer than the standard retention period? Or process it in some specialized way? Continuous Export is ideal for this purpose. The events you see in the Application Insights portal can be exported to storage in Microsoft Azure in JSON format. From there, you can download your data and write whatever code you need to process it. > [!IMPORTANT]
-> Continuous export has been deprecated. When [migrating to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](#diagnostic-settings-based-export) for exporting telemetry.
-
-> [!NOTE]
-> Continuous export is only supported for classic Application Insights resources. [Workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry).
->
-
+> * Continuous export has been deprecated and is only supported for classic Application Insights resources.
+> * When [migrating to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](#diagnostic-settings-based-export) for exporting telemetry. All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry).
+> * Diagnostic settings export may increase costs. ([more information](export-telemetry.md#diagnostic-settings-based-export))
Before you set up continuous export, there are some alternatives you might want to consider:
When you open your blob store, you'll see a container with a set of blob files.
![Inspect the blob store with a suitable tool](./media/export-telemetry/04-data.png) + The date and time are UTC and are when the telemetry was deposited in the store - not the time it was generated. So if you write code to download the data, it can move linearly through the data. Here's the form of the path:
On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdin
## Diagnostic settings based export
-Diagnostic settings based export uses a different schema than continuous export. It also supports features that continuous export doesn't like:
+Diagnostic settings export is preferred because it provides additional features.
+ > [!div class="checklist"]
+ > * Azure storage accounts with virtual networks, firewalls, and private links
+ > * Export to Event Hubs
-* Azure storage accounts with virtual networks, firewalls, and private links.
-* Export to Event Hubs.
+Diagnostic settings export further differs from continuous export in the following ways:
+* Updated schema.
+* Telemetry data is sent as it arrives instead of in batched uploads.
+ > [!IMPORTANT]
+ > Additional costs may be incurred due to an increase in calls to the destination, such as a storage account.
To migrate to diagnostic settings-based export:
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
Resources in Azure automatically generate [resource logs](essentials/platform-lo
There is a cost for collecting resource logs in your Log Analytics workspace, so only select those log categories with valuable data. Collecting all categories will incur cost for collecting data with little value. See the monitoring documentation for each Azure service for a description of categories and recommendations for which to collect. Also see [Azure Monitor best practices - cost management](logs/cost-logs.md) for recommendations on optimizing the cost of your log collection.
-See [Create diagnostic setting to collect resource logs and metrics in Azure](essentials/diagnostic-settings.md#create-in-azure-portal) to create a diagnostic setting for an Azure resource.
+See [Create diagnostic setting to collect resource logs and metrics in Azure](essentials/diagnostic-settings.md#create-diagnostic-settings) to create a diagnostic setting for an Azure resource.
Since a diagnostic setting needs to be created for each Azure resource, use Azure Policy to automatically create a diagnostic setting as each resource is created. Each Azure resource type has a unique set of categories that need to be listed in the diagnostic setting. Because of this, each resource type requires a separate policy definition. Some resource types have built-in policy definitions that you can assign without modification. For other resource types, you need to create a custom definition.
-See [Create at scale using Azure Policy](essentials/diagnostic-settings.md#create-at-scale-using-azure-policy) for a process for creating policy definitions for a particular Azure service and details for creating diagnostic settings at scale.
+See [Create diagnostic settings at scale using Azure Policy](essentials/diagnostic-settings-policy.md) for a process for creating policy definitions for a particular Azure service and details for creating diagnostic settings at scale.
### Enable insights Insights provide a specialized monitoring experience for a particular service. They use the same data already being collected, such as platform metrics and resource logs, but they provide custom workbooks that assist you in identifying and analyzing the most critical data. Most insights are available in the Azure portal with no configuration required, other than collecting resource logs for that service. See the monitoring documentation for each Azure service to determine whether it has an insight and whether it requires configuration.
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
na Previously updated : 04/27/2021 Last updated : 05/09/2022
azure-monitor Diagnostic Settings Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings-policy.md
+
+ Title: Create diagnostic settings at scale using Azure Policy
+description: Use Azure Policy to create diagnostic settings in Azure Monitor at scale as each Azure resource is created.
++++ Last updated : 05/09/2022++
+# Create diagnostic settings at scale using Azure Policy
+Since a [diagnostic setting](diagnostic-settings.md) needs to be created for each monitored Azure resource, Azure Policy can be used to automatically create a diagnostic setting as each resource is created. Each Azure resource type has a unique set of categories that need to be listed in the diagnostic setting. Because of this fact, each resource type requires a separate policy definition. Some resource types have built-in policy definitions that you can assign without modification. For other resource types, you need to create a custom definition.
+
+With the addition of resource log category groups, you can now choose options that dynamically update as the log categories change. For more information, see [diagnostic settings sources](diagnostic-settings.md#sources). All resource types have the "All" category. Some have the "Audit" category.
+
+## Built-in policy definitions for Azure Monitor
+There are two built-in policy definitions for each resource type: one to send to a Log Analytics workspace and another to send to an event hub. If you need only one location, assign that policy for the resource type. If you need both, assign both policy definitions for the resource.
+
+For example, the following image shows the built-in diagnostic setting policy definitions for Azure Data Lake Analytics.
+
+![Partial screenshot from the Azure Policy Definitions page showing two built-in diagnostic setting policy definitions for Data Lake Analytics.](media/diagnostic-settings-policy/built-in-diagnostic-settings.png)
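To locate these built-in definitions from the command line instead of the portal, a query along the following lines may help (a sketch; the exact display names vary by service):

```azurecli
# List built-in policy definitions related to diagnostic settings.
az policy definition list \
  --query "[?policyType=='BuiltIn' && contains(displayName || '', 'Diagnostic Settings')].displayName" \
  --output table
```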
+
+## Custom policy definitions
+For resource types that don't have a built-in policy, you need to create a custom policy definition. You could do this manually in the Azure portal by copying an existing built-in policy and then modifying it for your resource type. It's more efficient, though, to create the policy programmatically by using a script in the PowerShell Gallery.
+
+The script [Create-AzDiagPolicy](https://www.powershellgallery.com/packages/Create-AzDiagPolicy) creates policy files for a particular resource type that you can install by using PowerShell or the Azure CLI. Use the following procedure to create a custom policy definition for diagnostic settings:
+
+1. Ensure that you have [Azure PowerShell](/powershell/azure/install-az-ps) installed.
+2. Install the script by using the following command:
+
+ ```azurepowershell
+ Install-Script -Name Create-AzDiagPolicy
+ ```
+
+3. Run the script by using the parameters to specify where to send the logs. You'll be prompted to specify a subscription and resource type.
+
+ For example, to create a policy definition that sends logs to a Log Analytics workspace and an event hub, use the following command:
+
+ ```azurepowershell
+ Create-AzDiagPolicy.ps1 -ExportLA -ExportEH -ExportDir ".\PolicyFiles"
+ ```
+
+ Alternatively, you can specify a subscription and resource type in the command. For example, to create a policy definition that sends logs to a Log Analytics workspace and an event hub for SQL Server databases, use the following command:
+
+ ```azurepowershell
+ Create-AzDiagPolicy.ps1 -SubscriptionID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -ResourceType Microsoft.Sql/servers/databases -ExportLA -ExportEH -ExportDir ".\PolicyFiles"
+ ```
+
+4. The script creates separate folders for each policy definition. Each folder contains three files named *azurepolicy.json*, *azurepolicy.rules.json*, and *azurepolicy.parameters.json*. If you want to create the policy manually in the Azure portal, you can copy and paste the contents of *azurepolicy.json* because it includes the entire policy definition. Use the other two files with PowerShell or the Azure CLI to create the policy definition from a command line.
+
+ The following examples show how to install the policy definition from both PowerShell and the Azure CLI. Each example includes metadata to specify a category of **Monitoring** to group the new policy definition with the built-in policy definitions.
+
+ ```azurepowershell
+ New-AzPolicyDefinition -name "Deploy Diagnostic Settings for SQL Server database to Log Analytics workspace" -policy .\Apply-Diag-Settings-LA-Microsoft.Sql-servers-databases\azurepolicy.rules.json -parameter .\Apply-Diag-Settings-LA-Microsoft.Sql-servers-databases\azurepolicy.parameters.json -mode All -Metadata '{"category":"Monitoring"}'
+ ```
+
+ ```azurecli
+ az policy definition create --name 'deploy-diag-setting-sql-database--workspace' --display-name 'Deploy Diagnostic Settings for SQL Server database to Log Analytics workspace' --rules 'Apply-Diag-Settings-LA-Microsoft.Sql-servers-databases\azurepolicy.rules.json' --params 'Apply-Diag-Settings-LA-Microsoft.Sql-servers-databases\azurepolicy.parameters.json' --subscription 'AzureMonitor_Docs' --mode All
+ ```
+
+## Initiative
+Rather than create an assignment for each policy definition, a common strategy is to create an initiative that includes the policy definitions to create diagnostic settings for each Azure service. Create an assignment between the initiative and a management group, subscription, or resource group, depending on how you manage your environment. This strategy offers the following benefits:
+
+- Create a single assignment for the initiative instead of multiple assignments for each resource type. Use the same initiative for multiple management groups, subscriptions, or resource groups.
+- Modify the initiative when you need to add a new resource type or destination. For example, your initial requirements might be to send data only to a Log Analytics workspace, but later you want to add an event hub. Modify the initiative instead of creating new assignments.
+
+For details on creating an initiative, see [Create and assign an initiative definition](../../governance/policy/tutorials/create-and-manage.md#create-and-assign-an-initiative-definition). Consider the following recommendations:
+
+- Set **Category** to **Monitoring** to group it with related built-in and custom policy definitions.
+- Instead of specifying the details for the Log Analytics workspace and the event hub for policy definitions included in the initiative, use a common initiative parameter. This parameter allows you to easily specify a common value for all policy definitions and change that value if necessary.
+
+![Screenshot that shows settings for initiative definition.](media/diagnostic-settings-policy/initiative-definition.png)
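As an illustrative sketch of creating such an initiative from the command line (the file contents, names, and the shared `logAnalytics` parameter are hypothetical; see the linked tutorial for the full workflow):

```azurecli
# Create an initiative (policy set definition) from a JSON file that lists
# the diagnostic setting policy definitions, exposing one shared parameter
# for the Log Analytics workspace that each member definition consumes.
az policy set-definition create \
  --name 'deploy-diag-settings' \
  --display-name 'Deploy diagnostic settings to Log Analytics workspace' \
  --definitions initiative-definitions.json \
  --params '{"logAnalytics":{"type":"String","metadata":{"displayName":"Log Analytics workspace"}}}'
```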
+
+## Assignment
+Assign the initiative to an Azure management group, subscription, or resource group, depending on the scope of your resources to monitor. A [management group](../../governance/management-groups/overview.md) is useful for scoping policy, especially if your organization has multiple subscriptions.
+
+![Screenshot of the settings for the Basics tab in the Assign initiative section of the Diagnostic settings to Log Analytics workspace in the Azure portal.](media/diagnostic-settings-policy/initiative-assignment.png)
+
+By using initiative parameters, you can specify the workspace or any other details once for all of the policy definitions in the initiative.
+
+![Screenshot that shows initiative parameters on the Parameters tab.](media/diagnostic-settings-policy/initiative-parameters.png)
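A corresponding assignment might look like the following sketch (scope, identity, role, and parameter values are placeholders; a managed identity and location are required because the deployIfNotExists policies deploy resources):

```azurecli
# Assign the initiative at subscription scope, supplying the workspace
# once through the shared initiative parameter.
az policy assignment create \
  --name 'diag-settings-assignment' \
  --policy-set-definition 'deploy-diag-settings' \
  --scope "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  --params '{"logAnalytics":{"value":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace"}}' \
  --mi-system-assigned \
  --identity-scope "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  --role 'Log Analytics Contributor' \
  --location eastus
```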
+
+## Remediation
+The initiative will apply to each resource as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to existing resources, so you can create diagnostic settings for any resources that were already created.
+
+When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. See [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md) for details on the remediation.
+
+![Screenshot that shows initiative remediation for a Log Analytics workspace.](media/diagnostic-settings-policy/initiative-remediation.png)
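A remediation task can also be created from the command line. As a sketch, for an initiative assignment you remediate one member definition at a time, identified by its definition reference ID (the names here are hypothetical):

```azurecli
# Remediate existing resources for one policy definition in the initiative.
az policy remediation create \
  --name 'remediate-sql-diag-settings' \
  --policy-assignment 'diag-settings-assignment' \
  --definition-reference-id 'deploy-diag-settings-sql-db' \
  --resource-discovery-mode ReEvaluateCompliance
```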
+
+## Troubleshooting
+
+### Metric category is not supported
+
+When deploying a diagnostic setting, you receive an error message similar to *Metric category 'xxxx' is not supported*. You may receive this error even though your previous deployment succeeded.
+
+The problem occurs when using a Resource Manager template, REST API, Azure CLI, or Azure PowerShell. Diagnostic settings created via the Azure portal are not affected as only the supported category names are presented.
+
+The problem is caused by a recent change in the underlying API. Metric categories other than 'AllMetrics' are not supported and never were except for a few specific Azure services. In the past, other category names were ignored when deploying a diagnostic setting. The Azure Monitor backend redirected these categories to 'AllMetrics'. As of February 2021, the backend was updated to specifically confirm the metric category provided is accurate. This change has caused some deployments to fail.
+
+If you receive this error, update your deployments to replace any metric category names with 'AllMetrics'. If the deployment previously added multiple categories, keep only a single entry that references 'AllMetrics'. If you continue to have the problem, contact Azure support through the Azure portal.
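For example, a deployment that previously passed a service-specific metric category would be updated to the following shape (a sketch; the resource IDs are placeholders):

```azurecli
# 'AllMetrics' is the only metric category accepted by most services.
az monitor diagnostic-settings create \
  --name mysetting \
  --resource "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault" \
  --workspace "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace" \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```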
+
+### Setting disappears due to non-ASCII characters in resourceID
+
+Diagnostic settings do not support resourceIDs with non-ASCII characters (for example, Preproducción). Since you cannot rename resources in Azure, your only option is to create a new resource without the non-ASCII characters. If the characters are in a resource group, you can move the resources under it to a new one. Otherwise, you'll need to recreate the resource.
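If the offending characters are only in the resource group name, the move can be scripted. A sketch with hypothetical names:

```azurecli
# Create an ASCII-only resource group and move the resource into it.
az group create --name preproduccion-ascii --location westeurope
az resource move \
  --destination-group preproduccion-ascii \
  --ids "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/Preproducción/providers/Microsoft.KeyVault/vaults/mykeyvault"
```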
+
+## Next steps
+
+- [Read more about Azure platform Logs](./platform-logs-overview.md)
+- [Read more about diagnostic settings](./diagnostic-settings.md)
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Title: Create diagnostic settings to send Azure Monitor platform metrics and logs to different destinations
+ Title: Diagnostic settings in Azure Monitor
description: Send Azure Monitor platform metrics and logs to Azure Monitor Logs, Azure storage, or Azure Event Hubs using a diagnostic setting.
Last updated 03/07/2022
-# Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations
-
+# Diagnostic settings in Azure Monitor
This article provides details on creating and configuring diagnostic settings to send Azure platform metrics and logs to different destinations. [Platform metrics](./metrics-supported.md) are sent automatically to [Azure Monitor Metrics](./data-platform-metrics.md) by default and without configuration.
Any destinations for the diagnostic setting must be created before creating the
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional.<br><br>Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You have to enable the *Allow trusted Microsoft services to bypass this firewall* setting in Event Hubs, so that the Azure Monitor (diagnostic settings) service is granted access to your Event Hubs resources. |
| Partner integrations | Varies by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details. |
-## Create in Azure portal
+## Create diagnostic settings
+You can create and edit diagnostic settings using multiple methods.
+
+# [Azure portal](#tab/portal)
You can configure diagnostic settings in the Azure portal either from the Azure Monitor menu or from the menu for the resource.
You can configure diagnostic settings in the Azure portal either from the Azure
After a few moments, the new setting appears in your list of settings for this resource, and logs are streamed to the specified destinations as new event data is generated. It may take up to 15 minutes between when an event is emitted and when it [appears in a Log Analytics workspace](../logs/data-ingestion-time.md).
-## Create using PowerShell
+# [PowerShell](#tab/powershell)
Use the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet to create a diagnostic setting with [Azure PowerShell](../powershell-samples.md). See the documentation for this cmdlet for descriptions of its parameters.
Following is an example PowerShell cmdlet to create a diagnostic setting using a
Set-AzDiagnosticSetting -Name KeyVault-Diagnostics -ResourceId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault -Category AuditEvent -MetricCategory AllMetrics -Enabled $true -StorageAccountId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount -WorkspaceId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/oi-default-east-us/providers/microsoft.operationalinsights/workspaces/myworkspace -EventHubAuthorizationRuleId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey ```
-## Create using Azure CLI
+# [CLI](#tab/cli)
Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command to create a diagnostic setting with [Azure CLI](/cli/azure/monitor). See the documentation for this command for descriptions of its parameters. > [!IMPORTANT] > You cannot use this method for the Azure Activity log. Instead, use [Create diagnostic setting in Azure Monitor using a Resource Manager template](./resource-manager-diagnostic-settings.md) to create a Resource Manager template and deploy it with CLI.
-Following is an example CLI command to create a diagnostic setting using all three destinations. The syntax is slightly difference depending on your client.
+Following is an example CLI command to create a diagnostic setting using all three destinations. The syntax is slightly different depending on your client.
+
+**CMD client**
-# [CMD](#tab/CMD)
```azurecli az monitor diagnostic-settings create ^ --name KeyVault-Diagnostics ^
az monitor diagnostic-settings create ^
--workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace ^ --event-hub-rule /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey ```
-# [PowerShell](#tab/PowerShell)
+
+**PowerShell client**
+ ```azurecli az monitor diagnostic-settings create ` --name KeyVault-Diagnostics `
az monitor diagnostic-settings create `
--workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace ` --event-hub-rule /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey ```
-# [Bash](#tab/Bash)
+
+**Bash client**
+ ```azurecli az monitor diagnostic-settings create \ --name KeyVault-Diagnostics \
az monitor diagnostic-settings create \
--workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace \ --event-hub-rule /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey ```-
-## Create using Resource Manager template
+# [Resource Manager](#tab/arm)
See [Resource Manager template samples for diagnostic settings in Azure Monitor](./resource-manager-diagnostic-settings.md) to create or update diagnostic settings with a Resource Manager template.
-## Create using REST API
+# [REST API](#tab/api)
See [Diagnostic Settings](/rest/api/monitor/diagnosticsettings) to create or update diagnostic settings using the [Azure Monitor REST API](/rest/api/monitor/).
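One way to exercise that API without writing code is `az rest`, which handles Azure Resource Manager authentication for you. A sketch (the resource IDs are placeholders and the API version is an assumption to verify against the REST reference):

```azurecli
# Create or update a diagnostic setting by calling the REST API directly.
az rest --method put \
  --url "https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault/providers/Microsoft.Insights/diagnosticSettings/mysetting?api-version=2021-05-01-preview" \
  --body '{"properties":{"workspaceId":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace","logs":[{"categoryGroup":"allLogs","enabled":true}]}}'
```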
-## Create at scale using Azure Policy
-
-Since a diagnostic setting needs to be created for each Azure resource, Azure Policy can be used to automatically create a diagnostic setting as each resource is created. Each Azure resource type has a unique set of categories that need to be listed in the diagnostic setting. Because of this fact, each resource type requires a separate policy definition. Some resource types have built-in policy definitions that you can assign without modification. For other resource types, you need to create a custom definition.
-
-With the addition of resource log category groups, you can now choose options that dynamically update as the log categories change. For more information, see [diagnostic settings sources](#sources) listed earlier in this article. All resource types have the "All" category. Some have the "Audit" category.
-
-### Built-in policy definitions for Azure Monitor
-There are two built-in policy definitions for each resource type: one to send to a Log Analytics workspace and another to send to an event hub. If you need only one location, assign that policy for the resource type. If you need both, assign both policy definitions for the resource.
-
-For example, the following image shows the built-in diagnostic setting policy definitions for Azure Data Lake Analytics.
-
-![Partial screenshot from the Azure Policy Definitions page showing two built-in diagnostic setting policy definitions for Data Lake Analytics.](media/diagnostic-settings/builtin-diagnostic-settings.png)
-
-### Custom policy definitions
-For resource types that don't have a built-in policy, you need to create a custom policy definition. You could do this manually in the Azure portal by copying an existing built-in policy and then modifying it for your resource type. It's more efficient, though, to create the policy programmatically by using a script in the PowerShell Gallery.
-
-The script [Create-AzDiagPolicy](https://www.powershellgallery.com/packages/Create-AzDiagPolicy) creates policy files for a particular resource type that you can install by using PowerShell or the Azure CLI. Use the following procedure to create a custom policy definition for diagnostic settings:
-
-1. Ensure that you have [Azure PowerShell](/powershell/azure/install-az-ps) installed.
-2. Install the script by using the following command:
-
- ```azurepowershell
- Install-Script -Name Create-AzDiagPolicy
- ```
-
-3. Run the script by using the parameters to specify where to send the logs. You'll be prompted to specify a subscription and resource type.
-
- For example, to create a policy definition that sends logs to a Log Analytics workspace and an event hub, use the following command:
-
- ```azurepowershell
- Create-AzDiagPolicy.ps1 -ExportLA -ExportEH -ExportDir ".\PolicyFiles"
- ```
-
- Alternatively, you can specify a subscription and resource type in the command. For example, to create a policy definition that sends logs to a Log Analytics workspace and an event hub for SQL Server databases, use the following command:
-
- ```azurepowershell
- Create-AzDiagPolicy.ps1 -SubscriptionID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -ResourceType Microsoft.Sql/servers/databases -ExportLA -ExportEH -ExportDir ".\PolicyFiles"
- ```
-
-5. The script creates separate folders for each policy definition. Each folder contains three files named *azurepolicy.json*, *azurepolicy.rules.json*, and *azurepolicy.parameters.json*. If you want to create the policy manually in the Azure portal, you can copy and paste the contents of *azurepolicy.json* because it includes the entire policy definition. Use the other two files with PowerShell or the Azure CLI to create the policy definition from a command line.
-
- The following examples show how to install the policy definition from both PowerShell and the Azure CLI. Each example includes metadata to specify a category of **Monitoring** to group the new policy definition with the built-in policy definitions.
-
- ```azurepowershell
- New-AzPolicyDefinition -name "Deploy Diagnostic Settings for SQL Server database to Log Analytics workspace" -policy .\Apply-Diag-Settings-LA-Microsoft.Sql-servers-databases\azurepolicy.rules.json -parameter .\Apply-Diag-Settings-LA-Microsoft.Sql-servers-databases\azurepolicy.parameters.json -mode All -Metadata '{"category":"Monitoring"}'
- ```
-
- ```azurecli
- az policy definition create --name 'deploy-diag-setting-sql-database--workspace' --display-name 'Deploy Diagnostic Settings for SQL Server database to Log Analytics workspace' --rules 'Apply-Diag-Settings-LA-Microsoft.Sql-servers-databases\azurepolicy.rules.json' --params 'Apply-Diag-Settings-LA-Microsoft.Sql-servers-databases\azurepolicy.parameters.json' --subscription 'AzureMonitor_Docs' --mode All
- ```
-
-### Initiative
-Rather than create an assignment for each policy definition, a common strategy is to create an initiative that includes the policy definitions to create diagnostic settings for each Azure service. Create an assignment between the initiative and a management group, subscription, or resource group, depending on how you manage your environment. This strategy offers the following benefits:
--- Create a single assignment for the initiative instead of multiple assignments for each resource type. Use the same initiative for multiple monitoring groups, subscriptions, or resource groups.-- Modify the initiative when you need to add a new resource type or destination. For example, your initial requirements might be to send data only to a Log Analytics workspace, but later you want to add an event hub. Modify the initiative instead of creating new assignments.-
-For details on creating an initiative, see [Create and assign an initiative definition](../../governance/policy/tutorials/create-and-manage.md#create-and-assign-an-initiative-definition). Consider the following recommendations:
--- Set **Category** to **Monitoring** to group it with related built-in and custom policy definitions.-- Instead of specifying the details for the Log Analytics workspace and the event hub for policy definitions included in the initiative, use a common initiative parameter. This parameter allows you to easily specify a common value for all policy definitions and change that value if necessary.-
-![Screenshot that shows settings for initiative definition.](media/diagnostic-settings/initiative-definition.png)
-
-### Assignment
-Assign the initiative to an Azure management group, subscription, or resource group, depending on the scope of your resources to monitor. A [management group](../../governance/management-groups/overview.md) is useful for scoping policy, especially if your organization has multiple subscriptions.
-
-![Screenshot of the settings for the Basics tab in the Assign initiative section of the Diagnostic settings to Log Analytics workspace in the Azure portal.](media/diagnostic-settings/initiative-assignment.png)
-
-By using initiative parameters, you can specify the workspace or any other details once for all of the policy definitions in the initiative.
-
-![Screenshot that shows initiative parameters on the Parameters tab.](media/diagnostic-settings/initiative-parameters.png)
-
-### Remediation
-The initiative will apply to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to existing resources, so you can create diagnostic settings for any resources that were already created.
-
-When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. See [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md) for details on the remediation.
-
-![Screenshot that shows initiative remediation for a Log Analytics workspace.](media/diagnostic-settings/initiative-remediation.png)
+# [Azure Policy](#tab/policy)
+See [Create diagnostic settings at scale using Azure Policy](diagnostic-settings-policy.md) for details on using Azure Policy to create diagnostic settings at scale.
+ ## Troubleshooting ### Metric category is not supported
azure-monitor Platform Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md
There are different options for viewing and analyzing the different Azure platfo
- View the Activity log in the Azure portal and access events from PowerShell and CLI. See [View the Activity log](../essentials/activity-log.md#view-the-activity-log) for details. - View Azure Active Directory Security and Activity reports in the Azure portal. See [What are Azure Active Directory reports?](../../active-directory/reports-monitoring/overview-reports.md) for details.-- Resource logs are automatically generated by supported Azure resources, but they aren't available to be viewed unless you send them to a [destination](#destinations).
+- Resource logs are automatically generated by supported Azure resources, but they aren't available to be viewed unless you create a [diagnostic setting](#diagnostic-settings).
-## Destinations
-You can send platform logs to one or more of the destinations in the following table depending on your monitoring requirements. Configure destinations for platform logs by [creating a Diagnostic setting](../essentials/diagnostic-settings.md).
+## Diagnostic settings
+Create a [diagnostic setting](../essentials/diagnostic-settings.md) to send platform logs to one of the following destinations for analysis or other purposes. Resource logs require a diagnostic setting, since there's no other way to view them.
| Destination | Description |
|:---|:---|
| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. |
| Event hub | Send platform log data outside of Azure, for example to a third-party SIEM or custom telemetry platform. |
| Azure storage | Archive the logs for audit or backup. |
+| [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you are already using one of the partners. |
- For details on creating a diagnostic setting for activity log or resource logs, see [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md). - For details on creating a diagnostic setting for Azure Active Directory logs, see the following articles.
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
Previously updated : 07/17/2019 Last updated : 05/09/2022 # Azure resource logs
-Azure resource logs are [platform logs](../essentials/platform-logs-overview.md) that provide insight into operations that were performed within an Azure resource. The content of resource logs varies by the Azure service and resource type. Resource logs are not collected by default. You must create a diagnostic setting for each Azure resource to send its resource logs to a Log Analytics workspace to use with [Azure Monitor Logs](../logs/data-platform-logs.md), Azure Event Hubs to forward outside of Azure, or to Azure Storage for archiving.
-
-See [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md) for details on creating a diagnostic setting and [Deploy Azure Monitor at scale using Azure Policy](../best-practices.md) for details on using Azure Policy to automatically create a diagnostic setting for each Azure resource you create.
+Azure resource logs are [platform logs](../essentials/platform-logs-overview.md) that provide insight into operations that were performed within an Azure resource. The content of resource logs varies by the Azure service and resource type. Resource logs are not collected by default. This article describes the [diagnostic setting](diagnostic-settings.md) required for each Azure resource to send its resource logs to different destinations.
## Send to Log Analytics workspace

Send resource logs to a Log Analytics workspace to enable the features of [Azure Monitor Logs](../logs/data-platform-logs.md), which include the following:
See [Create diagnostic settings to send platform logs and metrics to different d
- Azure diagnostics - All data is written to the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
- Resource-specific - Data is written to an individual table for each category of the resource.
-### Azure diagnostics mode
-In this mode, all data from any diagnostic setting will be collected in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. This is the legacy method used today by most Azure services. Since multiple resource types send data to the same table, its schema is the superset of the schemas of all the different data types being collected. See [AzureDiagnostics reference](/azure/azure-monitor/reference/tables/azurediagnostics) for details on the structure of this table and how it works with this potentially large number of columns.
-
-Consider the following example where diagnostic settings are being collected in the same workspace for the following data types:
--- Audit logs of service 1 (having a schema consisting of columns A, B, and C) -- Error logs of service 1 (having a schema consisting of columns D, E, and F) -- Audit logs of service 2 (having a schema consisting of columns G, H, and I) -
-The AzureDiagnostics table will look as follows:
-
-| ResourceProvider | Category | A | B | C | D | E | F | G | H | I |
-| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
-| Microsoft.Service1 | AuditLogs | x1 | y1 | z1 | | | | | | |
-| Microsoft.Service1 | ErrorLogs | | | | q1 | w1 | e1 | | | |
-| Microsoft.Service2 | AuditLogs | | | | | | | j1 | k1 | l1 |
-| Microsoft.Service1 | ErrorLogs | | | | q2 | w2 | e2 | | | |
-| Microsoft.Service2 | AuditLogs | | | | | | | j3 | k3 | l3 |
-| Microsoft.Service1 | AuditLogs | x5 | y5 | z5 | | | | | | |
-| ... |
### Resource-specific

In this mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting. This method is recommended since it makes it much easier to work with the data in log queries, provides better discoverability of schemas and their structure, improves performance across both ingestion latency and query times, and lets you grant Azure RBAC rights on a specific table. All Azure services will eventually migrate to the Resource-Specific mode.
The example in the Azure diagnostics mode section below would result in three tables being created:
| Service2 | AuditLogs | j3 | k3 | l3| | ... |
+### Azure diagnostics mode
+In this mode, all data from any diagnostic setting will be collected in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. This is the legacy method used today by most Azure services. Since multiple resource types send data to the same table, its schema is the superset of the schemas of all the different data types being collected. See [AzureDiagnostics reference](/azure/azure-monitor/reference/tables/azurediagnostics) for details on the structure of this table and how it works with this potentially large number of columns.
+
+Consider the following example where diagnostic settings are being collected in the same workspace for the following data types:
+
+- Audit logs of service 1 (having a schema consisting of columns A, B, and C)
+- Error logs of service 1 (having a schema consisting of columns D, E, and F)
+- Audit logs of service 2 (having a schema consisting of columns G, H, and I)
+
+The AzureDiagnostics table will look as follows:
+
+| ResourceProvider | Category | A | B | C | D | E | F | G | H | I |
+| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
+| Microsoft.Service1 | AuditLogs | x1 | y1 | z1 | | | | | | |
+| Microsoft.Service1 | ErrorLogs | | | | q1 | w1 | e1 | | | |
+| Microsoft.Service2 | AuditLogs | | | | | | | j1 | k1 | l1 |
+| Microsoft.Service1 | ErrorLogs | | | | q2 | w2 | e2 | | | |
+| Microsoft.Service2 | AuditLogs | | | | | | | j3 | k3 | l3 |
+| Microsoft.Service1 | AuditLogs | x5 | y5 | z5 | | | | | | |
+| ... |
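The practical difference shows up at query time: with resource-specific tables you query a small dedicated table, while with Azure diagnostics mode you filter the shared table. A sketch using the Azure CLI's log-analytics extension (the workspace GUID and service names are placeholders):

```azurecli
# Filter the shared AzureDiagnostics table down to one provider and category.
az monitor log-analytics query \
  --workspace "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  --analytics-query "AzureDiagnostics | where ResourceProvider == 'MICROSOFT.SERVICE1' and Category == 'AuditLogs' | take 10"
```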
### Select the collection mode Most Azure resources will write data to the workspace in either **Azure Diagnostic** or **Resource-Specific mode** without giving you a choice. See the [documentation for each service](./resource-logs-schema.md) for details on which mode it uses. All Azure services will eventually use Resource-Specific mode. As part of this transition, some resources will allow you to select a mode in the diagnostic setting. Specify resource-specific mode for any new diagnostic settings since this makes the data easier to manage and may help you to avoid complex migrations at a later date.
Following is sample output data from Event Hubs for a resource log:
``` ## Send to Azure Storage
-Send resource logs to Azure storage to retain it for archiving. Once you have created the diagnostic setting, a storage container is created in the storage account as soon as an event occurs in one of the enabled log categories. The blobs within the container use the following naming convention:
+Send resource logs to Azure storage to retain them for archiving. Once you have created the diagnostic setting, a storage container is created in the storage account as soon as an event occurs in one of the enabled log categories.
+
+> [!NOTE]
+> An alternate strategy for archiving is to send the resource log to a Log Analytics workspace with an [archive policy](../logs/data-retention-archive.md).
+
+The blobs within the container use the following naming convention:
``` insights-logs-{log category name}/resourceId=/SUBSCRIPTIONS/{subscription ID}/RESOURCEGROUPS/{resource group name}/PROVIDERS/{resource provider name}/{resource type}/{resource name}/y={four-digit numeric year}/m={two-digit numeric month}/d={two-digit numeric day}/h={two-digit 24-hour clock hour}/m=00/PT1H.json
Each PT1H.json blob contains a JSON blob of events that occurred within the hour
Within the PT1H.json file, each event is stored with the following format. This will use a common top-level schema but be unique for each Azure service as described in [Resource logs schema](./resource-logs-schema.md).
+> [!NOTE]
+> Logs are written to the blob that corresponds to the time the log was generated, not the time it was received. This means that at the turn of the hour, both the previous hour's and the current hour's blobs could be receiving new writes.
++ ``` JSON {"time": "2016-07-01T00:00:37.2040000Z","systemId": "46cdbb41-cb9c-4f3d-a5b4-1d458d827ff1","category": "NetworkSecurityGroupRuleCounter","resourceId": "/SUBSCRIPTIONS/s1id1234-5679-0123-4567-890123456789/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/TESTNSG","operationName": "NetworkSecurityGroupCounters","properties": {"vnetResourceGuid": "{12345678-9012-3456-7890-123456789012}","subnetPrefix": "10.3.0.0/24","macAddress": "000123456789","ruleName": "/subscriptions/ s1id1234-5679-0123-4567-890123456789/resourceGroups/testresourcegroup/providers/Microsoft.Network/networkSecurityGroups/testnsg/securityRules/default-allow-rdp","direction": "In","type": "allow","matchedConnections": 1988}} ```
+## Azure Monitor partner integrations
+Resource logs can also be sent to partner solutions that are fully integrated into Azure. See [Azure Monitor partner integrations](../../partner-solutions/overview.md) for a list of these solutions and details on configuring them.
+ ## Next steps * [Read more about resource logs](../essentials/platform-logs-overview.md).
azure-monitor Rest Api Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/rest-api-walkthrough.md
Title: Azure Monitoring REST API walkthrough description: How to authenticate requests and use the Azure Monitor REST API to retrieve available metric definitions and metric values. Previously updated : 03/19/2018 Last updated : 05/09/2022 # Azure Monitoring REST API walkthrough This article shows you how to perform authentication so your code can use the [Microsoft Azure Monitor REST API Reference](/rest/api/monitor/).
azure-portal Azure Portal Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards.md
Title: Create a dashboard in the Azure portal description: This article describes how to create and customize a dashboard in the Azure portal. Previously updated : 10/19/2021 Last updated : 05/05/2022 # Create a dashboard in the Azure portal
This example shows how to create a new private dashboard with an assigned name.
:::image type="content" source="media/azure-portal-dashboards/dashboard-name.png" alt-text="Screenshot of an empty grid with the Tile Gallery.":::
-1. To save the dashboard as is, select **Done customizing** in the page header. Or, continue to Step 2 of the next section to add tiles and save your dashboard.
+1. To save the dashboard as is, select **Save** in the page header. Or, continue to Step 2 of the next section to add tiles and save your dashboard.
The dashboard view now shows your new dashboard. Select the arrow next to the dashboard name to see dashboards available to you. The list might include dashboards that other users have created and shared.
To add tiles to a dashboard, follow these steps:
1. If desired, [resize or rearrange](#resize-or-rearrange-tiles) your tiles.
-1. To save your changes, select **Save** in the page header. You can also preview the changes without saving by selecting **Preview** in the page header. This preview mode also allows you to see how [filters](#set-and-override-dashboard-filters) affect your tiles. From the preview screen, you can select **Save** to keep the changes, **Discard** to remove them, or **Edit** to go back to the editing options and make further changes.
+1. To save your changes, select **Save**. You can also preview the changes without saving by selecting **Preview**. This preview mode also allows you to see how [filters](#set-and-override-dashboard-filters) affect your tiles. From the preview screen, you can select **Save** to keep the changes, **Cancel** to remove them, or **Edit** to go back to the editing options and make further changes.
:::image type="content" source="media/azure-portal-dashboards/dashboard-save.png" alt-text="Screenshot of the Preview, Save, and Discard options.":::
To customize the tile:
Data on the dashboard shows activity and refreshes based on the global filters. Some tiles will allow you to select a different time span for just one tile. To do so, follow these steps:
-1. Select **Customize tile data** from the context menu or from the ![filter icon](./media/azure-portal-dashboards/dashboard-filter.png) in the upper left corner of the tile.
+1. Select **Configure tile settings** from the context menu or from the ![filter icon](./media/azure-portal-dashboards/dashboard-filter.png) in the upper left corner of the tile.
:::image type="content" source="media/azure-portal-dashboards/dashboard-customize-tile-data.png" alt-text="Screenshot of tile context menu.":::
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md
Title: Bicep modules description: Describes how to define a module in a Bicep file, and how to use module scopes. Previously updated : 04/08/2022 Last updated : 05/10/2022 # Bicep modules
-Bicep enables you to organize deployments into modules. A module is just a Bicep file that is deployed from another Bicep file. With modules, you improve the readability of your Bicep files by encapsulating complex details of your deployment. You can also easily reuse modules for different deployments.
+Bicep enables you to organize deployments into modules. A module is a Bicep file (or an ARM JSON template) that is deployed from another Bicep file. With modules, you improve the readability of your Bicep files by encapsulating complex details of your deployment. You can also easily reuse modules for different deployments.
-To share modules with other people in your organization, create a [template spec](../templates/template-specs.md), [public registry](https://github.com/Azure/bicep-registry-modules), or [private registry](private-module-registry.md). Template specs and modules in the registry are only available to users with the correct permissions.
+To share modules with other people in your organization, create a [template spec](../bicep/template-specs.md), [public registry](https://github.com/Azure/bicep-registry-modules), or [private registry](private-module-registry.md). Template specs and modules in the registry are only available to users with the correct permissions.
> [!TIP] > The choice between module registry and template specs is mostly a matter of preference. There are a few things to consider when you choose between the two:
So, a simple, real-world example would look like:
::: code language="bicep" source="~/azure-docs-bicep-samples/syntax-samples/modules/local-file-definition.bicep" :::
+You can also use an ARM JSON template as a module:
++ Use the symbolic name to reference the module in another part of the Bicep file. For example, you can use the symbolic name to get the output from a module. The symbolic name may contain a-z, A-Z, 0-9, and underscore (`_`). The name can't start with a number. A module can't have the same name as a parameter, variable, or resource.
-The path can be either a local file or a file in a registry. For more information, see [Path to module](#path-to-module).
+The path can be either a local file or a file in a registry. The local file can be either a Bicep file or an ARM JSON template. For more information, see [Path to module](#path-to-module).
The **name** property is required. It becomes the name of the nested deployment resource in the generated template.
azure-vmware Netapp Files With Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
Title: Integrate Azure NetApp Files with Azure VMware Solution
+ Title: Attach Azure NetApp Files to Azure VMware Solution VMs
description: Use Azure NetApp Files with Azure VMware Solution VMs to migrate and sync data across on-premises servers, Azure VMware Solution VMs, and cloud infrastructures. Previously updated : 06/08/2021 Last updated : 05/10/2022
-# Integrate Azure NetApp Files with Azure VMware Solution
+# Attach Azure NetApp Files to Azure VMware Solution VMs
[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure service for migrating and running the most demanding enterprise file workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. In this article, you'll set up, test, and verify the Azure NetApp Files volume as a file share for Azure VMware Solution workloads using the Network File System (NFS) protocol. The guest operating system runs inside virtual machines (VMs) accessing Azure NetApp Files volumes.
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 03/21/2022 Last updated : 05/11/2022
Archive tier supports the following clients:
| Workloads | Preview | Generally available |
| --- | --- | --- |
-| SQL Server in Azure Virtual Machines/ SAP HANA in Azure Virtual Machines | None | All regions, except West US 3, West India, UAE North, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West. |
-| Azure Virtual Machines | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, South East Asia, Australia East, Central India, North Europe, Australia South East, France Central, France South, Japan West, Korea Central, Korea South, UAE North, Germany West Central, Norway East. | Australia East, South Central US, West Central US, Southeast Asia, Central India. |
+| SQL Server in Azure Virtual Machines/ SAP HANA in Azure Virtual Machines | None | All regions, except West US 3, West India, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West. |
+| Azure Virtual Machines | US Gov Virginia, US Gov Texas, US Gov Arizona, UAE North, China North 2, China East 2. | All public regions, except West US 3, West India, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West, UAE North. |
## How does Azure Backup move recovery points to the Vault-archive tier?
cognitive-services Howtodetectfacesinimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/HowtoDetectFacesinImage.md
Title: "Call the detect API - Face"
+ Title: "Call the Detect API - Face"
description: This guide demonstrates how to use face detection to extract attributes like age, emotion, or head pose from a given image.
ms.devlang: csharp
-# Call the detect API
+# Call the Detect API
This guide demonstrates how to use the face detection API to extract attributes like age, emotion, or head pose from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs.
In this guide, you learned how to use the various functionalities of face detect
## Related articles - [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)-- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
cognitive-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/find-similar-faces.md
+
+ Title: "Find similar faces"
+
+description: Use the Face service to find similar faces (face search by image).
+++++++ Last updated : 05/05/2022++++
+# Find similar faces
+
+The Find Similar operation matches a target face against a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
+
+This guide demonstrates how to use the Find Similar feature in the different language SDKs. The following sample code assumes you have already authenticated a Face client object. For details on how to do this, follow a [quickstart](../Quickstarts/client-libraries.md).
+
+## Set up sample URL
+
+This guide uses remote images that are accessed by URL. Save a reference to the following URL string. All of the images accessed in this guide are located at this URL path.
+
+```
+"https://csdx.blob.core.windows.net/resources/Face/Images/"
+```
+
+## Detect faces for comparison
+
+You need to detect faces in images before you can compare them. In this guide, the following remote image, called *findsimilar.jpg*, will be used as the source:
+
+![Photo of a man who is smiling.](../media/quickstarts/find-similar.jpg)
+
+#### [C#](#tab/csharp)
+
+The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_face_detect_recognize)]
+
+The following code uses the above method to get face data from a series of images.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_loadfaces)]
++
+#### [JavaScript](#tab/javascript)
+
+The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
++
+The following code uses the above method to get face data from a series of images.
+++
+#### [REST API](#tab/rest)
+
+Copy the following cURL command and insert your subscription key and endpoint where appropriate. Then run the command to detect one of the target faces.
++
+Find the `"faceId"` value in the JSON response and save it to a temporary location. Then, call the above command again for these other image URLs, and save their face IDs as well. You'll use these IDs as the target group of faces from which to find a similar face.
++
+Finally, detect the single source face that you'll use for matching, and save its ID. Keep this ID separate from the others.
++++
+## Find and print matches
+
+In this guide, the face detected in the *Family1-Dad1.jpg* image should be returned as the face that's similar to the source image face.
+
+![Photo of a man who is smiling; this is the same person as the previous image.](../media/quickstarts/family-1-dad-1.jpg)
+
+#### [C#](#tab/csharp)
+
+The following code calls the Find Similar API on the saved list of faces.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar)]
+
+The following code prints the match details to the console:
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar_print)]
+
+#### [JavaScript](#tab/javascript)
+
+The following method takes a set of target faces and a single source face. Then, it compares them and finds all the target faces that are similar to the source face. Finally, it prints the match details to the console.
+++
+#### [REST API](#tab/rest)
+
+Copy the following cURL command and insert your subscription key and endpoint where appropriate.
++
+Paste in the following JSON content for the `body` value:
++
+Copy the source face ID value into the `"faceId"` field, and copy the other face IDs, separated by commas, as terms in the `"faceIds"` array.
+
+Run the command, and the returned JSON should show the correct face ID as a similar match.
+++
+## Next steps
+
+In this guide, you learned how to call the Find Similar API to do a face search by similarity in a larger group of faces. Next, learn more about the different recognition models available for face comparison operations.
+
+* [Specify a face recognition model](specify-recognition-model.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following table lists the released languages and public preview languages.
> [!NOTE] > If you want to use languages that aren't listed here, please contact us by email at [mspafeedback@microsoft.com](mailto:mspafeedback@microsoft.com). >
-> For pronunciation assessment feature, the released en-US language is available in all [Speech-to-Text regions](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation), and preview languages are available in one region: West US.
+> For regions that support the pronunciation assessment feature, see [available regions](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation).
## Speech translation
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
The Speech service is available in these regions for speech-to-text, pronunciati
If you plan to train a custom model with audio data, use one of the [regions with dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for faster training. You can use the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to copy the fully trained model to another region later. > [!TIP]
-> For pronunciation assessment feature, the released en-US language is available in all speech-to-text regions, and [preview languages](language-support.md#pronunciation-assessment) are available in one region: West US.
+> For the pronunciation assessment feature, `en-US` and `en-GB` are available in all regions listed above, `zh-CN` is available in the East Asia and Southeast Asia regions, `es-ES` and `fr-FR` are available in the West Europe region, and `en-AU` is available in the Australia East region.
### Intent recognition
communication-services Ui Library Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/ui-library/ui-library-overview.md
zone_pivot_groups: acs-plat-web-mobile
# UI Library Overview UI Library makes it easy for you to build modern communications user experiences using Azure Communication Services. It gives you a library of production-ready UI components that you can drop into your applications:
+<br/>
+<br/>
+>[!VIDEO https://www.youtube.com/embed/pCp4aQvRsGw]
::: zone pivot="platform-web" [!INCLUDE [Web UI Library](includes/web-ui-library.md)]
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md
Azure Communication Services are cloud-based services with REST APIs and client library SDKs available to help you integrate communication into your applications. You can add communication to your applications without being an expert in underlying technologies such as media encoding or telephony. Azure Communication Service is available in multiple [Azure geographies](concepts/privacy.md) and Azure for government.
+> [!VIDEO https://www.youtube.com/embed/chMHVHLFcao]
+ Azure Communication Services supports various communication formats: - [Voice and Video Calling](concepts/voice-video-calling/calling-sdk-features.md)
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/create-communication-resource.md
ms.devlang: azurecli
# Quickstart: Create and manage Communication Services resources Get started with Azure Communication Services by provisioning your first Communication Services resource. Communication Services resources can be provisioned through the [Azure portal](https://portal.azure.com) or with the .NET management SDK. The management SDK and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the SDKs is available in the Azure portal. -
+<br/>
+<br/>
+>[!VIDEO https://www.youtube.com/embed/3In3o5DhOHU]
> [!WARNING]
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
zone_pivot_groups: acs-js-csharp-java-python
> [!IMPORTANT] > SMS messages can be sent to and received from United States phone numbers. Phone numbers located in other geographies are not yet supported by Communication Services SMS. > For more information, see **[Phone number types](../../concepts/telephony/plan-solution.md)**.
+<br/>
+<br/>
+>[!VIDEO https://www.youtube.com/embed/YEyxSZqzF4o]
::: zone pivot="programming-language-csharp" [!INCLUDE [Send SMS with .NET SDK](./includes/send-sms-net.md)]
confidential-computing Confidential Enclave Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
-# Quickstart: Deploy an AKS cluster with confidential computing nodes by using the Azure CLI
+# Quickstart: Deploy an AKS cluster with confidential computing Intel SGX agent nodes by using the Azure CLI
In this quickstart, you'll use the Azure CLI to deploy an Azure Kubernetes Service (AKS) cluster with enclave-aware (DCsv2/DCSv3) VM nodes. You'll then run a simple Hello World application in an enclave. You can also provision a cluster and add confidential computing nodes from the Azure portal, but this quickstart focuses on the Azure CLI.
Features of confidential computing nodes include:
- Intel SGX DCAP Driver preinstalled on the confidential computing nodes. For more information, see [Frequently asked questions for Azure confidential computing](./confidential-nodes-aks-faq.yml). > [!NOTE]
-> DCsv2/DCsv3 VMs use specialized hardware that's subject to higher pricing and region availability. For more information, see the [available SKUs and supported regions](virtual-machine-solutions-sgx.md). DCsv3 VM's are currently in preview in limited regions. Please refer to above mentioned page for details.
+> DCsv2/DCsv3 VMs use specialized hardware that's subject to region availability. For more information, see the [available SKUs and supported regions](virtual-machine-solutions-sgx.md).
## Prerequisites
Features of confidential computing nodes include:
This quickstart requires: - An active Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Azure CLI version 2.0.64 or later installed and configured on your deployment machine.
+- Azure CLI version 2.0.64 or later installed and configured on your deployment machine.
Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](../container-registry/container-registry-get-started-azure-cli.md).-- A minimum of six DCsv2 cores available in your subscription.
+- A minimum of eight DCsv2/DCsv3/DCdsv3 cores available in your subscription.
- By default, the quota for confidential computing per Azure subscription is eight VM cores. If you plan to provision a cluster that requires more than eight cores, follow [these instructions](../azure-portal/supportability/per-vm-quota-requests.md) to raise a quota-increase ticket.
+ By default, there is no pre-assigned quota for Intel SGX VM sizes for your Azure subscription. You should follow [these instructions](../azure-portal/supportability/per-vm-quota-requests.md) to request VM core quota for your subscription.
-## Create an AKS cluster with enclave-aware confidential computing nodes and add-on
+## Create an AKS cluster with enclave-aware confidential computing nodes and Intel SGX add-on
-Use the following instructions to create an AKS cluster with the confidential computing add-on enabled, add a node pool to the cluster, and verify what you created.
+Use the following instructions to create an AKS cluster with the Intel SGX add-on enabled, add a node pool to the cluster, and verify what you created by running a Hello World enclave application.
### Create an AKS cluster with a system node pool > [!NOTE] > If you already have an AKS cluster that meets the prerequisite criteria listed earlier, [skip to the next section](#add-a-user-node-pool-with-confidential-computing-capabilities-to-the-aks-cluster) to add a confidential computing node pool.
-First, create a resource group for the cluster by using the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *westus2* region:
+First, create a resource group for the cluster by using the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *eastus2* region:
```azurecli-interactive
-az group create --name myResourceGroup --location westus2
+az group create --name myResourceGroup --location eastus2
``` Now create an AKS cluster, with the confidential computing add-on enabled, by using the [az aks create][az-aks-create] command:
Now create an AKS cluster, with the confidential computing add-on enabled, by us
```azurecli-interactive az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addons confcom ```
+The above command deploys a new AKS cluster with a system node pool of non-confidential computing nodes. Confidential computing Intel SGX nodes aren't recommended for system node pools.
-### Add a user node pool with confidential computing capabilities to the AKS cluster
+### Add a user node pool with confidential computing capabilities to the AKS cluster<a id="add-a-user-node-pool-with-confidential-computing-capabilities-to-the-aks-cluster"></a>
-Run the following command to add a user node pool of `Standard_DC2s_v2` size with three nodes to the AKS cluster. You can choose another larger sized SKU from the [list of supported DCsv2/Dcsv3 SKUs and regions](../virtual-machines/dcv2-series.md).
+Run the following command to add a user node pool of `Standard_DC4s_v3` size with two nodes to the AKS cluster. You can choose a larger SKU from the [list of supported DCsv2/DCsv3 SKUs and regions](../virtual-machines/dcv3-series.md).
```azurecli-interactive
-az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-vm-size Standard_DC2s_v2
+az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-vm-size Standard_DC4s_v3 --node-count 2
```
-After you run the command, a new node pool with DCsv2 should be visible with confidential computing add-on DaemonSets ([SGX device plug-in](confidential-nodes-aks-overview.md#confidential-computing-add-on-for-aks)).
+After you run the command, a new node pool with DCsv3 nodes should be visible, along with the confidential computing add-on DaemonSets ([SGX device plug-in](confidential-nodes-aks-overview.md#confidential-computing-add-on-for-aks)).
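+To spot-check that the add-on DaemonSets are running, one quick approach (a sketch, assuming `kubectl` is already connected to the cluster through `az aks get-credentials`) is to look for the SGX device plug-in pods:
+
+```bash
+# The SGX device plug-in runs as a DaemonSet, typically in the kube-system
+# namespace; exact pod names may vary by add-on version.
+kubectl get pods --all-namespaces | grep -i sgx
+```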
### Verify the node pool and add-on
Run the following command to enable the confidential computing add-on:
az aks enable-addons --addons confcom --name MyManagedCluster --resource-group MyResourceGroup ```
-### Add a DCsv2/DCsv3 user node pool to the cluster
+### Add a DCsv3 user node pool to the cluster
> [!NOTE]
-> To use the confidential computing capability, your existing AKS cluster needs to have a minimum of one node pool that's based on a DCsv2/DCsv3 VM SKU. To learn more about DCs-v2 VMs SKUs for confidential computing, see the [available SKUs and supported regions](virtual-machine-solutions-sgx.md).
+> To use the confidential computing capability, your existing AKS cluster needs to have a minimum of one node pool that's based on a DCsv2/DCsv3 VM SKU. To learn more about DCsv2/DCsv3 VM SKUs for confidential computing, see the [available SKUs and supported regions](virtual-machine-solutions-sgx.md).
Run the following command to create a node pool: ```azurecli-interactive
-az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-count 1 --node-vm-size Standard_DC4s_v2
+az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-count 2 --node-vm-size Standard_DC4s_v3
``` Verify that the new node pool with the name *confcompool1* has been created:
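+One way to verify (a sketch using the standard AKS CLI) is to list the cluster's node pools and confirm that *confcompool1* appears with the expected VM size and node count:
+
+```azurecli-interactive
+az aks nodepool list --cluster-name myAKSCluster --resource-group myResourceGroup --output table
+```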
confidential-computing Confidential Nodes Aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-overview.md
Previously updated : 11/04/2021 Last updated : 05/10/2022 # Confidential computing nodes on Azure Kubernetes Service
-[Azure confidential computing](overview.md) allows you to protect your sensitive data while it's in use. The underlying confidential computing infrastructure protects this data from other applications, administrators, and cloud providers with a hardware backed trusted execution container environments. Adding confidential computing nodes allow you to target container application to run in an isolated, hardware protected, integrity protected in an attestable environment.
+[Azure confidential computing](overview.md) allows you to protect your sensitive data while it's in use. The underlying confidential computing infrastructure protects this data from other applications, administrators, and cloud providers with hardware-backed trusted execution container environments. Adding confidential computing nodes allows you to target container applications to run in an isolated, hardware-protected, integrity-protected, attestable Trusted Execution Environment (TEE).
## Overview
-Azure Kubernetes Service (AKS) supports adding [DCsv2 confidential computing nodes](confidential-computing-enclaves.md) powered by Intel SGX. These nodes allow you to run sensitive workloads within a hardware-based trusted execution environment (TEE). TEEs allow user-level code from containers to allocate private regions of memory to execute the code with CPU directly. These private memory regions that execute directly with CPU are called enclaves. Enclaves help protect the data confidentiality, data integrity and code integrity from other processes running on the same nodes, as well as Azure operator. The Intel SGX execution model also removes the intermediate layers of Guest OS, Host OS and Hypervisor thus reducing the attack surface area. The *hardware based per container isolated execution* model in a node allows applications to directly execute with the CPU, while keeping the special block of memory encrypted per container. Confidential computing nodes with confidential containers are a great addition to your zero-trust security planning and defense-in-depth container strategy.
+Azure Kubernetes Service (AKS) supports adding [Intel SGX confidential computing VM nodes](confidential-computing-enclaves.md) as agent pools in a cluster. These nodes allow you to run sensitive workloads within a hardware-based TEE. TEEs allow user-level code from containers to allocate private regions of memory to execute the code with the CPU directly. These private memory regions that execute directly with the CPU are called enclaves. Enclaves help protect data confidentiality, data integrity, and code integrity from other processes running on the same nodes, as well as from the Azure operator. The Intel SGX execution model also removes the intermediate layers of the guest OS, host OS, and hypervisor, thus reducing the attack surface area. The *hardware based per container isolated execution* model in a node allows applications to directly execute with the CPU, while keeping the special block of memory encrypted per container. Confidential computing nodes with confidential containers are a great addition to your zero-trust security planning and defense-in-depth container strategy.
:::image type="content" source="./media/confidential-nodes-aks-overview/sgx-aks-node.png" alt-text="Graphic of AKS Confidential Compute Node, showing confidential containers with code and data secured inside."::: ## AKS Confidential Nodes Features -- Hardware based and process level container isolation through Intel SGX trusted execution environment (TEE)
+- Hardware based and process level container isolation through Intel SGX trusted execution environment (TEE)
- Heterogeneous node pool clusters (mix confidential and non-confidential node pools) - Encrypted Page Cache (EPC) memory-based pod scheduling (requires add-on) - Intel SGX DCAP driver pre-installed
Confidential computing nodes on AKS also support containers that are programmed
<!-- LINKS - external --> [Azure Attestation]: ../attestation/index.yml - <!-- LINKS - internal --> [DC Virtual Machine]: /confidential-computing/virtual-machine-solutions-sgx.md
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
+
+ Title: Disaster recovery guidance for Azure Container Apps
+description: Learn how to plan for and recover from disaster recovery scenarios in Azure Container Apps
+++++ Last updated : 5/10/2022++
+# Disaster recovery guidance for Azure Container Apps
+
+Azure Container Apps uses [availability zones](../availability-zones/az-overview.md#availability-zones) to offer high-availability protection for your applications and data from data center failures.
+
+Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. You can build high availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating in other zones.
+
+In the unlikely event of a full region outage, you have the option of using one of two strategies:
+
+- **Manual recovery**: Manually deploy to a new region, or wait for the region to recover, and then manually redeploy all environments and apps.
+
+- **Resilient recovery**: First, deploy your container apps in advance to multiple regions. Next, use Azure Front Door or Azure Traffic Manager to handle incoming requests, pointing traffic to your primary region. Then, should an outage occur, you can redirect traffic away from the affected region. See [Cross-region replication in Azure](/azure/availability-zones/cross-region-replication-azure) for more information, and the sketch after this list.
+
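+As an illustration of the resilient recovery strategy, the following sketch uses Azure Traffic Manager with priority routing; all resource names and app FQDNs are hypothetical placeholders, and Azure Front Door is an equally valid choice:
+
+```azurecli-interactive
+# Create a Traffic Manager profile that routes to the highest-priority
+# healthy endpoint.
+az network traffic-manager profile create \
+  --name my-tm-profile \
+  --resource-group myResourceGroup \
+  --routing-method Priority \
+  --unique-dns-name my-container-apps-dns
+
+# Register the container app deployed in each region as an external endpoint.
+# Traffic fails over to priority 2 if the primary region is unavailable.
+az network traffic-manager endpoint create \
+  --name primary-region \
+  --profile-name my-tm-profile \
+  --resource-group myResourceGroup \
+  --type externalEndpoints \
+  --target myapp.eastus.azurecontainerapps.io \
+  --priority 1
+
+az network traffic-manager endpoint create \
+  --name secondary-region \
+  --profile-name my-tm-profile \
+  --resource-group myResourceGroup \
+  --type externalEndpoints \
+  --target myapp.westus.azurecontainerapps.io \
+  --priority 2
+```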
+> [!NOTE]
+> Regardless of which strategy you choose, make sure your deployment configuration files are in source control so you can easily redeploy if necessary.
+
+Additionally, the following resources can help you create your own disaster recovery plan:
+
+- [Failure and disaster recovery for Azure applications](/azure/architecture/reliability/disaster-recovery)
+- [Azure resiliency technical guidance](/azure/architecture/checklist/resiliency-per-service)
container-apps Get Started Existing Container Image Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image-portal.md
This article demonstrates how to deploy an existing container to Azure Container
- If you don't have one, you [can create one for free](https://azure.microsoft.com/free/). ## Setup
+> [!NOTE]
+> An Azure Container Apps environment can be deployed as a zone redundant resource in regions where support is available. This is a deployment-time only configuration option.
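+As a sketch only, zone redundancy would be expressed when the environment is created. The `--zone-redundant` flag shown here is an assumption about the containerapp CLI extension, and zone redundancy may also require a custom virtual network; check the CLI reference for your extension version:
+
+```azurecli-interactive
+# Assumed flag: zone redundancy must be chosen at creation time and can't be
+# enabled on an existing environment.
+az containerapp env create \
+  --name my-environment \
+  --resource-group myResourceGroup \
+  --location eastus2 \
+  --zone-redundant
+```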
+ Begin by signing in to the [Azure portal](https://portal.azure.com). ## Create a container app
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
An Azure account with an active subscription is required. If you don't already h
## Setup
+> [!NOTE]
+> An Azure Container Apps environment can be deployed as a zone redundant resource in regions where support is available. This is a deployment-time only configuration option.
+ <!-- Create --> [!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
Previously updated : 12/02/2021 Last updated : 05/03/2022 - # Quotas for Azure Container Apps Preview
-The following quotas exist per subscription for Azure Container Apps Preview.
+The following quotas exist on a per-subscription basis for Azure Container Apps.
| Feature | Quantity | |||
-| Environments per region | 2 |
+| Environments per region | 5 |
| Container apps per environment | 20 |
-| Replicas per container app | 25 |
+| Replicas per container app | 30 |
| Cores per replica | 2 |
-| Cores per environment | 50 |
+| Cores per environment | 20 |
+
+To request an increase in quota amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
container-apps Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/samples.md
+
+ Title: Azure Container Apps samples
+description: Learn how to use Azure Container Apps from existing samples
++++ Last updated : 05/11/2022+++
+# Azure Container Apps samples
+
+Refer to the following samples to learn how to use Azure Container Apps in different contexts and paired with different technologies.
+
+| Name | Description |
+|--|--|
+| [Deploy an Orleans Cluster to Container Apps](https://github.com/Azure-Samples/Orleans-Cluster-on-Azure-Container-Apps) | An end-to-end sample and tutorial for getting a Microsoft Orleans cluster running on Azure Container Apps. Worker microservices rapidly transmit data to a back-end Orleans cluster for monitoring and storage, emulating thousands of physical devices in the field. |
+| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-on-Azure-Container-Apps) | This sample demonstrates how ASP.NET Core 6.0 can be used to build a cloud-native application hosted in Azure Container Apps. |
+| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps (with Dapr)](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-with-DAPR-on-Azure-Container-Apps) | Demonstrates how ASP.NET Core 6.0 is used to build a cloud-native application hosted in Azure Container Apps using Dapr. |
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
With an HTTP scaling rule, you have control over the threshold that determines w
| Scale property | Description | Default value | Min value | Max value | ||||||
-| `concurrentRequests`| Once the number of requests exceeds this then another replica is added. Replicas will continue to be added up to the `maxReplicas` amount as the number of concurrent requests increase. | 100 | 1 | n/a |
+| `concurrentRequests`| Once the number of concurrent requests exceeds this value, another replica is added. Replicas will continue to be added up to the `maxReplicas` amount as the number of concurrent requests increases. | 10 | 1 | n/a |
In the following example, the container app scales out up to five replicas and can scale down to zero. The scaling threshold is set to 100 concurrent requests per second.
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
The following resources are needed for the scenario in this article:
- **A Kubernetes cluster** where you will install a Helm chart. If needed, create an AKS cluster [using the Azure CLI](./learn/quick-kubernetes-deploy-cli), [using Azure PowerShell](./learn/quick-kubernetes-deploy-powershell), or [using the Azure portal](./learn/quick-kubernetes-deploy-portal). - **Azure CLI version 2.0.71 or later** - Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-## Enable OCI support
+## Set up Helm client
Use the `helm version` command to verify that you have installed Helm 3:
Use the `helm version` command to verify that you have installed Helm 3:
helm version ```
-Set the following environment variable to enable OCI support in the Helm 3 client. Currently, this support is experimental and subject to change.
-
-```console
-export HELM_EXPERIMENTAL_OCI=1
-```
+> [!NOTE]
+> The version indicated must be at least 3.8.0, as OCI support in earlier versions was experimental.
Set the following environment variables for the target registry. The ACR_NAME is the registry resource name. If the ACR registry URL is myregistry.azurecr.io, set ACR_NAME to myregistry
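+For example (a sketch; the registry name is a placeholder, and the token-based login shown is one possible approach):
+
+```bash
+# Hypothetical registry name; the login server is derived from it.
+ACR_NAME=myregistry
+
+# Obtain an ACR access token and use it to sign the Helm client in to the
+# registry's OCI endpoint. The null-GUID user name is the convention for
+# token-based authentication to ACR.
+USER_NAME="00000000-0000-0000-0000-000000000000"
+PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken)
+helm registry login $ACR_NAME.azurecr.io --username $USER_NAME --password $PASSWORD
+```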
container-registry Container Registry Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-support-policies.md
+
+ Title: Azure Container Registry technical support policies
+description: Learn about Azure Container Registry (ACR) technical support policies
+ Last updated : 02/18/2022+
+#Customer intent: As a developer, I want to understand what ACR components I need to manage, what components are managed by Microsoft.
++
+# Support policies for Azure Container Registry (ACR)
+
+This article provides details about Azure Container Registry (ACR) support policies, supported features, and limitations.
+
+## Features supported by Azure Container Registry
+
+>* [Connect to ACR using Azure private link](container-registry-private-link.md)
+>* [Push and pull Helm charts to ACR](container-registry-helm-repos.md)
+>* [Encrypt using Customer managed keys](container-registry-customer-managed-keys.md)
+>* [Enable Content trust](container-registry-content-trust.md)
+>* [Scan Images using Azure Security Center](../defender-for-cloud/defender-for-container-registries-introduction.md)
+>* [ACR Tasks](/azure/container-registry/container-registry-tasks-overview)
+>* [Import container images to ACR](container-registry-import-images.md)
+>* [Image locking in ACR](container-registry-image-lock.md)
+>* [Synchronize content with ACR using Connected Registry](intro-connected-registry.md)
+>* [Geo replication in ACR](container-registry-geo-replication.md)
+
+## Microsoft/ACR can't extend support
+
+* Any local network issues that interrupt the connection to ACR service.
+* Vulnerabilities or issues caused by running third-party container images using ACR Tasks.
+* Vulnerabilities or bugs with images in the ACR customer store.
+
+## Microsoft/ACR extends support
+
+* General queries about the supported features of ACR.
+* Unable to pull image due to authentication errors, image size, and client-side issues with container runtime.
+* Unable to push an image to ACR due to authentication errors, image size, and client-side issues with container runtime.
+* Unable to add VNET/Subnet to ACR Firewall across subscription.
+* Issues with slow push/pull operations due to client, network, or ACR.
+* Issues with integration of ACR with Azure Kubernetes Service (AKS) or with any other Azure service (a diagnostic sketch follows this list).
+* Authentication issues in ACR, and authentication errors with integrations or role-based access control (RBAC).
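+For AKS integration issues in particular, a useful first diagnostic (a sketch with hypothetical resource names) is the built-in connectivity check:
+
+```azurecli-interactive
+# Validates that the AKS cluster can authenticate to and pull from the registry.
+az aks check-acr --resource-group myResourceGroup --name myAKSCluster --acr myregistry.azurecr.io
+```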
+
+## Shared responsibility
+
+* Issues with slow push/pull operations caused by a slow-performing client VM, network, or ACR. Here, customers have to provide the time range, image name, and configuration settings.
+* Issues with integration of ACR with any other Azure service. Here, customers have to provide the details of the client used to build the image and push it to ACR. For example, the customer uses a DevOps pipeline to build the image and push it to ACR.
+
+## Customers have to self-support
+
+* Microsoft/ACR can't make any changes if a base image vulnerability is detected in the security center (Microsoft Defender for Cloud). Customers can reach out for guidance.
+* Microsoft/ACR can't make any changes to a Dockerfile. Customers have to identify and review it from their end.
+
+| ACR support | Link |
+| - | -- |
+| Create a support ticket | https://aka.ms/acr/support/create-ticket |
+| Service updates and releases | [ACR Blog](https://azure.microsoft.com/blog/tag/azure-container-registry/) |
+| Roadmap | https://aka.ms/acr/roadmap |
+| FAQ | https://aka.ms/acr/faq |
+| Audit Logs | https://aka.ms/acr/audit-logs |
+| Health-Check-CLI | https://aka.ms/acr/health-check |
+| ACR Links | https://aka.ms/acr/links |
+### API and SDK reference
+
+>* [SDK for Python](https://pypi.org/project/azure-mgmt-containerregistry/)
+>* [SDK for .NET](https://www.nuget.org/packages/Azure.Containers.ContainerRegistry)
+>* [REST API Reference](/rest/api/containerregistry/)
+
+## Upstream bugs
+
+The ACR support team will identify the root cause of every issue raised. The team will report all identified bugs as an [issue in the ACR repository](https://github.com/Azure/acr/issues) with supporting details. The engineering team will review and provide a workaround solution, bug fix, or upgrade with a new release timeline. All bug fixes are integrated from upstream.
+Customers can watch the issues, add more details, and follow the bug fixes and new releases.
cosmos-db Bulk Executor Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/bulk-executor-graph-dotnet.md
description: Learn how to use the bulk executor library to massively import grap
Previously updated : 05/02/2020 Last updated : 05/10/2022 -
+ms.devlang: csharp, java
+
-# Using the graph bulk executor .NET library to perform bulk operations in Azure Cosmos DB Gremlin API
+# Bulk ingestion in Azure Cosmos DB Gremlin API using BulkExecutor
[!INCLUDE[appliesto-gremlin-api](../includes/appliesto-gremlin-api.md)]
-This tutorial provides instructions about using Azure Cosmos DB's bulk executor .NET library to import and update graph objects into an Azure Cosmos DB Gremlin API container. This process makes use of the Graph class in the [bulk executor library](../bulk-executor-overview.md) to create Vertex and Edge objects programmatically to then insert multiple of them per network request. This behavior is configurable through the bulk executor library to make optimal use of both database and local memory resources.
+Graph databases often need to perform bulk ingestion to refresh the entire graph or to update a portion of it. Cosmos DB, the distributed database that backs the Azure Cosmos DB Gremlin API, is designed to perform well when the load is distributed. The BulkExecutor libraries in Cosmos DB are designed to exploit this capability of Cosmos DB and provide the best performance; see [Introducing bulk support in the .NET SDK](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk).
-As opposed to sending Gremlin queries to a database, where the command is evaluated and then executed one at a time, using the bulk executor library will instead require to create and validate the objects locally. After creating the objects, the library allows you to send graph objects to the database service sequentially. Using this method, data ingestion speeds can be increased up to 100x, which makes it an ideal method for initial data migrations or periodical data movement operations. Learn more by visiting the GitHub page of the [Azure Cosmos DB Graph bulk executor sample application](https://github.com/Azure-Samples/azure-cosmosdb-graph-bulkexecutor-dotnet-getting-started).
+This tutorial provides instructions about using Azure Cosmos DB's bulk executor library to import and update graph objects in an Azure Cosmos DB Gremlin API container. The process creates Vertex and Edge objects programmatically and then inserts multiple objects per network request.
-## Bulk operations with graph data
+Instead of sending Gremlin queries to a database, where each command is evaluated and then executed one at a time, using the BulkExecutor library requires you to create and validate the objects locally. After the graph objects are initialized, the library allows you to send them to the database service sequentially. Using this method, data ingestion speeds can be increased up to 100x, which makes it an ideal method for initial data migrations or periodical data movement operations.
-The [bulk executor library](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.graph) contains a `Microsoft.Azure.CosmosDB.BulkExecutor.Graph` namespace to provide functionality for creating and importing graph objects.
+It's now available in the following flavors:
-The following process outlines how data migration can be used for a Gremlin API container:
-1. Retrieve records from the data source.
-2. Construct `GremlinVertex` and `GremlinEdge` objects from the obtained records and add them into an `IEnumerable` data structure. In this part of the application the logic to detect and add relationships should be implemented, in case the data source isn't a graph database.
-3. Use the [Graph BulkImportAsync method](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.graph.graphbulkexecutor.bulkimportasync) to insert the graph objects into the collection.
+## .NET
+### Prerequisites
+* Visual Studio 2019 with the Azure development workload. You can get started with the [Visual Studio 2019 Community Edition](https://visualstudio.microsoft.com/downloads/) for free.
+* An Azure subscription. You can create [a free Azure account here](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=cosmos-db). Alternatively, you can create a Cosmos database account with [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
+* An Azure Cosmos DB Gremlin API database with an **unlimited collection**. The guide shows how to get started with [Azure Cosmos DB Gremlin API in .NET](./create-graph-dotnet.md).
+* Git. For more information check out the [Git Downloads page](https://git-scm.com/downloads).
+#### Clone
+To run this sample, run the `git clone` command below:
+```bash
+git clone https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor.git
+```
+The sample is available at the path .\azure-cosmos-graph-bulk-executor\dotnet\src\
-This mechanism will improve the data migration efficiency as compared to using a Gremlin client. This improvement is experienced because inserting data with Gremlin will require the application send a query at a time that will need to be validated, evaluated, and then executed to create the data. The bulk executor library will handle the validation in the application and send multiple graph objects at a time for each network request.
+#### Sample
+```csharp
-### Creating Vertices and Edges
+IGraphBulkExecutor graphBulkExecutor = new GraphBulkExecutor("MyConnectionString", "myDatabase", "myContainer");
-`GraphBulkExecutor` provides the `BulkImportAsync` method that requires a `IEnumerable` list of `GremlinVertex` or `GremlinEdge` objects, both defined in the `Microsoft.Azure.CosmosDB.BulkExecutor.Graph.Element` namespace. In the sample, we separated the edges and vertices into two BulkExecutor import tasks. See the example below:
+List<IGremlinElement> gremlinElements = new List<IGremlinElement>();
+gremlinElements.AddRange(Program.GenerateVertices(Program.documentsToInsert));
+gremlinElements.AddRange(Program.GenerateEdges(Program.documentsToInsert));
+BulkOperationResponse bulkOperationResponse = await graphBulkExecutor.BulkImportAsync(
+ gremlinElements: gremlinElements,
+ enableUpsert: true);
+```
-```csharp
+### Execute
+Modify the following parameters as needed:
-IBulkExecutor graphbulkExecutor = new GraphBulkExecutor(documentClient, targetCollection);
-
-BulkImportResponse vResponse = null;
-BulkImportResponse eResponse = null;
-
-try
-{
- // Import a list of GremlinVertex objects
- vResponse = await graphbulkExecutor.BulkImportAsync(
- Utils.GenerateVertices(numberOfDocumentsToGenerate),
- enableUpsert: true,
- disableAutomaticIdGeneration: true,
- maxConcurrencyPerPartitionKeyRange: null,
- maxInMemorySortingBatchSize: null,
- cancellationToken: token);
-
- // Import a list of GremlinEdge objects
- eResponse = await graphbulkExecutor.BulkImportAsync(
- Utils.GenerateEdges(numberOfDocumentsToGenerate),
- enableUpsert: true,
- disableAutomaticIdGeneration: true,
- maxConcurrencyPerPartitionKeyRange: null,
- maxInMemorySortingBatchSize: null,
- cancellationToken: token);
-}
-catch (DocumentClientException de)
-{
- Trace.TraceError("Document client exception: {0}", de);
-}
-catch (Exception e)
-{
- Trace.TraceError("Exception: {0}", e);
-}
-```
+Parameter|Description
+|
+`ConnectionString`|It is **your .NET SDK endpoint** found in the Overview section of your Azure Cosmos DB Gremlin API database account. It has the format of `https://your-graph-database-account.documents.azure.com:443/`
+`DatabaseName`, `ContainerName`|These parameters are the **target database and container names**.
+`DocumentsToInsert`| The number of documents to be generated (only relevant when generating synthetic data)
+`PartitionKey` | Ensures that a partition key is specified with each document during ingestion.
+`NumberOfRUs` | Only relevant if the container doesn't exist and needs to be created as part of the execution
-For more information about the parameters of the bulk executor library, see [BulkImportData to Azure Cosmos DB article](../bulk-executor-dot-net.md#bulk-import-data-to-an-azure-cosmos-account).
+Download the full sample application in .NET from [here](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/dotnet).
-The payload needs to be instantiated into `GremlinVertex` and `GremlinEdge` objects. Here's how these objects can be created:
+## Java
-**Vertices**:
-```csharp
-// Creating a vertex
-GremlinVertex v = new GremlinVertex(
- "vertexId",
- "vertexLabel");
+### Sample usage
-// Adding custom properties to the vertex
-v.AddProperty("customProperty", "value");
+The sample application is provided to illustrate how to use the GraphBulkExecutor package. Samples are available for using either the domain object annotations or the POJO objects directly. It's recommended to try both approaches to determine which better meets your implementation and performance demands.
-// Partitioning keys must be specified for all vertices
-v.AddProperty("partitioningKey", "value");
+### Clone
+To run the sample, run the `git clone` command below:
+```bash
+git clone https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor.git
```
+The sample is available at the path .\azure-cosmos-graph-bulk-executor\java\
-**Edges**:
-```csharp
-// Creating an edge
-GremlinEdge e = new GremlinEdge(
- "edgeId",
- "edgeLabel",
- "targetVertexId",
- "sourceVertexId",
- "targetVertexLabel",
- "sourceVertexLabel",
- "targetVertexPartitioningKey",
- "sourceVertexPartitioningKey");
-
-// Adding custom properties to the edge
-e.AddProperty("customProperty", "value");
+### Prerequisites
+
+To run this sample, you'll need to have the following software:
+
+* OpenJDK 11
+* Maven
+* An Azure Cosmos DB Account configured to use the Gremlin API
+
+### Sample
+```java
+private static void executeWithPOJO(Stream<GremlinVertex> vertices,
+ Stream<GremlinEdge> edges,
+ boolean createDocs) {
+ results.transitionState("Configure Database");
+ UploadWithBulkLoader loader = new UploadWithBulkLoader();
+ results.transitionState("Write Documents");
+ loader.uploadDocuments(vertices, edges, createDocs);
+ }
```
-> [!NOTE]
-> The bulk executor utility doesn't automatically check for existing Vertices before adding Edges. This needs to be validated in the application before running the BulkImport tasks.
+To run the sample, refer to the following configuration and modify it as needed:
+### Configuration
-## Sample application
+The /resources/application.properties file defines the data required to configure Cosmos DB. The required values are:
-### Prerequisites
-* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
-* An Azure subscription. You can create [a free Azure account here](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=cosmos-db). Alternatively, you can create a Cosmos database account with [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
-* An Azure Cosmos DB Gremlin API database with an **unlimited collection**. This guide shows how to get started with [Azure Cosmos DB Gremlin API in .NET](./create-graph-dotnet.md).
-* Git. For more information check out the [Git Downloads page](https://git-scm.com/downloads).
+* **sample.sql.host**: The value provided by Azure Cosmos DB. Ensure you use the ".NET SDK URI", which can be found in the Overview section of the Cosmos DB account.
+* **sample.sql.key**: You can get the primary or secondary key from the Keys section of the Cosmos DB Account.
+* **sample.sql.database.name**: The name of the database within the Cosmos DB account to run the sample against. If the database isn't found, the sample code will create it.
+* **sample.sql.container.name**: The name of the container within the database to run the sample against. If the container isn't found, the sample code will create it.
+* **sample.sql.partition.path**: If the container needs to be created, this value will be used to define the partitionKey path.
+* **sample.sql.allow.throughput**: The container will be updated to use the throughput value defined here. If you're exploring different throughput options to meet your performance demands, make sure to reset the throughput on the container when done with your exploration. There are costs associated with leaving the container provisioned with a higher throughput.
-### Clone the sample application
-In this tutorial, we'll follow through the steps for getting started by using the [Azure Cosmos DB Graph bulk executor sample](https://github.com/Azure-Samples/azure-cosmosdb-graph-bulkexecutor-dotnet-getting-started) hosted on GitHub. This application consists of a .NET solution that randomly generates vertex and edge objects and then executes bulk insertions to the specified graph database account. To get the application, run the `git clone` command below:
+### Execute
+
+Once the configuration is modified for your environment, run the following command:
```bash
-git clone https://github.com/Azure-Samples/azure-cosmosdb-graph-bulkexecutor-dotnet-getting-started.git
+mvn clean package
```
-This repository contains the GraphBulkExecutor sample with the following files:
+For added safety, you can also run the integration tests by changing the "skipIntegrationTests" value in the pom.xml to false.
-File|Description
-|
-`App.config`|This is where the application and database-specific parameters are specified. This file should be modified first to connect to the destination database and collections.
-`Program.cs`| This file contains the logic behind creating the `DocumentClient` collection, handling the cleanups and sending the bulk executor requests.
-`Util.cs`| This file contains a helper class that contains the logic behind generating test data, and checking if the database and collections exist.
+Assuming the unit tests ran successfully, you can run the command line to execute the sample code:
-In the `App.config` file, the following are the configuration values that can be provided:
+```bash
+java -jar target/azure-cosmos-graph-bulk-executor-1.0-jar-with-dependencies.jar -v 1000 -e 10 -d
+```
-Setting|Description
-|
-`EndPointUrl`|This is **your .NET SDK endpoint** found in the Overview page of your Azure Cosmos DB Gremlin API database account. This has the format of `https://your-graph-database-account.documents.azure.com:443/`
-`AuthorizationKey`|This is the Primary or Secondary key listed under your Azure Cosmos DB account. Learn more about [Securing Access to Azure Cosmos DB data](../secure-access-to-data.md#primary-keys)
-`DatabaseName`, `CollectionName`|These are the **target database and collection names**. When `ShouldCleanupOnStart` is set to `true` these values, along with `CollectionThroughput`, will be used to drop them and create a new database and collection. Similarly, if `ShouldCleanupOnFinish` is set to `true`, they'll be used to delete the database as soon as the ingestion is over. The target collection must be **an unlimited collection**.
-`CollectionThroughput`|This is used to create a new collection if the `ShouldCleanupOnStart` option is set to `true`.
-`ShouldCleanupOnStart`|This will drop the database account and collections before the program is run, and then create new ones with the `DatabaseName`, `CollectionName` and `CollectionThroughput` values.
-`ShouldCleanupOnFinish`|This will drop the database account and collections with the specified `DatabaseName` and `CollectionName` after the program is run.
-`NumberOfDocumentsToImport`|This will determine the number of test vertices and edges that will be generated in the sample. This number will apply to both vertices and edges.
-`NumberOfBatches`|This will determine the number of test vertices and edges that will be generated in the sample. This number will apply to both vertices and edges.
-`CollectionPartitionKey`|This will be used to create the test vertices and edges, where this property will be auto-assigned. This will also be used when re-creating the database and collections if the `ShouldCleanupOnStart` option is set to `true`.
-
-### Run the sample application
-
-1. Add your specific database configuration parameters in `App.config`. This will be used to create a DocumentClient instance. If the database and container haven't been created yet, they'll be created automatically.
-2. Run the application. This will call `BulkImportAsync` two times, one to import Vertices and one to import Edges. If any of the objects generates an error when they're inserted, they'll be added to either `.\BadVertices.txt` or `.\BadEdges.txt`.
-3. Evaluate the results by querying the graph database. If the `ShouldCleanupOnFinish` option is set to true, then the database will automatically be deleted.
+Running the above commands will execute the sample with a small batch (1,000 vertices and roughly 5,000 edges). Use the following command-line arguments to tweak the volumes run and which sample version to run.
+
+### Command line Arguments
+
+Several command-line arguments are available when running this sample, as detailed below:
+
+* **--vertexCount** (-v): Tells the application how many person vertices to generate.
+* **--edgeMax** (-e): Tells the application the maximum number of edges to generate for each vertex. The generator will randomly select a number between 1 and the value provided here.
+* **--domainSample** (-d): Tells the application to run the sample using the Person and Relationship domain structures instead of the GraphBulkExecutors GremlinVertex and GremlinEdge POJOs.
+* **--createDocuments** (-c): Tells the application to use create operations. If not present, the application will default to using upsert operations.
+
+### Details about the sample
+
+#### Person Vertex
+
+The Person class is a fairly simple domain object that has been decorated with several annotations to help the transformation into the GremlinVertex class. They are as follows:
+
+* **GremlinVertex**: Notice how we're using the optional "label" parameter to define all Vertices created using this class.
+* **GremlinId**: Being used to define which field will be used as the ID value. While the field name on the Person class is ID, it isn't required.
+* **GremlinProperty**: Is being used on the email field to change the name of the property when stored in the database.
+* **GremlinPartitionKey**: Is being used to define which field on the class contains the partition key. The field name provided here should match the value defined by the partition path on the container.
+* **GremlinIgnore**: Is being used to exclude the isSpecial field from the property being written to the database.
+
+#### Relationship Edge
+
+The RelationshipEdge is a fairly versatile domain object. Using the field-level label annotation allows for a dynamic collection of edge types to be created. The following annotations are represented in this sample domain edge:
+
+* **GremlinEdge**: The GremlinEdge decoration on the class, defines the name of the field for the specified partition key. The value assigned, when the edge document is created, will come from the source vertex information.
+* **GremlinEdgeVertex**: Notice that there are two instances of GremlinEdgeVertex defined. One for each side of the edge (Source and Destination). Our sample has the field's data type as GremlinEdgeVertexInfo. The information provided by GremlinEdgeVertex class is required for the edge to be created correctly in the database. Another option would be to have the data type of the vertices be a class that has been decorated with the GremlinVertex annotations.
+* **GremlinLabel**: The sample edge is using a field to define what the label value is. It allows different labels to be defined while still using the same base domain class.
+
+### Output Explained
+
+The console will finish its run with a JSON string describing the run times of the sample. The JSON string contains the following information.
+
+* **startTime**: The System.nanoTime() when the process started.
+* **endtime**: The System.nanoTime() when the process completed.
+* **durationInNanoSeconds**: The difference between the endTime and the startTime.
+* **durationInMinutes**: The durationInNanoSeconds converted into minutes. Note that durationInMinutes is represented as a float number, not a time value. For example, a value of 2.5 would be 2 minutes and 30 seconds.
+* **vertexCount**: The volume of vertices generated, which should match the value passed into the command-line execution.
+* **edgeCount**: The volume of edges generated, which isn't static and is built with an element of randomness.
+* **exception**: Only populated when there was an exception thrown when attempting to make the run.
+
+#### States Array
+
+The states array gives insight into how long each step within the execution takes. The steps that occur are:
+
+* **Build sample vertices**: The time it takes to fabricate the requested volume of Person objects.
+* **Build sample edges**: The time it takes to fabricate the Relationship objects.
+* **Configure Database**: The amount of time it took to get the database configured, based on the values provided in the application.properties.
+* **Write Documents**: The total time it took to write the documents to the database.
+
+Each state will contain the following values:
+
+* **stateName**: The name of the state being reported.
+* **startTime**: The System.nanoTime() when the state started.
+* **endtime**: The System.nanoTime() when the state completed.
+* **durationInNanoSeconds**: The difference between the endTime and the startTime.
+* **durationInMinutes**: The durationInNanoSeconds converted into minutes. Note that durationInMinutes is represented as a float number, not a time value. For example, a value of 2.5 would be 2 minutes and 30 seconds.
## Next steps
-* To learn about NuGet package details and release notes of bulk executor .NET library, see [bulk executor SDK details](../sql-api-sdk-bulk-executor-dot-net.md).
-* Check out the [Performance Tips](../bulk-executor-dot-net.md#performance-tips) to further optimize the usage of bulk executor.
-* Review the [BulkExecutor.Graph Reference article](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.graph) for more details about the classes and methods defined in this namespace.
+* Review the open-source [BulkExecutor Java library](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/java/src/main/java/com/azure/graph/bulk/impl) for more details about the classes and methods defined in this namespace.
+* Review [bulk mode, which is part of the .NET V3 SDK](../sql/tutorial-sql-api-dotnet-bulk-import.md).
cost-management-billing Reservation Discount Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-databricks.md
Databricks pre-purchase applies to all Databricks workloads and tiers. You can t
| Jobs Compute | 0.15 | 0.30 | | Jobs Light Compute | 0.07 | 0.22 | | SQL Compute | NA | 0.22 |
+| Delta Live Tables | NA | 0.30 (core), 0.38 (pro), 0.54 (advanced) |
For example, when a quantity of Data Analytics ΓÇô Standard tier is consumed, the pre-purchased Databricks commit units is deducted by 0.4 units. When a quantity of Data Engineering Light ΓÇô Standard tier is used, the pre-purchased Databricks commit unit is deducted by 0.07 units.
data-factory Concepts Data Flow Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-udf.md
Last updated 04/20/2022
# User defined functions (Preview) in mapping data flow A user defined function is a customized expression you can define to be able to reuse logic across multiple mapping data flows. User defined functions live in a collection called a data flow library to be able to easily group up common sets of customized functions.
data-factory Connector Azure Database For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mysql.md
Last updated 12/20/2021
This article outlines how to use Copy Activity in Azure Data Factory or Synapse Analytics pipelines to copy data from and to Azure Database for MySQL, and use Data Flow to transform data in Azure Database for MySQL. To learn more, read the introductory articles for [Azure Data Factory](introduction.md) and [Synapse Analytics](../synapse-analytics/overview-what-is.md).
-This connector is specialized for [Azure Database for MySQL service](../mysql/overview.md). To copy data from generic MySQL database located on-premises or in the cloud, use [MySQL connector](connector-mysql.md).
+This connector is specialized for:
+- [Azure Database for MySQL Single Server](../mysql/single-server-overview.md)
+- [Azure Database for MySQL Flexible Server](../mysql/flexible-server/overview.md) (currently, only public access is supported)
+
+
+ To copy data from a generic MySQL database located on-premises or in the cloud, use the [MySQL connector](connector-mysql.md).
+
+## Prerequisites
+
+This quickstart requires the following resources and configuration as a starting point:
+
+- An existing Azure Database for MySQL Single Server or Flexible Server.
+- Enable **Allow public access from any Azure service within Azure to this server** on the **Networking** page of the MySQL server; see the CLI sketch after this list. This setting allows you to use Data Factory Studio.
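+For a Single Server instance, one way to enable this setting outside the portal is the special 0.0.0.0 firewall rule (a sketch; the server and resource group names are placeholders):
+
+```azurecli-interactive
+# A 0.0.0.0 start and end address is the documented convention for allowing
+# access from Azure services on Single Server.
+az mysql server firewall-rule create \
+  --resource-group myResourceGroup \
+  --server-name mymysqlserver \
+  --name AllowAllAzureIps \
+  --start-ip-address 0.0.0.0 \
+  --end-ip-address 0.0.0.0
+```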
## Supported capabilities
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for Cloud description: Enable the container protections of Microsoft Defender for Containers -- zone_pivot_groups: k8s-host Previously updated : 03/27/2022 Last updated : 05/10/2022 + # Enable Microsoft Defender for Containers Microsoft Defender for Containers is the cloud-native solution for securing your containers.
A full list of supported alerts is available in the [reference table of all Defe
[!INCLUDE [Remove the extension](./includes/defender-for-containers-remove-extension.md)] ::: zone-end + ::: zone pivot="defender-for-container-aks" [!INCLUDE [Remove the profile](./includes/defender-for-containers-remove-profile.md)] ::: zone-end++
+## Next steps
+
+[Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-container-registries-usage.md).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/26/2022 Last updated : 05/09/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## May 2022
+
+Updates in May include:
+
+- [Availability of Defender for SQL to protect Amazon Web Services (AWS) and Google Cloud Platform (GCP) environments](#general-availability-ga-of-defender-for-sql-for-aws-and-gcp-environments)
++
+### General availability (GA) of Defender for SQL for AWS and GCP environments
+
+The database protection capabilities provided by Microsoft Defender for Cloud now include support for your SQL databases hosted in AWS and GCP environments.
+
+Using Defender for SQL, enterprises can now protect their data, whether hosted in Azure, AWS, GCP, or on-premises machines.
+
+Microsoft Defender for SQL now provides a unified cross-environment experience to view security recommendations, security alerts and vulnerability assessment findings encompassing SQL servers and the underlying Windows OS.
++
+Using the multi-cloud onboarding experience, you can enable and enforce database protection for VMs in AWS and GCP. After you enable multi-cloud protection, all supported resources covered by your subscription are protected. Future resources created within the same subscription will also be protected.
+
+Learn how to protect and connect your [AWS accounts](quickstart-onboard-aws.md) and your [GCP projects](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
++ ## April 2022 Updates in April include:
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
You can set up a sample graph for this scenario using the **Azure Digital Twins
You can use the **Azure Digital Twins Data Simulator** to provision a sample twin graph and push property updates to it. The twin graph created here models pasteurization processes for a dairy company.
-Start by opening the [Azure Digital Twins Data Simulator](https://explorer.digitaltwins.azure.net/tools/data-pusher) web application in your browser.
+Start by opening the [Azure Digital Twins Data Simulator](https://explorer.digitaltwins.azure.net/tools/data-pusher) in your browser. Set these fields:
+* **Instance URL**: Enter the host name of your Azure Digital Twins instance. The host name can be found in the [portal](https://portal.azure.com) page for your instance, and has a format like `<Azure-Digital-Twins-instance-name>.api.<region-code>.digitaltwins.azure.net`.
+* **Simulation Type**: Select *Dairy facility* from the dropdown menu.
+Select **Generate Environment**.
-Enter the host name of your Azure Digital Twins instance in the Instance URL field. The host name can be found in the [portal](https://portal.azure.com) page for your instance, and has a format like `<Azure-Digital-Twins-instance-name>.api.<region-code>.digitaltwins.azure.net`. Select **Generate Environment**.
-You'll see confirmation messages on the screen as models, twins, and relationships are created in your environment. When the simulation is ready, the **Start simulation** button will become enabled. Select **Start simulation** to push simulated data to your Azure Digital Twins instance. To continuously update the twins in your Azure Digital Twins instance, keep this browser window in the foreground on your desktop (and complete other browser actions in a separate window).
+You'll see confirmation messages on the screen as models, twins, and relationships are created in your environment.
+
+When the simulation is ready, the **Start simulation** button will become enabled. Select **Start simulation** to push simulated data to your Azure Digital Twins instance. To continuously update the twins in your Azure Digital Twins instance, keep this browser window in the foreground on your desktop (and complete other browser actions in a separate window).
To verify that data is flowing through the data history pipeline, navigate to the [Azure portal](https://portal.azure.com) and open the Event Hubs namespace resource you created. You should see charts showing the flow of messages into and out of the namespace, indicating the flow of incoming messages from Azure Digital Twins and outgoing messages to Azure Data Explorer.
digital-twins Reference Query Clause Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-query-clause-match.md
# Mandatory fields. Title: Azure Digital Twins query language reference - MATCH clause (preview)
+ Title: Azure Digital Twins query language reference - MATCH clause
description: Reference documentation for the Azure Digital Twins query language MATCH clause Previously updated : 02/25/2022 Last updated : 05/11/2022
#
-# Azure Digital Twins query language reference: MATCH clause (preview)
+# Azure Digital Twins query language reference: MATCH clause
-This document contains reference information on the *MATCH clause* for the [Azure Digital Twins query language](concepts-query-language.md). This clause is currently in preview.
+This document contains reference information on the *MATCH clause* for the [Azure Digital Twins query language](concepts-query-language.md).
The `MATCH` clause is used in the Azure Digital Twins query language as part of the [FROM clause](reference-query-clause-from.md). `MATCH` allows you to specify which pattern should be followed while traversing relationships in the Azure Digital Twins graph (this is also known as a "variable hop" query pattern).
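For orientation, a variable-hop `MATCH` query might look like the following sketch, shown here through the Python SDK's `query_twins` call; the twin ID, relationship name, and hop range are illustrative placeholders.

```python
# Hypothetical variable-hop query: rooms reachable from a given floor through
# one to three 'contains' relationships. MATCH requires exactly one twin to
# be pinned by $dtId in the WHERE clause.
from azure.identity import AzureCliCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient("https://<instance-host-name>", AzureCliCredential())

query = (
    "SELECT room FROM DIGITALTWINS "
    "MATCH (floor)-[:contains*1..3]->(room) "
    "WHERE floor.$dtId = 'Floor1'"
)
for room in client.query_twins(query):
    print(room["$dtId"])
```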
You can chain multiple relationship conditions together, like this. The placehol
### Examples
-Here's an example that combines relationship direction, relationship name, and number of hops The following query finds twins Floor and Room where the relationship between Floor and Room meets these conditions:
+Here's an example that combines relationship direction, relationship name, and number of hops. The following query finds twins Floor and Room where the relationship between Floor and Room meets these conditions:
* the relationship is left-to-right, with Floor as the source and Room as the target
* the relationship has a name of either 'contains' or 'isAssociatedWith'
* the relationship has either 4 or 5 hops
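Put together, the query could be sketched like this, reusing the client from the earlier sketch; the `$dtId` value is a placeholder.

```python
# Sketch of the combined example: left-to-right traversal, a relationship
# named 'contains' or 'isAssociatedWith', and 4 to 5 hops.
query = (
    "SELECT Floor, Room FROM DIGITALTWINS "
    "MATCH (Floor)-[:contains|isAssociatedWith*4..5]->(Room) "
    "WHERE Floor.$dtId = 'Floor1'"
)
for row in client.query_twins(query):
    print(row["Floor"]["$dtId"], row["Room"]["$dtId"])
```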
The following limits apply to queries using `MATCH`:
```

If your scenario requires you to use `$dtId` on other twins, consider using the [JOIN clause](reference-query-clause-join.md) instead (a sketch follows this list).
-* MATCH queries that traverse the same twin multiple times may unexpectedly remove this twin from results.
+* MATCH queries that traverse the same twin multiple times may unexpectedly remove this twin from results.
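As mentioned in the `$dtId` limit above, the single-hop case can be rewritten with `JOIN`, which allows `$dtId` filters on more than one twin; here is a sketch with placeholder names, runnable through the same `query_twins` call shown earlier.

```python
# JOIN-based alternative for a single-hop traversal, allowing $dtId filters
# on both twins. Twin IDs and the relationship name are placeholders.
query = (
    "SELECT Floor, Room FROM DIGITALTWINS Floor "
    "JOIN Room RELATED Floor.contains "
    "WHERE Floor.$dtId = 'Floor1' AND Room.$dtId = 'Room42'"
)
```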
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Azure Database Migration Service prerequisites that are common across all suppor
- Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription.
+ - As an alternative to using the built-in roles above, you can assign a custom role as defined in [this article](resource-custom-roles-sql-db-managed-instance-ads.md) (a minimal sketch of a role assignment follows the note below).
> [!IMPORTANT]
> An Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.

* Create a target [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart) or [SQL Server on Azure Virtual Machine](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal)
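As referenced above, here is a minimal sketch, assuming the azure-identity and azure-mgmt-authorization packages, of granting one of these roles from Python; every ID below is a placeholder except the Contributor role-definition GUID, which is the well-known built-in one.

```python
# Hypothetical helper: assign the built-in Contributor role at resource-group
# scope to the account that will run the migration. All IDs are placeholders.
import uuid

from azure.identity import AzureCliCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(AzureCliCredential(), subscription_id)

scope = f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
# b24988ac-... is the fixed GUID of the built-in Contributor role.
contributor = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
)

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names must be new GUIDs
    {
        "role_definition_id": contributor,
        "principal_id": "<object-id-of-user-or-service-principal>",
    },
)
```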
firewall Policy Rule Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-rule-sets.md
Previously updated : 11/9/2021 Last updated : 05/11/2022
Even though you can't delete the default rule collection groups nor modify their
Rule collection groups contain one or multiple rule collections, which can be of type DNAT, network, or application. For example, you can group rules belonging to the same workloads or a VNet in a rule collection group.
-Rule collection groups have a maximum size of 2 Mb. If you need more than 2 Mb, you can split the rules into multiple rule collection groups. A Firewall Policy can contain 50 rule collection groups.
+Rule collection groups have a maximum size of 2 MB. If you need more than 2 MB, you can split the rules into multiple rule collection groups. A Firewall Policy can contain 50 rule collection groups.
## Rule collections
governance Assign Policy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-python.md
Python can be used, including [bash on Windows 10](/windows/wsl/install-win10) o
> [!NOTE]
> Azure CLI is required to enable Python to use the **CLI-based authentication** in the following
> examples. For information about other options, see
- > [Authenticate using the Azure management libraries for Python](/azure/developer/python/azure-sdk-authenticate).
+ > [Authenticate using the Azure management libraries for Python](/azure/developer/python/sdk/authentication-overview).
1. Authenticate through Azure CLI.
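This is not the article's full sample, but a minimal sketch of what CLI-based authentication looks like from Python once `az login` has run, assuming the azure-identity and azure-mgmt-resource packages; the subscription ID is a placeholder.

```python
# AzureCliCredential reuses the token from `az login`, so no secrets are
# needed in the script itself.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import PolicyClient

credential = AzureCliCredential()
policy_client = PolicyClient(credential, "<subscription-id>")

# Quick sanity check that the credential works: print a few built-in
# policy definition names.
for definition in list(policy_client.policy_definitions.list_built_in())[:5]:
    print(definition.display_name)
```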
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/index.md
Title: Index of policy samples
-description: Index of built-ins for Azure Policy. Categories Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more.
Previously updated : 09/08/2021
+description: Index of built-ins for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more.
Last updated : 05/11/2022
+
+# Azure Policy Samples
Azure:
- [NIST SP 800-53 Rev. 5](./nist-sp-800-53-r5.md)
- [NIST SP 800-53 Rev. 4](./nist-sp-800-53-r4.md)
- [NIST SP 800-171 R2](./nist-sp-800-171-r2.md)
+- [SWIFT CSCF v2021](./swift-cscf-v2021.md)
- [UK OFFICIAL and UK NHS](./ukofficial-uknhs.md)

The following are the [Regulatory Compliance](../concepts/regulatory-compliance.md) built-ins in
governance Swift Cscf V2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-cscf-v2021.md
+
+ Title: "Regulatory Compliance details for [Preview]: SWIFT CSCF v2021"
+description: "Details of the [Preview]: SWIFT CSCF v2021 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment."
Last updated : 05/10/2022+++++
+# Details of the [Preview]: SWIFT CSCF v2021 Regulatory Compliance built-in initiative
+
+The following article details how the Azure Policy Regulatory Compliance built-in initiative
+definition maps to **compliance domains** and **controls** in [Preview]: SWIFT CSCF v2021.
+For more information about this compliance standard, see
+[[Preview]: SWIFT CSCF v2021](https://nvd.nist.gov/800-53). To understand
+_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and
+[Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
+
+The following mappings are to the **[Preview]: SWIFT CSCF v2021** controls. Use the
+navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete
+initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
+Then, find and select the **[Preview]: SWIFT CSCF v2021** Regulatory Compliance built-in
+initiative definition.
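As a scripted alternative to the portal steps above, a sketch like the following could locate the initiative with the azure-mgmt-resource `PolicyClient`; the display-name match is an assumption about how the built-in is named, and the subscription ID is a placeholder.

```python
# List built-in initiative (policy set) definitions and pick out the
# SWIFT CSCF v2021 one by display name.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import PolicyClient

policy_client = PolicyClient(AzureCliCredential(), "<subscription-id>")

for initiative in policy_client.policy_set_definitions.list_built_in():
    if initiative.display_name and "SWIFT CSCF v2021" in initiative.display_name:
        print(initiative.name, "-", initiative.display_name)
```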
+
+> [!IMPORTANT]
+> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
+> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
+> control; however, there often is not a one-to-one or complete match between a control and one or
+> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions
+> themselves; this doesn't ensure you're fully compliant with all requirements of a control. In
+> addition, the compliance standard includes controls that aren't addressed by any Azure Policy
+> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your
+> overall compliance status. The associations between compliance domains, controls, and Azure Policy
+> definitions for this compliance standard may change over time. To view the change history, see the
+> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/SWIFT_CSCF_v2021.json).
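With the caveats above in mind, here is a minimal sketch, assuming the azure-mgmt-policyinsights package, of pulling the latest per-resource compliance records for a subscription; the subscription ID is a placeholder and the exact operation signature may vary by package version.

```python
# Query the latest policy compliance state for every resource in the
# subscription. "latest" selects the most recent record per resource.
from azure.identity import AzureCliCredential
from azure.mgmt.policyinsights import PolicyInsightsClient

subscription_id = "<subscription-id>"
insights = PolicyInsightsClient(AzureCliCredential(), subscription_id)

states = insights.policy_states.list_query_results_for_subscription(
    policy_states_resource="latest",
    subscription_id=subscription_id,
)
for state in states:
    print(state.resource_id, state.compliance_state)
```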
+
+## SWIFT Environment Protection
+
+### SWIFT Environment Protection
+
+**ID**: SWIFT CSCF v2021 1.1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[\[Preview\]: Azure Key Vault should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](../../../key-vault/general/private-link-service.md). |Audit, Deny, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
+|[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) |
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[App Service should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |This policy audits any App Service not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](../../../container-registry/container-registry-private-link.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
+|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) |
+|[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](../../../virtual-network/network-security-groups-overview.md) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Key Vault should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea4d6841-2173-4317-9747-ff522a45120f) |This policy audits any Key Vault not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_KeyVault_Audit.json) |
+|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. It is required to have a network watcher resource group to be created in every region where a virtual network is present. An alert is enabled if a network watcher resource group is not available in a particular region. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) |
+|[Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7698e800-9299-47a6-b3b6-5a0fee576eed) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PrivateEndpoint_Audit.json) |
+|[Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a1302fb-a631-4106-9753-f3d494733990) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_EnablePrivateEndPoint_Audit.json) |
+|[Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7595c971-233d-4bcf-bd18-596129188c49) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnablePrivateEndPoint_Audit.json) |
+|[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) |
+|[Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e) |Remote debugging requires inbound ports to be opened on API apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_ApiApp_Audit.json) |
+|[Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
+|[Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on a web application. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
+|[SQL Server should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5d2f14-d830-42b6-9899-df6cfe9c71a3) |This policy audits any SQL Server not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_SQLServer_AuditIfNotExists.json) |
+|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
+|[Storage Accounts should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60d21c4f-21a3-4d94-85f4-b924e6aeeda4) |This policy audits any Storage Account not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_StorageAccount_Audit.json) |
+|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
+|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |
+
+### Operating System Privileged Account Control
+
+**ID**: SWIFT CSCF v2021 1.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
+|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e) |Remote debugging requires inbound ports to be opened on API apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_ApiApp_Audit.json) |
+|[Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
+|[Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on a web application. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
+|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |
+|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
+
+### Virtualisation Platform Protection
+
+**ID**: SWIFT CSCF v2021 1.3
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Audit VMs that do not use managed disks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F06a78e20-9358-41c9-923c-fb736d382a4d) |This policy audits VMs that do not use managed disks |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json) |
+
+### Restriction of Internet Access
+
+**ID**: SWIFT CSCF v2021 1.4
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
+
+## Reduce Attack Surface and Vulnerabilities
+
+### Internal Data Flow Security
+
+**ID**: SWIFT CSCF v2021 2.1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[API App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb7ddfbdc-1260-477d-91fd-98bd9be789a6) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceApiApp_AuditHTTP_Audit.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
+|[Azure SQL Database should be running TLS version 1.2 or newer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32e6bbec-16b6-44c2-be37-c5b672d103cf) |Setting TLS version to 1.2 or newer improves security by ensuring your Azure SQL Database can only be accessed from clients using TLS 1.2 or newer. Using versions of TLS less than 1.2 is not recommended since they have well documented security vulnerabilities. |Audit, Disabled, Deny |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_MiniumTLSVersion_Audit.json) |
+|[Ensure WEB app has 'Client Certificates (Incoming client certificates)' set to 'On'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) |
+|[Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
+|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for AKS Engine and Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md) |audit, Audit, deny, Deny, disabled, Disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/IngressHttpsOnly.json) |
+|[Latest TLS version should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8cb6aa8b-9e41-4f4e-aa25-089a7ac2581e) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_ApiApp_Audit.json) |
+|[Latest TLS version should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
+|[Latest TLS version should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
+|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
+|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
+|[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) |
+|[SQL Managed Instance should have the minimal TLS version of 1.2](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8793640-60f7-487c-b5c3-1d37215905c4) |Setting minimal TLS version to 1.2 improves security by ensuring your SQL Managed Instance can only be accessed from clients using TLS 1.2. Using versions of TLS less than 1.2 is not recommended since they have well documented security vulnerabilities. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_MiniumTLSVersion_Audit.json) |
+|[Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
+|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
+
+### Security Updates
+
+**ID**: SWIFT CSCF v2021 2.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Audit Windows VMs with a pending reboot](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4221adbc-5c0f-474f-88b7-037a99e6114c) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if the machine is pending reboot for any of the following reasons: component based servicing, Windows Update, pending file rename, pending computer rename, configuration manager pending reboot. Each detection has a unique registry path. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPendingReboot_AINE.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+
+### System Hardening
+
+**ID**: SWIFT CSCF v2021 2.3
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if they do not have the passwd file permissions set to 0644. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) |
+|[Audit Windows machines that contain certificates expiring within the specified number of days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1417908b-4bff-46ee-a2a6-4acc899320ab) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if certificates in the specified store have an expiration date out of range for the number of days given as parameter. The policy also provides the option to only check for specific certificates or exclude specific certificates, and whether to report on expired certificates. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_CertificateExpiration_AINE.json) |
+|[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if they do not store passwords using reversible encryption. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |
+
+### Back-office Data Flow Security
+
+**ID**: SWIFT CSCF v2021 2.4A
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[API App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb7ddfbdc-1260-477d-91fd-98bd9be789a6) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceApiApp_AuditHTTP_Audit.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
+|[Ensure WEB app has 'Client Certificates (Incoming client certificates)' set to 'On'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) |
+|[Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
+|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
+|[Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
+|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
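+
+These definitions audit HTTPS and TLS usage rather than enforce it by default. As a minimal Azure CLI sketch (the assignment name and subscription ID are placeholders), any of the listed definitions can be assigned at subscription scope using the definition ID shown in its portal link:
+
+```azurecli
+# Assign the built-in "Web Application should only be accessible over HTTPS" definition.
+az policy assignment create \
+  --name 'audit-webapp-https' \
+  --policy 'a4af4a39-4135-47fb-b175-47fbdf85311d' \
+  --scope '/subscriptions/<subscription-id>'
+```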
+
+### External Transmission Data Protection
+
+**ID**: SWIFT CSCF v2021 2.5A
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[API App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb7ddfbdc-1260-477d-91fd-98bd9be789a6) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceApiApp_AuditHTTP_Audit.json) |
+|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](../../../site-recovery/index.yml). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
+|[Audit VMs that do not use managed disks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F06a78e20-9358-41c9-923c-fb736d382a4d) |This policy audits VMs that do not use managed disks |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json) |
+|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/container-registry-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
+|[Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
+|[Geo-redundant storage should be enabled for Storage Accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf045164-79ba-4215-8f95-f8048dc1780b) |Use geo-redundancy to create highly available applications |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/GeoRedundant_StorageAccounts_Audit.json) |
+|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
+|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
+|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse), Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
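+
+Several of these controls can also be remediated directly on the resource. For example, assuming placeholder account and resource group names, secure transfer (HTTPS-only) can be enabled on a storage account with the Azure CLI:
+
+```azurecli
+# Force the storage account to accept requests over HTTPS only.
+az storage account update \
+  --name <storage-account> \
+  --resource-group <resource-group> \
+  --https-only true
+```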
+
+### Operator Session Confidentiality and Integrity
+
+**ID**: SWIFT CSCF v2021 2.6
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure SQL Database should be running TLS version 1.2 or newer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32e6bbec-16b6-44c2-be37-c5b672d103cf) |Setting TLS version to 1.2 or newer improves security by ensuring your Azure SQL Database can only be accessed from clients using TLS 1.2 or newer. Using versions of TLS less than 1.2 is not recommended since they have well documented security vulnerabilities. |Audit, Disabled, Deny |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_MiniumTLSVersion_Audit.json) |
+|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) |
+|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) |
+|[Latest TLS version should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8cb6aa8b-9e41-4f4e-aa25-089a7ac2581e) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_ApiApp_Audit.json) |
+|[Latest TLS version should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
+|[Latest TLS version should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
+|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
+|[SQL Managed Instance should have the minimal TLS version of 1.2](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8793640-60f7-487c-b5c3-1d37215905c4) |Setting minimal TLS version to 1.2 improves security by ensuring your SQL Managed Instance can only be accessed from clients using TLS 1.2. Using versions of TLS less than 1.2 is not recommended since they have well documented security vulnerabilities. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_MiniumTLSVersion_Audit.json) |
+|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
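+
+As a sketch of direct remediation for the TLS checks above (app, server, and resource group names are placeholders), the minimum TLS version can be raised with the Azure CLI:
+
+```azurecli
+# Require TLS 1.2 or newer on an App Service app.
+az webapp config set \
+  --name <app-name> \
+  --resource-group <resource-group> \
+  --min-tls-version 1.2
+
+# Require TLS 1.2 as the minimum on an Azure SQL logical server.
+az sql server update \
+  --name <sql-server> \
+  --resource-group <resource-group> \
+  --minimal-tls-version 1.2
+```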
+
+### Vulnerability Scanning
+
+**ID**: SWIFT CSCF v2021 2.7
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
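+
+The Azure Defender definitions in this table check that the corresponding Defender plan is enabled on the subscription. A minimal sketch of enabling two of the plans with the Azure CLI; the plan names mirror the resource types in the table:
+
+```azurecli
+# Enable Azure Defender for servers and for Azure SQL Database servers.
+az security pricing create --name VirtualMachines --tier 'standard'
+az security pricing create --name SqlServers --tier 'standard'
+```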
+
+## Physically Secure the Environment
+
+### Physical Security
+
+**ID**: SWIFT CSCF v2021 3.1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit VMs that do not use managed disks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F06a78e20-9358-41c9-923c-fb736d382a4d) |This policy audits VMs that do not use managed disks |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json) |
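+
+A VM flagged by this definition can be migrated to managed disks. The following Azure CLI sketch (VM and resource group names are placeholders) deallocates the VM first, because conversion requires it:
+
+```azurecli
+# Conversion to managed disks only works on a deallocated VM.
+az vm deallocate --resource-group <resource-group> --name <vm-name>
+az vm convert --resource-group <resource-group> --name <vm-name>
+az vm start --resource-group <resource-group> --name <vm-name>
+```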
+
+## Prevent Compromise of Credentials
+
+### Password Policy
+
+**ID**: SWIFT CSCF v2021 4.1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines allow remote connections from accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) |
+|[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines have accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
+|[Audit Windows machines that allow re-use of the previous 24 passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b054a0d-39e2-4d53-bea3-9734cad2c69b) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Windows machines allow re-use of the previous 24 passwords |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEnforce_AINE.json) |
+|[Audit Windows machines that do not have a maximum password age of 70 days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4ceb8dc2-559c-478b-a15b-733fbf1e3738) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Windows machines do not have a maximum password age of 70 days |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsMaximumPassword_AINE.json) |
+|[Audit Windows machines that do not have a minimum password age of 1 day](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F237b38db-ca4d-4259-9e47-7882441ca2c0) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Windows machines do not have a minimum password age of 1 day |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsMinimumPassword_AINE.json) |
+|[Audit Windows machines that do not have the password complexity setting enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf16e0bb-31e1-4646-8202-60a235cc7e74) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Windows machines do not have the password complexity setting enabled |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordComplexity_AINE.json) |
+|[Audit Windows machines that do not restrict the minimum password length to 14 characters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa2d0e922-65d0-40c4-8f87-ea6da2d307a2) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Windows machines do not restrict the minimum password length to 14 characters |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordLength_AINE.json) |
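+
+Each of these definitions depends on the Guest Configuration prerequisites noted in its description. As a minimal sketch (assignment name, subscription ID, and resource group are placeholders), the minimum-password-length definition from the table can be assigned at resource group scope:
+
+```azurecli
+az policy assignment create \
+  --name 'audit-min-password-length' \
+  --policy 'a2d0e922-65d0-40c4-8f87-ea6da2d307a2' \
+  --scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>'
+```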
+
+### Multi-factor Authentication
+
+**ID**: SWIFT CSCF v2021 4.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[MFA should be enabled on accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
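+
+After one of these MFA definitions is assigned, its compliance results can be summarized per assignment. A sketch assuming the policy state commands are available in your Azure CLI version; the assignment name and subscription ID are placeholders:
+
+```azurecli
+az policy assignment create \
+  --name 'audit-mfa-owner' \
+  --policy 'aa633080-8b72-40c4-a2d7-d00c03e80bed' \
+  --scope '/subscriptions/<subscription-id>'
+
+# Summarize compliance for the assignment once evaluation has run.
+az policy state summarize --policy-assignment 'audit-mfa-owner'
+```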
+
+## Manage Identities and Segregate Privileges
+
+### Logical Access Control
+
+**ID**: SWIFT CSCF v2021 5.1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
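+
+To review the owner and external accounts these definitions evaluate, list the current role assignments at subscription scope. A minimal sketch with a placeholder subscription ID:
+
+```azurecli
+# List who holds the Owner role on the subscription.
+az role assignment list \
+  --role Owner \
+  --scope '/subscriptions/<subscription-id>' \
+  --query '[].principalName' \
+  --output tsv
+```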
+
+### Token Management
+
+**ID**: SWIFT CSCF v2021 5.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
+|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
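+
+The managed identity definitions above flag apps that still rely on embedded credentials. As a sketch (app and resource group names are placeholders), a system-assigned identity can be added with the Azure CLI:
+
+```azurecli
+# Enable a system-assigned managed identity on a web app and a function app.
+az webapp identity assign --name <app-name> --resource-group <resource-group>
+az functionapp identity assign --name <function-app-name> --resource-group <resource-group>
+```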
+
+### Physical and Logical Password Storage
+
+**ID**: SWIFT CSCF v2021 5.4
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Windows machines do not store passwords using reversible encryption |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) |
+|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
+|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
+|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
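+
+For the purge protection definition in this table, the setting can be enabled directly on a vault. A minimal sketch with placeholder names; note that purge protection cannot be turned off once enabled:
+
+```azurecli
+# Enabling purge protection is irreversible.
+az keyvault update \
+  --name <key-vault-name> \
+  --resource-group <resource-group> \
+  --enable-purge-protection true
+```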
+
+## Detect Anomalous Activity to Systems or Transaction Records
+
+### Malware Protection
+
+**ID**: SWIFT CSCF v2021 6.1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machine scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
+|[Microsoft Antimalware for Azure should be configured to automatically update protection signatures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc43e4a30-77cb-48ab-a4dd-93f175c63b57) |This policy audits any Windows virtual machine not configured with automatic update of Microsoft Antimalware protection signatures. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VirtualMachines_AntiMalwareAutoUpdate_AuditIfNotExists.json) |
+|[Microsoft IaaSAntimalware extension should be deployed on Windows servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b597639-28e4-48eb-b506-56b05d366257) |This policy audits any Windows server VM without Microsoft IaaSAntimalware extension deployed. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/WindowsServers_AntiMalware_AuditIfNotExists.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
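+
+A Windows VM flagged by the IaaSAntimalware definition can have the extension deployed directly. The sketch below uses placeholder names and a minimal settings payload; consult the extension documentation for the full settings schema:
+
+```azurecli
+az vm extension set \
+  --resource-group <resource-group> \
+  --vm-name <vm-name> \
+  --name IaaSAntimalware \
+  --publisher Microsoft.Azure.Security \
+  --settings '{"AntimalwareEnabled": true}'
+```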
+
+### Software Integrity
+
+**ID**: SWIFT CSCF v2021 6.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) |
+|[Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e) |Remote debugging requires inbound ports to be opened on API apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_ApiApp_Audit.json) |
+|[Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
+|[Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on a web application. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
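+
+Remote debugging flagged by these definitions can be turned off per app. A minimal Azure CLI sketch with placeholder names; the same flag is available through `az functionapp config set` for function apps:
+
+```azurecli
+# Close the inbound ports opened by remote debugging.
+az webapp config set \
+  --name <app-name> \
+  --resource-group <resource-group> \
+  --remote-debugging-enabled false
+```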
+
+### Database Integrity
+
+**ID**: SWIFT CSCF v2021 6.3
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
+|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) |
+|[Disconnections should be logged for PostgreSQL database servers.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e446) |This policy helps audit any PostgreSQL databases in your environment without log_disconnections enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDisconnections_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+|[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |
+|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL Server's auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
+|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
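+
+As a sketch of direct remediation for two of the controls above (server, database, and resource group names are placeholders), Transparent Data Encryption and public network access can be configured with the Azure CLI:
+
+```azurecli
+# Enable Transparent Data Encryption on a database.
+az sql db tde set \
+  --resource-group <resource-group> \
+  --server <sql-server> \
+  --database <database-name> \
+  --status Enabled
+
+# Disable public network access on the logical server.
+az sql server update \
+  --name <sql-server> \
+  --resource-group <resource-group> \
+  --enable-public-network false
+```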
+
+### Logging and Monitoring
+
+**ID**: SWIFT CSCF v2021 6.4
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
+|[Activity log should be retained for at least one year](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb02aacc0-b073-424e-8298-42b22829ee0a) |This policy audits the activity log if the retention is not set for 365 days or forever (retention days set to 0). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLogRetention_365orGreater.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](../../../site-recovery/index.yml). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
+|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
+|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) |
+|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
+|[Azure Monitor solution 'Security and Audit' must be deployed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3e596b57-105f-48a6-be97-03e9243bad6e) |This policy ensures that Security and Audit is deployed. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Security_Audit_MustBeDeployed.json) |
+|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
+|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
+|[Log Analytics extension should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c3bc7b8-a64c-4e08-a9cd-7ff0f31e1138) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_VMSS_Audit.json) |
+|[Resource logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F428256e6-1fac-4f48-a757-df34c2b3336d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Batch/Batch_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Event Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83a214f7-d01a-484b-91a9-ed54470c9a6a) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F383856f8-de7f-44a2-81fc-e5135b5c2aa4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTHub_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4330a05-a843-4bc8-bf9a-cacce50c67f4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) |It is recommended to enable logs so that the activity trail can be recreated when investigations are required in the event of an incident or a compromise. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ServiceFabric_and_VMSS_AuditVMSSDiagnostics.json) |
+|[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) |
+|[Virtual machines should have the Log Analytics extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) |
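To spot-check a subscription against one of these definitions, you can query resource compliance with Azure PowerShell. The following is a minimal sketch, assuming you're already signed in and the `Az.PolicyInsights` module is installed; the definition GUID comes from the 'Auditing on SQL server should be enabled' row above.

```powershell
# Minimal sketch: list non-compliant resources for one definition from the table above.
# Assumes Connect-AzAccount has already been run and Az.PolicyInsights is installed.
$definitionId = '/providers/Microsoft.Authorization/policyDefinitions/a6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9'

Get-AzPolicyState -Filter "PolicyDefinitionId eq '$definitionId' and ComplianceState eq 'NonCompliant'" |
    Select-Object -Property ResourceId, ComplianceState
```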
+
+### Intrusion Detection
+
+**ID**: SWIFT CSCF v2021 6.5A
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) |
+|[CORS should not allow every resource to access your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F358c20a6-3f9e-4f0e-97ff-c6ce485e2aac) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API app. Allow only required domains to interact with your API app. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_ApiApp_Audit.json) |
+|[CORS should not allow every resource to access your Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) |
+|[CORS should not allow every resource to access your Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your web application. Allow only required domains to interact with your web app. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
+|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. It is required to have a network watcher resource group to be created in every region where a virtual network is present. An alert is enabled if a network watcher resource group is not available in a particular region. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) |
+|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
+|[Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e) |Remote debugging requires inbound ports to be opened on API apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_ApiApp_Audit.json) |
+|[Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
+|[Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on a web application. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
+
+## Plan for Incident Response and Information Sharing
+
+### Cyber Incident Response Planning
+
+**ID**: SWIFT CSCF v2021 7.1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+
+## Next steps
+
+Additional articles about Azure Policy:
+
+- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
+- See the [initiative definition structure](../concepts/initiative-definition-structure.md).
+- Review other examples at [Azure Policy samples](./index.md).
+- Review [Understanding policy effects](../concepts/effects.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance First Query Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-python.md
installed.
> [!NOTE] > Azure CLI is required to enable Python to use the **CLI-based authentication** in the following > examples. For information about other options, see
- > [Authenticate using the Azure management libraries for Python](/azure/developer/python/azure-sdk-authenticate).
+ > [Authenticate using the Azure management libraries for Python](/azure/developer/python/sdk/authentication-overview).
1. Authenticate through Azure CLI.
hdinsight Apache Hbase Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-overview.md
description: An introduction to Apache HBase in HDInsight, a NoSQL database buil
Previously updated : 04/20/2020 Last updated : 05/11/2022 #Customer intent: As a developer new to Apache HBase and Apache HBase in Azure HDInsight, I want to have a basic understanding of Microsoft's implementation of Apache HBase in Azure HDInsight so I can decide if I want to use it rather than build my own cluster.
hdinsight Hdinsight Hadoop Hive Out Of Memory Error Oom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-hive-out-of-memory-error-oom.md
keywords: out of memory error, OOM, Hive settings
Previously updated : 11/28/2019 Last updated : 05/11/2022 # Fix an Apache Hive out of memory error in Azure HDInsight
With the new settings, the query successfully ran in under 10 minutes.
## Next steps
-Getting an OOM error doesn't necessarily mean the container size is too small. Instead, you should configure the memory settings so that the heap size is increased and is at least 80% of the container memory size. For optimizing Hive queries, see [Optimize Apache Hive queries for Apache Hadoop in HDInsight](hdinsight-hadoop-optimize-hive-query.md).
+Getting an OOM error doesn't necessarily mean the container size is too small. Instead, you should configure the memory settings so that the heap size is increased and is at least 80% of the container memory size. For optimizing Hive queries, see [Optimize Apache Hive queries for Apache Hadoop in HDInsight](hdinsight-hadoop-optimize-hive-query.md).
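As a concrete illustration of that 80% guideline (the values are examples only, not tuning advice for any specific workload), the container size and heap settings might be paired like this in a Hive session:

```sql
-- Illustrative values only: keep the Java heap at roughly 80% of the Tez container size
SET hive.tez.container.size=10240;    -- container memory in MB
SET hive.tez.java.opts=-Xmx8192m;     -- heap of 8192 MB, about 80% of 10240 MB
```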
hdinsight Interactive Query Troubleshoot Tez Hangs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-tez-hangs.md
Title: Apache Tez application hangs in Azure HDInsight
description: Apache Tez application hangs in Azure HDInsight Previously updated : 08/09/2019 Last updated : 05/11/2022 # Scenario: Apache Tez application hangs in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Overview Data Lake Storage Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/overview-data-lake-storage-gen1.md
description: Overview of Data Lake Storage Gen1 in HDInsight.
Previously updated : 04/21/2020 Last updated : 05/11/2022 # Azure Data Lake Storage Gen1 overview in HDInsight
For more information on how to access the data in Data Lake Storage Gen1, see [A
* [Introduction to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) * [Introduction to Azure Storage](../storage/common/storage-introduction.md)
-* [Azure Data Lake Storage Gen2 overview](./overview-data-lake-storage-gen2.md)
+* [Azure Data Lake Storage Gen2 overview](./overview-data-lake-storage-gen2.md)
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
The response to this request looks like the following example:
## Create a job
-Use the following request to retrieve the details of the devices in a job:
+Use the following request to create a job:
```http PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006?api-version=1.2-preview ```
+The `group` field in the request body identifies a device group in your IoT Central application. A job uses a device group to identify the set of devices the job operates on.
++
+If you don't already have a suitable device group, you can create one with a REST API call. The following example creates a device group with `group1` as the group ID:
+
+```http
+PUT https://{subdomain}.{baseDomain}/api/deviceGroups/group1?api-version=1.2-preview
+```
+
+When you create a device group, you define a `filter` that selects the devices to include in the group. A filter identifies a device template and any properties to match. The following example creates a device group that contains all devices associated with the "dtmi:modelDefinition:dtdlv2" device template where the `provisioned` property is `true`.
+
+```json
+{
+ "displayName": "Device group 1",
+ "description": "Custom device group.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true"
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "group1",
+ "displayName": "Device group 1",
+ "description": "Custom device group.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true"
+}
+```
+
+You can now use the `id` value from the response to create a new job.
++ ```json { "displayName": "Set target temperature", "description": "Set target temperature device property",
- "group": "833d7a7d-8f99-4e04-9e56-745806bdba6e",
+ "group": "group1",
"batch": { "type": "percentage", "value": 25
The response to this request looks like the following example. The initial job s
"id": "job-006", "displayName": "Set target temperature", "description": "Set target temperature device property",
- "group": "833d7a7d-8f99-4e04-9e56-745806bdba6e",
+ "group": "group1",
"batch": { "type": "percentage", "value": 25
iot-edge How To Provision Devices At Scale Linux On Windows Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-tpm.md
Simulated TPM samples:
```powershell Provision-EflowVM -provisioningType "DpsTpm" -scopeId "SCOPE_ID_HERE" ```
+
+ If you have enrolled the device using a custom **Registration Id**, you must specify that Registration Id as well when provisioning:
+
+ ```powershell
+ Provision-EflowVM -provisioningType "DpsTpm" -scopeId "SCOPE_ID_HERE" -registrationId "REGISTRATION_ID_HERE"
+ ```
# [Windows Admin Center](#tab/windowsadmincenter)
Simulated TPM samples:
```powershell Provision-EflowVM -provisioningType "DpsTpm" -scopeId "SCOPE_ID_HERE" ```
+
+ If you have enrolled the device using a custom **Registration Id**, you must specify that Registration Id as well when provisioning:
+
+ ```powershell
+ Provision-EflowVM -provisioningType "DpsTpm" -scopeId "SCOPE_ID_HERE" -registrationId "REGISTRATION_ID_HERE"
+ ```
:::moniker-end
Use the following commands on your device to verify that the IoT Edge installed
The device provisioning service enrollment process lets you set the device ID and device twin tags at the same time as you provision the new device. You can use those values to target individual devices or groups of devices by using automatic device management.
-Learn how to [deploy and monitor IoT Edge modules at scale by using the Azure portal](how-to-deploy-at-scale.md) or [the Azure CLI](how-to-deploy-cli-at-scale.md).
+Learn how to [deploy and monitor IoT Edge modules at scale by using the Azure portal](how-to-deploy-at-scale.md) or [the Azure CLI](how-to-deploy-cli-at-scale.md).
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
The **Provision-EflowVm** command adds the provisioning information for your IoT
| deviceId | The device ID of an existing IoT Edge device | Device ID for provisioning an IoT Edge device (**ManualX509**). | | scopeId | The scope ID for an existing DPS instance. | Scope ID for provisioning an IoT Edge device (**DpsTPM**, **DpsX509**, or **DpsSymmetricKey**). | | symmKey | The primary key for an existing DPS enrollment or the primary key of an existing IoT Edge device registered using symmetric keys | Symmetric key for provisioning an IoT Edge device (**DpsSymmetricKey**). |
-| registrationId | The registration ID of an existing IoT Edge device | Registration ID for provisioning an IoT Edge device (**DpsSymmetricKey**). |
+| registrationId | The registration ID of an existing IoT Edge device | Registration ID for provisioning an IoT Edge device (**DpsSymmetricKey**, **DpsTPM**). |
| identityCertPath | Directory path | Absolute destination path of the identity certificate on your Windows host machine (**ManualX509**, **DpsX509**). | | identityPrivKeyPath | Directory path | Absolute source path of the identity private key on your Windows host machine (**ManualX509**, **DpsX509**). | | globalEndpoint | Device Endpoint URL | URL for Global Endpoint to be used for DPS provisioning. |
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-schema.md
Each property is a name-value pair of type string.
* **Minimum Property Value Length**: `1` * **Maximum Property Value Length**: `64`
-_Note that the same exact set of compatibility properties cannot be re-used with a different Provider and Name combination._
+_Note that the same exact set of compatibility properties cannot be used with more than one Update Provider and Name combination._
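As an illustration only (the `manufacturer` and `model` property names are assumptions for this sketch, not requirements of the schema), a compatibility entry is a flat set of string name-value pairs:

```json
{
  "compatibility": [
    {
      "manufacturer": "Contoso",
      "model": "Toaster"
    }
  ]
}
```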
### instructions object
machine-learning Concept Sourcing Human Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-sourcing-human-data.md
We suggest the following best practices for manually collecting human data direc
:::row::: :::column span="":::
- **Communicate expectations clearly in the Statement of Work (SOW) with suppliers.**
+ **Communicate expectations clearly in the Statement of Work (SOW), or equivalent contracts or agreements, with suppliers.**
:::column-end::: :::column span="":::
- - An SOW which lacks requirements for responsible data collection work may result in low-quality or poorly collected data.
+ - A contract that lacks requirements for responsible data collection work may result in low-quality or poorly collected data.
:::column-end::: :::row-end:::
We suggest the following best practices for manually collecting human data direc
>[!NOTE] >This article focuses on recommendations for human data, including personal data and sensitive data such as biometric data, health data, racial or ethnic data, data collected manually from the general public or company employees, as well as metadata relating to human characteristics, such as age, ancestry, and gender identity, that may be created via annotation or labeling.
+[Download the full recommendations here.](https://bit.ly/3FK8m8A)
++ ## Best practices for collecting age, ancestry, and gender identity
To enable people to self-identify, consider using the following survey questions
>[!CAUTION] >In some parts of the world, there are laws that criminalize specific gender categories, so it may be dangerous for data contributors to answer this question honestly. Always give people a way to opt out. And work with regional experts and attorneys to conduct a careful review of the laws and cultural norms of each place where you plan to collect data, and if needed, avoid asking this question entirely.
+[Download the full guidance here.](https://bit.ly/3woCOAz)
## Next steps For more information on how to work with your data:
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Title: What's new on the Data Science Virtual Machine description: Release notes for the Azure Data Science Virtual Machine Last updated 12/14/2021
Main changes:
* Changed CUDA to version 11.5 * Changed Docker to version 20.10.10 * Changed Intellijidea to version 2021.2.3
-* Changed NVIDIA Drivers to version 495.29.05
-* Changed NVIDIA SMI to version 495.29.05
+* Changed NVIDIA Drivers to version 470.103.01
+* Changed NVIDIA SMI to version 470.103.01
* Changed Nodejs to version v16.13.0 * Changed Pycharm to version 2021.2.3 * Changed VS Code to version 1.61.2
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
When you enable **No public IP**, your compute cluster doesn't use a public IP f
> [!WARNING] > By default, you do not have public internet access from No Public IP Compute Cluster. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview) with a public IP.
-A compute cluster with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork** and any port source, to destination of **VirtualNetwork**, and destination port of **29876, 29877**.
+A compute cluster with **No public IP** enabled has **no inbound communication requirements** from the public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound traffic from source **VirtualNetwork** on any source port to destination **VirtualNetwork** on destination ports **29876, 29877**, and inbound traffic from source **AzureLoadBalancer** on any source port to destination **VirtualNetwork** on destination port **44224**.
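A minimal sketch of those two inbound rules with Azure PowerShell follows; the NSG name, resource group, location, and rule priorities are assumptions:

```powershell
# Minimal sketch: the two inbound rules described above (names and priorities assumed).
$batch = New-AzNetworkSecurityRuleConfig -Name 'AllowBatchInbound' -Direction Inbound -Access Allow `
    -Protocol Tcp -Priority 100 -SourceAddressPrefix VirtualNetwork -SourcePortRange '*' `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 29876,29877

$lb = New-AzNetworkSecurityRuleConfig -Name 'AllowAzureLoadBalancerInbound' -Direction Inbound -Access Allow `
    -Protocol Tcp -Priority 110 -SourceAddressPrefix AzureLoadBalancer -SourcePortRange '*' `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 44224

New-AzNetworkSecurityGroup -Name 'nsg-training-subnet' -ResourceGroupName 'my-rg' -Location 'eastus' `
    -SecurityRules $batch, $lb
```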
**No public IP** clusters are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace. A compute cluster with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and aren't Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
This article is part of a series on securing an Azure Machine Learning workflow.
* [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
-* [Use a firewall](how-to-access-azureml-behind-firewall.md)
+* [Use a firewall](how-to-access-azureml-behind-firewall.md)
marketplace Analytics Api Delete Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-delete-report.md
On execution, this API deletes all of the report and report execution records.
| Method | Request URI | | | - | | DELETE | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport/{Report ID}` |
-|||
**Request header**
On execution, this API deletes all of the report and report execution records.
| | - | - | | Authorization | string | Required. The Azure AD access token in the form `Bearer <token>` | | Content Type | string | `Application/JSON` |
-||||
**Path parameter**
None
| Parameter name | Required | string | Description | | | - | - | - | | `reportId` | Yes | string | ID of the report thatΓÇÖs being modified |
-|||||
**Glossary**
Response payload:
| `RecurrenceCount` | Recurrence count provided during report creation | | `CallbackUrl` | Callback URL provided in the request | | `Format` | Format of the report files |
-|||
marketplace Analytics Api Get Report Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-get-report-queries.md
The Get report queries API gets all queries that are available for use in report
| **Method** | **Request URI** | | | | | GET | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledQueries?queryId={QueryID}&queryName={QueryName}&includeSystemQueries={include_system_queries}&includeOnlySystemQueries={include_only_system_queries}` |
-|||
**Request header**
The Get report queries API gets all queries that are available for use in report
| | | | | Authorization | string | Required. The Azure Active Directory (Azure AD) access token in the form `Bearer <token>` | | Content-Type | string | `Application/JSON` |
-||||
**Path parameter**
None
| `queryName` | string | No | Filter to get details of only queries with the name given in the argument | | `IncludeSystemQueries` | boolean | No | Include predefined system queries in the response | | `IncludeOnlySystemQueries` | boolean | No | Include only system queries in the response |
-|||||
**Request payload**
This table describes the key definitions of elements in the response.
| `CreatedTime` | Time of creation of query | | `TotalCount` | Number of datasets in the Value array | | `StatusCode` | Result Code. The possible values are 200, 400, 401, 403, 500 |
-|||
marketplace Analytics Api Pause Report Executions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-pause-report-executions.md
This API, on execution, pauses the scheduled execution of reports.
| Method | Request URI | | | - | | PUT | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport/pause/{Report ID}` |
-|||
**Request header**
This API, on execution, pauses the scheduled execution of reports.
| | - | - | | Authorization | string | Required. The Azure Active Directory (Azure AD) access token in the form `Bearer <token>` | | Content-Type | string | `Application/JSON` |
-||||
**Path parameter**
None
| Parameter name | Required | Type | Description | | | - | - | - | | `reportId` | Yes | string | ID of the report being modified |
-|||||
**Glossary**
Response payload:
| `RecurrenceCount` | Recurrence count provided during report creation | | `CallbackUrl` | Callback URL provided in the request | | `Format` | Format of the report files |
-|||
marketplace Analytics Api Resume Report Executions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-resume-report-executions.md
This API, on execution, resumes the scheduled execution of a paused commercial m
| Method | Request URI | | | - | | PUT | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport/resume/{reportId}` |
-|||
**Request header**
This API, on execution, resumes the scheduled execution of a paused commercial m
| | - | - | | Authorization | string | Required. The Azure Active Directory (Azure AD) access token in the form `Bearer <token>` | | Content-Type | string | `Application/JSON` |
-||||
**Path parameter**
None
| Parameter name | Required | Type | Description | | | - | - | - | | `reportId` | Yes | string | ID of the report being modified |
-|||||
**Glossary**
Response payload:
| `RecurrenceCount` | Recurrence count provided during report creation | | `CallbackUrl` | Callback URL provided in the request | | `Format` | Format of the report files |
-|||
marketplace Analytics Api Try Report Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-try-report-queries.md
This API executes a Report query statement. The API returns only 10 records that you can use to verify the query results.
| **Method** | **Request URI** |
| --- | --- |
| GET | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledQueries/testQueryResult?exportQuery={query text}` |
**Request header**
| Authorization | string | Required. The Azure Active Directory (Azure AD) access token in the form `Bearer <token>` |
| Content-Type | string | `Application/JSON` |
**QueryParameter**
| `exportQuery` | string | Report query string that needs to be executed |
| `queryId` | string | A valid existing query ID |
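Here is a minimal Python sketch of this call; the query text is one of the samples from the sample-queries article, the token is a placeholder, and URL-encoding the query string is an implementation assumption.

```python
import requests
from urllib.parse import quote

AAD_TOKEN = "<your-azure-ad-access-token>"  # placeholder
query_text = (
    "SELECT CustomerId, NormalizedUsage, RawUsage FROM ISVUsage "
    "WHERE NormalizedUsage > 100000 TIMESPAN LAST_6_MONTHS"
)

url = (
    "https://api.partnercenter.microsoft.com/insights/v1/cmp/"
    f"ScheduledQueries/testQueryResult?exportQuery={quote(query_text)}"
)
headers = {"Authorization": f"Bearer {AAD_TOKEN}", "Content-Type": "application/json"}

response = requests.get(url, headers=headers)
response.raise_for_status()

# At most 10 records come back, which is enough to sanity-check the query
# before scheduling it in a report.
print(response.json())
```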
**Path parameter**
marketplace Analytics Api Update Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-update-report.md
This API helps you modify a report parameter.
| Method | Request URI |
| --- | --- |
| PUT | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport/{Report ID}` |
**Request header**
| Authorization | string | Required. The Azure Active Directory (Azure AD) access token in the form `Bearer <token>` |
| Content-Type | string | `Application/JSON` |
**Path parameter**
None
| Parameter name | Required | Type | Description |
| --- | --- | --- | --- |
| `reportId` | Yes | string | ID of the report being modified |
**Request payload**
This table lists the key definitions of elements in the request payload.
| `RecurrenceCount` | No | Number of reports to be generated. Default is indefinite | integer |
| `Format` | Yes | File format of the exported file. Default is CSV. | CSV/TSV |
| `CallbackUrl` | Yes | HTTPS callback URL to be called on report generation | string |
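A hedged sketch of the update call in Python; only the three fields from the request-payload table above are shown, the values are illustrative, and the full payload may include more fields than this.

```python
import requests

AAD_TOKEN = "<your-azure-ad-access-token>"  # placeholder
REPORT_ID = "<report-id>"
url = f"https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport/{REPORT_ID}"

headers = {
    "Authorization": f"Bearer {AAD_TOKEN}",
    "Content-Type": "application/json",
}

# Field names follow the request-payload table above; values are illustrative.
body = {
    "RecurrenceCount": 12,
    "Format": "CSV",
    "CallbackUrl": "https://contoso.example/report-ready",
}

response = requests.put(url, headers=headers, json=body)
response.raise_for_status()
print(response.json())
```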
**Glossary**
Response payload:
| `RecurrenceCount` | Recurrence count provided during report creation |
| `CallbackUrl` | Callback URL provided in the request |
| `Format` | Format of the report files |
marketplace Analytics Available Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-available-apis.md
The following is the list of APIs for accessing commercial marketplace analytics data.
| **API** | **Functionality** |
| --- | --- |
| [Get all datasets](analytics-api-get-all-datasets.md) | Gets all the available datasets. Datasets list the tables, columns, metrics, and time ranges. |
## Query management APIs
| [Create Report Query](analytics-programmatic-access.md#create-report-query-api) | Creates custom queries that define the dataset from which columns and metrics need to be exported. |
| [GET Report Queries](analytics-api-get-report-queries.md) | Gets all the queries available for use in reports. Gets all the system and user-defined queries by default. |
| [DELETE Report Queries](analytics-api-delete-report-queries.md) | Deletes user-defined queries. |
## Report management APIs
| [Delete Report](analytics-api-delete-report.md) | Deletes all the report and report execution records. |
| [Pause Report Executions](analytics-api-pause-report-executions.md) | Pauses the scheduled execution of reports. |
| [Resume Report Executions](analytics-api-resume-report-executions.md) | Resumes the scheduled execution of a paused report. |
## Report execution pull APIs
| **API** | **Functionality** |
| --- | --- |
| [Get Report Executions](analytics-programmatic-access.md#get-report-executions-api) | Get all the executions that have happened for a given report. |
## Next steps
marketplace Analytics Custom Query Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-custom-query-specification.md
These are some sample queries that show how to extract various types of data.
| **SELECT** MarketplaceSubscriptionId, EstimatedExtendedChargeCC **FROM** ISVUsage **ORDER BY** EstimatedExtendedChargeCC **LIMIT** 10 | This query will get the top 10 subscriptions in decreasing order of the number of licenses sold under each subscription. |
| **SELECT** CustomerId, NormalizedUsage, RawUsage **FROM** ISVUsage **WHERE** NormalizedUsage > 100000 **ORDER BY** NormalizedUsage **TIMESPAN** LAST_6_MONTHS | This query will get the NormalizedUsage and RawUsage of all the Customers who have NormalizedUsage greater than 100,000. |
| **SELECT** MarketplaceSubscriptionId, MonthStartDate, NormalizedUsage **FROM** ISVUsage **WHERE** CustomerId IN ('2a31c234-1f4e-4c60-909e-76d234f93161', '80780748-3f9a-11eb-b378-0242ac130002') | This query will get the `MarketplaceSubscriptionId` and the normalized usage for every month by the two `CustomerId` values: `2a31c234-1f4e-4c60-909e-76d234f93161` and `80780748-3f9a-11eb-b378-0242ac130002`. |
## Query specification
This table describes the symbols used in queries.
| * | Zero or more |
| + | One or more |
| &#124; | Or/One of the list |
### Query definition
marketplace Analytics Make Your First Api Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-make-your-first-api-call.md
_**Table 1: Description of parameters used in this request example**_
| `RecurrenceInterval` | Recurrence interval provided during report creation. |
| `RecurrenceCount` | Recurrence count provided during report creation. |
| `Format` | CSV and TSV file formats are supported. |
**Response example**:
marketplace Analytics Programmatic Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-programmatic-access.md
The following example shows how to create a custom query to get _Normalized Usage_ data.
| Method | Request URI |
| --- | --- |
| POST | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledQueries` |
**Request header**
| Authorization | string | Required. The Azure Active Directory (Azure AD) access token. The format is `Bearer <token>`. |
| Content-Type | `string` | `application/JSON` |
**Path parameter**
This table provides the key definitions of elements in the request payload.
| `Name` | Yes | Friendly name of the query | string |
| `Description` | No | Description of what the query returns | string |
| `Query` | Yes | Report query string | Data type: string<br>[Custom query](analytics-sample-queries.md) based on business need |
> [!NOTE]
> For custom query samples, see [Examples of sample queries](analytics-sample-queries.md).
This table provides the key definitions of elements in the response.
| `TotalCount` | Number of datasets in the Value array |
| `StatusCode` | Result Code<br>The possible values are 200, 400, 401, 403, 500 |
| `message` | Status message from the execution of the API |
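A minimal Python sketch of the create-query call. The payload field names come from the request-payload table above and the query is a sample from the sample-queries article; the token is a placeholder, and the exact shape of the created entry inside the `Value` array is an assumption.

```python
import requests

ENDPOINT = "https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledQueries"
AAD_TOKEN = "<your-azure-ad-access-token>"  # placeholder

headers = {
    "Authorization": f"Bearer {AAD_TOKEN}",
    "Content-Type": "application/json",
}

# Field names per the request-payload table above.
body = {
    "Name": "NormalizedUsageLast6Months",
    "Description": "Normalized usage for all customers over the last 6 months",
    "Query": "SELECT CustomerId, NormalizedUsage FROM ISVUsage TIMESPAN LAST_6_MONTHS",
}

response = requests.post(ENDPOINT, headers=headers, json=body)
response.raise_for_status()

result = response.json()
print("StatusCode:", result.get("StatusCode"))
# The created query (including the QueryID used by the Create report API)
# is expected in the Value array.
print(result.get("Value"))
```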
## Create report API
On creating a custom report template successfully and receiving the `QueryID` as part of the response, this API can be called to schedule the query to be executed at regular intervals.
| Method | Request URI |
| --- | --- |
| POST | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport` |
**Request header**
| Authorization | string | Required. The Azure Active Directory (Azure AD) access token. The format is `Bearer <token>`. |
| Content-Type | string | `application/JSON` |
**Path parameter**
This table provides the key definitions of elements in the request payload.
| `ExecuteNow` | No | This parameter should be used to create a report that will be executed only once. `StartTime`, `RecurrenceInterval`, and `RecurrenceCount` are ignored if this is set to `true`. The report is executed immediately in an asynchronous fashion. | true/false |
| `QueryStartTime` | No | Optionally specifies the start time for the query extracting the data. This parameter is applicable only for a one-time execution report that has `ExecuteNow` set to `true`. The format should be yyyy-MM-ddTHH:mm:ssZ. | Timestamp as string |
| `QueryEndTime` | No | Optionally specifies the end time for the query extracting the data. This parameter is applicable only for a one-time execution report that has `ExecuteNow` set to `true`. The format should be yyyy-MM-ddTHH:mm:ssZ. | Timestamp as string |
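A hedged sketch of a one-time report creation in Python. The `ExecuteNow`, `QueryStartTime`, and `QueryEndTime` fields follow the table above; `ReportName`, `QueryId`, `Format`, and `CallbackUrl` are assumed field names for the rest of the payload, and all values are placeholders.

```python
import requests

ENDPOINT = "https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport"
AAD_TOKEN = "<your-azure-ad-access-token>"  # placeholder
QUERY_ID = "<query-id-from-create-report-query>"

headers = {
    "Authorization": f"Bearer {AAD_TOKEN}",
    "Content-Type": "application/json",
}

# One-time execution: with ExecuteNow set, StartTime/RecurrenceInterval/
# RecurrenceCount are ignored, per the payload table above.
body = {
    "ReportName": "NormalizedUsageOneOff",   # assumed field name
    "QueryId": QUERY_ID,                     # assumed field name
    "ExecuteNow": True,
    "QueryStartTime": "2022-01-01T00:00:00Z",
    "QueryEndTime": "2022-03-31T23:59:59Z",
    "Format": "CSV",
    "CallbackUrl": "https://contoso.example/report-ready",
}

response = requests.post(ENDPOINT, headers=headers, json=body)
response.raise_for_status()
print(response.json())
```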
**Sample response**
This table provides the key definitions of elements in the response.
| `TotalCount` | Number of datasets in the Value array |
| `StatusCode` | Result Code<br>The possible values are 200, 400, 401, 403, 500 |
| `message` | Status message from the execution of the API |
## Get report executions API
You can use this method to query the status of a report execution using the `ReportId` received from the Create report API.
| Method | Request URI |
| --- | --- |
| GET | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport/execution/{reportId}?executionId={executionId}&executionStatus={executionStatus}&getLatestExecution={getLatestExecution}` |
**Request header**
| Authorization | string | Required. The Azure Active Directory (Azure AD) access token. The format is `Bearer <token>`. |
| Content-Type | string | `application/json` |
**Path parameter**
None
| `executionId` | No | string | Filter to get details of only reports with `executionId` given in this argument. Multiple `executionIds` can be specified by separating them with a semicolon ";". |
| `executionStatus` | No | string/enum | Filter to get details of only reports with `executionStatus` given in this argument.<br>Valid values are: `Pending`, `Running`, `Paused`, and `Completed`.<br>The default value is `Completed`. Multiple statuses can be specified by separating them with a semicolon ";". |
| `getLatestExecution` | No | boolean | The API will return details of the latest report execution.<br>By default, this parameter is set to `true`. If you choose to pass the value of this parameter as `false`, then the API will return the last 90 days' execution instances. |
**Request payload**
Key definitions of elements in the response.
| `TotalCount` | Number of datasets in the Value array |
| `StatusCode` | Result Code<br>The possible values are 200, 400, 401, 403, 404 and 500 |
| `message` | Status message from the execution of the API |
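A minimal Python sketch of polling execution status. The query parameters match the table above (defaults made explicit); the token and report ID are placeholders, and the field names inside each `Value` entry are assumptions.

```python
import requests

AAD_TOKEN = "<your-azure-ad-access-token>"  # placeholder
REPORT_ID = "<report-id>"
url = (
    "https://api.partnercenter.microsoft.com/insights/v1/cmp/"
    f"ScheduledReport/execution/{REPORT_ID}"
)

headers = {
    "Authorization": f"Bearer {AAD_TOKEN}",
    "Content-Type": "application/json",
}

# Only completed executions, latest one only (both are the documented defaults).
params = {"executionStatus": "Completed", "getLatestExecution": "true"}

response = requests.get(url, headers=headers, params=params)
response.raise_for_status()

result = response.json()
print("StatusCode:", result.get("StatusCode"))
# Completed executions are expected to carry the report download details
# in the Value array; the exact field names there are assumptions.
print(result.get("Value"))
```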
## Next steps

- You can try out the APIs through the [Swagger API URL](https://api.partnercenter.microsoft.com/insights/v1/cmp/swagger/index.html)
marketplace Analytics Programmatic Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-programmatic-faq.md
This table describes the API responses and what to do if you receive them.
| There are no executions that have occurred for the given filter conditions. Please recheck the reportId or executionId and retry the API after the report's scheduled execution time | 404 | Make sure that the `reportId` is correct. Retry the API after the report's scheduled execution time as specified in the request payload. |
| Internal error encountered while creating report. Correlation ID <> | 500 | Make sure that the format of date for the fields "StartTime", "QueryStartTime" and "QueryEndTime" are correct. |
| Service unavailable | 500 | If you continuously receive a service unavailable (5xx error), please create a [support ticket](support.md). |
## No records
marketplace Analytics Sample Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-sample-queries.md
These sample queries apply to the Customers report.
| List customer details with active customers of the partner until the date you choose | `SELECT DateAcquired,CustomerCompanyName,CustomerId FROM ISVCustomer WHERE IsActive = 1` |
| List customer details with churned customers of the partner until the date you choose | `SELECT DateAcquired,CustomerCompanyName,CustomerId FROM ISVCustomer WHERE IsActive = 0` |
| List of new customers from a specific geography in the last six months | `SELECT DateAcquired,CustomerCompanyName,CustomerId FROM ISVCustomer WHERE DateAcquired <= '2020-06-30' AND CustomerCountryRegion = 'United States'` |
## Usage report queries
These sample queries apply to the Usage report.
| List usage details with Offer Name, metered usage for the last 6M | `SELECT OfferName, MeteredUsage FROM ISVUsage WHERE OfferName = 'Example Offer Name' AND OfferType IN ('SaaS', 'Azure Applications') TIMESPAN LAST_6_MONTHS` |
| List all offer usage details of all offers for last 6M | `SELECT OfferType, OfferName, SKU, IsPrivateOffer, UsageReference, UsageDate, RawUsage, EstimatedPricePC FROM ISVUsage ORDER BY UsageDate DESC TIMESPAN LAST_MONTH` |
| List all offer usage details of private offers for last 6M | `SELECT OfferType, OfferName, SKU, IsPrivateOffer, UsageReference, UsageDate, RawUsage, EstimatedPricePC FROM ISVUsage WHERE IsPrivateOffer = '1' ORDER BY UsageDate DESC TIMESPAN LAST_MONTH` |
## Orders report queries
These sample queries apply to the Orders report.
| List Order details for trial orders active for the last 6M | `SELECT AssetId, Quantity, PurchaseRecordId, PurchaseRecordLineItemId from ISVOrder WHERE OrderStatus = 'Active' and IsTrial = 'True' TIMESPAN LAST_6_MONTHS` |
| List Order details for all offers that are active for the last 6M | `SELECT OfferName, SKU, IsPrivateOffer, AssetId, PurchaseRecordId, PurchaseRecordLineItemId, OrderPurchaseDate, BilledRevenue FROM ISVOrder WHERE OrderStatus = 'Active' TIMESPAN LAST_6_MONTHS` |
| List Order details for private offers active for the last 6M | `SELECT OfferName, SKU, IsPrivateOffer, AssetId, PurchaseRecordId, PurchaseRecordLineItemId, OrderPurchaseDate, BilledRevenue FROM ISVOrder WHERE IsPrivateOffer = '1' and OrderStatus = 'Active' TIMESPAN LAST_6_MONTHS` |
## Revenue report queries
These sample queries apply to the Revenue report.
| List billed revenue of the partner for last 1 month | `SELECT BillingAccountId, OfferName, OfferType, Revenue, EarningAmountCC, EstimatedRevenueUSD, EarningAmountUSD, PayoutStatus, PurchaseRecordId, LineItemId,TransactionAmountCC,TransactionAmountUSD, Quantity,Units FROM ISVRevenue TIMESPAN LAST_MONTH` |
| List estimated revenue in USD of all transactions with sent status in last 3 months | `SELECT BillingAccountId, OfferName, OfferType, EstimatedRevenueUSD, EarningAmountUSD, PayoutStatus, PurchaseRecordId, LineItemId, TransactionAmountUSD FROM ISVRevenue where PayoutStatus='Sent' TIMESPAN LAST_3_MONTHS` |
| List of non-trial transactions for subscription-based billing model | `SELECT BillingAccountId, OfferName, OfferType, TrialDeployment, EstimatedRevenueUSD, EarningAmountUSD FROM ISVRevenue WHERE TrialDeployment='False' and BillingModel='SubscriptionBased'` |
## Quality of service report queries
This sample query applies to the Quality of service report.
| **Query Description** | **Sample Query** |
| --- | --- |
| List deployment status of offers for last 6 months | `SELECT OfferId, Sku, DeploymentStatus, DeploymentCorrelationId, SubscriptionId, CustomerTenantId, CustomerName, TemplateType, StartTime, EndTime, DeploymentDurationInMilliSeconds, DeploymentRegion FROM ISVQualityOfService TIMESPAN LAST_6_MONTHS` |
## Customer retention report queries
This sample query applies to the Customer retention report.
| List customer retention details for last 6 months | `SELECT OfferCategory, OfferName, ProductId, DeploymentMethod, ServicePlanName, Sku, SkuBillingType, CustomerId, CustomerName, CustomerCompanyName, CustomerCountryName, CustomerCountryCode, CustomerCurrencyCode, FirstUsageDate, AzureLicenseType, OfferType, Offset FROM ISVOfferRetention TIMESPAN LAST_6_MONTHS` |
| List usage activity and revenue details of all customers in last 6 months | `SELECT OfferCategory, OfferName, Sku, ProductId, OfferType, FirstUsageDate, Offset, CustomerId, CustomerName, CustomerCompanyName, CustomerCountryName, CustomerCountryCode, CustomerCurrencyCode FROM ISVOfferRetention TIMESPAN LAST_6_MONTHS` |
## Next steps
marketplace Azure Ad Free Or Trial Landing Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-ad-free-or-trial-landing-page.md
As part of the [OpenID Connect](../active-directory/develop/v2-protocols-oidc.md) sign-in flow, the ID token includes the following claims.
| oid | Identifier in the Microsoft identity system that uniquely identifies the user across applications. Microsoft Graph will return this value as the ID property for a given user account. |
| tid | Identifier that represents the Azure AD tenant the user is from. In the case of an MSA identity, this will always be `9188040d-6c67-4c5b-b112-36a304b66dad`. For more information, see the note in the next section: Use Microsoft Graph API. |
| sub | Identifier that uniquely identifies the user in this specific application. |
## Use the Microsoft Graph API
The ID token contains basic information to identify the user, but your activation process might require additional information about the user.
| mobilePhone | Primary cellular telephone number for the user. |
| preferredLanguage | ISO 639-1 code for the user's preferred language. |
| surname | Last name of the user. |
Additional properties, such as the name of the user's company or the user's location (country), can be selected for inclusion in the request. For more details, see [Properties for the user resource type](/graph/api/resources/user#properties).
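As a minimal sketch of that Graph call in Python, assuming a delegated access token with `User.Read` consent obtained during sign-in; the `$select` list mirrors the properties in the table above.

```python
import requests

GRAPH_TOKEN = "<delegated-access-token>"  # assumes User.Read consent from sign-in

response = requests.get(
    "https://graph.microsoft.com/v1.0/me",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    # Trim the response to the activation-page fields listed above.
    params={"$select": "displayName,mail,mobilePhone,preferredLanguage,surname"},
)
response.raise_for_status()
print(response.json())
```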
Most apps that are registered with Azure AD grant delegated permissions to read the user's profile information through the Microsoft Graph API.
> Accounts from the MSA tenant (with tenant ID `9188040d-6c67-4c5b-b112-36a304b66dad`) will not return more information than has already been collected with the ID token. So you can skip this call to the Graph API for these accounts.

## Next steps

- [How to create a SaaS offer in the commercial marketplace](create-new-saas-offer.md)
marketplace Azure Ad Saas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-ad-saas.md
This table provides details for the purchase management process steps.
| 2. After purchasing, the buyer selects **Configure account** in Azure Marketplace or **Configure now** in AppSource, which directs the buyer to the publisher's landing page for this offer. The buyer must be able to sign in to the publisher's SaaS application with Azure AD SSO and must only be asked for minimal consent that does not require Azure AD administrator approval. | Design a [landing page](azure-ad-transactable-saas-landing-page.md) for the offer so that it receives a user with their Azure AD or Microsoft account (MSA) identity and facilitates any additional provisioning or setup that's required. | Required |
| 3. The publisher requests purchase details from the SaaS fulfillment API. | Using an [access token](./partner-center-portal/pc-saas-registration.md) generated from the landing page's Application ID, [call the resolve endpoint](./partner-center-portal/pc-saas-fulfillment-subscription-api.md#resolve-a-purchased-subscription) to retrieve specifics about the purchase. | Required |
| 4. Through Azure AD and the Microsoft Graph API, the publisher gathers the company and user details required to provision the buyer in the publisher's SaaS application. | Decompose the Azure AD user token to find name and email, or [call the Microsoft Graph API](/graph/use-the-api) and use delegated permissions to [retrieve information](/graph/api/user-get) about the user who is logged in. | Required |
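For process step 3, here is a hedged Python sketch of the resolve call, assuming the SaaS fulfillment v2 host and `api-version` described in the linked resolve documentation; both token values are placeholders.

```python
import requests

PUBLISHER_TOKEN = "<access-token-for-the-fulfillment-api>"  # from the landing page's Application ID
MARKETPLACE_TOKEN = "<token-passed-to-the-landing-page>"    # URL-decoded purchase token

# Assumed endpoint and api-version; see the linked resolve documentation.
response = requests.post(
    "https://marketplaceapi.microsoft.com/api/saas/subscriptions/resolve",
    params={"api-version": "2018-08-31"},
    headers={
        "Authorization": f"Bearer {PUBLISHER_TOKEN}",
        "x-ms-marketplace-token": MARKETPLACE_TOKEN,
    },
)
response.raise_for_status()
# The resolved purchase: subscription, offer, and plan identifiers.
print(response.json())
```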
## Process steps for subscription management
This table describes the subscription management process steps.
| 5. The publisher manages the subscription to the SaaS application through the SaaS fulfillment API. | Handle subscription changes and other management tasks through the [SaaS fulfillment APIs](./partner-center-portal/pc-saas-fulfillment-apis.md).<br><br>This step requires an access token as described in process step 3. | Required |
| 6. When using metered pricing, the publisher emits usage events to the metering service API. | If your SaaS app features usage-based billing, make usage notifications through the [Marketplace metering service APIs](marketplace-metering-service-apis.md).<br><br>This step requires an access token as described in Step 3. | Required for metering |
## Process steps for user management
Process steps 7 through 9 are optional user management process steps. They provide ongoing user and access management through Azure AD.
| 7. Azure AD administrators at the buyer's company can optionally manage access for users and groups through Azure AD. | No publisher action is required to enable this if Azure AD SSO is set up for users (Step 9). | Not applicable |
| 8. The Azure AD Provisioning Service communicates changes between Azure AD and the publisher's SaaS application. | [Implement a SCIM endpoint](../active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md) to receive updates from Azure AD as users are added and removed. | Recommended |
| 9. After the app is permissioned and provisioned, users from the buyer's company can use Azure AD SSO to log in to the publisher's SaaS application. | [Use Azure AD SSO](../active-directory/manage-apps/what-is-single-sign-on.md) to enable users to sign in once with one account to the publisher's SaaS application. | Recommended |
## Next steps

- [Build the landing page for your transactable SaaS offer in the commercial marketplace](azure-ad-transactable-saas-landing-page.md)
- [Build the landing page for your free or trial SaaS offer in the commercial marketplace](azure-ad-free-or-trial-landing-page.md)
- [How to create a SaaS offer in the commercial marketplace](create-new-saas-offer.md)
marketplace Azure Ad Transactable Saas Landing Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-ad-transactable-saas-landing-page.md
As part of the [OpenID Connect](../active-directory/develop/v2-protocols-oidc.md) sign-in flow, the ID token includes the following claims.
| oid | Identifier in the Microsoft identity system that uniquely identifies the user across applications. Microsoft Graph will return this value as the ID property for a given user account. |
| tid | Identifier that represents the Azure AD tenant the buyer is from. In the case of an MSA identity, this will always be `9188040d-6c67-4c5b-b112-36a304b66dad`. For more information, see the note in the next section: Use the Microsoft Graph API. |
| sub | Identifier that uniquely identifies the user in this specific application. |
## Use the Microsoft Graph API
The ID token contains basic information to identify the buyer, but your activation process might require additional information about the buyer.
| mobilePhone | Primary cellular telephone number for the user. |
| preferredLanguage | ISO 639-1 code for the user's preferred language. |
| surname | Last name of the user. |
Additional properties, such as the name of the user's company or the user's location (country), can be selected for inclusion in the request. See [properties for the user resource type](/graph/api/resources/user#properties) for more details.
Most apps that are registered with Azure AD grant delegated permissions to read the user's profile information through the Microsoft Graph API.
## Next steps

- [How to create a SaaS offer in the commercial marketplace](create-new-saas-offer.md)
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-get-sas-uri.md
This script uses the following commands to generate the SAS URI for a snapshot and copy the underlying VHD to a storage account:
| az disk grant-access | Generates read-only SAS that is used to copy the underlying VHD file to a storage account or download it to on-premises |
| az storage blob copy start | Copies a blob asynchronously from one storage account to another. Use az storage blob show to check the status of the new blob. |
## Generate the SAS address
The following are common issues encountered when working with shared access signatures.
| SAS URI has white spaces in VHD name | `Failure: Copying Images. Not able to download blob using provided SAS Uri.` | Update the SAS URI to remove white spaces. |
| SAS URI Authorization error | `Failure: Copying Images. Not able to download blob due to authorization error.` | Review and correct the SAS URI format. Regenerate if necessary. |
| SAS URI "st" and "se" parameters do not have full date-time specification | `Failure: Copying Images. Not able to download blob due to incorrect SAS Uri.` | SAS URI **Start Date** and **End Date** parameters (`st` and `se` substrings) must have full date-time format, such as `11-02-2017T00:00:00Z`. Shortened versions are invalid (some commands in Azure CLI may generate shortened values by default). |
For details, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../storage/common/storage-sas-overview.md).
Check the SAS URI before publishing it on Partner Center to avoid any related issues during publishing.
- If you run into issues, see [VM SAS failure messages](azure-vm-sas-failure-messages.md)
- [Sign in to Partner Center](https://go.microsoft.com/fwlink/?linkid=2165935)
- [Create a virtual machine offer on Azure Marketplace](azure-vm-offer-setup.md)
marketplace Azure Vm Plan Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-technical-configuration.md
Previously updated : 03/16/2022 Last updated : 05/10/2022
# Technical configuration for a virtual machine offer
This option lets you use the same technical configuration settings across plans
Some common reasons for reusing the technical configuration settings from another plan include:
- The same images are available for both *Pay as you go* and *BYOL*.
-- To reuse the same technical configuration from a public plan for a private plan with a different price.
- Your solution behaves differently based on the plan the user chooses to deploy. For example, the software is the same, but features vary by plan.
+> [!NOTE]
+> If you would like to use a public plan to create a private plan with a different price, consider creating a private offer instead of reusing the technical configuration. Learn more about [the difference between private plans and private offers](/azure/marketplace/isv-customer-faq). Learn more about [how to create a private offer](/azure/marketplace/isv-customer).
+ Leverage [Azure Instance Metadata Service](../virtual-machines/windows/instance-metadata-service.md) (IMDS) to identify which plan your solution is deployed within, in order to validate the license or enable the appropriate features.
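For illustration, a minimal Python sketch (run from inside the deployed VM) of reading the plan identifiers from IMDS; the `api-version` value and the exact response fields are assumptions based on the linked IMDS documentation.

```python
import requests

# IMDS is only reachable from inside the VM at this fixed, non-routable address.
response = requests.get(
    "http://169.254.169.254/metadata/instance/compute",
    headers={"Metadata": "true"},          # required on every IMDS request
    params={"api-version": "2021-02-01"},  # assumed available api-version
    timeout=2,
)
response.raise_for_status()

compute = response.json()
# For marketplace images, the plan block identifies the purchased plan.
plan = compute.get("plan", {})
print(plan.get("publisher"), plan.get("product"), plan.get("name"))
```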
-If you later decide to publish different changes between your plans, you can detach them. Detach the plan reusing the technical configuration by deselecting this option with your plan. Once detached, your plan will carry the same technical configuration settings at the place of your last setting and your plans may diverge in configuration. A plan that has been published independently in the past cannot reuse a technical configuration later.
+If you later decide to publish different changes between your plans, you can detach them. Detach the plan by deselecting this option for the plan that is reusing the technical configuration. Once detached, your plan keeps the technical configuration settings from your last saved state, and your plans may diverge in configuration. A plan that has been published independently in the past cannot reuse a technical configuration later.
## Operating system
Select the link to choose up to six recommended virtual machine sizes to display on Azure Marketplace.
## Open ports
-Add open public or private ports on a deployed virtual machine.
+Add public ports that will be automatically opened on a deployed virtual machine. You may specify the ports individually or as a range, along with the supported protocol: TCP, UDP, or both. Be sure to use a hyphen if specifying a port range (for example, 80-150).
## Properties
-Here is a list of properties that can be selected for your VM.
+Here is a list of properties that can be selected for your VM. Enable the properties that are applicable to the images in your plan.
-- **Supports backup**: Enable this property if your images support Azure VM backup. Learn more about [Azure VM backup](../backup/backup-azure-vms-introduction.md).
+- **Supports VM extensions**: Extensions are small applications that provide post-deployment configuration and automation on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script inside of it, a VM extension can be used. Linux VM extension validations require the following to be part of the image:
-- **Supports accelerated networking**: Enable this property if the VM images for this plan support single root I/O virtualization (SR-IOV) to a VM, enabling low latency and high throughput on the network interface. Learn more about [accelerated networking](https://go.microsoft.com/fwlink/?linkid=2124513).
+ - Azure Linux Agent greater than 2.2.41
+
+ - Python version 2.6 or above
-- **Supports cloud-init configuration**: Enable this property if the images in this plan support cloud-init post deployment scripts. Learn more about [cloud-init configuration](../virtual-machines/linux/using-cloud-init.md).
+ For more information, see [VM Extension](/azure/marketplace/azure-vm-certification-faq).
-- **Supports hotpatch**: Windows Server Azure Editions supports Hot Patch. Learn more about [Hot Patch](../automanage/automanage-hotpatch.md).
+- **Supports backup**: Enable this property if your images support Azure VM backup. Learn more about [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
-- **Supports extensions**: Enable this property if the images in this plan support extensions. Extensions are small applications that provide post-deployment configuration and automation on Azure VMs. Learn more about [Azure virtual machine extensions](./azure-vm-certification-faq.yml#vm-extensions).
+- **Supports accelerated networking**: The VM images in this plan support single root I/O virtualization (SR-IOV) to a VM, enabling low latency and high throughput on the network interface. Learn more about [accelerated networking for Linux](/azure/virtual-network/create-vm-accelerated-networking-cli). Learn more about [accelerated networking for Windows](/azure/virtual-network/create-vm-accelerated-networking-powershell).
-- **Is a network virtual appliance**: Enable this property if this product is a Network Virtual Appliance. A network virtual appliance is a product that performs one or more network functions, such as a Load Balancer, VPN Gateway, Firewall or Application Gateway. Learn more about [network virtual appliances](https://go.microsoft.com/fwlink/?linkid=2155373).
+- **Is a network virtual appliance**: A network virtual appliance is a product that performs one or more network functions, such as a Load Balancer, VPN Gateway, Firewall or Application Gateway. Learn more about [network virtual appliances](https://go.microsoft.com/fwlink/?linkid=2155373).
-- **Remote desktop or SSH disabled**: Enable this property if any of the following conditions are true:
- - Virtual machines deployed with these images don't allow customers to access it using Remote Desktop or SSH. Learn more about [locked VM images](./azure-vm-certification-faq.yml#locked-down-or-ssh-disabled-offer).
- - Image does not support _sampleuser_ while deploying.
- - Image has limited access.
- - Image does not comply with the [Certification Test Tool](azure-vm-image-test.md#use-certification-test-tool-for-azure-certified).
- - Image requires setup during initial login which causes automation to not connect to the virtual machine.
- - Image does not support port 22.
+- **Supports NVMe**: Enable this property if the images in this plan support the NVMe disk interface. The NVMe interface offers higher and more consistent IOPS and bandwidth relative to the legacy SCSI interface.
-- **Requires custom ARM template for deployment**: Enable this property if the images in this plan can only be deployed using a custom ARM template. To learn more see the [Custom templates section of Troubleshoot virtual machine certification](./azure-vm-certification-faq.yml#custom-templates).
+- **Supports cloud-init configuration**: Enable this property if the images in this plan support cloud-init post deployment scripts. Learn more about [cloud-init configuration](/azure/virtual-machines/linux/using-cloud-init).
-## Generations
+- **Supports hibernation**: The images in this plan support hibernation/resume.
-Generating a virtual machine defines the virtual hardware it uses. Based on your customerΓÇÖs needs, you can publish a Generation 1 VM, Generation 2 VM, or both.
+- **Remote desktop/SSH not supported**: Enable this property if any of the following conditions are true:
-1. When creating a new offer, select a **Generation type** and enter the requested details:
+ - Virtual machines deployed with these images don't allow customers to access them using Remote Desktop or SSH. Learn more about [locked VM images](/azure/marketplace/azure-vm-certification-faq#locked-down-or-ssh-disabled-offer). Images that are published with either SSH disabled (for Linux) or RDP disabled (for Windows) are treated as locked-down VMs. There are special business scenarios to restrict access to users. During validation checks, locked-down VMs might not allow execution of certain certification commands.
- :::image type="content" source="./media/create-vm/azure-vm-generations-image-details-1.png" alt-text="A view of the Generation detail section in Partner Center.":::
+ - Image does not support sampleuser while deploying.
+ - Image has limited access.
+ - Image does not comply with the Certification Test Tool.
+ - Image requires setup during initial login which causes automation to not connect to the virtual machine.
+ - Image does not support port 22.
-2. To add another generation to a plan, select **Add generation**...
+- **Requires custom ARM template for deployment**: Enable this property if the images in this plan can only be deployed using a custom ARM template. In general, all the images that are published under a VM offer will follow standard ARM template for deployment. However, there are scenarios that might require customization while deploying VMs (for example, multiple NIC(s) to be configured).
+Below are examples (non-exhaustive) that might require custom templates for deploying the VM:
- :::image type="content" source="./media/create-vm/azure-vm-generations-add.png" alt-text="A view of the 'Add Generation' link.":::
+ - VM requires additional network subnets.
- ...and enter the requested details:
+ - Additional metadata to be inserted in ARM template.
+
+ - Commands that are prerequisite to the execution of ARM template.
- :::image type="content" source="./media/create-vm/azure-vm-generations-image-details-3.png" alt-text="A view of the generation details window.":::
+## Image types
-<!-- The **Generation ID** you choose will be visible to customers in places such as product URLs and ARM templates (if applicable). Use only lowercase, alphanumeric characters, dashes, or underscores; it cannot be modified once published.
-3. To update an existing VM that has a Generation 1 already published, edit details on the **Technical configuration** page.
+The generation of a virtual machine defines the virtual hardware it uses. Based on your customer's needs, you can publish a Generation 1 VM, Generation 2 VM, or both. To learn more about the differences between Generation 1 and Generation 2 capabilities, see [Support for generation 2 VMs on Azure](/azure/virtual-machines/generation-2).
-To learn more about the differences between Generation 1 and Generation 2 capabilities, see [Support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
+When creating a new plan, select an Image type from the drop-down menu. You can choose either X64 Gen 1 or X64 Gen 2. To add another image type to a plan, select **+Add image type**. You will need to provide a SKU ID for each new image type that is added.
> [!NOTE]
-> A published generation requires at least one image version to remain available for customers. To remove the entire plan (along with all its generations and images), select **Deprecate plan** on the **Plan Overview** page (see first section in this article).
+> A published generation requires at least one image version to remain available for customers. To remove the entire plan (along with all its generations and images), select **Deprecate plan** on the **Plan Overview** page. Learn more about [deprecating plans](/azure/marketplace/deprecate-vm).
+>
## VM images
-Provide a disk version and the shared access signature (SAS) URI for the virtual machine images. Add up to 16 data disks for each VM image. Provide only one new image version per plan in a specified submission. After an image has been published, you can't edit it, but you can delete it. Deleting a version prevents both new and existing users from deploying a new instance of the deleted version.
-
-These two required fields are shown in the prior image above:
-- **Disk version**: The version of the image you are providing.
-- **OS VHD link**: The image stored in Azure Compute Gallery (formerly known as Shared Image Gallery). Learn how to capture your image in an [Azure Compute Gallery](azure-vm-create-using-approved-base.md#capture-image).
+To add a new image version, click **+Add VM image**. This opens a panel in which you specify an image version number. From there, you can provide your image(s) via the Azure Compute Gallery, a shared access signature (SAS) URI, or both.
-Data disks (select **Add data disk (maximum 16)**) are also VHD shared access signature URIs that are stored in their Azure storage accounts. Add only one image per submission in a plan.
+Keep in mind the following when publishing VM images:
-Regardless of which operating system you use, add only the minimum number of data disks that the solution requires. During deployment, customers can't remove disks that are part of an image, but they can always add disks during or after deployment.
+1. Provide only one new VM image per image type in a given submission.
+2. After an image has been published, you can't edit it, but you can deprecate it. Deprecating a version prevents both new and existing users from deploying a new instance of the deprecated version. Learn more about [deprecating VM images](/azure/marketplace/deprecate-vm).
+3. You can add up to 16 data disks for each VM image provided. Regardless of which operating system you use, add only the minimum number of data disks that the solution requires. During deployment, customers can't remove disks that are part of an image, but they can always add disks during or after deployment.
> [!NOTE]
-> If you provide your images using SAS and have data disks, you also need to provide them as SAS URI. If you are using a shared image, they are captured as part of your image in Azure Compute Gallery. Once your offer is published to Azure Marketplace, you can delete the image from your Azure storage or Azure Compute Gallery.
+> If you provide your images using the SAS URI method and you are adding data disks, you also need to provide them in the form of a SAS URI. Data disks are also VHD shared access signature URIs that are stored in your Azure storage accounts. If you are using a gallery image, the data disks are captured as part of your image in Azure Compute Gallery.
Select **Save draft**, then select **← Plan overview** at the top left to see the plan you just created.
-Once your VM image has published, you can delete the image from your Azure storage.
+Once your VM image is published, you can delete the image from your Azure storage.
## Reorder plans (optional)
For VM offers with more than one plan, you can change the order in which your plans appear.
## Next steps
+- [Learn more about how to reorder plans](azure-vm-plan-reorder-plans.md)
- [Resell through CSPs](azure-vm-resell-csp.md)
marketplace Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/categories.md
This table shows the primary categories and subcategories that map to Azure Marketplace.
| Security | Identity & Access Management<br>Information Protection<br>Threat Protection |
| Storage | Backup & Recovery<br>Block File & Object Sharing<br>Data Management<br>Enterprise Hybrid Storage |
| Web | Blogs & CMS<br>Ecommerce<br>Starter Web Apps<br>Web Apps<br>Web Apps Frameworks |
## Microsoft AppSource categories and subcategories
This table shows the primary categories and subcategories that map to Microsoft AppSource.
| Productivity | Blogs<br>Content Creation & Management<br>Document & File management<br>Email Management<br>Gamification<br>Language & Translation<br>Search & Reference<br>Workflow Automation |
| Project Management | Project Accounting & Revenue Recognition<br>Project Invoicing<br>Project Time & Expense Reporting<br>Project Resource Planning & Utilization Metrics<br>Project Planning & Tracking<br>Project Sales Proposals & Bids |
| Sales | Business Data Management<br>Configure, Price, Quote (CPQ)<br>Contract Management<br>CRM<br>Sales Enablement<br>Telesales |
marketplace Cloud Partner Portal Api Cancel Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/cloud-partner-portal-api-cancel-operations.md
contains the operation's location that should be used to query status.
## URI parameters

| **Name** | **Description** | **Data type** |
| --- | --- | --- |
| publisherId | Publisher identifier, for example, `contoso` | String |
| offerId | Offer identifier | String |
| api-version | Current version of API | Date |
## Header

| **Name** | **Value** |
| --- | --- |
| Content-Type | application/json |
| Authorization | Bearer YOUR TOKEN |
## Body example
| **Name** | **Description** |
| --- | --- |
| notification-emails | Comma separated list of email Ids to be notified of the progress of the publishing operation. |
### Response
| **Name** | **Value** |
| --- | --- |
| Location | The relative path to retrieve this operation's status. |
### Response status codes
| 400 | Bad/Malformed request. The error response body could provide more information. |
| 403 | Access Forbidden. The client does not have access to the namespace specified in the request. |
| 404 | Not found. The specified entity does not exist. |
marketplace Cloud Partner Portal Api Creating Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/cloud-partner-portal-api-creating-offer.md
This call updates a specific offer within the publisher namespace or creates a new one.
| publisherId | Publisher identifier, for example `contoso` | String |
| offerId | Offer identifier | String |
| api-version | Latest version of the API | Date |
## Header
| Content-Type | `application/json` |
| Authorization | `Bearer YOUR_TOKEN` |
## Body example
The following example creates an offer with offerID of `contosovirtualmachine`.
| 403 | `Forbidden`. The client doesn't have access to the requested namespace. |
| 404 | `Not found`. The entity referred to by the client does not exist. |
| 412 | The server does not meet one of the preconditions that the requester specified in the request. The client should check the ETAG sent with the request. |
## Uploading artifacts
These categories and their respective keys are applicable for Azure apps, Virtual machines, and other Azure Marketplace offer types.
| Web Apps Frameworks | web-apps-frameworks | web-apps-frameworks | web-apps-frameworks |
| Web Apps | web-apps | web-apps | web-apps |
| Other | other | other | other |
### Microsoft AppSource categories
These categories and their respective keys are applicable for SaaS, Power BI apps, and other AppSource offer types.
| Maps | maps | maps | maps |
| News & Weather | news-and-weather | news-and-weather | news-and-weather |
| Other | other-geolocation | other-geolocation | other-geolocation |
### Microsoft AppSource industries
These industries and their respective keys are applicable for SaaS, Power BI apps, and other AppSource offer types.
| Nonprofits | Nonprofits | nonprofits |
| ***Real Estate*** | ***RealEstate*** | ***real-estate*** |
| Other - Unsegmented | RealEstate\_OtherUnsegmented | other-unsegmented |
marketplace Cloud Partner Portal Api Go Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/cloud-partner-portal-api-go-live.md
from the [Publish](./cloud-partner-portal-api-publish-offer.md) API operation.
| publisherId | Publisher identifier for the offer to retrieve, for example `contoso` | String |
| offerId | Offer identifier of the offer to retrieve | String |
| api-version | Latest version of the API | Date |
## Header
| Content-Type | `application/json` |
| Authorization | `Bearer YOUR_TOKEN` |
## Body example
| **Name** | **Value** |
| --- | --- |
| Location | The relative path to retrieve this operation's status |
### Response status codes
| 202 | `Accepted` - The request was successfully accepted. The response contains a location to track the operation status. |
| 400 | `Bad/Malformed request` - Additional error information is found within the response body. |
| 404 | `Not found` - The specified entity does not exist. |
marketplace Cloud Partner Portal Api Publish Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/cloud-partner-portal-api-publish-offer.md
Starts the publishing process for the specified offer. This call is a long running operation.
| publisherId | Publisher identifier, for example `contoso` | String |
| offerId | Offer identifier | String |
| api-version | Latest version of the API | Date |
## Header
| Content-Type | `application/json` |
| Authorization | `Bearer YOUR_TOKEN` |
## Body example
| **Name** | **Description** |
| --- | --- |
| notification-emails | Comma-separated list of email addresses to be notified of the progress of the publishing operation. |
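As a hedged illustration of wiring this together, the sketch below assumes the legacy Cloud Partner Portal host `https://cloudpartner.azure.com` and an `api-version` of `2017-10-31` for this API family; adjust both if your environment differs, and treat the body shape as an assumption beyond the `notification-emails` field documented above.

```python
import requests

PUBLISHER_ID = "contoso"
OFFER_ID = "contosovirtualmachine"
TOKEN = "<your-azure-ad-access-token>"  # placeholder

# Assumed host and api-version for the legacy Cloud Partner Portal API.
url = (
    f"https://cloudpartner.azure.com/api/publishers/{PUBLISHER_ID}"
    f"/offers/{OFFER_ID}/publish"
)

response = requests.post(
    url,
    params={"api-version": "2017-10-31"},
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json={"metadata": {"notification-emails": "ops@contoso.com"}},
)
response.raise_for_status()
# On 202 Accepted, the Location header is the relative path for polling status.
print(response.status_code, response.headers.get("Location"))
```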
### Response
| **Name** | **Value** |
| --- | --- |
| Location | The relative path to retrieve this operation's status |
### Response status codes
| 400 | `Bad/Malformed request` - The error response body may provide more information. |
| 422 | `Un-processable entity` - Indicates that the entity to be published failed validation. |
| 404 | `Not found` - The entity specified by the client doesn't exist. |
marketplace Cloud Partner Portal Api Retrieve Offer Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/cloud-partner-portal-api-retrieve-offer-status.md
Retrieves the current status of the offer.
| publisherId | Publisher identifier, for example `Contoso` | String |
| offerId | GUID that uniquely identifies the offer | String |
| api-version | Latest version of API | Date |
## Header
| Content-Type | `application/json` |
| Authorization | `Bearer YOUR_TOKEN` |
## Body example
| previewLinks | *Not currently implemented* |
| liveLinks | *Not currently implemented* |
| notificationEmails | Deprecated for offers migrated to Partner Center. Notification emails for migrated offers will be sent to the email specified under the Seller contact info in Account settings.<br><br>For non-migrated offers, comma-separated list of email addresses to be notified of the progress of the operation |
### Response status codes
| 200 | `OK` - The request was successfully processed, and the current status of the offer was returned. |
| 400 | `Bad/Malformed request` - The error response body may contain more information. |
| 404 | `Not found` - The specified entity doesn't exist. |
### Offer status
| Succeeded | Offer submission has completed processing. |
| Canceled | Offer submission was canceled. |
| Failed | Offer submission failed. |
### Step Status
| Rejected | Step is rejected. |
| Complete | Step is complete. |
| Canceled | Step was canceled. |
marketplace Cloud Partner Portal Api Retrieve Offers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/cloud-partner-portal-api-retrieve-offers.md
Retrieves a summarized list of offers under a publisher namespace.
| publisherId | Publisher identifier, for example `contoso` | String |
| api-version | Latest version of API | Date |
## Header
| Content-Type | `application/json` |
| Authorization | `Bearer YOUR_TOKEN` |
## Body example
| version | Current version of the offer. The version property cannot be modified by the client. It's incremented after each publishing. |
| definition | Contains a summarized view of the actual definition of the workload. To get a detailed definition, use the [Retrieve specific offer](./cloud-partner-portal-api-retrieve-specific-offer.md) API. |
| changedTime | UTC time when the offer was last modified |
### Response status codes
| 400 | `Bad/Malformed request` - The error response body may contain more information. |
| 403 | `Forbidden` - The client doesn't have access to the specified namespace. |
| 404 | `Not found` - The specified entity doesn't exist. |
### Offer Status
| Succeeded | Offer submission has completed processing. |
| Canceled | Offer submission was canceled. |
| Failed | Offer submission failed. |
marketplace Cloud Partner Portal Api Retrieve Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/cloud-partner-portal-api-retrieve-operations.md
query parameters to filter on running operations.
| offerId | Offer identifier | String |
| operationId | GUID that uniquely identifies the operation on the offer. The operationId may be retrieved by using this API, and is also returned in the HTTP header of the response for any long running operation, such as the [Publish offer](./cloud-partner-portal-api-publish-offer.md) API. | Guid |
| api-version | Latest version of API | Date |
## Header
| Content-Type | `application/json` |
| Authorization | `Bearer YOUR_TOKEN` |
## Body example
| lastActionDateTime | UTC datetime when the last update was done on the operation |
| status | Status of the operation, either `not started` \| `running` \| `failed` \| `completed`. Only one operation can have status `running` at a time. |
| error | Error message for failed operations |
### Response step properties
| status | The status of the step, either `notStarted` \| `running` \| `failed` \| `completed` |
| messages | Any notifications or warnings encountered during the step. Array of strings |
| progressPercentage | An integer from 0 to 100 indicating the progression of the step |
### Response status codes
| 400 | `Bad/Malformed request` - The error response body may contain more information. |
| 403 | `Forbidden` - The client doesn't have access to the specified namespace. |
| 404 | `Not found` - The specified entity does not exist. |
marketplace Cloud Partner Portal Api Retrieve Specific Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/cloud-partner-portal-api-retrieve-specific-offer.md
You can also retrieve a particular version of the offer, or retrieve the offer in a particular slot.
| version | Version of the offer being retrieved. By default, the latest offer version is retrieved. | Integer |
| slotId | The slot from which the offer is to be retrieved, can be one of: <br/> - `Draft` (default) retrieves the offer version currently in draft. <br/> - `Preview` retrieves the offer version currently in preview. <br/> - `Production` retrieves the offer version currently in production. | enum |
| api-version | Latest version of API | Date |
## Header
| Content-Type | `application/json` |
| Authorization | `Bearer YOUR_TOKEN` |
## Body example
| version | Current version of the offer. The version property cannot be modified by the client. It's incremented after each publishing. |
| definition | Actual definition of the workload |
| changedTime | UTC datetime when the offer was last modified |
### Response status codes
| 400 | `Bad/Malformed request` - The error response body may contain more information. |
| 403 | `Forbidden` - The client doesn't have access to the specified namespace. |
| 404 | `Not found` - The specified entity doesn't exist. Client should check the publisherId, offerId, and version (if specified). |
### Offer status
| Succeeded | Offer submission has completed processing. |
| Canceled | Offer submission was canceled. |
| Failed | Offer submission failed. |
marketplace Cloud Partner Portal Api Setting Price https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/cloud-partner-portal-api-setting-price.md
customized core pricing, and their corresponding currency codes.
| US | United States | USD |
| UY | Uruguay | UYU |
| VE | Venezuela | USD |
marketplace Determine Your Listing Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/determine-your-listing-type.md
This table shows which listing options are available for each offer type:
| Managed Service | | | | &#10004;<sup>1</sup> |
| Power BI App | | | | &#10004;<sup>1</sup> |
| Software as a service | &#10004; | &#10004; | &#10004; | &#10004;<sup>1</sup> |
<sup>1</sup> The **Get It Now** listing option includes Get It Now (Free), bring your own license (BYOL), Subscription, and Usage-based pricing. For more information, see [Get It Now](#get-it-now).
This table shows which offer types support the pricing options that are included with each listing option:
| Power BI App | &#10004; | | | |
| Azure Virtual Machine | | &#10004; | | &#10004;<sup>2</sup> |
| Software as a service | &#10004; | | &#10004; | &#10004; |
<sup>1</sup> The **Pricing model** column of the **Plan overview** tab shows **Free** or **BYOL**, but it's not selectable.
The following table shows the options that are available for different offer types:
| Dynamics 365 apps on Dataverse and Power Apps | AppSource | AppSource | | | AppSource <sup>3</sup> |
| Dynamics 365 Operations Apps | AppSource | AppSource | | | |
| Power BI App | | | AppSource | | |
<sup>1</sup> SaaS transactable offers in AppSource only accept credit cards at this time.
marketplace Dynamics 365 Review Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-review-publish.md
You can review your offer status on the **Overview** tab of the commercial marketplace dashboard in Partner Center.
| Live | Offer is live in the marketplace and can be seen and acquired by customers. |
| Pending stop sell | Publisher selected "stop sell" on an offer or plan, but the action has not yet been completed. |
| Not available in the marketplace | A previously published offer in the marketplace has been removed. |
## Validation and publishing steps
marketplace Gtm Offer Listing Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-offer-listing-best-practices.md
For an analysis of how your offers are performing, go to the [Marketplace Insigh
**Learn more** documents | Include supporting sales and marketing assets under **Learn more**; examples include:<ul><li>white papers</li><li>brochures</li><li>checklists</li><li>PowerPoint presentations</li></ul><br>Save all files in PDF format. Your goal here should be to educate customers, not sell to them.<br><br>Add a link to your app landing page to all your documents and add URL parameters to help you track visitors and trials. |
| Videos (AppSource, consulting services, and SaaS offers only) | The strongest videos communicate the value of your offer in narrative form:<ul><li>Make your customer, not your company, the hero of the story.</li><li>Your video should address the principal challenges and goals of your target customer.</li><li>Recommended length: 60-90 seconds.</li><li>Incorporate key search words that use the name of the videos.</li></ul><br>Consider adding additional videos, such as a how-to, getting started, or customer testimonials. |
| Screenshots (1280×720 px) | Add up to five screenshots. Incorporate key search words in the file names. |
-|
> [!IMPORTANT]
> Make sure your offer name and offer description adhere to **[Microsoft Trademark and Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general.aspx)** and other relevant, product-specific guidelines when referring to Microsoft trademarks and the names of Microsoft software, products, and services.
marketplace Isv App License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-app-license.md
This table illustrates the high-level process to manage ISV app licenses:
| Step 5: Manage licenses | The license plans will appear in Microsoft 365 Admin Center for the customer to [assign to users or groups](/microsoft-365/commerce/licenses/manage-third-party-app-licenses) in their organization. The customer can also install the application in their tenant via the Power Platform Admin Center. |
| Step 6: Perform license check | When a user within the customer's organization tries to run an application, Microsoft checks to ensure that user has a license before permitting them to run it. If they don't have a license, the user sees a message explaining that they need to contact an administrator for a license. |
| Step 7: View reports | ISVs can view information on provisioned and assigned licenses over a period of time and by geography. |
-|||
## Enabling app license management through Microsoft
marketplace License Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/license-dashboard.md
You can use the download icon in the upper-right corner of any widget to downloa
| Tenant ID | Unique ID of the tenant |
| License State | License state |
| Service ID | Unique identifier used in the package to map the plan with the license checks |
-|
## Next steps
marketplace Marketplace Apis Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-apis-guide.md
The activities below are not sequential. The activity you use is dependent on yo
| <center>**4. Sale**</center><br>[Reporting APIs](https://partneranalytics-api.azureedge.net/partneranalytics-api/Programmatic%20Access%20to%20Commercial%20Marketplace%20Analytics%20Data_v1.pdf) | Contract signing<br>Revenue Recognition<br>Invoicing<br>Billing<br>Azure portal / Admin Center<br>PC Marketplace Rewards<br>PC Payouts Reports<br>PC Marketplace Analytics<br>PC Co-Sell Closing |
| <center>**5. Maintenance**</center><br>[(EA Customer) Azure Consumption API](/rest/api/consumption/)<br>[(EA Customer) Azure Charges List API](/rest/api/consumption/charges/list) | Recurring billing<br>Overages<br>Product Support<br>PC Payouts Reports<br>PC Marketplace Analytics |
| <center>**6. Contract End**</center><br>AMA/VM's: auto-renew | Renew or<br>Terminate<br>PC Marketplace Analytics |
-|
## Next steps
marketplace Marketplace Commercial Transaction Capabilities And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
The transact publishing option is currently supported for the following offer ty
| Azure Application <br>(Managed application) | Monthly | Yes | Usage-based |
| Azure Virtual Machine | Monthly* | No | Usage-based, BYOL |
| Software as a service (SaaS) | Monthly and annual | Yes | Flat rate, per user, usage-based. |
-|||||
\* Azure Virtual Machine offers support usage-based billing plans. These plans are billed monthly for hourly use of the subscription based on per core, per core size, or per market and core size usage.
Usage-based pricing has the following cost structure:
|||
| Azure usage cost (D1/1-Core) | $0.14 per hour |
| *Customer is billed by Microsoft* | *$1.14 per hour* |
-||
In this scenario, Microsoft bills $1.14 per hour for use of your published VM image.
In this scenario, Microsoft bills $1.14 per hour for use of your published VM im
| Microsoft pays you 97% of your license cost | $0.97 per hour |
| Microsoft keeps 3% of your license cost | $0.03 per hour |
| Microsoft keeps 100% of the Azure usage cost | $0.14 per hour |
-||
**Bring Your Own License (BYOL)**
BYOL has the following cost structure:
|||
| Azure usage cost (D1/1-Core) | $0.14 per hour |
| *Customer is billed by Microsoft* | *$0.14 per hour* |
-||
In this scenario, Microsoft bills $0.14 per hour for use of your published VM image.
In this scenario, Microsoft bills $0.14 per hour for use of your published VM im
|||
| Microsoft keeps the Azure usage cost | $0.14 per hour |
| Microsoft keeps 0% of your license cost | $0.00 per hour |
-||
**SaaS app subscription**
SaaS subscriptions can be priced at a flat rate or per user on a monthly or annu
|--||
| Azure usage cost (D1/1-Core) | Billed directly to the publisher, not the customer |
| *Customer is billed by Microsoft* | *$100.00 per month (publisher must account for any incurred or pass-through infrastructure costs in the license fee)* |
-||
In this scenario, Microsoft bills $100.00 for your software license and pays out $97.00.
marketplace Marketplace Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-containers.md
These are the available licensing options for Azure Container offers:
| | |
| Free | List your offer to customers for free. |
| BYOL | The Bring Your Own Licensing option lets your customers bring existing software licenses to Azure.\* |
-|
\* As the publisher, you support all aspects of the software license transaction, including (but not limited to) order, fulfillment, metering, billing, invoicing, payment, and collection.
marketplace Marketplace Criteria Content Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-criteria-content-validation.md
This article explains the requirements and guidelines for listing new offers and
| 9 | Learn more | Links at the bottom (under the description, not Azure Marketplace links on the left) lead to more information about the solution and are publicly available and displaying correctly. | Links to specific items (for example, spec pages on the partner site) and not just the partner home page. |
| 10 | Solution support and help | Link to at least one of the following: <ul><li>Telephone numbers</li><li>Email support</li><li>Chat agents</li><li>Community forums</li></ul> | <ul><li>All support methods are listed.</li><li>Paid support is offered free during the *Trial* or *Test Drive* period.</li></ul> |
| 11 | Legal | Policies or terms are available via a public URL. | |
-|||
## Trial offer requirements

| No. | Listing element | Base requirement | Optimal requirement |
|: |: |: |: |
| | List status (Listing option) | The link must lead to a customer-led *Trial* experience. | Other listing options (for example, *Buy Now*) are also available. |
-|||
## SaaS application requirements
This article explains the requirements and guidelines for listing new offers and
| 9 | Lead management | Select the system where your leads will be stored. See [get customer leads](./partner-center-portal/commercial-marketplace-get-customer-leads.md) to connect your CRM system. | |
| 10 | Contacts: Solution support and help | <ul><li>Engineering contact name: The name of the engineering contact for your app. This contact will receive technical communications from Microsoft.</li><li>Engineering contact email: The email address of the engineering contact for your app.</li><li>Engineering contacts phone: The phone number of the engineering contact. [ISO phone number notations](https://en.wikipedia.org/wiki/E.123) are supported.</li><li>Support contact name: The name of the support contact for your app. This contact will receive support-related communications from Microsoft.</li><li>Support contact email: The email address of the support contact for your app.</li><li>Support contact phone: The phone number of the support contact. [ISO phone number notations](https://en.wikipedia.org/wiki/E.123) are supported.</li><li>Support URL: The URL of your support page.</li></ul> | <ul><li>All support methods are listed.</li><li>Paid support offered free during the *Trial* or *Test Drive* period.</li></ul> |
| 11 | Legal | <ul><li>Privacy policy URL: The URL for your app's privacy policy in the Privacy policy URL field in the CPP.</li><li>Terms of use: The terms of use of your app. Customers are required to accept these terms before they can try your app.</li></ul> | Policies or terms are available via a public URL site. |
-|||
## Container offer requirements
This article explains the requirements and guidelines for listing new offers and
| 2 | Plans | The partner selects new plans. | The title mirrors the title style already available in the description. Avoid using long titles. |
| 3 | Marketplace artifacts | Logos are displayed correctly. | <ul><li>Logos: Small (48 x 48 px, optional), Medium (90 x 90 px, optional), and Large (from 216 x 216 to 350 x 350 px, required).</li><li>Screenshot (max. 5): Requires a .PNG image with a resolution of 1280 x 720 pixels.</li></ul> |
| 4 | Lead management | <ul><li>Lead management: Select the system where your leads will be stored.</li><li>See [get customer leads](./partner-center-portal/commercial-marketplace-get-customer-leads.md) to connect your CRM system.</li></ul> | |
-|||
## Consulting offer requirements
This article explains the requirements and guidelines for listing new offers and
| 10 | Products | Must be Azure products. | |
| 11 | Country/region | Ensure that the country/region matches the selected currency. | |
| 12 | Learn more | <ul><li>Links at the bottom (under the description, not Azure Marketplace links on the left) lead to more information about the solution and are publicly available and being displayed correctly.</li><li>Links must have a "friendly name" and are not displayed as the file name of any downloads.</li></ul> | |
-||||
## Next steps
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-dynamics-365.md
These are the available licensing options for Dynamics 365 offer types:
| Dynamics 365 Operations Apps | Contact me |
| Dynamics 365 apps on Dataverse and Power Apps | Get it now<sup>1</sup><br>Get it now (Free)<br>Free trial (listing)<br>Contact me |
| Dynamics 365 Business Central | Get it now (Free)<br>Free trial (listing)<br>Contact me |
-|||
<sup>1</sup> Customers will see a **Get it now** button on the offer listing page in AppSource for offers configured for [ISV app license management](isv-app-license.md). Customers can select this button to contact you to purchase licenses for the app.
The following table describes the transaction process of each listing option.
| Free trial (listing) | Offer your customers a one-, three- or six-month free trial. Offer listing free trials are created, managed, and configured by your service and do not have subscriptions managed by Microsoft. |
| Get it now (free) | List your offer to customers for free. |
| Get it now | Enables you to manage your ISV app licenses in Partner Center.<br>Currently available to the following offer type only:<ul><li>Dynamics 365 apps on Dataverse and Power Apps</li></ul><br>For more information about this option, see [ISV app license management](isv-app-license.md). |
-|||
## Test drive
After you've considered the planning items described above, select one of the fo
| [Dynamics 365 Operations Apps](./dynamics-365-operations-offer-setup.md) | When you're building for Enterprise Edition, first review these additional [publishing processes and guidelines](/dynamics365/fin-ops-core/dev-itpro/lcs-solutions/lcs-solutions-app-source). Product types include Commerce, Finance, Human Resources, Project Operations, and Supply Chain Management. |
| [Dynamics 365 for Business Central](dynamics-365-business-central-offer-setup.md) | n/a |
| [Dynamics 365 apps on Dataverse and Power Apps](dynamics-365-customer-engage-offer-setup.md) | First review these additional [publishing processes and guidelines](/dynamics365/customer-engagement/developer/publish-app-appsource). Product types include Customer Service, Customer Voice, Project Operations, Field Service, Marketing, Mixed Reality, Power Apps, Power Automate, Power Virtual Agents, Project Service Automation, and Sales. |
-|
marketplace Marketplace Geo Availability Currencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-geo-availability-currencies.md
A CSP can purchase an offer in Partner Center in their end customer's currency s
| Vietnam | VN | USD | EUR, USD, VND |
| Yemen | YE | USD | EUR, USD, YER |
| Zambia | ZM | USD | EUR, USD, ZMW |
-| Zimbabwe | ZW | USD | EUR, USD|
-| | | |
+| Zimbabwe | ZW | USD | EUR, USD |
\* For customers in Brazil, the commercial marketplace through Cloud Solution Providers (CSP) uses USD.
marketplace Marketplace Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-iot-edge.md
These are the available licensing options for Azure Container offers:
| | |
| Free | List your offer to customers for free. |
| BYOL | The Bring Your Own Licensing option lets your customers bring existing software licenses to Azure.\* |
-|
\* As the publisher, you support all aspects of the software license transaction, including (but not limited to) order, fulfillment, metering, billing, invoicing, payment, and collection.
marketplace Marketplace Metering Service Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-metering-service-apis.md
Only one usage event can be emitted for each hour of a calendar day per resource
| Parameter | Recommendation |
| - | - |
| `ApiVersion` | Use 2018-08-31. |
-| | |
*Request headers:*
Only one usage event can be emitted for each hour of a calendar day per resource
| `x-ms-requestid` | Unique string value for tracking the request from the client, preferably a GUID. If this value is not provided, one will be generated and provided in the response headers. |
| `x-ms-correlationid` | Unique string value for operation on the client. This parameter correlates all events from client operation with events on the server side. If this value isn't provided, one will be generated and provided in the response headers. |
| `authorization` | A unique access token that identifies the ISV that is making this API call. The format is `"Bearer <access_token>"` when the token value is retrieved by the publisher as explained for <br> <ul> <li> SaaS in [Get the token with an HTTP POST](partner-center-portal/pc-saas-registration.md#get-the-token-with-an-http-post). </li> <li> Managed application in [Authentication strategies](marketplace-metering-service-authentication.md). </li> </ul> |
-| | |
*Request body example:*
The batch usage event API allows you to emit usage events for more than one purc
| `x-ms-requestid` | Unique string value for tracking the request from the client, preferably a GUID. If this value is not provided, one will be generated, and provided in the response headers. |
| `x-ms-correlationid` | Unique string value for operation on the client. This parameter correlates all events from client operation with events on the server side. If this value isn't provided, one will be generated, and provided in the response headers. |
| `authorization` | A unique access token that identifies the ISV that is making this API call. The format is `Bearer <access_token>` when the token value is retrieved by the publisher as explained for <br> <ul> <li> SaaS in [Get the token with an HTTP POST](partner-center-portal/pc-saas-registration.md#get-the-token-with-an-http-post). </li> <li> Managed application in [Authentication strategies](./marketplace-metering-service-authentication.md). </li> </ul> |
-| | |
>[!NOTE]
>In the request body, the resource identifier has different meanings for SaaS app and for Azure Managed app emitting custom meter. The resource identifier for SaaS App is `resourceID`. The resource identifier for Azure Application Managed Apps plans is `resourceUri`.
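As a sketch, a batch request body combining these headers with the note above might look like the following; the endpoint is the standard `batchUsageEvent` route, and all identifier, dimension, and plan values are placeholders to replace with your own.

```http
POST https://marketplaceapi.microsoft.com/api/batchUsageEvent?api-version=2018-08-31
Content-Type: application/json
authorization: Bearer <access_token>

{
  "request": [
    {
      "resourceId": "<SaaS subscription GUID>",
      "quantity": 5.0,
      "dimension": "dim1",
      "effectiveStartTime": "2022-05-09T17:00:00Z",
      "planId": "plan1"
    },
    {
      "resourceUri": "<fully qualified managed app resource URI>",
      "quantity": 2.0,
      "dimension": "dim2",
      "effectiveStartTime": "2022-05-09T17:00:00Z",
      "planId": "plan2"
    }
  ]
}
```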
GET: `https://marketplaceapi.microsoft.com/api/usageEvents`
| dimension (optional) | Default = all available |
| azureSubscriptionId (optional) | Default = all available |
| reconStatus (optional) | Default = all available |
-|||
*Possible values of reconStatus*:
GET: `https://marketplaceapi.microsoft.com/api/usageEvents`
| Mismatch | MarketplaceAPI and Partner Center Analytics quantities are both non-zero, but don't match |
| TestHeaders | Subscription listed with test headers, and therefore not in PC Analytics |
| DryRun | Submitted with SessionMode=DryRun, and therefore not in PC |
-|||
*Request headers*:
GET: `https://marketplaceapi.microsoft.com/api/usageEvents`
| x-ms-requestid | Unique string value (preferably a GUID), for tracking the request from the client. If this value is not provided, one will be generated and provided in the response headers. |
| x-ms-correlationid | Unique string value for operation on the client. This parameter correlates all events from client operation with events on the server side. If this value isn't provided, one will be generated and provided in the response headers. |
| authorization | A unique access token that identifies the ISV that is making this API call. The format is `Bearer <access_token>` when the token value is retrieved by the publisher. For more information, see:<br><ul><li>SaaS in [Get the token with an HTTP POST](./partner-center-portal/pc-saas-registration.md#get-the-token-with-an-http-post)</li><li>Managed application in [Authentication strategies](marketplace-metering-service-authentication.md)</li></ul> |
-|||
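Putting the query parameters and headers together, a usage events query might look like the following sketch; the `usageStartDate` filter, the `api-version` spelling, and the placeholder values are assumptions to confirm against the API reference.

```http
GET https://marketplaceapi.microsoft.com/api/usageEvents?usageStartDate=2022-05-01&api-version=2018-08-31
Content-Type: application/json
x-ms-requestid: <GUID>
authorization: Bearer <access_token>
```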
### Responses
marketplace Marketplace Metering Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-metering-service-authentication.md
For more information about these tokens, see [Azure Active Directory access toke
| **Parameter name** | **Required** | **Description** |
| | | |
| `tenantId` | True | Tenant ID of the registered Azure AD application. |
-| | | |
#### *Request header*

| **Header name** | **Required** | **Description** |
| | | |
| `Content-Type` | True | Content type associated with the request. The default value is `application/x-www-form-urlencoded`. |
-| | | |
#### *Request body*
For more information about these tokens, see [Azure Active Directory access toke
| `Client_id` | True | Client/app identifier associated with the Azure AD app. |
| `client_secret` | True | Secret associated with the Azure AD app. |
| `Resource` | True | Target resource for which the token is requested. Use `20e940b3-4c77-4b0b-9a53-9e16a1b010a7`. |
-| | | |
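Combined, this is a standard client-credentials token request; a minimal sketch follows, assuming the Azure AD v1 `oauth2/token` endpoint, with the tenant, client ID, and secret values as placeholders.

```http
POST https://login.microsoftonline.com/<tenantId>/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=<client_id>&client_secret=<client_secret>&resource=20e940b3-4c77-4b0b-9a53-9e16a1b010a7
```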
#### *Response*

| **Name** | **Type** | **Description** |
| | | - |
| `200 OK` | `TokenResponse` | Request succeeded. |
-| | | |
#### *TokenResponse*
marketplace Marketplace Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-power-bi.md
Before you begin, review these links, which provide templates, tips, and samples
- [Tips for authoring a Power BI app](/power-bi/service-template-apps-tips)
- [Samples](/power-bi/service-template-apps-samples)
-
## Publishing benefits

Benefits of publishing to the commercial marketplace:
This is the only licensing option available for Power BI app offers:
| Licensing option | Transaction process |
| | |
| Get it now (free) | List your offer to customers for free. |
-|
\* As the publisher, you support all aspects of the software license transaction, including (but not limited to) order, fulfillment, metering, billing, invoicing, payment, and collection.
marketplace Marketplace Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-virtual-machines.md
These are the available licensing options for VM offers:
| | |
| Usage-based | Also known as pay-as-you-go. This licensing model lets you bill your customers per hour through various pricing options. |
| BYOL | The Bring Your Own Licensing option lets your customers bring existing software licenses to Azure. * |
-|
`*` As the publisher, you support all aspects of the software license transaction, including (but not limited to) order, fulfillment, metering, billing, invoicing, payment, and collection.
The following are types of trials that can be configured to help identify custom
| | - |
| Free trial | Offer your customers a one-, three- or six-month free trial. |
| Test drive | This option lets your customers evaluate your solution at no additional cost to them. They don't need to be an existing Azure customer to engage with the trial experience. Learn more about [test drives](#test-drive). |
-|
> [!NOTE]
> The licensing model along with any trial opportunities you select will determine the additional information you'll need to provide when you create the offer in Partner Center.
marketplace Pc Saas Fulfillment Subscription Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-subscription-api.md
Calling the Resolve API will return subscription details and status for SaaS sub
| `x-ms-correlationid` | A unique string value for operation on the client. This parameter correlates all events from client operation with events on the server side. If this value isn't provided, one will be generated and provided in the response headers. |
| `authorization` | A unique access token that identifies the publisher making this API call. The format is `"Bearer <access_token>"` when the token value is retrieved by the publisher as explained in [Get a token based on the Azure AD app](./pc-saas-registration.md#get-the-token-with-an-http-post). |
| `x-ms-marketplace-token` | The purchase identification *token* parameter to resolve. The token is passed in the landing page URL call when the customer is redirected to the SaaS partner's website (for example: `https://contoso.com/signup?token=<token>`). <br> <br> Note that the *token* value being encoded is part of the landing page URL, so it needs to be decoded before it's used as a parameter in this API call. <br> <br> Here's an example of an encoded string in the URL: `contoso.com/signup?token=ab%2Bcd%2Fef`, where *token* is `ab%2Bcd%2Fef`. The same token decoded will be: `Ab+cd/ef` |
-| | |
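For reference, a resolve call combining these headers might look like the following sketch; the path is the standard SaaS fulfillment `subscriptions/resolve` route, and the token values are placeholders.

```http
POST https://marketplaceapi.microsoft.com/api/saas/subscriptions/resolve?api-version=2018-08-31
Content-Type: application/json
authorization: Bearer <access_token>
x-ms-marketplace-token: <decoded purchase token>
```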
*Response codes:*
After the SaaS account is configured for an end user, the publisher must call th
| Parameter | Value |
| -- | |
| `ApiVersion` | Use 2018-08-31. |
-| `subscriptionId` | The unique identifier of the purchased SaaS subscription. This ID is obtained after resolving the commercial marketplace authorization token by using the [Resolve API](#resolve-a-purchased-subscription).
- |
+| `subscriptionId` | The unique identifier of the purchased SaaS subscription. This ID is obtained after resolving the commercial marketplace authorization token by using the [Resolve API](#resolve-a-purchased-subscription). |
*Request headers:*
marketplace Plan Azure App Managed App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-app-managed-app.md
Use an Azure Application: Managed application plan when the following conditions
| Azure-compatible virtual hard disk (VHD) | VMs must be built on Windows or Linux. For more information, see:<br> * [Create an Azure VM technical asset](./azure-vm-certification-faq.yml#address-a-vulnerability-or-an-exploit-in-a-vm-offer) (for Windows VHDs).<br> * [Linux distributions endorsed on Azure](../virtual-machines/linux/endorsed-distros.md) (for Linux VHDs). |
| Customer usage attribution | All new Azure application offers must also include an [Azure partner customer usage attribution](azure-partner-customer-usage-attribution.md) GUID. For more information about customer usage attribution and how to enable it, see [Azure partner customer usage attribution](azure-partner-customer-usage-attribution.md). |
| Deployment package | You'll need a deployment package that will let customers deploy your plan. If you create multiple plans that require the same technical configuration, you can use the same package. For details, see the next section: Deployment package. |
-|||
> [!NOTE]
> Managed applications must be deployable through Azure Marketplace. If customer communication is a concern, reach out to interested customers after you've enabled lead sharing.
You can configure a maximum of five policies, and only one instance of each Poli
| Azure Data Lake Store Encryption | No |
| Audit Diagnostic Setting | Yes |
| Audit Resource Location compliance | No |
-|||
For each policy type you add, you must associate Standard or Free Policy SKU. The Standard SKU is required for audit policies. Policy names are limited to 50 characters.

## Next steps

-- [Create an Azure application offer](azure-app-offer-setup.md)
+- [Create an Azure application offer](azure-app-offer-setup.md)
marketplace Plan Azure App Solution Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-app-solution-template.md
The solution template plan type requires an [Azure Resource Manager template (AR
| Customer usage attribution | Enabling customer usage attribution is required on all solution templates that are published on Azure Marketplace. For more information about customer usage attribution and how to enable it, see [Azure partner customer usage attribution](azure-partner-customer-usage-attribution.md). |
| Use managed disks | [Managed disks](../virtual-machines/managed-disks-overview.md) is the default option for persisted disks of infrastructure as a service (IaaS) VMs in Azure. You must use managed disks in solution templates.<ul><li>To update your solution templates, follow the guidance in [Use managed disks in Azure Resource Manager templates](../virtual-machines/using-managed-disks-template-deployments.md), and use the provided [samples](https://github.com/Azure/azure-quickstart-templates).</li><li>To publish the VHD as an image in Azure Marketplace, import the underlying VHD of the managed disks to a storage account by using either [Azure PowerShell](/previous-versions/azure/virtual-machines/scripts/virtual-machines-powershell-sample-copy-managed-disks-vhd) or the [Azure CLI](/previous-versions/azure/virtual-machines/scripts/virtual-machines-cli-sample-copy-managed-disks-vhd)</li></ul> |
| Deployment package | You'll need a deployment package that will let customers deploy your plan. If you create multiple plans that require the same technical configuration, you can use the same plan package. For details, see the next section: Deployment package. |
-|||
## Deployment package
For more information, see [Private offers in the Microsoft commercial marketplac
## Next steps

-- [Create an Azure application offer](azure-app-offer-setup.md)
+- [Create an Azure application offer](azure-app-offer-setup.md)
marketplace Plan Consulting Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-consulting-service-offer.md
To help create your offer more easily, prepare some of these items ahead of time
|Getting Started with Azure IoT in Manufacturing |Manufacturing IoT: 2-Day Assessment |
|Workshop on Smart Toasters |Smart Toasters: 1-Week Workshop |
|SQL Server Migration PoC by Contoso |SQL Migration: 3-Wk Proof of Concept |
-| | |
**Search results summary**: Describe the purpose or goal of your offer in 200 characters or less. This summary is used in the commercial marketplace listing search results. It shouldn't be identical to the title. Consider including your top SEO keywords.
When writing the description, follow these criteria, according to your service t
|Implementation |Include a detailed agenda for implementations longer than a day, and describe what engineering changes, technical artifacts, or other artifacts a customer can expect as outcomes of the engagement. |
|Proof of concept |Describe what engineering changes, technical artifacts, or other artifacts a customer can expect as outcomes of the engagement. |
|Workshop |Include a detailed daily, weekly, or monthly agenda depending on the duration of your offer. Articulate the learning goals or other deliverables of your workshop. |
-| | |
Here are some tips for writing your description:
Follow these guidelines for your logos:
Your consulting service offer can be made available in one or more countries or regions. In Partner Center, you can decide the price for each market you select. For the complete list of supported markets and currencies, see [Geographic availability and currency support for the commercial marketplace](./marketplace-geo-availability-currencies.md).
-
## Next steps

* [Create a consulting service offer in the commercial marketplace](./create-consulting-service-offer.md)
marketplace Plan Saas Dev Test Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-dev-test-offer.md
This table describes the differences between the settings for DEV offers and PRO
| Connection webhook | Enter your dev/test endpoint. | Enter your production endpoint. |
| Azure Active Directory tenant ID | Enter your test app registration tenant ID (Azure AD directory ID). | Enter your production app registration tenant ID. |
| Azure Active Directory application ID | Enter your test app registration application ID (client ID). | Enter your production app registration application ID. |
-||||
### Plan visibility
To reduce your cost for testing the pricing models, including Marketplace custom
| $0.00 - $0.01 | Set a total transaction cost of zero to have no financial impact or one cent to have a low cost. Use this price when making calls to the metering APIs, or to test purchasing plans in your offer while developing your solution. |
| $0.01 | Use this price range to test analytics, reporting, and the purchase process. |
| $50.00 - $100.00 | Use this price range to test payout. For information about our payment schedule, see [Payout schedules and processes](/partner-center/payout-policy-details). |
-|||
> [!IMPORTANT]
> To avoid being charged a store service fee on your test, open a [support ticket](support.md) within 7 days of the test purchase.
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
The following table shows the listing options for SaaS offers in the commercial
| Free trial | The customer is redirected to your target URL via Azure Active Directory (Azure AD).``*``<br>You can change to a different listing option after publishing the offer. |
| Get it now (Free) | The customer is redirected to your target URL via Azure AD.``*``<br>You can change to a different listing option after publishing the offer. |
| Sell through Microsoft | Offers sold through Microsoft are called _transactable_ offers. An offer that is transactable is one in which Microsoft facilitates the exchange of money for a software license on the publisher's behalf. We bill SaaS offers using the pricing model you choose, and manage customer transactions on your behalf. Azure infrastructure usage fees are billed to you, the partner, directly. You should account for infrastructure costs in your pricing model. This is explained in more detail in [SaaS billing](#saas-billing) below.<br><br>**Note**: You cannot change this option once your offer is published. |
-|||
``*`` Publishers are responsible for supporting all aspects of the software license transaction, including but not limited to order, fulfillment, metering, billing, invoicing, payment, and collection.
If your SaaS offer is *both* an IT solution (Azure Marketplace) and a business s
| Yes | Yes | Yes | Azure Marketplace and Azure portal* |
| Yes | No | Yes | Azure portal only |
| No | No | Yes | Azure portal and AppSource |
-|||||
&#42; The private plan of the offer will only be available via the Azure portal and AppSource.
The following example shows a sample breakdown of costs and payouts to demonstra
| Customer is billed by Microsoft | $100.00 per month (Publisher must account for any incurred or pass-through infrastructure costs in the license fee) |
| **Microsoft bills** | **$100 per month** |
| Microsoft charges a 3% Marketplace Service Fee and pays you 97% of your license cost | $97.00 per month |
-|
A preview audience can access your offer prior to being published live in the online stores. They can see how your offer will look in the commercial marketplace and test the end-to-end functionality before you publish it live.
marketplace Plans Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plans-pricing.md
The following table shows the plan options for each offer type. The following se
| Managed service | | &#10004; (BYOL) | &#10004; |
| Software as a service | &#10004; | | &#10004; |
| Azure virtual machine | &#10004; | | &#10004; |
-|||||
Plans are not supported for the following offer types:
This table provides pricing information that's specific to various offer types
| Managed service | <ul><li>[Plan a Managed Service offer](plan-managed-service-offer.md#plans-and-pricing)</li><li>[Create plans for a Managed Service offer](create-managed-service-offer-plans.md#define-pricing-and-availability)</li></ul> |
| Power BI app | <ul><li>[Plan a Power BI App offer](marketplace-power-bi.md#licensing-options)</li></ul> |
| Software as a Service (SaaS) | <ul><li>[SaaS pricing models](plan-saas-offer.md#saas-pricing-models)</li><li>[SaaS billing](plan-saas-offer.md#saas-billing)</li><li>[Create plans for a SaaS offer](create-new-saas-offer-plans.md#define-a-pricing-model)</li></ul> |
-|
+ ## Custom prices
marketplace Power Bi Visual Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-properties.md
Select up to three **[Categories](./categories.md)** for grouping your offer int
| Filters | Narrow down the data within a report by using filters. |
| Narratives | Use narratives to tell a story with text and data. |
| Other | More specialized visuals to discover. |
-|||
## Industries
marketplace Price Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/price-changes.md
When planning a price change, consider the following:
| | | |
| Type of price change | This dictates how far into the future the price will be scheduled. | - Price decreases are scheduled for the first of the next month.<br> - Price increases are scheduled for the first of a future month, at least 90 days after the price change is published.<br> |
| Offer type | This dictates when you need to publish the price change via Partner Center. | Price changes must be published before the cut-off times below to be scheduled for the next month (based on type of price change):<br> - Software as a service offer: Four days before the end of the month.<br> - Virtual machine offer: Six days before the end of the month.<br> - Azure application offer: 14 days before the end of the month.<br> |
-|
#### Examples
marketplace Submission Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/submission-api-overview.md
Refer to the following table for supported submission APIs for each offer type.
| Managed Service | &#x2714; | |
| Power BI App | &#x2714; | |
| Software as a Service | | &#x2714; |
-|
Microsoft 365 Office add-ins, Microsoft 365 SharePoint solutions, Microsoft 365 Teams apps, and Power BI Visuals don't have submission API support.
marketplace Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/support.md
Open a ticket with Microsoft [marketplace publisher support](https://go.microsof
| Support channel | Description | Availability |
|: |: |: |
| For assistance, visit the Create an incident page located at [Marketplace Support](https://go.microsoft.com/fwlink/?linkid=2165533) | Support for Partner Center. | Support is provided 24x5. |
-|
### Technical
Open a ticket with Microsoft [marketplace publisher support](https://go.microsof
| Support channel | Description | Availability |
|: |: |: |
| Email: [cebrand@microsoft.com](mailto:cebrand@microsoft.com) | Answers to questions about usage for Azure logos and branding. | |
-|
For questions about Marketplace Rewards, contact [Partner Center support](https://partner.microsoft.com/support/v2/?stage=1).
marketplace Test Saas Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/test-saas-plan.md
_Query parameters:_
| quantity | You can enter 1 for quantity as the test value |
| dimension | Enter the name of the dimension defined in the metered plan |
| planId | Enter the metered plan ID |
-|||
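As a sketch, a test usage event built from these values might look like the following body; the shape follows the standard marketplace metering service API rather than anything specific to this article, and the subscription identifier and timestamps are placeholders.

```json
{
  "resourceId": "<purchased SaaS subscription GUID>",
  "quantity": 1,
  "dimension": "<dimension defined in the metered plan>",
  "effectiveStartTime": "2022-05-10T10:00:00Z",
  "planId": "<metered plan ID>"
}
```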
### View the response
For more details about sending metered usage events, see [Marketplace metered bi
When you complete your tests, you can do the following:

- [Unsubscribe from and deactivate your test plan](test-saas-unsubscribe.md).
-- [Create a plan](create-new-saas-offer-plans.md) in your production offer with the prices you want to charge customers and [publish the production offer live](test-publish-saas-offer.md).
+- [Create a plan](create-new-saas-offer-plans.md) in your production offer with the prices you want to charge customers and [publish the production offer live](test-publish-saas-offer.md).
marketplace Update Existing Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/update-existing-offer.md
Use these steps to update an offer that's been successfully published to Preview
1. When you're ready to publish your updated offer, select **Review and publish** from any page. The **Review and publish** page will open. On this page you'll see the completion status for the sections of the offer that you updated:
    - **Unpublished changes**: The section has been updated and is complete. All required data has been provided and there were no errors introduced in the updates.
    - **Incomplete**: The updates made to the section introduced errors that need to be fixed or require more information to be provided.
-2. Select **Publish** to submit the updated offer for publishing. Your offer will then go through the standard [validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
+1. Select **Publish** to submit the updated offer for publishing. Your offer will then go through the standard [validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
> [!IMPORTANT]
> You must review your offer preview once it's available and select **Go-live** to publish your updated offer to your intended audience (public or private).
marketplace Usage Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/usage-dashboard.md
_**Table 1: Dictionary of data terms**_
| Estimated Financial Impact (USD) | Estimated Financial Impact in USD | **Applicable for offers with custom meter dimensions**.<br>When Partner Center flags an overage usage by the customer for the offer's custom meter dimension as anomalous, the field specifies the estimated financial impact (in USD) of the anomalous overage usage.<br>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic means, then the value will be null._ | EstimatedFinancialImpactUSD |
| Asset Id | Asset Id | **Applicable for offers with custom meter dimensions**.<br>The unique identifier of the customer's order subscription for your commercial marketplace service. Virtual machine usage-based offers are not associated with an order. | Asset Id |
| N/A | Resource Id | The fully qualified ID of the resource, including the resource name and resource type. Note that this is a data field available in download reports only.<br>Use the format:<br> /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name}<br>**Note**: This field will be deprecated on 10/20/2021. | N/A |
-|||||
### Usage page filters
marketplace User Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/user-roles.md
In order to access capabilities related to marketplace or your developer account
| Business contributor | &#10004;&#160;&#160;Access financial information<br>&#10004;&#160;&#160;Set pricing details<br>&#x2718;&#160;&#160;Create or submit new apps and add-ons |
| Financial contributor | &#10004;&#160;&#160;View payout reports<br>&#10004;&#160;&#160;Manage payout and tax profiles<br>&#x2718;&#160;&#160;Make changes to apps or settings |
| Marketer | &#10004;&#160;&#160;Respond to customer reviews<br>&#10004;&#160;&#160;View non-financial reports<br>&#x2718;&#160;&#160;Make changes to apps or settings |
-|||
> [!NOTE]
> For the Commercial Marketplace program, the Global admin, Business Contributor, Financial Contributor, and Marketer roles are not used. Assigning these roles to users has no effect. Only the Manager and Developer roles grant permissions to users.
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
The Log Analytics workspace must exist in the following regions:
West Central US
West Europe
West US
- West US 2
- West US 3
+ West US 2
:::column-end:::
:::row-end:::
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Previously updated : 05/4/2022 Last updated : 05/11/2022
One advantage of running your workload in Azure is global reach. The flexible se
| Norway East | :heavy_check_mark: | :x: | :x: |
| South Africa North | :heavy_check_mark: | :x: | :x: |
| South Central US | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| South India | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| South India | :x: $$ | :x: | :heavy_check_mark: |
| Southeast Asia | :heavy_check_mark: | :x: $ | :x: |
| Sweden Central | :heavy_check_mark: | :x: | :x: |
-| Switzerland North | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
+| Switzerland North | :heavy_check_mark: | :x: $ ** | :x: |
| UAE North | :heavy_check_mark: | :x: | :x: |
| US Gov Arizona | :heavy_check_mark: | :x: | :x: |
| US Gov Virginia | :heavy_check_mark: | :heavy_check_mark: | :x: |
One advantage of running your workload in Azure is global reach. The flexible se
| UK West | :heavy_check_mark: | :x: | :x: |
| West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| West US | :heavy_check_mark: | :x: | :x: |
-| West US 2 | :heavy_check_mark: | :x: $ | :x: |
+| West US 2 | :x: $$ | :x: $ | :x: |
| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :x: |

$ New Zone-redundant high availability deployments are temporarily blocked in these regions. Already provisioned HA servers are fully supported.
+$$ New server deployments are temporarily blocked in these regions. Already provisioned servers are fully supported.
+
** Zone-redundant high availability can now be deployed when you provision new servers in these regions. For pre-existing servers deployed in an AZ with *no preference* (which you can check in the Azure portal), the standby will be provisioned in the same AZ. To configure zone-redundant high availability, perform a point-in-time restore of the server and enable HA on the restored server.

<!-- We continue to add more regions for flexible server. -->
postgresql Howto Build Scalable Apps Model High Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-model-high-throughput.md
Last updated 04/28/2022
## Common filter as shard key
-To pick the shard key for a high-throughput transactional (HTAP) application,
-follow these guidelines:
+To pick the shard key for a high-throughput transactional application, follow
+these guidelines:
* Choose a column that is used for point lookups and is present in most create, read, update, and delete operations.
postgresql Resources Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-compute.md
Previously updated : 02/23/2022 Last updated : 05/10/2022

# Azure Database for PostgreSQL – Hyperscale (Citus) compute and storage
storage resources.
| Resource | Available options |
|--|--|
-| Compute, vCores | 2, 4, 8 |
+| Compute, vCores | 2, 4, 8, 16, 32, 64 |
| Memory per vCore, GiB | 4 |
| Storage size, GiB | 128, 256, 512 |
| Storage type | General purpose (SSD) |
selected number of vCores.
| 2 | 8 |
| 4 | 16 |
| 8 | 32 |
+| 16 | 64 |
+| 32 | 128 |
+| 64 | 256 |
The total amount of storage you provision also defines the I/O capacity available to the basic tier node.
private-5g-core Upgrade Packet Core Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md
+
+ Title: Upgrade a packet core instance
+
+description: In this how-to guide, you'll learn how to upgrade a packet core instance using the Azure portal.
+ Last updated : 04/27/2022
+# Upgrade the packet core instance in a site - Azure portal
+
+Each Azure Private 5G Core Preview site contains a packet core instance, which is a cloud-native implementation of the 3GPP standards-defined 5G Next Generation Core (5G NGC or 5GC). You'll need to periodically upgrade your packet core instances to get access to the latest Azure Private 5G Core features and maintain support for your private mobile network. In this how-to guide, you'll learn how to upgrade a packet core instance using the Azure portal.
+
+> [!NOTE]
+> Azure Resource Manager templates (ARM templates) are not currently available for upgrading packet core instances. Use the Azure portal to carry out upgrades.
+
+## Prerequisites
+
+- Contact your Microsoft assigned trials engineer. They'll guide you through the upgrade process and provide you with the required information, including the amount of time you'll need to allow for the upgrade to complete and the new software version number.
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
+
+## Upgrade the packet core instance
+
+Carry out the following steps to upgrade the packet core instance.
+
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Search for and select the **Mobile Network** resource representing the private mobile network.
+
+ :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
+
+1. In the **Resource** menu, select **Sites**.
+1. Select the site containing the packet core instance you want to upgrade.
+1. Under the **Network function** heading, select the name of the packet core control plane resource shown next to **Packet Core**.
+
+ :::image type="content" source="media/upgrade-packet-core-azure-portal/packet-core-field.png" alt-text="Screenshot of the Azure portal showing the Packet Core field.":::
+
+1. Select **Upgrade version**.
+
+ :::image type="content" source="media/upgrade-packet-core-azure-portal/upgrade-version.png" alt-text="Screenshot of the Azure portal showing the Upgrade version option.":::
+
+1. Under **Upgrade packet core version**, fill out the **New version** field with the string for the new software version provided to you by your trials engineer.
+
+ :::image type="content" source="media/upgrade-packet-core-azure-portal/upgrade-packet-core-version.png" alt-text="Screenshot of the Azure portal showing the New version field on the Upgrade packet core version screen.":::
+
+1. Select **Modify**.
+1. Azure will now redeploy the packet core instance at the new software version. The Azure portal will display the following confirmation screen when this deployment is complete.
+
+ :::image type="content" source="media/site-deployment-complete.png" alt-text="Screenshot of the Azure portal showing the confirmation of a successful deployment of a packet core instance.":::
+
+1. Select **Go to resource group**, and then select the **Packet Core Control Plane** resource representing the control plane function of the packet core instance in the site.
+1. Check the **Version** field under the **Configuration** heading to confirm that it displays the new software version.
+
+## Next steps
+You may want to use Log Analytics or the packet core dashboards to confirm your packet core instance is operating normally after the upgrade.
+
+- [Monitor Azure Private 5G Core with Log Analytics](monitor-private-5g-core-with-log-analytics.md)
+- [Packet core dashboards](packet-core-dashboards.md)
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|Supported services |Available regions | Other considerations | Status |
|:-|:--|:-|:--|
|Azure Machine Learning | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Machine Learning.](../machine-learning/how-to-configure-private-link.md) |
+|Azure Bot Service | All public regions | Supported only on Direct Line App Service extension | GA </br> [Learn how to create a private endpoint for Azure Bot Service](/azure/bot-service/dl-network-isolation-concept) |
### Analytics
The following tables list the Private Link services and the regions where they'r
| Azure Database for PostgreSQL - Single server | All public regions <br/> All Government regions<br/>All China regions | Supported for General Purpose and Memory Optimized pricing tiers | GA <br/> [Learn how to create a private endpoint for Azure Database for PostgreSQL.](../postgresql/concepts-data-access-and-security-private-link.md) |
| Azure Database for MySQL | All public regions<br/> All Government regions<br/>All China regions | | GA <br/> [Learn how to create a private endpoint for Azure Database for MySQL.](../mysql/concepts-data-access-security-private-link.md) |
| Azure Database for MariaDB | All public regions<br/> All Government regions<br/>All China regions | | GA <br/> [Learn how to create a private endpoint for Azure Database for MariaDB.](../mariadb/concepts-data-access-security-private-link.md) |
+| Azure Cache for Redis | All public regions<br/> All Government regions<br/>All China regions | | GA <br/> [Learn how to create a private endpoint for Azure Cache for Redis.](../azure-cache-for-redis/cache-private-link.md) |
### Integration
purview How To Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-resource-group.md
Check blog, demo and related tutorials:
* [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md)
* [Data owner policies on an Azure Storage account](./how-to-data-owner-policies-storage.md)
* [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
-* [Video: Demo of data owner access policies for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Video: Demo of data owner access policies for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
This section contains a reference of how actions in Microsoft Purview data polic
## Next steps

Check blog, demo and related tutorials:
-* [Demo of access policy for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Demo of access policy for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
* [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md)
* [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
* [Blog: What's New in Microsoft Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
resource-mover Support Matrix Move Region Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-azure-vm.md
Network security group | Supported | Specify an existing resource in the target
Reserved (static) IP address | Supported | You can't currently configure this. The value defaults to the source value. <br/><br/> If the NIC on the source VM has a static IP address, and the target subnet has the same IP address available, it's assigned to the target VM.<br/><br/> If the target subnet doesn't have the same IP address available, the initiate move for the VM will fail.
Dynamic IP address | Supported | You can't currently configure this. The value defaults to the source value.<br/><br/> If the NIC on the source has dynamic IP addressing, the NIC on the target VM is also dynamic by default.
IP configurations | Supported | You can't currently configure this. The value defaults to the source value.
+VNET Peering | Not Retained | The VNET that is moved to the target region will not retain its VNET peering configuration from the source region. To retain the peering, it must be configured again manually in the target region.
## Outbound access requirements
search Search Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-analyzers.md
Previously updated : 09/08/2021 Last updated : 05/11/2022
An *analyzer* is a component of the [full text search engine](search-lucene-query-architecture.md) that's responsible for processing strings during indexing and query execution. Text processing (also known as lexical analysis) is transformative, modifying a string through actions such as these:
-+ Remove non-essential words (stopwords) and punctuation
++ Remove non-essential words ([stopwords](https://github.com/Azure-Samples/azure-search-sample-dat)) and punctuation
+ Split up phrases and hyphenated words into component parts
+ Lower-case any upper-case words
+ Reduce words into primitive root forms for storage efficiency and so that matches can be found regardless of tense
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
Title: Create an index alias
-description: Create an alias to define a secondary name that can be used to refer to an index for querying and indexing.
+description: Create an Azure Cognitive Search index alias to define a static secondary name that can be used for referencing a search index for querying and indexing.
Last updated 03/01/2022
> [!IMPORTANT] > Index aliases are currently in public preview and available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In Azure Cognitive Search, an alias is a secondary name that can be used to refer to an index for querying, indexing, and other operations. You can create an alias that maps to a search index and substitute the alias name in places where you would otherwise reference an index name. This gives you added flexibility if you ever need to change which index your application is pointing to. Instead of updating the references to the index name in your application, you can just update the mapping for your alias.
+In Azure Cognitive Search, an index alias is a secondary name that can be used to refer to an index for querying, indexing, and other operations. You can create an alias that maps to a search index and substitute the alias name in places where you would otherwise reference an index name. An alias adds flexibility if you need to change which index your application is pointing to. Instead of updating the references in your application, you can just update the mapping for your alias.
The main goal of index aliases is to make it easier to manage your production indexes. For example, if you need to make a change to your index definition, such as editing a field or adding a new analyzer, you'll have to create a new search index because all search indexes are immutable. This means you either need to [drop and rebuild your index](search-howto-reindex.md) or create a new index and then migrate your application over to that index.
Instead of dropping and rebuilding your index, you can use index aliases. A typi
1. When you need to make a change to your index that requires a rebuild, create a new search index
1. When your new index is ready to go, update the alias to map to the new index and requests will automatically be routed to the new index
-## Create an alias
+## Create an index alias
You can create an alias using the preview REST API, the preview SDKs, or through [Visual Studio Code](search-get-started-vs-code.md). An alias consists of the `name` of the alias and the name of the search index that the alias is mapped to. Only one index name can be specified in the `indexes` array.
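With the preview REST API, for example, the call is a single request. A hedged sketch; the service name and admin key are placeholders, and the index name `my-index` is assumed:

```bash
# Create an alias named my-alias that maps to the existing index my-index.
# <your-service> and <your-admin-key> are placeholders.
curl -X POST "https://<your-service>.search.windows.net/aliases?api-version=2021-04-30-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-admin-key>" \
  -d '{ "name": "my-alias", "indexes": ["my-index"] }'
```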
To create an alias in Visual Studio Code:
-
-## Send requests
+## Send requests to an index alias
Once you've created your alias, you're ready to start using it. Aliases can be used for [querying](/rest/api/searchservice/search-documents) and [indexing](/rest/api/searchservice/addupdate-or-delete-documents).
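In practice, you substitute the alias name wherever an index name would appear in the request URL. A sketch, assuming the alias `my-alias` created earlier:

```bash
# Query the index through its alias rather than its name.
curl -X POST "https://<your-service>.search.windows.net/indexes/my-alias/docs/search?api-version=2021-04-30-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-admin-key>" \
  -d '{ "search": "*" }'
```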
PUT /aliases/my-alias?api-version=2021-04-30-preview
After you make the update to the alias, requests will automatically start to be routed to the new index.

> [!NOTE]
-> An update to an alias may take up to 10 seconds to propogate through the system so you should wait at least 10 seconds before deleting the index that the alias was previously mapped to.
+> An update to an alias may take up to 10 seconds to propagate through the system so you should wait at least 10 seconds before deleting the index that the alias was previously mapped to.
## See also
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
The following properties can be set for CORS:
[**Create Index**](/rest/api/searchservice/create-index) creates the physical data structures (files and inverted indices) on your search service. Once the index is created, your ability to effect changes using [**Update Index**](/rest/api/searchservice/update-index) is contingent upon whether your modifications invalidate those physical structures. Most field attributes can't be changed once the field is created in your index.
+Alternatively, you can [create an index alias](search-how-to-alias.md) that serves as a stable reference in your application code. Instead of updating your code, you can update an index alias to point to newer index versions.
+
To minimize churn in the design process, the following table describes which elements are fixed and flexible in the schema. Changing a fixed element requires an index rebuild, whereas flexible elements can be changed at any time without impacting the physical implementation.

| Element | Can be updated? |
search Search Lucene Query Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-lucene-query-architecture.md
Lexical analyzers process *term queries* and *phrase queries* after the query tr
The most common form of lexical analysis is *linguistic analysis*, which transforms query terms based on rules specific to a given language:

* Reducing a query term to the root form of a word
-* Removing non-essential words (stopwords, such as "the" or "and" in English)
+* Removing non-essential words ([stopwords](https://github.com/Azure-Samples/azure-search-sample-dat), such as "the" or "and" in English)
* Breaking a composite word into component parts
* Lower casing an upper case word
search Search What Is An Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-an-index.md
But you'll also want to become familiar with methodologies for loading an index
+ [Create a search index](search-how-to-create-search-index.md)
++ [Create an index alias](search-how-to-alias.md)
+
+ [Data import overview](search-what-is-data-import.md)
+ [Add, Update or Delete Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents)
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
Title: Customer Lockbox for Microsoft Azure description: Technical overview of Customer Lockbox for Microsoft Azure, which provides control over cloud provider access when Microsoft may need to access customer data. Last updated 05/12/2021
security Protection Customer Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/protection-customer-data.md
description: Learn how Azure protects customer data through data segregation, da
documentationcenter: na editor: TomSh ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
Previously updated : 04/8/2022 Last updated : 05/10/2022
Access to customer data by Microsoft operations and support personnel is denied
Azure support personnel are assigned unique corporate Active Directory accounts by Microsoft. Azure relies on Microsoft corporate Active Directory, managed by Microsoft Information Technology (MSIT), to control access to key information systems. Multi-factor authentication is required, and access is granted only from secure consoles.
-All access attempts are monitored and can be displayed via a basic set of reports.
-
## Data protection

Azure provides customers with strong data security, both by default and as customer options.
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
When a Microsoft Sentinel automation rule runs a playbook, it uses a special Mic
In order for an automation rule to run a playbook, this account must be granted explicit permissions to the resource group where the playbook resides. At that point, any automation rule will be able to run any playbook in that resource group.
-When you're configuring an automation rule and adding a **run playbook** action, a drop-down list of playbooks will appear. Playbooks to which Microsoft Sentinel does not have permissions will show as unavailable ("grayed out"). You can grant Microsoft Sentinel permission to the playbooks' resource groups on the spot by selecting the **Manage playbook permissions** link.
+When you're configuring an automation rule and adding a **run playbook** action, a drop-down list of playbooks will appear. Playbooks to which Microsoft Sentinel does not have permissions will show as unavailable ("grayed out"). You can grant Microsoft Sentinel permission to the playbooks' resource groups on the spot by selecting the **Manage playbook permissions** link. To grant those permissions, you'll need **Owner** permissions on those resource groups. [See the full permissions requirements](tutorial-respond-threats-playbook.md#respond-to-incidents).
#### Permissions in a multi-tenant architecture
sentinel Configure Fusion Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-fusion-rules.md
This detection is enabled by default in Microsoft Sentinel. To check or change i
1. Fusion can also detect scenario-based threats using rules based on the following **scheduled analytics rule templates**.
- To enable the queries availiable as templates in the **Analytics** blade, go to the **Rule templates** tab, select the rule name in the templates gallery, and click **Create rule** in the details pane.
+ To enable the queries available as templates in the **Analytics** blade, go to the **Rule templates** tab, select the rule name in the templates gallery, and click **Create rule** in the details pane.
- [Cisco - firewall block but success logon to Azure AD](https://github.com/Azure/Azure-Sentinel/blob/60e7aa065b196a6ed113c748a6e7ae3566f8c89c/Detections/MultipleDataSources/SigninFirewallCorrelation.yaml)
- [Fortinet - Beacon pattern detected](https://github.com/Azure/Azure-Sentinel/blob/83c6d8c7f65a5f209f39f3e06eb2f7374fd8439c/Detections/CommonSecurityLog/Fortinet-NetworkBeaconPattern.yaml)
This detection is enabled by default in Microsoft Sentinel. To check or change i
- [Suspicious Resource deployment](https://github.com/Azure/Azure-Sentinel/blob/83c6d8c7f65a5f209f39f3e06eb2f7374fd8439c/Detections/AzureActivity/NewResourceGroupsDeployedTo.yaml)
- [Palo Alto Threat signatures from Unusual IP addresses](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/CommonSecurityLog/PaloAlto-UnusualThreatSignatures.yaml)
- To add queries that are not currently availiable as a rule template, see [create a custom analytics rule with a scheduled query](detect-threats-custom.md#create-a-custom-analytics-rule-with-a-scheduled-query).
+ To add queries that are not currently available as a rule template, see [create a custom analytics rule with a scheduled query](detect-threats-custom.md#create-a-custom-analytics-rule-with-a-scheduled-query).
- [New Admin account activity seen which was not seen historically](https://github.com/Azure/Azure-Sentinel/blob/83c6d8c7f65a5f209f39f3e06eb2f7374fd8439c/Hunting%20Queries/OfficeActivity/new_adminaccountactivity.yaml)
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
Users with particular job requirements may need to be assigned additional roles
Microsoft Sentinel uses **playbooks** for automated threat response. Playbooks are built on **Azure Logic Apps**, and are a separate Azure resource. You might want to assign to specific members of your security operations team the ability to use Logic Apps for Security Orchestration, Automation, and Response (SOAR) operations. You can use the [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) role to assign explicit permission for using playbooks.
+- **Giving Microsoft Sentinel permissions to run playbooks**
+
+ Microsoft Sentinel uses a special service account to run incident-trigger playbooks manually or to call them from automation rules. The use of this account (as opposed to your user account) increases the security level of the service.
+
+ In order for an automation rule to run a playbook, this account must be granted explicit permissions to the resource group where the playbook resides. At that point, any automation rule will be able to run any playbook in that resource group. To grant these permissions to this service account, your account must have **Owner** permissions on the resource groups containing the playbooks.
- **Connecting data sources to Microsoft Sentinel**

  For a user to add **data connectors**, you must assign the user write permissions on the Microsoft Sentinel workspace. Also, note the required additional permissions for each connector, as listed on the relevant connector page.
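  For example, the built-in **Microsoft Sentinel Contributor** role includes that write access. A hedged CLI sketch; the user, subscription, resource group, and workspace names are placeholders:

  ```azurecli
  # Grant a user write access to the Microsoft Sentinel workspace so they can add data connectors.
  # The assignee and all scope segments below are placeholders.
  az role assignment create \
    --assignee "analyst@contoso.com" \
    --role "Microsoft Sentinel Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
  ```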
sentinel Tutorial Respond Threats Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-respond-threats-playbook.md
To create an automation rule:
<a name="explicit-permissions"></a>

> [!IMPORTANT]
+ >
> **Microsoft Sentinel must be granted explicit permissions in order to run playbooks based on the incident trigger**, whether manually or from automation rules. If a playbook appears "grayed out" in the drop-down list, it means Sentinel does not have permission to that playbook's resource group. Click the **Manage playbook permissions** link to assign permissions.
+ >
> In the **Manage permissions** panel that opens up, mark the check boxes of the resource groups containing the playbooks you want to run, and click **Apply**.
+ >
> :::image type="content" source="./media/tutorial-respond-threats-playbook/manage-permissions.png" alt-text="Manage permissions":::
+ >
> - You yourself must have **owner** permissions on any resource group to which you want to grant Microsoft Sentinel permissions, and you must have the **Logic App Contributor** role on any resource group containing playbooks you want to run.
+ >
> - In a multi-tenant deployment, if the playbook you want to run is in a different tenant, you must grant Microsoft Sentinel permission to run the playbook in the playbook's tenant.
>   1. From the Microsoft Sentinel navigation menu in the playbooks' tenant, select **Settings**.
>   1. In the **Settings** blade, select the **Settings** tab, then the **Playbook permissions** expander.
>   1. Click the **Configure permissions** button to open the **Manage permissions** panel mentioned above, and continue as described there.
+ >
> - If, in an **MSSP** scenario, you want to [run a playbook in a customer tenant](automate-incident-handling-with-automation-rules.md#permissions-in-a-multi-tenant-architecture) from an automation rule created while signed into the service provider tenant, you must grant Microsoft Sentinel permission to run the playbook in ***both tenants***. In the **customer** tenant, follow the instructions for the multi-tenant deployment in the preceding bullet point. In the **service provider** tenant, you must add the Azure Security Insights app in your Azure Lighthouse onboarding template:
>   1. From the Azure Portal go to **Azure Active Directory**.
>   1. Click on **Enterprise Applications**.
service-connector Tutorial Connect Web App App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-connect-web-app-app-configuration.md
+
+ Title: 'Tutorial: Connect a web app to Azure App Configuration with Service Connector'
+description: Learn how you can connect an ASP.NET Core application hosted in Azure Web Apps to App Configuration using Service Connector
+Last updated : 05/01/2022
+# Tutorial: Connect a web app to Azure App Configuration with Service Connector
+
+Learn how to connect an ASP.NET Core app running on Azure App Service to Azure App Configuration, using one of the following methods:
+
+- System-assigned managed identity (SMI)
+- User-assigned managed identity (UMI)
+- Service principal
+- Connection string
+
+In this tutorial, use the Azure CLI to complete the following tasks:
+
+> [!div class="checklist"]
+> - Set up Azure resources
+> - Create a connection between a web app and App Configuration
+> - Build and deploy your app to Azure App Service
+
+## Prerequisites
+
+- An Azure account with an active subscription. Your access role within the subscription must be "Contributor" or "Owner". [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- The Azure CLI. You can use it in [Azure Cloud Shell](https://shell.azure.com/) or [install it locally](/cli/azure/install-azure-cli).
+- [.NET SDK](https://dotnet.microsoft.com/download)
+- [Git](/devops/develop/git/install-and-set-up-git)
+
+## Sign in to Azure
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+## Set up Azure resources
+
+Start by creating your Azure resources.
+
+1. Clone the following sample repo:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet.git
+ ```
+
+1. Deploy the web app to Azure
+
+ Run `az login` to sign in to Azure, and then follow these steps to create an App Service and deploy the sample app. Make sure you have the Contributor role on the subscription.
+
+ ### [SMI](#tab/smi)
+
+ Create an app service and deploy the sample app that uses system-assigned managed identity to interact with App Config.
+
+ ```azurecli
+ # Change directory to the SMI sample
+ cd serviceconnector-webapp-appconfig-dotnet\system-managed-identity
+
+ # Create a web app
+
+ LOCATION='eastus'
+ RESOURCE_GROUP_NAME='service-connector-tutorial-rg'
+ APP_SERVICE_NAME='webapp-appconfig-smi'
+
+ az webapp up --location $LOCATION --resource-group $RESOURCE_GROUP_NAME --name $APP_SERVICE_NAME
+ ```
+
+ | Parameter | Description | Example |
+ |--|--|-|
+ | Location | Choose a location near you. Use `az account list-locations --output table` to list locations. | *eastus* |
+ | Resource group name | You'll use this resource group to organize all the Azure resources needed to complete this tutorial. | *service-connector-tutorial-rg* |
+ | App service name | The app service name is used as the name of the resource in Azure and to form the fully qualified domain name for your app, in the form of the server endpoint `https://<app-service-name>.azurewebsites.net`. This name must be unique across all of Azure and the only allowed characters are `A`-`Z`, `0`-`9`, and `-`. | *webapp-appconfig-smi* |
+
+ ### [UMI](#tab/umi)
+
+ Create an app service and deploy the sample app that uses user-assigned managed identity to interact with App Config.
+
+ ```azurecli
+ # Change directory to the UMI sample
+ cd serviceconnector-webapp-appconfig-dotnet\user-assigned-managed-identity
+
+ # Create a web app
+
+ LOCATION='eastus'
+ RESOURCE_GROUP_NAME='service-connector-tutorial-rg'
+ APP_SERVICE_NAME='webapp-appconfig-umi'
+
+ az webapp up --location $LOCATION --resource-group $RESOURCE_GROUP_NAME --name $APP_SERVICE_NAME
+ ```
+
+ | Parameter | Description | Example |
+ |--|--|-|
+ | Location | Choose a location near you. Use `az account list-locations --output table` to list locations. | *eastus* |
+ | Resource group name | You'll use this resource group to organize all the Azure resources needed to complete this tutorial. | *service-connector-tutorial-rg* |
+ | App service name | The app service name is used as the name of the resource in Azure and to form the fully qualified domain name for your app, in the form of the server endpoint `https://<app-service-name>.azurewebsites.net`. This name must be unique across all of Azure and the only allowed characters are `A`-`Z`, `0`-`9`, and `-`. | *webapp-appconfig-umi* |
+
+ Create a user-assigned managed identity. Save the output into a temporary notepad.
+ ```azurecli
+ az identity create --resource-group $RESOURCE_GROUP_NAME -n "myIdentity"
+ ```
+
+ ### [Service principal](#tab/serviceprincipal)
+
+ Create an app service and deploy the sample app that uses a service principal to interact with App Config.
+
+ ```azurecli
+ # Change directory to the service principal sample
+ cd serviceconnector-webapp-appconfig-dotnet\service-principal
+
+ # Create a web app
+
+ LOCATION='eastus'
+ RESOURCE_GROUP_NAME='service-connector-tutorial-rg'
+ APP_SERVICE_NAME='webapp-appconfig-sp'
+
+ az webapp up --location $LOCATION --resource-group $RESOURCE_GROUP_NAME --name $APP_SERVICE_NAME
+ ```
+
+ | Parameter | Description | Example |
+ |--|--|-|
+ | Location | Choose a location near you. Use `az account list-locations --output table` to list locations. | *eastus* |
+ | Resource group name | You'll use this resource group to organize all the Azure resources needed to complete this tutorial. | *service-connector-tutorial-rg* |
+ | App service name | The app service name is used as the name of the resource in Azure and to form the fully qualified domain name for your app, in the form of the server endpoint `https://<app-service-name>.azurewebsites.net`. This name must be unique across all of Azure and the only allowed characters are `A`-`Z`, `0`-`9`, and `-`. | *webapp-appconfig-sp* |
+
+ Create a service principal. Make sure to replace `yourSubscriptionID` with your actual subscription ID. Save the output into a temporary notepad.
+
+ ```azurecli
+ az ad sp create-for-rbac --name myServicePrincipal --role Contributor --scopes /subscriptions/{yourSubscriptionID}/resourceGroups/$RESOURCE_GROUP_NAME
+ ```
+
+ ### [Connection string](#tab/connectionstring)
+
+ Create an app service and deploy the sample app that uses a connection string to interact with App Config.
+
+ ```azurecli
+ # Change directory to the connection string sample
+ cd serviceconnector-webapp-appconfig-dotnet\connection-string
+
+ # Create a web app
+
+ LOCATION='eastus'
+ RESOURCE_GROUP_NAME='service-connector-tutorial-rg'
+ APP_SERVICE_NAME='webapp-appconfig-cs'
+
+ az webapp up --location $LOCATION --resource-group $RESOURCE_GROUP_NAME --name $APP_SERVICE_NAME
+ ```
+
+ | Parameter | Description | Example |
+ |--|--|-|
+ | Location | Choose a location near you. Use `az account list-locations --output table` to list locations. | *eastus* |
+ | Resource group name | You'll use this resource group to organize all the Azure resources needed to complete this tutorial. | *service-connector-tutorial-rg* |
+ | App service name | The app service name is used as the name of the resource in Azure and to form the fully qualified domain name for your app, in the form of the server endpoint `https://<app-service-name>.azurewebsites.net`. This name must be unique across all of Azure and the only allowed characters are `A`-`Z`, `0`-`9`, and `-`. | *webapp-appconfig-cs* |
+
+
+
+1. Create an Azure App Configuration store
+
+ ```azurecli
+ APP_CONFIG_NAME='my-app-config'
+
+ az appconfig create -g $RESOURCE_GROUP_NAME -n $APP_CONFIG_NAME --sku Free -l eastus
+ ```
+
+1. Import the test configuration file to Azure App Configuration.
+
+ ### [SMI](#tab/smi)
+
+ Import the test configuration file to Azure App Configuration using a system-assigned managed identity.
+
+ 1. Cd into the folder `ServiceConnectorSample`
+ 1. Import the [./sampleconfigs.json](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/system-managed-identity/ServiceConnectorSample/sampleconfigs.json) test configuration file into the App Configuration store. If you're using Cloud Shell, upload [sampleconfigs.json](../cloud-shell/persisting-shell-storage.md) before running the command.
+
+ ```azurecli
+ az appconfig kv import -n $APP_CONFIG_NAME --source file --format json --path ./sampleconfigs.json --separator : --yes
+ ```
+
+ ### [UMI](#tab/umi)
+
+ Import the test configuration file to Azure App Configuration using a user-assigned managed identity.
+
+ 1. Cd into the folder `ServiceConnectorSample`
+ 1. Import the [./sampleconfigs.json](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/user-assigned-managed-identity/ServiceConnectorSample/sampleconfigs.json) test configuration file into the App Configuration store. If you're using Cloud Shell, upload [sampleconfigs.json](../cloud-shell/persisting-shell-storage.md) before running the command.
+
+ ```azurecli
+ az appconfig kv import -n $APP_CONFIG_NAME --source file --format json --path ./sampleconfigs.json --separator : --yes
+ ```
+
+ ### [Service principal](#tab/serviceprincipal)
+
+ Import the test configuration file to Azure App Configuration using service principal.
+
+ 1. Cd into the folder `ServiceConnectorSample`
+ 1. Import the [./sampleconfigs.json](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/service-principal/ServiceConnectorSample/sampleconfigs.json) test configuration file into the App Configuration store. If you're using Cloud Shell, upload [sampleconfigs.json](../cloud-shell/persisting-shell-storage.md) before running the command.
+
+ ```azurecli
+ az appconfig kv import -n $APP_CONFIG_NAME --source file --format json --path ./sampleconfigs.json --separator : --yes
+ ```
+
+ ### [Connection string](#tab/connectionstring)
+
+ Import the test configuration file to Azure App Configuration using a connection string.
+
+ 1. Cd into the folder `ServiceConnectorSample`
+ 1. Import the [./sampleconfigs.json](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/connection-string/ServiceConnectorSample/sampleconfigs.json) test configuration file into the App Configuration store. If you're using Cloud Shell, upload [sampleconfigs.json](../cloud-shell/persisting-shell-storage.md) before running the command.
+
+ ```azurecli
+ az appconfig kv import -n $APP_CONFIG_NAME --source file --format json --path ./sampleconfigs.json --separator : --yes
+ ```
+
+
+
+## Connect the web app to App Configuration
+
+Create a connection between your web application and your App Configuration store.
+
+### [SMI](#tab/smi)
+
+Create a connection between your web application and your App Configuration store, using system-assigned managed identity authentication. This connection is done through Service Connector.
+
+```azurecli
+az webapp connection create appconfig -g $RESOURCE_GROUP_NAME -n $APP_SERVICE_NAME --app-config $APP_CONFIG_NAME --tg $RESOURCE_GROUP_NAME --connection "app_config_smi" --system-identity
+```
+
+`system-identity` refers to the system-assigned managed identity (SMI) authentication type. Service Connector also supports the following authentication types: user-assigned managed identity (UMI), connection string (secret), and service principal.
+
+### [UMI](#tab/umi)
+
+Create a connection between your web application and your App Configuration store, using user-assigned managed identity authentication. This connection is done through Service Connector.
+
+```azurecli
+az webapp connection create appconfig -g $RESOURCE_GROUP_NAME -n $APP_SERVICE_NAME --app-config $APP_CONFIG_NAME --tg $RESOURCE_GROUP_NAME --connection "app_config_umi" --user-identity client-id=<myIdentityClientId> subs-id=<myTestSubsId>
+```
+
+`user-identity` refers to the user-assigned managed identity (UMI) authentication type. Service Connector also supports the following authentication types: system-assigned managed identity (SMI), connection string (secret), and service principal.
+
+There are two ways you can find the `client-id`:
+
+- In the Azure CLI, enter `az identity show -n "myIdentity" -g $RESOURCE_GROUP_NAME --query 'clientId'`.
+- In the Azure portal, open the Managed Identity that was created earlier and in **Overview**, get the value under **Client ID**.
+
+### [Service principal](#tab/serviceprincipal)
+
+Create a connection between your web application and your App Configuration store, using a service principal. This connection is done through Service Connector.
+
+```azurecli
+az webapp connection create appconfig -g $RESOURCE_GROUP_NAME -n $APP_SERVICE_NAME --app-config $APP_CONFIG_NAME --tg $RESOURCE_GROUP_NAME --connection "app_config_sp" --service-principal client-id=<mySPClientId> secret=<mySPSecret>
+```
+
+`service-principal` refers to the service principal authentication type. Service Connector also supports the following authentication types: system-assigned managed identity (SMI), user-assigned managed identity (UMI), and connection string (secret).
+
+### [Connection string](#tab/connectionstring)
+
+Create a connection between your web application and your App Configuration store, using a connection string. This connection is done through Service Connector.
+
+```azurecli
+az webapp connection create appconfig -g $RESOURCE_GROUP_NAME -n $APP_SERVICE_NAME --app-config $APP_CONFIG_NAME --tg $RESOURCE_GROUP_NAME --connection "app_config_cs" --secret
+```
+
+`secret` refers to the connection-string authentication type. Service Connector also supports the following authentication types: system-assigned managed identity, user-assigned managed identity, and service principal.
+++
+## Validate the connection
+
+1. To check if the connection is working, navigate to your web app at `https://<myWebAppName>.azurewebsites.net/` from your browser. Once the website is up, you'll see it displaying "Hello. Your Azure WebApp is connected to App Configuration by ServiceConnector now".
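+
+   As a quick command-line check (a sketch; substitute your app name for `<myWebAppName>`):
+
+   ```bash
+   # Fetch the home page and confirm the Service Connector message appears.
+   curl -s "https://<myWebAppName>.azurewebsites.net/" | grep -i "ServiceConnector"
+   ```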
+
+## How it works
+
+The following sections describe what Service Connector manages behind the scenes for each authentication type.
+
+### [SMI](#tab/smi)
+
+Service Connector manages the connection configuration for you:
+
+- Set up the web app's `AZURE_APPCONFIGURATION_ENDPOINT` app setting so the application can retrieve the App Configuration endpoint. Access [sample code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/system-managed-identity/ServiceConnectorSample/Program.cs#L10).
+- Enable the web app's system-assigned managed identity and grant it the App Configuration Data Reader role so the application can authenticate to App Configuration using DefaultAzureCredential from Azure.Identity. Access [sample code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/system-managed-identity/ServiceConnectorSample/Program.cs#L13).
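+
+To verify what was configured, you can list the app settings that Service Connector created (a sketch, reusing the variables defined earlier):
+
+```azurecli
+# Look for the AZURE_APPCONFIGURATION_ENDPOINT setting on the web app.
+az webapp config appsettings list -g $RESOURCE_GROUP_NAME -n $APP_SERVICE_NAME --query "[?name=='AZURE_APPCONFIGURATION_ENDPOINT']"
+```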
+
+### [UMI](#tab/umi)
+
+Service Connector manages the connection configuration for you:
+
+- Set up the web app's `AZURE_APPCONFIGURATION_ENDPOINT` and `AZURE_APPCONFIGURATION_CLIENTID` app settings so the application can retrieve the App Configuration endpoint in [code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/user-assigned-managed-identity/ServiceConnectorSample/Program.cs#L10-L12).
+- Assign the user-assigned managed identity to the web app and grant it the App Configuration Data Reader role so the application can authenticate to App Configuration using DefaultAzureCredential from Azure.Identity. Access [sample code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/user-assigned-managed-identity/ServiceConnectorSample/Program.cs#L16).
+
+### [Service principal](#tab/serviceprincipal)
+
+Service Connector manages the connection configuration for you:
+
+- Set up the web app's `AZURE_APPCONFIGURATION_ENDPOINT` app setting so the application can retrieve the App Configuration endpoint. Access [sample code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/service-principal/ServiceConnectorSample/Program.cs#L10).
+- Save the service principal's credentials in the app settings `AZURE_APPCONFIGURATION_CLIENTID`, `AZURE_APPCONFIGURATION_TENANTID`, and `AZURE_APPCONFIGURATION_CLIENTSECRET`, and grant the service principal the App Configuration Data Reader role so the application can authenticate to App Configuration in [code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/service-principal/ServiceConnectorSample/Program.cs#L11-L18), using `ClientSecretCredential` from [Azure.Identity](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.Identity/1.0.0/api/index.html).
+
+### [Connection string](#tab/connectionstring)
+
+Service Connector manages the connection configuration for you:
+
+- Set up the web app's `AZURE_APPCONFIGURATION_CONNECTIONSTRING` app setting so the application can retrieve the App Configuration connection string. Access [sample code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/connection-string/Microsoft.Azure.ServiceConnector.Sample/Program.cs#L9-L12).
+- Activate the web app's system-assigned managed identity and grant it the App Configuration Data Reader role so the application can authenticate to App Configuration using DefaultAzureCredential from Azure.Identity. Access [sample code](https://github.com/Azure-Samples/serviceconnector-webapp-appconfig-dotnet/blob/main/connection-string/Microsoft.Azure.ServiceConnector.Sample/Program.cs#L43).
+++
+For more information, go to [Service Connector internals](concept-service-connector-internals.md).
+
+## Test (optional)
+
+Optionally, do the following tests:
+
+1. Update the value of the key `SampleApplication:Settings:Messages` in the App Configuration Store.
+
+ ```azurecli
+ az appconfig kv set -n <myAppConfigStoreName> --key SampleApplication:Settings:Messages --value hello --yes
+ ```
+
+1. Navigate to your Azure web app by going to `https://<myWebAppName>.azurewebsites.net/` and refresh the page. You'll see that the message is updated to "hello".
+
+## Cleanup
+
+Once you're done, delete the Azure resources you created.
+
+`az group delete -n <myResourceGroupName> --yes`
+
+## Next steps
+
+Follow the tutorials listed below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-fabric Service Fabric Cluster Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-capacity.md
The capacity needs of your cluster will be determined by your specific workload
**For production workloads, the recommended VM size (SKU) is [Standard D2_V2](../virtual-machines/dv2-dsv2-series.md) (or equivalent) with a minimum of 50 GB of local SSD, 2 cores, and 4 GiB of memory.** A minimum of 50 GB local SSD is recommended; however, some workloads (such as those running Windows containers) will require larger disks. When choosing other [VM sizes](../virtual-machines/sizes-general.md) for production workloads, keep in mind the following constraints:

-- Partial core VM sizes like Standard A0 are not supported.
+- Partial-core and single-core VM sizes like Standard A0 are not supported.
- *A-series* VM sizes are not supported for performance reasons.
- Low-priority VMs are not supported.
+- [B-series burstable SKUs](https://docs.microsoft.com/azure/virtual-machines/sizes-b-series-burstable) are not supported.
#### Primary node type
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
Enable replication. This procedure assumes that the primary Azure region is East
![Screenshot that displays the enable replication parameters.](./media/azure-to-azure-how-to-enable-replication/enabled-rwizard-3.PNG)
+5. After the VMs are enabled for replication, you can check the status of VM health under **Replicated items**. The time taken for initial replication depends on various factors, such as the disk size and the used storage on the disks. Data transfer happens at ~23% of the disk throughput. Initial replication creates a snapshot of the disk and transfers that snapshot.
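+
+   For a rough sense of scale (an illustrative calculation, not a guarantee): if a disk sustains 60 MB/s, ~23% of that is about 14 MB/s, so a snapshot with 200 GB of used storage would take roughly four hours to transfer.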
### Enable replication for added disks

If you add disks to an Azure VM for which replication is enabled, the following occurs:
site-recovery Vmware Azure Deploy Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-deploy-configuration-server.md
If you want to add an additional NIC to the configuration server, add it before
7. The tool performs some configuration tasks, and then reboots. 8. Sign in to the machine again. The configuration server management wizard starts automatically in a few seconds.
+### Verify connectivity
+Make sure the machine can access these URLs based on your environment:
++
+IP address-based firewall rules should allow communication to all of the Azure URLs listed above over the HTTPS (443) port. To simplify and limit the IP ranges you allow, we recommend that you use URL-based filtering.
+
+- **Commercial IPs** - Allow the [Azure Datacenter IP Ranges](https://www.microsoft.com/download/confirmation.aspx?id=41653), and the HTTPS (443) port. Allow IP address ranges for the Azure region of your subscription to support the AAD, Backup, Replication, and Storage URLs.
+- **Government IPs** - Allow the [Azure Government Datacenter IP Ranges](https://www.microsoft.com/en-us/download/details.aspx?id=57063), and the HTTPS (443) port for all USGov Regions (Virginia, Texas, Arizona, and Iowa) to support AAD, Backup, Replication, and Storage URLs.
### Configure settings

1. In the configuration server management wizard, select **Setup connectivity**. From the drop-down boxes, first select the NIC that the in-built process server uses for discovery and push installation of mobility service on source machines. Then select the NIC that the configuration server uses for connectivity with Azure. Select **Save**. You can't change this setting after it's configured. Don't change the IP address of a configuration server. Ensure that the IP assigned to the configuration server is a static IP and not a DHCP IP.
site-recovery Vmware Azure Set Up Replication Tutorial Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-replication-tutorial-preview.md
Follow these steps to enable replication:
- Cache storage account: Now, choose the cache storage account which Azure Site Recovery uses for staging purposes - caching and storing logs before writing the changes on to the managed disks.
- By default, a new LRS v1 type storage account will be created by Azure Site Recovery for the first enable replication operation in a vault. For the next operations, same cache storage account will be re-used.
+ By default, a new LRS v1 type storage account will be created by Azure Site Recovery for the first enable replication operation in a vault. For the next operations, the same cache storage account will be re-used.
- Managed disks

  By default, Standard HDD managed disks are created in Azure. You can customize the type of managed disks by selecting **Customize**. Choose the type of disk based on the business requirement. Ensure the [appropriate disk type is chosen](../virtual-machines/disks-types.md#disk-type-comparison) based on the IOPS of the source machine disks. For pricing information, see the managed disk pricing document [here](https://azure.microsoft.com/pricing/details/managed-disks/).
Follow these steps to enable replication:
- Select **OK** to save the policy.
- The policy will be created and can used for protecting the chosen source machines.
+ The policy will be created and can be used for protecting the chosen source machines.
11. After choosing the replication policy, select **Next**. Review the Source and Target properties. Select **Enable Replication** to initiate the operation.
spring-cloud How To Built In Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-built-in-persistent-storage.md
**This article applies to:** ✔️ Java ✔️ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
Azure Spring Cloud provides two types of built-in storage for your application: persistent and temporary.
storage Encryption Customer Provided Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-customer-provided-keys.md
Title: Provide an encryption key on a request to Blob storage
-description: Clients making requests against Azure Blob storage have the option to provide an encryption key on a per-request basis. Including the encryption key on the request provides granular control over encryption settings for Blob storage operations.
+description: Clients making requests against Azure Blob storage can provide an encryption key on a per-request basis. Including the encryption key on the request provides granular control over encryption settings for Blob storage operations.
Previously updated : 12/14/2020 Last updated : 05/09/2022
# Provide an encryption key on a request to Blob storage
-Clients making requests against Azure Blob storage have the option to provide an AES-256 encryption key on a per-request basis. Including the encryption key on the request provides granular control over encryption settings for Blob storage operations. Customer-provided keys can be stored in Azure Key Vault or in another key store.
+Clients making requests against Azure Blob storage can provide an AES-256 encryption key to encrypt that blob on a write operation. Subsequent requests to read or write to the blob must include the same key. Including the encryption key on the request provides granular control over encryption settings for Blob storage operations. Customer-provided keys can be stored in Azure Key Vault or in another key store.
## Encrypting read and write operations

When a client application provides an encryption key on the request, Azure Storage performs encryption and decryption transparently while reading and writing blob data. Azure Storage writes an SHA-256 hash of the encryption key alongside the blob's contents. The hash is used to verify that all subsequent operations against the blob use the same encryption key.
-Azure Storage does not store or manage the encryption key that the client sends with the request. The key is securely discarded as soon as the encryption or decryption process is complete.
+Azure Storage doesn't store or manage the encryption key that the client sends with the request. The key is securely discarded as soon as the encryption or decryption process is complete.
-When a client creates or updates a blob using a customer-provided key on the request, then subsequent read and write requests for that blob must also provide the key. If the key is not provided on a request for a blob that has already been encrypted with a customer-provided key, then the request fails with error code 409 (Conflict).
+When a client creates or updates a blob using a customer-provided key on the request, then subsequent read and write requests for that blob must also provide the key. If the key isn't provided on a request for a blob that has already been encrypted with a customer-provided key, then the request fails with error code 409 (Conflict).
If the client application sends an encryption key on the request, and the storage account is also encrypted using a Microsoft-managed key or a customer-managed key, then Azure Storage uses the key provided on the request for encryption and decryption. To send the encryption key as part of the request, a client must establish a secure connection to Azure Storage using HTTPS.
-Each blob snapshot can have its own encryption key.
+Each blob snapshot or blob version can have its own encryption key.
+
+Object replication isn't supported for blobs in the source account that are encrypted with a customer-provided key.
## Request headers for specifying customer-provided keys
The following Blob storage operations support sending customer-provided encrypti
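The headers follow a consistent pattern: `x-ms-encryption-key` carries the base64-encoded AES-256 key, `x-ms-encryption-key-sha256` carries the base64-encoded SHA-256 hash of the key, and `x-ms-encryption-algorithm` specifies `AES256`. A hedged sketch of a Put Blob call that supplies them; the account, container, blob, and SAS token are placeholders:

```bash
# Generate a random AES-256 key locally, then send it and its hash on a Put Blob request.
# <account>, <container>, <blob>, and <sas-token> are placeholders.
KEY=$(openssl rand -base64 32)
KEY_SHA256=$(echo -n "$KEY" | base64 --decode | openssl dgst -sha256 -binary | base64)

curl -X PUT "https://<account>.blob.core.windows.net/<container>/<blob>?<sas-token>" \
  -H "x-ms-blob-type: BlockBlob" \
  -H "x-ms-encryption-key: $KEY" \
  -H "x-ms-encryption-key-sha256: $KEY_SHA256" \
  -H "x-ms-encryption-algorithm: AES256" \
  --data-binary "example content"
```

Any subsequent read or write against this blob must supply the same three headers.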
## Rotate customer-provided keys
-To rotate an encryption key that was used to encrypt a blob, download the blob and then re-upload it with the new encryption key.
+To rotate an encryption key that was used to encrypt a blob, download the blob and then reupload it with the new encryption key.
> [!IMPORTANT]
> The Azure portal cannot be used to read from or write to a container or blob that is encrypted with a key provided on the request.
To rotate an encryption key that was used to encrypt a blob, download the blob a
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
+This table shows how this feature is supported in your account and the effect on that support when you enable certain capabilities.
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
|--|--|--|--|--|
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Storage lifecycle management policies to create automated
Previously updated : 05/05/2022 Last updated : 05/09/2022
Data sets have unique lifecycles. Early in the lifecycle, people access some dat
With the lifecycle management policy, you can:

-- Transition blobs from cool to hot immediately when they are accessed, to optimize for performance.
-- Transition current versions of a blob, previous versions of a blob, or blob snapshots to a cooler storage tier if these objects have not been accessed or modified for a period of time, to optimize for cost. In this scenario, the lifecycle management policy can move objects from hot to cool, from hot to archive, or from cool to archive.
+- Transition blobs from cool to hot immediately when they're accessed, to optimize for performance.
+- Transition current versions of a blob, previous versions of a blob, or blob snapshots to a cooler storage tier if these objects haven't been accessed or modified for a period of time, to optimize for cost. In this scenario, the lifecycle management policy can move objects from hot to cool, from hot to archive, or from cool to archive.
- Delete current versions of a blob, previous versions of a blob, or blob snapshots at the end of their lifecycles.
- Define rules to be run once per day at the storage account level.
- Apply rules to containers or to a subset of blobs, using name prefixes or [blob index tags](storage-manage-find-blobs.md) as filters.

Consider a scenario where data is frequently accessed during the early stages of the lifecycle, but only occasionally after two weeks. Beyond the first month, the data set is rarely accessed. In this scenario, hot storage is best during the early stages. Cool storage is most appropriate for occasional access. Archive storage is the best tier option after the data ages over a month. By moving data to the appropriate storage tier based on its age with lifecycle management policy rules, you can design the least expensive solution for your needs.
-Lifecycle management policies are supported for block blobs and append blobs in general-purpose v2, premium block blob, and Blob Storage accounts. Lifecycle management does not affect system containers such as the *$logs* or *$web* containers.
+Lifecycle management policies are supported for block blobs and append blobs in general-purpose v2, premium block blob, and Blob Storage accounts. Lifecycle management doesn't affect system containers such as the `$logs` or `$web` containers.
> [!IMPORTANT]
> If a data set needs to be readable, do not set a policy to move blobs to the archive tier. Blobs in the archive tier cannot be read unless they are first rehydrated, a process which may be time-consuming and expensive. For more information, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md).
Each rule within the policy has several parameters, described in the following t
| Parameter name | Parameter type | Notes | Required |
|--|--|--|--|
| `name` | String | A rule name can include up to 256 alphanumeric characters. Rule name is case-sensitive. It must be unique within a policy. | True |
-| `enabled` | Boolean | An optional boolean to allow a rule to be temporary disabled. Default value is true if it's not set. | False |
+| `enabled` | Boolean | An optional boolean to allow a rule to be temporarily disabled. Default value is true if it's not set. | False |
| `type` | An enum value | The current valid type is `Lifecycle`. | True |
| `definition` | An object that defines the lifecycle rule | Each definition is made up of a filter set and an action set. | True |
Filters include:
| Filter name | Filter type | Notes | Is Required |
|-|-|-|-|
-| blobTypes | An array of predefined enum values. | The current release supports `blockBlob` and `appendBlob`. Only delete is supported for `appendBlob`, set tier is not supported. | Yes |
+| blobTypes | An array of predefined enum values. | The current release supports `blockBlob` and `appendBlob`. Only delete is supported for `appendBlob`, set tier isn't supported. | Yes |
| prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 case-sensitive prefixes. A prefix string must start with a container name. For example, if you want to match all blobs under `https://myaccount.blob.core.windows.net/sample-container/blob1/...` for a rule, the prefixMatch is `sample-container/blob1`. | If you don't define prefixMatch, the rule applies to all blobs within the storage account. | No |
| blobIndexMatch | An array of dictionary values consisting of blob index tag key and value conditions to be matched. Each rule can define up to 10 blob index tag conditions. For example, if you want to match all blobs with `Project = Contoso` under `https://myaccount.blob.core.windows.net/` for a rule, the blobIndexMatch is `{"name": "Project","op": "==","value": "Contoso"}`. | If you don't define blobIndexMatch, the rule applies to all blobs within the storage account. | No |
Lifecycle management supports tiering and deletion of current versions, previous
| tierToArchive | Supported for `blockBlob` | Supported | Supported |
| delete<sup>1</sup> | Supported for `blockBlob` and `appendBlob` | Supported | Supported |
-<sup>1</sup> When applied to an account with a hierarchical namespace enabled, a delete action removes empty directories. If the directory is not empty, then the delete action removes objects that meet the policy conditions within the first 24-hour cycle. If that action results in an empty directory that also meets the policy conditions, then that directory will be removed within the next 24-hour cycle, and so on.
+<sup>1</sup> When applied to an account with a hierarchical namespace enabled, a delete action removes empty directories. If the directory isn't empty, then the delete action removes objects that meet the policy conditions within the first 24-hour cycle. If that action results in an empty directory that also meets the policy conditions, then that directory will be removed within the next 24-hour cycle, and so on.
> [!NOTE]
> If you define more than one action on the same blob, lifecycle management applies the least expensive action to the blob. For example, action `delete` is cheaper than action `tierToArchive`. Action `tierToArchive` is cheaper than action `tierToCool`.
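Once a policy's rules are assembled into a JSON document, you can apply the policy with the Azure CLI. A minimal sketch, assuming the rules are saved locally as `policy.json`; the account and resource group names are placeholders:

```azurecli
# Create or replace the account's lifecycle management policy from a local JSON file.
az storage account management-policy create \
  --account-name <storage-account> \
  --resource-group <resource-group> \
  --policy @policy.json
```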
This example shows how to transition block blobs prefixed with `sample-container
You can enable last access time tracking to keep a record of when your blob is last read or written and as a filter to manage tiering and retention of your blob data. To learn how to enable last access time tracking, see [Optionally enable access time tracking](lifecycle-management-policy-configure.md#optionally-enable-access-time-tracking).
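As a hedged CLI sketch of turning the setting on (the account and resource group names are placeholders):

```azurecli
# Enable last access time tracking on the storage account's blob service.
az storage account blob-service-properties update \
  --account-name <storage-account> \
  --resource-group <resource-group> \
  --enable-last-access-tracking true
```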
-When last access time tracking is enabled, the blob property called `LastAccessTime` is updated when a blob is read or written. A [Get Blob](/rest/api/storageservices/get-blob) operation is considered an access operation. [Get Blob Properties](/rest/api/storageservices/get-blob-properties), [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata), and [Get Blob Tags](/rest/api/storageservices/get-blob-tags) are not access operations, and therefore don't update the last access time.
+When last access time tracking is enabled, the blob property called `LastAccessTime` is updated when a blob is read or written. A [Get Blob](/rest/api/storageservices/get-blob) operation is considered an access operation. [Get Blob Properties](/rest/api/storageservices/get-blob-properties), [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata), and [Get Blob Tags](/rest/api/storageservices/get-blob-tags) aren't access operations, and therefore don't update the last access time.
-To minimize the impact on read access latency, only the first read of the last 24 hours updates the last access time. Subsequent reads in the same 24-hour period do not update the last access time. If a blob is modified between reads, the last access time is the more recent of the two values.
+To minimize the effect on read access latency, only the first read of the last 24 hours updates the last access time. Subsequent reads in the same 24-hour period don't update the last access time. If a blob is modified between reads, the last access time is the more recent of the two values.
-In the following example, blobs are moved to cool storage if they haven't been accessed for 30 days. The `enableAutoTierToHotFromCool` property is a Boolean value that indicates whether a blob should automatically be tiered from cool back to hot if it is accessed again after being tiered to cool.
+In the following example, blobs are moved to cool storage if they haven't been accessed for 30 days. The `enableAutoTierToHotFromCool` property is a Boolean value that indicates whether a blob should automatically be tiered from cool back to hot if it's accessed again after being tiered to cool.
```json {
In the following example, blobs are moved to cool storage if they haven't been a
### Archive data after ingest
-Some data stays idle in the cloud and is rarely, if ever, accessed. The following lifecycle policy is configured to archive data shortly after it is ingested. This example transitions block blobs in a container named `archivecontainer` into an archive tier. The transition is accomplished by acting on blobs 0 days after last modified time:
+Some data stays idle in the cloud and is rarely, if ever, accessed. The following lifecycle policy is configured to archive data shortly after it's ingested. This example transitions block blobs in a container named `archivecontainer` into an archive tier. The transition is accomplished by acting on blobs 0 days after last modified time:
```json {
Some data stays idle in the cloud and is rarely, if ever, accessed. The followin
### Expire data based on age
-Some data is expected to expire days or months after creation. You can configure a lifecycle management policy to expire data by deletion based on data age. The following example shows a policy that deletes all block blobs that have not been modified in the last 365 days.
+Some data is expected to expire days or months after creation. You can configure a lifecycle management policy to expire data by deletion based on data age. The following example shows a policy that deletes all block blobs that haven't been modified in the last 365 days.
```json {
Some data should only be expired if explicitly marked for deletion. You can conf
### Manage previous versions
-For data that is modified and accessed regularly throughout its lifetime, you can enable blob storage versioning to automatically maintain previous versions of an object. You can create a policy to tier or delete previous versions. The version age is determined by evaluating the version creation time. This policy rule tiers previous versions within container `activedata` that are 90 days or older after version creation to cool tier, and deletes previous versions that are 365 days or older.
+For data that is modified and accessed regularly throughout its lifetime, you can enable blob storage versioning to automatically maintain previous versions of an object. You can create a policy to tier or delete previous versions. The version age is determined by evaluating the version creation time. This policy rule moves previous versions within container `activedata` that are 90 days or older after version creation to the cool tier, and deletes previous versions that are 365 days or older.
```json {
For data that is modified and accessed regularly throughout its lifetime, you ca
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
+This table shows how this feature is supported in your account and the effect on support when you enable certain capabilities.
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
|--|--|--|--|--|
This table shows how this feature is supported in your account and the impact on
The lifecycle management feature is available in all Azure regions.
-Lifecycle management policies are free of charge. Customers are billed for standard operation costs for the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) API calls. Delete operations are free.
+Lifecycle management policies are free of charge. Customers are billed for standard operation costs for the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) API calls. Delete operations are free. However, other Azure services and utilities such as [Microsoft Defender for Storage](../../defender-for-cloud/defender-for-storage-introduction.md) may charge for operations that are managed through a lifecycle policy.
Each update to a blob's last access time is billed under the [other operations](https://azure.microsoft.com/pricing/details/storage/blobs/) category.
The platform runs the lifecycle policy once a day. Once you configure a policy,
### If I update an existing policy, how long does it take for the actions to run?
-The updated policy takes up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for the actions to run. Therefore, the policy actions may take up to 48 hours to complete. If the update is to disable or delete a rule, and enableAutoTierToHotFromCool was used, auto-tiering to Hot tier will still happen. For example, set a rule including enableAutoTierToHotFromCool based on last access. If the rule is disabled/deleted, and a blob is currently in cool and then accessed, it will move back to Hot as that is applied on access outside of lifecycle management. The blob will not then move from Hot to Cool given the lifecycle management rule is disabled/deleted. The only way to prevent autoTierToHotFromCool is to turn off last access time tracking.
+The updated policy takes up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for the actions to run. Therefore, the policy actions may take up to 48 hours to complete. If you disable or delete a rule that used enableAutoTierToHotFromCool, auto-tiering to the hot tier still happens. For example, suppose a rule with enableAutoTierToHotFromCool is based on last access time and is then disabled or deleted. If a blob that is currently in the cool tier is accessed, it moves back to the hot tier, because auto-tiering on access is applied outside of lifecycle management. The blob then won't move from hot back to cool, because the lifecycle management rule is disabled or deleted. The only way to prevent autoTierToHotFromCool behavior is to turn off last access time tracking.
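If you do need to turn off last access time tracking, it can be done with the Az.Storage module. A minimal sketch, assuming placeholder resource names:

```powershell
# Sketch: disable last access time tracking for the account, which stops
# autoTierToHotFromCool behavior entirely (placeholder names).
Disable-AzStorageBlobLastAccessTimeTracking -ResourceGroupName "myresourcegroup" `
    -StorageAccountName "mystorageaccount"
```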
### I manually rehydrated an archived blob. How do I prevent it from being moved back to the Archive tier temporarily?

When a blob is moved from one access tier to another, its last modification time doesn't change. If you manually rehydrate an archived blob to hot tier, it would be moved back to archive tier by the lifecycle management engine. Disable the rule that affects this blob temporarily to prevent it from being archived again. Re-enable the rule when the blob can be safely moved back to archive tier. You may also copy the blob to another location if it needs to stay in hot or cool tier permanently.
-### The blob prefix match string did not apply the policy to the expected blobs
+### The blob prefix match string didn't apply the policy to the expected blobs
The blob prefix match field of a policy is a full or partial blob path, which is used to match the blobs you want the policy actions to apply to. The path must start with the container name. If no prefix match is specified, then the policy will apply to all the blobs in the storage account. The format of the prefix match string is `[container name]/[blob name]`.
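For example, a filter scoped to blobs under *container1/sub1/* might be built like this with Az PowerShell; the prefix shown is only an illustration:

```powershell
# Sketch: a lifecycle rule filter that matches only block blobs
# whose names begin with container1/sub1/ (illustrative prefix).
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "container1/sub1/" `
    -BlobType blockBlob
```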
Keep in mind the following points about the prefix match string:
- A prefix match string like *container1/* applies to all blobs in the container named *container1*. A prefix match string of *container1*, without the trailing forward slash character (/), applies to all blobs in all containers where the container name begins with the string *container1*. The prefix will match containers named *container11*, *container1234*, *container1ab*, and so on.
- A prefix match string of *container1/sub1/* applies to all blobs in the container named *container1* that begin with the string *sub1/*. For example, the prefix will match blobs named *container1/sub1/test.txt* or *container1/sub1/sub2/test.txt*.
-- The asterisk character `*` is a valid character in a blob name. If the asterisk character is used in a prefix, then the prefix will match blobs with an asterisk in their names. The asterisk does not function as a wildcard character.
-- The question mark character `?` is a valid character in a blob name. If the question mark character is used in a prefix, then the prefix will match blobs with a question mark in their names. The question mark does not function as a wildcard character.
+- The asterisk character `*` is a valid character in a blob name. If the asterisk character is used in a prefix, then the prefix will match blobs with an asterisk in their names. The asterisk doesn't function as a wildcard character.
+- The question mark character `?` is a valid character in a blob name. If the question mark character is used in a prefix, then the prefix will match blobs with a question mark in their names. The question mark doesn't function as a wildcard character.
- The prefix match considers only positive (=) logical comparisons. Negative (!=) logical comparisons are ignored.

### Is there a way to identify the time at which the policy will be executing?
-Unfortunately, there is no way to track the time at which the policy will be executing, as it is a background scheduling process. However, the platform will run the policy once per day.
+Unfortunately, there's no way to track the time at which the policy will be executing, as it's a background scheduling process. However, the platform will run the policy once per day.
## Next steps
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Previously updated : 05/02/2022 Last updated : 05/09/2022
The following diagram shows how object replication replicates block blobs from a
To learn how to configure object replication, see [Configure object replication](object-replication-configure.md).
-## Prerequisites for object replication
+## Prerequisites and caveats for object replication
Object replication requires that the following Azure Storage features are also enabled:
- [Change feed](storage-blob-change-feed.md): Must be enabled on the source account. To learn how to enable change feed, see [Enable and disable the change feed](storage-blob-change-feed.md#enable-and-disable-the-change-feed).
- [Blob versioning](versioning-overview.md): Must be enabled on both the source and destination accounts. To learn how to enable versioning, see [Enable and manage blob versioning](versioning-enable.md).
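Both prerequisites can be enabled in one call with Az PowerShell. A minimal sketch, with placeholder resource names:

```powershell
# Sketch: enable change feed and blob versioning on the account
# (placeholder names; run against the source and destination as appropriate).
Update-AzStorageBlobServiceProperty -ResourceGroupName "myresourcegroup" `
    -StorageAccountName "mystorageaccount" `
    -EnableChangeFeed $true `
    -IsVersioningEnabled $true
```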
-Enabling change feed and blob versioning may incur additional costs. For more details, refer to the [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage/).
+Enabling change feed and blob versioning may incur additional costs. For more information, see the [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage/).
-Object replication is supported for general-purpose v2 storage accounts and premium block blob accounts. Both the source and destination accounts must be either general-purpose v2 or premium block blob accounts. Object replication supports block blobs only; append blobs and page blobs are not supported.
+Object replication is supported for general-purpose v2 storage accounts and premium block blob accounts. Both the source and destination accounts must be either general-purpose v2 or premium block blob accounts. Object replication supports block blobs only; append blobs and page blobs aren't supported.
-Customer-managed failover is not supported for either the source or the destination account in an object replication policy.
+Object replication is supported for accounts that are encrypted with customer-managed keys. For more information about customer-managed keys, see [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md).
+
+Object replication isn't supported for blobs in the source account that are encrypted with a customer-provided key. For more information about customer-provided keys, see [Provide an encryption key on a request to Blob storage](encryption-customer-provided-keys.md).
+
+Customer-managed failover isn't supported for either the source or the destination account in an object replication policy.
## How object replication works
Object replication asynchronously copies block blobs in a container according to
Object replication requires that blob versioning is enabled on both the source and destination accounts. When a replicated blob in the source account is modified, a new version of the blob is created in the source account that reflects the previous state of the blob, before modification. The current version in the source account reflects the most recent updates. Both the current version and any previous versions are replicated to the destination account. For more information about how write operations affect blob versions, see [Versioning on write operations](versioning-overview.md#versioning-on-write-operations).
-When a blob in the source account is deleted, the current version of the blob becomes a previous version, and there is no longer a current version. All existing previous versions of the blob are preserved. This state is replicated to the destination account. For more information about how delete operations affect blob versions, see [Versioning on delete operations](versioning-overview.md#versioning-on-delete-operations).
+When a blob in the source account is deleted, the current version of the blob becomes a previous version, and there's no longer a current version. All existing previous versions of the blob are preserved. This state is replicated to the destination account. For more information about how delete operations affect blob versions, see [Versioning on delete operations](versioning-overview.md#versioning-on-delete-operations).
### Snapshots
-Object replication does not support blob snapshots. Any snapshots on a blob in the source account are not replicated to the destination account.
+Object replication doesn't support blob snapshots. Any snapshots on a blob in the source account aren't replicated to the destination account.
### Blob tiering
When you create a replication rule, by default only new block blobs that are sub
You can also specify one or more filters as part of a replication rule to filter block blobs by prefix. When you specify a prefix, only blobs matching that prefix in the source container will be copied to the destination container.
-The source and destination containers must both exist before you can specify them in a rule. After you create the replication policy, write operations to the destination container are not permitted. Any attempts to write to the destination container fail with error code 409 (Conflict). To write to a destination container for which a replication rule is configured, you must either delete the rule that is configured for that container, or remove the replication policy. Read and delete operations to the destination container are permitted when the replication policy is active.
+The source and destination containers must both exist before you can specify them in a rule. After you create the replication policy, write operations to the destination container aren't permitted. Any attempts to write to the destination container fail with error code 409 (Conflict). To write to a destination container for which a replication rule is configured, you must either delete the rule that is configured for that container, or remove the replication policy. Read and delete operations to the destination container are permitted when the replication policy is active.
You can call the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation on a blob in the destination container to move it to the archive tier. For more information about the archive tier, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md#archive-access-tier).
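For instance, a blob in the destination container could be archived with a sketch like the following; the account, container, and blob names are placeholders:

```powershell
# Sketch: move a blob in the destination container to the archive tier.
# Account, container, and blob names are placeholders.
$ctx = New-AzStorageContext -StorageAccountName "destaccount" -UseConnectedAccount
$blob = Get-AzStorageBlob -Container "destcontainer" -Blob "myblob.txt" -Context $ctx
$blob.ICloudBlob.SetStandardBlobTier("Archive")
```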
The following table describes what happens when you create a replication policy
| Storage account identifier in policy definition | Cross-tenant replication allowed | Cross-tenant replication disallowed |
|--|--|--|
-| Full resource ID | Same-tenant policies can be created.<br /><br /> Cross-tenant policies can be created. | Same-tenant policies can be created.<br /><br /> Cross-tenant policies cannot be created. |
-| Account name only | Same-tenant policies can be created.<br /><br /> Cross-tenant policies can be created. | Neither same-tenant nor cross-tenant policies can be created. An error occurs, because Azure Storage cannot verify that source and destination accounts are in the same tenant. The error indicates that you must specify the full resource ID for the **sourceAccount** and **destinationAccount** entries in the policy definition file. |
+| Full resource ID | Same-tenant policies can be created.<br /><br /> Cross-tenant policies can be created. | Same-tenant policies can be created.<br /><br /> Cross-tenant policies can't be created. |
+| Account name only | Same-tenant policies can be created.<br /><br /> Cross-tenant policies can be created. | Neither same-tenant nor cross-tenant policies can be created. An error occurs, because Azure Storage can't verify that source and destination accounts are in the same tenant. The error indicates that you must specify the full resource ID for the **sourceAccount** and **destinationAccount** entries in the policy definition file. |
### Specify the policy and rule IDs

The following table summarizes which values to use for the **policyId** and **ruleId** entries in the policy definition file in each scenario.
-| When you are creating the policy definition file for this account... | Set the policy ID to this value | Set rule IDs to this value |
+| When you're creating the policy definition file for this account... | Set the policy ID to this value | Set rule IDs to this value |
|-|-|-|
| Destination account | The string value *default*. Azure Storage will create the policy ID value for you. | An empty string. Azure Storage will create the rule ID values for you. |
| Source account | The value of the policy ID returned when you download the policy definition file for the destination account. | The values of the rule IDs returned when you download the policy definition file for the destination account. |

## Prevent replication across Azure AD tenants
-An Azure Active Directory (Azure AD) tenant is a dedicated instance of Azure AD that represents an organization for the purpose of identity and access management. Each Azure subscription has a trust relationship with a single Azure AD tenant. All resources in a subscription, including storage accounts, are associated with the same Azure AD tenant. For more information, see [What is Azure Active Directory?](../../active-directory/fundamentals/active-directory-whatis.md)
+An Azure Active Directory (Azure AD) tenant is a dedicated instance of Azure AD that represents an organization for identity and access management. Each Azure subscription has a trust relationship with a single Azure AD tenant. All resources in a subscription, including storage accounts, are associated with the same Azure AD tenant. For more information, see [What is Azure Active Directory?](../../active-directory/fundamentals/active-directory-whatis.md)
By default, a user with appropriate permissions can configure object replication with a source storage account that is in one Azure AD tenant and a destination account that is in a different tenant. If your security policies require that you restrict object replication to storage accounts that reside within the same tenant only, you can disallow replication across tenants by setting a security property, the **AllowCrossTenantReplication** property (preview). When you disallow cross-tenant object replication for a storage account, then for any object replication policy that is configured with that storage account as the source or destination account, Azure Storage requires that both the source and destination accounts reside within the same Azure AD tenant. For more information about disallowing cross-tenant object replication, see [Prevent object replication across Azure Active Directory tenants](object-replication-prevent-cross-tenant-policies.md).
-To disallow cross-tenant object replication for a storage account, set the **AllowCrossTenantReplication** property to *false*. If the storage account does not currently participate in any cross-tenant object replication policies, then setting the **AllowCrossTenantReplication** property to *false* prevents future configuration of cross-tenant object replication policies with this storage account as the source or destination.
+To disallow cross-tenant object replication for a storage account, set the **AllowCrossTenantReplication** property to *false*. If the storage account doesn't currently participate in any cross-tenant object replication policies, then setting the **AllowCrossTenantReplication** property to *false* prevents future configuration of cross-tenant object replication policies with this storage account as the source or destination.
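A minimal Az PowerShell sketch of setting and reading the property, with placeholder names:

```powershell
# Sketch: disallow cross-tenant object replication (placeholder names).
Set-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" `
    -AllowCrossTenantReplication $false

# Read the property back; $null is equivalent to $true here.
(Get-AzStorageAccount -ResourceGroupName "myresourcegroup" `
    -Name "mystorageaccount").AllowCrossTenantReplication
```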
-If the storage account currently does participates in one or more cross-tenant object replication policies, then setting the **AllowCrossTenantReplication** property to *false* is not permitted. You must delete the existing cross-tenant policies before you can disallow cross-tenant replication.
+If the storage account currently participates in one or more cross-tenant object replication policies, then setting the **AllowCrossTenantReplication** property to *false* isn't permitted. You must delete the existing cross-tenant policies before you can disallow cross-tenant replication.
-By default, the **AllowCrossTenantReplication** property is not set for a storage account, and its value is *null*, which is equivalent to *true*. When the value of the **AllowCrossTenantReplication** property for a storage account is *null* or *true*, then authorized users can configure cross-tenant object replication policies with this account as the source or destination. For more information about how to configure cross-tenant policies, see [Configure object replication for block blobs](object-replication-configure.md).
+By default, the **AllowCrossTenantReplication** property isn't set for a storage account, and its value is *null*, which is equivalent to *true*. When the value of the **AllowCrossTenantReplication** property for a storage account is *null* or *true*, then authorized users can configure cross-tenant object replication policies with this account as the source or destination. For more information about how to configure cross-tenant policies, see [Configure object replication for block blobs](object-replication-configure.md).
You can use Azure Policy to audit a set of storage accounts to ensure that the **AllowCrossTenantReplication** property is set to prevent cross-tenant object replication. You can also use Azure Policy to enforce governance for a set of storage accounts. For example, you can create a policy with the deny effect to prevent a user from creating a storage account where the **AllowCrossTenantReplication** property is set to *true*, or from modifying an existing storage account to change the property value to *true*.
If the replication status for a blob in the source account indicates failure, th
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
+This table shows how this feature is supported in your account and the effect on support when you enable certain capabilities.
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
|--|--|--|--|--|
storage Sas Service Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create.md
Previously updated : 03/23/2021 Last updated : 05/10/2022
storage Storage Blob Customer Provided Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-customer-provided-key.md
Previously updated : 12/18/2020 Last updated : 05/09/2022
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
Previously updated : 03/30/2021 Last updated : 05/09/2022
Use an Azure Resource Manager template to deploy an Azure Storage account with M
Use an Azure Policy to enable Microsoft Defender for Cloud across storage accounts under a specific subscription or resource group.

1. Launch the Azure **Policy - Definitions** page.
-1. Search for the **Deploy Microsoft Defender for Storage accounts** policy.
+1. Search for the **Azure Defender for Storage should be enabled** policy, then select the policy to view the policy definition page.
- :::image type="content" source="media/azure-defender-storage-configure/storage-atp-policy-definitions.png" alt-text="Apply policy to enable Microsoft Defender for Storage accounts":::
+ :::image type="content" source="media/azure-defender-storage-configure/storage-defender-policy-definitions.png" alt-text="Locate built-in policy to enable Microsoft Defender for Storage for your storage accounts." lightbox="media/azure-defender-storage-configure/storage-defender-policy-definitions.png":::
-1. Select an Azure subscription or resource group.
+1. Select the **Assign** button for the built-in policy, then specify an Azure subscription. You can also optionally specify a resource group to further scope the policy assignment.
- :::image type="content" source="media/azure-defender-storage-configure/storage-atp-policy2.png" alt-text="Select subscription or resource group for scope of policy ":::
+ :::image type="content" source="media/azure-defender-storage-configure/storage-defender-policy-assignment.png" alt-text="Select subscription and optionally resource group to scope the policy assignment." lightbox="media/azure-defender-storage-configure/storage-defender-policy-assignment.png":::
-1. Assign the policy.
-
- :::image type="content" source="media/azure-defender-storage-configure/storage-atp-policy1.png" alt-text="Assign policy to enable Microsoft Defender for Storage":::
+1. Select **Review + create** to review the policy definition and then create it with the specified scope.
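The same assignment can be scripted. A sketch with Az PowerShell, assuming a placeholder subscription ID:

```powershell
# Sketch: assign the built-in policy at subscription scope
# (the subscription ID below is a placeholder).
$definition = Get-AzPolicyDefinition | Where-Object {
    $_.Properties.DisplayName -eq 'Azure Defender for Storage should be enabled'
}
New-AzPolicyAssignment -Name 'defender-for-storage' `
    -PolicyDefinition $definition `
    -Scope '/subscriptions/00000000-0000-0000-0000-000000000000'
```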
### [PowerShell](#tab/azure-powershell)
storage Scalability Targets Standard Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-standard-account.md
Previously updated : 10/04/2021 Last updated : 05/09/2022
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Previously updated : 04/21/2022 Last updated : 05/10/2022
# Azure Storage redundancy
-Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures.
+Azure Storage always stores multiple copies of your data so that it's protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures.
When deciding which redundancy option is best for your scenario, consider the tradeoffs between lower costs and higher availability. The factors that help determine which redundancy option you should choose include:
The redundancy setting for a storage account is shared for all storage services
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers two options for how your data is replicated in the primary region:
-- **Locally redundant storage (LRS)** copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option, but is not recommended for applications requiring high availability or durability.
+- **Locally redundant storage (LRS)** copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option, but isn't recommended for applications requiring high availability or durability.
- **Zone-redundant storage (ZRS)** copies your data synchronously across three Azure availability zones in the primary region. For applications requiring high availability, Microsoft recommends using ZRS in the primary region, and also replicating to a secondary region.

> [!NOTE]
> Microsoft recommends using ZRS in the primary region for Azure Data Lake Storage Gen2 workloads.
-### Locally-redundant storage
+### Locally redundant storage
Locally redundant storage (LRS) replicates your storage account three times within a single data center in the primary region. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.
LRS is a good choice for the following scenarios:
- If your application stores data that can be easily reconstructed if data loss occurs, you may opt for LRS.
- If your application is restricted to replicating data only within a country or region due to data governance requirements, you may opt for LRS. In some cases, the paired regions across which the data is geo-replicated may be in another country or region. For more information on paired regions, see [Azure regions](https://azure.microsoft.com/regions/).
-- If your scenario is using Azure unmanaged disks. While it is possible to create a storage account for Azure unmanaged disks that uses GRS, it is not recommended due to potential issues with consistency over asynchronous geo-replication.
+- If your scenario is using Azure unmanaged disks, you may opt for LRS. While it's possible to create a storage account for Azure unmanaged disks that uses GRS, it isn't recommended due to potential issues with consistency over asynchronous geo-replication.
### Zone-redundant storage

Zone-redundant storage (ZRS) replicates your storage account synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year.
-With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS re-pointing. These updates may affect your application if you access data before the updates have completed. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
+With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS repointing. These updates may affect your application if you access data before the updates have completed. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
A write request to a storage account that is using ZRS happens synchronously. The write operation returns successfully only after the data is written to all replicas across the three availability zones.
-Microsoft recommends using ZRS in the primary region for scenarios that require high availability. ZRS is also recommended for restricting replication of data to within a country or region to meet data governance requirements.
+Microsoft recommends using ZRS in the primary region for scenarios that require high availability. ZRS is also recommended for restricting replication of data to a particular country or region to meet data governance requirements.
Microsoft recommends using ZRS for Azure Files workloads. If a zone becomes unavailable, no remounting of Azure file shares from the connected clients is required.
The following diagram shows how your data is replicated across availability zone
ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself may not protect your data against a regional disaster where multiple zones are permanently affected. For protection against regional disasters, Microsoft recommends using [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS), which uses ZRS in the primary region and also geo-replicates your data to a secondary region.
-The Archive tier for Blob Storage is not currently supported for ZRS accounts. Unmanaged disks don't support ZRS or GZRS.
+The Archive tier for Blob Storage isn't currently supported for ZRS accounts. Unmanaged disks don't support ZRS or GZRS.
For more information about which regions support ZRS, see [Azure regions with availability zones](../../availability-zones/az-overview.md#azure-regions-with-availability-zones).
Azure Storage offers two options for copying your data to a secondary region:
> [!NOTE]
> The primary difference between GRS and GZRS is how data is replicated in the primary region. Within the secondary region, data is always replicated synchronously three times using LRS. LRS in the secondary region protects your data against hardware failures.
-With GRS or GZRS, the data in the secondary region isn't available for read or write access unless there is a failover to the secondary region. For read access to the secondary region, configure your storage account to use read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more information, see [Read access to data in the secondary region](#read-access-to-data-in-the-secondary-region).
+With GRS or GZRS, the data in the secondary region isn't available for read or write access unless there's a failover to the secondary region. For read access to the secondary region, configure your storage account to use read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more information, see [Read access to data in the secondary region](#read-access-to-data-in-the-secondary-region).
If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the failover has completed, the secondary region becomes the primary region, and you can again read and write data. For more information on disaster recovery and to learn how to fail over to the secondary region, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
Only standard general-purpose v2 storage accounts support GZRS. GZRS is supporte
## Read access to data in the secondary region
-Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. However, that data is available to be read only if the customer or Microsoft initiates a failover from the primary to secondary region. When you enable read access to the secondary region, your data is available to be read at all times, including in a situation where the primary region becomes unavailable. For read access to the secondary region, enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
+Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. However, that data is available to be read only if the customer or Microsoft initiates a failover from the primary to secondary region. When you enable read access to the secondary region, your data is always available to be read, including in a situation where the primary region becomes unavailable. For read access to the secondary region, enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
> [!NOTE]
> Azure Files does not support read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
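Enabling read access is a redundancy (SKU) change on the account. A minimal sketch with Az PowerShell, using placeholder names:

```powershell
# Sketch: convert the account to read-access geo-zone-redundant storage.
# Use Standard_RAGRS instead if the account should be RA-GRS (placeholder names).
Set-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" `
    -SkuName Standard_RAGZRS
```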
When read access to the secondary is enabled, your application can be read from
### Check the Last Sync Time property
-Because data is replicated to the secondary region asynchronously, the secondary region is often behind the primary region. If a failure happens in the primary region, it's likely that all writes to the primary will not yet have been replicated to the secondary.
+Because data is replicated to the secondary region asynchronously, the secondary region is often behind the primary region. If a failure happens in the primary region, it's likely that all writes to the primary won't yet have been replicated to the secondary.
-To determine which write operations have been replicated to the secondary region, your application can check the **Last Sync Time** property for your storage account. All write operations written to the primary region prior to the last sync time have been successfully replicated to the secondary region, meaning that they are available to be read from the secondary. Any write operations written to the primary region after the last sync time may or may not have been replicated to the secondary region, meaning that they may not be available for read operations.
+To determine which write operations have been replicated to the secondary region, your application can check the **Last Sync Time** property for your storage account. All write operations written to the primary region prior to the last sync time have been successfully replicated to the secondary region, meaning that they're available to be read from the secondary. Any write operations written to the primary region after the last sync time may or may not have been replicated to the secondary region, meaning that they may not be available for read operations.
You can query the value of the **Last Sync Time** property using Azure PowerShell, Azure CLI, or one of the Azure Storage client libraries. The **Last Sync Time** property is a GMT date/time value. For more information, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).
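For example, with Az PowerShell the value can be read from the account's geo-replication statistics; the resource names below are placeholders:

```powershell
# Sketch: read the Last Sync Time for a geo-redundant account (placeholder names).
$account = Get-AzStorageAccount -ResourceGroupName "myresourcegroup" `
    -Name "mystorageaccount" -IncludeGeoReplicationStats
$account.GeoReplicationStats.LastSyncTime
```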
The following table shows which redundancy options are supported by each Azure S
| LRS | ZRS | GRS | RA-GRS | GZRS | RA-GZRS |
|||||||
-| Blob storage <br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1,</sup><sup>2</sup> <br/>Azure managed disks | Blob storage <br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1,</sup><sup>2</sup> <br/>Azure managed disks<sup>3</sup> | Blob storage <br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1</sup> | Blob storage <br/>Queue storage <br/>Table storage <br/> | Blob storage <br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1</sup> | Blob storage <br/>Queue storage <br/>Table storage <br/> |
+| Blob storage (including Data Lake Storage)<br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1,</sup><sup>2</sup> <br/>Azure managed disks<br/> Page blobs | Blob storage (including Data Lake Storage)<br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1,</sup><sup>2</sup> <br/>Azure managed disks<sup>3</sup> | Blob storage (including Data Lake Storage)<br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1</sup> | Blob storage (including Data Lake Storage)<br/>Queue storage <br/>Table storage <br/> | Blob storage (including Data Lake Storage)<br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1</sup> | Blob storage (including Data Lake Storage)<br/>Queue storage <br/>Table storage <br/> |
-<sup>1</sup> Standard file shares are supported on LRS and ZRS. Standard file shares are supported on GRS and GZRS as long as they are less than or equal to five TiB in size.<br/>
+<sup>1</sup> Standard file shares are supported on LRS and ZRS. Standard file shares are supported on GRS and GZRS as long as they're less than or equal to 5 TiB in size.<br/>
<sup>2</sup> Premium file shares are supported on LRS and ZRS.<br/>
<sup>3</sup> ZRS managed disks have certain limitations. See the [Limitations](../../virtual-machines/disks-redundancy.md#limitations) section of the redundancy options for managed disks article for details.<br/>
The following table shows which redundancy options are supported for each type o
| Storage account types | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS |
|:-|:-|:-|:-|:-|
-| **Recommended** | Standard general-purpose v2 (`StorageV2`)<sup>1</sup><br/><br/> Premium block blobs (`BlockBlobStorage`)<sup>1</sup><br/><br/> Premium file shares (`FileStorage`) | Standard general-purpose v2 (`StorageV2`)<sup>1</sup><br/><br/> Premium block blobs (`BlockBlobStorage`)<sup>1</sup><br/><br/> Premium file shares (`FileStorage`) | Standard general-purpose v2 (`StorageV2`)<sup>1</sup> | Standard general-purpose v2 (`StorageV2`)<sup>1</sup> |
+| **Recommended** | Standard general-purpose v2 (`StorageV2`)<sup>1</sup><br/><br/> Premium block blobs (`BlockBlobStorage`)<sup>1</sup><br/><br/> Premium file shares (`FileStorage`) <br/><br/> Premium page blobs (`StorageV2`) | Standard general-purpose v2 (`StorageV2`)<sup>1</sup><br/><br/> Premium block blobs (`BlockBlobStorage`)<sup>1</sup><br/><br/> Premium file shares (`FileStorage`) | Standard general-purpose v2 (`StorageV2`)<sup>1</sup> | Standard general-purpose v2 (`StorageV2`)<sup>1</sup> |
| **Legacy** | Standard general-purpose v1 (`Storage`)<br/><br/> Legacy blob (`BlobStorage`) | N/A | Standard general-purpose v1 (`Storage`)<br/><br/> Legacy blob (`BlobStorage`) | N/A |

<sup>1</sup> Accounts of this type with a hierarchical namespace enabled also support the specified redundancy option.
All geo-redundant offerings support Microsoft-managed failover in the event of a
## Data integrity
-Azure Storage regularly verifies the integrity of data stored using cyclic redundancy checks (CRCs). If data corruption is detected, it is repaired using redundant data. Azure Storage also calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
+Azure Storage regularly verifies the integrity of data stored using cyclic redundancy checks (CRCs). If data corruption is detected, it's repaired using redundant data. Azure Storage also calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
## See also
storage File Sync Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot.md
To confirm the certificate is expired, perform the following steps:
1. Open the Certificates MMC snap-in, select Computer Account and navigate to Certificates (Local Computer)\Personal\Certificates.
2. Check if the client authentication certificate is expired.
-If the client authentication certificate is expired, perform the following steps to resolve the issue:
-
-1. Verify Azure File Sync agent version 4.0.1.0 or later is installed.
-2. Run the following PowerShell command on the server:
-
- ```powershell
- Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>
- ```
+If the client authentication certificate is expired, run the following PowerShell command on the server:
+```powershell
+Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>
+```
<a id="-2134375896"></a>**Sync failed due to authentication certificate not found.**

| Error | Code |
If the client authentication certificate is expired, perform the following steps
This error occurs because the certificate used for authentication is not found.
-To resolve this issue, perform the following steps:
-
-1. Verify Azure File Sync agent version 4.0.1.0 or later is installed.
-2. Run the following PowerShell command on the server:
-
- ```powershell
- Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>
- ```
+To resolve this issue, run the following PowerShell command on the server:
+```powershell
+Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>
+```
<a id="-2134364039"></a>**Sync failed due to authentication identity not found.**

| Error | Code |
This error occurs because of an internal problem with the sync database. This er
| **Error string** | ECS_E_INVALID_AAD_TENANT |
| **Remediation required** | Yes |
-Make sure you have the latest Azure File Sync agent. As of agent V10, Azure File Sync supports moving the subscription to a different Azure Active Directory tenant.
-
-Once you have the latest agent version, you must give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](#troubleshoot-rbac)).
+Verify you have the latest Azure File Sync agent version installed and give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](#troubleshoot-rbac)).
<a id="-2134364010"></a>**Sync failed due to firewall and virtual network exception not configured**
If files fail to tier to Azure Files:
| 0x80072ee7 | -2147012889 | WININET_E_NAME_NOT_RESOLVED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to tier due to access denied error. This error can occur if the file is located on a DFS-R read-only replication folder. | Azure Files Sync does not support server endpoints on DFS-R read-only replication folders. See [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
| 0x80072efe | -2147012866 | WININET_E_CONNECTION_ABORTED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
-| 0x80c80261 | -2134375839 | ECS_E_GHOSTING_MIN_FILE_SIZE | The file failed to tier because the file size is less than the supported size. | If the agent version is less than 9.0, the minimum supported file size is 64 KiB. If agent version is 9.0 and newer, the minimum supported file size is based on the file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. |
+| 0x80c80261 | -2134375839 | ECS_E_GHOSTING_MIN_FILE_SIZE | The file failed to tier because the file size is less than the supported size. | The minimum supported file size is based on the file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. |
| 0x80c83007 | -2134364153 | ECS_E_STORAGE_ERROR | The file failed to tier due to an Azure storage issue. | If the error persists, open a support request. |
| 0x800703e3 | -2147023901 | ERROR_OPERATION_ABORTED | The file failed to tier because it was recalled at the same time. | No action required. The file will be tiered when the recall completes and the file is no longer in use. |
| 0x80c80264 | -2134375836 | ECS_E_GHOSTING_FILE_NOT_SYNCED | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
If the above conditions are not met, restoring access is not possible as these t
<a id="get-orphaned"></a>**How to get the list of orphaned tiered files**
-1. Verify Azure File Sync agent version v5.1 or later is installed.
-2. Run the following PowerShell commands to list orphaned tiered files:
+1. Run the following PowerShell commands to list orphaned tiered files:
```powershell
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path>
$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
```
-3. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they are deleted.
+2. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they are deleted.
<a id="remove-orphaned"></a>**How to remove orphaned tiered files**
$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
This option deletes the orphaned tiered files on the Windows Server. Removing the server endpoint first is required if the endpoint still exists, either because it was recreated after 30 days or because it's connected to a different sync group. File conflicts will occur if files are updated on the Windows Server or Azure file share before the server endpoint is recreated.
-1. Verify Azure File Sync agent version v5.1 or later is installed.
-2. Back up the Azure file share and server endpoint location.
-3. Remove the server endpoint in the sync group (if exists) by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
+1. Back up the Azure file share and server endpoint location.
+2. Remove the server endpoint in the sync group (if exists) by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
> [!Warning] > If the server endpoint is not removed prior to using the Remove-StorageSyncOrphanedTieredFiles cmdlet, deleting the orphaned tiered file on the server will delete the full file in the Azure file share.
-4. Run the following PowerShell commands to list orphaned tiered files:
+3. Run the following PowerShell commands to list orphaned tiered files:
```powershell
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path>
$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
```
-5. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they are deleted.
-6. Run the following PowerShell commands to delete orphaned tiered files:
+4. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they are deleted.
+5. Run the following PowerShell commands to delete orphaned tiered files:
```powershell
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
$orphanFilesRemoved.OrphanedTieredFiles > DeletedOrphanFiles.txt
- Tiered files that are accessible (not orphan) will not be deleted.
- Non-tiered files will remain on the server.
-7. Optional: Recreate the server endpoint if deleted in step 3.
+6. Optional: Recreate the server endpoint if deleted in step 3.
*Option 2: Mount the Azure file share and copy the files locally that are orphaned on the server*
If you encounter issues with Azure File Sync on a server, start by completing th
If the issue is not resolved, run the AFSDiag tool and send its .zip file output to the support engineer assigned to your case for further diagnosis.
-To run AFSDiag, perform the steps below.
+To run AFSDiag, perform the steps below:
-For agent version v11 and later:
1. Open an elevated PowerShell window, and then run the following commands (press Enter after each command):
   > [!NOTE]
For agent version v11 and later:
2. Reproduce the issue. When you're finished, enter **D**.
3. A .zip file that contains logs and trace files is saved to the output directory that you specified.
-For agent version v10 and earlier:
-1. Create a directory where the AFSDiag output will be saved (for example, C:\Output).
- > [!NOTE]
- >AFSDiag will delete all content in the output directory prior to collecting logs. Specify an output location which does not contain data.
-2. Open an elevated PowerShell window, and then run the following commands (press Enter after each command):
-
- ```powershell
- cd "c:\Program Files\Azure\StorageSyncAgent"
- Import-Module .\afsdiag.ps1
- Debug-Afs c:\output # Note: Use the path created in step 1.
- ```
-
-3. For the Azure File Sync kernel mode trace level, enter **1** (unless otherwise specified, to create more verbose traces), and then press Enter.
-4. For the Azure File Sync user mode trace level, enter **1** (unless otherwise specified, to create more verbose traces), and then press Enter.
-5. Reproduce the issue. When you're finished, enter **D**.
-6. A .zip file that contains logs and trace files is saved to the output directory that you specified.
--
## See also
- [Monitor Azure File Sync](file-sync-monitoring.md)
- [Troubleshoot Azure Files problems in Windows](../files/storage-troubleshoot-windows-file-connection-problems.md)
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
To configure ACLs with superuser permissions, you must mount the share by using
The following permissions are included on the root directory of a file share:
-- BUILTIN\Administrators:(OI)(CI)(F)
-- BUILTIN\Users:(RX)
-- BUILTIN\Users:(OI)(CI)(IO)(GR,GE)
-- NT AUTHORITY\Authenticated Users:(OI)(CI)(M)
-- NT AUTHORITY\SYSTEM:(OI)(CI)(F)
-- NT AUTHORITY\SYSTEM:(F)
-- CREATOR OWNER:(OI)(CI)(IO)(F)
+- `BUILTIN\Administrators:(OI)(CI)(F)`
+- `BUILTIN\Users:(RX)`
+- `BUILTIN\Users:(OI)(CI)(IO)(GR,GE)`
+- `NT AUTHORITY\Authenticated Users:(OI)(CI)(M)`
+- `NT AUTHORITY\SYSTEM:(OI)(CI)(F)`
+- `NT AUTHORITY\SYSTEM:(F)`
+- `CREATOR OWNER:(OI)(CI)(IO)(F)`
|Users|Definition|
|||
-|BUILTIN\Administrators|All users who are domain administrators of the on-prem AD DS environment.
-|BUILTIN\Users|Built-in security group in AD. It includes NT AUTHORITY\Authenticated Users by default. For a traditional file server, you can configure the membership definition per server. For Azure Files, there isn't a hosting server, hence BUILTIN\Users includes the same set of users as NT AUTHORITY\Authenticated Users.|
-|NT AUTHORITY\SYSTEM|The service account of the operating system of the file server. Such service account doesn't apply in Azure Files context. It is included in the root directory to be consistent with Windows Files Server experience for hybrid scenarios.|
-|NT AUTHORITY\Authenticated Users|All users in AD that can get a valid Kerberos token.|
-|CREATOR OWNER|Each object either directory or file has an owner for that object. If there are ACLs assigned to "CREATOR OWNER" on that object, then the user that is the owner of this object has the permissions to the object defined by the ACL.|
-
+|`BUILTIN\Administrators`|Built-in security group representing administrators of the file server. This group is empty, and no one can be added to it.
+|`BUILTIN\Users`|Built-in security group representing users of the file server. It includes `NT AUTHORITY\Authenticated Users` by default. For a traditional file server, you can configure the membership definition per server. For Azure Files, there isn't a hosting server, hence `BUILTIN\Users` includes the same set of users as `NT AUTHORITY\Authenticated Users`.|
+|`NT AUTHORITY\SYSTEM`|The service account of the operating system of the file server. Such a service account doesn't apply in the Azure Files context. It's included in the root directory to be consistent with the Windows file server experience for hybrid scenarios.|
+|`NT AUTHORITY\Authenticated Users`|All users in AD that can get a valid Kerberos token.|
+|`CREATOR OWNER`|Each object, whether a directory or a file, has an owner. If an ACL is assigned to `CREATOR OWNER` on an object, then the user who owns that object has the permissions defined by the ACL.|
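Once the share is mounted (see the next section), these ACLs can be inspected or adjusted with `icacls`. A sketch, assuming the share is mounted at `Z:` with the storage account key so that you have superuser permissions; the drive letter, folder, and identity are placeholders:

```powershell
# Sketch: view the ACLs on the share root, then grant a domain user
# modify rights on a folder (drive letter, folder, and identity are placeholders).
icacls Z:\
icacls Z:\data /grant "CONTOSO\user:(OI)(CI)M"
```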
## Mount a file share from the command prompt
stream-analytics Automation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/automation-powershell.md
We can check that everything is wired properly in the `Test Pane`.
After that we need to `Publish` the job, which will allow us to link the runbook to a schedule. Creating and linking the schedule is a straightforward process that won't be discussed here. Now is a good time to remember that there are [workarounds](../automation/shared-resources/schedules.md#schedule-runbooks-to-run-more-frequently) to achieve schedule intervals under 1 hour.
-Finally, we can set up an alert. The first step is to enable logs via the [Diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md?tabs=CMD#create-in-azure-portal) of the Automation Account. The second step is to capture errors via a query like we did for Functions.
+Finally, we can set up an alert. The first step is to enable logs via the [Diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md?tabs=cli#create-diagnostic-settings) of the Automation Account. The second step is to capture errors via a query like we did for Functions.
## Outcome
stream-analytics Stream Analytics Parallelization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-parallelization.md
As a prerequisite, you may want to be familiar with the notion of Streaming Unit
A Stream Analytics job definition includes at least one streaming input, a query, and output. Inputs are where the job reads the data stream from. The query is used to transform the data input stream, and the output is where the job sends the job results to.

## Partitions in inputs and outputs
-Partitioning lets you divide data into subsets based on a [partition key](../event-hubs/event-hubs-scalability.md#partitions). If your input (for example Event Hubs) is partitioned by a key, it is highly recommended to specify this partition key when adding input to your Stream Analytics job. Scaling a Stream Analytics job takes advantage of partitions in the input and output. A Stream Analytics job can consume and write different partitions in parallel, which increases throughput.
+Partitioning lets you divide data into subsets based on a [partition key](../event-hubs/event-hubs-scalability.md#partitions). If your input (for example Event Hubs) is partitioned by a key, it's highly recommended to specify this partition key when adding input to your Stream Analytics job. Scaling a Stream Analytics job takes advantage of partitions in the input and output. A Stream Analytics job can consume and write different partitions in parallel, which increases throughput.
### Inputs
-All Azure Stream Analytics input can take advantage of partitioning:
-- EventHub (need to set the partition key explicitly with PARTITION BY keyword if using compatibility level 1.1 or below)
-- IoT Hub (need to set the partition key explicitly with PARTITION BY keyword if using compatibility level 1.1 or below)
-- Blob storage
+
+All Azure Stream Analytics streaming inputs can take advantage of partitioning: Event Hubs, IoT Hub, Blob storage.
+
+> [!NOTE]
+> For compatibility level 1.2 and above, the partition key is to be set as an **input property**, with no need for the PARTITION BY keyword in the query. For compatibility level 1.1 and below, the partition key instead needs to be defined with the PARTITION BY keyword **in the query**.
### Outputs
For more information about partitions, see the following articles:
* [Event Hubs features overview](../event-hubs/event-hubs-features.md#partitions)
* [Data partitioning](/azure/architecture/best-practices/data-partitioning)
+### Query
+
+For a job to be parallel, partition keys need to be aligned between all inputs, all query logic steps and all outputs. The query logic partitioning is determined by the keys used for joins and aggregations (GROUP BY). This last requirement can be ignored if the query logic isn't keyed (projection, filters, referential joins...).
+
+* If an input and an output are partitioned by WarehouseId, and the query groups by ProductId without WarehouseId, then the job isn't parallel.
+* If two inputs to be joined are partitioned by different partition keys (WarehouseId and ProductId), then the job isn't parallel.
+* If two or more independent data flows are contained in a single job, each with its own partition key, then the job isn't parallel.
+
+Only when all inputs, outputs and query steps are using the same key will the job be parallel.
+
## Embarrassingly parallel jobs
An *embarrassingly parallel* job is the most scalable scenario in Azure Stream Analytics. It connects one partition of the input to one instance of the query to one partition of the output. This parallelism has the following requirements:
-1. If your query logic depends on the same key being processed by the same query instance, you must make sure that the events go to the same partition of your input. For Event Hubs or IoT Hub, this means that the event data must have the **PartitionKey** value set. Alternatively, you can use partitioned senders. For blob storage, this means that the events are sent to the same partition folder. An example would be a query instance that aggregates data per userID where input event hub is partitioned using userID as partition key. However, if your query logic does not require the same key to be processed by the same query instance, you can ignore this requirement. An example of this logic would be a simple select-project-filter query.
+1. If your query logic depends on the same key being processed by the same query instance, you must make sure that the events go to the same partition of your input. When sending events to Event Hubs or IoT Hub, the event data must have the **PartitionKey** value set. Alternatively, you can use partitioned senders. For blob storage, the events are sent to the same partition folder. An example would be a query instance that aggregates data per userID where the input event hub is partitioned using userID as the partition key. However, if your query logic doesn't require the same key to be processed by the same query instance, you can ignore this requirement. An example of this logic would be a simple select-project-filter query.
-2. The next step is to make your query is partitioned. For jobs with compatibility level 1.2 or higher (recommended), custom column can be specified as Partition Key in the input settings and the job will be paralellized automatically. Jobs with compatibility level 1.0 or 1.1, requires you to use **PARTITION BY PartitionId** in all the steps of your query. Multiple steps are allowed, but they all must be partitioned by the same key.
+2. The next step is to make your query partitioned. For jobs with compatibility level 1.2 or higher (recommended), a custom column can be specified as the **Partition key** in the input settings and the job will be parallelized automatically. Jobs with compatibility level 1.0 or 1.1 require you to use **PARTITION BY PartitionId** in all the steps of your query (see the sketch after this list). Multiple steps are allowed, but they all must be partitioned by the same key.
-3. Most of the outputs supported in Stream Analytics can take advantage of partitioning. If you use an output type that doesn't support partitioning your job won't be *embarrassingly parallel*. For Event Hub outputs, ensure **Partition key column** is set to the same partition key used in the query. Refer to the [output section](#outputs) for more details.
+3. Most of the outputs supported in Stream Analytics can take advantage of partitioning. If you use an output type that doesn't support partitioning, your job won't be *embarrassingly parallel*. For Event Hubs outputs, ensure **Partition key column** is set to the same partition key used in the query. Refer to the [output section](#outputs) for more details.
4. The number of input partitions must equal the number of output partitions. Blob storage output can support partitions and inherits the partitioning scheme of the upstream query. When a partition key for Blob storage is specified, data is partitioned per input partition thus the result is still fully parallel. Here are examples of partition values that allow a fully parallel job:
- * 8 event hub input partitions and 8 event hub output partitions
- * 8 event hub input partitions and blob storage output
- * 8 event hub input partitions and blob storage output partitioned by a custom field with arbitrary cardinality
- * 8 blob storage input partitions and blob storage output
- * 8 blob storage input partitions and 8 event hub output partitions
+ * Eight event hub input partitions and eight event hub output partitions
+ * Eight event hub input partitions and blob storage output
+ * Eight event hub input partitions and blob storage output partitioned by a custom field with arbitrary cardinality
+ * Eight blob storage input partitions and blob storage output
+ * Eight blob storage input partitions and eight event hub output partitions
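
As referenced in step 2, here's a minimal sketch of a compatibility level 1.0/1.1 query step partitioned by PartitionId (Input1 and Output1 are hypothetical aliases):

```
SELECT *
INTO Output1
FROM Input1 PARTITION BY PartitionId
```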
The following sections discuss some example scenarios that are embarrassingly parallel.

### Simple query
-* Input: Event hub with 8 partitions
-* Output: Event hub with 8 partitions ("Partition key column" must be set to use "PartitionId")
+* Input: Event hub with eight partitions
+* Output: Event hub with eight partitions ("Partition key column" must be set to use "PartitionId")
Query:
This query is a simple filter. Therefore, we don't need to worry about partition
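
The query body is truncated in this extract; a select-project-filter sketch of the shape described (Input1 and the TollBoothId filter threshold are assumptions) would be:

```
SELECT TollBoothId
FROM Input1
WHERE TollBoothId > 100
```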
### Query with a grouping key
-* Input: Event hub with 8 partitions
+* Input: Event hub with eight partitions
* Output: Blob storage

Query:
GROUP BY TumblingWindow(minute, 3), TollBoothId, PartitionId
```
-This query has a grouping key. Therefore, the events grouped together must be sent to the same Event Hub partition. Since in this example we group by TollBoothID, we should be sure that TollBoothID is used as the partition key when the events are sent to Event Hub. Then in ASA, we can use **PARTITION BY PartitionId** to inherit from this partition scheme and enable full parallelization. Since the output is blob storage, we don't need to worry about configuring a partition key value, as per requirement #4.
+This query has a grouping key. Therefore, the events grouped together must be sent to the same event hub partition. Since in this example we group by TollBoothID, we should be sure that TollBoothID is used as the partition key when the events are sent to Event Hubs. Then in ASA, we can use **PARTITION BY PartitionId** to inherit from this partition scheme and enable full parallelization. Since the output is blob storage, we don't need to worry about configuring a partition key value, as per requirement #4.
## Example of scenarios that are *not* embarrassingly parallel

In the previous section, we showed some embarrassingly parallel scenarios. In this section, we discuss scenarios that don't meet all the requirements to be embarrassingly parallel.

### Mismatched partition count
-* Input: Event hub with 8 partitions
+* Input: Event hub with eight partitions
* Output: Event hub with 32 partitions
-If the input partition count doesn't match the output partition count, the topology isn't embarrassingly parallel irrespective of the query. However we can still get some level or parallelization.
+If the input partition count doesn't match the output partition count, the topology isn't embarrassingly parallel irrespective of the query. However, we can still get some level of parallelization.
### Query using non-partitioned output
-* Input: Event hub with 8 partitions
+* Input: Event hub with eight partitions
* Output: Power BI
-Power BI output doesn't currently support partitioning. Therefore, this scenario is not embarrassingly parallel.
+Power BI output doesn't currently support partitioning. Therefore, this scenario isn't parallel.
### Multi-step query with different PARTITION BY values
-* Input: Event hub with 8 partitions
-* Output: Event hub with 8 partitions
+* Input: Event hub with eight partitions
+* Output: Event hub with eight partitions
* Compatibility level: 1.0 or 1.1

Query:
GROUP BY TumblingWindow(minute, 3), TollBoothId
```
-As you can see, the second step uses **TollBoothId** as the partitioning key. This step is not the same as the first step, and it therefore requires us to do a shuffle.
+As you can see, the second step uses **TollBoothId** as the partitioning key. This step isn't the same as the first step, and it therefore requires us to do a shuffle. This job isn't parallel.
### Multi-step query with different PARTITION BY values
-* Input: Event hub with 8 partitions
-* Output: Event hub with 8 partitions ("Partition key column" must be set to use "TollBoothId")
+* Input: Event hub with eight partitions ("Partition key column" not set, defaulting to "PartitionId")
+* Output: Event hub with eight partitions ("Partition key column" must be set to use "TollBoothId")
* Compatibility level: 1.2 or above

Query:
GROUP BY TumblingWindow(minute, 3), TollBoothId
```
-Compatibility level 1.2 or above enables parallel query execution by default. For example, query from the previous section will be partitioned as long as "TollBoothId" column is set as input Partition Key. PARTITION BY PartitionId clause is not required.
+Compatibility level 1.2 or above enables parallel query execution by default. But here the keys aren't aligned. If we knew that the input event hub was partitioned by "TollBoothId", we could set that up in the input configuration and get a parallel job. In any case, the PARTITION BY clause isn't required.
## Calculate the maximum streaming units of a job

The total number of streaming units that can be used by a Stream Analytics job depends on the number of steps in the query defined for the job and the number of partitions for each step.
Partitioning a step requires the following conditions:
When a query is partitioned, the input events are processed and aggregated in separate partition groups, and output events are generated for each of the groups. If you want a combined aggregate, you must create a second non-partitioned step to aggregate.

### Calculate the max streaming units for a job
-All non-partitioned steps together can scale up to six streaming units (SUs) for a Stream Analytics job. In addition to this, you can add 6 SUs for each partition in a partitioned step.
+All non-partitioned steps together can scale up to six streaming units (SUs) for a Stream Analytics job. In addition, you can add 6 SUs for each partition in a partitioned step.
You can see some **examples** in the table below.

| Query | Max SUs for the job |
| --- | --- |
-| <ul><li>The query contains one step.</li><li>The step is not partitioned.</li></ul> | 6 |
+| <ul><li>The query contains one step.</li><li>The step isn't partitioned.</li></ul> | 6 |
| <ul><li>The input data stream is partitioned by 16.</li><li>The query contains one step.</li><li>The step is partitioned.</li></ul> | 96 (6 * 16 partitions) |
| <ul><li>The query contains two steps.</li><li>Neither of the steps is partitioned.</li></ul> | 6 |
-| <ul><li>The input data stream is partitioned by 3.</li><li>The query contains two steps. The input step is partitioned and the second step is not.</li><li>The <strong>SELECT</strong> statement reads from the partitioned input.</li></ul> | 24 (18 for partitioned steps + 6 for non-partitioned steps |
+| <ul><li>The input data stream is partitioned by 3.</li><li>The query contains two steps. The input step is partitioned and the second step isn't.</li><li>The <strong>SELECT</strong> statement reads from the partitioned input.</li></ul> | 24 (18 for partitioned steps + 6 for non-partitioned steps) |
### Examples of scaling
To use more SUs for the query, both the input data stream and the query must be
GROUP BY TumblingWindow(minute, 3), TollBoothId, PartitionId
```
-When a query is partitioned, the input events are processed and aggregated in separate partition groups. Output events are also generated for each of the groups. Partitioning can cause some unexpected results when the **GROUP BY** field is not the partition key in the input data stream. For example, the **TollBoothId** field in the previous query is not the partition key of **Input1**. The result is that the data from TollBooth #1 can be spread in multiple partitions.
+When a query is partitioned, the input events are processed and aggregated in separate partition groups. Output events are also generated for each of the groups. Partitioning can cause some unexpected results when the **GROUP BY** field isn't the partition key in the input data stream. For example, the **TollBoothId** field in the previous query isn't the partition key of **Input1**. The result is that the data from TollBooth #1 can be spread in multiple partitions.
Each of the **Input1** partitions will be processed separately by Stream Analytics. As a result, multiple records of the car count for the same tollbooth in the same Tumbling window will be created. If the input partition key can't be changed, this problem can be fixed by adding a non-partition step to aggregate values across partitions, as in the following example:
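
The example itself is truncated in this extract; a sketch consistent with the surrounding description (a PartitionId-keyed first step feeding a non-partitioned aggregation step) would be:

```
WITH Step1 AS (
    SELECT COUNT(*) AS Count, TollBoothId, PartitionId
    FROM Input1 PARTITION BY PartitionId
    GROUP BY TumblingWindow(minute, 3), TollBoothId, PartitionId
)
SELECT SUM(Count) AS Count, TollBoothId
FROM Step1
GROUP BY TumblingWindow(minute, 3), TollBoothId
```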
This query can be scaled to 24 SUs.
## Achieving higher throughputs at scale
-An [embarrassingly parallel](#embarrassingly-parallel-jobs) job is necessary but not sufficient to sustain a higher throughput at scale. Every storage system and its corresponding Stream Analytics output has variations on how to achieve the best possible write throughput. As with any at-scale scenario, there are some challenges which can be solved by using the right configurations. This section discusses configurations for a few common outputs and provides samples for sustaining ingestion rates of 1K, 5K and 10K events per second.
+An [embarrassingly parallel](#embarrassingly-parallel-jobs) job is necessary but not sufficient to sustain a higher throughput at scale. Every storage system, and its corresponding Stream Analytics output, has variations on how to achieve the best possible write throughput. As with any at-scale scenario, there are some challenges which can be solved by using the right configurations. This section discusses configurations for a few common outputs and provides samples for sustaining ingestion rates of 1 K, 5 K and 10 K events per second.
-The following observations use a Stream Analytics job with stateless (passthrough) query, a basic JavaScript UDF which writes to Event Hub, Azure SQL DB, or Cosmos DB.
+The following observations use a Stream Analytics job with a stateless (passthrough) query and a basic JavaScript UDF that writes to Event Hubs, Azure SQL DB, or Cosmos DB.
-#### Event Hub
+#### Event Hubs
|Ingestion Rate (events per second) | Streaming Units | Output Resources |
|--|--|--|
-| 1K | 1 | 2 TU |
-| 5K | 6 | 6 TU |
-| 10K | 12 | 10 TU |
+| 1 K | 1 | 2 TU |
+| 5 K | 6 | 6 TU |
+| 10 K | 12 | 10 TU |
-The [Event Hub](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-eventhubs) solution scales linearly in terms of streaming units (SU) and throughput, making it the most efficient and performant way to analyze and stream data out of Stream Analytics. Jobs can be scaled up to 192 SU, which roughly translates to processing up to 200 MB/s, or 19 trillion events per day.
+The [Event Hubs](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-eventhubs) solution scales linearly in terms of streaming units (SU) and throughput, making it the most efficient and performant way to analyze and stream data out of Stream Analytics. Jobs can be scaled up to 192 SU, which roughly translates to processing up to 200 MB/s, or 19 trillion events per day.
#### Azure SQL

|Ingestion Rate (events per second) | Streaming Units | Output Resources |
|--|--|--|
-| 1K | 3 | S3 |
-| 5K | 18 | P4 |
-| 10K | 36 | P6 |
+| 1 K | 3 | S3 |
+| 5 K | 18 | P4 |
+| 10 K | 36 | P6 |
[Azure SQL](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-azuresql) supports writing in parallel, called Inherit Partitioning, but it's not enabled by default. However, enabling Inherit Partitioning, along with a fully parallel query, may not be sufficient to achieve higher throughputs. SQL write throughputs depend significantly on your database configuration and table schema. The [SQL Output Performance](./stream-analytics-sql-output-perf.md) article has more detail about the parameters that can maximize your write throughput. As noted in the [Azure Stream Analytics output to Azure SQL Database](./stream-analytics-sql-output-perf.md#azure-stream-analytics) article, this solution doesn't scale linearly as a fully parallel pipeline beyond 8 partitions and may need repartitioning before SQL output (see [INTO](/stream-analytics-query/into-azure-stream-analytics#into-shard-count)). Premium SKUs are needed to sustain high IO rates along with overhead from log backups happening every few minutes.

#### Cosmos DB

|Ingestion Rate (events per second) | Streaming Units | Output Resources |
|--|--|--|
-| 1K | 3 | 20K RU |
-| 5K | 24 | 60K RU |
-| 10K | 48 | 120K RU |
+| 1 K | 3 | 20K RU |
+| 5 K | 24 | 60K RU |
+| 10 K | 48 | 120K RU |
-[Cosmos DB](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-cosmosdb) output from Stream Analytics has been updated to use native integration under [compatibility level 1.2](./stream-analytics-documentdb-output.md#improved-throughput-with-compatibility-level-12). Compatibility level 1.2 enables significantly higher throughput and reduces RU consumption compared to 1.1, which is the default compatibility level for new jobs. The solution uses CosmosDB containers partitioned on /deviceId and the rest of solution is identically configured.
+[Cosmos DB](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-cosmosdb) output from Stream Analytics has been updated to use native integration under [compatibility level 1.2](./stream-analytics-documentdb-output.md#improved-throughput-with-compatibility-level-12). Compatibility level 1.2 enables significantly higher throughput and reduces RU consumption compared to 1.1, which is the default compatibility level for new jobs. The solution uses Cosmos DB containers partitioned on /deviceId, and the rest of the solution is identically configured.
-All [Streaming at Scale Azure samples](https://github.com/Azure-Samples/streaming-at-scale) use an Event Hub as input that is fed by load simulating test clients. Each input event is a 1KB JSON document, which translates configured ingestion rates to throughput rates (1MB/s, 5MB/s and 10MB/s) easily. Events simulate an IoT device sending the following JSON data (in a shortened form) for up to 1K devices:
+All [Streaming at Scale Azure samples](https://github.com/Azure-Samples/streaming-at-scale) use Event Hubs as input, fed by load-simulating test clients. Each input event is a 1 KB JSON document, which translates the configured ingestion rates directly to throughput rates (1 MB/s, 5 MB/s, and 10 MB/s). Events simulate an IoT device sending the following JSON data (in a shortened form) for up to 1000 devices:
```
{
All [Streaming at Scale Azure samples](https://github.com/Azure-Samples/streamin
### Identifying Bottlenecks
-Use the Metrics pane in your Azure Stream Analytics job to identify bottlenecks in your pipeline. Review **Input/Output Events** for throughput and ["Watermark Delay"](https://azure.microsoft.com/blog/new-metric-in-azure-stream-analytics-tracks-latency-of-your-streaming-pipeline/) or **Backlogged Events** to see if the job is keeping up with the input rate. For Event Hub metrics, look for **Throttled Requests** and adjust the Threshold Units accordingly. For Cosmos DB metrics, review **Max consumed RU/s per partition key range** under Throughput to ensure your partition key ranges are uniformly consumed. For Azure SQL DB, monitor **Log IO** and **CPU**.
+Use the Metrics pane in your Azure Stream Analytics job to identify bottlenecks in your pipeline. Review **Input/Output Events** for throughput and ["Watermark Delay"](https://azure.microsoft.com/blog/new-metric-in-azure-stream-analytics-tracks-latency-of-your-streaming-pipeline/) or **Backlogged Events** to see if the job is keeping up with the input rate. For Event Hubs metrics, look for **Throttled Requests** and adjust the Throughput Units accordingly. For Cosmos DB metrics, review **Max consumed RU/s per partition key range** under Throughput to ensure your partition key ranges are uniformly consumed. For Azure SQL DB, monitor **Log IO** and **CPU**.
## Get help
For further assistance, try our [Microsoft Q&A question page for Azure Stream An
[stream.analytics.introduction]: stream-analytics-introduction.md
[stream.analytics.get.started]: stream-analytics-real-time-fraud-detection.md
[stream.analytics.query.language.reference]: /stream-analytics-query/stream-analytics-query-language-reference
-[stream.analytics.rest.api.reference]: /rest/api/streamanalytics/
+[stream.analytics.rest.api.reference]: /rest/api/streamanalytics/
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
There are some general system constraints that might affect your workload:
| Property | Limitation |
|--|--|
-| Max number of Synapse workspaces per subscription | 20 |
+| Max number of Synapse workspaces per subscription | 2 |
| Max number of databases per serverless pool | 20 (not including databases synchronized from Apache Spark pool) |
| Max number of databases synchronized from Apache Spark pool | Not limited |
| Max number of database objects per database | The sum of the number of all objects in a database cannot exceed 2,147,483,647 (see [limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects)) |
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/language-packs.md
You need the following things to customize your Windows 10 Enterprise multi-sess
- [Windows 10, version 2004 or later 11C 2021 LXP ISO](https://software-download.microsoft.com/download/sg/LanguageExperiencePack.2111C.iso)
- [Windows 10, version 2004 or later 01C 2022 LXP ISO](https://software-download.microsoft.com/download/sg/LanguageExperiencePack.2201C.iso)
- [Windows 10, version 2004 or later 02C 2022 LXP ISO](https://software-static.download.prss.microsoft.com/sg/download/888969d5-f34g-4e03-ac9d-1f9786c66749/LanguageExperiencePack.2202C.iso)
+ - [Windows 10, version 2004 or later 04C 2022 LXP ISO](https://software-static.download.prss.microsoft.com/dbazure/888969d5-f34g-4e03-ac9d-1f9786c66750/LanguageExperiencePack.2204C.iso)
- An Azure Files Share or a file share on a Windows File Server Virtual Machine
virtual-desktop Powershell Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/powershell-module.md
Providers : {Microsoft.RecoveryServices, Microsoft.DesktopVirtualization,
Microsoft.ManagedIdentity, Microsoft.SqlVirtualMachine…}
```
-Once you know your account's location, you can use it in a cmdlet. For example, here's a cmdlet that creates a host pool in the "southeastasia" location:
+Once you know your account's location, you can use it in a cmdlet. For example, here's a cmdlet that creates a host pool in the "uksouth" location:
```powershell
-New-AzWvdHostPool -ResourceGroupName <resourcegroupname> -Name <hostpoolname> -WorkspaceName <workspacename> -Location "southeastasia"
+New-AzWvdHostPool -Name <hostpoolname> -Location uksouth -ResourceGroupName <resourcegroupname> -HostPoolType <hostpooltype> -LoadBalancerType <loadbalancertype> -PreferredAppGroupType <preferredappgrouptype>
```

## Next steps
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
The Azure virtual machines you create for Azure Virtual Desktop must have access
|wvdportalstorageblob.blob.core.windows.net|443|Azure portal support|AzureCloud|
| 169.254.169.254 | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A |
| 168.63.129.16 | 80 | [Session host health monitoring](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) | N/A |
-|gcs.prod.monitoring.core.windows.net|443|Agent traffic (optional)|AzureCloud|
-|production.diagnostics.monitoring.core.windows.net|443|Agent traffic (optional)|AzureCloud|
-|*xt.blob.core.windows.net|443|Agent traffic (optional)|AzureCloud|
-|*eh.servicebus.windows.net|443|Agent traffic (optional)|AzureCloud|
-|*xt.table.core.windows.net|443|Agent traffic (optional)|AzureCloud|
-|*xt.queue.core.windows.net|443|Agent traffic (optional)|AzureCloud|
-A [Service Tag](https://docs.microsoft.com/azure/virtual-network/service-tags-overview) represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. Service Tags can be used in both Network Security Group ([NSG](https://docs.microsoft.com/azure/virtual-network/network-security-groups-overview)) and [Azure Firewall](https://docs.microsoft.com/azure/firewall/service-tags) rules to restrict outbound network access. Service Tags can be also used in User Defined Route ([UDR](https://docs.microsoft.com/azure/virtual-network/virtual-networks-udr-overview#user-defined)) to customize traffic routing behavior.
+A [Service Tag](../virtual-network/service-tags-overview.md) represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. Service Tags can be used in both Network Security Group ([NSG](../virtual-network/network-security-groups-overview.md)) and [Azure Firewall](../firewall/service-tags.md) rules to restrict outbound network access. Service Tags can be also used in User Defined Route ([UDR](../virtual-network/virtual-networks-udr-overview.md#user-defined)) to customize traffic routing behavior.
->[!IMPORTANT]
+>[!TIP]
>Azure Virtual Desktop supports the FQDN tag. For more information, see [Use Azure Firewall to protect Azure Virtual Desktop deployments](../firewall/protect-azure-virtual-desktop.md).
>
>We recommend you use FQDN tags or service tags instead of URLs to prevent service issues. The listed URLs and tags only correspond to Azure Virtual Desktop sites and resources. They don't include URLs for other services like Azure Active Directory.
+> [!IMPORTANT]
+> The following entries have been deprecated and replaced by ***.prod.warm.ingest.monitor.core.windows.net** in the table above. Please update any existing entries.
+>
+> |Address|Outbound TCP port|Purpose|Service Tag|
+> |||||
+> |gcs.prod.monitoring.core.windows.net|443|Agent traffic (deprecated)|AzureCloud|
+> |production.diagnostics.monitoring.core.windows.net|443|Agent traffic (deprecated)|AzureCloud|
+> |*xt.blob.core.windows.net|443|Agent traffic (deprecated)|AzureCloud|
+> |*eh.servicebus.windows.net|443|Agent traffic (deprecated)|AzureCloud|
+> |*xt.table.core.windows.net|443|Agent traffic (deprecated)|AzureCloud|
+> |*xt.queue.core.windows.net|443|Agent traffic (deprecated)|AzureCloud|
+
### Azure Government cloud

The Azure virtual machines you create for Azure Virtual Desktop must have access to the following URLs in the Azure Government cloud:
|Address|Outbound TCP port|Purpose|Service Tag|
|--|--|--|--|
|*.wvd.azure.us|443|Service traffic|WindowsVirtualDesktop|
|*.prod.warm.ingest.monitor.core.usgovcloudapi.net|443|Agent traffic|AzureMonitor|
-|Kms.core.usgovcloudapi.net|1688|Windows activation|Internet|
+|kms.core.usgovcloudapi.net|1688|Windows activation|Internet|
|mrsglobalstugviffx.blob.core.usgovcloudapi.net|443|Agent and SXS stack updates|AzureCloud|
|wvdportalstorageblob.blob.core.usgovcloudapi.net|443|Azure portal support|AzureCloud|
| 169.254.169.254 | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A |
| 168.63.129.16 | 80 | [Session host health monitoring](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) | N/A |
-|gcs.monitoring.core.usgovcloudapi.net|443|Agent traffic (optional)|AzureCloud|
-|monitoring.core.usgovcloudapi.net|443|Agent traffic (optional)|AzureCloud|
-|fairfax.warmpath.usgovcloudapi.net|443|Agent traffic (optional)|AzureCloud|
-|*xt.blob.core.usgovcloudapi.net|443|Agent traffic (optional)|AzureCloud|
-|*.servicebus.usgovcloudapi.net|443|Agent traffic (optional)|AzureCloud|
-|*xt.table.core.usgovcloudapi.net|443|Agent traffic (optional)|AzureCloud|
+
+> [!IMPORTANT]
+> The following entries have been deprecated and replaced by ***.prod.warm.ingest.monitor.core.usgovcloudapi.net** in the table above. Please update any existing entries.
+>
+> |Address|Outbound TCP port|Purpose|Service Tag|
+> |||||
+> |gcs.monitoring.core.usgovcloudapi.net|443|Agent traffic (deprecated)|AzureCloud|
+> |monitoring.core.usgovcloudapi.net|443|Agent traffic (deprecated)|AzureCloud|
+> |fairfax.warmpath.usgovcloudapi.net|443|Agent traffic (deprecated)|AzureCloud|
+> |*xt.blob.core.usgovcloudapi.net|443|Agent traffic (deprecated)|AzureCloud|
+> |*.servicebus.usgovcloudapi.net|443|Agent traffic (deprecated)|AzureCloud|
+> |*xt.table.core.usgovcloudapi.net|443|Agent traffic (deprecated)|AzureCloud|
The following table lists optional URLs that your Azure virtual machines can have access to:
virtual-desktop Set Up Golden Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-golden-image.md
Make sure you've done the following things before taking the final snapshot:
- Install the latest Windows updates.
- Complete any necessary cleanup, such as cleaning up temporary files, defragmenting disks, and removing unnecessary user profiles.

> [!NOTE]
-> If your machine will include an antivirus app, it may cause issues when you start sysprep. To avoid this, disable all antivirus programs before running sysprep.
+> 1. If your machine will include an antivirus app, it may cause issues when you start sysprep. To avoid this, disable all antivirus programs before running sysprep.
+>
+> 1. [Unified Write Filter](/windows-hardware/customize/enterprise/unified-write-filter) (UWF) is not supported for session hosts. Please ensure it is not enabled in your image.
### Take the final snapshot

When you are done installing your applications to the image VM, take a final snapshot of the disk. If sysprep or capture fails, you will be able to create a new base VM with your applications already installed from this snapshot.

### Run sysprep
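
As a sketch (the exact options depend on your image; these are the common generalize switches), running sysprep from an elevated prompt on the image VM looks like this:

```
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```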
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Default: y
```

Enable or disable auto-update for goal state processing; default is enabled.
+## Linux Guest Agent Automatic Log Collection
+
+As of version 2.7+, the Azure Linux Guest Agent has a feature to automatically collect some logs and upload them. This feature currently requires systemd, and it uses a new systemd slice called azure-walinuxagent-logcollector.slice to manage resources while performing the collection. The log collector's goal is to facilitate offline analysis, so it produces a ZIP file of some diagnostics logs before uploading them to the VM's host. The ZIP file can then be retrieved by engineering teams and support professionals to investigate issues at the behest of the VM owner. More technical information on the files collected by the guest agent can be found in the azurelinuxagent/common/logcollector_manifests.py file in the [agent's GitHub repository](https://github.com/Azure/WALinuxAgent).
+
+You can disable this feature by editing `/etc/waagent.conf` and updating `Logs.Collect` to `n`.
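
For example, the relevant line in `/etc/waagent.conf` would read as follows (a minimal sketch; the default value is `y`):

```
# Disable automatic log collection by the Azure Linux Guest Agent (version 2.7+)
Logs.Collect=n
```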
## Ubuntu Cloud Images

Ubuntu Cloud Images utilize [cloud-init](https://launchpad.net/ubuntu/+source/cloud-init) to perform many configuration tasks that would otherwise be managed by the Azure Linux Agent. The following differences apply:
virtual-machines Image Builder Api Update Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-api-update-release-notes.md
For API versions 2020-02-14 and older, the error output will look like the follo
```
{
-"error": {
- "code": "ValidationFailed",
- "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute//images//imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
+ "code": "ValidationFailed",
+ "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute/images/imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
}
```
For API versions 2021-10-01 and newer, the error output will look like the follo
```
{
- "error": {
+ "error": {
"code": "ValidationFailed",
- "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute//images//imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
- }
-}
+ "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute/images/imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
+ }
+}
```

**Improvements**:
For API versions 2021-10-01 and newer, the error output will look like the follo
## Next steps
-Learn more about [Image Builder](image-builder-overview.md).
+Learn more about [Image Builder](image-builder-overview.md).
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 04/26/2022 Last updated : 05/11/2022
In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- May 11, 2022: Change in [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure](./sap-high-availability-guide-wsfc-shared-disk.md), [Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS](./sap-high-availability-infrastructure-wsfc-shared-disk.md) and [SAP ASCS/SCS instance multi-SID high availability with Windows server failover clustering and Azure shared disk](./sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md) to update instructions about the usage of Azure shared disk for SAP deployment with PPG.
+- May 10, 2022: Changes in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [HA for SAP HANA Scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to adjust parameters per SAP note 3024346.
- April 26, 2022: Changes in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) to add Azure Identity python module to installation instructions for Azure Fence Agent
- March 30, 2022: Adding information that Red Hat Gluster Storage is being phased out [GlusterFS on Azure VMs on RHEL](./high-availability-guide-rhel-glusterfs.md)
- March 30, 2022: Correcting DNN support for older releases of SQL Server in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md)
virtual-machines Sap Ascs Ha Multi Sid Wsfc Azure Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md
vm-windows Previously updated : 04/14/2022 Last updated : 05/10/2022
Currently you can use Azure Premium SSD disks as an Azure shared disk for the SA
- Locally redundant storage (LRS) for premium shared disk (skuName - Premium_LRS) is supported with deployment in availability set.
- Zone-redundant storage (ZRS) for premium shared disk (skuName - Premium_ZRS) is supported with deployment in availability zones.
- Azure shared disk value [maxShares](../../disks-shared-enable.md?tabs=azure-cli#disk-sizes) determines how many cluster nodes can use the shared disk. Typically for SAP ASCS/SCS instance you will configure two nodes in Windows Failover Cluster, therefore the value for `maxShares` must be set to two.
-- [Azure proximity placement group](../../windows/proximity-placement-groups.md) is not required for Azure shared disk. But if you are using PPG for SAP system, all virtual machines sharing a disk must be part of the same PPG.
+- [Azure proximity placement group](../../windows/proximity-placement-groups.md) is not required for Azure shared disk. But for SAP deployments with PPG, follow these guidelines:
+  - If you are using PPG for an SAP system deployed in a region, then all virtual machines sharing a disk must be part of the same PPG.
+  - If you are using PPG for an SAP system deployed across zones, as described in [Proximity placement groups with zonal deployments](sap-proximity-placement-scenarios.md#proximity-placement-groups-with-zonal-deployments), you can attach Premium_ZRS storage to virtual machines sharing a disk.
For further details on limitations for Azure shared disk, please review carefully the [limitations](../../disks-shared.md#limitations) section of Azure Shared Disk documentation.
virtual-machines Sap High Availability Guide Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-guide-wsfc-shared-disk.md
vm-windows Previously updated : 04/14/2022 Last updated : 05/10/2022
Currently you can use Azure Premium SSD disks as an Azure shared disk for the SA
- Locally redundant storage (LRS) for premium shared disk (skuName - Premium_LRS) is supported with deployment in Azure availability set.
- Zone-redundant storage (ZRS) for premium shared disk (skuName - Premium_ZRS) is supported with deployment in Azure availability zones.
- Azure shared disk value [maxShares](../../disks-shared-enable.md?tabs=azure-cli#disk-sizes) determines how many cluster nodes can use the shared disk. Typically for SAP ASCS/SCS instance you will configure two nodes in Windows Failover Cluster, therefore the value for `maxShares` must be set to two.
-- [Azure proximity placement group](../../windows/proximity-placement-groups.md) is not required for Azure shared disk. But if you are using PPG for SAP system, all virtual machines sharing a disk must be part of the same PPG.
+- [Azure proximity placement group](../../windows/proximity-placement-groups.md) is not required for Azure shared disk. But for SAP deployments with PPG, follow these guidelines:
+  - If you are using PPG for an SAP system deployed in a region, then all virtual machines sharing a disk must be part of the same PPG.
+  - If you are using PPG for an SAP system deployed across zones, as described in [Proximity placement groups with zonal deployments](sap-proximity-placement-scenarios.md#proximity-placement-groups-with-zonal-deployments), you can attach Premium_ZRS storage to virtual machines sharing a disk.
For further details on limitations for Azure shared disk, please review carefully the [limitations](../../disks-shared.md#limitations) section of Azure Shared Disk documentation.
virtual-machines Sap High Availability Infrastructure Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-infrastructure-wsfc-shared-disk.md
vm-windows Previously updated : 04/14/2022 Last updated : 05/10/2022
Before you begin the installation, review this article:
## Create the ASCS VMs
-For SAP ASCS / SCS cluster deploy two VMs in Azure availability set or Azure availability zones based on the type of your deployment. If you are using [Azure proximity placement groups (PPG)](./sap-proximity-placement-scenarios.md), make sure all virtual machines sharing a disk must be part of the same PPG. Once the VMs are deployed:
+For the SAP ASCS/SCS cluster, deploy two VMs in an Azure availability set or in Azure availability zones, based on the type of your deployment. Once the VMs are deployed:
- Create Azure Internal Load Balancer for SAP ASCS/SCS instance.
- Add Windows VMs to the AD domain.
Based on your deployment type, the host names and the IP addresses of the scenar
The steps mentioned in the document remain the same for both deployment types. But if your cluster is running in an availability set, you need to deploy LRS for Azure premium shared disk (Premium_LRS), and if the cluster is running in an availability zone, deploy ZRS for Azure premium shared disk (Premium_ZRS).

> [!Note]
-> [Azure proximity placement group](../../windows/proximity-placement-groups.md) is not required for Azure shared disk. But if you are using PPG for SAP system, all virtual machines sharing a disk must be part of the same PPG.
+> [Azure proximity placement group](../../windows/proximity-placement-groups.md) is not required for Azure shared disk. But for SAP deployments with PPG, follow these guidelines:
+> - If you are using PPG for an SAP system deployed in a region, then all virtual machines sharing a disk must be part of the same PPG.
+> - If you are using PPG for an SAP system deployed across zones, as described in [Proximity placement groups with zonal deployments](sap-proximity-placement-scenarios.md#proximity-placement-groups-with-zonal-deployments), you can attach Premium_ZRS storage to virtual machines sharing a disk.
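
As a sketch (resource group, names, region, and size are placeholders, not values from this article), creating a shared disk with `maxShares` set to two might look like this in Azure PowerShell:

```azurepowershell-interactive
# Create a Premium SSD shared disk that two cluster nodes can attach
$diskConfig = New-AzDiskConfig -Location "westeurope" -SkuName "Premium_LRS" -CreateOption "Empty" -DiskSizeGB 128 -MaxSharesCount 2
New-AzDisk -ResourceGroupName "MyResourceGroup" -DiskName "mySharedDisk" -Disk $diskConfig
```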
## <a name="fe0bd8b5-2b43-45e3-8295-80bee5415716"></a> Create Azure internal load balancer
vpn-gateway Point To Site Vpn Client Configuration Radius Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-configuration-radius-certificate.md
+
+ Title: 'Configure a VPN client for P2S RADIUS: certificate authentication'
+
+description: Learn how to configure a VPN client for point-to-site VPN configurations that use RADIUS certificate authentication.
++++ Last updated : 05/11/2022+
+# Configure a VPN client for point-to-site: RADIUS - certificate authentication
+
+To connect to a virtual network over point-to-site (P2S), you need to configure the client device that you'll connect from. This article helps you create and install the VPN client configuration for RADIUS certificate authentication.
+
+When you're using RADIUS authentication, there are multiple authentication instructions: [certificate authentication](point-to-site-vpn-client-configuration-radius-certificate.md), [password authentication](point-to-site-vpn-client-configuration-radius-password.md), and [other authentication methods and protocols](point-to-site-vpn-client-configuration-radius-other.md). The VPN client configuration is different for each type of authentication. To configure a VPN client, you use client configuration files that contain the required settings.
+
+>[!NOTE]
+> [!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
+>
+
+## Workflow
+
+The configuration workflow for P2S RADIUS authentication is as follows:
+
+1. [Set up the Azure VPN gateway for P2S connectivity](point-to-site-how-to-radius-ps.md).
+
+1. [Set up your RADIUS server for authentication](point-to-site-how-to-radius-ps.md#radius).
+
+1. **Obtain the VPN client configuration for the authentication option of your choice and use it to set up the VPN client** (this article).
+
+1. [Complete your P2S configuration and connect](point-to-site-how-to-radius-ps.md).
+
+>[!IMPORTANT]
+>If there are any changes to the point-to-site VPN configuration after you generate the VPN client configuration profile, such as the VPN protocol type or authentication type, you must generate and install a new VPN client configuration on your users' devices.
+>
+
+You can create VPN client configuration files for RADIUS certificate authentication that uses the EAP-TLS protocol. Typically, an enterprise-issued certificate is used to authenticate a user for VPN. Make sure that all connecting users have a certificate installed on their devices, and that your RADIUS server can validate the certificate.
+
+In the commands, `-AuthenticationMethod` is `EapTls`. During certificate authentication, the client validates the RADIUS server by validating its certificate. `-RadiusRootCert` is the .cer file that contains the root certificate that's used to validate the RADIUS server.
+
+Each VPN client device requires an installed client certificate. Sometimes a Windows device has multiple client certificates. During authentication, this can result in a pop-up dialog box that lists all the certificates. The user must then choose the certificate to use. The correct certificate can be filtered out by specifying the root certificate that the client certificate should chain to.
+
+`-ClientRootCert` is the .cer file that contains the root certificate. It's an optional parameter. If the device that you want to connect from has only one client certificate, you don't have to specify this parameter.
+
+## Generate VPN client configuration files
+
+You can generate the VPN client configuration files by using the Azure portal, or by using Azure PowerShell.
+
+### Azure portal
+
+1. Navigate to the virtual network gateway.
+1. Click **Point-to-Site configuration**.
+1. Click **Download VPN client**.
+1. Select the client and fill out any information that is requested.
+1. Click **Download** to generate the .zip file.
+1. The .zip file will download, typically to your Downloads folder.
+
+### Azure PowerShell
+
+Generate VPN client configuration files for use with certificate authentication. You can generate the VPN client configuration files by using the following command:
+
+```azurepowershell-interactive
+New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapTls" -RadiusRootCert <full path name of .cer file containing the RADIUS root> -ClientRootCert <full path name of .cer file containing the client root> | fl
+```
+
+Running the command returns a link. Copy and paste the link to a web browser to download VpnClientConfiguration.zip. Unzip the file to view the following folders:
+
+* **WindowsAmd64** and **WindowsX86**: These folders contain the Windows 64-bit and 32-bit installer packages, respectively.
+* **GenericDevice**: This folder contains general information that's used to create your own VPN client configuration.
+
+If you already created client configuration files, you can retrieve them by using the `Get-AzVpnClientConfiguration` cmdlet. But if you make any changes to your P2S VPN configuration, such as the VPN protocol type or authentication type, the configuration isn’t updated automatically. You must run the `New-AzVpnClientConfiguration` cmdlet to create a new configuration download.
+
+To retrieve previously generated client configuration files, use the following command:
+
+```azurepowershell-interactive
+Get-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" | fl
+```
+
+## Windows VPN client
+
+1. Select a configuration package and install it on the client device. For a 64-bit processor architecture, choose the **VpnClientSetupAmd64** installer package. For a 32-bit processor architecture, choose the **VpnClientSetupX86** installer package. If you see a SmartScreen pop-up, select **More info** > **Run anyway**. You can also save the package to install on other client computers.
+
+1. Each client requires a client certificate for authentication. Install the client certificate (see the command-line sketch after these steps). For information about client certificates, see [Client certificates for point-to-site](vpn-gateway-certificates-point-to-site.md). To install a certificate that was generated, see [Install a certificate on Windows clients](point-to-site-how-to-vpn-client-install-azure-cert.md).
+
+1. On the client computer, browse to **Network Settings** and select **VPN**. The VPN connection shows the name of the virtual network that it connects to.
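+
+As a sketch of one common approach (an assumption, not the only method; the file name and password are placeholders), you can import a client certificate that was exported as a .pfx file into the current user's store with certutil:
+
+```
+certutil -user -p "<pfx-password>" -importPFX "C:\certs\clientCert.pfx"
+```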
+
+## Mac (macOS) VPN client
+
+You must create a separate profile for every Mac device that connects to the Azure virtual network. This is because these devices require the user certificate for authentication to be specified in the profile. The **Generic** folder has all the information that's required to create a profile:
+
+* **VpnSettings.xml** contains important settings such as server address and tunnel type.
+* **VpnServerRoot.cer** contains the root certificate that's required to validate the VPN gateway during P2S connection setup.
+* **RadiusServerRoot.cer** contains the root certificate that's required to validate the RADIUS server during authentication.
+
+Use the following steps to configure the native VPN client on a Mac for certificate authentication:
+
+1. Import the **VpnServerRoot** and **RadiusServerRoot** root certificates to your Mac. Copy each file to your Mac, double-click it, and then select **Add**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-certificate/add-certificate.png" alt-text="Screenshot shows adding the VpnServerRoot certificate." lightbox="./media/point-to-site-vpn-client-config-radius-certificate/add-certificate.png":::
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-certificate/radius-root.png" alt-text="Screenshot shows adding the RadiusServerRoot certificate." lightbox="./media/point-to-site-vpn-client-config-radius-certificate/radius-root.png":::
+
+1. Each client requires a client certificate for authentication. Install the client certificate on the client device.
+
+1. Open the **Network** dialog box under **Network Preferences**. Select **+** to create a new VPN client connection profile for a P2S connection to the Azure virtual network.
+
+ The **Interface** value is **VPN**, and the **VPN Type** value is **IKEv2**. Specify a name for the profile in the **Service Name** box, and then select **Create** to create the VPN client connection profile.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-certificate/network.png" alt-text="Screenshot shows interface and service name information." lightbox="./media/point-to-site-vpn-client-config-radius-certificate/network.png":::
+
+1. In the **Generic** folder, from the **VpnSettings.xml** file, copy the **VpnServer** tag value. Paste this value in the **Server Address** and **Remote ID** boxes of the profile. Leave the **Local ID** box blank.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-certificate/server-tag.png" alt-text="Screenshot shows server information." lightbox="./media/point-to-site-vpn-client-config-radius-certificate/server-tag.png":::
+
+1. Select **Authentication Settings**, and select **Certificate**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-certificate/certificate-option.png" alt-text="Screenshot shows Authentication settings." lightbox="./media/point-to-site-vpn-client-config-radius-certificate/certificate-option.png":::
+
+1. Click **Select** to choose the certificate that you want to use for authentication.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-certificate/certificate.png" alt-text="Screenshot shows Selecting a certificate for authentication." lightbox="./media/point-to-site-vpn-client-config-radius-certificate/certificate.png":::
+
+1. **Choose An Identity** displays a list of certificates for you to choose from. Select the proper certificate, and then select **Continue**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-certificate/identity.png" alt-text="Screenshot shows Choose An Identity list." lightbox="./media/point-to-site-vpn-client-config-radius-certificate/identity.png":::
+
+1. In the **Local ID** box, specify the name of the certificate (from Step 6). In this example, it's **ikev2Client.com**. Then, select the **Apply** button to save the changes.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-certificate/apply-connect.png" alt-text="Screenshot shows Local I D box." lightbox="./media/point-to-site-vpn-client-config-radius-certificate/apply-connect.png":::
+
+1. In the **Network** dialog box, select **Apply** to save all changes. Then, select **Connect** to start the P2S connection to the Azure virtual network.
+
+## Next steps
+
+Return to the article to [complete your P2S configuration](point-to-site-how-to-radius-ps.md).
+
+For P2S troubleshooting information, see [Troubleshooting Azure point-to-site connections](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).
vpn-gateway Point To Site Vpn Client Configuration Radius Other https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-configuration-radius-other.md
+
+ Title: 'Configure a VPN client for P2S RADIUS: other authentication methods'
+
+description: Learn how to configure a VPN client for point-to-site VPN configurations that use RADIUS authentication for methods other than certificate or password.
++++ Last updated : 05/11/2022+
+# Configure a VPN client for point-to-site: RADIUS - other methods and protocols
+
+To connect to a virtual network over point-to-site (P2S), you need to configure the client device that you'll connect from. This article helps you create and install the VPN client configuration for RADIUS authentication that uses methods other than certificate or password authentication.
+
+When you're using RADIUS authentication, there are multiple authentication instructions: [certificate authentication](point-to-site-vpn-client-configuration-radius-certificate.md), [password authentication](point-to-site-vpn-client-configuration-radius-password.md), and [other authentication methods and protocols](point-to-site-vpn-client-configuration-radius-other.md). The VPN client configuration is different for each type of authentication. To configure a VPN client, you use client configuration files that contain the required settings.
+
+>[!NOTE]
+> [!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
+>
+
+## Workflow
+
+The configuration workflow for P2S RADIUS authentication is as follows:
+
+1. [Set up the Azure VPN gateway for P2S connectivity](point-to-site-how-to-radius-ps.md).
+
+1. [Set up your RADIUS server for authentication](point-to-site-how-to-radius-ps.md#radius).
+
+1. **Obtain the VPN client configuration for the authentication option of your choice and use it to set up the VPN client** (this article).
+
+1. [Complete your P2S configuration and connect](point-to-site-how-to-radius-ps.md).
+
+>[!IMPORTANT]
+>If there are any changes to the point-to-site VPN configuration after you generate the VPN client configuration profile, such as the VPN protocol type or authentication type, you must generate and install a new VPN client configuration on your users' devices.
+>
+
+To use a different authentication type (for example, OTP), or to use a different authentication protocol (such as PEAP-MSCHAPv2 instead of EAP-MSCHAPv2), you must create your own VPN client configuration profile. To create the profile, you need information such as the virtual network gateway IP address, tunnel type, and split-tunnel routes. You can get this information by using the following steps.
+
+## Generate VPN client configuration files
+
+You can generate the VPN client configuration files by using the Azure portal, or by using Azure PowerShell.
+
+### Azure portal
+
+1. Navigate to the virtual network gateway.
+1. Click **Point-to-Site configuration**.
+1. Click **Download VPN client**.
+1. Select the client and fill out any information that is requested.
+1. Click **Download** to generate the .zip file.
+1. The .zip file will download, typically to your Downloads folder.
+
+### Azure PowerShell
+
+Use the [New-AzVpnClientConfiguration](/powershell/module/az.network/new-azvpnclientconfiguration) cmdlet to generate the VPN client configuration for EapMSChapv2. (The `Get-AzVpnClientConfiguration` cmdlet only retrieves previously generated configuration files.)
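+
+For example, here's a sketch that reuses the placeholder resource names from the related RADIUS articles (TestRG and VNet1GW are assumptions, not fixed values):
+
+```azurepowershell-interactive
+New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapMSChapv2" | fl
+```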
+
+## View the files and configure the VPN client
+
+Unzip the VpnClientConfiguration.zip file and look for the **GenericDevice** folder. Ignore the folders that contain the Windows installers for 64-bit and 32-bit architectures.
+
+The **GenericDevice** folder contains an XML file called **VpnSettings**. This file contains all the required information:
+
+* **VpnServer**: FQDN of the Azure VPN gateway. This is the address that the client connects to.
+* **VpnType**: Tunnel type that you use to connect.
+* **Routes**: Routes that you have to configure in your profile so that only traffic that's bound for the Azure virtual network is sent over the P2S tunnel.
+
+The **GenericDevice** folder also contains a .cer file called **VpnServerRoot**. This file contains the root certificate that's required to validate the Azure VPN gateway during P2S connection setup. Install the certificate on all devices that will connect to the Azure virtual network.
+
+Use the settings in the files to configure your VPN client.
+
+## Next steps
+
+Return to the article to [complete your P2S configuration](point-to-site-how-to-radius-ps.md).
+
+For P2S troubleshooting information, see [Troubleshooting Azure point-to-site connections](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).
vpn-gateway Point To Site Vpn Client Configuration Radius Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-configuration-radius-password.md
Title: 'Configure a VPN client for P2S RADIUS: password-auth'
+ Title: 'Configure a VPN client for P2S RADIUS: password authentication'
description: Learn how to configure a VPN client for point-to-site VPN configurations that use RADIUS username/password authentication.
Previously updated : 05/09/2022 Last updated : 05/11/2022

# Configure a VPN client for point-to-site: RADIUS - password authentication

To connect to a virtual network over point-to-site (P2S), you need to configure the client device that you'll connect from. You can create P2S VPN connections from Windows, macOS, and Linux client devices. This article helps you create and install the VPN client configuration for username/password RADIUS authentication.
-When you're using RADIUS authentication, there are multiple authentication options: username/password authentication, certificate authentication, and other authentication types. To configure a VPN client, you use client configuration files that contain the required settings. The VPN client configuration is different for each type of authentication.
+When you're using RADIUS authentication, there are separate instructions for each authentication type: [certificate authentication](point-to-site-vpn-client-configuration-radius-certificate.md), [password authentication](point-to-site-vpn-client-configuration-radius-password.md), and [other authentication methods and protocols](point-to-site-vpn-client-configuration-radius-other.md). The VPN client configuration is different for each type of authentication. To configure a VPN client, you use client configuration files that contain the required settings.
>[!NOTE] > [!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
vpn-gateway Point To Site Vpn Client Configuration Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-configuration-radius.md
- Title: 'Azure VPN Gateway: Create & install VPN client config files - P2S RADIUS connections'
-description: Create Windows, macOS, and Linux VPN client configuration files for connections that use RADIUS authentication.
----- Previously updated : 09/02/2020----
-# Create and install VPN client configuration files for P2S RADIUS authentication
-
-To connect to a virtual network over point-to-site (P2S), you need to configure the client device that you'll connect from. You can create P2S VPN connections from Windows, macOS, and Linux client devices.
-
-When you're using RADIUS authentication, there are multiple authentication options: username/password authentication, certificate authentication, and other authentication types. The VPN client configuration is different for each type of authentication. To configure the VPN client, you use client configuration files that contain the required settings. This article helps you create and install the VPN client configuration for the RADIUS authentication type that you want to use.
-
->[!IMPORTANT]
->[!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
->
-
-The configuration workflow for P2S RADIUS authentication is as follows:
-
-1. [Set up the Azure VPN gateway for P2S connectivity](point-to-site-how-to-radius-ps.md).
-2. [Set up your RADIUS server for authentication](point-to-site-how-to-radius-ps.md#radius). 
-3. **Obtain the VPN client configuration for the authentication option of your choice and use it to set up the VPN client** (this article).
-4. [Complete your P2S configuration and connect](point-to-site-how-to-radius-ps.md).
-
->[!IMPORTANT]
->If there are any changes to the point-to-site VPN configuration after you generate the VPN client configuration profile, such as the VPN protocol type or authentication type, you must generate and install a new VPN client configuration on your users' devices.
->
->
-
-To use the sections in this article, first decide which type of authentication you want to use: username/password, certificate, or other types of authentication. Each section has steps for Windows, macOS, and Linux (limited steps available at this time).
--
-## <a name="adeap"></a>Username/password authentication
-
-You can configure username/password authentication to either use Active Directory or not use Active Directory. With either scenario, make sure that all connecting users have username/password credentials that can be authenticated through RADIUS.
-
-When you configure username/password authentication, you can only create a configuration for the EAP-MSCHAPv2 username/password authentication protocol. In the commands, `-AuthenticationMethod` is `EapMSChapv2`.
-
-### <a name="usernamefiles"></a> 1. Generate VPN client configuration files
-
-You can generate the VPN client configuration files by using the Azure portal, or by using Azure PowerShell.
-
-#### Azure portal
-
-1. Navigate to the virtual network gateway.
-2. Click **Point-to-Site configuration**.
-3. Click **Download VPN client**.
-4. Select the client and fill out any information that is requested.
-5. Click **Download** to generate the .zip file.
-6. The .zip file will download, typically to your Downloads folder.
-
-#### Azure PowerShell
-
-Generate VPN client configuration files for use with username/password authentication. You can generate the VPN client configuration files by using the following command:
-
-```azurepowershell-interactive
-New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapMSChapv2"
-```
-
-Running the command returns a link. Copy and paste the link into a web browser to download **VpnClientConfiguration.zip**. Unzip the file to view the following folders:
-
-* **WindowsAmd64** and **WindowsX86**: These folders contain the Windows 64-bit and 32-bit installer packages, respectively. 
-* **Generic**: This folder contains general information that you use to create your own VPN client configuration. You don't need this folder for username/password authentication configurations.
-* **Mac**: If you configured IKEv2 when you created the virtual network gateway, you see a folder named **Mac** that contains a **mobileconfig** file. You use this file to configure Mac clients.
-
-If you already created client configuration files, you can retrieve them by using the `Get-AzVpnClientConfiguration` cmdlet. But if you make any changes to your P2S VPN configuration, such as the VPN protocol type or authentication type, the configuration isn’t updated automatically. You must run the `New-AzVpnClientConfiguration` cmdlet to create a new configuration download.
-
-To retrieve previously generated client configuration files, use the following command:
-
-```azurepowershell-interactive
-Get-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW"
-```
-
-### <a name="setupusername"></a> 2. Configure VPN clients
-
-You can configure the following VPN clients:
-
-* [Windows](#adwincli)
-* [Mac (macOS)](#admaccli)
-* [Linux using strongSwan](#adlinuxcli)
-
-#### <a name="adwincli"></a>Windows VPN client setup
-
-You can use the same VPN client configuration package on each Windows client computer, as long as the version matches the architecture for the client. For the list of client operating systems that are supported, see the [FAQ](vpn-gateway-vpn-faq.md#P2S).
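-
-If you deploy the package through a script, a minimal sketch for picking the installer that matches the client's architecture (the .exe file names are assumptions based on the package names above):
-
-```powershell
-# A minimal sketch: choose the installer package that matches the OS architecture.
-# The file names are assumptions based on the package names described above.
-$package = if ([Environment]::Is64BitOperatingSystem) {
-    ".\WindowsAmd64\VpnClientSetupAmd64.exe"
-} else {
-    ".\WindowsX86\VpnClientSetupX86.exe"
-}
-Start-Process -FilePath $package
-```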
-
-Use the following steps to configure the native Windows VPN client for username/password authentication:
-
-1. Select the VPN client configuration files that correspond to the architecture of the Windows computer. For a 64-bit processor architecture, choose the **VpnClientSetupAmd64** installer package. For a 32-bit processor architecture, choose the **VpnClientSetupX86** installer package.
-2. To install the package, double-click it. If you see a SmartScreen pop-up, select **More info** > **Run anyway**.
-3. On the client computer, browse to **Network Settings** and select **VPN**. The VPN connection shows the name of the virtual network that it connects to. 
-
-#### <a name="admaccli"></a>Mac (macOS) VPN client setup
-
-1. Select the **VpnClientSetup mobileconfig** file and send it to each of the users. You can use email or another method.
-
-2. Locate the **mobileconfig** file on the Mac.
-
- ![Location of the mobileconfig file](./media/point-to-site-vpn-client-configuration-radius/admobileconfigfile.png)
-
-3. Optional Step - If you want to specify a custom DNS, add the following lines to the **mobileconfig** file:
-
- ```xml
- <key>DNS</key>
- <dict>
- <key>ServerAddresses</key>
- <array>
- <string>10.0.0.132</string>
- </array>
- <key>SupplementalMatchDomains</key>
- <array>
- <string>TestDomain.com</string>
- </array>
- </dict>
- ```
-4. Double-click the profile to install it, and select **Continue**. The profile name is the same as the name of your virtual network.
-
- ![Installation message](./media/point-to-site-vpn-client-configuration-radius/adinstall.png)
-5. Select **Continue** to trust the sender of the profile and proceed with the installation.
-
- ![Confirmation message](./media/point-to-site-vpn-client-configuration-radius/adcontinue.png)
-6. During profile installation, you have the option to specify the username and password for VPN authentication. It's not mandatory to enter this information. If you do, the information is saved and automatically used when you initiate a connection. Select **Install** to proceed.
-
- ![Username and password boxes for VPN](./media/point-to-site-vpn-client-configuration-radius/adsettings.png)
-7. Enter a username and password for the privileges that are required to install the profile on your computer. Select **OK**.
-
- ![Username and password boxes for profile installation](./media/point-to-site-vpn-client-configuration-radius/adusername.png)
-8. After the profile is installed, it's visible in the **Profiles** dialog box. You can also open this dialog box later from **System Preferences**.
-
- !["Profiles" dialog box](./media/point-to-site-vpn-client-configuration-radius/adsystempref.png)
-9. To access the VPN connection, open the **Network** dialog box from **System Preferences**.
-
- ![Icons in System Preferences](./media/point-to-site-vpn-client-configuration-radius/adnetwork.png)
-10. The VPN connection appears as **IkeV2-VPN**. You can change the name by updating the **mobileconfig** file.
-
- ![Details for the VPN connection](./media/point-to-site-vpn-client-configuration-radius/adconnection.png)
-11. Select **Authentication Settings**. Select **Username** in the list and enter your credentials. If you entered the credentials earlier, then **Username** is automatically chosen in the list and the username and password are pre-populated. Select **OK** to save the settings.
-
- ![Screenshot that shows the "Authentication settings" drop-down with "Username" selected.](./media/point-to-site-vpn-client-configuration-radius/adauthentication.png)
-12. Back in the **Network** dialog box, select **Apply** to save the changes. To initiate the connection, select **Connect**.
-
-#### <a name="adlinuxcli"></a>Linux VPN client setup through strongSwan
-
-The following instructions were created through strongSwan 5.5.1 on Ubuntu 17.04. Actual screens might be different, depending on your version of Linux and strongSwan.
-
-1. Open the **Terminal** to install **strongSwan** and its Network Manager by running the command in the example. If you receive an error that's related to `libcharon-extra-plugins`, replace it with `strongswan-plugin-eap-mschapv2`.
-
- ```Terminal
- sudo apt-get install strongswan libcharon-extra-plugins moreutils iptables-persistent network-manager-strongswan
- ```
-2. Select the **Network Manager** icon (up-arrow/down-arrow), and select **Edit Connections**.
-
- !["Edit Connections" selection in Network Manager](./media/point-to-site-vpn-client-configuration-radius/EditConnection.png)
-3. Select the **Add** button to create a new connection.
-
- !["Add" button for a connection](./media/point-to-site-vpn-client-configuration-radius/AddConnection.png)
-4. Select **IPsec/IKEv2 (strongswan)** from the drop-down menu, and then select **Create**. You can rename your connection in this step.
-
- ![Selecting the connection type](./media/point-to-site-vpn-client-configuration-radius/AddIKEv2.png)
-5. Open the **VpnSettings.xml** file from the **Generic** folder of the downloaded client configuration files. Find the tag called `VpnServer` and copy the name, beginning with `azuregateway` and ending with `.cloudapp.net`.
-
- ![Contents of the VpnSettings.xml file](./media/point-to-site-vpn-client-configuration-radius/VpnSettings.png)
-6. Paste this name into the **Address** field of your new VPN connection in the **Gateway** section. Next, select the folder icon at the end of the **Certificate** field, browse to the **Generic** folder, and select the **VpnServerRoot** file.
-7. In the **Client** section of the connection, select **EAP** for **Authentication**, and enter your username and password. You might have to select the lock icon on the right to save this information. Then, select **Save**.
-
- ![Editing connection settings](./media/point-to-site-vpn-client-configuration-radius/editconnectionsettings.png)
-8. Select the **Network Manager** icon (up-arrow/down-arrow) and hover over **VPN Connections**. You see the VPN connection that you created. To initiate the connection, select it.
-
- !["VPN Radius" connection in Network Manager](./media/point-to-site-vpn-client-configuration-radius/ConnectRADIUS.png)
-
-## <a name="certeap"></a>Certificate authentication
-
-You can create VPN client configuration files for RADIUS certificate authentication that uses the EAP-TLS protocol. Typically, an enterprise-issued certificate is used to authenticate a user for VPN. Make sure that all connecting users have a certificate installed on their devices, and that your RADIUS server can validate the certificate.
-
->[!NOTE]
->[!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
->
-
-In the commands, `-AuthenticationMethod` is `EapTls`. During certificate authentication, the client validates the RADIUS server by validating its certificate. `-RadiusRootCert` is the .cer file that contains the root certificate that's used to validate the RADIUS server.
-
-Each VPN client device requires an installed client certificate. Sometimes a Windows device has multiple client certificates. During authentication, this can result in a pop-up dialog box that lists all the certificates. The user must then choose the certificate to use. You can filter the list to the correct certificate by specifying the root certificate that the client certificate should chain to.
-
-`-ClientRootCert` is the .cer file that contains the root certificate. It's an optional parameter. If the device that you want to connect from has only one client certificate, you don't have to specify this parameter.
-
-### <a name="certfiles"></a>1. Generate VPN client configuration files
-
-Generate VPN client configuration files for use with certificate authentication. You can generate the VPN client configuration files by using the following command:
-
-```azurepowershell-interactive
-New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapTls" -RadiusRootCert <full path name of .cer file containing the RADIUS root> -ClientRootCert <full path name of .cer file containing the client root> | fl
-```
-
-Running the command returns a link. Copy and paste the link into a web browser to download VpnClientConfiguration.zip. Unzip the file to view the following folders:
-
-* **WindowsAmd64** and **WindowsX86**: These folders contain the Windows 64-bit and 32-bit installer packages, respectively. 
-* **GenericDevice**: This folder contains general information that's used to create your own VPN client configuration.
-
-If you already created client configuration files, you can retrieve them by using the `Get-AzVpnClientConfiguration` cmdlet. But if you make any changes to your P2S VPN configuration, such as the VPN protocol type or authentication type, the configuration isn’t updated automatically. You must run the `New-AzVpnClientConfiguration` cmdlet to create a new configuration download.
-
-To retrieve previously generated client configuration files, use the following command:
-
-```azurepowershell-interactive
-Get-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" | fl
-```
-
-### <a name="setupcert"></a> 2. Configure VPN clients
-
-You can configure the following VPN clients:
-
-* [Windows](#certwincli)
-* [Mac (macOS)](#certmaccli)
-* Linux (supported, no article steps yet)
-
-#### <a name="certwincli"></a>Windows VPN client setup
-
-1. Select a configuration package and install it on the client device. For a 64-bit processor architecture, choose the **VpnClientSetupAmd64** installer package. For a 32-bit processor architecture, choose the **VpnClientSetupX86** installer package. If you see a SmartScreen pop-up, select **More info** > **Run anyway**. You can also save the package to install on other client computers.
-2. Each client requires a client certificate for authentication. Install the client certificate. For information about client certificates, see [Client certificates for point-to-site](vpn-gateway-certificates-point-to-site.md). To install a certificate that was generated, see [Install a certificate on Windows clients](point-to-site-how-to-vpn-client-install-azure-cert.md).
-3. On the client computer, browse to **Network Settings** and select **VPN**. The VPN connection shows the name of the virtual network that it connects to.
-
-#### <a name="certmaccli"></a>Mac (macOS) VPN client setup
-
-You must create a separate profile for every Mac device that connects to the Azure virtual network. This is because these devices require the user certificate for authentication to be specified in the profile. The **GenericDevice** folder has all the information that's required to create a profile:
-
-* **VpnSettings.xml** contains important settings such as server address and tunnel type.
-* **VpnServerRoot.cer** contains the root certificate that's required to validate the VPN gateway during P2S connection setup.
-* **RadiusServerRoot.cer** contains the root certificate that's required to validate the RADIUS server during authentication.
-
-Use the following steps to configure the native VPN client on a Mac for certificate authentication:
-
-1. Import the **VpnServerRoot** and **RadiusServerRoot** root certificates to your Mac. Copy each file to your Mac, double-click it, and then select **Add**.
-
- ![Adding the VpnServerRoot certificate](./media/point-to-site-vpn-client-configuration-radius/addcert.png)
-
- ![Adding the RadiusServerRoot certificate](./media/point-to-site-vpn-client-configuration-radius/radiusrootcert.png)
-2. Each client requires a client certificate for authentication. Install the client certificate on the client device.
-3. Open the **Network** dialog box from **System Preferences**. Select **+** to create a new VPN client connection profile for a P2S connection to the Azure virtual network.
-
- The **Interface** value is **VPN**, and the **VPN Type** value is **IKEv2**. Specify a name for the profile in the **Service Name** box, and then select **Create** to create the VPN client connection profile.
-
- ![Interface and service name information](./media/point-to-site-vpn-client-configuration-radius/network.png)
-4. In the **GenericDevice** folder, from the **VpnSettings.xml** file, copy the **VpnServer** tag value. Paste this value in the **Server Address** and **Remote ID** boxes of the profile. Leave the **Local ID** box blank.
-
- ![Server information](./media/point-to-site-vpn-client-configuration-radius/servertag.png)
-5. Select **Authentication Settings**, and select **Certificate**. 
-
- ![Authentication settings](./media/point-to-site-vpn-client-configuration-radius/certoption.png)
-6. Click **Select** to choose the certificate that you want to use for authentication.
-
- ![Selecting a certificate for authentication](./media/point-to-site-vpn-client-configuration-radius/certificate.png)
-7. **Choose An Identity** displays a list of certificates for you to choose from. Select the proper certificate, and then select **Continue**.
-
- !["Choose An Identity" list](./media/point-to-site-vpn-client-configuration-radius/identity.png)
-8. In the **Local ID** box, specify the name of the certificate (from Step 6). In this example, it's **ikev2Client.com**. Then, select the **Apply** button to save the changes.
-
- !["Local ID" box](./media/point-to-site-vpn-client-configuration-radius/applyconnect.png)
-9. In the **Network** dialog box, select **Apply** to save all changes. Then, select **Connect** to start the P2S connection to the Azure virtual network.
-
-## <a name="otherauth"></a>Working with other authentication types or protocols
-
-To use a different authentication type (for example, OTP), or to use a different authentication protocol (such as PEAP-MSCHAPv2 instead of EAP-MSCHAPv2), you must create your own VPN client configuration profile. To create the profile, you need information such as the virtual network gateway IP address, tunnel type, and split-tunnel routes. You can get this information by using the following steps:
-
-1. Use the `New-AzVpnClientConfiguration` cmdlet to generate the VPN client configuration for EapMSChapv2.
-
-2. Unzip the VpnClientConfiguration.zip file and look for the **GenericDevice** folder. Ignore the folders that contain the Windows installers for 64-bit and 32-bit architectures.
-
-3. The **GenericDevice** folder contains an XML file called **VpnSettings**. This file contains all the required information:
-
- * **VpnServer**: FQDN of the Azure VPN gateway. This is the address that the client connects to.
- * **VpnType**: Tunnel type that you use to connect.
- * **Routes**: Routes that you have to configure in your profile so that only traffic that's bound for the Azure virtual network is sent over the P2S tunnel.
-
- The **GenericDevice** folder also contains a .cer file called **VpnServerRoot**. This file contains the root certificate that's required to validate the Azure VPN gateway during P2S connection setup. Install the certificate on all devices that will connect to the Azure virtual network.
-
-## Next steps
-
-Return to the article to [complete your P2S configuration](point-to-site-how-to-radius-ps.md).
-
-For P2S troubleshooting information, see [Troubleshooting Azure point-to-site connections](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).
web-application-firewall Waf Front Door Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-monitor.md
Previously updated : 03/21/2022 Last updated : 05/11/2022
WAF with Front Door provides detailed reporting on each threat it detects. Loggi
![WAFDiag](../media/waf-frontdoor-monitor/waf-frontdoor-diagnostics.png)
-[FrontdoorAccessLog](../../frontdoor/front-door-diagnostics.md) logs all requests. FrontdoorWebApplicationFirewallLog logs any request that matches a WAF rule having the below schema:
+[FrontDoorAccessLog](../../frontdoor/standard-premium/how-to-logs.md#access-log) logs all requests. `FrontDoorWebApplicationFirewallLog` logs any request that matches a WAF rule; each log entry uses the following schema.
+
+For logging on the classic tier, use [FrontdoorAccessLog](../../frontdoor/front-door-diagnostics.md) logs for Front Door requests and `FrontdoorWebApplicationFirewallLog` logs for matched WAF rules, which use the same schema:
| Property | Description |
| - | - |