Updates from: 04/15/2021 03:14:36
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Use the general guidelines when implementing a SCIM endpoint to ensure compatibility with Azure AD:
* Don't require a case-sensitive match on structural elements in SCIM, in particular **PATCH** `op` operation values, as defined in [section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). AAD emits the values of `op` as **Add**, **Replace**, and **Remove** (see the request sketch after this list).
* Microsoft AAD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow in the [Azure portal](https://portal.azure.com).
* The attribute that the resources can be queried on should be set as a matching attribute on the application in the [Azure portal](https://portal.azure.com), see [Customizing User Provisioning Attribute Mappings](customize-application-attributes.md).
-* Support HTTPS on your SCIM endpoint
+* The entitlements attribute is not supported.
+* Support HTTPS on your SCIM endpoint.
* [Schema discovery](#schema-discovery)
  * Schema discovery is not currently supported on the custom application, but it is being used on certain gallery applications. Going forward, schema discovery will be used as the primary method to add additional attributes to a connector.
* If a value is not present, do not send null values.
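To illustrate the case-insensitivity guideline above, the following is a minimal sketch of the kind of **PATCH** request Azure AD emits. The endpoint URL, user ID, and bearer token are placeholders, not real values; your endpoint should accept the capitalized `op` values shown here as well as their lowercase forms.

```powershell
# Sketch only: a SCIM PATCH similar to what Azure AD sends. Note the capitalized "Replace"
# op value, which the endpoint must match case-insensitively. URL, ID, and token are placeholders.
$body = @{
    schemas    = @("urn:ietf:params:scim:api:messages:2.0:PatchOp")
    Operations = @(
        @{ op = "Replace"; path = "displayName"; value = "B. Simon" }
    )
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Patch `
    -Uri "https://scim.example.com/scim/Users/<user-id>" `
    -Headers @{ Authorization = "Bearer <token>" } `
    -ContentType "application/scim+json" `
    -Body $body
```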
active-directory Msal Net Web Browsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-web-browsers.md
By default, MSAL.NET supports the system web browser on Xamarin.iOS and Xamarin.Android.
Using the system browser has the significant advantage of sharing the SSO state with other applications and with web applications without needing a broker (Company portal / Authenticator). The system browser was used, by default, in MSAL.NET for the Xamarin iOS and Xamarin Android platforms because, on these platforms, the system web browser occupies the whole screen, and the user experience is better. The system web view isn't distinguishable from a dialog. On iOS, though, the user might have to give consent for the browser to call back the application, which can be annoying.
-## System browser experience on .NET Core
+## System browser experience on .NET
On .NET Core, MSAL.NET will start the system browser as a separate process. MSAL.NET doesn't have control over this browser, but once the user finishes authentication, the web page is redirected in such a way that MSAL.NET can intercept the Uri.
-You can also configure apps written for .NET Classic to use this browser, by specifying
+You can also configure apps written for .NET Classic or .NET 5 to use this browser by specifying:
```csharp
await pca.AcquireTokenInteractive(s_scopes)
         .WithUseEmbeddedWebView(false)  // opt in to the system browser rather than an embedded web view
         .ExecuteAsync();
```
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
Azure Active Directory (Azure AD) allows you to restrict what external guest users can see in their organization in Azure AD. Guest users are set to a limited permission level by default in Azure AD, while the default for member users is the full set of default user permissions. This is a preview of a new guest user permission level in your Azure AD organization's external collaboration settings for even more restricted access, so your guest access choices now are:
-Permission level | Access level
-- |
-Same as member users | Guests have the same access to Azure AD resources as member users
-Limited access (default) | Guests can see membership of all non-hidden groups
-**Restricted access (new)** | **Guests can't see membership of any groups**
+Permission level | Access level | Value
+--- | --- | ---
+Same as member users | Guests have the same access to Azure AD resources as member users | a0b1b346-4d3e-4e8b-98f8-753987be4970
+Limited access (default) | Guests can see membership of all non-hidden groups | 10dae51f-b6af-4016-8d66-8c2a99b929b3
+**Restricted access (new)** | **Guests can't see membership of any groups** | **2af84b1e-32c8-42b7-82bc-daa82404023b**
When guest access is restricted, guests can view only their own user profile. Permission to view other users isn't allowed even if the guest is searching by User Principal Name or objectId. Restricted access also restricts guest users from seeing the membership of groups they're in. For more information about the overall default user permissions, including guest user permissions, see [What are the default user permissions in Azure Active Directory?](../fundamentals/users-default-permissions.md).
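For reference, the **Value** column in the table above is the GUID you pass as the guest user role ID when you set this level programmatically. A minimal sketch, assuming the AzureADPreview PowerShell module (the Microsoft Graph `authorizationPolicy` resource linked below exposes the same setting):

```powershell
# Sketch: switch guest users to the new restricted permission level.
# Assumes the AzureADPreview module; the GUID is the "Restricted access" value from the table above.
Connect-AzureAD
Set-AzureADMSAuthorizationPolicy -GuestUserRoleId '2af84b1e-32c8-42b7-82bc-daa82404023b'
```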
Are there any license requirements for this feature? | No, there are no new license requirements.
- To learn more about existing guest permissions in Azure AD, see [What are the default user permissions in Azure Active Directory?](../fundamentals/users-default-permissions.md)
- To see the Microsoft Graph API methods for restricting guest access, see [authorizationPolicy resource type](/graph/api/resources/authorizationpolicy)
+- To revoke all access for a user, see [Revoke user access in Azure AD](users-revoke-access.md)
active-directory Active Directory Data Storage Australia Newzealand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-data-storage-australia-newzealand.md
Azure Active Directory (Azure AD) stores its Customer Data in a geographical loc
For information about where Azure AD and other Microsoft services' data is located, see the [Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located) section of the Microsoft Trust Center.
-From February 26, 2020, Microsoft began storing Azure AD's Customer Data for new tenants with an Australian or New Zealand billing address within the Australian datacenters. Between May 1, 2020 and March 31, 2021, Microsoft will migrate existing tenants who have an Australian or New Zealand billing address to the Australian datacenters without requiring any customer action. The migration process doesn't involve any downtime for customers and won't impact any functionality of a tenant during the migration.
+From February 26, 2020, Microsoft began storing Azure AD's Customer Data for new tenants with an Australian or New Zealand billing address within the Australian datacenters.
Additionally, certain Azure AD features do not yet support storage of Customer Data in Australia. Please go to the [Azure AD data map](https://msit.powerbi.com/view?r=eyJrIjoiYzEyZTc5OTgtNTdlZS00ZTVkLWExN2ItOTM0OWU4NjljOGVjIiwidCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0NyIsImMiOjV9) for specific feature information. For example, Microsoft Azure AD Multi-Factor Authentication stores Customer Data in the US and processes it globally. See [Data residency and customer data for Azure AD Multi-Factor Authentication](../authentication/concept-mfa-data-residency.md).

> [!NOTE]
+> Microsoft products, services, and third-party applications that integrate with Azure AD have access to Customer Data. Evaluate each product, service, and application you use to determine how Customer Data is processed by that specific product, service, and application, and whether they meet your company's data storage requirements. For more information about Microsoft services' data residency, see the [Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located) section of the Microsoft Trust Center.
active-directory Active Directory Data Storage Australia https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-data-storage-australia.md
For customers who provided an address in Australia or New Zealand, Azure AD keep
- Azure AD Directory Management
- Authentication
-All other Azure AD services store customer data in global datacenters. To locate the datacenter for a service, see [Azure Active Directory – Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located)
+All other Azure AD services store customer data in global datacenters. To locate the datacenter for a service, see [Azure Active Directory – Where is your data located?](https://aka.ms/AADDataMap)
## Microsoft Azure AD Multi-Factor Authentication (MFA)
MFA stores Identity Customer Data in global datacenters. To learn more about the
## Next steps
For more information about any of the features and functionality described above, see these articles:
+- [What is Multi-Factor Authentication?](../authentication/concept-mfa-howitworks.md)
active-directory Admin Units Add Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-add-manage-groups.md
In the following example, use the `Add-AzureADMSAdministrativeUnitMember` cmdlet to add a group to an administrative unit.
```powershell
$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
$GroupObj = Get-AzureADGroup -Filter "displayname eq 'TestGroup'"
-Add-AzureADMSAdministrativeUnitMember -ObjectId $adminUnitObj.ObjectId -RefObjectId $GroupObj.ObjectId
+Add-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id -RefObjectId $GroupObj.ObjectId
```

### Use Microsoft Graph
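A minimal sketch of the equivalent Microsoft Graph call, assuming the Microsoft Graph PowerShell module and the `administrativeUnits` members endpoint; the object IDs are placeholders and the exact endpoint path is an assumption, not taken from this article:

```powershell
# Sketch only: add a group to an administrative unit through Microsoft Graph.
# Object IDs are placeholders; module and endpoint are assumptions.
Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All"

$adminUnitId = "<admin-unit-object-id>"
$groupId     = "<group-object-id>"

$body = @{ "@odata.id" = "https://graph.microsoft.com/v1.0/groups/$groupId" } | ConvertTo-Json

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directory/administrativeUnits/$adminUnitId/members/`$ref" `
    -Body $body -ContentType "application/json"
```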
To display a list of all the members of the administrative unit, run the following command:
```powershell
$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
-Get-AzureADMSAdministrativeUnitMember -ObjectId $adminUnitObj.ObjectId
+Get-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id
```

To display all the groups that are members of the administrative unit, use the following code snippet:

```powershell
-foreach ($member in (Get-AzureADMSAdministrativeUnitMember -ObjectId $adminUnitObj.ObjectId))
+foreach ($member in (Get-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id))
{
    if ($member.ObjectType -eq "Group")
    {
        # Assumption: the remainder of this snippet simply displays each group member.
        Get-AzureADGroup -ObjectId $member.ObjectId
    }
}
```
active-directory Admin Units Add Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-add-manage-users.md
In PowerShell, use the `Add-AzureADMSAdministrativeUnitMember` cmdlet in the following example to add a user to an administrative unit.
```powershell
$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
$userObj = Get-AzureADUser -Filter "UserPrincipalName eq 'bill@example.onmicrosoft.com'"
-Add-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.ObjectId -RefObjectId $userObj.ObjectId
+Add-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id -RefObjectId $userObj.ObjectId
```
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-assign-roles.md
Previously updated : 11/04/2020 Last updated : 04/14/2021
You can assign a scoped role by using the Azure portal, PowerShell, or Microsoft Graph.
```powershell
$adminUser = Get-AzureADUser -ObjectId "Use the user's UPN, who would be an admin on this unit"
-$role = Get-AzureADDirectoryRole | Where-Object -Property DisplayName -EQ -Value "User Account Administrator"
+$role = Get-AzureADDirectoryRole | Where-Object -Property DisplayName -EQ -Value "User Administrator"
$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'The display name of the unit'"
-$roleMember = New-Object -TypeName Microsoft.Open.AzureAD.Model.RoleMemberInfo
-$roleMember.ObjectId = $adminUser.ObjectId
-Add-AzureADMSScopedRoleMembership -ObjectId $adminUnitObj.ObjectId -RoleObjectId $role.ObjectId -RoleMemberInfo $roleMember
+$roleMember = New-Object -TypeName Microsoft.Open.MSGraph.Model.MsRoleMemberInfo
+$roleMember.Id = $adminUser.ObjectId
+Add-AzureADMSScopedRoleMembership -Id $adminUnitObj.Id -RoleId $role.ObjectId -RoleMemberInfo $roleMember
```

You can change the highlighted section as required for the specific environment.
You can view all the role assignments created with an administrative unit scope
```powershell
$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'The display name of the unit'"
-Get-AzureADMSScopedRoleMembership -ObjectId $adminUnitObj.ObjectId | fl *
+Get-AzureADMSScopedRoleMembership -Id $adminUnitObj.Id | fl *
```

You can change the highlighted section as required for your specific environment.
active-directory Admin Units Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-manage.md
In Azure AD, you can remove an administrative unit that you no longer need as a
```powershell
$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'DeleteMe Admin Unit'"
-Remove-AzureADMSAdministrativeUnit -ObjectId $adminUnitObj.ObjectId
+Remove-AzureADMSAdministrativeUnit -Id $adminUnitObj.Id
```

You can modify the values that are enclosed in quotation marks, as required for the specific environment.
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-concept.md
The following scenarios are not supported right now:
- Use the new [Exchange Admin Center](https://admin.exchange.microsoft.com/) for role assignments via group membership. The old Exchange Admin Center doesn't support this feature yet. Exchange PowerShell cmdlets will work as expected.
- Azure Information Protection Portal (the classic portal) doesn't recognize role membership via group yet. You can [migrate to the unified sensitivity labeling platform](/azure/information-protection/configure-policy-migrate-labels) and then use the Office 365 Security & Compliance center to use group assignments to manage roles.
- [Apps Admin Center](https://config.office.com/) doesn't support this feature yet. Assign users directly to Office Apps Administrator role.
+- [M365 Compliance Center](https://compliance.microsoft.com/) doesn't support this feature yet. Assign users directly to appropriate Azure AD roles to use this portal.
We are fixing these issues.
active-directory Codility Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/codility-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Codility | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Codility.
+Last updated: 04/02/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Codility
+
+In this tutorial, you'll learn how to integrate Codility with Azure Active Directory (Azure AD). When you integrate Codility with Azure AD, you can:
+
+* Control in Azure AD who has access to Codility.
+* Enable your users to be automatically signed-in to Codility with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Codility single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Codility supports **SP and IDP** initiated SSO.
+* Codility supports **Just In Time** user provisioning.
+
+## Adding Codility from the gallery
+
+To configure the integration of Codility into Azure AD, you need to add Codility from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Codility** in the search box.
+1. Select **Codility** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Codility
+
+Configure and test Azure AD SSO with Codility using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Codility.
+
+To configure and test Azure AD SSO with Codility, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Codility SSO](#configure-codility-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Codility test user](#create-codility-test-user)** - to have a counterpart of B.Simon in Codility that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Codility** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Reply URL** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.codility.net/social/complete/saml/`
+
+ b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.codility.net`
++
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.codility.net`
+
+ b. In the **Relay State** text box, type a value using the following pattern: `<UNIQUE_IDENTIFIER>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL, Identifier, Sign-on URL and Relay State. Contact [Codility Client support team](mailto:support@codility.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Codility** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
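If you prefer scripting to the portal, a rough equivalent using the AzureAD PowerShell module might look like the following sketch; the UPN domain and password are placeholders.

```powershell
# Sketch: create the B.Simon test user with the AzureAD PowerShell module (values are placeholders).
Connect-AzureAD

$passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$passwordProfile.Password = "<strong-password>"

New-AzureADUser -DisplayName "B.Simon" `
    -UserPrincipalName "B.Simon@contoso.com" `
    -MailNickName "BSimon" `
    -PasswordProfile $passwordProfile `
    -AccountEnabled $true
```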
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Codility.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Codility**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Codility SSO
+
+To configure single sign-on on the **Codility** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Codility support team](mailto:support@codility.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create Codility test user
+
+In this section, a user called Britta Simon is created in Codility. Codility supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Codility, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Codility Sign on URL where you can initiate the login flow.
+
+* Go to Codility Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Codility for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Codility tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Codility for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure Codility, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Cylanceprotect Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cylanceprotect-tutorial.md
Previously updated : 04/27/2020 Last updated : 03/24/2021
In this tutorial, you'll learn how to integrate CylancePROTECT with Azure Active Directory (Azure AD). When you integrate CylancePROTECT with Azure AD, you can:
* Enable your users to be automatically signed-in to CylancePROTECT with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* CylancePROTECT supports **IDP** initiated SSO
+* CylancePROTECT supports **IDP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-## Adding CylancePROTECT from the gallery
+## Add CylancePROTECT from the gallery
To configure the integration of CylancePROTECT into Azure AD, you need to add CylancePROTECT from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **CylancePROTECT** in the search box.
1. Select **CylancePROTECT** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for CylancePROTECT
+## Configure and test Azure AD SSO for CylancePROTECT
Configure and test Azure AD SSO with CylancePROTECT using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CylancePROTECT.
-To configure and test Azure AD SSO with CylancePROTECT, complete the following building blocks:
+To configure and test Azure AD SSO with CylancePROTECT, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure CylancePROTECT SSO](#configure-cylanceprotect-sso)** - to configure the single sign-on settings on application side.
- * **[Create CylancePROTECT test user](#create-cylanceprotect-test-user)** - to have a counterpart of B.Simon in CylancePROTECT that is linked to the Azure AD representation of user.
+ 1. **[Create CylancePROTECT test user](#create-cylanceprotect-test-user)** - to have a counterpart of B.Simon in CylancePROTECT that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **CylancePROTECT** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **CylancePROTECT** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)

1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
- a. In the **Identifier** textbox, type the URL:
+ a. In the **Identifier** textbox, type one of the following URLs:
| Region | URL Value |
|--------|-----------|
Follow these steps to enable Azure AD SSO in the Azure portal.
| North America | `https://login.cylance.com/EnterpriseLogin/ConsumeSaml` |
| South America (SAE1) | `https://login-sae1.cylance.com/EnterpriseLogin/ConsumeSaml` |
- b. In the **Reply URL** textbox, type the URL:
+ b. In the **Reply URL** textbox, type one of the following URLs:
| Region | URL Value |
|--------|-----------|
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **CylancePROTECT**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure CylancePROTECT SSO
In this section, you create a user called Britta Simon in CylancePROTECT. Work w
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the CylancePROTECT tile in the Access Panel, you should be automatically signed in to the CylancePROTECT for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the CylancePROTECT for which you set up the SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the CylancePROTECT tile in the My Apps, you should be automatically signed in to the CylancePROTECT for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try CylancePROTECT with Azure AD](https://aad.portal.azure.com/)
+Once you configure CylancePROTECT, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Holmes Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/holmes-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Holmes | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Holmes.
+Last updated: 04/06/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Holmes
+
+In this tutorial, you'll learn how to integrate Holmes with Azure Active Directory (Azure AD). When you integrate Holmes with Azure AD, you can:
+
+* Control in Azure AD who has access to Holmes.
+* Enable your users to be automatically signed-in to Holmes with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Holmes single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Holmes supports **SP and IDP** initiated SSO.
+
+## Adding Holmes from the gallery
+
+To configure the integration of Holmes into Azure AD, you need to add Holmes from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Holmes** in the search box.
+1. Select **Holmes** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Holmes
+
+Configure and test Azure AD SSO with Holmes using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Holmes.
+
+To configure and test Azure AD SSO with Holmes, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Holmes SSO](#configure-holmes-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Holmes test user](#create-holmes-test-user)** - to have a counterpart of B.Simon in Holmes that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Holmes** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ In the **Identifier** text box, type a URL using the following pattern:
+ `https://<WorkspaceID>.holmescloud.com`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://www.holmescloud.com/login`
+
+ > [!NOTE]
+ > The value is not real. Update the value with the actual Identifier. Contact [Holmes Client support team](mailto:team-dev@holmescloud.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Holmes** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Holmes.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Holmes**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
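The same assignment can also be scripted. A minimal sketch with the AzureAD PowerShell module follows; the UPN, display-name filter, and app role ID are placeholders or assumptions (the empty GUID works only when the app defines no app roles beyond Default Access).

```powershell
# Sketch: assign B.Simon to the Holmes enterprise application (values are placeholders).
Connect-AzureAD

$user = Get-AzureADUser -ObjectId "B.Simon@contoso.com"
$sp   = Get-AzureADServicePrincipal -Filter "displayName eq 'Holmes'"

New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId `
    -PrincipalId $user.ObjectId `
    -ResourceId $sp.ObjectId `
    -Id ([Guid]::Empty)   # assumption: the empty GUID maps to the Default Access role
```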
+
+## Configure Holmes SSO
+
+To configure single sign-on on the **Holmes** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Holmes support team](mailto:team-dev@holmescloud.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create Holmes test user
+
+In this section, you create a user called Britta Simon in Holmes. Work with [Holmes support team](mailto:team-dev@holmescloud.com) to add the users in the Holmes platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Holmes Sign on URL where you can initiate the login flow.
+
+* Go to Holmes Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Holmes for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Holmes tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Holmes for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Holmes, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory M Files Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/m-files-tutorial.md
Previously updated : 02/19/2019 Last updated : 03/24/2021

# Tutorial: Azure Active Directory integration with M-Files
-In this tutorial, you learn how to integrate M-Files with Azure Active Directory (Azure AD).
-Integrating M-Files with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate M-Files with Azure Active Directory (Azure AD). When you integrate M-Files with Azure AD, you can:
-* You can control in Azure AD who has access to M-Files.
-* You can enable your users to be automatically signed-in to M-Files (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to M-Files.
+* Enable your users to be automatically signed-in to M-Files with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with M-Files, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* M-Files single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* M-Files single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* M-Files supports **SP** initiated SSO
+* M-Files supports **SP** initiated SSO.
-## Adding M-Files from the gallery
+## Add M-Files from the gallery
To configure the integration of M-Files into Azure AD, you need to add M-Files from the gallery to your list of managed SaaS apps.
-**To add M-Files from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **M-Files**, select **M-Files** from result panel then click **Add** button to add the application.
-
- ![M-Files in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with M-Files based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in M-Files needs to be established.
-
-To configure and test Azure AD single sign-on with M-Files, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **M-Files** in the search box.
+1. Select **M-Files** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure M-Files Single Sign-On](#configure-m-files-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create M-Files test user](#create-m-files-test-user)** - to have a counterpart of Britta Simon in M-Files that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for M-Files
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with M-Files using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in M-Files.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with M-Files, perform the following steps:
-To configure Azure AD single sign-on with M-Files, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure M-Files SSO](#configure-m-files-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create M-Files test user](#create-m-files-test-user)** - to have a counterpart of B.Simon in M-Files that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **M-Files** application integration page, select **Single sign-on**.
+## Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **M-Files** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![M-Files Domain and URLs single sign-on information](common/sp-identifier.png)
- a. In the **Sign on URL** text box, type a URL using the following pattern: `https://<tenantname>.cloudvault.m-files.com/authentication/MFiles.AuthenticationProviders.Core/sso`
To configure Azure AD single sign-on with M-Files, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
- b. Azure AD Identifier
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- c. Logout URL
+### Assign the Azure AD test user
-### Configure M-Files Single Sign-On
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to M-Files.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **M-Files**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure M-Files SSO
1. To get SSO configured for your application, contact [M-Files support team](mailto:support@m-files.com) and provide them the downloaded Metadata.
To configure Azure AD single sign-on with M-Files, perform the following steps:
1. Open the **M-Files Desktop Settings** window. Then, click **Add**.
- ![Screenshot shows M-Files Desktop Settings where you can select Add.](./media/m-files-tutorial/tutorial_m_files_10.png)
+ ![Screenshot shows M-Files Desktop Settings where you can select Add.](./media/m-files-tutorial/settings.png)
1. On the **Document Vault Connection Properties** window, perform the following steps:
- ![Screenshot shows Document Vault Connection Properties where you can enter the values described.](./media/m-files-tutorial/tutorial_m_files_11.png)
+ ![Screenshot shows Document Vault Connection Properties where you can enter the values described.](./media/m-files-tutorial/general.png)
Under the Server section, type the values as follows:
To configure Azure AD single sign-on with M-Files, perform the following steps:
f. Click **OK**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to M-Files.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **M-Files**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **M-Files**.
-
- ![The M-Files link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
### Create M-Files test user

The objective of this section is to create a user called Britta Simon in M-Files. Work with [M-Files support team](mailto:support@m-files.com) to add the users in the M-Files.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the M-Files tile in the Access Panel, you should be automatically signed in to the M-Files for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal. This will redirect to M-Files Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to M-Files Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the M-Files tile in the My Apps, this will redirect to M-Files Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure M-Files, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Sds Chemical Information Management Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sds-chemical-information-management-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with SDS & Chemical Information Management | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and SDS & Chemical Information Management.
+Last updated: 04/05/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with SDS & Chemical Information Management
+
+In this tutorial, you'll learn how to integrate SDS & Chemical Information Management with Azure Active Directory (Azure AD). When you integrate SDS & Chemical Information Management with Azure AD, you can:
+
+* Control in Azure AD who has access to SDS & Chemical Information Management.
+* Enable your users to be automatically signed-in to SDS & Chemical Information Management with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SDS & Chemical Information Management single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* SDS & Chemical Information Management supports **SP** initiated SSO.
+
+* SDS & Chemical Information Management supports **Just In Time** user provisioning.
++
+## Adding SDS & Chemical Information Management from the gallery
+
+To configure the integration of SDS & Chemical Information Management into Azure AD, you need to add SDS & Chemical Information Management from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **SDS & Chemical Information Management** in the search box.
+1. Select **SDS & Chemical Information Management** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for SDS & Chemical Information Management
+
+Configure and test Azure AD SSO with SDS & Chemical Information Management using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SDS & Chemical Information Management.
+
+To configure and test Azure AD SSO with SDS & Chemical Information Management, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure SDS & Chemical Information Management SSO](#configure-sds--chemical-information-management-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create SDS & Chemical Information Management test user](#create-sds--chemical-information-management-test-user)** - to have a counterpart of B.Simon in SDS & Chemical Information Management that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **SDS & Chemical Information Management** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Identifier** box, type a URL using the following pattern:
+ `https://cs.cloudsds.com/saml/<ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://cs.cloudsds.com/saml/<ID>/consumeAssertion`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://cs.cloudsds.com/saml/<ID>/Login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-On URL. Contact [SDS & Chemical Information Management Client support team](mailto:info@cloudsds.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SDS & Chemical Information Management.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SDS & Chemical Information Management**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure SDS & Chemical Information Management SSO
+
+To configure single sign-on on the **SDS & Chemical Information Management** side, you need to send the **App Federation Metadata Url** to the [SDS & Chemical Information Management support team](mailto:info@cloudsds.com). The support team uses this information to configure the SAML SSO connection properly on both sides.
+
+### Create SDS & Chemical Information Management test user
+
+In this section, a user called Britta Simon is created in SDS & Chemical Information Management. SDS & Chemical Information Management supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in SDS & Chemical Information Management, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This redirects to the SDS & Chemical Information Management Sign-on URL, where you can initiate the login flow.
+
+* Go to the SDS & Chemical Information Management Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the SDS & Chemical Information Management tile in My Apps, you're redirected to the SDS & Chemical Information Management Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure SDS & Chemical Information Management, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Symantec Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/symantec-tutorial.md
Previously updated : 12/25/2018 Last updated : 03/24/2021 # Tutorial: Azure Active Directory integration with Symantec Web Security Service (WSS)
Integrating Symantec Web Security Service (WSS) with Azure AD provides you with
- Enable the enforcement of user and group level policy rules defined in your WSS account.
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
- ## Prerequisites
-To configure Azure AD integration with Symantec Web Security Service (WSS), you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Symantec Web Security Service (WSS) single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Symantec Web Security Service (WSS) single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Symantec Web Security Service (WSS) supports **IDP** initiated SSO
-
-## Adding Symantec Web Security Service (WSS) from the gallery
-
-To configure the integration of Symantec Web Security Service (WSS) into Azure AD, you need to add Symantec Web Security Service (WSS) from the gallery to your list of managed SaaS apps.
-
-**To add Symantec Web Security Service (WSS) from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* Symantec Web Security Service (WSS) supports **IDP** initiated SSO.
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Symantec Web Security Service (WSS)**, select **Symantec Web Security Service (WSS)** from result panel then click **Add** button to add the application.
-
- ![Symantec Web Security Service (WSS) in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Symantec Web Security Service (WSS) based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Symantec Web Security Service (WSS) needs to be established.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-To configure and test Azure AD single sign-on with Symantec Web Security Service (WSS), you need to complete the following building blocks:
+## Add Symantec Web Security Service (WSS) from the gallery
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **Configure Symantec Web Security Service (WSS) Single Sign-On** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Symantec Web Security Service (WSS) test user](#create-symantec-web-security-service-wss-test-user)** - to have a counterpart of Britta Simon in Symantec Web Security Service (WSS) that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of Symantec Web Security Service (WSS) into Azure AD, you need to add Symantec Web Security Service (WSS) from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Symantec Web Security Service (WSS)** in the search box.
+1. Select **Symantec Web Security Service (WSS)** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Symantec Web Security Service (WSS)
-To configure Azure AD single sign-on with Symantec Web Security Service (WSS), perform the following steps:
+Configure and test Azure AD SSO with Symantec Web Security Service (WSS) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Symantec Web Security Service (WSS).
-1. In the [Azure portal](https://portal.azure.com/), on the **Symantec Web Security Service (WSS)** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Symantec Web Security Service (WSS), perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Symantec Web Security Service (WSS) SSO](#configure-symantec-web-security-service-wss-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Symantec Web Security Service (WSS) test user](#create-symantec-web-security-service-wss-test-user)** - to have a counterpart of B.Simon in Symantec Web Security Service (WSS) that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Symantec Web Security Service (WSS)** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** dialog, perform the following steps:
- ![Symantec Web Security Service (WSS) Domain and URLs single sign-on information](common/idp-intiated.png)
-
- a. In the **Identifier** text box, type a URL:
+ a. In the **Identifier** text box, type the URL:
`https://saml.threatpulse.net:8443/saml/saml_realm`
- b. In the **Reply URL** text box, type a URL:
+ b. In the **Reply URL** text box, type the URL:
`https://saml.threatpulse.net:8443/saml/saml_realm/bcsamlpost` > [!NOTE]
To configure Azure AD single sign-on with Symantec Web Security Service (WSS), p
![The Certificate download link](common/metadataxml.png)
-### Configure Symantec Web Security Service (WSS) Single Sign-On
-
-To configure single sign-on on the Symantec Web Security Service (WSS) side, refer to the WSS online documentation. The downloaded **Federation Metadata XML** will need to be imported into the WSS portal. Contact the [Symantec Web Security Service (WSS) support team](https://www.symantec.com/contact-us) if you need assistance with the configuration on the WSS portal.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Symantec Web Security Service (WSS).
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Symantec Web Security Service (WSS)**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Symantec Web Security Service (WSS).
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Symantec Web Security Service (WSS)**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-2. In the applications list, type and select **Symantec Web Security Service (WSS)**.
+## Configure Symantec Web Security Service (WSS) SSO
- ![The Symantec Web Security Service (WSS) link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the Symantec Web Security Service (WSS) side, refer to the WSS online documentation. The downloaded **Federation Metadata XML** will need to be imported into the WSS portal. Contact the [Symantec Web Security Service (WSS) support team](https://www.symantec.com/contact-us) if you need assistance with the configuration on the WSS portal.
### Create Symantec Web Security Service (WSS) test user
In this section, you create a user called Britta Simon in Symantec Web Security
> [!NOTE] > Please click [here](https://www.bing.com/search?q=my+ip+address&qs=AS&pq=my+ip+a&sc=8-7&cvid=29A720C95C78488CA3F9A6BA0B3F98C5&FORM=QBLH&sp=1) to get your machine's public IP address.
-### Test single sign-on
-
-In this section, you'll test the single sign-on functionality now that you've configured your WSS account to use your Azure AD for SAML authentication.
+## Test SSO
-After you have configured your web browser to proxy traffic to WSS, when you open your web browser and try to browse to a site then you'll be redirected to the Azure sign-on page. Enter the credentials of the test end user that has been provisioned in the Azure AD (that is, BrittaSimon) and associated password. Once authenticated, you'll be able to browse to the website that you chose. Should you create a policy rule on the WSS side to block BrittaSimon from browsing to a particular site then you should see the WSS block page when you attempt to browse to that site as user BrittaSimon.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Symantec Web Security Service (WSS) for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Symantec Web Security Service (WSS) tile in My Apps, you should be automatically signed in to the Symantec Web Security Service (WSS) for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Symantec Web Security Service (WSS), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Workfront Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workfront-tutorial.md
Previously updated : 04/03/2019 Last updated : 03/23/2021 # Tutorial: Azure Active Directory integration with Workfront
-In this tutorial, you learn how to integrate Workfront with Azure Active Directory (Azure AD).
-Integrating Workfront with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Workfront with Azure Active Directory (Azure AD). When you integrate Workfront with Azure AD, you can:
-* You can control in Azure AD who has access to Workfront.
-* You can enable your users to be automatically signed-in to Workfront (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Workfront.
+* Enable your users to be automatically signed-in to Workfront with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with Workfront, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Workfront single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Workfront single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Workfront supports **SP** initiated SSO
+* Workfront supports **SP** initiated SSO.
-## Adding Workfront from the gallery
+## Add Workfront from the gallery
To configure the integration of Workfront into Azure AD, you need to add Workfront from the gallery to your list of managed SaaS apps.
-**To add Workfront from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Workfront**, select **Workfront** from result panel then click **Add** button to add the application.
-
- ![Workfront in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Workfront based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Workfront needs to be established.
-
-To configure and test Azure AD single sign-on with Workfront, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Workfront** in the search box.
+1. Select **Workfront** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Workfront Single Sign-On](#configure-workfront-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Workfront test user](#create-workfront-test-user)** - to have a counterpart of Britta Simon in Workfront that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for Workfront
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with Workfront using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Workfront.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with Workfront, perform the following steps:
-To configure Azure AD single sign-on with Workfront, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Workfront SSO](#configure-workfront-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Workfront test user](#create-workfront-test-user)** - to have a counterpart of B.Simon in Workfront that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **Workfront** application integration page, select **Single sign-on**.
+## Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **Workfront** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Workfront Domain and URLs single sign-on information](common/sp-identifier.png)
- a. In the **Sign on URL** text box, type a URL using the following pattern: `https://<companyname>.attask-ondemand.com`
To configure Azure AD single sign-on with Workfront, perform the following steps
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- b. Azure AD Identifier
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Workfront.
- c. Logout URL
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Workfront**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure Workfront Single Sign-On
+## Configure Workfront SSO
1. Sign on to your Workfront company site as an administrator.
To configure Azure AD single sign-on with Workfront, perform the following steps
3. On the **Single Sign-On** dialog, perform the following steps
- ![Configure Single Sign-On][23]
+ ![Configure Single Sign-On](./media/workfront-tutorial/single-sign-on.png)
a. As **Type**, select **SAML 2.0**.
To configure Azure AD single sign-on with Workfront, perform the following steps
f. Click **Save**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Workfront.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Workfront**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Workfront**.
-
- ![The Workfront link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
- ### Create Workfront test user The objective of this section is to create a user called Britta Simon in Workfront.
The objective of this section is to create a user called Britta Simon in Workfro
4. On the New Person dialog, perform the following steps:
- ![Create an Workfront test user][21]
+ ![Create a Workfront test user](./media/workfront-tutorial/add-person.png)
a. In the **First Name** textbox, type "Britta."
The objective of this section is to create a user called Britta Simon in Workfro
d. Click **Add Person**.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Workfront tile in the Access Panel, you should be automatically signed in to the Workfront for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+## Test SSO
-## Additional Resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal. This redirects to the Workfront Sign-on URL, where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* Go to the Workfront Sign-on URL directly and initiate the login flow from there.
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the Workfront tile in My Apps, you're redirected to the Workfront Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-<!--Image references-->
+## Next steps
-[21]:./media/workfront-tutorial/tutorial_attask_08.png
-[23]:./media/workfront-tutorial/tutorial_attask_06.png
+Once you configure Workfront, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/aks-migration.md Binary files differ
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-azure-cni.md Binary files differ
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
aks Servicemesh Osm About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/servicemesh-osm-about.md Binary files differ
api-management Api Management Get Started Publish Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-get-started-publish-versions.md Binary files differ
api-management Api Management Get Started Revise Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-get-started-revise-api.md Binary files differ
api-management Mock Api Responses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/mock-api-responses.md Binary files differ
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
api-management Visual Studio Code Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/visual-studio-code-tutorial.md
To test the imported API you imported and the policies that are applied, you nee
### Test an API operation 1. In the Explorer pane, expand the **Operations** node under the *demo-conference-api* that you imported.
-1. Select an operation such as *GetSpeakers*.
+1. Select an operation such as *GetSpeakers*, and then right-click the operation and select **Test Operation**.
1. In the editor window, next to **Ocp-Apim-Subscription-Key**, replace `{{SubscriptionKey}}` with the subscription key that you copied. 1. Select **Send request**.
This tutorial introduced several features of the API Management Extension for Vi
> * Apply API Management policies > * Test the API
-The API Management Extension provides additional features to work with your APIs. For example, [debug polices](api-management-debug-policies.md) (available in the Developer service tier), or create and manage [named values](api-management-howto-properties.md).
+The API Management Extension provides additional features to work with your APIs. For example, you can [debug policies](api-management-debug-policies.md) (available in the Developer service tier), or create and manage [named values](api-management-howto-properties.md).
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-rest-api.md Binary files differ
app-service Configure Linux Open Ssh Session https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-linux-open-ssh-session.md Binary files differ
app-service Deploy Ci Cd Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-ci-cd-custom-container.md Binary files differ
app-service Deploy Configure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-configure-credentials.md Binary files differ
app-service Management Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/management-addresses.md Binary files differ
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
app-service Quickstart Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-ruby.md Binary files differ
app-service Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/samples-cli.md Binary files differ
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scenario-secure-app-access-storage.md Binary files differ
app-service Cli Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/cli-backup-restore.md Binary files differ
app-service Cli Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/cli-configure-custom-domain.md Binary files differ
app-service Cli Connect To Documentdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/cli-connect-to-documentdb.md Binary files differ
app-service Cli Connect To Redis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/cli-connect-to-redis.md Binary files differ
app-service Cli Connect To Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/cli-connect-to-storage.md Binary files differ
app-service Cli Continuous Deployment Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/cli-continuous-deployment-github.md Binary files differ
app-service Cli Linux Acr Aspnetcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/cli-linux-acr-aspnetcore.md Binary files differ
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-java-spring-cosmosdb.md Binary files differ
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
attestation Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-powershell.md
In order to manage policies, an Azure AD user requires the following permissions
- Microsoft.Attestation/attestationProviders/attestation/write - Microsoft.Attestation/attestationProviders/attestation/delete
-These permissions can be assigned to an AD user through a role such as "Owner" (wildcard permissions), "Contributor" (wildcard permissions) or "Attestation Contributor" (specific permissions for Azure Attestation only).
+To perform these actions, an Azure AD user must have the "Attestation Contributor" role on the attestation provider. These permissions can also be inherited through roles such as "Owner" or "Contributor" (wildcard permissions) at the subscription or resource group level.
In order to read policies, an Azure AD user requires the following permission for "Actions": - Microsoft.Attestation/attestationProviders/attestation/read
-This permission can be assigned to an AD user through a role such as "Reader" (wildcard permissions) or "Attestation Reader" (specific permissions for Azure Attestation only).
+To perform this action, an Azure AD user must have the "Attestation Reader" role on the attestation provider. The read permission can also be inherited through roles such as "Reader" (wildcard permissions) at the subscription or resource group level.
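+
+As an illustration only (a minimal sketch, not part of the quickstart itself; the user, subscription, resource group, and provider names are placeholders), the "Attestation Reader" role could be granted at the scope of a single attestation provider with Azure PowerShell:
+
+```powershell
+# Grant the Attestation Reader role on one attestation provider (all names below are placeholders)
+New-AzRoleAssignment -SignInName "user@contoso.com" `
+  -RoleDefinitionName "Attestation Reader" `
+  -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Attestation/attestationProviders/<provider-name>"
+```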
The PowerShell cmdlets below provide policy management for an attestation provider (one TEE at a time).
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-security-overview.md
description: This article provides an overview of Azure Automation account authe
keywords: automation security, secure automation; automation authentication Previously updated : 02/26/2021 Last updated : 04/08/2021
-# Automation account authentication overview
+# Azure Automation account authentication overview
Azure Automation allows you to automate tasks against resources in Azure, on-premises, and with other cloud providers such as Amazon Web Services (AWS). You can use runbooks to automate your tasks, or a Hybrid Runbook Worker if you have business or operational processes to manage outside of Azure. Working in any one of these environments requires permissions to securely access the resources with the minimal rights required.
The Automation resources for each Automation account are associated with a singl
All tasks that you create against resources using Azure Resource Manager and the PowerShell cmdlets in Azure Automation must authenticate to Azure using Azure Active Directory (Azure AD) organizational identity credential-based authentication.
+## Managed identities (Preview)
+
+A managed identity from Azure Active Directory (Azure AD) allows your runbook to easily access other Azure AD-protected resources. The identity is managed by the Azure platform and does not require you to provision or rotate any secrets. For more information about managed identities in Azure AD, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview).
+
+Here are some of the benefits of using managed identities:
+
+- You can use managed identities to authenticate to any Azure service that supports Azure AD authentication.
+
+- Managed identities can be used without any additional cost.
+
+- You don't have to renew the certificate used by the Automation Run As account.
+
+- You don't have to specify the Run As connection object in your runbook code. You can access resources using your Automation account's managed identity from a runbook without creating certificates, connections, Run As accounts, etc.
+
+An Automation account can be granted two types of identities:
+
+- A system-assigned identity is tied to your application and is deleted if your app is deleted. An app can only have one system-assigned identity.
+
+- A user-assigned identity is a standalone Azure resource that can be assigned to your app. An app can have multiple user-assigned identities.
+
+>[!NOTE]
+> User assigned identities are not supported yet.
+
+For details on using managed identities, see [Enable managed identity for Azure Automation (Preview)](enable-managed-identity-for-automation.md).
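+
+For illustration, a system-assigned identity appears on the Automation account resource as an `identity` block in its Azure Resource Manager representation. The following is a minimal sketch; the `principalId` and `tenantId` GUIDs shown are placeholders that Azure generates for you.
+
+```json
+"identity": {
+    "type": "SystemAssigned",
+    "principalId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+    "tenantId": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
+}
+```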
+ ## Run As accounts Run As accounts in Azure Automation provide authentication for managing Azure Resource Manager resources or resources deployed on the classic deployment model. There are two types of Run As accounts in Azure Automation:
In a situation where you have separation of duties, the following table shows a
<sup>1</sup> Non-administrator users in your Azure AD tenant can [register AD applications](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app) if the Azure AD tenant's **Users can register applications** option on the **User settings** page is set to **Yes**. If the application registration setting is **No**, the user performing this action must be as defined in this table.
-If you aren't a member of the subscription's Active Directory instance before you're added to the Global Administrator role of the subscription, you're added as a guest. In this situation, you receive a `You do not have permissions to create…` warning on the **Add Automation Account** page.
+If you aren't a member of the subscription's Active Directory instance before you're added to the Global Administrator role of the subscription, you're added as a guest. In this situation, you receive a `You do not have permissions to create…` warning on the **Add Automation account** page.
To verify that the situation producing the error message has been remedied:
For runbooks that use Hybrid Runbook Workers on Azure VMs, you can use [runbook
* To create an Automation account from the Azure portal, see [Create a standalone Azure Automation account](automation-create-standalone-account.md). * If you prefer to create your account using a template, see [Create an Automation account using an Azure Resource Manager template](quickstart-create-automation-account-template.md).
-* For authentication using Amazon Web Services, see [Authenticate runbooks with Amazon Web Services](automation-config-aws-account.md).
+* For authentication using Amazon Web Services, see [Authenticate runbooks with Amazon Web Services](automation-config-aws-account.md).
+* For a list of Azure services that support the managed identities for Azure resources feature, see [Services that support managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities).
automation Disable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/disable-managed-identity-for-automation.md
+
+ Title: Disable your Azure Automation account managed identity (Preview)
+description: This article explains how to disable and remove a managed identity for an Azure Automation account.
+ Last updated : 04/04/2021
+# Disable your Azure Automation account managed identity (Preview)
+
+There are two ways to disable a system-assigned identity in Azure Automation. You can complete this task from the Azure portal, or by using an Azure Resource Manager (ARM) template.
+
+## Disable managed identity in the Azure portal
+
+You can disable the managed identity from the Azure portal no matter how the managed identity was originally set up.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Automation account and select **Identity** under **Account Settings**.
+
+1. Set the **System assigned** option to **Off** and press **Save**. When you're prompted to confirm, press **Yes**.
+
+The managed identity is removed and no longer has access to the target resource.
+
+## Disable using Azure Resource Manager template
+
+If you created the managed identity for your Automation account using an Azure Resource Manager template, you can disable the managed identity by reusing that template and modifying its settings. Set the type of the identity object's child property to **None** as shown in the following example, and then re-run the template.
+
+```json
+"identity": {
+ "type": "None"
+}
+```
+
+Removing a system-assigned identity using this method also deletes it from Azure AD. System-assigned identities are also automatically removed from Azure AD when the app resource that they are assigned to is deleted.
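+
+For context, here is a minimal sketch of where that `identity` property sits on the Automation account resource in a template. The account name, location, API version, and SKU shown are placeholders or assumptions, not values taken from your own deployment.
+
+```json
+{
+    "type": "Microsoft.Automation/automationAccounts",
+    "apiVersion": "2020-01-13-preview",
+    "name": "automation-account-name",
+    "location": "[resourceGroup().location]",
+    "identity": {
+        "type": "None"
+    },
+    "properties": {
+        "sku": {
+            "name": "Basic"
+        }
+    }
+}
+```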
+
+## Next steps
+
+- For more information about enabling managed identity in Azure Automation, see [Enable and use managed identity for Automation (Preview)](enable-managed-identity-for-automation.md).
+
+- For an overview of Automation account security, see [Automation account authentication overview](automation-security-overview.md).
automation Enable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/enable-managed-identity-for-automation.md
+
+ Title: Enable a managed identity for your Azure Automation account (Preview)
+description: This article describes how to set up managed identity for Azure Automation accounts.
+ Last updated : 04/09/2021
+# Enable a managed identity for your Azure Automation account (Preview)
+
+This topic shows you how to create a managed identity for an Azure Automation account and how to use it to access other resources. For more information on how managed identity works with Azure Automation, see [Managed identities](automation-security-overview.md#managed-identities-preview).
+
+## Prerequisites
+
+- An Azure account and subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. Both the managed identity and the target Azure resources that your runbook manages using that identity must be in the same Azure subscription.
+
+- The latest version of Azure Automation account modules.
+
+- An Azure resource that you want to access from your Automation runbook. This resource needs to have a role defined for the managed identity, which helps the Automation runbook authenticate access to the resource. To add roles, you need to be an owner for the resource in the corresponding Azure AD tenant.
+
+- If you want to execute hybrid jobs using a managed identity, update the Hybrid Runbook Worker to the latest version. The minimum required versions are:
+
+ - Windows Hybrid Runbook Worker: version 7.3.1125.0
+ - Linux Hybrid Runbook Worker: version 1.7.4.0
+
+## Enable system-assigned identity
+
+>[!NOTE]
+>User-assigned identities are not supported yet.
+
+Setting up a system-assigned identity for Azure Automation can be done in one of two ways: through the Azure portal or through the Azure REST API.
+
+### Enable system-assigned identity in Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Automation account and select **Identity** under **Account Settings**.
+
+1. Set the **System assigned** option to **On** and press **Save**. When you're prompted to confirm, select **Yes**.
++
+Your Automation account can now use the system-assigned identity, which is registered with Azure Active Directory (Azure AD) and is represented by an object ID.
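+
+If you want to confirm that registration, you can look up the corresponding service principal in Azure AD by its object ID. This is an optional check, sketched here with a placeholder GUID:
+
+```powershell
+# Look up the service principal that represents the Automation account's managed identity (GUID is a placeholder)
+Get-AzADServicePrincipal -ObjectId "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
+```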
++
+### Enable system-assigned identity through the REST API
+
+You can configure a system-assigned managed identity for the Automation account by using the following REST API call.
+
+```http
+PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name?api-version=2020-01-13-preview
+```
+
+Request body
+```json
+{
+ "identity":
+ {
+ "type": "SystemAssigned"
+ }
+}
+```
+
+The response includes the identity assigned to the account:
+
+```json
+{
+ "name": "automation-account-name",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name",
+ .
+ .
+ "identity": {
+ "type": "SystemAssigned",
+ "principalId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+ "tenantId": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
+ },
+.
+.
+}
+```
+
+|Property (JSON) | Value | Description|
+|-|--||
+| principalid | \<principal-ID\> | The Globally Unique Identifier (GUID) of the service principal object for the managed identity that represents your Automation account in the Azure AD tenant. This GUID sometimes appears as an "object ID" or objectID. |
+| tenantid | \<Azure-AD-tenant-ID\> | The Globally Unique Identifier (GUID) that represents the Azure AD tenant where the Automation account is now a member. Inside the Azure AD tenant, the service principal has the same name as the Automation account. |
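+
+To confirm the assignment later, a plain GET against the same endpoint returns the account together with its `identity` block. The subscription, resource group, and account names below are placeholders:
+
+```http
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name?api-version=2020-01-13-preview
+```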
+
+## Give identity access to Azure resources by obtaining a token
+
+An Automation account can use its managed identity to get tokens to access other resources protected by Azure AD, such as Azure Key Vault. These tokens do not represent any specific user of the application. Instead, they represent the application that's accessing the resource. For example, in this case, the token represents an Automation account.
+
+Before you can use your system-assigned managed identity for authentication, set up access for that identity on the Azure resource where you plan to use the identity. To complete this task, assign the appropriate role to that identity on the target Azure resource.
+
+This example uses Azure PowerShell to assign the Contributor role at the subscription scope to the Automation account's managed identity.
+
+```powershell
+New-AzRoleAssignment -ObjectId <automation-Identity-object-id> -Scope "/subscriptions/<subscription-id>" -RoleDefinitionName "Contributor"
+```
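+
+To confirm the assignment, you can optionally list the role assignments for the identity's object ID at the same scope:
+
+```powershell
+# List role assignments for the Automation account's managed identity at the subscription scope
+Get-AzRoleAssignment -ObjectId <automation-Identity-object-id> -Scope "/subscriptions/<subscription-id>"
+```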
+
+## Authenticate access with managed identity
+
+After you enable the managed identity for your Automation account and give the identity access to the target resource, you can specify that identity in runbooks against resources that support managed identities. For identity support, use the `Connect-AzAccount` cmdlet from the Az module. See [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) in the PowerShell reference.
+
+```powershell
+Connect-AzAccount -Identity
+```
+
+>[!NOTE]
+>If your organization is still using the deprecated AzureRM cmdlets, you can use `Connect-AzureRMAccount -Identity`.
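+
+As a minimal sketch, a runbook that authenticates with the system-assigned identity and then targets a specific subscription might look like the following. The use of `Set-AzContext` and the subscription ID placeholder are for illustration only.
+
+```powershell
+# Authenticate to Azure with the Automation account's system-assigned identity
+Connect-AzAccount -Identity
+
+# If the identity has access to more than one subscription, select the one to work in
+Set-AzContext -Subscription "<subscription-id>"
+
+# Example call against resources the identity has been granted access to
+Get-AzResourceGroup | Select-Object ResourceGroupName, Location
+```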
+
+## Generate an access token without using Azure cmdlets
+
+For HTTP endpoints, make sure of the following:
+- The `Metadata` header must be present and set to "true".
+- A resource must be passed along with the request, as a query parameter for a GET request and as form data for a POST request.
+- The `X-IDENTITY-HEADER` header must be set to the value of the `IDENTITY_HEADER` environment variable for Hybrid Runbook Workers.
+- The Content-Type for the POST request must be 'application/x-www-form-urlencoded'.
+
+### Sample GET request
+
+```powershell
+$resource= "?resource=https://management.azure.com/"
+$url = $env:IDENTITY_ENDPOINT + $resource
+$Headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
+$Headers.Add("X-IDENTITY-HEADER", $env:IDENTITY_HEADER)
+$Headers.Add("Metadata", "True")
+$accessToken = Invoke-RestMethod -Uri $url -Method 'GET' -Headers $Headers
+Write-Output $accessToken.access_token
+```
+
+### Sample POST request
+```powershell
+$url = $env:IDENTITY_ENDPOINT
+$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
+$headers.Add("X-IDENTITY-HEADER", $env:IDENTITY_HEADER)
+$headers.Add("Metadata", "True")
+$body = @{resource='https://management.azure.com/' }
+$accessToken = Invoke-RestMethod $url -Method 'POST' -Headers $headers -ContentType 'application/x-www-form-urlencoded' -Body $body
+Write-Output $accessToken.access_token
+```
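+
+Once you have the token, you can pass it as a bearer token in the `Authorization` header of later requests. The following sketch calls the Azure Resource Manager subscriptions list endpoint; the API version shown is an assumption, and any ARM endpoint the identity has access to would work the same way.
+
+```powershell
+# Use the access token acquired above as a bearer token against Azure Resource Manager
+$armHeaders = @{ Authorization = "Bearer $($accessToken.access_token)" }
+$subscriptions = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions?api-version=2020-01-01" -Method 'GET' -Headers $armHeaders
+
+# List the subscriptions visible to the managed identity
+$subscriptions.value | Select-Object displayName, subscriptionId
+```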
+
+## Sample runbooks using managed identity
+
+### Sample runbook to access a SQL database without using Azure cmdlets
+
+```powershell
+$queryParameter = "?resource=https://database.windows.net/"
+$url = $env:IDENTITY_ENDPOINT + $queryParameter
+$Headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
+$Headers.Add("X-IDENTITY-HEADER", $env:IDENTITY_HEADER)
+$Headers.Add("Metadata", "True")
+$content =[System.Text.Encoding]::Default.GetString((Invoke-WebRequest -UseBasicParsing -Uri $url -Method 'GET' -Headers $Headers).RawContentStream.ToArray()) | ConvertFrom-Json
+$Token = $content.access_token
+echo "The managed identities for Azure resources access token is $Token"
+$SQLServerName = "<ServerName>" # Azure SQL logical server name
+$DatabaseName = "<DBname>" # Azure SQL database name
+Write-Host "Create SQL connection string"
+$conn = New-Object System.Data.SqlClient.SQLConnection
+$conn.ConnectionString = "Data Source=$SQLServerName.database.windows.net;Initial Catalog=$DatabaseName;Connect Timeout=30"
+$conn.AccessToken = $Token
+Write-host "Connect to database and execute SQL script"
+$conn.Open()
+$ddlstmt = "CREATE TABLE Person( PersonId INT IDENTITY PRIMARY KEY, FirstName NVARCHAR(128) NOT NULL)"
+Write-host " "
+Write-host "SQL DDL command"
+$ddlstmt
+$command = New-Object -TypeName System.Data.SqlClient.SqlCommand($ddlstmt, $conn)
+Write-host "results"
+$command.ExecuteNonQuery()
+$conn.Close()
+```
+
+### Sample runbook to access a key vault using Azure cmdlets
+
+```powershell
+Write-Output "Connecting to azure via Connect-AzAccount -Identity"
+Connect-AzAccount -Identity
+Write-Output "Successfully connected with Automation account's Managed Identity"
+Write-Output "Trying to fetch value from key vault using MI. Make sure you have given correct access to Managed Identity"
+$secret = Get-AzKeyVaultSecret -VaultName '<KVname>' -Name '<KeyName>'
+
+$ssPtr = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secret.SecretValue)
+try {
+ $secretValueText = [System.Runtime.InteropServices.Marshal]::PtrToStringBSTR($ssPtr)
+ Write-Output $secretValueText
+} finally {
+ [System.Runtime.InteropServices.Marshal]::ZeroFreeBSTR($ssPtr)
+}
+```
+
+### Sample Python runbook to get a token
+
+```python
+#!/usr/bin/env python3
+import os
+import requests
+# build the token request URL from the identity endpoint environment variable
+endPoint = os.getenv('IDENTITY_ENDPOINT')+"?resource=https://management.azure.com/"
+identityHeader = os.getenv('IDENTITY_HEADER')
+payload={}
+headers = {
+ 'X-IDENTITY-HEADER': identityHeader,
+ 'Metadata': 'True'
+}
+response = requests.request("GET", endPoint, headers=headers, data=payload)
+print(response.text)
+```
+
+## Next steps
+
+- If you need to disable a managed identity, see [Disable your Azure Automation account managed identity (Preview)](disable-managed-identity-for-automation.md).
+
+- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021 #
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/quickstart-connect-cluster.md Binary files differ
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md Binary files differ
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md Binary files differ
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-arc Troubleshoot Agent Onboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/troubleshoot-agent-onboard.md
Title: Troubleshoot Azure Arc enabled servers agent connection issues description: This article tells how to troubleshoot and resolve issues with the Connected Machine agent that arise with Azure Arc enabled servers when trying to connect to the service. Previously updated : 09/02/2020 Last updated : 04/12/2021
-# Troubleshoot the Connected Machine agent connection issues
+# Troubleshoot Azure Arc enabled servers agent connection issues
This article provides information on troubleshooting and resolving issues that may occur while attempting to configure the Azure Arc enabled servers Connected Machine agent for Windows or Linux. Both the interactive and at-scale installation methods when configuring connection to the service are included. For general information, see [Arc enabled servers overview](./overview.md).
+## Agent error codes
+
+If you receive an error when configuring the Azure Arc enabled servers agent, the following table can help you identify the probable cause and suggested steps to resolve your problem. To proceed, you'll need the `AZCM` error code (of the form `AZCM0000`, where "0000" can be any four-digit number) printed to the console or script output.
+
+| Error code | Probable cause | Suggested remediation |
+|---|---|---|
+| AZCM0000 | The action was successful | N/A |
+| AZCM0001 | An unknown error occurred | Contact Microsoft Support for further assistance |
+| AZCM0011 | The user canceled the action (CTRL+C) | Retry the previous command |
+| AZCM0012 | The access token provided is invalid | Obtain a new access token and try again |
+| AZCM0013 | The tags provided are invalid | Check that the tags are enclosed in double quotes, separated by commas, and that any names or values with spaces are enclosed in single quotes: `--tags "SingleName='Value with spaces',Location=Redmond"` |
+| AZCM0014 | The cloud is invalid | Specify a supported cloud: `AzureCloud` or `AzureUSGovernment` |
+| AZCM0015 | The correlation ID specified is not a valid GUID | Provide a valid GUID for `--correlation-id` |
+| AZCM0016 | Missing a mandatory parameter | Review the output to identify which parameters are missing |
+| AZCM0017 | The resource name is invalid | Specify a name that only uses alphanumeric characters, hyphens and/or underscores. The name cannot end with a hyphen or underscore. |
+| AZCM0018 | The command was executed without administrative privileges | Retry the command with administrator or root privileges in an elevated command prompt or console session. |
+| AZCM0041 | The credentials supplied are invalid | For device logins, verify the user account specified has access to the tenant and subscription where the server resource will be created. For service principal logins, check the client ID and secret for correctness, the expiration date of the secret, and that the service principal is from the same tenant where the server resource will be created. |
+| AZCM0042 | Creation of the Arc enabled server resource failed | Verify that the user/service principal specified has access to create Arc enabled server resources in the specified resource group. |
+| AZCM0043 | Deletion of the Arc enabled server resource failed | Verify that the user/service principal specified has access to delete Arc enabled server resources in the specified resource group. If the resource no longer exists in Azure, use the `--force-local-only` flag to proceed. |
+| AZCM0044 | A resource with the same name already exists | Specify a different name for the `--resource-name` parameter or delete the existing Arc enabled server in Azure and try again. |
+| AZCM0061 | Unable to reach the agent service | Verify you are running the command in an elevated user context (administrator/root) and that the HIMDS service is running on your server. |
+| AZCM0062 | An error occurred while connecting the server | Review other error codes in the output for more specific information. If the error occurred after the Azure resource was created, you need to delete the Arc server from your resource group before retrying. |
+| AZCM0063 | An error occurred while disconnecting the server | Review other error codes in the output for more specific information. If you continue to encounter this error, you can delete the resource in Azure and then run `azcmagent disconnect --force-local-only` on the server to disconnect the agent. |
+| AZCM0064 | The agent service is not responding | Check the status of the `himds` service to ensure it is running. Start the service if it is not running. If it is running, wait a minute then try again. |
+| AZCM0065 | An internal agent communication error occurred | Contact Microsoft Support for assistance |
+| AZCM0066 | The agent web service is not responding or unavailable | Contact Microsoft Support for assistance |
+| AZCM0067 | The agent is already connected to Azure | Follow the steps in [disconnect the agent](manage-agent.md#unregister-machine) first, then try again. |
+| AZCM0068 | An internal error occurred while disconnecting the server from Azure | Contact Microsoft Support for assistance |
+| AZCM0081 | An error occurred while downloading the Azure Active Directory managed identity certificate | If this message is encountered while attempting to connect the server to Azure, the agent won't be able to communicate with the Azure Arc service. Delete the resource in Azure and try connecting again. |
+| AZCM0101 | The command was not parsed successfully | Run `azcmagent <command> --help` to review the correct command syntax |
+| AZCM0102 | Unable to retrieve the computer hostname | Run `hostname` to check for any system-level error messages, then contact Microsoft Support. |
+| AZCM0103 | An error occurred while generating RSA keys | Contact Microsoft Support for assistance |
+| AZCM0104 | Failed to read system information | Verify the identity used to run `azcmagent` has administrator/root privileges on the system and try again. |
 ## Agent verbose log Before following the troubleshooting steps described later in this article, the minimum information you need is the verbose log. It contains the output of the **azcmagent** tool commands when the verbose (-v) argument is used. The log files are written to `%ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log` on Windows, and to `/var/opt/azcmagent/log/azcmagent.log` on Linux.
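+
+For example, a hedged sketch of rerunning the agent with verbose output and then reviewing the tail of the Windows log might look like the following; the `azcmagent connect` parameters shown are placeholders and just one possible combination.
+
+```powershell
+# Run the failing azcmagent command again with verbose output (-v / --verbose)
+azcmagent connect --resource-group "myResourceGroup" --tenant-id "<tenant-id>" --location "eastus" --subscription-id "<subscription-id>" --verbose
+
+# Review the most recent entries of the agent log on Windows
+Get-Content "$env:ProgramData\AzureConnectedMachineAgent\Log\azcmagent.log" -Tail 50
+```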
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
Some features aren't supported with geo-replication:
After geo-replication is configured, the following restrictions apply to your linked cache pair: -- The secondary linked cache is read-only; you can read from it, but you can't write any data to it.
+- The secondary linked cache is read-only; you can read from it, but you can't write any data to it. If you choose to read from the Geo-Secondary instance, note that whenever a full data sync occurs between the Geo-Primary and the Geo-Secondary (which happens when either instance is updated, and in some reboot scenarios), the Geo-Secondary instance throws errors stating that a full data sync is in progress on any Redis operation against it until the sync is complete. Applications that read from the Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary throws such errors.
- Any data that was in the secondary linked cache before the link was added is removed. If the geo-replication is later removed however, the replicated data remains in the secondary linked cache. - You can't [scale](cache-how-to-scale.md) either cache while the caches are linked. - You can't [change the number of shards](cache-how-to-premium-clustering.md) if the cache has clustering enabled.
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Follow the steps below to create a data collection rule and association
## Create association using Resource Manager template
-You cannot create a data collection rule using a Resource Manager template, but you can create an association between an Azure virtual machine or Azure Arc enabled server using a Resource Manager template. See [Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md) for sample templates.
+You can create an association between a data collection rule and an Azure virtual machine or Azure Arc enabled server by using a Resource Manager template. See [Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md) for sample templates.
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/transaction-diagnostics.md
Timelines are adjusted for clock skews in the transaction chart. You can see the
This is by design. All of the related items, across all components, are already available on the left side (top and bottom sections). The new experience has two related items that the left side doesn't cover: all telemetry from five minutes before and after this event and the user timeline.
+*I see more events than expected in the transaction diagnostics experience when using the Application Insights JavaScript SDK. Is there a way to see fewer events per transaction?*
+
+The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that shares an [Operation Id](data-model-context.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a Single Page Application (SPA), only one page view event is generated and a single Operation Id is used for all telemetry, which can result in many events being correlated to the same operation. In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your single page app. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so a page view is generated every time the URL route is updated (a logical page view occurs). If you want to manually refresh the Operation Id, call `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event also resets the Operation Id.
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/diagnostic-settings.md
Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-se
> [!IMPORTANT] > You cannot use this method for the Azure Activity log. Instead, use [Create diagnostic setting in Azure Monitor using a Resource Manager template](./resource-manager-diagnostic-settings.md) to create a Resource Manager template and deploy it with CLI.
-Following is an example CLI command to create a diagnostic setting using all three destinations.
+The following is an example CLI command to create a diagnostic setting using all three destinations. The syntax is slightly different depending on your client.
+# [CMD](#tab/CMD)
+```azurecli
+az monitor diagnostic-settings create ^
+--name KeyVault-Diagnostics ^
+--resource /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault ^
+--logs "[{""category"": ""AuditEvent"",""enabled"": true}]" ^
+--metrics "[{""category"": ""AllMetrics"",""enabled"": true}]" ^
+--storage-account /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount ^
+--workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace ^
+--event-hub-rule /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey
+```
+# [PowerShell](#tab/PowerShell)
+```azurecli
+az monitor diagnostic-settings create `
+--name KeyVault-Diagnostics `
+--resource /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault `
+--logs '[{""category"": ""AuditEvent"",""enabled"": true}]' `
+--metrics '[{""category"": ""AllMetrics"",""enabled"": true}]' `
+--storage-account /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount `
+--workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace `
+--event-hub-rule /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey
+```
+# [Bash](#tab/Bash)
```azurecli az monitor diagnostic-settings create \ --name KeyVault-Diagnostics \
az monitor diagnostic-settings create \
--logs '[{"category": "AuditEvent","enabled": true}]' \ --metrics '[{"category": "AllMetrics","enabled": true}]' \ --storage-account /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount \workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/oi-default-east-us/providers/microsoft.operationalinsights/workspaces/myworkspace \
+--workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace \
--event-hub-rule /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey ```+ ## Create using Resource Manager template See [Resource Manager template samples for diagnostic settings in Azure Monitor](./resource-manager-diagnostic-settings.md) to create or update diagnostic settings with a Resource Manager template.
If you receive this error, update your deployments to replace any metric categor
## Next steps -- [Read more about Azure platform Logs](./platform-logs-overview.md)
+- [Read more about Azure platform Logs](./platform-logs-overview.md)
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
Open Azure SQL Database with [SQL Server Management Studio](../../azure-sql/data
Run the following script to create a user with the required permissions. Replace *user* with a username and *mystrongpassword* with a password.
-```
+```sql
CREATE USER [user] WITH PASSWORD = N'mystrongpassword'; GO GRANT VIEW DATABASE STATE TO [user];
Verify the user was created.
:::image type="content" source="media/sql-insights-enable/telegraf-user-database-verify.png" alt-text="Verify telegraf user script." lightbox="media/sql-insights-enable/telegraf-user-database-verify.png":::
+```sql
+select name as username,
+ create_date,
+ modify_date,
+ type_desc as type,
+ authentication_type_desc as authentication_type
+from sys.database_principals
+where type not in ('A', 'G', 'R', 'X')
+ and sid is not null
+order by username
+```
+ ### Azure SQL Managed Instance Log into your Azure SQL Managed Instance and use [SQL Server Management Studio](../../azure-sql/database/connect-query-ssms.md) or similar tool to run the following script to create the monitoring user with the permissions needed. Replace *user* with a username and *mystrongpassword* with a password.
-```
+```sql
USE master; GO CREATE LOGIN [user] WITH PASSWORD = N'mystrongpassword';
GO
Log into your Azure virtual machine running SQL Server and use [SQL Server Management Studio](../../azure-sql/database/connect-query-ssms.md) or similar tool to run the following script to create the monitoring user with the permissions needed. Replace *user* with a username and *mystrongpassword* with a password.
-```
+```sql
USE master; GO CREATE LOGIN [user] WITH PASSWORD = N'mystrongpassword';
GRANT VIEW ANY DEFINITION TO [user];
GO ```
+Verify the user was created.
+
+```sql
+select name as username,
+ create_date,
+ modify_date,
+ type_desc as type
+from sys.server_principals
+where type not in ('A', 'G', 'R', 'X')
+ and sid is not null
+order by username
+```
+ ## Create Azure Virtual Machine You will need to create one or more Azure virtual machines that will be used to collect data to monitor SQL.
Enter the connection string in the form:
``` sqlAzureConnections": [
- "Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=$username;Password=$password;"
+ "Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=$username;Password=$password;"
} ```
Get the details from the **Connection strings** menu item for the database.
:::image type="content" source="media/sql-insights-enable/connection-string-sql-database.png" alt-text="SQL database connection string" lightbox="media/sql-insights-enable/connection-string-sql-database.png":::
-To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string.
+To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string. SQL Insights supports monitoring a single secondary. The collected data will be tagged to reflect primary or secondary.
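+
+For example, a readable-secondary connection string built from the format shown above might look like the following; this is a sketch, so adjust the server, database, and credentials to your own values.
+
+```
+"Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=$username;Password=$password;ApplicationIntent=ReadOnly;"
+```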
#### Azure virtual machines running SQL Server
Enter the connection string in the form:
``` "sqlVmConnections": [
- "Server=MyServerIPAddress;Port=1433;User Id=$username;Password=$password;"
+ "Server=MyServerIPAddress;Port=1433;User Id=$username;Password=$password;"
] ```
If your monitoring virtual machine is in the same VNET, use the private IP addre
:::image type="content" source="media/sql-insights-enable/sql-vm-security.png" alt-text="SQL virtual machine security" lightbox="media/sql-insights-enable/sql-vm-security.png":::
-To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string.
- ### Azure SQL Managed Instances Enter the connection string in the form: ``` "sqlManagedInstanceConnections": [
-      "Server= mysqlserver.database.windows.net;Port=1433;User Id=$username;Password=$password;",
+      "Server= mysqlserver.database.windows.net;Port=1433;User Id=$username;Password=$password;",
    ] ``` Get the details from the **Connection strings** menu item for the managed instance.
Get the details from the **Connection strings** menu item for the managed instan
:::image type="content" source="media/sql-insights-enable/connection-string-sql-managed-instance.png" alt-text="SQL Managed Instance connection string" lightbox="media/sql-insights-enable/connection-string-sql-managed-instance.png":::
-To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string.
-
+To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string. SQL Insights supports monitoring a single secondary. The collected data will be tagged to reflect primary or secondary.
## Monitoring profile created
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-monitor Workbooks Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/workbooks-overview.md
Workbooks provide a rich set of capabilities for visualizing your data. For deta
* [Graphs](../visualize/workbooks-graph-visualizations.md) * [Composite bar](../visualize/workbooks-composite-bar.md) +
+### Pinning Visualizations
+
+Text, query, and metrics steps in a workbook can be pinned by using the pin button on those items while the workbook is in pin mode, or if the workbook author has enabled settings for that element to make the pin icon visible.
+
+To access pin mode, click **Edit** to enter editing mode, and select the blue pin icon in the top bar. An individual pin icon will then appear above each corresponding workbook part's *Edit* box on the right-hand side of your screen.
++
+> [!NOTE]
+> The state of the workbook is saved at the time of the pin, and pinned workbooks on a dashboard will not update if the underlying workbook is modified. In order to update a pinned workbook part, you will need to delete and re-pin that part.
## Getting started
To explore the workbooks experience, first navigate to the Azure Monitor service
Then select **Workbooks**. ### Gallery
Under the hood, templates also differ from saved workbooks. Saving a workbook cr
Select **Application Failure Analysis** to see one of the default application workbook templates. As stated previously, opening the template creates a temporary workbook for you to be able to interact with. By default, the workbook opens in reading mode which displays only the information for the intended analysis experience that was created by the original template author.
To understand how this workbook template is put together you need to swap to edi
Once you have switched to editing mode you will notice a number of **Edit** boxes appear to the right corresponding with each individual aspect of your workbook. If we select the edit button immediately under the grid of request data we can see that this part of our workbook consists of a Kusto query against data from an Application Insights resource. -
-Clicking the other **Edit** buttons on the right will reveal a number of the core components that make up workbooks like markdown-based [text boxes](../visualize/workbooks-text-visualizations.md), [parameter selection](../visualize/workbooks-parameters.md) UI elements, and other [chart/visualization types](#visualizations).
+Selecting the other **Edit** buttons on the right will reveal a number of the core components that make up workbooks like markdown-based [text boxes](../visualize/workbooks-text-visualizations.md), [parameter selection](../visualize/workbooks-parameters.md) UI elements, and other [chart/visualization types](#visualizations).
Exploring the pre-built templates in edit-mode and then modifying them to fit your needs and save your own custom workbook is an excellent way to start to learn about what is possible with Azure Monitor workbooks.
-## Pinning Visualizations
-
-Text, query, and metrics steps in a workbook can be pinned by using the pin button on those items while the workbook is in pin mode, or if the workbook author has enabled settings for that element to make the pin icon visible.
-
-To access pin mode, click **Edit** to enter editing mode, and select the blue pin icon in the top bar. An individual pin icon will then appear above each corresponding workbook part's *Edit* box on the right-hand side of your screen.
--
-> [!NOTE]
-> The state of the workbook is saved at the time of the pin, and pinned workbooks on a dashboard will not update if the underlying workbook is modified. In order to update a pinned workbook part, you will need to delete and re-pin that part.
- ## Dashboard time ranges Pinned workbook query parts will respect the dashboard's time range if the pinned item is configured to use a *Time Range* parameter. The dashboard's time range value will be used as the time range parameter's value, and any change of the dashboard time range will cause the pinned item to update. If a pinned part is using the dashboard's time range, you will see the subtitle of the pinned part update to show the dashboard's time range whenever the time range changes.
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
Title: "Azure Monitor docs: What's new for February 1, 2021 - February 28, 2021"
-description: "What's new in the Azure Monitor docs for February 1, 2021 - February 28, 2021."
+ Title: "Azure Monitor docs: What's new for March 2021"
+description: "What's new in the Azure Monitor docs for March 2021."
Previously updated : 03/04/2021 Last updated : 04/10/2021
-# Azure Monitor docs: What's new for February 1, 2021 - February 28, 2021
+# Azure Monitor docs: What's new for March 2021
-Welcome to what's new in the Azure Monitor docs from February 1, 2021 through February 28, 2021. This article lists some of the significant changes to docs during this period.
+Welcome to what's new in the Azure Monitor docs from March 1, 2021 through March 31, 2021. This article lists some of the major changes to docs during this period.
-## Alerts
+## General
**Updated articles** -- [Create an action group with a Resource Manager template](./alerts/action-groups-create-resource-manager-template.md)-- [How to trigger complex actions with Azure Monitor alerts](./alerts/action-groups-logic-app.md)
+- [Azure Monitor Frequently Asked Questions](faq.md)
+- [Azure Monitor for existing Operations Manager customers](azure-monitor-operations-manager.md)
+- [Deploy Azure Monitor at scale using Azure Policy](deploy-scale.md)
+- [Deploy Azure Monitor](deploy.md)
+
+## Agents
-## Application Insights
+**Updated articles**
-**New articles**
+- [Configure data collection for the Azure Monitor agent (preview)](agents/data-collection-rule-azure-monitor-agent.md)
+- [Overview of Azure Monitor agents](agents/agents-overview.md)
+- [Collect Windows and Linux performance data sources with Log Analytics agent](agents/data-sources-performance-counters.md)
-- [Downtime, SLA, and outages workbook](./app/sla-report.md)-- [Work Item Integration (preview)](./app/work-item-integration.md)
+## Alerts
**Updated articles** -- [Export telemetry from Application Insights](./app/export-telemetry.md)-- [Telemetry processor examples - Azure Monitor Application Insights for Java](./app/java-standalone-telemetry-processors-examples.md)-- [Telemetry processors (preview) - Azure Monitor Application Insights for Java](./app/java-standalone-telemetry-processors.md)-- [What is auto-instrumentation or codeless attach - Azure Monitor Application Insights?](./app/codeless-overview.md)-- [Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets](./app/azure-vm-vmss-apps.md)-- [Application Map: Triage Distributed Applications](./app/app-map.md)
+- [Action rules (preview)](alerts/alerts-action-rules.md)
+- [Create, view, and manage log alerts using Azure Monitor](alerts/alerts-log.md)
+- [Troubleshoot problems in IT Service Management Connector](alerts/itsmc-troubleshoot-overview.md)
-## Change analysis
+## Application Insights
**New articles** -- [Troubleshoot Application Change Analysis](./app/change-analysis-troubleshoot.md)-- [Visualizations for Application Change Analysis](./app/change-analysis-visualizations.md)
+- [Sampling overrides (preview) - Azure Monitor Application Insights for Java](app/java-standalone-sampling-overrides.md)
+- [Configuring JMX metrics](app/java-jmx-metrics-configuration.md)
**Updated articles** -- [Use Application Change Analysis (preview) in Azure Monitor](./app/change-analysis.md)
+- [Application Insights for web pages](app/javascript.md)
+- [Configuration options - Azure Monitor Application Insights for Java](app/java-standalone-config.md)
+- [Quickstart: Start monitoring your website with Azure Monitor Application Insights](app/website-monitoring.md)
+- [Visualizations for Application Change Analysis (preview)](app/change-analysis-visualizations.md)
+- [Use Application Change Analysis (preview) in Azure Monitor](app/change-analysis.md)
+- [Application Insights API for custom events and metrics](app/api-custom-events-metrics.md)
+- [Java codeless application monitoring Azure Monitor Application Insights](app/java-in-process-agent.md)
+- [Enable Snapshot Debugger for .NET apps in Azure App Service](app/snapshot-debugger-appservice.md)
+- [Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](app/snapshot-debugger-function-app.md)
+- [<a id=troubleshooting></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](app/snapshot-debugger-troubleshoot.md)
+- [Release notes for Azure Web App extension for Application Insights](app/web-app-extension-release-notes.md)
+- [Set up Azure Monitor for your Python application](app/opencensus-python.md)
+- [Upgrading from Application Insights Java 2.x SDK](app/java-standalone-upgrade-from-2x.md)
+- [Use Stream Analytics to process exported data from Application Insights](app/export-stream-analytics.md)
+- [Troubleshooting guide: Azure Monitor Application Insights for Java](app/java-standalone-troubleshoot.md)
## Containers
-**New articles**
+**Updated articles**
-- [Enable AKS monitoring addon using Azure Policy](./containers/container-insights-enable-aks-policy.md)
+- [Troubleshooting Container insights](containers/container-insights-troubleshoot.md)
+- [How to view Kubernetes logs, events, and pod metrics in real-time](containers/container-insights-livedata-overview.md)
+- [How to query logs from Container insights](containers/container-insights-log-search.md)
+- [Configure PV monitoring with Container insights](containers/container-insights-persistent-volumes.md)
+- [Monitor your Kubernetes cluster performance with Container insights](containers/container-insights-analyze.md)
+- [Configure Azure Red Hat OpenShift v3 with Container insights](containers/container-insights-azure-redhat-setup.md)
+- [Configure Azure Red Hat OpenShift v4.x with Container insights](containers/container-insights-azure-redhat4-setup.md)
+- [Enable monitoring of Azure Arc enabled Kubernetes cluster](containers/container-insights-enable-arc-enabled-clusters.md)
+- [Configure hybrid Kubernetes clusters with Container insights](containers/container-insights-hybrid-setup.md)
+- [Recommended metric alerts (preview) from Container insights](containers/container-insights-metric-alerts.md)
+- [Enable Container insights](containers/container-insights-onboard.md)
+- [Container insights overview](containers/container-insights-overview.md)
+- [Configure scraping of Prometheus metrics with Container insights](containers/container-insights-prometheus-integration.md)
## Essentials **Updated articles** -- [Supported metrics with Azure Monitor](./essentials/metrics-supported.md)
+- [Advanced features of the Azure metrics explorer](essentials/metrics-charts.md)
+- [Application Insights log-based metrics](essentials/app-insights-metrics.md)
+- [Getting started with Azure Metrics Explorer](essentials/metrics-getting-started.md)
+
+## Insights
+
+**Updated articles**
+
+- [Azure Monitor Network Insights](insights/network-insights-overview.md)
+- [Wire Data 2.0 (Preview) solution in Azure Monitor (Retired)](insights/wire-data.md)
+- [Monitor your SQL deployments with SQL insights (preview)](insights/sql-insights-overview.md)
## Logs **Updated articles** -- [Use Azure Private Link to securely connect networks to Azure Monitor](./logs/private-link-security.md)
+- [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md)
+- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md)
+- [Log data ingestion time in Azure Monitor](logs/data-ingestion-time.md)
+
+## Virtual Machines
+
+**New articles**
+
+- [Troubleshoot VM insights](vm/vminsights-troubleshoot.md)
+
+**Updated articles**
+
+- [Create interactive reports VM insights with workbooks](vm/vminsights-workbooks.md)
+- [Enable VM insights overview](vm/vminsights-enable-overview.md)
+- [Troubleshoot Azure Monitor for VMs guest health (preview)](vm/vminsights-health-troubleshoot.md)
+- [Monitoring Azure virtual machines with Azure Monitor](vm/monitor-vm-azure.md)
+- [Integrate System Center Operations Manager with VM insights Map feature](vm/service-map-scom.md)
+- [How to create alerts from VM insights](vm/vminsights-alerts.md)
+- [Configure Log Analytics workspace for VM insights](vm/vminsights-configure-workspace.md)
+- [Enable VM insights by using Azure Policy](vm/vminsights-enable-policy.md)
+- [Enable VM insights using Resource Manager templates](vm/vminsights-enable-resource-manager.md)
+- [VM insights Generally Available (GA) Frequently Asked Questions](vm/vminsights-ga-release-faq.md)
+- [Enable VM insights guest health (preview)](vm/vminsights-health-enable.md)
+- [Disable monitoring of your VMs in VM insights](vm/vminsights-optout.md)
+- [Overview of VM insights](vm/vminsights-overview.md)
+- [How to chart performance with VM insights](vm/vminsights-performance.md)
+
+## Visualizations
+
+**Updated articles**
+- [Programmatically manage workbooks](visualize/workbooks-automate.md)
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-portal Quickstart Portal Dashboard Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-azure-cli.md Binary files differ
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-resource-manager Test Createuidefinition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/test-createuidefinition.md
If the portal hangs at the summary screen, there might be a bug in the output se
## Test your solution files
-Now that you've verified your portal interface is working as expected, it's time to validate that your createUiDefinition file is properly integrated with your mainTemplate.json file. You can run a validation script test to test the content of your solution files, including the createUiDefinition file. The script validates the JSON syntax, checks for regex expressions on text fields, and makes sure the output values of the portal interface match the parameters of your template. For information on running this script, see [Run static validation checks for templates](https://github.com/Azure/azure-quickstart-templates/tree/master/test).
+Now that you've verified your portal interface is working as expected, it's time to validate that your createUiDefinition file is properly integrated with your mainTemplate.json file. You can run a validation script test to test the content of your solution files, including the createUiDefinition file. The script validates the JSON syntax, checks for regex expressions on text fields, and makes sure the output values of the portal interface match the parameters of your template. For information on running this script, see [Run static validation checks for templates](https://aka.ms/arm-ttk).
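+
+As a minimal sketch, assuming you've downloaded the ARM template test toolkit (arm-ttk) referenced above, running the static checks from PowerShell might look like the following; the folder path is a placeholder for the location of your solution files.
+
+```powershell
+# Import the ARM template test toolkit module (downloaded from the arm-ttk repository)
+Import-Module .\arm-ttk\arm-ttk.psd1
+
+# Run the static validation tests against the folder that contains mainTemplate.json and createUiDefinition.json
+Test-AzTemplate -TemplatePath .\MySolutionFolder
+```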
## Next steps
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | | | | | > | availabilitySets | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End with alphanumeric or underscore. | > | diskEncryptionSets | resource group | 1-80 | Alphanumerics and underscores. |
-> | disks | resource group | 1-80 | Alphanumerics and underscores. |
+> | disks | resource group | 1-80 | Alphanumerics, underscores, and hyphens. |
> | galleries | resource group | 1-80 | Alphanumerics and periods.<br><br>Start and end with alphanumeric. | > | galleries / applications | gallery | 1-80 | Alphanumerics, hyphens, and periods.<br><br>Start and end with alphanumeric. | > | galleries / applications/versions | application | 32-bit integer | Numbers and periods. |
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-resource-manager Copy Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/copy-properties.md
Loops can be used declare multiple properties by:
```bicep <property-name>: [for <item> in <collection>: { <properties>
- }
+ }]
``` - Iterating over the elements of an array
Loops can be used declare multiple properties by:
```bicep <property-name>: [for (<item>, <index>) in <collection>: { <properties>
- }
+ }]
``` - Using loop index
Loops can be used declare multiple properties by:
```bicep <property-name>: [for <index> in range(<start>, <stop>): { <properties>
- }
+ }]
```
azure-resource-manager Copy Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/copy-resources.md
Loops can be used declare multiple resources by:
@batchSize(<number>) resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for <item> in <collection>: { <resource-properties>
- }
+ }]
``` - Iterating over the elements of an array
Loops can be used declare multiple resources by:
@batchSize(<number>) resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for (<item>, <index>) in <collection>: { <resource-properties>
- }
+ }]
``` - Using loop index
Loops can be used declare multiple resources by:
@batchSize(<number>) resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for <index> in range(<start>, <stop>): { <resource-properties>
- }
+ }]
```
azure-resource-manager Error Sku Not Available https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/error-sku-not-available.md
Title: SKU not available errors description: Describes how to troubleshoot the SKU not available error when deploying resources with Azure Resource Manager. Previously updated : 02/18/2020 Last updated : 04/14/2021 # Resolve errors for SKU not available This article describes how to resolve the **SkuNotAvailable** error. If you're unable to find a suitable SKU in that region/zone or an alternative region/zone that meets your business needs, submit a [SKU request](/troubleshoot/azure/general/region-access-request-process) to Azure Support. - ## Symptom When deploying a resource (typically a virtual machine), you receive the following error code and error message:
for subscription '<subscriptionID>'. Please try another tier or deploy to a diff
You receive this error when the resource SKU you've selected (such as VM size) isn't available for the location you've selected.
-If you are deploying an Azure Spot VM or Spot scale set instance, there isn't any capacity for Azure Spot in this location. For more information, see [Spot error messages](../../virtual-machines/error-codes-spot.md).
+If you're deploying an Azure Spot VM or Spot scale set instance, there isn't any capacity for Azure Spot in this location. For more information, see [Spot error messages](../../virtual-machines/error-codes-spot.md).
## Solution 1 - PowerShell
virtualMachines Standard_A2 centralus NotAvailableForSubscr
virtualMachines Standard_D1_v2 centralus {2, 1, 3} MaxResourceVolumeMB ```
-Some additional samples:
+To filter by location and SKU, use:
```azurepowershell-interactive
-Get-AzComputeResourceSku | where {$_.Locations.Contains("centralus") -and $_.ResourceType.Contains("virtualMachines") -and $_.Name.Contains("Standard_DS14_v2")}
-Get-AzComputeResourceSku | where {$_.Locations.Contains("centralus") -and $_.ResourceType.Contains("virtualMachines") -and $_.Name.Contains("v3")} | fc
+$SubId = (Get-AzContext).Subscription.Id
+
+$Region = "centralus" # change region here
+$VMSku = "Standard_M" # change VM SKU here
+
+$VMSKUs = Get-AzComputeResourceSku | where {$_.Locations.Contains($Region) -and $_.ResourceType.Contains("virtualMachines") -and $_.Name.Contains($VMSku)}
+
+$OutTable = @()
+
+foreach ($SkuName in $VMSKUs.Name)
+ {
+ $LocRestriction = if ((($VMSKUs | where Name -EQ $SkuName).Restrictions.Type | Out-String).Contains("Location")){"NotAvailableInRegion"}else{"Available - No region restrictions applied" }
+ $ZoneRestriction = if ((($VMSKUs | where Name -EQ $SkuName).Restrictions.Type | Out-String).Contains("Zone")){"NotAvailableInZone: "+(((($VMSKUs | where Name -EQ $SkuName).Restrictions.RestrictionInfo.Zones)| Where-Object {$_}) -join ",")}else{"Available - No zone restrictions applied"}
+
+
+ $OutTable += New-Object PSObject -Property @{
+ "Name" = $SkuName
+ "Location" = $Region
+ "Applies to SubscriptionID" = $SubId
+ "Region Restriction" = $LocRestriction
+ "Zone Restriction" = $ZoneRestriction
+ }
+ }
+
+$OutTable | select Name, Location, "Applies to SubscriptionID", "Region Restriction", "Zone Restriction" | Sort-Object -Property Name | FT
```
-Appending "fc" at the end returns more details.
+The command returns results like:
+
+```output
+Name Location Applies to SubscriptionID Region Restriction Zone Restriction
+----                 --------  -------------------------            ------------------                         ----------------
+Standard_M128 centralus xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Available - No region restrictions applied Available - No zone restrictions applied
+Standard_M128-32ms centralus xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Available - No region restrictions applied Available - No zone restrictions applied
+Standard_M128-64ms centralus xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Available - No region restrictions applied Available - No zone restrictions applied
+Standard_M128dms_v2 centralus xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx NotAvailableInRegion NotAvailableInZone: 1,2,3
+```
## Solution 2 - Azure CLI
-To determine which SKUs are available in a region, use the `az vm list-skus` command. Use the `--location` parameter to filter output to location you are using. Use the `--size` parameter to search by a partial size name.
+To determine which SKUs are available in a region, use the [az vm list-skus](/cli/azure/vm#az_vm_list_skus) command. Use the `--location` parameter to filter output by location. Use the `--size` parameter to search by a partial size name. Use the `--all` parameter to show all information, including sizes that aren't available for the current subscription.
+
+You must have Azure CLI version 2.15.0 or later. To check your version, use `az --version`. If needed, [update your installation](/cli/azure/update-azure-cli).
```azurecli-interactive
-az vm list-skus --location southcentralus --size Standard_F --output table
+az vm list-skus --location southcentralus --size Standard_F --all --output table
``` The command returns results like: ```output
-ResourceType Locations Name Zones Capabilities Restrictions
- -- - - -- --
-virtualMachines southcentralus Standard_F1 ... None
-virtualMachines southcentralus Standard_F2 ... None
-virtualMachines southcentralus Standard_F4 ... None
+ResourceType Locations Name Zones Restrictions
+------------     ---------------  ----------------  -------  ------------
+virtualMachines southcentralus Standard_F1 1,2,3 None
+virtualMachines southcentralus Standard_F2 1,2,3 None
+virtualMachines southcentralus Standard_F4 1,2,3 None
+...
+virtualMachines southcentralus Standard_F72s_v2 1,2,3 NotAvailableForSubscription, type: Zone, locations: southcentralus, zones: 1,2,3
... ```
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
The following section describes the configuration of auditing using the Azure po
> [!NOTE] > - Enabling auditing on a paused dedicated SQL pool is not possible. To enable auditing, un-pause the dedicated SQL pool. Learn more about [dedicated SQL pool](../..//synapse-analytics/sql/best-practices-dedicated-sql-pool.md).
- > - When auditing is configured to a Log Analytics workspace or to an Even Hub destination via the Azure portal or PowerShell cmdlet, a [Diagnostic Setting](../../azure-monitor/essentials/diagnostic-settings.md) is created with "SQLSecurityAuditEvents" category enabled.
+ > - When auditing is configured to a Log Analytics workspace or to an Event Hub destination via the Azure portal or PowerShell cmdlet, a [Diagnostic Setting](../../azure-monitor/essentials/diagnostic-settings.md) is created with "SQLSecurityAuditEvents" category enabled.
1. Go to the [Azure portal](https://portal.azure.com). 2. Navigate to **Auditing** under the Security heading in your **SQL database** or **SQL server** pane.
azure-sql Auto Failover Group Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-configure.md Binary files differ
azure-sql Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connectivity-architecture.md
Details of how traffic shall be migrated to new Gateways in specific regions are
| Brazil South | 191.233.200.14, 191.234.144.16, 191.234.152.3 | | Canada Central | 40.85.224.249, 52.246.152.0, 20.38.144.1 | | Canada East | 40.86.226.166, 52.242.30.154, 40.69.105.9 , 40.69.105.10 |
-| Central US | 13.67.215.62, 52.182.137.15, 23.99.160.139, 104.208.16.96, 104.208.21.1, 13.89.169.20 |
+| Central US | 13.67.215.62, 52.182.137.15, 104.208.16.96, 104.208.21.1, 13.89.169.20 |
| China East | 139.219.130.35 | | China East 2 | 40.73.82.1 | | China North | 139.219.15.17 |
Details of how traffic shall be migrated to new Gateways in specific regions are
| Switzerland West | 51.107.152.0, 51.107.153.0 | | UAE Central | 20.37.72.64 | | UAE North | 65.52.248.0 |
-| UK South | 51.140.184.11, 51.105.64.0 |
+| UK South | 51.140.184.11, 51.105.64.0, 51.140.144.36, 51.105.72.32 |
| UK West | 51.141.8.11 |
-| West Central US | 13.78.145.25, 13.78.248.43 |
+| West Central US | 13.78.145.25, 13.78.248.43, 13.71.193.32, 13.71.193.33 |
| West Europe | 40.68.37.158, 104.40.168.105, 52.236.184.163 | | West US | 104.42.238.205, 13.86.216.196 | | West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 |
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md Binary files differ
azure-sql Gateway Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/gateway-migration.md
The most up-to-date information will be maintained in the [Azure SQL Database ga
## Status updates # [In progress](#tab/in-progress-ip)
+## May 2021
+New SQL Gateways are being added to the following regions:
+- UK South: 51.140.144.36, 51.105.72.32
+- West Central US: 13.71.193.32, 13.71.193.33
+
+These SQL Gateways shall start accepting customer traffic on 17 May 2021.
## April 2021 New SQL Gateways are being added to the following regions: - East US 2: 40.70.144.193+ This SQL Gateway shall start accepting customer traffic on 30 April 2021. New SQL Gateways are being added to the following regions:
These SQL Gateways shall start accepting customer traffic on 5 April 2021.
## March 2021 The following SQL Gateways in multiple regions are in the process of being deactivated:- - Brazil South: 104.41.11.5 - East Asia: 191.234.2.139 - East US: 191.238.6.43
The following SQL Gateways in multiple regions are in the process of being deact
No customer impact is anticipated since these Gateways (running on older hardware) are not routing any customer traffic. The IP addresses for these Gateways shall be deactivated on 15th March 2021.
+# [Completed](#tab/completed-ip)
+The following gateway migrations are complete:
+ ## February 2021 New SQL Gateways are being added to the following regions:
New SQL Gateways are being added to the following regions:
These SQL Gateways shall start accepting customer traffic on 31 January 2021.
-# [Completed](#tab/completed-ip)
-The following gateway migrations are complete:
+ ### October 2020
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-sql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
azure-sql Sql Data Sync Sql Server Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-data-sync-sql-server-configure.md
For frequently asked questions about the client agent, see [Agent FAQ](sql-data-
Yes, you must manually approve the service managed private endpoint, in the Private endpoint connections page of the Azure portal during the sync group deployment or by using PowerShell.
+**Why do I get a firewall error when the Sync job is provisioning my Azure database?**
+
+This may happen because Azure resources are not allowed to access your server. Ensure that the firewall on the Azure database has the "Allow Azure services and resources to access this server" setting set to "Yes".
++ ## Next steps Congratulations. You've created a sync group that includes both a SQL Database instance and a SQL Server database.
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/log-replay-service-migrate.md Binary files differ
azure-sql Failover Cluster Instance Premium File Share Manually Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-premium-file-share-manually-configure.md
For more details about cluster connectivity options, see [Route HADR connections
- Microsoft Distributed Transaction Coordinator (MSDTC) is not supported on Windows Server 2016 and earlier. - Filestream isn't supported for a failover cluster with a premium file share. To use filestream, deploy your cluster by using [Storage Spaces Direct](failover-cluster-instance-storage-spaces-direct-manually-configure.md) or [Azure shared disks](failover-cluster-instance-azure-shared-disks-manually-configure.md) instead. - Only registering with the SQL IaaS Agent extension in [lightweight management mode](sql-server-iaas-agent-extension-automate-management.md#management-modes) is supported.
+- Database snapshots are not currently supported with Azure Files due to [sparse file limitations](/rest/api/storageservices/features-not-supported-by-the-azure-file-service).
## Next steps
To learn more, see an overview of [FCI with SQL Server on Azure VMs](failover-cl
For more information, see: - [Windows cluster technologies](/windows-server/failover-clustering/failover-clustering-overview) -- [SQL Server failover cluster instances](/sql/sql-server/failover-clusters/windows/always-on-failover-cluster-instances-sql-server)
+- [SQL Server failover cluster instances](/sql/sql-server/failover-clusters/windows/always-on-failover-cluster-instances-sql-server)
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql.md Binary files differ
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
batch Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/disk-encryption.md Binary files differ
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
blockchain Hyperledger Fabric Consortium Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/templates/hyperledger-fabric-consortium-azure-kubernetes-service.md Binary files differ
certification Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/overview.md
# What is the Azure Certified Device program?
-Thank you for your interest in the Azure Certified Device program! This program is your one-stop shop for easily differentiating, promoting, and finding IoT devices built to run on Azure. From intelligent cameras to connected sensors to edge infrastructure, this enhanced IoT device certification program helps device builders increase their product visibility and saves customers time in building solutions.
+Thank you for your interest in the Azure Certified Device program! Azure Certified Device is a free program that enables you to differentiate, certify, and promote your IoT devices built to run on Azure. From intelligent cameras to connected sensors to edge infrastructure, this enhanced IoT device certification program helps device builders increase their product visibility and saves customers time in building solutions.
## Our certification promise
Certifying a device involves several major steps on the [Azure Certified Device
1. Validate device functionality 1. Submit and complete the review process
-Once you have certified your device, you then can optionally complete two of the following activities:
+> [!Note]
+> The review process can take up to a week to complete, though it may sometimes take longer.
+
+Once you have certified your device, you then can optionally complete two of the following activities:
1. Publishing to the Azure Certified Device Catalog (optional) 1. Updating your project after it has been approved/published (optional)
Once you have certified your device, you then can optionally complete two of the
Ready to get started with your certification journey? View our resources below to start the device certification process! - [Starting the certification process](tutorial-00-selecting-your-certification.md)-- If you have additional questions or feedback, contact [the Azure Certified Device team](mailto:iotcert@microsoft.com).
+- If you have other questions or feedback, contact [the Azure Certified Device team](mailto:iotcert@microsoft.com).
cognitive-services Howtodetectfacesinimage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/HowtoDetectFacesinImage.md
To learn more about each of the attributes, see the [Face detection and attribut
## Next steps
-In this guide, you learned how to use the various functionalities of face detection. Next, integrate these features into your app by following an in-depth tutorial.
+In this guide, you learned how to use the various functionalities of face detection. Next, integrate these features into an app to add face data from users.
-- [Tutorial: Create a WPF app to display face data in an image](../Tutorials/FaceAPIinCSharpTutorial.md)
+- [Tutorial: Add users to a Face service](../enrollment-overview.md)
## Related topics
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Overview.md
This documentation contains the following types of articles:
* The [quickstarts](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time. * The [how-to guides](./Face-API-How-to-Topics/HowtoDetectFacesinImage.md) contain instructions for using the service in more specific or customized ways. * The [conceptual articles](./concepts/face-detection.md) provide in-depth explanations of the service's functionality and features.
-* The [tutorials](./Tutorials/FaceAPIinCSharpTutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
+* The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
## Face detection
For more information on face detection, see the [Face detection](concepts/face-d
## Face verification
-The Verify API builds on Detection and addresses the question, ΓÇ£Are these two images the same person?ΓÇ¥. Verification is also called ΓÇ£one-to-oneΓÇ¥ matching because the probe image is compared to only one enrolled template. Verification can be used in identity verification or access control scenarios to verify a picture matches a previously captured image (such as from a photo from a government issued ID card). For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) reference documentation.
+The Verify API builds on Detection and addresses the question, "Are these two images the same person?". Verification is also called "one-to-one" matching because the probe image is compared to only one enrolled template. Verification can be used in identity verification or access control scenarios to verify a picture matches a previously captured image (such as from a photo from a government issued ID card). For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) reference documentation.
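For orientation, here is a minimal Python sketch of a one-to-one verification call using the `azure-cognitiveservices-vision-face` client library. The endpoint, key, and image URLs shown are placeholders, not values from this article.

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholder endpoint, key, and image URLs -- substitute your own values.
face_client = FaceClient("https://<your-face-resource>.cognitiveservices.azure.com/",
                         CognitiveServicesCredentials("<your-face-key>"))

# Detect one face in each image; each detection returns a temporary face ID.
probe = face_client.face.detect_with_url("https://example.com/probe-photo.jpg")[0]
reference = face_client.face.detect_with_url("https://example.com/id-card-photo.jpg")[0]

# One-to-one match: compare the probe face against a single reference face.
result = face_client.face.verify_face_to_face(probe.face_id, reference.face_id)
print(result.is_identical, result.confidence)
```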
## Face identification
-The Identify API also starts with Detection and answers the question, ΓÇ£Can this detected face be matched to any enrolled face in a database?ΓÇ¥ Because it's like face recognition search, is also called ΓÇ£one-to-manyΓÇ¥ matching. Candidate matches are returned based on how closely the probe template with the detected face matches each of the enrolled templates.
+The Identify API also starts with Detection and answers the question, "Can this detected face be matched to any enrolled face in a database?" Because it works like a face recognition search, it is also called "one-to-many" matching. Candidate matches are returned based on how closely the probe template of the detected face matches each of the enrolled templates.
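A companion sketch for one-to-many identification, under the same assumptions (placeholder endpoint, key, and image URL) plus a `myfriends` LargePersonGroup that has already been created, populated, and trained:

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient("https://<your-face-resource>.cognitiveservices.azure.com/",
                         CognitiveServicesCredentials("<your-face-key>"))

# Detect the probe faces, then search them against the trained LargePersonGroup.
detected = face_client.face.detect_with_url("https://example.com/visitor-photo.jpg")
face_ids = [face.face_id for face in detected]

results = face_client.face.identify(face_ids, large_person_group_id="myfriends")
for result in results:
    for candidate in result.candidates:
        print(result.face_id, candidate.person_id, candidate.confidence)
```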
The following image shows an example of a database named `"myfriends"`. Each group can contain up to 1 million different person objects. Each person object can have up to 248 faces registered.
cognitive-services Faceapiincsharptutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Tutorials/FaceAPIinCSharpTutorial.md
- Title: "Tutorial: Detect and display face data in an image using the .NET SDK"-
-description: In this tutorial, you will create a Windows app that uses the Face service to detect and frame faces in an image.
------- Previously updated : 11/23/2020--
-#Customer intent: As a developer of image management software, I want to learn how to detect faces and display face data on the UI, so that I can follow a similar process for my specific features and needs.
--
-# Tutorial: Create a Windows Presentation Framework (WPF) app to display face data in an image
-
-In this tutorial, you'll learn how to use the Azure Face service, through the .NET client SDK, to detect faces in an image and then present that data in the UI. You'll create a WPF application that detects faces, draws a frame around each face, and displays a description of the face in the status bar.
-
-This tutorial shows you how to:
-
-> [!div class="checklist"]
-> - Create a WPF application
-> - Install the Face client library
-> - Use the client library to detect faces in an image
-> - Draw a frame around each detected face
-> - Display a description of the highlighted face on the status bar
-
-![Screenshot showing detected faces framed with rectangles](../Images/getting-started-cs-detected.png)
-
-The complete sample code is available in the [Cognitive Face CSharp sample](https://github.com/Azure-Samples/Cognitive-Face-CSharp-sample) repository on GitHub.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
--
-## Prerequisites
-
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource </a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
- * You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
- * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-* [Create environment variables](../../cognitive-services-apis-create-account.md#configure-an-environment-variable-for-authentication) for the key and service endpoint string, named `FACE_SUBSCRIPTION_KEY` and `FACE_ENDPOINT`, respectively.
-- Any edition of [Visual Studio](https://www.visualstudio.com/downloads/).-
-## Create the Visual Studio project
-
-Follow these steps to create a new WPF application project.
-
-1. In Visual Studio, open the New Project dialog. Expand **Installed**, then **Visual C#**, then select **WPF App (.NET Framework)**.
-1. Name the application **FaceTutorial**, then click **OK**.
-1. Get the required NuGet packages. Right-click on your project in the Solution Explorer and select **Manage NuGet Packages**; then, find and install the following package:
- - [Microsoft.Azure.CognitiveServices.Vision.Face 2.6.0-preview.1](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.Face/2.6.0-preview.1)
-
-## Add the initial code
-
-In this section, you will add the basic framework of the app without its face-specific features.
-
-### Create the UI
-
-Open *MainWindow.xaml* and replace the contents with the following code&mdash;this code creates the UI window. The `FacePhoto_MouseMove` and `BrowseButton_Click` methods are event handlers that you will define later on.
-
-[!code-xaml[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml?name=snippet_xaml)]
-
-### Create the main class
-
-Open *MainWindow.xaml.cs* and add the client library namespaces, along with other necessary namespaces.
-
-[!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_using)]
-
-Next, insert the following code in the **MainWindow** class. This code creates a **FaceClient** instance using the subscription key and endpoint.
-
-[!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_mainwindow_fields)]
-
-Next add the **MainWindow** constructor. It checks your endpoint URL string and then associates it with the client object.
-
-[!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_mainwindow_constructor)]
-
-Finally, add the **BrowseButton_Click** and **FacePhoto_MouseMove** methods to the class. These methods correspond to the event handlers declared in *MainWindow.xaml*. The **BrowseButton_Click** method creates an **OpenFileDialog**, which allows the user to select a .jpg image. It then displays the image in the main window. You will insert the remaining code for **BrowseButton_Click** and **FacePhoto_MouseMove** in later steps. Also note the `faceList` reference&mdash;a list of **DetectedFace** objects. This reference is where your app will store and call the actual face data.
-
-[!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_browsebuttonclick_start)]
-
-<!-- [!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_browsebuttonclick_end)] -->
-
-[!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_mousemove_start)]
-
-<!-- [!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_mousemove_end)] -->
-
-### Try the app
-
-Press **Start** on the menu to test your app. When the app window opens, click **Browse** in the lower left corner. A **File Open** dialog should appear. Select an image from your filesystem and verify that it displays in the window. Then, close the app and advance to the next step.
-
-![Screenshot showing unmodified image of faces](../Images/getting-started-cs-ui.png)
-
-## Upload image and detect faces
-
-Your app will detect faces by calling the **FaceClient.Face.DetectWithStreamAsync** method, which wraps the [Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) REST API for uploading a local image.
-
-Insert the following method in the **MainWindow** class, below the **FacePhoto_MouseMove** method. This method defines a list of face attributes to retrieve and reads the submitted image file into a **Stream**. Then it passes both of these objects to the **DetectWithStreamAsync** method call.
-
-[!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_uploaddetect)]
-
-## Draw rectangles around faces
-
-Next, you will add the code to draw a rectangle around each detected face in the image. In the **MainWindow** class, insert the following code at the end of the **BrowseButton_Click** method, after the `FacePhoto.Source = bitmapSource` line. This code populates a list of detected faces from the call to **UploadAndDetectFaces**. Then it draws a rectangle around each face and displays the modified image in the main window.
-
-[!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_browsebuttonclick_mid)]
-
-## Describe the faces
-
-Add the following method to the **MainWindow** class, below the **UploadAndDetectFaces** method. This method converts the retrieved face attributes into a string describing the face.
-
-[!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_facedesc)]
-
-## Display the face description
-
-Add the following code to the **FacePhoto_MouseMove** method. This event handler displays the face description string in `faceDescriptionStatusBar` when the cursor hovers over a detected face rectangle.
-
-[!code-csharp[](~/Cognitive-Face-CSharp-sample/FaceTutorialCS/FaceTutorialCS/MainWindow.xaml.cs?name=snippet_mousemove_mid)]
-
-## Run the app
-
-Run the application and browse for an image containing a face. Wait a few seconds to allow the Face service to respond. You should see a red rectangle on each of the faces in the image. If you move the mouse over a face rectangle, the description of that face should appear in the status bar.
-
-![Screenshot showing detected faces framed with rectangles](../Images/getting-started-cs-detected.png)
--
-## Next steps
-
-In this tutorial, you learned the basic process for using the Face service .NET SDK and created an application to detect and frame faces in an image. Next, learn more about the details of face detection.
-
-> [!div class="nextstepaction"]
-> [How to Detect Faces in an Image](../Face-API-How-to-Topics/HowtoDetectFacesinImage.md)
cognitive-services Build Enrollment App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/build-enrollment-app.md
Title: Build an enrollment app for Android with React
+ Title: Build a React app to add users to a Face service
-description: Learn how to set up your development environment and deploy a Face enrollment app to get consent from customers.
+description: Learn how to set up your development environment and deploy a Face app to get consent from customers.
Last updated 11/17/2020
-# Build an enrollment app for Android with React
+# Build a React app to add users to a Face service
-This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to enroll users into a face recognition service and acquire high-accuracy face data. An integrated system could use an enrollment app like this to provide touchless access control, identity verification, attendance tracking, personalization kiosk, or identity verification, based on their face data.
+This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and for acquiring high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identity verification, attendance tracking, or personalization kiosk experiences based on users' face data.
When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
-The sample enrollment app is written using JavaScript and the React Native framework. It can currently be deployed on Android devices; more deployment options are coming in the future.
+The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android devices; more deployment options are coming in the future.
## Prerequisites
The sample enrollment app is written using JavaScript and the React Native frame
## Set up the development environment
-1. Clone the git repository for the [sample enrollment app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
+1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation </a>. Select **React Native CLI Quickstart** as your development OS and select **Android** as the target OS. Complete the sections **Installing dependencies** and **Android development environment**. 1. Open the env.json file in your preferred text editor, such as [Visual Studio Code](https://code.visualstudio.com/), and add your endpoint and key. You can get your endpoint and key in the Azure portal under the **Overview** tab of your resource. This step is only for local testing purposes&mdash;don't check in your Face API key to your remote repository. 1. Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation </a>.
-## Create an enrollment experience
+## Create a user add experience
-Now that you have set up the sample enrollment app, you can tailor it to your own enrollment experience needs.
+Now that you have set up the sample app, you can tailor it to your own needs.
For example, you may want to add situation-specific information on your consent page: > [!div class="mx-imgBorder"] > ![app consent page](./media/enrollment-app/1-consent-1.jpg)
-The service provides image quality checks to help you make the choice of whether the image is of sufficient quality to enroll the customer or attempt face recognition. This app demonstrates how to access frames from the device's camera, select the highest-quality frames, and enroll the detected face into the Face API service.
+The service provides image quality checks to help you decide whether the image is of sufficient quality to add the customer or to attempt face recognition. This app demonstrates how to access frames from the device's camera, select the highest-quality frames, and add the detected face into the Face API service.
Many face recognition issues are caused by low-quality reference images. Some factors that can degrade model performance are: * Face size (faces that are distant from the camera)
Many face recognition issues are caused by low-quality reference images. Some fa
> [!div class="mx-imgBorder"] > ![app image capture instruction page](./media/enrollment-app/4-instruction.jpg)
-Notice the app also offers functionality for deleting the user's enrollment and the option to re-enroll.
+Notice that the app also offers functionality for deleting the user's information and the option to add them again.
> [!div class="mx-imgBorder"] > ![profile management page](./media/enrollment-app/10-manage-2.jpg)
-To extend the app's functionality to cover the full enrollment experience, read the [overview](enrollment-overview.md) for additional features to implement and best practices.
+To extend the app's functionality to cover the full experience, read the [overview](enrollment-overview.md) for additional features to implement and best practices.
-## Deploy the enrollment app
+## Deploy the app
### Android
Once you've created a signed APK, see the <a href="https://developer.android.com
## Next steps
-In this guide, you learned how to set up your development environment and get started with the sample enrollment app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](Overview.md). Read the other sections on enrollment app documentation before you begin development.
+In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](Overview.md). Read the other sections on adding users before you begin development.
cognitive-services Enrollment Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/enrollment-overview.md
Title: Face API enrollment overview
+ Title: Best practices for adding users to a Face service
description: Learn about the process of Face enrollment to register users in a face recognition service.
Last updated 11/17/2020
-# Face API enrollment
+# Best practices for adding users to a Face service
In order to use the Cognitive Services Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup**. This deep-dive demonstrates best practices for gathering meaningful consent from users as well as example logic to create high-quality enrollments that will optimize recognition accuracy.
cognitive-services Luis Concept Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-concept-best-practices.md
description: Learn the best practices to get the best results from your LUIS app
Previously updated : 05/17/2020 Last updated : 04/13/2021 # Best practices for building a language understanding (LUIS) app
Machine learned entities are tailored to your app and require labeling to be suc
Machine learned entities can use other entities as features. These other entities can be custom entities such as regular expression entities or list entities, or you can use prebuilt entities as features.
-Learn about [effective machine learned entities](luis-concept-entity-types.md#effective-machine-learned-entities).
+Learn about [effective machine learned entities](luis-concept-entity-types.md#machine-learned-ml-entity).
<a name="#do-build-the-app-iteratively"></a>
cognitive-services Luis Concept Data Extraction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-concept-data-extraction.md
Previously updated : 05/01/2020 Last updated : 04/13/2021 # Extract data from utterance text with intents and entities
Location names are set and known such as cities, counties, states, provinces, an
### New and emerging names
-Some apps need to be able to find new and emerging names such as products or companies. These types of names are the most difficult type of data extraction. Begin with a **[simple entity](luis-concept-entity-types.md#simple-entity)** and add a [phrase list](luis-concept-feature.md). [Review](./luis-how-to-review-endpoint-utterances.md) endpoint utterances on a regular basis to label any names that were not predicted correctly.
+Some apps need to be able to find new and emerging names such as products or companies. These types of names are the most difficult type of data extraction. Begin with a **[simple entity](luis-concept-entity-types.md)** and add a [phrase list](luis-concept-feature.md). [Review](./luis-how-to-review-endpoint-utterances.md) endpoint utterances on a regular basis to label any names that were not predicted correctly.
## Pattern.any entity data
cognitive-services Luis Concept Entity Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-concept-entity-types.md
description: An entity extracts data from a user utterance at prediction runtime
Previously updated : 08/06/2020 Last updated : 04/13/2021
-# Extract data with entities
+# Entities in LUIS
-An entity extracts data from a user utterance at prediction runtime. An _optional_, secondary purpose is to boost the prediction of the intent or other entities by using the entity as a feature.
+An entity is an item or an element that is relevant to the user's intent. Entities define data that can be extracted from the utterance and is essential to complete a user's required action. For example:
-There are several types of entities:
-
-* [machine-learning entity](reference-entity-machine-learned-entity.md) - this is the primary entity. You should design your schema with this entity type before using other entities.
-* Non-machine-learning used as a required [feature](luis-concept-feature.md) - for exact text matches, pattern matches, or detection by prebuilt entities
-* [Pattern.any](#patternany-entity) - to extract free-form text such as book titles from a [Pattern](reference-entity-pattern-any.md)
-
-machine-learning entities provide the widest range of data extraction choices. Non-machine-learning entities work by text matching and are used as a [required feature](#design-entities-for-decomposition) for a machine-learning entity or intent.
-
-## Entities represent data
+|Utterance|Intent predicted|Entities extracted|Explanation|
+|--|--|--|--|
+|Hello, how are you?|Greeting|-|Nothing to extract.|
+|I want to order a small pizza|orderPizza| "small" | "Size" entity is extracted as "small".|
+|Turn off bedroom light|turnOff| "bedroom" | "Room" entity is extracted as "bedroom".|
+|Check balance in my savings account ending in 4406|checkBalance| "savings", "4406" | "accountType" entity is extracted as "savings" and "accountNumber" entity is extracted as "4406".|
+|Buy 3 tickets to New York|buyTickets| "3", "New York" | "ticketsCount" entity is extracted as "3" and "Destination" entity is extracted as "New York".|
-Entities are data you want to pull from the utterance, such as names, dates, product names, or any significant group of words. An utterance can include many entities or none at all. A client application _may_ need the data to perform its task.
+Entities are optional but recommended. You don't need to create entities for every concept in your app, only for those where:
-Entities need to be labeled consistently across all training utterances for each intent in a model.
+* The client application needs the data, or
+* The entity acts as a hint or signal to another entity or intent. To learn more about entities as features, see [Entities as features](#entities-as-features).
- You can define your own entities or use prebuilt entities to save time for common concepts such as [datetimeV2](luis-reference-prebuilt-datetimev2.md), [ordinal](luis-reference-prebuilt-ordinal.md), [email](luis-reference-prebuilt-email.md), and [phone number](luis-reference-prebuilt-phonenumber.md).
+## Entity types
-|Utterance|Entity|Data|
-|--|--|--|
-|Buy 3 tickets to New York|Prebuilt number<br>Destination|3<br>New York|
+To create an entity, you have to give it a name and a type. There are several types of entities in LUIS.
+### List Entity
-### Entities are optional but recommended
+A list entity represents a fixed, closed set of related words along with their synonyms. You can use list entities to recognize multiple synonyms or variations and extract a normalized output for them. Use the *recommend* option to see suggestions for new words based on the current list.
-While [intents](luis-concept-intent.md) are required, entities are optional. You do not need to create entities for every concept in your app, but only for those where the client application needs the data or the entity acts as a hint or signal to another entity or intent.
+A list entity isn't machine-learned, meaning that LUIS does not discover additional values for list entities. LUIS marks any match to an item in any list as an entity in the response.
-As your application develops and a new need for data is identified, you can add appropriate entities to your LUIS model later.
+Matching in list entities is case sensitive, and the text has to be an exact match to be extracted. Normalized values are also used when matching the list entity. For example:
-<a name="entity-compared-to-intent"></a>
+|Normalized value|Synonyms|
+|--|--|
+|Small|sm, sml, tiny, smallest|
+|Medium|md, mdm, regular, average, middle|
+|Large|lg, lrg, big|
-## Entity represents data extraction
+See the [list entities reference article](reference-entity-list.md) for more information.
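As a conceptual sketch only (not how LUIS is implemented), the matching behavior described above amounts to an exact, case-sensitive lookup from a synonym to its normalized value:

```python
# Conceptual sketch of list-entity normalization: an exact, case-sensitive match
# to a normalized value or any of its synonyms returns the normalized value.
SIZE_LIST = {
    "Small": ["sm", "sml", "tiny", "smallest"],
    "Medium": ["md", "mdm", "regular", "average", "middle"],
    "Large": ["lg", "lrg", "big"],
}

def normalize(token: str):
    for normalized, synonyms in SIZE_LIST.items():
        if token == normalized or token in synonyms:
            return normalized
    return None  # no list-entity match

print(normalize("sml"))      # -> Small
print(normalize("regular"))  # -> Medium
```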
-The entity represents a data concept _inside the utterance_. An intent classifies the _entire utterance_.
+### Regex Entity
-Consider the following four utterances:
+A regular expression entity extracts an entity based on a regular expression pattern you provide. It ignores case and culture variants. Regular expressions are best for structured text or a predefined sequence of alphanumeric values that are expected in a certain format. For example:
-|Utterance|Intent predicted|Entities extracted|Explanation|
-|--|--|--|--|
-|Help|help|-|Nothing to extract.|
-|Send something|sendSomething|-|Nothing to extract. The model does not have a required feature to extract `something` in this context, and there is no recipient stated.|
-|Send Bob a present|sendSomething|`Bob`, `present`|The model extracts `Bob` by adding a required feature of prebuilt entity `personName`. A machine-learning entity has been used to extract `present`.|
-|Send Bob a box of chocolates|sendSomething|`Bob`, `box of chocolates`|The two important pieces of data, `Bob` and the `box of chocolates`, have been extracted by machine-learning entities.|
-
-## Label entities in all intents
+|Entity|Regular expression|Example|
+|--|--|--|
+|Flight Number|flight [A-Z]{2} [0-9]{4}| flight AS 1234|
+|Credit Card Number|[0-9]{16}|5478789865437632|
-Entities extract data regardless of the predicted intent. Make sure you label _all_ example utterances in all intents. The `None` intent missing entity labeling causes confusion even if there were far more training utterances for the other intents.
+See the [regex entities reference article](reference-entity-regular-expression.md) for more information.
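To make the flight-number row above concrete, here is a short Python sketch of the equivalent matching behavior; LUIS applies the pattern for you, so this code is illustrative only:

```python
import re

# The flight-number pattern from the table above; matching ignores case,
# mirroring how a regular expression entity behaves.
FLIGHT_NUMBER = re.compile(r"flight [A-Z]{2} [0-9]{4}", re.IGNORECASE)

match = FLIGHT_NUMBER.search("Is flight as 1234 on time today?")
print(match.group(0) if match else "no match")  # -> flight as 1234
```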
-## Design entities for decomposition
+### Prebuilt Entity
-machine-learning entities allow you to design your app schema for decomposition, breaking a large concept into subentities.
+LUIS offers a set of prebuilt entities for recognizing common types of data like name, date, number, and currency. The behavior of prebuilt entities is fixed. Prebuilt entity support varies according to the culture of the LUIS app. For example:
-Designing for decomposition allows LUIS to return a deep degree of entity resolution to your client application. This allows your client application to focus on business rules and leave data resolution to LUIS.
+|Prebuilt entity|Example value|
+|--|--|
+|PersonName|James, Bill, Tom|
+|DatetimeV2|2019-05-02, May 2nd, 8am on May 2nd 2019|
-A machine-learning entity triggers based on the context learned through example utterances.
+See the [prebuilt entities reference article](./luis-reference-prebuilt-entities.md) for more information.
-[**machine-learning entities**](tutorial-machine-learned-entity.md) are the top-level extractors. Subentities are child entities of machine-learning entities.
+### Pattern.Any Entity
-## Effective machine learned entities
+A pattern.Any entity is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. It follows a specific rule or pattern and is best used for sentences with a fixed lexical structure. For example:
-To build the machine learned entities effectively:
+|Example utterance|Pattern|Entity|
+|--|--|--|
+|Can I have a burger please?|Can I have a {meal} [please][?]| burger |
+|Can I have a pizza?|Can I have a {meal} [please][?]| pizza |
+|Where can I find The Great Gatsby?|Where can I find {bookName}?| The Great Gatsby|
-* Your labeling should be consistent across the intents. This includes even utterances you provide in the **None** intent that include this entity. Otherwise the model will not be able to determine the sequences effectively.
-* If you have a machine learned entity with subentities, make sure that the different orders and variants of the entity and subentities are presented in the labeled utterances. Labeled example utterances should include all valid forms, and include entities that appear and are absent and also reordered within the utterance.
-* You should avoid overfitting the entities to a very fixed set. **Overfitting** happens when the model doesn't generalize well, and is a common problem in machine learning models. This implies the app would not work on new data adequately. In turn, you should vary the labeled example utterances so the app is able to generalize beyond the limited examples you provide. You should vary the different subentities with enough change for the model to think more of the concept instead of just the examples shown.
+See the [Pattern.Any entities reference article](./reference-entity-pattern-any.md) for more information.
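As a rough analogy only (LUIS matches the pattern template itself; you do not supply a regular expression), the `{bookName}` placeholder behaves like a variable-length capture between the fixed parts of the sentence:

```python
import re

# Rough analogy for a pattern.Any placeholder: everything between the fixed
# lexical structure and the trailing "?" is captured as the entity value.
PATTERN = re.compile(r"where can i find (?P<bookName>.+)\?", re.IGNORECASE)

match = PATTERN.search("Where can I find The Great Gatsby?")
print(match.group("bookName") if match else "no match")  # -> The Great Gatsby
```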
-## Effective prebuilt entities
+### Machine learned (ML) Entity
-To build effective entities that extract common data, such as those provided by the [prebuilt entities](luis-reference-prebuilt-entities.md), we recommend the following process.
+A machine learned entity uses context to extract entities based on labeled examples. It is the preferred entity for building LUIS applications. It relies on machine-learning algorithms and requires labeling to be successfully tailored to your application. Use an ML entity to identify data that is not always well formatted but has the same meaning.
-Improve the extraction of data by bringing your own data to an entity as a feature. That way all the additional labels from your data will learn the context of where person names exist in your application.
+|Example utterance|Extracted *product* entity|
+|--|--|
+|I want to buy a book.|"book"|
+|Can I get these shoes please?|"shoes"|
+|Add those shorts to my basket.|"shorts"|
-<a name="composite-entity"></a>
-<a name="list-entity"></a>
-<a name="patternany-entity"></a>
-<a name="prebuilt-entity"></a>
-<a name="regular-expression-entity"></a>
-<a name="simple-entity"></a>
+You can learn more about Machine learned entities [here](./reference-entity-machine-learned-entity.md).
-## Types of entities
+See the [machine learned entities reference article](./reference-entity-machine-learned-entity.md) for more information.
-A subentity to a parent should be a machine-learning entity. The subentity can use a non-machine-learning entity as a [feature](luis-concept-feature.md).
+#### ML Entity with Structure
-Choose the entity based on how the data should be extracted and how it should be represented after it is extracted.
+An ML entity can be composed of smaller sub-entities, each of which can have its own properties. For example, *Address* could have the following structure:
-|Entity type|Purpose|
-|--|--|
-|[**Machine-learned**](tutorial-machine-learned-entity.md)|Extract nested, complex data learned from labeled examples. |
-|[**List**](reference-entity-list.md)|List of items and their synonyms extracted with **exact text match**.|
-|[**Pattern.any**](#patternany-entity)|Entity where finding the end of entity is difficult to determine because the entity is free-form. Only available in [patterns](luis-concept-patterns.md).|
-|[**Prebuilt**](luis-reference-prebuilt-entities.md)|Already trained to extract specific kind of data such as URL or email. Some of these prebuilt entities are defined in the open-source [Recognizers-Text](https://github.com/Microsoft/Recognizers-Text) project. If your specific culture or entity isn't currently supported, contribute to the project.|
-|[**Regular Expression**](reference-entity-regular-expression.md)|Uses regular expression for **exact text match**.|
+* Address: 4567 Main Street, NY, 98052, USA
+ * Building Number: 4567
+ * Street Name: Main Street
+ * State: NY
+ * Zip Code: 98052
+ * Country: USA
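One way to picture that decomposition is as a nested structure. The snippet below is purely illustrative and does not reproduce the exact JSON returned by the prediction endpoint:

```python
# Illustrative only: the decomposed Address entity above as a nested structure.
# The actual prediction response uses its own JSON shape and property names.
address = {
    "Address": "4567 Main Street, NY, 98052, USA",
    "children": {
        "Building Number": "4567",
        "Street Name": "Main Street",
        "State": "NY",
        "Zip Code": "98052",
        "Country": "USA",
    },
}
print(address["children"]["Zip Code"])  # -> 98052
```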
-## Extraction versus resolution
+### Building effective ML entities
-Entities extract data as the data appears in the utterance. Entities do not change or resolve the data. The entity won't provide any resolution if the text is a valid value for the entity or not.
+To build machine learned entities effectively, follow these best practices:
-There are ways to bring resolution into the extraction, but you should be aware that this limits the ability of the app to be immune against variations and mistakes.
+* If you have a machine learned entity with sub-entities, make sure that the different orders and variants of the entity and sub-entities are presented in the labeled utterances. Labeled example utterances should include all valid forms, including cases where entities appear, are absent, and are reordered within the utterance.
-List entities and regular expression (text-matching) entities can be used as [required features](luis-concept-feature.md#required-features) to a subentity and that acts as a filter to the extraction. You should use this carefully as not to hinder the ability of the app to predict.
+* Avoid overfitting the entities to a very fixed set. Overfitting happens when the model doesn't generalize well, and it is a common problem in machine learning models. It implies that the app would not work adequately on new types of examples. In turn, you should vary the labeled example utterances so the app can generalize beyond the limited examples you provide.
-## Extracting contextually related data
+* Your labeling should be consistent across the intents. This includes even utterances you provide in the *None* intent that include this entity. Otherwise, the model will not be able to determine the sequences effectively.
-An utterance may contain two or more occurrences of an entity where the meaning of the data is based on context within the utterance. An example is an utterance for booking a flight that has two geographical locations, origin and destination.
+## Entities as features
-`Book a flight from Seattle to Cairo`
+Another important function of entities is to use them as features, or distinguishing traits, for other intents or entities so that your system observes and learns through them.
-The two locations need to be extracted in a way that the client-application knows the type of each location in order to complete the ticket purchase.
+### Entities as features for intents
-To extract the origin and destination, create two subentities as part of the ticket order machine-learning entity. For each of the subentities, create a required feature that uses geographyV2.
+You can use entities as a signal for an intent. For example, the presence of a certain entity in the utterance can help determine which intent the utterance falls under.
-<a name="using-component-constraints-to-help-define-entity"></a>
-<a name="using-subentity-constraints-to-help-define-entity"></a>
+|Example utterance|Entity|Intent|
+|--|--|--|
+|Book me a *flight to New York*.|City|Book Flight|
+|Book me the *main conference room*.|Room|Reserve Room|
-### Using required features to constrain entities
+### Entities as features for entities
-Learn more about [required features](luis-concept-feature.md)
+You can also use entities as an indicator of the presence of other entities. A common example of this is using a prebuilt entity as a feature for another ML entity.
+If you are building a flight booking system and your utterance looks like "Book me a flight from Cairo to Seattle", you will have *Origin City* and *Destination City* as ML entities. A good practice would be to use the prebuilt `GeographyV2` entity as a feature for both entities.
-## Pattern.any entity
+See the [GeographyV2 entities reference article](./luis-reference-prebuilt-geographyv2.md) for more information.
-A Pattern.any is only available in a [Pattern](luis-concept-patterns.md).
+You can also use entities as required features for other entities. This helps in the resolution of extracted entities. For example, if you are creating a pizza ordering application and you have a `Size` ML entity, you can create a `SizeList` list entity and use it as a required feature for the `Size` entity. Your application will return the normalized value as the extracted entity from the utterance.
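For illustration, a prediction for an utterance such as "I want a sml pizza" could then surface the normalized value instead of the raw text. The fragment below is a hypothetical shape based on the `Size`/`SizeList` example, not the exact response JSON:

```python
# Hypothetical fragment for "I want a sml pizza", assuming a "Size" ML entity
# with the "SizeList" list entity as a required feature. Shape is illustrative;
# the real prediction JSON depends on your schema and query options.
prediction = {
    "query": "I want a sml pizza",
    "prediction": {
        "topIntent": "orderPizza",
        "entities": {
            "Size": ["Small"],  # normalized value contributed by SizeList
        },
    },
}
print(prediction["prediction"]["entities"]["Size"])  # -> ['Small']
```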
-<a name="if-you-need-more-than-the-maximum-number-of-entities"></a>
-## Exceeding app limits for entities
+See [features](luis-concept-feature.md) for more information, and [prebuilt entities](./luis-reference-prebuilt-entities.md) to learn more about prebuilt entities resolution available in your culture.
-If you need more than the [limit](luis-limits.md#model-limits), contact support. To do so, gather detailed information about your system, go to the [LUIS](luis-reference-regions.md#luis-website) website, and then select **Support**. If your Azure subscription includes support services, contact [Azure technical support](https://azure.microsoft.com/support/options/).
## Entity prediction status and errors
-The LUIS portal shows when the entity has a different entity prediction than the entity you selected for an example utterance. This different score is based on the current trained model.
+The LUIS portal shows the following when the entity has a different entity prediction than the entity you labeled for an example utterance. This different score is based on the current trained model.
-The erroring text is highlighted within the example utterance, and the example utterance line has an error indicator to the right, shown as a red triangle.
+The text causing the error is highlighted within the example utterance, and the example utterance line has an error indicator to the right, shown as a red triangle.
-Use this information to resolve entity errors using one or more of the following:
-* The highlighted text is mislabeled. To fix, review, correct, and retrain.
-* Create a [feature](luis-concept-feature.md) for the entity to help identify the entity's concept
-* Add more [example utterances](luis-concept-utterance.md) and label with the entity
-* [Review active learning suggestions](luis-concept-review-endpoint-utterances.md) for any utterances received at the prediction endpoint that can help identify the entity's concept.
-
-## Next steps
+To resolve entity errors, try one or more of the following:
-Learn concepts about good [utterances](luis-concept-utterance.md).
+* The highlighted text is mislabeled. To fix, review the label, correct it, and retrain the app.
+* Create a [feature](luis-concept-feature.md) for the entity to help identify the entity's concept.
+* Add more [example utterances](luis-concept-utterance.md) and label with the entity.
+* [Review active learning suggestions](luis-concept-review-endpoint-utterances.md) for any utterances received at the prediction endpoint that can help identify the entity's concept.
-See [Add entities](luis-how-to-add-entities.md) to learn more about how to add entities to your LUIS app.
-See [Tutorial: Extract structured data from user utterance with machine-learning entities in Language Understanding (LUIS)](tutorial-machine-learned-entity.md) to learn how to extract structured data from an utterance using the machine-learning entity.
+## Next steps
+* Learn about good example [utterances](luis-concept-utterance.md).
+* See [Add entities](luis-how-to-add-entities.md) to learn more about how to add entities to your LUIS app.
+* Learn more about LUIS [application limits](./luis-limits.md).
+* Use a [tutorial](tutorial-machine-learned-entity.md) to learn how to extract structured data from an utterance using the machine-learning entity.
cognitive-services Luis Concept Intent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-concept-intent.md
Previously updated : 10/10/2019 Last updated : 04/13/2021 # Intents in your LUIS app
Create an intent when the user's _intention_ would trigger an action in your cli
|Intent | Entity | Example utterance | ||||
-| CheckWeather | { "type": "location", "entity": "seattle" }<br>{ "type": "builtin.datetimeV2.date","entity": "tomorrow","resolution":"2018-05-23" } | What's the weather like in `Seattle` `tomorrow`? |
+| CheckWeather | { "type": "location", "entity": "Seattle" }<br>{ "type": "builtin.datetimeV2.date","entity": "tomorrow","resolution":"2018-05-23" } | What's the weather like in `Seattle` `tomorrow`? |
| CheckWeather | { "type": "date_range", "entity": "this weekend" } | Show me the forecast for `this weekend` | ||||
If you want to determine negative and positive intentions, such as "I **want** a
## Intents and patterns
-If you have example utterances, which can be defined in part or whole as a regular expression, consider using the [regular expression entity](luis-concept-entity-types.md#regular-expression-entity) paired with a [pattern](luis-concept-patterns.md).
+If you have example utterances, which can be defined in part or whole as a regular expression, consider using the [regular expression entity](luis-concept-entity-types.md#regex-entity) paired with a [pattern](luis-concept-patterns.md).
Using a regular expression entity guarantees the data extraction so that the pattern is matched. The pattern matching guarantees an exact intent is returned.
cognitive-services Luis How To Batch Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-batch-test.md
Previously updated : 12/29/2020 Last updated : 04/13/2021
App version ID
Submit a batch file of utterances, known as a *data set*, for batch testing. The data set is a JSON-formatted file containing a maximum of 1,000 labeled utterances. You can test up to 10 data sets in an app. If you need to test more, delete a data set and then add a new one. All custom entities in the model appear in the batch test entities filter even if there are no corresponding entities in the batch file data.
-The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learning entities](luis-concept-entity-types.md#types-of-entities) you expect to be detected.
+The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learning entities](luis-concept-entity-types.md#machine-learned-ml-entity) you expect to be detected.
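As a rough illustration of this format, the following sketch creates a one-utterance data set; the intent name, entity name, and utterance are made up, and the `startPos`/`endPos` values are zero-based character positions of the labeled text:

```bash
# A minimal, hypothetical batch-test data set with one labeled utterance.
cat > batch-test-dataset.json <<'EOF'
[
  {
    "text": "book 2 tickets to paris",
    "intent": "BookFlight",
    "entities": [
      { "entity": "Destination", "startPos": 18, "endPos": 22 }
    ]
  }
]
EOF
```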
### Batch syntax template for intents with entities
cognitive-services Luis Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-limits.md
If your app exceeds the LUIS model limits, consider using a [LUIS dispatch](luis
| External entities | no limits | | [Intents][intents]|500 per application: 499 custom intents, and the required _None_ intent.<br>[Dispatch-based](https://aka.ms/dispatch-tool) application has corresponding 500 dispatch sources.| | [List entities](./luis-concept-entity-types.md) | Parent: 50, child: 20,000 items. Canonical name is *default character max. Synonym values have no length restriction. |
-| [machine-learning entities + roles](./luis-concept-entity-types.md):<br> composite,<br>simple,<br>entity role|A limit of either 100 parent entities or 330 entities, whichever limit the user hits first. A role counts as an entity for the purpose of this limit. An example is a composite with a simple entity, which has 2 roles is: 1 composite + 1 simple + 2 roles = 4 of the 330 entities.<br>Subentities can be nested up to 5 levels, with a maximum of 10 children per level.|
+| [machine-learning entities + roles](./luis-concept-entity-types.md):<br> composite,<br>simple,<br>entity role|A limit of either 100 parent entities or 330 entities, whichever limit the user hits first. A role counts as an entity for the purpose of this limit. For example, a composite entity that contains a simple entity with 2 roles counts as: 1 composite + 1 simple + 2 roles = 4 of the 330 entities.<br>Subentities can be nested up to 5 levels, with a maximum of 20 children per level.|
|Model as a feature| Maximum number of models that can be used as a feature to a specific model to be 10 models. The maximum number of phrase lists used as a feature for a specific model to be 10 phrase lists.| | [Preview - Dynamic list entities](./luis-migration-api-v3.md)|2 lists of ~1k per query prediction endpoint request| | [Patterns](luis-concept-patterns.md)|500 patterns per application.<br>Maximum length of pattern is 400 characters.<br>3 Pattern.any entities per pattern<br>Maximum of 2 nested optional texts in pattern|
cognitive-services Luis Reference Prebuilt Entities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-reference-prebuilt-entities.md
Previously updated : 07/20/2020 Last updated : 04/13/2021
Language Understanding (LUIS) provides prebuilt entities.
## Entity resolution When a prebuilt entity is included in your application, LUIS includes the corresponding entity resolution in the endpoint response. All example utterances are also labeled with the entity.
-The behavior of prebuilt entities can't be modified but you can improve resolution by [adding the prebuilt entity as a feature to a machine-learning entity or subentity](luis-concept-entity-types.md#effective-prebuilt-entities).
+The behavior of prebuilt entities can't be modified but you can improve resolution by [adding the prebuilt entity as a feature to a machine-learning entity or sub-entity](luis-concept-entity-types.md#prebuilt-entity).
## Availability Unless otherwise noted, prebuilt entities are available in all LUIS application locales (cultures). The following table shows the prebuilt entities that are supported for each culture.
cognitive-services Reference Entity Machine Learned Entity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/reference-entity-machine-learned-entity.md
Previously updated : 04/30/2020 Last updated : 04/13/2021 # Machine-learning entity
This entity isn't available in the V2 prediction runtime.
## Next steps
-Learn more about the machine-learning entity including a [tutorial](tutorial-machine-learned-entity.md), [concepts](luis-concept-entity-types.md#design-entities-for-decomposition), and [how-to guide](luis-how-to-add-entities.md#create-a-machine-learned-entity).
+Learn more about the machine-learning entity including a [tutorial](tutorial-machine-learned-entity.md), [concepts](luis-concept-entity-types.md#machine-learned-ml-entity), and [how-to guide](luis-how-to-add-entities.md#create-a-machine-learned-entity).
Learn about the [list](reference-entity-list.md) entity and [regular expression](reference-entity-regular-expression.md) entity.
cognitive-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/troubleshooting.md
description: This article contains answers to frequently asked questions about L
Previously updated : 05/06/2020 Last updated : 04/13/2021 # Language Understanding Frequently Asked Questions (FAQ)
See the [boundaries](luis-limits.md) reference.
See [Best practices for intents](luis-concept-intent.md#if-you-need-more-than-the-maximum-number-of-intents).
-### I want to build an app in LUIS with more than the maximum number of entities. What should I do?
-
-See [Best practices for entities](luis-concept-entity-types.md#if-you-need-more-than-the-maximum-number-of-entities)
- ### What are the limits on the number and size of phrase lists? For the maximum length of a [phrase list](./luis-concept-feature.md), see the [boundaries](luis-limits.md) reference.
cognitive-services What Is Luis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/what-is-luis.md
Title: What is Language Understanding (LUIS)?
+ Title: Language Understanding (LUIS) Overview
description: Language Understanding (LUIS) - a cloud-based API service that applies machine-learning to conversational, natural language text to predict meaning and extract information. keywords: Azure, artificial intelligence, ai, natural language processing, nlp, natural language understanding, nlu, LUIS, conversational AI, ai chatbot, nlp ai, azure luis Previously updated : 03/22/2021 Last updated : 04/13/2021
[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
-Language Understanding (LUIS) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information.
+Language Understanding (LUIS) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information. LUIS provides access through its [custom portal](https://www.luis.ai), [APIs][endpoint-apis] and [SDK client libraries](client-libraries-rest-api.md).
-A client application for LUIS is any conversational application that communicates with a user in natural language to complete a task. Examples of client applications include social media apps, AI chatbots, and speech-enabled desktop applications.
-
-![Conceptual image of 3 client applications working with Cognitive Services Language Understanding (LUIS)](./media/luis-overview/luis-entry-point.png "Conceptual image of 3 client applications working with Cognitive Services Language Understanding (LUIS)")
+For first time users, follow these steps to [sign in to LUIS portal](sign-in-luis-portal.md "sign in to LUIS portal")
+To get started, you can try LUIS [prebuilt domain](luis-get-started-create-app.md) apps, or you can [build your app](get-started-portal-build-app.md).
This documentation contains the following article types:
This documentation contains the following article types:
* [**Concepts**](artificial-intelligence.md) provide in-depth explanations of the service functionality and features. * [**Tutorials**](tutorial-intents-only.md) are longer guides that show you how to use the service as a component in broader business solutions.
-## Use LUIS in a chat bot
-
-<a name="Accessing-LUIS"></a>
-
-Once the Azure LUIS app is published, a client application sends utterances (text) to the LUIS natural language processing endpoint [API][endpoint-apis] and receives the results as JSON responses. A common client application for LUIS is a chat bot.
--
-![Conceptual imagery of LUIS working with Chat bot to predict user text with natural language understanding (NLP)](./media/luis-overview/LUIS-chat-bot-request-response.svg "Conceptual imagery of LUIS working with Chat bot to predict user text with natural language understanding (NLP")
-
-|Step|Action|
-|:--|:--|
-|1|The client application sends the user _utterance_ (text in their own words), "I want to call my HR rep." to the LUIS endpoint as an HTTP request.|
-|2|LUIS enables you to craft your custom language models to add intelligence to your application. Machine learned language models take the user's unstructured input text and returns a JSON-formatted response, with a top intent, `HRContact`. The minimum JSON endpoint response contains the query utterance, and the top scoring intent. It can also extract data such as the _Contact Type_ entity.|
-|3|The client application uses the JSON response to make decisions about how to fulfill the user's requests. These decisions can include decision tree in the bot framework code and calls to other services. |
-
-The LUIS app provides intelligence so the client application can make smart choices. LUIS doesn't provide those choices.
-
-<a name="Key-LUIS-concepts"></a>
-<a name="what-is-a-luis-model"></a>
-
-## Natural language understanding (NLU)
-
-[LUIS provides artificial intelligence (AI)](artificial-intelligence.md "LUIS provides artificial intelligence (AI)") in the form of NLU, a subset of natural language processing AI.
-
-Your LUIS app contains a domain-specific natural language model. You can start the LUIS app with a prebuilt domain model, build your own model, or blend pieces of a prebuilt domain with your own custom information.
-
-* **Prebuilt model** LUIS has many prebuilt domain models including intents, utterances, and prebuilt entities. You can use the prebuilt entities without having to use the intents and utterances of the prebuilt model. [Prebuilt domain models](./howto-add-prebuilt-models.md "Prebuilt domain models") include the entire design for you and are a great way to start using LUIS quickly.
-
-* **Custom model** LUIS gives you several ways to identify your own custom models including intents, and entities. Entities include machine-learning entities, specific or literal entities, and a combination of machine-learning and literal.
-
-Learn more about [NLP AI](artificial-intelligence.md "NLP"), and the LUIS-specific area of NLU.
-
-## Step 1: Design and build your model
-
-Design your model with categories of user intentions called **[intents](luis-concept-intent.md "intents")**. Each intent needs examples of user **[utterances](luis-concept-utterance.md "utterances")**. Each utterance can provide data that needs to be extracted with [machine-learning entities](luis-concept-entity-types.md#effective-machine-learned-entities "machine-learning entities").
-
-|Example user utterance|Intent|Extracted data|
-|--|--|--|
-|`Book a flight to Seattle?`|BookFlight|Seattle|
-|`When does your store open?`|StoreHoursAndLocation|open|
-|`Schedule a meeting at 1pm with Bob in Distribution`|ScheduleMeeting|1pm, Bob|
-
-Build the model with the [authoring](https://go.microsoft.com/fwlink/?linkid=2092087 "authoring") APIs, or with the **[LUIS portal](https://www.luis.ai "LUIS portal")**, or both. Learn more how to build with the [portal](get-started-portal-build-app.md "portal") and the [SDK client libraries](./client-libraries-rest-api.md?pivots=rest-api "SDK client libraries").
-
-## Step 2: Get the query prediction
-
-After your app's model is trained and published to the endpoint, a client application (such as a chat bot) sends utterances to the prediction [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356 "endpoint") API. The API applies the model to the utterance for analysis and responds with the prediction results in a JSON format.
-
-The minimum JSON endpoint response contains the query utterance, and the top scoring intent. It can also extract data such as the following **Contact Type** entity and overall sentiment.
-
-```JSON
-{
- "query": "I want to call my HR rep",
- "prediction": {
- "topIntent": "HRContact",
- "intents": {
- "HRContact": {
- "score": 0.8582669
- }
- },
- "entities": {
- "Contact Type": [
- "call"
- ]
- },
- "sentiment": {
- "label": "neutral",
- "score": 0.5
- }
- }
-}
-```
-
-## Step 3: Improve model prediction
-
-After your LUIS app is published and receives real user utterances, LUIS provides [active learning](luis-concept-review-endpoint-utterances.md "active learning") of endpoint utterances to improve prediction accuracy. Review these suggestions as part of your regular maintenance work in your development lifecycle.
-
-<a name="using-luis"></a>
-
-## Development lifecycle and tools
-LUIS provides tools, versioning, and collaboration with other LUIS authors to integrate into the full [development life cycle](luis-concept-app-iteration.md "development life cycle").
-
-Language Understanding (LUIS), as a REST API, can be used with any product, service, or framework with an HTTP request. LUIS also provides client libraries (SDKs) for several top programming languages. Learn more about the [developer resources](developer-reference-resource.md "developer resources") provided.
-
-Tools to quickly and easily use LUIS with a bot:
-* [LUIS CLI](https://github.com/Microsoft/botbuilder-tools/tree/master/packages/LUIS "LUIS CLI") The NPM package provides authoring and prediction with as either a stand-alone command-line tool or as import.
-* [LUISGen](https://github.com/Microsoft/botbuilder-tools/tree/master/packages/LUISGen "LUISGen") LUISGen is a tool for generating strongly typed C# and typescript source code from an exported LUIS model.
-* [Dispatch](https://aka.ms/dispatch-tool "Dispatch") allows several LUIS and QnA Maker apps to be used from a parent app using dispatcher model.
-* [LUDown](https://github.com/Microsoft/botbuilder-tools/tree/master/packages/Ludown "LUDown") LUDown is a command-line tool that helps manage language models for your bot.
-
-## Integrate with a bot
-
-Use the [Azure Bot service](/azure/bot-service/ "Azure Bot service") with the [Microsoft Bot Framework](https://dev.botframework.com/ "Microsoft Bot Framework") to build and deploy a chat bot. Design and develop with the graphical interface tool, [Composer](/composer/ "Composer"), or [working bot samples](https://github.com/microsoft/BotBuilder-Samples "working bot samples") designed for top bot scenarios.
+## What does LUIS offer?
-## Integrate with other Cognitive Services
+* **Simplicity**: LUIS removes the need for in-house AI expertise or prior machine learning knowledge. With only a few clicks, you can build your own conversational AI application. You can build your custom application by following one of our [quickstarts](get-started-portal-build-app.md), or you can use one of our [prebuilt domain](luis-get-started-create-app.md) apps.
+* **Security, privacy, and compliance**: Backed by Azure infrastructure, LUIS offers enterprise-grade security, privacy, and compliance. Your data remains yours; you can delete your data at any time. Your data is encrypted while it's in storage. Learn more about this [here](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy).
+* **Integration**: Easily integrate your LUIS app with other Microsoft services like [Microsoft Bot framework](https://docs.microsoft.com/composer/tutorial/tutorial-luis), [QnA Maker](../QnAMaker/choose-natural-language-processing-service.md), and [Speech service](../Speech-Service/quickstarts/intent-recognition.md).
-Other Cognitive Services used with LUIS:
-* [QnA Maker](../QnAMaker/overview/overview.md "QnA Maker") allows several types of text to combine into a question and answer knowledge base.
-* [Speech service](../Speech-Service/overview.md "Speech service") converts spoken language requests into text.
-LUIS provides functionality from Text Analytics as part of your existing LUIS resources. This functionality includes [sentiment analysis](luis-how-to-publish-app.md#configuring-publish-settings "sentiment analysis") and [key phrase extraction](luis-reference-prebuilt-keyphrase.md "key phrase extraction") with the prebuilt keyPhrase entity.
+## LUIS scenarios
+* [Build an enterprise-grade conversational bot](https://docs.microsoft.com/azure/architecture/reference-architectures/ai/conversational-bot): This reference architecture describes how to build an enterprise-grade conversational bot (chatbot) using the Azure Bot Framework.
+* [Commerce Chatbot](https://docs.microsoft.com/azure/architecture/solution-ideas/articles/commerce-chatbot): Together, the Azure Bot Service and Language Understanding service enable developers to create conversational interfaces for various scenarios like banking, travel, and entertainment.
+* [Controlling IoT devices using a Voice Assistant](https://docs.microsoft.com/azure/architecture/solution-ideas/articles/iot-controlling-devices-with-voice-assistant): Create seamless conversational interfaces with all of your internet-accessible devices, from your connected television or fridge to devices in a connected power plant.
-## Learn with the Quickstarts
-Learn about LUIS with hands-on quickstarts using the [portal](get-started-portal-build-app.md "portal") and the [SDK client libraries](./client-libraries-rest-api.md?pivots=rest-api "SDK client libraries").
+## Application development life cycle
+![LUIS app development life cycle](./media/luis-overview/luis-dev-lifecycle.png "LUIS application development life cycle")
-## Deploy on premises using Docker containers
+- **Plan**: Identify the scenarios that users might use your application for. Define the actions and relevant information that needs to be recognized.
+- **Build**: Use your authoring resource to develop your app. Start by defining [intents](luis-concept-intent.md) and [entities](luis-concept-entity-types.md). Then, add training [utterances](luis-concept-utterance.md) for each intent.
+- **Test and Improve**: Start testing your model with other utterances to get a sense of how the app behaves, and decide whether any improvement is needed. You can improve your application by following these [best practices](luis-concept-best-practices.md).
+- **Publish**: Deploy your app for prediction and query the endpoint using your prediction resource (see the example request after this list). Learn more about authoring and prediction resources [here](luis-how-to-azure-subscription.md#luis-resources).
+- **Connect**: Connect to other services such as [Microsoft Bot framework](https://docs.microsoft.com/composer/tutorial/tutorial-luis), [QnA Maker](../QnAMaker/choose-natural-language-processing-service.md), and [Speech service](../Speech-Service/quickstarts/intent-recognition.md).
+- **Refine**: [Review endpoint utterances](luis-concept-review-endpoint-utterances.md) to improve your application with real-life examples.
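As a rough sketch of the **Publish** step, a client can query the published prediction endpoint over HTTPS. The resource name, app ID, key, and query in the following request are placeholders, not values from this article:

```bash
# Hypothetical V3 prediction request; substitute your own prediction resource, app ID, and key.
curl "https://<prediction-resource-name>.cognitiveservices.azure.com/luis/prediction/v3.0/apps/<app-id>/slots/production/predict?subscription-key=<prediction-key>&query=I%20want%20to%20call%20my%20HR%20rep"
```

The JSON response contains the top-scoring intent and any extracted entities, which the client application uses to decide how to respond.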
-[Use LUIS containers](luis-container-howto.md) to deploy API features on-premises. These Docker containers enable you to bring the service closer to your data for compliance, security or other operational reasons.
+Learn more about planning and building your application [here](luis-how-plan-your-app.md).
## Next steps * [What's new](whats-new.md "What's new") with the service and documentation
+* [Build a LUIS app](tutorial-intents-only.md)
+* [API reference][endpoint-apis]
+* [Best practices](luis-concept-best-practices.md)
+* [Developer resources](developer-reference-resource.md "Developer resources") for LUIS.
* [Plan your app](luis-how-plan-your-app.md "Plan your app") with [intents](luis-concept-intent.md "intents") and [entities](luis-concept-entity-types.md "entities"). [bot-framework]: /bot-framework/ [flow]: /connectors/luis/ [authoring-apis]: https://go.microsoft.com/fwlink/?linkid=2092087 [endpoint-apis]: https://go.microsoft.com/fwlink/?linkid=2092356
-[qnamaker]: https://qnamaker.ai/
+[qnamaker]: https://qnamaker.ai/
cognitive-services Get Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-operations.md
The following information is returned in a successful response.
|summary.cancelled|integer|Count of documents canceled.| |summary.totalCharacterCharged|integer|Total count of characters charged.|
-###Error response
+### Error response
|Name|Type|Description| | | | |
cognitive-services V3 0 Translate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/v3-0-translate.md
Request parameters passed on the query string are:
</tr> <tr> <td>textType</td>
- <td><em>Optional parameter</em>.<br/>Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. When translating HTML text, the output text has the following special characters in escaped form: '&', '<', and '>'. This is irrespective of whether the input HTML text has the characters escaped. Possible values are: <code>plain</code> (default) or <code>html</code>.</td>
+ <td><em>Optional parameter</em>.<br/>Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. Possible values are: <code>plain</code> (default) or <code>html</code>.</td>
</tr> <tr> <td>category</td>
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
The following list presents the set of features which are currently available in
| | Get notified when participants are actively typing a message in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Get all messages in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Send Unicode emojis as part of message content | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-|Real-time notifications (enabled by proprietary signaling package**)| Chat clients can subscribe to get real-time updates for incoming messages and other operations occurring in a chat thread. To see a list of supported updates for real-time notifications, see [Chat concepts](concepts.md#real-time-notifications) | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+|Real-time notifications (enabled by proprietary signaling package**)| Chat clients can subscribe to get real-time updates for incoming messages and other operations occurring in a chat thread. To see a list of supported updates for real-time notifications, see [Chat concepts](concepts.md#real-time-notifications) | ✔️ | ❌ | ❌ | ❌ | ✔️ | ✔️ |
| Integration with Azure Event Grid | Use the chat events available in Azure Event Grid to plug custom notification services or post that event to a webhook to execute business logic like updating CRM records after a chat is finished | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | Reporting </br>(This info is available under Monitoring tab for your Communication Services resource on Azure portal) | Understand API traffic from your chat app by monitoring the published metrics in Azure Metrics Explorer and set alerts to detect abnormalities | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Monitor and debug your Communication Services solution by enabling diagnostic logging for your resource | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/get-phone-number.md
zone_pivot_groups: acs-azp-java-net-python-csharp-js
[!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)] + ::: zone pivot="platform-azp" [!INCLUDE [Azure portal](./includes/phone-numbers-portal.md)] ::: zone-end
confidential-computing Confidential Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md Binary files differ
confidential-computing Confidential Nodes Out Of Proc Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-out-of-proc-attestation.md
Title: Out-of-proc attestation support with Intel SGX quote helper Daemonset on Azure (preview)
-description: DaemonSet for generating the quote outside of the SGX application process. This article explains how the out-of-proc attestation facility is rovided for confidential workloads running inside a container.
+ Title: Attestation support with Intel SGX quote helper DaemonSet on Azure (preview)
+description: A DaemonSet for generating the quote outside of the Intel SGX application process. This article explains how the out-of-process attestation facility is provided for confidential workloads that run inside a container.
Last updated 2/12/2021
-# Platform Software Management with SGX quote helper daemon set (preview)
+# Platform software management with Intel SGX quote helper DaemonSet (preview)
-[Enclave applications](confidential-computing-enclaves.md) that perform remote attestation requires a generated QUOTE. This QUOTE provides cryptographic proof of the identity and the state of the application, as well as the environment the enclave is running. The generation of the QUOTE requires trusted software components that are part of Intel's Platform Software Components (PSW).
+[Enclave applications](confidential-computing-enclaves.md) that perform remote attestation require a generated quote. This quote provides cryptographic proof of the identity and the state of the application, as well as the environment in which the enclave is running. The generation of the quote requires trusted software components that are part of the Intel Platform Software Components (PSW).
## Overview Intel supports two attestation modes to run the quote generation:-- **in-proc**: hosts the trusted software components inside the enclave application process -- **out-of-proc**: hosts the trusted software components outside of the enclave application.
+- *In-process* hosts the trusted software components inside the enclave application process.
+
+- *Out-of-process* hosts the trusted software components outside of the enclave application.
-SGX applications built using Open Enclave SDK by default use in-proc attestation mode. SGX-based applications allow out-of-proc and would require extra hosting and exposing the required components such as Architectural Enclave Service Manager (AESM), external to the application.
+Intel Software Guard Extensions (Intel SGX) applications built by using the Open Enclave SDK use the in-process attestation mode, by default. Intel SGX-based applications do allow the out-of-process attestation mode. If you want to use this mode, you need extra hosting, and you need to expose the required components, such as the Architectural Enclave Service Manager (AESM), outside the application.
-Utilizing this feature is **highly recommended**, as it enhances uptime for your enclave apps during Intel Platform updates or DCAP driver updates.
+This feature enhances uptime for your enclave apps during Intel platform updates or DCAP driver updates. For this reason, we recommend using it.
-To enable this feature on AKS Cluster please modify add --enable-sgxquotehelper command to the CLI when enabling the confidential computing add-on. Detailed CLI instructions are [here](confidential-nodes-aks-get-started.md):
+To enable this feature on an Azure Kubernetes Services (AKS) cluster, add the `--enable-sgxquotehelper` command to the Azure CLI when you're enabling the confidential computing add-on.
```azurecli-interactive # Create a new AKS cluster with system node pool with Confidential Computing addon enabled and SGX Quote Helper az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addon confcom --enable-sgxquotehelper ```
-## Why and What are the benefits of out-of-proc?
+For more information, see [Quickstart: Deploy an AKS cluster with confidential computing nodes by using the Azure CLI](confidential-nodes-aks-get-started.md).
+
+## Benefits of the out-of-process mode
-- No updates are required for quote generation components of PSW for each containerized application:
-With out-of-proc, container owners don't need to manage updates within their container. Container owners instead rely on the provider provided interface that invokes the centralized service outside of the container, which will be updated and managed by provider.
+The following list describes some of the main benefits of this attestation mode:
-- No need to worry about attestation failures due to out-of-date PSW components:
-The quote generation involves the trusted SW components - Quoting Enclave (QE) & Provisioning Certificate Enclave (PCE), which are part of the trusted computing base (TCB). These SW components must be up to date to maintain the attestation requirements. Since the provider manages the updates to these components, customers will never have to deal with attestation failures due to out-of-date trusted SW components within their container.
+- No updates are required for quote generation components of PSW for each containerized application. Container owners don't need to manage updates within their container. Container owners instead rely on the provider interface that invokes the centralized service outside of the container, and the provider updates and manages that service.
-- Better utilization of EPC memory
-In in-proc attestation mode, each enclave application needs to instantiate the copy of QE and PCE for remote attestation. With out-of-proc, there is no need for the container to host those enclaves, and thus doesn't consume enclave memory from the container quota.
+- You don't need to worry about attestation failures due to out-of-date PSW components. The provider manages the updates to these components.
-- Safeguards against Kernel enforcement
-When the SGX driver is up streamed into Linux kernel, there will be enforcement for an enclave to have higher privilege. This privilege allows the enclave to invoke PCE, which will break the enclave application running in in-proc mode. By default, enclaves don't get this permission. Granting this privilege to an enclave application requires changes to the application installation process. This is handled easily for out-of-proc model as the provider of the service that handles out-of-proc requests will make sure the service is installed with this privilege.
+- The out-of-process mode provides better utilization of EPC memory than the in-process mode does. In in-process mode, each enclave application needs to instantiate the copy of QE and PCE for remote attestation. In out-of-process mode, there's no need for the container to host those enclaves, and therefore it doesn't consume enclave memory from the container quota.
-- No need to check for backward compatibility with PSW & DCAP. The updates to the quote generation components of PSW are validated for backward compatibility by the provider before updating. This will help in handling the compatibility issues upfront and address them before deploying updates for confidential workloads.
+- When you upstream the Intel SGX driver into a Linux kernel, there is enforcement for an enclave to have higher privilege. This privilege allows the enclave to invoke PCE, which will break the enclave application running in in-process mode. By default, enclaves don't get this permission. Granting this privilege to an enclave application requires changes to the application installation process. By contrast, in the out-of-process mode, the provider of the service that handles out-of-process requests ensures that the service is installed with this privilege.
-## How does the out-of-proc attestation mode work for confidential workloads scenario?
+- You don't need to check for backward compatibility with PSW and DCAP. The updates to the quote generation components of PSW are validated for backward compatibility by the provider before updating. This helps you handle compatibility issues before deploying updates for confidential workloads.
-The high-level design follows the model where the quote requestor and quote generation are executed separately, but on the same physical machine. The quote generation will be done in a centralized manner and serves requests for QUOTES from all entities. The interface needs to be properly defined and discoverable for any entity to request quotes.
+## Confidential workloads
-![sgx quote helper aesm](./media/confidential-nodes-out-of-proc-attestation/aesmmanager.png)
+The quote requestor and quote generation run separately, but on the same physical machine. Quote generation is centralized, and serves requests for quotes from all entities. For any entity to request quotes, the interface needs to be properly defined and discoverable.
-The above abstract model applies to confidential workload scenario, by taking advantage of already available AESM service. AESM is containerized and deployed as a daemonSet across the Kubernetes cluster. Kubernetes guarantees a single instance of an AESM service container, wrapped in a Pod, to be deployed on each agent node. The new SGX Quote daemonset will have a dependency on the sgx-device-plugin daemonset, since the AESM service container would request EPC memory from sgx-device-plugin for launching QE and PCE enclaves.
+![Diagram showing the relationships among the quote requestor, quote generation, and interface.](./media/confidential-nodes-out-of-proc-attestation/aesmmanager.png)
-Each container needs to opt in to use out-of-proc quote generation by setting the environment variable **SGX_AESM_ADDR=1** during creation. The container should also include the package libsgx-quote-ex that is responsible to direct the request to default Unix domain socket
+This abstract model applies to the confidential workload scenario, by taking advantage of the AESM service that's already available. AESM is containerized and deployed as a DaemonSet across the Kubernetes cluster. Kubernetes guarantees a single instance of an AESM service container, wrapped in a pod, to be deployed on each agent node. The new Intel SGX quote DaemonSet will have a dependency on the sgx-device-plugin DaemonSet, because the AESM service container requests EPC memory from the sgx-device-plugin for launching QE and PCE enclaves.
-An application can still use the in-proc attestation as before, but both in-proc and out-of-proc can't be used simultaneously within an application. The out-of-proc infrastructure is available by default and consumes resources.
+Each container needs to opt in to use out-of-process quote generation by setting the environment variable `SGX_AESM_ADDR=1` during creation. The container should also include the libsgx-quote-ex package, which directs the request to the default Unix domain socket.
-## Sample Implementation
+An application can still use the in-process attestation as before, but in-process and out-of-process can't be used simultaneously within an application. The out-of-process infrastructure is available by default, and consumes resources.
-The below docker file is a sample for an Open Enclave-based application. Set the SGX_AESM_ADDR=1 environment variable in the docker file or by set it on the deployment file. Follow the below sample for docker file and deployment yaml details.
+## Sample implementation
+
+The following Docker file is a sample for an application based on Open Enclave. Set the `SGX_AESM_ADDR=1` environment variable in the Docker file, or set it in the deployment file, as shown in the samples that follow.
> [!Note]
- > The **libsgx-quote-ex** from Intel needs to be packaged in the application container for out-of-proc attestation to work properly.
+ > For the out-of-process attestation to work properly, the libsgx-quote-ex from Intel needs to be packaged in the application container.
```yaml
-# Refer to Intel_SGX_Installation_Guide_Linux for detail
+# Refer to Intel_SGX_Installation_Guide_Linux for details
FROM ubuntu:18.04 as sgx_base RUN apt-get update && apt-get install -y \ wget \
RUN apt-get update && apt-get install -y \
WORKDIR /opt/openenclave/share/openenclave/samples/remote_attestation RUN . /opt/openenclave/share/openenclave/openenclaverc \ && make build
-# this sets the flag for out of proc attestation mode. alternatively you can set this flag on the deployment files
+# This sets the flag for out-of-process attestation mode. Alternatively you can set this flag on the deployment files.
ENV SGX_AESM_ADDR=1 CMD make run ```
-Alternatively, the out-of-proc attestation mode can be set in the deployment yaml file as shown below
+Alternatively, you can set the out-of-process attestation mode in the deployment .yaml file. Here's how:
```yaml apiVersion: batch/v1
spec:
path: /var/run/aesmd ```
-## Next Steps
-[Provision Confidential Nodes (DCsv2-Series) on AKS](./confidential-nodes-aks-get-started.md)
+## Next steps
-[Quick starter samples confidential containers](https://github.com/Azure-Samples/confidential-container-samples)
-
-[DCsv2 SKU List](../virtual-machines/dcv2-series.md)
-
-<!-- LINKS - external -->
-[Azure Attestation]: ../attestation/index.yml
+[Quickstart: Deploy an AKS cluster with confidential computing nodes by using the Azure CLI](./confidential-nodes-aks-get-started.md)
+[Quick starter samples confidential containers](https://github.com/Azure-Samples/confidential-container-samples)
-<!-- LINKS - internal -->
-[DC Virtual Machine]: /confidential-computing/virtual-machine-solutions
+[DCsv2 SKUs](../virtual-machines/dcv2-series.md)
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-private-link.md
Title: Set up private link
+ Title: Set up private endpoint with private link
description: Set up a private endpoint on a container registry and enable access over a private link in a local virtual network. Private link access is a feature of the Premium service tier. Previously updated : 10/01/2020 Last updated : 03/31/2021 # Connect privately to an Azure container registry using Azure Private Link
az network vnet subnet update \
### Configure the private DNS zone
-Create a [private DNS zone](../dns/private-dns-privatednszone.md) for the private Azure container registry domain. In later steps, you create DNS records for your registry domain in this DNS zone.
+Create a [private Azure DNS zone](../dns/private-dns-privatednszone.md) for the private Azure container registry domain. In later steps, you create DNS records for your registry domain in this DNS zone. For more information, see [DNS configuration options](#dns-configuration-options), later in this article.
To use a private zone to override the default DNS resolution for your Azure container registry, the zone must be named **privatelink.azurecr.io**. Run the following [az network private-dns zone create][az-network-private-dns-zone-create] command to create the private zone:
az network private-endpoint create \
--connection-name myConnection ```
-### Get private IP addresses
+### Get endpoint IP configuration
-Run [az network private-endpoint show][az-network-private-endpoint-show] to query the endpoint for the network interface ID:
+To configure DNS records, get the IP configuration of the private endpoint. Associated with the private endpoint's network interface in this example are two private IP addresses for the container registry: one for the registry itself, and one for the registry's data endpoint.
+
+First, run [az network private-endpoint show][az-network-private-endpoint-show] to query the private endpoint for the network interface ID:
```azurecli NETWORK_INTERFACE_ID=$(az network private-endpoint show \
NETWORK_INTERFACE_ID=$(az network private-endpoint show \
--output tsv) ```
-Associated with the network interface in this example are two private IP addresses for the container registry: one for the registry itself, and one for the registry's data endpoint. The following [az resource show][az-resource-show] commands get the private IP addresses for the container registry and the registry's data endpoint:
+The following [az network nic show][az-network-nic-show] commands get the private IP addresses for the container registry and the registry's data endpoint:
```azurecli
-PRIVATE_IP=$(az resource show \
+REGISTRY_PRIVATE_IP=$(az network nic show \
+ --ids $NETWORK_INTERFACE_ID \
+ --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry'].privateIpAddress" \
+ --output tsv)
+
+DATA_ENDPOINT_PRIVATE_IP=$(az network nic show \
--ids $NETWORK_INTERFACE_ID \
- --api-version 2019-04-01 \
- --query 'properties.ipConfigurations[1].properties.privateIPAddress' \
+ --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry_data_$REGISTRY_LOCATION'].privateIpAddress" \
--output tsv)
-DATA_ENDPOINT_PRIVATE_IP=$(az resource show \
+# An FQDN is associated with each IP address in the IP configurations
+
+REGISTRY_FQDN=$(az network nic show \
--ids $NETWORK_INTERFACE_ID \
- --api-version 2019-04-01 \
- --query 'properties.ipConfigurations[0].properties.privateIPAddress' \
+ --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry'].privateLinkConnectionProperties.fqdns" \
+ --output tsv)
+
+DATA_ENDPOINT_FQDN=$(az network nic show \
+ --ids $NETWORK_INTERFACE_ID \
+ --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry_data_$REGISTRY_LOCATION'].privateLinkConnectionProperties.fqdns" \
--output tsv) ```
az network private-dns record-set a add-record \
--record-set-name $REGISTRY_NAME \ --zone-name privatelink.azurecr.io \ --resource-group $RESOURCE_GROUP \
- --ipv4-address $PRIVATE_IP
+ --ipv4-address $REGISTRY_PRIVATE_IP
# Specify registry region in data endpoint name az network private-dns record-set a add-record \
az acr private-endpoint-connection list \
When you set up a private endpoint connection using the steps in this article, the registry automatically accepts connections from clients and services that have Azure RBAC permissions on the registry. You can set up the endpoint to require manual approval of connections. For information about how to approve and reject private endpoint connections, see [Manage a Private Endpoint Connection](../private-link/manage-private-endpoint.md).
-## Add zone records for replicas
-
-As shown in this article, when you add a private endpoint connection to a registry, you create DNS records in the `privatelink.azurecr.io` zone for the registry and its data endpoints in the regions where the registry is [replicated](container-registry-geo-replication.md).
-
-If you later add a new replica, you need to manually add a new zone record for the data endpoint in that region. For example, if you create a replica of *myregistry* in the *northeurope* location, add a zone record for `myregistry.northeurope.data.azurecr.io`. For steps, see [Create DNS records in the private zone](#create-dns-records-in-the-private-zone) in this article.
+> [!IMPORTANT]
+> Currently, if you delete a private endpoint from a registry, you might also need to delete the virtual network's link to the private zone. If the link isn't deleted, you may see an error similar to `unresolvable host`.
## DNS configuration options
-The private endpoint in this example integrates with a private DNS zone associated with a basic virtual network. This setup uses the Azure-provided DNS service directly to resolve the registry's public FQDN to its private IP address in the virtual network.
+The private endpoint in this example integrates with a private DNS zone associated with a basic virtual network. This setup uses the Azure-provided DNS service directly to resolve the registry's public FQDN to its private IP addresses in the virtual network.
Private link supports additional DNS configuration scenarios that use the private zone, including with custom DNS solutions. For example, you might have a custom DNS solution deployed in the virtual network, or on-premises in a network you connect to the virtual network using a VPN gateway or Azure ExpressRoute.
To resolve the registry's public FQDN to the private IP address in these scenari
> [!IMPORTANT] > If for high availability you created private endpoints in several regions, we recommend that you use a separate resource group in each region and place the virtual network and the associated private DNS zone in it. This configuration also prevents unpredictable DNS resolution caused by sharing the same private DNS zone.
+### Manually configure DNS records
+
+For some scenarios, you may need to manually configure DNS records in a private zone instead of using the Azure-provided private zone. Be sure to create records for each of the following endpoints: the registry endpoint, the registry's data endpoint, and the data endpoint for any additional regional replica. If any of these records isn't configured, the registry may be unreachable.
+
+> [!IMPORTANT]
+> If you later add a new replica, you need to manually add a new DNS record for the data endpoint in that region. For example, if you create a replica of *myregistry* in the northeurope location, add a record for `myregistry.northeurope.data.azurecr.io`.
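For instance, after you get the replica data endpoint's private IP address from the endpoint's network interface, a record for a hypothetical *northeurope* replica could be added with a command like the following sketch (the IP address value is a placeholder):

```azurecli
# Hypothetical example for a replica in northeurope; substitute your own values.
az network private-dns record-set a add-record \
  --record-set-name ${REGISTRY_NAME}.northeurope.data \
  --zone-name privatelink.azurecr.io \
  --resource-group $RESOURCE_GROUP \
  --ipv4-address <data-endpoint-private-IP-in-northeurope>
```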
+
+The FQDNs and private IP addresses you need to create DNS records are associated with the private endpoint's network interface. You can obtain this information using the Azure CLI or from the portal:
+
+* Using the Azure CLI, run the [az network nic show][az-network-nic-show] command. For example commands, see [Get endpoint IP configuration](#get-endpoint-ip-configuration), earlier in this article.
+
+* In the portal, navigate to your private endpoint, and select **DNS configuration**.
+
+After creating DNS records, make sure that the registry FQDNs resolve properly to their respective private IP addresses.
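For example, a quick check with `nslookup` (the registry name and region here are placeholders) should return the private IP addresses you created records for:

```bash
# Placeholder names; substitute your registry name and replica regions.
nslookup myregistry.azurecr.io
nslookup myregistry.westus2.data.azurecr.io
```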
+ ## Clean up resources If you created all the Azure resources in the same resource group and no longer need them, you can optionally delete the resources by using a single [az group delete](/cli/azure/group) command:
To clean up your resources in the portal, navigate to your resource group. Once
## Next steps * To learn more about Private Link, see the [Azure Private Link](../private-link/private-link-overview.md) documentation.+ * If you need to set up registry access rules from behind a client firewall, see [Configure rules to access an Azure container registry behind a firewall](container-registry-firewall-access-rules.md).
+* [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md)
+ <!-- LINKS - external --> [docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms [docker-login]: https://docs.docker.com/engine/reference/commandline/login/
To clean up your resources in the portal, navigate to your resource group. Once
[az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az-network-private-dns-link-vnet-create [az-network-private-dns-record-set-a-create]: /cli/azure/network/private-dns/record-set/a#az-network-private-dns-record-set-a-create [az-network-private-dns-record-set-a-add-record]: /cli/azure/network/private-dns/record-set/a#az-network-private-dns-record-set-a-add-record
-[az-resource-show]: /cli/azure/resource#az-resource-show
+[az-network-nic-show]: /cli/azure/network/nic#az-network-nic-show
[quickstart-portal]: container-registry-get-started-portal.md [quickstart-cli]: container-registry-get-started-azure-cli.md [azure-portal]: https://portal.azure.com
container-registry Container Registry Transfer Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-transfer-images.md
az resource delete \
* Storage blob in target storage account might not be deleted after successful import run. Confirm that the DeleteBlobOnSuccess option is set in the import run and the SAS token has sufficient permissions. * Storage blob not created or deleted. Confirm that container specified in export or import run exists, or specified storage blob exists for manual import run. * **AzCopy issues**
- * See [Troubleshoot AzCopy issues](../storage/common/storage-use-azcopy-configure.md#troubleshoot-issues).
+ * See [Troubleshoot AzCopy issues](../storage/common/storage-use-azcopy-configure.md).
* **Artifacts transfer problems** * Not all artifacts, or none, are transferred. Confirm spelling of artifacts in export run, and name of blob in export and import runs. Confirm you are transferring a maximum of 50 artifacts. * Pipeline run might not have completed. An export or import run can take some time.
container-registry Container Registry Troubleshoot Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-access.md
May include one or more of the following:
* Unable to add or modify virtual network settings or public access rules * ACR Tasks is unable to push or pull images * Azure Security Center can't scan images in registry, or scan results don't appear in Azure Security Center
+* You receive the error `host is not reachable` when attempting to access a registry configured with a private endpoint.
## Causes
Related links:
Confirm that the virtual network is configured with either a private endpoint for Private Link or a service endpoint (preview). Currently an Azure Bastion endpoint isn't supported.
+If a private endpoint is configured, confirm that DNS resolves the registry's public FQDN such as *myregistry.azurecr.io* to the registry's private IP address. Use a network utility such as `dig` or `nslookup` for DNS lookup. Ensure that [DNS records are configured](container-registry-private-link.md#dns-configuration-options) for the registry FQDN and for each of the data endpoint FQDNs.
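For example, assuming a registry named *myregistry* in the *westus2* region, a lookup from a VM in the virtual network might look like the following sketch; both queries should return private IP addresses from the endpoint's subnet:

```bash
# Placeholder registry name and region; substitute your own values.
dig +short myregistry.azurecr.io
dig +short myregistry.westus2.data.azurecr.io
```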
+ Review NSG rules and service tags used to limit traffic from other resources in the network to the registry. If a service endpoint to the registry is configured, confirm that a network rule is added to the registry that allows access from that network subnet. The service endpoint only supports access from virtual machines and AKS clusters in the network.
If you want to restrict registry access using a virtual network in a different A
If Azure Firewall or a similar solution is configured in the network, check that egress traffic from other resources such as an AKS cluster is enabled to reach the registry endpoints.
-If a private endpoint is configured, confirm that DNS resolves the registry's public FQDN such as *myregistry.azurecr.io* to the registry's private IP address. Use a network utility such as `dig` or `nslookup` for DNS lookup.
- Related links: * [Connect privately to an Azure container registry using Azure Private Link](container-registry-private-link.md)
+* [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md)
* [Restrict access to a container registry using a service endpoint in an Azure virtual network](container-registry-vnet.md) * [Required outbound network rules and FQDNs for AKS clusters](../aks/limit-egress-traffic.md#required-outbound-network-rules-and-fqdns-for-aks-clusters) * [Kubernetes: Debugging DNS resolution](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/)
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
container-registry Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/zone-redundancy.md Binary files differ
cosmos-db Analytical Store Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-private-endpoints.md
-# Configure private endpoints for Azure Cosmos DB analytical store
+# Configure Azure Private Link for Azure Cosmos DB analytical store
[!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
-In this article, you will learn how to set up managed private endpoints for Azure Cosmos DB analytical store. If you are using the transactional store, see [Private endpoints for the transactional store](how-to-configure-private-endpoints.md) article. Using managed private endpoints, you can restrict network access of Azure Cosmos DB analytical store, to Azure Synapse managed virtual network. Managed private endpoints establish a private link to your analytical store.
+In this article, you will learn how to set up managed private endpoints for Azure Cosmos DB analytical store. If you are using the transactional store, see the [Private endpoints for the transactional store](how-to-configure-private-endpoints.md) article. Using [managed private endpoints](../synapse-analytics/security/synapse-workspace-managed-private-endpoints.md), you can restrict network access of your Azure Cosmos DB analytical store to a managed virtual network associated with your Azure Synapse workspace. Managed private endpoints establish a private link to your analytical store.
-## Enable private endpoint for the analytical store
+## Enable a private endpoint for the analytical store
-### Set up an Azure Synapse Analytics workspace with a managed virtual network
+### Set up Azure Synapse Analytics workspace with a managed virtual network
[Create a workspace in Azure Synapse Analytics with data-exfiltration enabled.](../synapse-analytics/security/how-to-create-a-workspace-with-data-exfiltration-protection.md) With [data-exfiltration protection](../synapse-analytics/security/workspace-data-exfiltration-protection.md), you can ensure that malicious users cannot copy or transfer data from your Azure resources to locations outside your organization's scope.
cosmos-db Create Sql Api Dotnet V4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-dotnet-v4.md Binary files differ
cosmos-db Create Sql Api Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-dotnet.md Binary files differ
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-manage-consistency.md Binary files differ
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/managed-identity-based-authentication.md Binary files differ
cosmos-db Mongodb Feature Support 40 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-feature-support-40.md
Azure Cosmos DB's API for MongoDB supports the following database commands:
| $limit | Yes | | $listLocalSessions | No | | $listSessions | No |
-| $lookup | Yes |
+| $lookup | Partial |
| $match | Yes | | $out | Yes | | $project | Yes |
Azure Cosmos DB's API for MongoDB supports the following database commands:
| $sortByCount | Yes | | $unwind | Yes |
+> [!NOTE]
+> `$lookup` does not yet support the [uncorrelated subqueries](https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-uncorrelated-sub-queries) feature introduced in server version 3.6. You will receive an error with a message containing `let is not supported` if you attempt to use the `$lookup` operator with `let` and `pipeline` fields.
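As an illustration only (not part of the official support table), the following hedged PyMongo sketch shows the correlated form of `$lookup` that is supported; the connection string and collection names are placeholders.

```python
from pymongo import MongoClient

# Hypothetical connection string and collection names, for illustration.
client = MongoClient("<your-cosmosdb-mongodb-connection-string>")
db = client["retail"]

# Supported: the classic correlated form of $lookup using localField/foreignField.
pipeline = [
    {"$lookup": {
        "from": "orders",
        "localField": "customerId",
        "foreignField": "customerId",
        "as": "customerOrders",
    }}
]
results = list(db["customers"].aggregate(pipeline))

# Not supported on the API for MongoDB 4.0: the uncorrelated-subquery form of
# $lookup that uses "let" and "pipeline"; per the note above, the server
# returns an error containing "let is not supported".
```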
+ ### Boolean expressions | Command | Supported |
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
cosmos-db Synapse Link Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-frequently-asked-questions.md
Yes, the analytical store can be enabled on containers with autoscale provisione
Azure Cosmos DB guarantees performance isolation between the transactional and analytical workloads. Enabling the analytical store on a container will not impact the RU/s provisioned on the Azure Cosmos DB transactional store. The transactions (read & write) and storage costs for the analytical store will be charged separately. See the [pricing for Azure Cosmos DB analytical store](analytical-store-introduction.md#analytical-store-pricing) for more details.
-### Can I restrict access to Azure Cosmos DB analytical store?
+### Can I restrict network access to Azure Cosmos DB analytical store?
-Yes you can configure a [managed private endpoint](analytical-store-private-endpoints.md) and restrict network access of analytical store to Azure Synapse managed virtual network. Managed private endpoints establish a private link to your analytical store. This private endpoint will also restrict write access to transactional store, among other Azure data services.
+Yes, you can configure a [managed private endpoint](analytical-store-private-endpoints.md) and restrict network access of the analytical store to the Azure Synapse managed virtual network. Managed private endpoints establish a private link to your analytical store.
-You can add both transactional store and analytical store private endpoints to the same Azure Cosmos DB account in an Azure Synapse Analytics workspace. If you only want to run analytical queries, you may only want to map the analytical private endpoint.
+You can add both transactional store and analytical store private endpoints to the same Azure Cosmos DB account in an Azure Synapse Analytics workspace. If you only want to run analytical queries, you may only want to enable the analytical store private endpoint in the Synapse Analytics workspace.
### Can I use customer-managed keys with the Azure Cosmos DB analytical store?
-You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. Using customer-managed keys with the Azure Cosmos DB analytical store currently requires additional configuration on your account. Please contact the [Azure Cosmos DB team](mailto:azurecosmosdbcmk@service.microsoft.com) for details.
+You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner.
+To use customer-managed keys with the analytical store, you need to use your Azure Cosmos DB account's system-assigned managed identity in your Azure Key Vault access policy, as described in the [using managed identity](how-to-setup-cmk.md#using-managed-identity) section of the customer-managed keys article. You should then be able to enable the analytical store on your account.
### Are delete and update operations on the transactional store reflected in the analytical store?
None. You will only be charged when you create an analytical store enabled conta
Authentication with the analytical store is the same as a transactional store. For a given database, you can authenticate with the primary or read-only key. You can leverage linked service in Azure Synapse Studio to prevent pasting the Azure Cosmos DB keys in the Spark notebooks. Access to this Linked Service is available for everyone who has access to the workspace.
+When using Synapse serverless SQL pools, you can query the Azure Cosmos DB analytical store by pre-creating SQL credentials that store the account keys and referencing them in the OPENROWSET function. To learn more, see the [Query with a serverless SQL pool in Azure Synapse Link](../synapse-analytics/sql/query-cosmos-db-analytical-store.md) article.
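As a hedged sketch of that pattern, the example below submits such a query from Python with pyodbc against the workspace's serverless SQL endpoint. The endpoint, database, container, and credential names are placeholders, and the exact `OPENROWSET` arguments and credential setup should be taken from the linked article.

```python
import pyodbc

# Placeholder names; the serverless SQL endpoint, database, container, and the
# pre-created server-scoped credential come from your own workspace setup.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=<your-workspace>-ondemand.sql.azuresynapse.net;"
    "Database=<your-serverless-database>;"
    "Authentication=ActiveDirectoryInteractive;"
    "UID=<your-user>@<your-tenant>"
)

# OPENROWSET references the pre-created credential by name instead of
# embedding the Cosmos DB account key in the script.
query = """
SELECT TOP 10 *
FROM OPENROWSET(
    PROVIDER = 'CosmosDB',
    CONNECTION = 'Account=<cosmos-account>;Database=<database>',
    OBJECT = '<container>',
    SERVER_CREDENTIAL = '<credential-name>'
) AS rows;
"""

for row in conn.cursor().execute(query):
    print(row)
```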
+ ## Synapse run-times ### What are the currently supported Synapse run-times to access Azure Cosmos DB analytical store?
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link.md
This integration enables the following HTAP scenarios for different users:
For more information on Azure Synapse Analytics runtime support for Azure Cosmos DB, see [Azure Synapse Analytics for Cosmos DB support](../synapse-analytics/synapse-link/concept-synapse-link-cosmos-db-support.md).
-## Security
-
-Synapse Link enables you to run near real-time analytics over your mission-critical data in Azure Cosmos DB. It is vital to make sure that critical business data is stored securely across both transactional and analytical stores. Azure Synapse Link for Azure Cosmos DB is designed to help meet these security requirements through the following features:
-
-* **Network isolation using private endpoints** - You can control network access to the data in the transactional and analytical stores independently. Network isolation is done using separate managed private endpoints for each store, within managed virtual networks in Azure Synapse workspaces. To learn more, see how to [Configure private endpoints for analytical store](analytical-store-private-endpoints.md) article.
-
-* **Data encryption with customer-managed keys** - You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. To learn more, see how to [Configure customer-managed keys](how-to-setup-cmk.md) article.
-
-* **Secure key management** - Accessing the data in analytical store from Synapse Spark and Synapse serverless SQL pools requires managing Azure Cosmos DB keys within Synapse Analytics workspaces. Instead of using the Azure Cosmos DB account keys inline in Spark jobs or SQL scripts, Azure Synapse Link provides more secure capabilities.
-
- * When using Synapse serverless SQL pools, you can query the Azure Cosmos DB analytical store by pre-creating SQL credentials storing the account keys and referencing these in the `OPENROWSET` function. To learn more, see [Query with a serverless SQL pool in Azure Synapse Link](../synapse-analytics/sql/query-cosmos-db-analytical-store.md) article.
-
- * When using Synapse Spark, you can store the account keys in linked service objects pointing to an Azure Cosmos DB database and reference this in the Spark configuration at runtime. To learn more, see [Copy data into a dedicated SQL pool using Apache Spark](../synapse-analytics/synapse-link/how-to-copy-to-sql-pool.md) article.
- ## When to use Azure Synapse Link for Azure Cosmos DB? Synapse Link is recommended in the following cases:
Synapse Link is not recommended if you are looking for traditional data warehous
* Accessing the Azure Cosmos DB analytics store with Synapse SQL provisioned is currently not available.
+## Security
+
+Synapse Link enables you to run near real-time analytics over your mission-critical data in Azure Cosmos DB. It is vital to make sure that critical business data is stored securely across both transactional and analytical stores. Azure Synapse Link for Azure Cosmos DB is designed to help meet these security requirements through the following features:
+
+* **Network isolation using private endpoints** - You can control network access to the data in the transactional and analytical stores independently. Network isolation is done using separate managed private endpoints for each store, within managed virtual networks in Azure Synapse workspaces. To learn more, see the [Configure private endpoints for analytical store](analytical-store-private-endpoints.md) article.
+
+* **Data encryption with customer-managed keys** - You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. To learn more, see the [Configure customer-managed keys](how-to-setup-cmk.md) article.
+
+* **Secure key management** - Accessing the data in analytical store from Synapse Spark and Synapse serverless SQL pools requires managing Azure Cosmos DB keys within Synapse Analytics workspaces. Instead of using the Azure Cosmos DB account keys inline in Spark jobs or SQL scripts, Azure Synapse Link provides more secure capabilities.
+
+ * When using Synapse serverless SQL pools, you can query the Azure Cosmos DB analytical store by pre-creating SQL credentials that store the account keys and referencing them in the `OPENROWSET` function. To learn more, see the [Query with a serverless SQL pool in Azure Synapse Link](../synapse-analytics/sql/query-cosmos-db-analytical-store.md) article.
+
+ * When using Synapse Spark, you can store the account keys in linked service objects pointing to an Azure Cosmos DB database and reference them in the Spark configuration at runtime, as sketched after this list. To learn more, see the [Copy data into a dedicated SQL pool using Apache Spark](../synapse-analytics/synapse-link/how-to-copy-to-sql-pool.md) article.
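As a rough sketch of that Spark path, assuming a Synapse notebook (where `spark` is the pre-defined session) and an existing linked service named `CosmosDbLinkedService`, reading the analytical store might look like this; the container name is a placeholder.

```python
# Runs in a Synapse Spark notebook; "spark" is the pre-defined SparkSession.
# Linked service and container names are placeholders for your own setup.
df = (spark.read
      .format("cosmos.olap")
      .option("spark.synapse.linkedService", "CosmosDbLinkedService")
      .option("spark.cosmos.container", "sales")
      .load())

# The analytical store schema is inferred automatically.
df.printSchema()
print(df.count())
```

Because the account key lives in the linked service, the notebook itself never contains the Cosmos DB credentials.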
++ ## Pricing The billing model of Azure Synapse Link includes the costs incurred by using the Azure Cosmos DB analytical store and the Synapse runtime. To learn more, see the [Azure Cosmos DB analytical store pricing](analytical-store-introduction.md#analytical-store-pricing) and [Azure Synapse Analytics pricing](https://azure.microsoft.com/pricing/details/synapse-analytics/) articles.
cosmos-db Tutorial Develop Mongodb Nodejs Part4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-develop-mongodb-nodejs-part4.md Binary files differ
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/quick-acm-cost-analysis.md Binary files differ
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/tutorial-export-acm-data.md Binary files differ
cost-management-billing Change Azure Account Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/change-azure-account-profile.md
Title: Change contact information for an Azure billing account description: Describes how to change the contact information of your Azure billing account--++ tags: billing Previously updated : 10/26/2020 Last updated : 04/08/2021
If you want to update your Azure Active Directory user profile information, only
1. Enter the new address and then select **Save**. ![Screenshot that shows updating the address](./media/change-azure-account-profile/update-bill-to-save-mca.png)
+## Update a PO number
+
+By default, an invoice for a billing profile doesn't have an associated PO number. After you add a PO number for a billing profile, it appears on invoices for the billing profile.
+
+To add or change the PO number for a billing profile, use the following steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for **Cost Management + Billing** and then select **Billing scopes**.
+1. Select your billing scope.
+1. In the left menu under **Billing**, select **Billing profiles**.
+1. Select the appropriate billing profile.
+1. In the left menu under **Settings**, select **Properties**.
+1. Select **Update PO number**.
+1. Enter a PO number and then select **Update**.
+ ## Service and marketing emails You're prompted in the Azure portal to verify or update your email address every 90 days. Microsoft sends emails to this email address with Azure account-related information for:
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/mca-request-billing-ownership.md
Title: Get billing ownership of Azure subscriptions
-description: Learn how to request billing ownership of Azure subscriptions from other users.
+ Title: Transfer Azure subscription billing ownership for a Microsoft Customer Agreement
+description: Learn how to transfer billing ownership of Azure subscriptions.
tags: billing Previously updated : 12/09/2020 Last updated : 04/08/2021
-# Get billing ownership of Azure subscriptions from other accounts
+# Transfer Azure subscription billing ownership for a Microsoft Customer Agreement
You might want to take over ownership of Azure subscriptions if the existing billing owner is leaving your organization, or when you want to pay for the subscriptions through your billing account. Taking ownership transfers subscription billing responsibilities to your account.
cost-management-billing Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md
+
+ Title: Manage tenants in your Microsoft Customer Agreement billing account - Azure
+description: The article helps you understand and manage tenants associated with your Microsoft Customer Agreement billing account.
+
+tags: billing
+++ Last updated : 04/06/2021++++
+# Manage tenants in your Microsoft Customer Agreement billing account
+
+This article helps you understand and manage tenants associated with your Microsoft Customer Agreement billing account. Use the information to manage tenants, transfer subscriptions, and administer billing ownership while you ensure secure access to your billing environment.
+
+## What's a tenant?
+
+A tenant is a digital representation of your organization and is primarily associated with a domain, like Microsoft.com. It's an environment managed through Azure Active Directory that enables you to assign users permissions to manage Azure resources and billing.
+
+Each tenant is distinct and separate from other tenants, yet you can allow guest users from other tenants to access your tenant to track your costs and manage billing.
+
+## How tenants and subscriptions relate to your billing account
+
+You use your Microsoft Customer Agreement (billing account) to track costs and manage billing. Each billing account has at least one billing profile. The billing profile allows you to manage your invoice and payment method. Each billing profile includes one invoice section, by default. You can create more invoice sections to group, track, and manage costs at a more granular level if needed.
+
+- Your billing account is associated with a single tenant. This means that only users who are part of the tenant can access your billing account.
+- When you create a new Azure subscription for your billing account, it's always created in your billing account tenant. However, you can move subscriptions to other tenants. You can also link existing subscriptions from other tenants to your billing account. This lets you centrally manage billing through one tenant while keeping resources and subscriptions in other tenants based on your needs.
+
+The following diagram shows how billing accounts and subscriptions are linked to tenants. The Contoso MCA billing account is associated with Tenant 1, while the Contoso PAYG account is associated with Tenant 2. If Contoso wants to pay for their PAYG subscription through their MCA billing account, they can use a billing ownership transfer to link the subscription to their MCA billing account. The subscription and its resources will still be associated with Tenant 2, but they're paid for using the MCA billing account.
++
+## Manage subscriptions under multiple tenants in a single Microsoft Customer Agreement
+
+Billing owners can create subscriptions when they have the [appropriate permissions](../manage/understand-mca-roles.md#subscription-billing-roles-and-tasks) to the billing account. By default, any new subscriptions created under the Microsoft Customer Agreement are in the Microsoft Customer Agreement tenant.
+
+- You can link subscriptions from other tenants to your Microsoft Customer Agreement billing account. Taking billing ownership of a subscription only changes the invoicing arrangement. It doesn't affect the service tenant or Azure RBAC roles.
+- To change the subscription owner in the service tenant, you must transfer the [subscription to a different Azure Active Directory directory](../../role-based-access-control/transfer-subscription.md).
+
+## Add guest users to your Microsoft Customer Agreement tenant
+
+Users who are added to your Microsoft Customer Agreement billing tenant to manage billing responsibilities from a different tenant must be invited as guests.
+
+To invite someone as a guest, the user must have an existing email address with a domain that's different from your Azure Active Directory (AD) domain. Azure AD sends the guest user an email with a link for authentication.
++
+When a user is added to the Microsoft Customer Agreement tenant, they must [accept the invitation](../../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md#accept-the-invitation).
+
+When they select the **Accept invitation** link, they're prompted to authenticate with Azure.
++
+Then they select **Accept**.
++
+After they accept, they can [view the Microsoft Customer Agreement billing account under Cost Management + Billing](../understand/mca-overview.md#check-access-to-a-microsoft-customer-agreement).
++
+Authorization to invite guest users is controlled by your Azure AD settings. The value of the settings is shown under **Settings** on the **Organizational relationships** page. Ensure that the setting is selected; otherwise, the invitation fails. For more information, see [Restrict guest user access permissions](../../active-directory/enterprise-users/users-restrict-guest-permissions.md).
++
+> [!IMPORTANT]
+> Guest users get access to the Microsoft Customer Agreement tenant, which can potentially pose a security concern. For more information, see [Learn how to restrict guest users' default permissions](../../active-directory/fundamentals/users-default-permissions.md#restrict-member-users-default-permissions).
+
+## Manage multiple Microsoft cloud services under an Azure AD tenant
+
+You can manage multiple cloud services for your organization under a single Azure AD tenant. User accounts for all of Microsoft's cloud offerings are stored in the Azure AD tenant, which contains user accounts and groups. The following diagram shows an example of an organization with multiple services using a common Azure AD tenant containing accounts. Each service has its own portal, in blue text, where users manage their services.
++
+## Next steps
+
+Read the following articles to learn how to administer flexible billing ownership and ensure secure access to your Microsoft Customer Agreement.
+
+- [How to set up a tenant](../../active-directory/develop/quickstart-create-new-tenant.md)
+- [Azure built-in roles](../../role-based-access-control/built-in-roles.md)
+- [Transfer an Azure subscription to a different Azure AD directory](../../role-based-access-control/transfer-subscription.md)
+- [Restrict guest access permissions (preview) in Azure Active Directory](../../active-directory/enterprise-users/users-restrict-guest-permissions.md)
+- [Add guest users to your directory in the Azure portal](../../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md#accept-the-invitation)
+- [What are the default user permissions in Azure Active Directory?](../../active-directory/fundamentals/users-default-permissions.md)
+- [What is Azure Active Directory?](../../active-directory/fundamentals/active-directory-whatis.md)
cost-management-billing Microsoft Customer Agreement Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/microsoft-customer-agreement/microsoft-customer-agreement-get-started.md
+
+ Title: Key next steps after accepting your Microsoft Customer Agreement - Azure
+description: This article helps you get started as you begin to manage Azure billing and subscriptions under your new Microsoft Customer Agreement.
+
+tags: billing
+++ Last updated : 04/06/2021++++
+# Key next steps after accepting your Microsoft Customer Agreement
+
+This article helps you get started as you begin to manage Azure billing and subscriptions under your new Microsoft Customer Agreement.
+
+Some of the benefits under the agreement include:
+
+- Flexibility to choose how you want to pay for your Azure consumption.
+- Free tools to help you understand and optimize your costs.
+- A single place to manage your Azure purchases at Azure.com.
+
+## How billing works under the agreement
+
+When you or your organization signed the Microsoft Customer Agreement, a billing account was automatically created. You use your Microsoft Customer Agreement (billing account) to track costs and manage billing. By default, the user who accepted the Microsoft Customer Agreement becomes the owner of the billing account. They have permission to manage billing for the account. The user can add other users, who also have permission to view and manage the billing account.
+
+- [Get started with your Microsoft Azure billing account](../understand/mca-overview.md).
+- [Organize your costs](https://www.youtube.com/watch?v=7RxTfShGHwU) and [customize billing to meet your needs](../manage/mca-section-invoice.md).
+
+## Start building your solutions in Azure
+
+When you move existing subscriptions to your Microsoft Customer Agreement billing profile, service isn't changed and there's no service downtime.
+
+If you're a new customer, Azure automatically creates a default subscription for you. You can use the subscription to create resources and build your solutions. When you have existing pay-as-you-go subscriptions, you can link your subscriptions to the new MCA billing account by using billing ownership transfer.
+
+- [Move your existing pay-as-you-go subscriptions](../manage/mca-request-billing-ownership.md).
+- [Move your existing EA subscriptions](../manage/mca-setup-account.md).
+- No previous Azure subscriptions? [Create an additional Azure subscription](../manage/create-subscription.md).
+
+After your subscriptions are moved, access to the subscriptions is unchanged for your users. All consumption against the subscriptions is invoiced under your new contract.
+
+When you start consuming Azure services, your new invoice under the Microsoft Customer Agreement is generated on the fifth day of every month. Ensure that you [update your PO number in your billing profile](../manage/change-azure-account-profile.md). Your default payment method is wire transfer. To learn how to set up your payment method to avoid delays, see [How to pay for your subscription](../understand/pay-bill.md#wire-bank-details). The article explains how to get the required bank payment information.
+
+## Confirm payment details
+
+When you move from a pay-as-you-go or an enterprise agreement to a Microsoft Customer Agreement, your payment method changes. The following table compares your previous payment method against your new one.
+
+| MCA purchase method | Previous payment method - Credit card | Previous payment method - Invoice | New payment method under MCA - Credit card | New payment method under MCA - Invoice |
+| --- | --- | --- | --- | --- |
+| Through a Microsoft representative | | ✔ | ✔ <sup>4</sup> | ✔ <sup>2</sup> |
+| Azure website | ✔ | ✔ <sup>1</sup> | ✔ | ✔ <sup>3</sup> |
+
+<sup>1</sup> By request.
+
+<sup>2</sup> You continue to pay by invoice/wire transfer under the MCA but will need to send your payments to a different bank account. For information about where to send your payment, see [Pay your bill](../understand/pay-bill.md#wire-bank-details) after you select your country in the list.
+
+<sup>3</sup> For more information, see [Pay for your Azure subscription by invoice](../manage/pay-by-invoice.md).
+
+<sup>4</sup> For more information, see [Pay your bill for Microsoft Azure](../understand/pay-bill.md#pay-now-in-the-azure-portal).
+
+## Complete outstanding payments
+
+Make sure that you complete any outstanding payments for your older [pay-as-you-go](../understand/download-azure-invoice.md) or [EA](../manage/ea-portal-enrollment-invoices.md) contract subscription invoices. For more information, see [Understand your Microsoft Customer Agreement Invoice in Azure](../understand/mca-understand-your-invoice.md#billing-period).
+
+## Update your tax ID
+
+Ensure you update your tax ID after moving your subscriptions. The tax ID is used for tax exemption calculations and appears on your invoice.
+
+**To update billing account information**
+
+1. Sign in to the [Microsoft Store for Business](https://businessstore.microsoft.com/) or [Microsoft Store for Education](https://educationstore.microsoft.com/).
+1. Select **Manage**, and then select **Billing accounts**.
+1. On **Overview**, select **Edit billing account information**.
+1. Make your updates, and then select **Save**.
+
+[Learn more about how to update your billing account settings](/microsoft-store/update-microsoft-store-for-business-account-settings).
+
+## Need help? Contact us
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+- [Learn about the charges on your invoice](https://www.youtube.com/watch?v=e2LGZZ7GubA)
+- [Take a step-by-step invoice tutorial](../understand/review-customer-agreement-bill.md)
cost-management-billing Troubleshoot Subscription Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/microsoft-customer-agreement/troubleshoot-subscription-access.md
+
+ Title: Troubleshoot subscription access after you sign a Microsoft Customer Agreement - Azure
+description: This article helps you troubleshoot subscription access after you sign a new Microsoft Customer Agreement.
+
+tags: billing
+++ Last updated : 04/07/2021++++
+# Troubleshoot subscription access after you sign a Microsoft Customer Agreement
+
+This article helps you troubleshoot subscription access after you sign a new Microsoft Customer Agreement. Use the following information to troubleshoot and resolve common problems.
+
+## Troubleshoot subscription access
+
+Make sure that your billing account type is Microsoft Customer Agreement. You can see your billing account type in the Azure portal in **Cost Management + Billing**. For more information, see [Check if you have access to your MCA](../understand/mca-understand-your-usage.md#check-access-to-a-microsoft-customer-agreement).
+
+## Troubleshoot viewing your billing account
+
+If you have trouble viewing your billing account and you have multiple tenants, try switching directories in the Azure portal.
+
+1. In the upper right of the Azure portal, select your Azure account.
+1. Select **Switch directory**.
+ :::image type="content" source="./media/troubleshoot-subscription-access/switch-directory.png" alt-text="Screenshot showing the Switch directory option." lightbox="./media/troubleshoot-subscription-access/switch-directory.png" :::
+1. In the **Directory + subscription** window, select the other directory to switch to it.
+ :::image type="content" source="./media/troubleshoot-subscription-access/select-directory.png" alt-text="Screenshot showing where to select another directory." lightbox="./media/troubleshoot-subscription-access/select-directory.png" :::
+
+## Troubleshoot account access
+
+You could have multiple accounts with Microsoft that use the same email address. You might have a personal account and a work account. The Microsoft Customer Agreement uses your work or school account. For more information, see [Ensure you sign in with your Work or school Account](https://support.microsoft.com/office/which-account-do-you-want-to-use-2b5bbd7a-7df6-4283-beff-8015e28eb7b9).
+
+- Sign in with your work or school account.
+ :::image type="content" source="./media/troubleshoot-subscription-access/two-accounts.png" alt-text="Screenshot showing work or school account selection." lightbox="./media/troubleshoot-subscription-access/two-accounts.png" :::
+
+## Next steps
+
+- Read the [Microsoft Customer Agreement documentation](https://docs.microsoft.com/azure/cost-management-billing/microsoft-customer-agreement/).
cost-management-billing Mca Understand Your Invoice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/mca-understand-your-invoice.md
tags: billing
Previously updated : 08/20/2020 Last updated : 04/08/2021
Watch the [Understand your Microsoft Customer Agreement invoice](https://www.you
You are invoiced on a monthly basis. You can find out which day of the month you receive invoices by checking *invoice date* under billing profile properties in the [Azure portal](https://portal.azure.com/). Charges that occur between the end of the billing period and the invoice date are included in the next month's invoice, since they are in the next billing period. The billing period start and end dates for each invoice are listed in the invoice PDF above **Billing Summary**.
+If you're migrating from an EA to a Microsoft Customer Agreement, you continue to receive invoices for your EA until the migration date. The new invoice for your Microsoft Customer Agreement is generated on the fifth day of the month after you migrate. The first invoice shows a partial charge from the migration date. Later invoices are generated every month and show all the charges for each month.
+
+### Changes for pay-as-you-go subscriptions
+
+When a subscription is transitioned, transferred, or canceled, the last invoice generated contains charges for the previous billing cycle and the new incomplete billing cycle.
+
+For example:
+
+Assume that your pay-as-you-go subscription billing cycle is from day 8 to day 7 of each month. The subscription was transferred to a Microsoft Customer Agreement on November 16. The last pay-as-you-go invoice has charges for October 8, 2020 through November 7, 2020. It also has the charges for the new partial billing cycle for the Microsoft Customer Agreement from November 8, 2020 through November 16, 2020. Here's an example image.
++ ## Invoice terms and descriptions The following sections list important terms that you see on your invoice and provide descriptions for each term.
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/pay-bill.md
tags: billing, past due, pay now, bill, invoice, pay
Previously updated : 01/13/2021 Last updated : 04/08/2021
This article applies to customers with a Microsoft Customer Agreement (MCA).
There are two ways to pay for your bill for Azure. You can pay with the default payment method of your billing profile or you can make a one-time payment called **Pay now**.
-If you signed up for Azure through a Microsoft representative, then your default payment method will always be set to *check or wire transfer*.
+If you signed up for Azure through a Microsoft representative, then your default payment method will always be set to *check or wire transfer*. Automatic credit card payment isn't an option if you signed up for Azure through a Microsoft representative. Instead, you can [pay with a credit card for individual invoices](#pay-now-in-the-azure-portal).
If you have Azure credits, they automatically apply to your invoice each billing period.
If the default payment method of your billing profile is check or wire transfer,
Alternatively, if your invoice is under the threshold amount for your currency, you can make a one-time payment in the Azure portal with a credit or debit card using **Pay now**. If your invoice amount exceeds the threshold, you can't pay your invoice with a credit or debit card. You'll find the threshold amount for your currency in the Azure portal after selecting **Pay now**.
+#### Bank details used to send wire transfer payments
+<a name="wire-bank-details"></a>
+
+If your default payment method is wire transfer, check your invoice for payment instructions. Find payment instructions for your country or region in the following list.
+
+> [!div class="op_single_selector"]
+> - **Choose your country or region**
+> - [Afghanistan](/legal/pay/afghanistan)
+> - [Albania](/legal/pay/albania)
+> - [Algeria](/legal/pay/algeria)
+> - [Angola](/legal/pay/angola)
+> - [Argentina](/legal/pay/argentina)
+> - [Armenia](/legal/pay/armenia)
+> - [Australia](/legal/pay/australia)
+> - [Austria](/legal/pay/austria)
+> - [Azerbaijan](/legal/pay/azerbaijan)
+> - [Bahamas](/legal/pay/bahamas)
+> - [Bahrain](/legal/pay/bahrain)
+> - [Bangladesh](/legal/pay/bangladesh)
+> - [Barbados](/legal/pay/barbados)
+> - [Belarus](/legal/pay/belarus)
+> - [Belgium](/legal/pay/belgium)
+> - [Belize](/legal/pay/belize)
+> - [Bermuda](/legal/pay/bermuda)
+> - [Bolivia](/legal/pay/bolivia)
+> - [Bosnia and Herzegovina](/legal/pay/bosnia-and-herzegovina)
+> - [Botswana](/legal/pay/botswana)
+> - [Brazil](/legal/pay/brazil)
+> - [Brunei](/legal/pay/brunei)
+> - [Bulgaria](/legal/pay/bulgaria)
+> - [Cameroon](/legal/pay/cameroon)
+> - [Canada](/legal/pay/canada)
+> - [Cape Verde](/legal/pay/cape-verde)
+> - [Cayman Islands](/legal/pay/cayman-islands)
+> - [Chile](/legal/pay/chile)
+> - [China (PRC)](/legal/pay/china-prc)
+> - [Colombia](/legal/pay/colombia)
+> - [Costa Rica](/legal/pay/costa-rica)
+> - [Côte d'Ivoire](/legal/pay/cote-divoire)
+> - [Croatia](/legal/pay/croatia)
+> - [Curacao](/legal/pay/curacao)
+> - [Cyprus](/legal/pay/cyprus)
+> - [Czech Republic](/legal/pay/czech-republic)
+> - [Democratic Republic of Congo](/legal/pay/democratic-republic-of-congo)
+> - [Denmark](/legal/pay/denmark)
+> - [Dominican Republic](/legal/pay/dominican-republic)
+> - [Ecuador](/legal/pay/ecuador)
+> - [Egypt](/legal/pay/egypt)
+> - [El Salvador](/legal/pay/el-salvador)
+> - [Estonia](/legal/pay/estonia)
+> - [Ethiopia](/legal/pay/ethiopia)
+> - [Faroe Islands](/legal/pay/faroe-islands)
+> - [Fiji](/legal/pay/fiji)
+> - [Finland](/legal/pay/finland)
+> - [France](/legal/pay/france)
+> - [French Guiana](/legal/pay/french-guiana)
+> - [Georgia](/legal/pay/georgia)
+> - [Germany](/legal/pay/germany)
+> - [Ghana](/legal/pay/ghana)
+> - [Greece](/legal/pay/greece)
+> - [Grenada](/legal/pay/grenada)
+> - [Guadeloupe](/legal/pay/guadeloupe)
+> - [Guam](/legal/pay/guam)
+> - [Guatemala](/legal/pay/guatemala)
+> - [Guyana](/legal/pay/guyana)
+> - [Haiti](/legal/pay/haiti)
+> - [Honduras](/legal/pay/honduras)
+> - [Hong Kong](/legal/pay/hong-kong)
+> - [Hungary](/legal/pay/hungary)
+> - [Iceland](/legal/pay/iceland)
+> - [India](/legal/pay/india)
+> - [Indonesia](/legal/pay/indonesia)
+> - [Iraq](/legal/pay/iraq)
+> - [Ireland](/legal/pay/ireland)
+> - [Israel](/legal/pay/israel)
+> - [Italy](/legal/pay/italy)
+> - [Jamaica](/legal/pay/jamaica)
+> - [Japan](/legal/pay/japan)
+> - [Jordan](/legal/pay/jordan)
+> - [Kazakhstan](/legal/pay/kazakhstan)
+> - [Kenya](/legal/pay/kenya)
+> - [Korea](/legal/pay/korea)
+> - [Kuwait](/legal/pay/kuwait)
+> - [Kyrgyzstan](/legal/pay/kyrgyzstan)
+> - [Latvia](/legal/pay/latvia)
+> - [Lebanon](/legal/pay/lebanon)
+> - [Libya](/legal/pay/libya)
+> - [Liechtenstein](/legal/pay/liechtenstein)
+> - [Lithuania](/legal/pay/lithuania)
+> - [Luxembourg](/legal/pay/luxembourg)
+> - [Macao](/legal/pay/macao)
+> - [Macedonia, Former Yugoslav Republic of](/legal/pay/macedonia)
+> - [Malaysia](/legal/pay/malaysia)
+> - [Malta](/legal/pay/malta)
+> - [Mauritius](/legal/pay/mauritius)
+> - [Mexico](/legal/pay/mexico)
+> - [Moldova](/legal/pay/moldova)
+> - [Monaco](/legal/pay/monaco)
+> - [Mongolia](/legal/pay/mongolia)
+> - [Montenegro](/legal/pay/montenegro)
+> - [Morocco](/legal/pay/morocco)
+> - [Namibia](/legal/pay/namibia)
+> - [Nepal](/legal/pay/nepal)
+> - [Netherlands](/legal/pay/netherlands)
+> - [New Zealand](/legal/pay/new-zealand)
+> - [Nicaragua](/legal/pay/nicaragua)
+> - [Nigeria](/legal/pay/nigeria)
+> - [Norway](/legal/pay/norway)
+> - [Oman](/legal/pay/oman)
+> - [Pakistan](/legal/pay/pakistan)
+> - [Palestinian Authority](/legal/pay/palestinian-authority)
+> - [Panama](/legal/pay/panama)
+> - [Paraguay](/legal/pay/paraguay)
+> - [Peru](/legal/pay/peru)
+> - [Philippines](/legal/pay/philippines)
+> - [Poland](/legal/pay/poland)
+> - [Portugal](/legal/pay/portugal)
+> - [Puerto Rico](/legal/pay/puerto-rico)
+> - [Qatar](/legal/pay/qatar)
+> - [Romania](/legal/pay/romania)
+> - [Russia](/legal/pay/russia)
+> - [Rwanda](/legal/pay/rwanda)
+> - [Saint Kitts and Nevis](/legal/pay/saint-kitts-and-nevis)
+> - [Saint Lucia](/legal/pay/saint-lucia)
+> - [Saint Vincent and the Grenadines](/legal/pay/saint-vincent-and-the-grenadines)
+> - [Saudi Arabia](/legal/pay/saudi-arabia)
+> - [Senegal](/legal/pay/senegal)
+> - [Serbia](/legal/pay/serbia)
+> - [Singapore](/legal/pay/singapore)
+> - [Slovakia](/legal/pay/slovakia)
+> - [Slovenia](/legal/pay/slovenia)
+> - [South Africa](/legal/pay/south-africa)
+> - [Spain](/legal/pay/spain)
+> - [Sri Lanka](/legal/pay/sri-lanka)
+> - [Suriname](/legal/pay/suriname)
+> - [Sweden](/legal/pay/sweden)
+> - [Switzerland](/legal/pay/switzerland)
+> - [Taiwan](/legal/pay/taiwan)
+> - [Tajikistan](/legal/pay/tajikistan)
+> - [Tanzania](/legal/pay/tanzania)
+> - [Thailand](/legal/pay/thailand)
+> - [Trinidad and Tobago](/legal/pay/trinidad-and-tobago)
+> - [Turkmenistan](/legal/pay/turkmenistan)
+> - [Tunisia](/legal/pay/tunisia)
+> - [Turkey](/legal/pay/turkey)
+> - [Uganda](/legal/pay/uganda)
+> - [Ukraine](/legal/pay/ukraine)
+> - [United Arab Emirates](/legal/pay/united-arab-emirates)
+> - [United Kingdom](/legal/pay/united-kingdom)
+> - [United States](/legal/pay/united-states)
+> - [Uruguay](/legal/pay/uruguay)
+> - [Uzbekistan](/legal/pay/uzbekistan)
+> - [Venezuela](/legal/pay/venezuela)
+> - [Vietnam](/legal/pay/vietnam)
+> - [Virgin Islands, US](/legal/pay/virgin-islands)
+> - [Yemen](/legal/pay/yemen)
+> - [Zambia](/legal/pay/zambia)
+> - [Zimbabwe](/legal/pay/zimbabwe)
+ ## Pay now in the Azure portal To pay invoices in the Azure portal, you must have the correct [MCA permissions](../manage/understand-mca-roles.md) or be the Billing Account admin. The Billing Account admin is the user who originally signed up for the MCA account.
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-overview.md
Title: Mapping data flows
description: An overview of mapping data flows in Azure Data Factory -
The top bar contains actions that affect the whole data flow, like saving and va
View the [mapping data flow transformation overview](data-flow-transformation-overview.md) to get a list of available transformations.
+## Data flow data types
+
+* array
+* binary
+* boolean
+* complex
+* decimal
+* date
+* float
+* integer
+* long
+* map
+* short
+* string
+* timestamp
+ ## Data flow activity Mapping data flows are operationalized within ADF pipelines using the [data flow activity](control-flow-execute-data-flow-activity.md). All a user has to do is specify which integration runtime to use and pass in parameter values. For more information, learn about the [Azure integration runtime](concepts-integration-runtime.md#azure-integration-runtime).
data-factory How To Run Self Hosted Integration Runtime In Windows Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-run-self-hosted-integration-runtime-in-windows-container.md
Azure Data Factory is delivering official Windows container support for Self
## Prerequisites - [Windows container requirements](/virtualization/windowscontainers/deploy-containers/system-requirements) - Docker Version 2.3 and later -- Self-Hosted Integration Runtime Version 4.11.7512.1 and later
+- Self-Hosted Integration Runtime Version 5.2.7713.1 and later
## Get started 1. Install Docker and enable Windows Container 2. Download the source code from https://github.com/Azure/Azure-Data-Factory-Integration-Runtime-in-Windows-Container
docker build . -t "yourDockerImageName" 
``` 6. Run docker container: ```console
-docker run -d -e NODE_NAME="irNodeName" -e AUTH_KEY="IR_AUTHENTICATION_KEY" -e ENABLE_HA=true HA_PORT=8060 "yourDockerImageName"   
+docker run -d -e NODE_NAME="irNodeName" -e AUTH_KEY="IR_AUTHENTICATION_KEY" -e ENABLE_HA=true -e HA_PORT=8060 "yourDockerImageName"   
``` > [!NOTE] > AUTH_KEY is mandatory for this command. NODE_NAME, ENABLE_HA and HA_PORT are optional. If you don't set the value, the command will use default values. The default value of ENABLE_HA is false and HA_PORT is 8060.
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/managed-virtual-network-private-endpoint.md
New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${integrationRuntimeReso
## Limitations and known issues ### Supported Data Sources Below data sources are supported to connect through private link from ADF Managed Virtual Network.-- Azure Blob Storage-- Azure Table Storage-- Azure Files
+- Azure Blob Storage (not including Storage account V1)
+- Azure Table Storage (not including Storage account V1)
+- Azure Files (not including Storage account V1)
- Azure Data Lake Gen2 - Azure SQL Database (not including Azure SQL Managed Instance) - Azure Synapse Analytics
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/policy-reference.md
Previously updated : 03/31/2021 Last updated : 04/14/2021 # Azure Policy built-in definitions for Data Factory (Preview)
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/policy-reference.md
ms.devlang: na na Previously updated : 04/07/2021 Last updated : 04/14/2021
the link in the **Version** column to view the source on the
- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). - Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).-- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
dedicated-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/overview.md
After they're provisioned, HSM devices are connected directly to a customerΓÇÖs
### FIPS 140-2 Level-3 compliance
-Many organizations have stringent industry regulations that dictate that cryptographic keys must be stored in [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) validated HSMs. Azure Dedicated HSM and a new single-tenant offering, [Azure Key Vault Managed HSM (preview)](https://docs.microsoft.com/azure/key-vault/managed-hsm), help customers from various industry segments, such as financial services industry, government agencies, and others meet FIPS 140-2 Level-3 requirements. While MicrosoftΓÇÖs multi-tenant [Azure Key Vault](https://docs.microsoft.com/azure/key-vault) service currently uses FIPS 140-2 Level-2 validated HSMs.
+Many organizations have stringent industry regulations that dictate that cryptographic keys must be stored in [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) validated HSMs. Azure Dedicated HSM and a new single-tenant offering, [Azure Key Vault Managed HSM](https://docs.microsoft.com/azure/key-vault/managed-hsm), help customers from various industry segments, such as the financial services industry and government agencies, meet FIPS 140-2 Level-3 requirements, while Microsoft's multi-tenant [Azure Key Vault](https://docs.microsoft.com/azure/key-vault) service currently uses FIPS 140-2 Level-2 validated HSMs.
### Single-tenant devices
Azure Dedicated HSM is not a good fit for the following type of scenario: Micros
### It depends
-Whether Azure Dedicated HSM will work for you depends on a potentially complex mix of requirements and compromises that you can or cannot make. An example is the FIPS 140-2 Level 3 requirement. This requirement is common, and Azure Dedicated HSM and a new single-tenant offering, [Azure Key Vault Managed HSM (preview)](https://docs.microsoft.com/azure/key-vault/managed-hsm) are currently the only options for meeting it. If these mandated requirements aren't relevant, then often it's a choice between Azure Key Vault and Azure Dedicated HSM. Assess your requirements before making a decision.
+Whether Azure Dedicated HSM will work for you depends on a potentially complex mix of requirements and compromises that you can or cannot make. An example is the FIPS 140-2 Level 3 requirement. This requirement is common, and Azure Dedicated HSM and a new single-tenant offering, [Azure Key Vault Managed HSM](https://docs.microsoft.com/azure/key-vault/managed-hsm) are currently the only options for meeting it. If these mandated requirements aren't relevant, then often it's a choice between Azure Key Vault and Azure Dedicated HSM. Assess your requirements before making a decision.
Situations in which you will have to weigh your options include:
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/alert-engine-messages.md
Title: Alert types and descriptions description: Review Defender for IoT Alert descriptions.--- Last updated 4/8/2021 - # Alert types and descriptions
This article describes all of the alert types that may be generated from the De
## Policy engine alerts
-Policy engine alerts describe deviations from learned baseline network behavior.
+Policy engine alerts describe detected deviations from learned baseline behavior.
| Title | Description | Severity | |--|--|--|
Policy engine alerts describe deviations from learned baseline network behavior.
## Anomaly engine alerts
+Anomaly engine alerts describe detected anomalies in network activity.
+ | Title | Description | Severity | |--|--|--| | Abnormal Exception Pattern in Slave | An excessive number of errors were detected on a source device. This may be the result of an operational issue. | Minor |
Policy engine alerts describe deviations from learned baseline network behavior.
## Protocol violation engine alerts
+Protocol engine alerts describe detected deviations in the packet structure, or field values compared to protocol specifications.
+ | Title | Description | Severity | |--|--|--| | Excessive Malformed Packets In a Single Session | An abnormal number of malformed packets sent from the source device to the destination device. This might indicate erroneous communications, or an attempt to manipulate the targeted device. | Major |
Policy engine alerts describe deviations from learned baseline network behavior.
## Malware engine alerts
+Malware engine alerts describe detected malicious network activity.
+ | Title | Description| Severity | |--|--|--| | Connection Attempt to Known Malicious IP | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
Policy engine alerts describe deviations from learned baseline network behavior.
## Operational engine alerts
+Operational engine alerts describe detected operational incidents, or malfunctioning entities.
+ | Title | Description | Severity | |--|--|--| | An S7 Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
dev-spaces Install Dev Spaces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dev-spaces/how-to/install-dev-spaces.md
Last updated "07/24/2019" description: "Learn how to enable Azure Dev Spaces on an AKS cluster and install the client-side tools."+ keywords: "Docker, Kubernetes, Azure, AKS, Azure Kubernetes Service, containers, Helm, service mesh, service mesh routing, kubectl, k8s"
keywords: "Docker, Kubernetes, Azure, AKS, Azure Kubernetes Service, containers,
This article shows you several ways to enable Azure Dev Spaces on an AKS cluster as well as install the client-side tools.
-## Enable Azure Dev Spaces using the CLI
+## Enable Azure Dev Spaces using the Azure CLI
Before you can enable Dev Spaces using the CLI, you need: * An Azure subscription. If you don't have an Azure subscription, you can create a [free account][az-portal-create-account].
You can use the Azure Dev Spaces client-side tools to interact with dev spaces o
* In [Visual Studio 2019][visual-studio], install the Azure Development workload. * Download and install the [Windows][cli-win], [Mac][cli-mac], or [Linux][cli-linux] CLI.
-## Remove Azure Dev Spaces using the CLI
+## Remove Azure Dev Spaces using the Azure CLI
To remove Azure Dev Spaces from your AKS cluster, use the `azds remove` command.
dev-spaces Migrate To Bridge To Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dev-spaces/migrate-to-bridge-to-kubernetes.md
Learn more about how Bridge to Kubernetes works.
> [How Bridge to Kubernetes works][how-it-works-bridge-to-kubernetes]
-[azds-delete]: how-to/install-dev-spaces.md#remove-azure-dev-spaces-using-the-cli
[kubernetes-extension]: https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools [btk-sample-app]: /visualstudio/containers/bridge-to-kubernetes#install-the-sample-application [how-it-works-bridge-to-kubernetes]: /visualstudio/containers/overview-bridge-to-kubernetes
digital-twins Concepts High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-high-availability-disaster-recovery.md
For best practices on HA/DR, see the following Azure guidance on this topic:
Read more about getting started with Azure Digital Twins solutions: * [*What is Azure Digital Twins?*](overview.md)
-* [*Quickstart: Explore a sample scenario*](quickstart-adt-explorer.md)
+* [*Quickstart: Explore a sample scenario*](quickstart-azure-digital-twins-explorer.md)
digital-twins Concepts Ontologies Convert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-convert.md
The sample is a .NET Core command-line application called **RdfToDtdlConverter**
You can get the sample here: [**RdfToDtdlConverter**](/samples/azure-samples/rdftodtdlconverter/digital-twins-model-conversion-samples/).
-To download the code to your machine, hit the *Download ZIP* button underneath the title on the sample landing page. This will download a *ZIP* file under the name *RdfToDtdlConverter_sample_application_to_convert_RDF_to_DTDL.zip*, which you can then unzip and explore.
+To download the code to your machine, select the **Browse code** button underneath the title on the sample page, which will take you to the GitHub repo for the sample. Select the **Code** button and **Download ZIP** to download the sample as a *.ZIP* file called *RdfToDtdlConverter-main.zip*. You can then unzip the file and explore the code.
+ You can use this sample to see the conversion patterns in context, and to use as a building block for your own applications that perform model conversions according to your own specific needs.
digital-twins Concepts Ontologies Extend https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-extend.md
In the DTDL-based RealEstateCore ontology, the Space hierarchy is used to define
A portion of the hierarchy looks like the diagram below. For more information about the RealEstateCore ontology, see [*Concepts: Adopting industry-standard ontologies*](concepts-ontologies-adopt.md#realestatecore-smart-building-ontology).
To extend the industry ontology with this new concept, create a new interface th
After adding the focus room interface, the extended hierarchy shows the new room type. ### Add additional capabilities to existing interfaces
To extend the industry ontology, you create your own interfaces that extend from
After extending the portion of the hierarchy shown above, the extended hierarchy looks like the diagram below. Here the extended Space interface adds the `drawingId` property that will contain an ID that associates the digital twin with the 3D drawing. Additionally, the ConferenceRoom interface adds an "online" property that will contain the online status of the conference room. Through inheritance, the ConferenceRoom interface contains all capabilities from the RealEstateCore ConferenceRoom interface, as well as all capabilities from the extended Space interface. ## Using the extended space hierarchy
When you create digital twins using the extended Space hierarchy, each digital t
Each digital twin's model will be an interface from the extended hierarchy, shown in the diagram below. When querying for digital twins using the model ID (the `IS_OF_MODEL` operator), the model IDs from the extended hierarchy should be used. For example, `SELECT * FROM DIGITALTWINS WHERE IS_OF_MODEL('dtmi:com:example:Office;1')`.
digital-twins How To Create App Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-app-registration.md
# Create an app registration to use with Azure Digital Twins
-When working with an Azure Digital Twins instance, it is common to interact with that instance through client applications, such as a custom client app or a sample like [Azure Digital Twins Explorer](quickstart-adt-explorer.md). Those applications need to authenticate with Azure Digital Twins in order to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**.
+When working with an Azure Digital Twins instance, it is common to interact with that instance through client applications, such as a custom client app or a sample like [Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md). Those applications need to authenticate with Azure Digital Twins in order to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**.
This is not required for all authentication scenarios. However, if you are using an authentication strategy or code sample that does require an app registration, including a **client ID** and **tenant ID**, this article shows you how to set one up.
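As a rough sketch of how those values typically get used (assuming the .NET SDK and an interactive browser sign-in; your own code or sample may differ), the client ID and tenant ID are passed to an `Azure.Identity` credential when the `DigitalTwinsClient` is constructed:

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Placeholder values; substitute the IDs from your own app registration.
var credential = new InteractiveBrowserCredential(
    new InteractiveBrowserCredentialOptions
    {
        TenantId = "<your-tenant-ID>",
        ClientId = "<your-client-ID>",
    });

var client = new DigitalTwinsClient(
    new Uri("https://<your-Azure-Digital-Twins-instance-host-name>"), credential);
```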
digital-twins How To Create Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-azure-function.md
# Mandatory fields. Title: Set up a function in Azure for processing data
+ Title: Set up a function in Azure to process data
description: See how to create a function in Azure that can access and be triggered by digital twins.
# Connect function apps in Azure for processing data
-Updating digital twins based on data is handled using [**event routes**](concepts-route-events.md) through compute resources, such as a function that's made by using [Azure Functions](../azure-functions/functions-overview.md). Functions can be used to update a digital twin in response to:
-* device telemetry data coming from IoT Hub
-* property change or other data coming from another digital twin within the twin graph
+Digital twins can be updated based on data by using [event routes](concepts-route-events.md) through compute resources. For example, a function that's made by using [Azure Functions](../azure-functions/functions-overview.md) can update a digital twin in response to:
+* Device telemetry data from Azure IoT Hub.
+* A property change or other data from another digital twin within the twin graph.
-This article walks you through creating a function in Azure for use with Azure Digital Twins.
+This article shows you how to create a function in Azure for use with Azure Digital Twins. To create a function, you'll follow these basic steps:
-Here is an overview of the steps it contains:
+1. Create an Azure Functions project in Visual Studio.
+2. Write a function that has an [Azure Event Grid](../event-grid/overview.md) trigger.
+3. Add authentication code to the function so you can access Azure Digital Twins.
+4. Publish the function app to Azure.
+5. Set up [security](concepts-security.md) for the function app.
-1. Create an Azure Functions project in Visual Studio
-2. Write a function with an [Event Grid](../event-grid/overview.md) trigger
-3. Add authentication code to the function (to be able to access Azure Digital Twins)
-4. Publish the function app to Azure
-5. Set up [security](concepts-security.md) access for the function app
-
-## Prerequisite: Set up Azure Digital Twins instance
+## Prerequisite: Set up Azure Digital Twins
[!INCLUDE [digital-twins-prereq-instance.md](../../includes/digital-twins-prereq-instance.md)] ## Create a function app in Visual Studio
-In Visual Studio 2019, select _File > New > Project_ and search for the _Azure Functions_ template. Select _Next_.
+In Visual Studio 2019, select **File** > **New** > **Project**. Search for the **Azure Functions** template. Select **Next**.
:::image type="content" source="media/how-to-create-azure-function/create-azure-function-project.png" alt-text="Screenshot of Visual Studio showing the new project dialog. The Azure Functions project template is highlighted.":::
-Specify a name for the function app and select _Create_.
+Specify a name for the function app and then select __Create__.
-Select the function app type of *Event Grid trigger* and select _Create_.
+Select the function app type **Event Grid trigger** and then select __Create__.
:::image type="content" source="media/how-to-create-azure-function/event-grid-trigger-function.png" alt-text="Screenshot of Visual Studio showing the dialog to create a new Azure Functions application. The Event Grid trigger option is highlighted.":::
-Once your function app is created, Visual Studio will generate a code sample in a **Function1.cs** file in your project folder. This short function is used to log events.
+After your function app is created, Visual Studio generates a code sample in a *Function1.cs* file in your project folder. This short function is used to log events.
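For reference, the generated Event Grid-triggered skeleton typically looks similar to this minimal sketch (exact namespaces and names depend on the template and package versions):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;
using Azure.Messaging.EventGrid;

public static class Function1
{
    [FunctionName("Function1")]
    public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        // The template only logs the data of the incoming Event Grid event.
        log.LogInformation(eventGridEvent.Data.ToString());
    }
}
```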
-## Write a function with an Event Grid trigger
+## Write a function that has an Event Grid trigger
-You can write a function by adding SDK to your function app. The function app interacts with Azure Digital Twins using the [Azure Digital Twins SDK for .NET (C#)](/dotnet/api/overview/azure/digitaltwins/client).
+You can write a function by adding an SDK to your function app. The function app interacts with Azure Digital Twins by using the [Azure Digital Twins SDK for .NET (C#)](/dotnet/api/overview/azure/digitaltwins/client).
-In order to use the SDK, you'll need to include the following packages into your project. You can either install the packages using Visual Studio's NuGet package manager, or add the packages using `dotnet` in a command-line tool.
+To use the SDK, you'll need to include the following packages in your project. Install the packages by using the Visual Studio NuGet package manager. Or add the packages by using `dotnet` in a command-line tool.
* [Azure.DigitalTwins.Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core/)
* [Azure.Identity](https://www.nuget.org/packages/Azure.Identity/)
* [System.Net.Http](https://www.nuget.org/packages/System.Net.Http/)
* [Azure.Core](https://www.nuget.org/packages/Azure.Core/)
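For example, from the project directory, the `dotnet` equivalent would be along these lines:

```console
dotnet add package Azure.DigitalTwins.Core
dotnet add package Azure.Identity
dotnet add package System.Net.Http
dotnet add package Azure.Core
```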
-Next, in your Visual Studio Solution Explorer, open the _Function1.cs_ file where you have sample code and add the following `using` statements for these packages to your function.
+Next, in Visual Studio Solution Explorer, open the _Function1.cs_ file that includes your sample code. Add the following `using` statements for the packages.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="Function_dependencies"::: ## Add authentication code to the function
-You will now declare class level variables and add authentication code that will allow the function to access Azure Digital Twins. You will add the following to your function in the _Function1.cs_ file.
+Now declare class-level variables and add authentication code that will allow the function to access Azure Digital Twins. Add the variables and code to your function in the _Function1.cs_ file.
-* Code to read the Azure Digital Twins service URL as an **environment variable**. It's a good practice to read the service URL from an environment variable, rather than hard-coding it in the function. You'll set the value of this environment variable [later in this article](#set-up-security-access-for-the-function-app). For more information about environment variables, see [*Manage your function app*](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal).
+* **Code to read the Azure Digital Twins service URL as an environment variable.** It's a good practice to read the service URL from an environment variable rather than hard-coding it in the function. You'll set the value of this environment variable [later in this article](#set-up-security-access-for-the-function-app). For more information about environment variables, see [Manage your function app](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal).
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="ADT_service_URL":::
-* A static variable to hold an HttpClient instance. HttpClient is relatively expensive to create, and we want to avoid having to do this for every function invocation.
+* **A static variable to hold an HttpClient instance.** HttpClient is relatively expensive to create, so we want to avoid creating it for every function invocation.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="HTTP_client":::
-* You can use the managed identity credentials in Azure Functions.
+* **Managed identity credentials.**
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="ManagedIdentityCredential":::
-* Add a local variable _DigitalTwinsClient_ inside of your function to hold your Azure Digital Twins client instance. Do *not* make this variable static inside your class.
+* **A local variable _DigitalTwinsClient_.** Add the variable inside your function to hold your Azure Digital Twins client instance. Do *not* make this variable static inside your class.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="DigitalTwinsClient":::
-* Add a null check for _adtInstanceUrl_ and wrap your function logic in a try/catch block to catch any exceptions.
+* **A null check for _adtInstanceUrl_.** Add the null check and then wrap your function logic in a try/catch block to catch any exceptions.
-After these changes, your function code will be similar to the following:
+After these changes, your function code will look like the following example.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs":::
-Now that your application is written, you can publish it to Azure using the steps in the next section.
+Now that your application is written, you can publish it to Azure.
## Publish the function app to Azure [!INCLUDE [digital-twins-publish-azure-function.md](../../includes/digital-twins-publish-azure-function.md)]
-### Verify function publish
+### Verify the publication of your function
+
+1. Sign in by using your credentials in the [Azure portal](https://portal.azure.com/).
+2. In the search box at the top of the window, search for your function app name and then select it.
-1. Sign in with your credentials in the [Azure portal](https://portal.azure.com/).
-2. In the search bar on the top of the window, search for your **function app name**.
+ :::image type="content" source="media/how-to-create-azure-function/search-function-app.png" alt-text="Screenshot showing the Azure portal. In the search field, enter the function app name." lightbox="media/how-to-create-azure-function/search-function-app.png":::
- :::image type="content" source="media/how-to-create-azure-function/search-function-app.png" alt-text="Search for your function app with its name in the Azure portal." lightbox="media/how-to-create-azure-function/search-function-app.png":::
+3. On the **Function app** page that opens, in the menu on the left, choose **Functions**. If your function is successfully published, its name appears in the list.
-3. In the *Function app* page that opens, choose *Functions* in the menu options on the left. If your function is successfully published, you'll see your function name in the list.
-Note that you might have to wait a few min or refresh the page couple of times before you can see your function listed in the published functions list.
+ > [!Note]
+ > You might have to wait a few minutes or refresh the page a couple of times before your function appears in the list of published functions.
:::image type="content" source="media/how-to-create-azure-function/view-published-functions.png" alt-text="View published functions in the Azure portal." lightbox="media/how-to-create-azure-function/view-published-functions.png":::
-For your function app to be able to access Azure Digital Twins, it will need to have a system-managed identity with permissions to access your Azure Digital Twins instance. You'll set that up next.
+To access Azure Digital Twins, your function app needs a system-managed identity with permissions to access your Azure Digital Twins instance. You'll set that up next.
## Set up security access for the function app
-You can set up security access for the function app using either the Azure CLI or the Azure portal. Follow the steps for your preferred option below.
+You can set up security access for the function app by using either the Azure CLI or the Azure portal. Follow the steps for your preferred option.
# [CLI](#tab/cli)
-You can run these commands in [Azure Cloud Shell](https://shell.azure.com) or a [local Azure CLI installation](/cli/azure/install-azure-cli).
-You can use the function app's system-managed identity to give it the _**Azure Digital Twins Data Owner**_ role for your Azure Digital Twins instance. This will give the function app permission in the instance to perform data plane activities. Then, make the URL of Azure Digital Twins instance accessible to your function by setting an environment variable.
+Run these commands in [Azure Cloud Shell](https://shell.azure.com) or a [local Azure CLI installation](/cli/azure/install-azure-cli).
+You can use the function app's system-managed identity to give it the **Azure Digital Twins Data Owner** role for your Azure Digital Twins instance. The role gives the function app permission in the instance to perform data plane activities. Then make the URL of the instance accessible to your function by setting an environment variable.
-### Assign access role
+### Assign an access role
[!INCLUDE [digital-twins-permissions-required.md](../../includes/digital-twins-permissions-required.md)]
-The function skeleton from earlier examples requires that a bearer token to be passed to it, in order to be able to authenticate with Azure Digital Twins. To make sure that this bearer token is passed, you'll need to set up [Managed Service Identity (MSI)](../active-directory/managed-identities-azure-resources/overview.md) permissions for the function app to access Azure Digital Twins. This only needs to be done once for each function app.
+The function skeleton in earlier examples requires a bearer token to be passed to it. If the bearer token isn't passed, the function app can't authenticate with Azure Digital Twins.
+
+To make sure the bearer token is passed, set up [managed identities](../active-directory/managed-identities-azure-resources/overview.md) permissions so the function app can access Azure Digital Twins. You set up these permissions only once for each function app.
-1. Use the following command to see the details of the system-managed identity for the function. Take note of the _principalId_ field in the output.
+1. Use the following command to see the details of the system-managed identity for the function. Take note of the `principalId` field in the output.
```azurecli-interactive
az functionapp identity show -g <your-resource-group> -n <your-App-Service-(function-app)-name>
```

>[!NOTE]
- > If the result is empty instead of showing details of an identity, create a new system-managed identity for the function using this command:
+ > If the result is empty instead of showing identity details, create a new system-managed identity for the function by using this command:
>
>```azurecli-interactive
>az functionapp identity assign -g <your-resource-group> -n <your-App-Service-(function-app)-name>
>```
>
- > The output will then display details of the identity, including the _principalId_ value required for the next step.
+ > The output displays details of the identity, including the `principalId` value required for the next step.
-1. Use the _principalId_ value in the following command to assign the function app's identity to the _Azure Digital Twins Data Owner_ role for your Azure Digital Twins instance.
+1. Use the `principalId` value in the following command to assign the function app's identity to the _Azure Digital Twins Data Owner_ role for your Azure Digital Twins instance.
```azurecli-interactive
az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<principal-ID>" --role "Azure Digital Twins Data Owner"
```
The function skeleton from earlier examples requires that a bearer token to be p
### Configure application settings
-Lastly, make the URL of your Azure Digital Twins instance accessible to your function by setting an **environment variable** for it. For more information about environment variables, see [*Manage your function app*](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal).
+Make the URL of your instance accessible to your function by setting an environment variable for it. For more information about environment variables, see [Manage your function app](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal).
> [!TIP]
-> The Azure Digital Twins instance's URL is made by adding *https://* to the beginning of your Azure Digital Twins instance's *host name*. To see the host name, along with all the properties of your instance, you can run `az dt show --dt-name <your-Azure-Digital-Twins-instance>`.
+> The Azure Digital Twins instance's URL is made by adding *https://* to the beginning of your instance's host name. To see the host name, along with all the properties of your instance, run `az dt show --dt-name <your-Azure-Digital-Twins-instance>`.
```azurecli-interactive
az functionapp config appsettings set -g <your-resource-group> -n <your-App-Service-(function-app)-name> --settings "ADT_SERVICE_URL=https://<your-Azure-Digital-Twins-instance-host-name>"
```
az functionapp config appsettings set -g <your-resource-group> -n <your-App-Serv
Complete the following steps in the [Azure portal](https://portal.azure.com/).
-### Assign access role
+### Assign an access role
[!INCLUDE [digital-twins-permissions-required.md](../../includes/digital-twins-permissions-required.md)]
-A system assigned managed identity enables Azure resources to authenticate to cloud services (for example, Azure Key Vault) without storing credentials in code. Once enabled, all necessary permissions can be granted via Azure role-based access control. The lifecycle of this type of managed identity is tied to the lifecycle of this resource. Additionally, each resource can only have one system assigned managed identity.
+A system-assigned managed identity enables Azure resources to authenticate to cloud services (for example, Azure Key Vault) without storing credentials in code. After you enable system-assigned managed identity, all necessary permissions can be granted through Azure role-based access control.
-1. In the [Azure portal](https://portal.azure.com/), search for your function app by typing its name into the search bar. Select your app from the results.
+The lifecycle of this type of managed identity is tied to the lifecycle of this resource. Additionally, each resource can have only one system-assigned managed identity.
- :::image type="content" source="media/how-to-create-azure-function/portal-search-for-function-app.png" alt-text="Screenshot of the Azure portal: The function app's name is being searched in the portal search bar and the search result is highlighted.":::
+1. In the [Azure portal](https://portal.azure.com/), search for your function app by typing its name in the search box. Select your app from the results.
-1. On the function app page, select _Identity_ in the navigation bar on the left to work with a managed identity for the function. On the _System assigned_ page, verify that the _Status_ is set to **On** (if it's not, set it now and *Save* the change).
+ :::image type="content" source="media/how-to-create-azure-function/portal-search-for-function-app.png" alt-text="Screenshot of the Azure portal. The function app's name is in the portal search bar, and the search result is highlighted.":::
- :::image type="content" source="media/how-to-create-azure-function/verify-system-managed-identity.png" alt-text="Screenshot of the Azure portal: In the Identity page for the function app, the Status option is set to On." lightbox="media/how-to-create-azure-function/verify-system-managed-identity.png":::
+1. On the function app page, in the menu on the left, select __Identity__ to work with a managed identity for the function. On the __System assigned__ page, verify that the __Status__ is set to **On**. If it's not, set it now and then **Save** the change.
-1. Select the _Azure role assignments_ button, which will open up the *Azure role assignments* page.
+ :::image type="content" source="media/how-to-create-azure-function/verify-system-managed-identity.png" alt-text="Screenshot of the Azure portal. On the Identity page for the function app, the Status option is set to On." lightbox="media/how-to-create-azure-function/verify-system-managed-identity.png":::
- :::image type="content" source="media/how-to-create-azure-function/add-role-assignment-1.png" alt-text="Screenshot of the Azure portal: A highlight around the Azure role assignments button under Permissions in the Azure Function's Identity page." lightbox="media/how-to-create-azure-function/add-role-assignment-1.png":::
+1. Select __Azure role assignments__.
- Select _+ Add role assignment (Preview)_.
+ :::image type="content" source="media/how-to-create-azure-function/add-role-assignment-1.png" alt-text="Screenshot of the Azure portal. On the Azure Function's Identity page, under Permissions, the button Azure role assignments is highlighted." lightbox="media/how-to-create-azure-function/add-role-assignment-1.png":::
- :::image type="content" source="media/how-to-create-azure-function/add-role-assignment-2.png" alt-text="Screenshot of the Azure portal: A highlight around + Add role assignment (Preview) in the Azure role assignments page." lightbox="media/how-to-create-azure-function/add-role-assignment-2.png":::
+ Select __+ Add role assignment (Preview)__.
-1. On the _Add role assignment (Preview)_ page that opens up, select the following values:
+ :::image type="content" source="media/how-to-create-azure-function/add-role-assignment-2.png" alt-text="Screenshot of the Azure portal. On the Azure role assignments page, the button Add role assignment (Preview) is highlighted." lightbox="media/how-to-create-azure-function/add-role-assignment-2.png":::
- * **Scope**: Resource group
- * **Subscription**: Select your Azure subscription
- * **Resource group**: Select your resource group from the dropdown
- * **Role**: Select _Azure Digital Twins Data Owner_ from the dropdown
+1. On the __Add role assignment (Preview)__ page, select the following values:
- Then, save your details by hitting the _Save_ button.
+ * **Scope**: _Resource group_
+ * **Subscription**: Select your Azure subscription.
+ * **Resource group**: Select your resource group.
+ * **Role**: _Azure Digital Twins Data Owner_
- :::image type="content" source="media/how-to-create-azure-function/add-role-assignment-3.png" alt-text="Screenshot of the Azure portal: Dialog to add a new role assignment (Preview). There are fields for the Scope, Subscription, Resource group, and Role.":::
+ Save the details by selecting __Save__.
+
+ :::image type="content" source="media/how-to-create-azure-function/add-role-assignment-3.png" alt-text="Screenshot of the Azure portal, showing how to add a new role assignment. The dialog shows fields for the Scope, Subscription, Resource group, and Role.":::
### Configure application settings
-To make the URL of your Azure Digital Twins instance accessible to your function, you can set an **environment variable** for it. For more information about environment variables, see [*Manage your function app*](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal). Application settings are exposed as environment variables to access the Azure Digital Twins instance.
+To make the URL of your Azure Digital Twins instance accessible to your function, you can set an environment variable. Application settings are exposed as environment variables to allow access to the Azure Digital Twins instance. For more information about environment variables, see [Manage your function app](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal).
+
+To set an environment variable with the URL of your instance, first find your instance's host name:
-To set an environment variable with the URL of your instance, first get the URL by finding your Azure Digital Twins instance's host name. Search for your instance in the [Azure portal](https://portal.azure.com) search bar. Then, select _Overview_ on the left navigation bar to view the _Host name_. Copy this value.
+1. Search for your instance in the [Azure portal](https://portal.azure.com).
+1. In the menu on the left, select __Overview__.
+1. Copy the __Host name__ value.
+ :::image type="content" source="media/how-to-create-azure-function/instance-host-name.png" alt-text="Screenshot of the Azure portal. On the instance's Overview page, the host name value is highlighted.":::
-You can now create an application setting with these steps:
+You can now create an application setting:
-1. Search for your function app in the portal search bar, and select it from the results.
+1. In the portal search bar, search for your function app and then select it from the results.
- :::image type="content" source="media/how-to-create-azure-function/portal-search-for-function-app.png" alt-text="Screenshot of the Azure portal: The function app's name is being searched in the portal search bar and the search result is highlighted.":::
+ :::image type="content" source="media/how-to-create-azure-function/portal-search-for-function-app.png" alt-text="Screenshot of the Azure portal. The function app's name is being searched in the portal search bar. The search result is highlighted.":::
-1. Select _Configuration_ on the navigation bar on the left. In the _Application settings_ tab, select _+ New application setting_.
+1. On the left, select __Configuration__. Then on the __Application settings__ tab, select __+ New application setting__.
- :::image type="content" source="media/how-to-create-azure-function/application-setting.png" alt-text="Screenshot of the Azure portal: In the Configuration page for the function app, the button to create a New application setting is highlighted.":::
+ :::image type="content" source="media/how-to-create-azure-function/application-setting.png" alt-text="Screenshot of the Azure portal. On the Configuration tab for the function app, the button to create a New application setting is highlighted.":::
-1. In the window that opens up, use the host name value copied above to create an application setting.
+1. In the window that opens, use the host name value you copied to create an application setting.
* **Name**: ADT_SERVICE_URL
* **Value**: https://{your-azure-digital-twins-host-name}
- Select _OK_ to create an application setting.
+ Select __OK__ to create an application setting.
- :::image type="content" source="media/how-to-create-azure-function/add-application-setting.png" alt-text="Screenshot of the Azure portal: The OK button is highlighted after filling out the Name and Value fields in the Add/Edit application setting page.":::
+ :::image type="content" source="media/how-to-create-azure-function/add-application-setting.png" alt-text="Screenshot of the Azure portal. On the Add/Edit application setting page, the Name and Value fields are filled out. The O K button is highlighted.":::
-1. After creating the setting, you should see it appear back in the _Application settings_ tab. Verify *ADT_SERVICE_URL* appears in the list, then save the new application setting by selecting the _Save_ button.
+1. After you create the setting, it should appear on the __Application settings__ tab. Verify that **ADT_SERVICE_URL** appears on the list. Then save the new application setting by selecting __Save__.
- :::image type="content" source="media/how-to-create-azure-function/application-setting-save-details.png" alt-text="Screenshot of the Azure portal: The application settings page, with the new ADT_SERVICE_URL setting highlighted. The Save button is also highlighted.":::
+ :::image type="content" source="media/how-to-create-azure-function/application-setting-save-details.png" alt-text="Screenshot of the Azure portal. On the application settings tab, the new A D T SERVICE U R L setting is highlighted. The Save button is also highlighted.":::
-1. Any changes to the application settings require an application restart to take effect, so select _Continue_ to restart your application when prompted.
+1. Any changes to the application settings require an application restart, so select __Continue__ to restart your application when prompted.
- :::image type="content" source="media/how-to-create-azure-function/save-application-setting.png" alt-text="Screenshot of the Azure portal: There is a notice that any changes to application settings with restart your application. The Continue button is highlighted.":::
+ :::image type="content" source="media/how-to-create-azure-function/save-application-setting.png" alt-text="Screenshot of the Azure portal. A note states that any changes to application settings will restart your application. The Continue button is highlighted.":::
## Next steps
-In this article, you followed the steps to set up a function app in Azure for use with Azure Digital Twins.
-
-Next, see how to build on your basic function to ingest IoT Hub data into Azure Digital Twins:
-* [*How-to: Ingest telemetry from IoT Hub*](how-to-ingest-iot-hub-data.md)
+In this article, you set up a function app in Azure for use with Azure Digital Twins. Next, see how to build on your basic function to [ingest IoT Hub data into Azure Digital Twins](how-to-ingest-iot-hub-data.md).
digital-twins How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-move-regions.md
If the sample isn't able to handle the size of your graph, you can export and im
To proceed with Azure Digital Twins Explorer, first download the sample application code and set it up to run on your machine.
-To get the sample, see [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/). Select the **Download ZIP** button to download a .zip file of this sample code to your machine as **Azure_Digital_Twins__ADT__explorer.zip**. Unzip the file.
+To get the sample, go to [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/). Select the **Browse code** button underneath the title, which will take you to the GitHub repo for the sample. Select the **Code** button and **Download ZIP** to download the sample as a *.ZIP* file to your machine.
-Next, set up and configure permissions for Azure Digital Twins Explorer. Follow the instructions in the [Set up Azure Digital Twins and Azure Digital Twins Explorer](quickstart-adt-explorer.md#set-up-azure-digital-twins-and-azure-digital-twins-explorer) section of the Azure Digital Twins quickstart. This section walks you through the following steps:
+
+Unzip the file.
+
+Next, set up and configure permissions for Azure Digital Twins Explorer. Follow the instructions in the [Set up Azure Digital Twins and Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md#set-up-azure-digital-twins-and-azure-digital-twins-explorer) section of the Azure Digital Twins quickstart. This section walks you through the following steps:
1. Set up an Azure Digital Twins instance. You can skip this part because you already have an instance. 1. Set up local Azure credentials to provide access to your instance.
digital-twins How To Provision Using Device Provisioning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
Before you can set up the provisioning, you'll need to set up the following:
* an **IoT hub**. For instructions, see the *Create an IoT Hub* section of this [IoT Hub quickstart](../iot-hub/quickstart-send-telemetry-cli.md). * an [**Azure function**](../azure-functions/functions-overview.md) that updates digital twin information based on IoT Hub data. Follow the instructions in [*How to: Ingest IoT hub data*](how-to-ingest-iot-hub-data.md) to create this Azure function. Gather the function **_name_** to use it in this article.
-This sample also uses a **device simulator** that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). Get the sample project on your machine by navigating to the sample link and selecting the *Download ZIP* button underneath the title. Unzip the downloaded folder.
+This sample also uses a **device simulator** that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). Get the sample project on your machine by navigating to the sample link and selecting the **Browse code** button underneath the title. This will take you to the GitHub repo for the sample, which you can download as a *.ZIP* file by selecting the **Code** button and **Download ZIP**.
++
+Unzip the downloaded folder.
You'll need [**Node.js**](https://nodejs.org/download) installed on your machine. The device simulator is based on **Node.js**, version 10.0.x or later.
You'll need [**Node.js**](https://nodejs.org/download) installed on your machine
The image below illustrates the architecture of this solution using Azure Digital Twins with Device Provisioning Service. It shows both the device provision and retire flow. This article is divided into two sections: * [*Auto-provision device using Device Provisioning Service*](#auto-provision-device-using-device-provisioning-service)
For deeper explanations of each step in the architecture, see their individual s
In this section, you'll be attaching Device Provisioning Service to Azure Digital Twins to auto-provision devices through the path below. This is an excerpt from the full architecture shown [earlier](#solution-architecture). Here is a description of the process flow: 1. Device contacts the DPS endpoint, passing identifying information to prove its identity.
Start by opening the function app project in Visual Studio on your machine and f
Add a new function of type *HTTP-trigger* to the function app project in Visual Studio. #### Step 2: Fill in function code
Next, choose the *Select a new function* button to link your function app to the
Save your details. After creating the enrollment, the **Primary Key** for the enrollment will be used later to configure the device simulator for this article. ### Set up the device simulator
-This sample uses a device simulator that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). If you haven't already downloaded the sample, get it now by navigating to the sample link and selecting the *Download ZIP* button underneath the title. Unzip the downloaded folder.
+This sample uses a device simulator that includes provisioning using the Device Provisioning Service. The device simulator is located in the [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/) that you downloaded in the [Prerequisites](#prerequisites) section.
#### Upload the model
Next, in your device simulator directory, copy the .env.template file to a new f
* PROVISIONING_IDSCOPE: To get this value, navigate to your device provisioning service in the [Azure portal](https://portal.azure.com/), then select *Overview* in the menu options and look for the field *ID Scope*.
- :::image type="content" source="media/how-to-provision-using-dps/id-scope.png" alt-text="Screenshot of the Azure portal view of the device provisioning overview page to copy the ID Scope value." lightbox="media/how-to-provision-using-dps/id-scope.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/id-scope.png" alt-text="Screenshot of the Azure portal view of the device provisioning overview page to copy the ID Scope value." lightbox="media/how-to-provision-using-device-provisioning-service/id-scope.png":::
* PROVISIONING_REGISTRATION_ID: You can choose a registration ID for your device. * ADT_MODEL_ID: `dtmi:contosocom:DigitalTwins:Thermostat;1` * PROVISIONING_SYMMETRIC_KEY: This is the primary key for the enrollment you set up earlier. To get this value again, navigate to your device provisioning service in the Azure portal, select *Manage enrollments*, then select the enrollment group that you created earlier and copy the *Primary Key*.
- :::image type="content" source="media/how-to-provision-using-dps/sas-primary-key.png" alt-text="Screenshot of the Azure portal view of the device provisioning service manage enrollments page to copy the SAS primary key value." lightbox="media/how-to-provision-using-dps/sas-primary-key.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/sas-primary-key.png" alt-text="Screenshot of the Azure portal view of the device provisioning service manage enrollments page to copy the SAS primary key value." lightbox="media/how-to-provision-using-device-provisioning-service/sas-primary-key.png":::
Now, use the values above to update the .env file settings.
node .\adt_custom_register.js
``` You should see the device being registered and connected to IoT Hub, and then starting to send messages. ### Validate
az dt twin show -n <Digital Twins instance name> --twin-id "<Device Registration
``` You should see the twin of the device being found in the Azure Digital Twins instance. ## Auto-retire device using IoT Hub lifecycle events In this section, you will be attaching IoT Hub lifecycle events to Azure Digital Twins to auto-retire devices through the path below. This is an excerpt from the full architecture shown [earlier](#solution-architecture). Here is a description of the process flow: 1. An external or manual process triggers the deletion of a device in IoT Hub.
Next, you'll create an Azure [event hub](../event-hubs/event-hubs-about.md) to r
Follow the steps described in the [*Create an event hub*](../event-hubs/event-hubs-create.md) quickstart. Name your event hub *lifecycleevents*. You'll use this event hub name when you set up IoT Hub route and an Azure function in the next sections. The screenshot below illustrates the creation of the event hub. #### Create SAS policy for your event hub
To do this,
2. Select **Add**. In the *Add SAS Policy* window that opens, enter a policy name of your choice and select the *Listen* checkbox. 3. Select **Create**. #### Configure event hub with function app
Next, configure the Azure function app that you set up in the [prerequisites](#p
1. Open the policy that you just created and copy the **Connection string-primary key** value.
- :::image type="content" source="media/how-to-provision-using-dps/event-hub-sas-policy-connection-string.png" alt-text="Screenshot of the Azure portal to copy the connection string-primary key." lightbox="media/how-to-provision-using-dps/event-hub-sas-policy-connection-string.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/event-hub-sas-policy-connection-string.png" alt-text="Screenshot of the Azure portal to copy the connection string-primary key." lightbox="media/how-to-provision-using-device-provisioning-service/event-hub-sas-policy-connection-string.png":::
2. Add the connection string as a variable in the function app settings with the following Azure CLI command. The command can be run in [Cloud Shell](https://shell.azure.com), or locally if you have the Azure CLI [installed on your machine](/cli/azure/install-azure-cli).
Start by opening the function app project in Visual Studio on your machine and f
Add a new function of type *Event Hub Trigger* to the function app project in Visual Studio. #### Step 2: Fill in function code
Follow these steps to create an event hub endpoint:
2. Select the **Custom endpoints** tab. 3. Select **+ Add** and choose **Event hubs** to add an event hubs type endpoint.
- :::image type="content" source="media/how-to-provision-using-dps/event-hub-custom-endpoint.png" alt-text="Screenshot of the Visual Studio window to add an event hub custom endpoint." lightbox="media/how-to-provision-using-dps/event-hub-custom-endpoint.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/event-hub-custom-endpoint.png" alt-text="Screenshot of the Visual Studio window to add an event hub custom endpoint." lightbox="media/how-to-provision-using-device-provisioning-service/event-hub-custom-endpoint.png":::
4. In the window *Add an event hub endpoint* that opens, choose the following values: * **Endpoint name**: Choose an endpoint name.
Follow these steps to create an event hub endpoint:
* **Event hub instance**: Choose the event hub name that you created in the previous step. 5. Select **Create**. Keep this window open to add a route in the next step.
- :::image type="content" source="media/how-to-provision-using-dps/add-event-hub-endpoint.png" alt-text="Screenshot of the Visual Studio window to add an event hub endpoint." lightbox="media/how-to-provision-using-dps/add-event-hub-endpoint.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/add-event-hub-endpoint.png" alt-text="Screenshot of the Visual Studio window to add an event hub endpoint." lightbox="media/how-to-provision-using-device-provisioning-service/add-event-hub-endpoint.png":::
Next, you'll add a route that connects to the endpoint you created in the above step, with a routing query that sends the delete events. Follow these steps to create a route: 1. Navigate to the *Routes* tab and select **Add** to add a route.
- :::image type="content" source="media/how-to-provision-using-dps/add-message-route.png" alt-text="Screenshot of the Visual Studio window to add a route to send events." lightbox="media/how-to-provision-using-dps/add-message-route.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/add-message-route.png" alt-text="Screenshot of the Visual Studio window to add a route to send events." lightbox="media/how-to-provision-using-device-provisioning-service/add-message-route.png":::
2. In the *Add a route* page that opens, choose the following values:
Next, you'll add a route that connects to the endpoint you created in the above
3. Select **Save**.
- :::image type="content" source="media/how-to-provision-using-dps/lifecycle-route.png" alt-text="Screenshot of the Azure portal window to add a route to send lifecycle events." lightbox="media/how-to-provision-using-dps/lifecycle-route.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/lifecycle-route.png" alt-text="Screenshot of the Azure portal window to add a route to send lifecycle events." lightbox="media/how-to-provision-using-device-provisioning-service/lifecycle-route.png":::
Once you have gone through this flow, everything is set to retire devices end-to-end.
Follow the steps below to delete the device in the Azure portal:
2. You'll see a device with the device registration ID you chose in the [first half of this article](#auto-provision-device-using-device-provisioning-service). Alternatively, you can choose any other device to delete, as long as it has a twin in Azure Digital Twins so you can verify that the twin is automatically deleted after the device is deleted. 3. Select the device and choose **Delete**. It might take a few minutes to see the changes reflected in Azure Digital Twins.
az dt twin show -n <Digital Twins instance name> --twin-id "<Device Registration
You should see that the twin of the device cannot be found in the Azure Digital Twins instance anymore. ## Clean up resources
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/overview.md
You can view a list of **common IoT terms** and their uses across the Azure IoT
## Next steps
-* Dive into working with Azure Digital Twins in the quickstart: [*Quickstart: Explore a sample scenario*](quickstart-adt-explorer.md).
+* Dive into working with Azure Digital Twins in the quickstart: [*Quickstart: Explore a sample scenario*](quickstart-azure-digital-twins-explorer.md).
* Or, start reading about Azure Digital Twins concepts with [*Concepts: Custom models*](concepts-models.md).
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
+
+# Mandatory fields.
+ Title: Quickstart - Explore a sample scenario
+
+description: Quickstart - Use the Azure Digital Twins Explorer sample to visualize and explore a prebuilt scenario.
++ Last updated : 9/24/2020+++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Quickstart - Explore a sample Azure Digital Twins scenario using Azure Digital Twins Explorer
+
+With Azure Digital Twins, you can create and interact with live models of your real-world environments. First, you model individual elements as **digital twins**. Then you connect them into a knowledge **graph** that can respond to live events and be queried for information.
+
+In this quickstart, you'll explore a prebuilt Azure Digital Twins graph, with the help of a sample application called [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/). You use Azure Digital Twins Explorer to:
+
+- Upload a digital representation of an environment.
+- View visual images of the twins and graph that are created to represent the environment in Azure Digital Twins.
+- Perform other management activities through a browser-based, visual experience.
+
+The quickstart contains the following major steps:
+
+1. Set up an Azure Digital Twins instance and Azure Digital Twins Explorer.
+1. Upload prebuilt models and graph data to construct the sample scenario.
+1. Explore the scenario graph that's created.
+1. Make changes to the graph.
+
+The sample graph you'll be working with represents a building with two floors and two rooms. The graph will look like this image:
++
+## Prerequisites
+
+You'll need an Azure subscription to complete this quickstart. If you don't have one already, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now.
+
+You'll also need **Node.js** on your machine. To get the latest version, see [Node.js](https://nodejs.org/).
+
+Finally, you'll also need to download the sample to use during the quickstart. The sample application is **Azure Digital Twins Explorer**. This sample contains the app you use in the quickstart to load and explore an Azure Digital Twins scenario. It also contains the sample scenario files. To get the sample, go to [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/). Select the **Browse code** button underneath the title, which will take you to the GitHub repo for the sample. Select the **Code** button and **Download ZIP** to download the sample as a *.ZIP* file.
++
+Unzip the **digital-twins-explorer-main.zip** file and extract its contents.
+
+## Set up Azure Digital Twins and Azure Digital Twins Explorer
+
+The first step in working with Azure Digital Twins is to set up an Azure Digital Twins instance. After you create an instance of the service and set up your credentials to authenticate with Azure Digital Twins Explorer, you can connect to the instance in Azure Digital Twins Explorer and populate it with the example data later in the quickstart.
+
+The rest of this section walks you through these steps.
+
+### Set up an Azure Digital Twins instance
++
+### Set up local Azure credentials
+
+The Azure Digital Twins Explorer application uses [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) (part of the `Azure.Identity` library) to authenticate users with the Azure Digital Twins instance when you run it on your local machine. For more information on different ways a client app can authenticate with Azure Digital Twins, see [Write app authentication code](how-to-authenticate-client.md).
+
+With this type of authentication, Azure Digital Twins Explorer will search for credentials within your local environment, such as an Azure sign-in in a local [Azure CLI](/cli/azure/install-azure-cli) or in Visual Studio or Visual Studio Code. For this reason, you should **sign in to Azure locally** through one of these mechanisms to set up credentials for the Azure Digital Twins Explorer app.
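For illustration only, here's the general `DefaultAzureCredential` pattern shown with the .NET SDK (the Explorer app applies the same idea internally); the credential walks through the available local credential sources, such as an Azure CLI sign-in, and uses the first one that succeeds:

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// DefaultAzureCredential picks up a local Azure sign-in (CLI, Visual Studio, and so on).
var credential = new DefaultAzureCredential();
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-host-name>"), credential);
```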
+
+If you're already signed in to Azure through one of these ways, you can skip to the [next section](#run-and-configure-azure-digital-twins-explorer).
+
+Otherwise, you can install the local Azure CLI with these steps:
+
+1. Follow the process at [this installation link](/cli/azure/install-azure-cli) to complete the installation that matches your OS.
+1. Open a console window on your machine.
+1. Run `az login`, and follow the authentication prompts to sign in to your Azure account.
+1. If you use multiple Azure subscriptions under this account, set the authentication context to the subscription that contains your Azure Digital Twins instance by running `az account set --subscription "<your-subscription-name-or-ID>"` (either the name or the ID of the subscription will work).
+
+After you sign in, Azure Digital Twins Explorer should pick up your Azure credentials automatically when you run it in the next section.
+
+You can close the authentication console window if you want. Or, you can keep it open to use in the next step.
+
+### Run and configure Azure Digital Twins Explorer
+
+Next, run the Azure Digital Twins Explorer application and configure it for your Azure Digital Twins instance.
+
+1. Go to the downloaded and unzipped **digital-twins-explorer-main** folder.
+Open a console window to the folder location **digital-twins-explorer-main/client/src**.
+
+1. Run `npm install` to download all the required dependencies.
+
+1. Start the app by running `npm run start`.
+
+ After a few seconds, a browser window opens and the app appears in the browser.
+
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/explorer-blank.png" alt-text="Browser window showing an app running at localhost:3000. The app is called Azure Digital Twins Explorer and contains boxes for Query Explorer, Model View, Graph View, and Property Explorer. There's no onscreen data yet." lightbox="media/quickstart-azure-digital-twins-explorer/explorer-blank.png":::
+
+1. Select the **Sign In** button in the upper-right corner of the window, as shown in the following image, to configure Azure Digital Twins Explorer to work with the instance you've set up.
+
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/sign-in.png" alt-text="Azure Digital Twins Explorer highlighting the Sign In icon near the top of the window. The icon shows a simple silhouette of a person overlaid with a silhouette of a key." lightbox="media/quickstart-azure-digital-twins-explorer/sign-in.png":::
+
+1. Enter the Azure Digital Twins instance URL that you gathered earlier in the [Set up an Azure Digital Twins instance](#set-up-an-azure-digital-twins-instance) section, in the format *https://<instance-host-name>*.
+
+> [!TIP]
+> If a `SignalRService.subscribe` error message appears when you connect, make sure that your Azure Digital Twins URL begins with *https://*.
+>
+> If an authentication error appears, you may want to check your **environment variables** to make sure any credentials included there are valid for Azure Digital Twins. The `DefaultAzureCredential` attempts to authenticate against credential types in a [specific order](/dotnet/api/overview/azure/identity-readme#defaultazurecredential), and environment variables are evaluated first.
+
+If you see a **Permissions requested** pop-up window from Microsoft, grant consent for this application and accept to continue.
+
+>[!NOTE]
+> You can revisit or edit this information at any time by selecting the same icon to open the **Sign In** box again. It will keep the values that you passed in.
+
+## Add the sample data
+
+Next, you'll import the sample scenario and graph into Azure Digital Twins Explorer. The sample scenario is also located in the **digital-twins-explorer-main** folder you downloaded earlier.
+
+### Models
+
+The first step in an Azure Digital Twins solution is to define the vocabulary for your environment. You'll create custom [models](concepts-models.md) that describe the types of entity that exist in your environment.
+
+Each model is written in a language like JSON-LD called Digital Twin Definition Language (DTDL). Each model describes a single type of entity in terms of its **properties**, **telemetry**, **relationships**, and **components**. Later, you'll use these models as the basis for digital twins that represent specific instances of these types.
+
+Typically, when you create a model, you'll complete three steps:
+
+1. Write the model definition. In the quickstart, this step is already done as part of the sample solution.
+1. Validate it to make sure the syntax is accurate. In the quickstart, this step is already done as part of the sample solution.
+1. Upload it to your Azure Digital Twins instance.
+
+For this quickstart, the model files are already written and validated for you. They're included with the solution you downloaded. In this section, you'll upload two prewritten models to your instance to define these components of a building environment:
+
+* Floor
+* Room
+
+#### Upload models
+
+Follow these steps to upload models.
+
+1. In the **MODEL VIEW** box, select the **Upload a Model** icon.
+
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/upload-model.png" alt-text="In the Model View box, the middle icon is highlighted. It shows an arrow pointing into a cloud." lightbox="media/quickstart-azure-digital-twins-explorer/upload-model.png":::
+
+1. In the file selector box that appears, go to the **digital-twins-explorer-main/client/examples** folder in the downloaded repository.
+1. Select **Room.json** and **Floor.json**, and select **OK**. You can upload additional models if you want, but they won't be used in this quickstart.
+1. Follow the pop-up dialog box that asks you to sign in to your Azure account.
+
+>[!NOTE]
+>If you see the following error message:
+> :::image type="content" source="media/quickstart-azure-digital-twins-explorer/error-models-popup.png" alt-text="A pop-up box reading 'Error: Error fetching models: ClientAuthError: Error opening popup window. This can happen if you are using IE or if popups are blocked in the browser.' with a Close button at the bottom." border="false":::
+> Try disabling your pop-up blocker or using a different browser.
+
+Azure Digital Twins Explorer now uploads these model files to your Azure Digital Twins instance. They should show up in the **MODEL VIEW** box and display their friendly names and full model IDs. You can select the **View Model** information icons to see the DTDL code behind them.
+
+ :::column:::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/model-info.png" alt-text="A view of the Model View box with two model definitions listed inside, Floor (dtmi:example:Floor;1) and Room (dtmi:example:Room;1). The View Model information icon showing a letter 'i' in a circle is highlighted for each model." lightbox="media/quickstart-azure-digital-twins-explorer/model-info.png":::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+
+### Twins and the twin graph
+
+Now that some models have been uploaded to your Azure Digital Twins instance, you can add [digital twins](concepts-twins-graph.md) that follow the model definitions.
+
+Digital twins represent the actual entities within your business environment. They can be things like sensors on a farm, lights in a car, or, in this quickstart, rooms on a building floor. You can create many twins of any given model type, such as multiple rooms that all use the *Room* model. You connect them with relationships into a **twin graph** that represents the full environment.
+
+In this section, you'll upload precreated twins that are connected into a precreated graph. The graph contains two floors and two rooms, connected in the following layout:
+
+* Floor0
+ - Contains Room0
+* Floor1
+ - Contains Room1
+
+#### Import the graph
+
+Follow these steps to import the graph.
+
+1. In the **GRAPH VIEW** box, select the **Import Graph** icon.
+
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/import-graph.png" alt-text="In the Graph View box, an icon is highlighted. It shows an arrow pointing into a cloud." lightbox="media/quickstart-azure-digital-twins-explorer/import-graph.png":::
+
+2. In the file selector box, go to the **digital-twins-explorer-main/client/examples** folder, and select the **buildingScenario.xlsx** spreadsheet file. This file contains a description of the sample graph. Select **OK**.
+
+ After a few seconds, Azure Digital Twins Explorer opens an **Import** view that shows a preview of the graph to be loaded.
+
+3. To confirm the graph upload, select the **Save** icon in the upper-right corner of the **GRAPH VIEW** box.
+
+ :::row:::
+ :::column:::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/graph-preview-save.png" alt-text="Highlighting the Save icon in the Graph Preview pane." lightbox="media/quickstart-azure-digital-twins-explorer/graph-preview-save.png":::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+ :::row-end:::
+
+4. Azure Digital Twins Explorer now uses the uploaded file to create the requested twins and relationships between them. A dialog box appears when it's finished. Select **Close**.
+
+ :::row:::
+ :::column:::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/import-success.png" alt-text="A dialog box indicating graph import success. It reads 'Import successful. 4 twins imported. 2 relationships imported.'" lightbox="media/quickstart-azure-digital-twins-explorer/import-success.png":::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+ :::row-end:::
+
+5. The graph has now been uploaded to Azure Digital Twins Explorer. To see the graph, select the **Run Query** button in the **QUERY EXPLORER** box, near the top of the Azure Digital Twins Explorer window.
+
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/run-query.png" alt-text="The Run Query button in the upper-right corner of the window is highlighted." lightbox="media/quickstart-azure-digital-twins-explorer/run-query.png":::
+
+This action runs the default query to select and display all digital twins. Azure Digital Twins Explorer retrieves all twins and relationships from the service. It draws the graph defined by them in the **GRAPH VIEW** box.
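+
+The default query is simply a select over all twins, along these lines (a sketch; the exact text is prefilled in the **QUERY EXPLORER** box):
+
+```sql
+SELECT * FROM digitaltwins
+```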
+
+## Explore the graph
+
+Now you can see the uploaded graph of the sample scenario.
++
+The circles (graph "nodes") represent digital twins. The lines represent relationships. The **Floor0** twin contains **Room0**, and the **Floor1** twin contains **Room1**.
+
+If you're using a mouse, you can drag pieces of the graph to move them around.
+
+### View twin properties
+
+You can select a twin to see a list of its properties and their values in the **PROPERTY EXPLORER** box.
+
+Here are the properties of Room0:
+
+ :::column:::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room0.png" alt-text="Highlight around the Property Explorer box showing properties for Room0, which include (among others) a $dtId field of Room0, a Temperature field of 70, and a Humidity field of 30." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room0.png":::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+
+Room0 has a temperature of 70.
+
+Here are the properties of Room1:
+
+ :::column:::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room1.png" alt-text="Highlight around the Property Explorer box showing properties for Room1, which include (among others) a $dtId field of Room1, a Temperature field of 80, and a Humidity field of 60." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room1.png":::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+
+Room1 has a temperature of 80.
+
+### Query the graph
+
+A main feature of Azure Digital Twins is the ability to [query](concepts-query-language.md) your twin graph easily and efficiently to answer questions about your environment.
+
+One way to query the twins in your graph is by their **properties**. Querying based on properties can help answer a variety of questions. For example, you can find outliers in your environment that might need attention.
+
+In this section, you'll run a query to answer the question of how many twins in your environment have a temperature above 75.
+
+To see the answer, run the following query in the **QUERY EXPLORER** box.
++
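+A query along the following lines answers that question by filtering on the Temperature property (a sketch of the query; the property name matches the sample models used in this quickstart):
+
+```sql
+SELECT * FROM digitaltwins T WHERE T.Temperature > 75
+```
+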
+Recall from viewing the twin properties earlier that Room0 has a temperature of 70, and Room1 has a temperature of 80. For this reason, only Room1 shows up in the results here.
+
+
+>[!TIP]
+> Other comparison operators (<, >, =, or !=) are also supported within the preceding query. You can try plugging these operators, different values, or different twin properties into the query to answer your own questions.
+
+## Edit data in the graph
+
+You can use Azure Digital Twins Explorer to edit the properties of the twins represented in your graph. In this section, we'll raise the temperature of Room0 to 76.
+
+To start, select **Room0** to bring up its property list in the **PROPERTY EXPLORER** box.
+
+The properties in this list are editable. Select the temperature value of **70** to enable entering a new value. Enter **76**, and select the **Save** icon to update the temperature to **76**.
+
+ :::column:::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png" alt-text="The Property Explorer box showing properties for Room0. The temperature value is an editable box showing 76, and there's a highlight around the Save icon." lightbox="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png":::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+
+Now you'll see a **Patch Information** window showing the patch code that was used behind the scenes with the Azure Digital Twins [APIs](how-to-use-apis-sdks.md) to make the update. Select **Close**.
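+
+Behind the scenes, the update is applied as a JSON Patch document. For this change, the patch is similar to the following sketch (the exact code is what appears in the **Patch Information** window):
+
+```json
+[
+  {
+    "op": "replace",
+    "path": "/Temperature",
+    "value": 76
+  }
+]
+```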
+
+### Query to see the result
+
+To verify that the graph successfully registered your update to the temperature for Room0, rerun the query from earlier to get all the twins in the environment with a temperature above 75.
++
+Now that the temperature of Room0 has been changed from 70 to 76, both twins should show up in the result.
++
+## Review and contextualize learnings
+
+In this quickstart, you created an Azure Digital Twins instance, connected it to Azure Digital Twins Explorer, and populated it with a sample scenario.
+
+You then explored the graph, by:
+
+* Using a query to answer a question about the scenario.
+* Editing a property on a digital twin.
+* Running the query again to see how the answer changed as a result of your update.
+
+The intent of this exercise is to demonstrate how you can use the Azure Digital Twins graph to answer questions about your environment, even as the environment continues to change.
+
+In this quickstart, you made the temperature update manually. It's common in Azure Digital Twins to connect digital twins to real IoT devices so that they receive updates automatically, based on telemetry data. In this way, you can build a live graph that always reflects the real state of your environment. You can use queries to get information about what's happening in your environment in real time.
+
+## Clean up resources
+
+To wrap up the work for this quickstart, first end the running console app. This action shuts off the connection to the Azure Digital Twins Explorer app in the browser. You'll no longer be able to view live data in the browser. You can close the browser tab.
+
+Then, you can choose which resources you'd like to remove, depending on what you'd like to do next.
+
+* **If you plan to continue to the Azure Digital Twins tutorials**, you can reuse the instance in this quickstart for those articles, and you don't need to remove it.
+
+
+
+You may also want to delete the project folder from your local machine.
+
+## Next steps
+
+Next, continue on to the Azure Digital Twins tutorials to build out your own Azure Digital Twins scenario and interaction tools.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Code a client app](tutorial-code.md)
digital-twins Reference Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-service-limits.md
These are the service limits of Azure Digital Twins.
## Working with limits
-When a limit is reached, the service throttles additional requests. This will result in a 404 error response from these requests.
+When a limit is reached, the service throttles additional requests. This will result in a 429 error response from these requests.
To manage this, here are some recommendations for working with limits. * **Use retry logic.** The [Azure Digital Twins SDKs](how-to-use-apis-sdks.md) implement retry logic for failed requests, so if you are working with a provided SDK, this is already built-in. Otherwise, consider implementing retry logic in your own application. The service sends back a `Retry-After` header in the failure response, which you can use to determine how long to wait before retrying.
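As a minimal sketch of that retry pattern, assuming you're calling the REST APIs directly with `HttpClient` rather than through an SDK (the method and variable names here are illustrative, not part of any SDK):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<HttpResponseMessage> SendWithRetryAsync(
    HttpClient client, Func<HttpRequestMessage> createRequest, int maxAttempts = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        HttpResponseMessage response = await client.SendAsync(createRequest());

        // 429 means the request was throttled; any other status is returned to the caller.
        if (response.StatusCode != (HttpStatusCode)429 || attempt == maxAttempts)
        {
            return response;
        }

        // Honor the Retry-After header when the service sends it; otherwise back off gradually.
        TimeSpan delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(2 * attempt);
        await Task.Delay(delay);
    }
}
```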
digital-twins Resources Compare Original Release https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/resources-compare-original-release.md
For a list of Azure Digital Twins limits, see [*Azure Digital Twins service limi
## Next steps
-* Dive into working with the current release in the quickstart: [*Quickstart: Explore a sample scenario*](quickstart-adt-explorer.md).
+* Dive into working with the current release in the quickstart: [*Quickstart: Explore a sample scenario*](quickstart-azure-digital-twins-explorer.md).
* Or, start reading about key concepts with [*Concepts: Custom models*](concepts-models.md).
digital-twins Troubleshoot Error Azure Digital Twins Explorer Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-error-azure-digital-twins-explorer-authentication.md
+
+ Title: "Azure Digital Twins Explorer authentication error"
+description: "Causes and resolutions for 'Authentication failed.' in Azure Digital Twins Explorer."
++++ Last updated : 4/8/2021++
+# Authentication failed
+
+This article describes causes and resolution steps for receiving an 'Authentication failed' error while running the [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/) sample on your local machine.
+
+## Symptoms
+
+When setting up and running the Azure Digital Twins Explorer application, attempts to authenticate with the app are met with the following error message:
++
+## Causes
+
+### Cause #1
+
+The Azure Digital Twins Explorer application uses [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) (part of the `Azure.Identity` library), which will search for credentials within your local environment.
+
+As the error text states, this error may occur if you have not provided local credentials for `DefaultAzureCredential` to pick up.
+
+For more information about using local credentials with Azure Digital Twins Explorer, see the [*Set up local Azure credentials*](quickstart-adt-explorer.md#set-up-local-azure-credentials) section of the Azure Digital Twins *Quickstart: Explore a sample scenario*.
+
+### Cause #2
+
+This error may also occur if your Azure account does not have the required Azure role-based access control (Azure RBAC) permissions set on your Azure Digital Twins instance. In order to access data in your instance, you must have the **Azure Digital Twins Data Reader** or **Azure Digital Twins Data Owner** role on the instance you are trying to read or manage, respectively.
+
+For more information about security and roles in Azure Digital Twins, see [*Concepts: Security for Azure Digital Twins solutions*](concepts-security.md).
+
+## Solutions
+
+### Solution #1
+
+First, ensure that you've provided necessary credentials to the application.
+
+#### Provide local credentials
+
+`DefaultAzureCredential` authenticates to the service using the information from a local Azure sign-in. You can provide your Azure credentials by signing into your Azure account in a local [Azure CLI](/cli/azure/install-azure-cli) window, or in Visual Studio or Visual Studio Code.
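+
+For example, assuming the Azure CLI is installed, the following signs you in and shows which account is currently active (the `--query` expression is just one way to format the output):
+
+```azurecli
+az login
+az account show --query "{subscription: name, user: user.name}" --output table
+```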
+
+You can view the credential types that `DefaultAzureCredential` accepts, as well as the order in which they're attempted, in the [Azure Identity documentation for DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential).
+
+If you're already signed in locally to the right Azure account and the issue is not resolved, continue to the next solution.
+
+### Solution #2
+
+Verify that your Azure user has the **Azure Digital Twins Data Reader** role on the Azure Digital Twins instance if you're just trying to read its data, or the **Azure Digital Twins Data Owner** role on the instance if you're trying to manage its data.
+
+Note that this role is different from...
+* the former name for this role during preview, *Azure Digital Twins Owner (Preview)* (the role is the same, but the name has changed)
+* the *Owner* role on the entire Azure subscription. *Azure Digital Twins Data Owner* is a role within Azure Digital Twins and is scoped to this individual Azure Digital Twins instance.
+* the *Owner* role in Azure Digital Twins. These are two distinct Azure Digital Twins management roles, and *Azure Digital Twins Data Owner* is the role that should be used for management.
+
+ If you do not have this role, set it up to resolve the issue.
+
+#### Check current setup
++
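+One way to check which role assignments currently exist on the instance is with the Azure CLI (this sketch assumes the **azure-iot** extension, which provides the `az dt` command group, is installed):
+
+```azurecli
+az dt role-assignment list --dt-name <your-Azure-Digital-Twins-instance>
+```
+
+Look in the output for an entry that assigns your user the **Azure Digital Twins Data Reader** or **Azure Digital Twins Data Owner** role.
+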
+#### Fix issues
+
+If you do not have this role assignment, someone with an Owner role in your **Azure subscription** should run the following command to give your Azure user the appropriate role on the **Azure Digital Twins instance**.
+
+If you're an Owner on the subscription, you can run this command yourself. If you're not, contact an Owner to run this command on your behalf. The role name is either **Azure Digital Twins Data Owner** for edit access or **Azure Digital Twins Data Reader** for read access.
+
+```azurecli-interactive
+az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<your-Azure-AD-email>" --role "<role-name>"
+```
+
+For more details about this role requirement and the assignment process, see the [*Set up your user's access permissions* section](how-to-set-up-instance-cli.md#set-up-user-access-permissions) of *How-to: Set up an instance and authentication (CLI or portal)*.
+
+## Next steps
+
+Read the setup steps for creating and authenticating a new Azure Digital Twins instance:
+* [*How-to: Set up an instance and authentication (CLI)*](how-to-set-up-instance-cli.md)
+
+Read more about security and permissions on Azure Digital Twins:
+* [*Concepts: Security for Azure Digital Twins solutions*](concepts-security.md)
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online.md
To complete all the database objects like table schemas, indexes and stored proc
6. If there are ENUM data types in any tables, it's recommended that you temporarily update them to a 'character varying' datatype in the target table, as in the sketch below. After data replication is done, revert the datatype to ENUM.
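A rough sketch of that temporary change (the table name *orders*, column *status*, and ENUM type *order_status* are hypothetical; adjust the casts to your schema):

```sql
-- Temporarily switch the ENUM column to character varying before starting replication.
ALTER TABLE orders ALTER COLUMN status TYPE character varying USING status::text;

-- After data replication is done, revert the column to its ENUM type.
ALTER TABLE orders ALTER COLUMN status TYPE order_status USING status::text::order_status;
```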
-## Provisioning an instance of DMS using the CLI
+## Provisioning an instance of DMS using the Azure CLI
1. Install the dms sync extension: * Sign in to Azure by running the following command:
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
event-hubs Event Hubs Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-geo-dr.md
Title: Geo-disaster recovery - Azure Event Hubs| Microsoft Docs description: How to use geographical regions to fail over and perform disaster recovery in Azure Event Hubs Previously updated : 02/10/2021 Last updated : 04/14/2021 # Azure Event Hubs - Geo-disaster recovery
Advantage of this approach is that failover can happen at the application layer
> For guidance on geo-disaster recovery of a virtual network, see [Virtual Network - Business Continuity](../virtual-network/virtual-network-disaster-recovery-guidance.md). ## Next steps-
-* The [sample on GitHub](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/GeoDRClient) walks through a simple workflow that creates a geo-pairing and initiates a failover for a disaster recovery scenario.
-* The [REST API reference](/rest/api/eventhub/) describes APIs for performing the Geo-disaster recovery configuration.
-
-For more information about Event Hubs, visit the following links:
--- Get started with Event Hubs
- - [.NET Core](event-hubs-dotnet-standard-getstarted-send.md)
- - [Java](event-hubs-java-get-started-send.md)
- - [Python](event-hubs-python-get-started-send.md)
- - [JavaScript](event-hubs-node-get-started-send.md)
-* [Event Hubs FAQ](event-hubs-faq.yml)
-* [Sample applications that use Event Hubs](https://github.com/Azure/azure-event-hubs/tree/master/samples)
+Review the following samples or reference documentation.
+- [.NET GeoDR sample](https://github.com/Azure/azure-event-hubs/tree/master/samples/Management/DotNet/GeoDRClient)
+- [Java GeoDR sample](https://github.com/Azure-Samples/eventhub-java-manage-event-hub-geo-disaster-recovery)
+- [.NET - Azure.Messaging.EventHubs samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples)
+- [.NET - Microsoft.Azure.EventHubs samples](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet)
+- [Java - azure-messaging-eventhubs samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/eventhubs/azure-messaging-eventhubs/src/samples/java/com/azure/messaging/eventhubs)
+- [Java - azure-eventhubs samples](https://github.com/Azure/azure-event-hubs/tree/master/samples/Java)
+- [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples)
+- [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/eventhub/event-hubs/samples/javascript)
+- [TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/eventhub/event-hubs/samples/typescript)
+- [REST API reference](/rest/api/eventhub/)
[1]: ./media/event-hubs-geo-dr/geo1.png [2]: ./media/event-hubs-geo-dr/geo2.png
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Atlanta** | [Equinix AT2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at2/) | 1 | n/a | 10G, 100G | Equinix, Megaport | | **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | 10G | Devoli, Kordia, Megaport, REANNZ, Spark NZ, Vocus Group NZ | | **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | 10G | AIS, UIH |
-| **Berlin** | [NTT GDC](https://www.e-shelter.de/en/location/berlin-1-data-center) | 1 | Germany North | 10G | Equinix, NTT Global DataCenters EMEA|
+| **Berlin** | [NTT GDC](https://www.e-shelter.de/en/location/berlin-1-data-center) | 1 | Germany North | 10G | Colt, Equinix, NTT Global DataCenters EMEA|
| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | 10G | Equinix | | **Busan** | [LG CNS](https://www.lgcns.com/En/Service/DataCenter) | 2 | Korea South | n/a | LG CNS | | **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | 10G, 100G | CDC |
The following table shows connectivity locations and the service providers for e
| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GEANT, InterCloud, Interxion, Megaport, Orange, Telia Carrier, T-Systems | | **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | Equinix | | **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Equinix, Megaport, Swisscom |
-| **Hong Kong** | [Equinix HK1](https://www.equinix.com/locations/asia-colocation/hong-kong-colocation/hong-kong-data-center/hk1/) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon |
-| **Hong Kong2** | [MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel |
+| **Hong Kong** | [Equinix HK1](https://www.equinix.com/locations/asia-colocation/hong-kong-colocation/hong-kong-data-center/hk1/) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon |
+| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel |
| **Jakarta** | Telin, Telkom Indonesia | 4 | n/a | 10G | Telin | | **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, Orange, Teraco | | **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom |
The following table shows connectivity locations and the service providers for e
| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | 10G | Cologix, Megaport, Telus | | **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/) | 1 | East US, East US 2 | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, Internet2, InterCloud, Iron Mountain, IX Reach, Level 3 Communications, Megaport, Neutrona Networks, NTT Communications, Orange, PacketFabric, SES, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Washington DC2** | [Coresite Reston](https://www.coresite.com/data-center-locations/northern-virginia-washington-dc) | 1 | East US, East US 2 | 10G, 100G | CenturyLink Cloud Connect, Coresite, Intelsat, Megaport, Viasat, Zayo |
-| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | 10G, 100G | Equinix, Intercloud, Interxion, Megaport, Swisscom |
+| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | 10G, 100G | Colt, Equinix, Intercloud, Interxion, Megaport, Swisscom |
**+** denotes coming soon
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Chunghwa Telecom](https://www.cht.com.tw/en/home/cht/about-cht/products-and-services/International/Cloud-Service)** |Supported |Supported |Taipei | | **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported |Miami | | **[Cologix](https://www.cologix.com/hyperscale/microsoft-azure/)** |Supported |Supported |Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC |
-| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Chicago, Dublin, Frankfurt, London, London2, Milan, Newport, New York, Osaka, Paris, Silicon Valley, Silicon Valley2, Singapore2, Tokyo, Washington DC |
+| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Hong Kong, London, London2, Milan, Newport, New York, Osaka, Paris, Silicon Valley, Silicon Valley2, Singapore2, Tokyo, Washington DC, Zurich |
| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** |Supported |Supported |Chicago, Silicon Valley, Washington DC | | **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported |Chicago, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 | | **[DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange/find-a-cloud-service/detail/microsoft-azure)** | Supported |Supported |Amsterdam2, Dubai2, Frankfurt, Marseille, Mumbai, Munich, New York |
firewall Integrate Lb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/integrate-lb.md
Previously updated : 09/25/2020 Last updated : 04/14/2021
There's no asymmetric routing issue with this scenario. The incoming packets arr
So, you can deploy this scenario similar to the public load balancer scenario, but without the need for the firewall public IP address host route.
->[!NOTE]
->The virtual machines in the backend pool will not have outbound internet connectivity with this configuration. </br> For more information on providing outbound connectivity see: </br> **[Outbound connections in Azure](../load-balancer/load-balancer-outbound-connections.md)**</br> Options for providing connectivity: </br> **[Outbound-only load balancer configuration](../load-balancer/egress-only.md)** </br> [**What is Virtual Network NAT?**](../virtual-network/nat-overview.md)
+The virtual machines in the backend pool can have outbound Internet connectivity through the Azure Firewall. Configure a user-defined route (UDR) on the virtual machines' subnet with the firewall as the next hop, as in the sketch that follows.
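+
+A rough Azure CLI sketch of that route configuration (the resource names and the firewall private IP address 10.0.1.4 are placeholders for your own values):
+
+```azurecli-interactive
+az network route-table create --resource-group MyResourceGroup --name FirewallRouteTable
+
+az network route-table route create --resource-group MyResourceGroup \
+  --route-table-name FirewallRouteTable --name DefaultViaFirewall \
+  --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4
+
+az network vnet subnet update --resource-group MyResourceGroup \
+  --vnet-name MyVirtualNetwork --name BackendSubnet --route-table FirewallRouteTable
+```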
## Additional security
firewall Snat Private Range https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/snat-private-range.md
Previously updated : 01/11/2021 Last updated : 04/14/2021
If your organization uses a public IP address range for private networks, Azure
> [!IMPORTANT] > If you want to specify your own private IP address ranges, and keep the default IANA RFC 1918 address ranges, make sure your custom list still includes the IANA RFC 1918 range.
+You can configure the SNAT private IP address ranges by using the following methods. Use the method appropriate for your configuration. Firewalls associated with a firewall policy must specify the range in the policy and not use `AdditionalProperties`.
++
+|Method |Using classic rules |Using firewall policy |
+||||
+|Azure portal | [supported](#classic-rules-3)| [supported](#firewall-policy-1)|
+|Azure PowerShell |[configure `PrivateRange`](#classic-rules)|currently unsupported|
+|Azure CLI|[configure `--private-ranges`](#classic-rules-1)|currently unsupported|
+|ARM template |[configure `AdditionalProperties` in firewall property](#classic-rules-2)|[configure `snat/privateRanges` in firewall policy](#firewall-policy)|
++ ## Configure SNAT private IP address ranges - Azure PowerShell
+### Classic rules
You can use Azure PowerShell to specify private IP address ranges for the firewall.
-### New firewall
+> [!NOTE]
+> The firewall `PrivateRange` property is ignored for firewalls associated with a Firewall Policy. You must use the `SNAT` property in `firewallPolicies` as described in [Configure SNAT private IP address ranges - ARM template](#firewall-policy).
+
+#### New firewall
-For a new firewall, the Azure PowerShell cmdlet is:
+For a new firewall using classic rules, the Azure PowerShell cmdlet is:
```azurepowershell $azFw = @{
New-AzFirewall @azFw
For more information, see [New-AzFirewall](/powershell/module/az.network/new-azfirewall).
-### Existing firewall
+#### Existing firewall
-To configure an existing firewall, use the following Azure PowerShell cmdlets:
+To configure an existing firewall using classic rules, use the following Azure PowerShell cmdlets:
```azurepowershell $azfw = Get-AzFirewall -Name '<fw-name>' -ResourceGroupName '<resourcegroup-name>'
Set-AzFirewall -AzureFirewall $azfw
``` ## Configure SNAT private IP address ranges - Azure CLI
+### Classic rules
-You can use Azure CLI to specify private IP address ranges for the firewall.
+You can use Azure CLI to specify private IP address ranges for the firewall using classic rules.
-### New firewall
+#### New firewall
-For a new firewall, the Azure CLI command is:
+For a new firewall using classic rules, the Azure CLI command is:
```azurecli-interactive az network firewall create \
az network firewall create \
> Deploying Azure Firewall using Azure CLI command `az network firewall create` requires additional configuration steps to create public IP addresses and IP configuration. See [Deploy and configure Azure Firewall using Azure CLI](deploy-cli.md) for a full deployment guide. > [!NOTE]
-> IANAPrivateRanges is expanded to the current defaults on Azure Firewall while the other ranges are added to it. To keep the IANAPrivateRanges default in your private range specification, it must remain in your `PrivateRange` specification as shown in the following examples.
+> IANAPrivateRanges is expanded to the current defaults on Azure Firewall while the other ranges are added to it. To keep the IANAPrivateRanges default in your private range specification, it must remain in your `private-ranges` specification as shown in the following examples.
-### Existing firewall
+#### Existing firewall
-To configure an existing firewall, the Azure CLI command is:
+To configure an existing firewall using classic rules, the Azure CLI command is:
```azurecli-interactive az network firewall update \
az network firewall update \
--private-ranges 192.168.1.0/24 192.168.1.10 IANAPrivateRanges ```
-## Configure SNAT private IP address ranges - ARM Template
+## Configure SNAT private IP address ranges - ARM template
+### Classic rules
To configure SNAT during ARM Template deployment, you can add the following to the `additionalProperties` property:
To configure SNAT during ARM Template deployment, you can add the following to t
"Network.SNAT.PrivateRanges": "IANAPrivateRanges , IPRange1, IPRange2" }, ```
+### Firewall policy
+
+Azure Firewalls associated with a firewall policy have supported SNAT private ranges since the 2020-11-01 API version. Currently, you can use a template to update the SNAT private range on the Firewall Policy. The following sample configures the firewall to **always** SNAT network traffic:
+
+```json
+{
+    "type": "Microsoft.Network/firewallPolicies",
+    "apiVersion": "2020-11-01",
+    "name": "[parameters('firewallPolicies_DatabasePolicy_name')]",
+    "location": "eastus",
+    "properties": {
+        "sku": {
+            "tier": "Standard"
+        },
+        "snat": {
+            "privateRanges": ["255.255.255.255/32"]
+        }
+    }
+}
+```
## Configure SNAT private IP address ranges - Azure portal
+### Classic rules
You can use the Azure portal to specify private IP address ranges for the firewall.
You can use the Azure portal to specify private IP address ranges for the firewa
1. By default, **IANAPrivateRanges** is configured. 2. Edit the private IP address ranges for your environment and then select **Save**.
+### Firewall policy
+
+1. Select your resource group, and then select your firewall policy.
+2. Select **Private IP ranges (SNAT)** in the **Settings** column.
+
+ By default, **Use the default Azure Firewall Policy SNAT behavior** is selected.
+3. To customize the SNAT configuration, clear the check box, and under **Perform SNAT** select the conditions to perform SNAT for your environment.
+ :::image type="content" source="media/snat-private-range/private-ip-ranges-snat.png" alt-text="Private IP ranges (SNAT)":::
++
+4. Select **Apply**.
+ ## Next steps - Learn about [Azure Firewall forced tunneling](forced-tunneling.md).
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/policy-for-kubernetes.md
Title: Learn Azure Policy for Kubernetes
description: Learn how Azure Policy uses Rego and Open Policy Agent to manage clusters running Kubernetes in Azure or on-premises. Last updated 03/22/2021 + # Understand Azure Policy for Kubernetes clusters
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/31/2021 Last updated : 04/14/2021
The following mappings are to the **Azure Security Benchmark** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
-Then, find and select the **Azure Security Benchmark v2** Regulatory Compliance built-in
+Then, find and select the **Azure Security Benchmark** Regulatory Compliance built-in
initiative definition. > [!IMPORTANT]
initiative definition.
> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your > overall compliance status. The associations between compliance domains, controls, and Azure Policy > definitions for this compliance standard may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/asb_v2.json).
+> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Security%20Center/AzureSecurityCenter.json).
## Network Security
initiative definition.
||||| |[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | |[All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security, isolation and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway, can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) | |[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) |
initiative definition.
|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | |[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
+|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
|[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
initiative definition.
||||| |[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | |[All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) | |[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
initiative definition.
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | |[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
-|[RDP access from the Internet should be blocked](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe372f825-a257-4fb8-9175-797a8a8627d6) |This policy audits any network security rule that allows RDP access from Internet |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_RDPAccess_Audit.json) |
-|[SSH access from the Internet should be blocked](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c89a2e5-7285-40fe-afe0-ae8654b92fab) |This policy audits any network security rule that allows SSH access from Internet |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_SSHAccess_Audit.json) |
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | |[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | |[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[MFA should be enabled on accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
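The SSH-key policy above flags password-based SSH sign-in to Linux VMs. As a minimal, illustrative sketch (not the policy's remediation mechanism), a key pair can be generated client-side with the `cryptography` Python package (3.1+ assumed); in practice `ssh-keygen` or the Azure CLI/portal key-generation flows are the usual routes, and the variable names below are hypothetical:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 4096-bit RSA key pair for SSH authentication.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

# OpenSSH-format public key; this is the value supplied to the VM
# (for example, in authorized_keys) instead of a password.
public_openssh = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.OpenSSH,
    format=serialization.PublicFormat.OpenSSH,
)

# PEM/PKCS#8 private key; keep it secret and prefer an encrypted or
# hardware-backed store in real deployments.
private_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

print(public_openssh.decode())
```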
+### Eliminate unintended credential exposure
+
+**ID**: Azure Security Benchmark IM-7
+**Ownership**: Customer
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, deny, disabled |[2.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
+|[Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) |Cryptographic keys should have a defined expiration date and not be permanent. Keys that are valid forever provide a potential attacker with more time to compromise the key. It is a recommended security practice to set expiration dates on cryptographic keys. |Audit, Deny, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Keys_ExpirationSet.json) |
+|[Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98728c90-32c7-4049-8429-847dc0f4fe37) |Secrets should have a defined expiration date and not be permanent. Secrets that are valid forever provide a potential attacker with more time to compromise them. It is a recommended security practice to set expiration dates on secrets. |Audit, Deny, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Secrets_ExpirationSet.json) |
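The three Key Vault policies above audit certificates, keys, and secrets that never expire. As a hedged sketch of creating a secret that is compliant from the start, assuming the `azure-keyvault-secrets` and `azure-identity` packages and a hypothetical vault URL and secret name:

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical vault; replace with your own vault URL.
client = SecretClient(
    vault_url="https://contoso-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Setting expires_on gives the secret a defined lifetime, which is what the
# "Key Vault secrets should have an expiration date" policy checks for.
client.set_secret(
    "example-connection-string",
    "<secret-value>",
    expires_on=datetime.now(timezone.utc) + timedelta(days=90),
)
```

The `azure-keyvault-keys` and `azure-keyvault-certificates` clients expose analogous expiration and validity settings for the key and certificate policies.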
+
## Privileged Access

### Protect and limit highly privileged users
initiative definition.
|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) | |[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+### Automate entitlement management
+
+**ID**: Azure Security Benchmark PA-5
+**Ownership**: Customer
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, deny, disabled |[2.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
+|[Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) |Cryptographic keys should have a defined expiration date and not be permanent. Keys that are valid forever provide a potential attacker with more time to compromise the key. It is a recommended security practice to set expiration dates on cryptographic keys. |Audit, Deny, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Keys_ExpirationSet.json) |
+|[Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98728c90-32c7-4049-8429-847dc0f4fe37) |Secrets should have a defined expiration date and not be permanent. Secrets that are valid forever provide a potential attacker with more time to compromise them. It is a recommended security practice to set expiration dates on secrets. |Audit, Deny, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Secrets_ExpirationSet.json) |
+
### Follow just enough administration (least privilege principle)

**ID**: Azure Security Benchmark PA-7
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Audit usage of custom RBAC rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Custom subscription owner roles should not exist](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10ee2ea2-fb4d-45b8-a7e9-a2e770044cd9) |This policy ensures that no custom subscription owner roles exist. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/CustomSubscription_OwnerRole_Audit.json) |
|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |

## Data Protection
initiative definition.
|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) | |[Bring your own key data protection should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) | |[Bring your own key data protection should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
+|[Cognitive Services accounts should enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2bdd0062-9d75-436e-89df-487dd8e4b3c7) |This policy audits any Cognitive Services account not using data encryption. For each Cognitive Services account with storage, data encryption should be enabled with either a customer-managed or Microsoft-managed key. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_Encryption_Audit.json) |
|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](https://go.microsoft.com/fwlink/?linkid=2121321). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) | |[Cognitive Services accounts should use customer owned storage or enable data encryption.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11566b39-f7f7-4b82-ab06-68d8700eb0a4) |This policy audits any Cognitive Services account not using customer owned storage nor data encryption. For each Cognitive Services account with storage, use either customer owned storage or enable data encryption. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_BYOX_Audit.json) | |[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) |
## Logging and Threat Detection
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Advanced data security should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Arc enabled Kubernetes clusters should have Azure Defender's extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Azure Defender's extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all control plane (master) nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/security-center/defender-for-kubernetes-azure-arc](https://docs.microsoft.com/azure/security-center/defender-for-kubernetes-azure-arc). |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Audit_Azure_Defender_Kubernetes_Arc_Extension.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | |[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | |[Azure Defender for container registries should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc25d9a16-bc35-4e15-a7e5-9db606bf9ed4) |Azure Defender for container registries provides vulnerability scanning of any images pulled within the last 30 days, pushed to your registry, or imported, and exposes detailed findings per image. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainerRegistry_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | |[Azure Defender for Kubernetes should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F523b5cd1-3e23-492f-a539-13118b6d1e3a) |Azure Defender for Kubernetes provides real-time threat protection for containerized environments and generates alerts for suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKubernetesService_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | |[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | |[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Advanced data security should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Arc enabled Kubernetes clusters should have Azure Defender's extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Azure Defender's extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all control plane (master) nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/security-center/defender-for-kubernetes-azure-arc](https://docs.microsoft.com/azure/security-center/defender-for-kubernetes-azure-arc). |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Audit_Azure_Defender_Kubernetes_Arc_Extension.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | |[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | |[Azure Defender for container registries should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc25d9a16-bc35-4e15-a7e5-9db606bf9ed4) |Azure Defender for container registries provides vulnerability scanning of any images pulled within the last 30 days, pushed to your registry, or imported, and exposes detailed findings per image. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainerRegistry_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | |[Azure Defender for Kubernetes should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F523b5cd1-3e23-492f-a539-13118b6d1e3a) |Azure Defender for Kubernetes provides real-time threat protection for containerized environments and generates alerts for suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKubernetesService_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | |[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | |[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
+|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
|[Log Analytics agent health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd62cfe2b-3ab0-4d41-980d-76803b58ca65) |Security Center uses the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA). To make sure your virtual machines are successfully monitored, you need to make sure the agent is installed on the virtual machines and properly collects security events to the configured workspace. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ResolveLaHealthIssues.json) | |[Log Analytics agent should be installed on your Linux Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F842c54e8-c2f9-4d79-ae8d-38d8b8019373) |This policy audits Linux Azure Arc machines if the Log Analytics agent is not installed. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Linux_LogAnalytics_Audit.json) | |[Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4fe33eb-e377-4efb-ab31-0784311bc499) |This policy audits any Windows/Linux virtual machines (VMs) if the Log Analytics agent is not installed which Security Center uses to monitor for security vulnerabilities and threats |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVm.json) | |[Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3a6ea0c-e018-4933-9ef0-5aaa1501449b) |Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVmss.json) | |[Log Analytics agent should be installed on your Windows Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd69b1763-b96d-40b8-a2d9-ca31e9fd0d3e) |This policy audits Windows Azure Arc machines if the Log Analytics agent is not installed. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Windows_LogAnalytics_Audit.json) |
+|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) |
+
+### Configure log storage retention
+
+**ID**: Azure Security Benchmark LT-6
+**Ownership**: Customer
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL Server's auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
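For the retention policy above, one way to bring a server into line is to set `retention_days` to 90 or more on its blob auditing settings. A minimal sketch, assuming a recent track-2 `azure-mgmt-sql` (operation names can vary across SDK versions) and hypothetical resource names:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import ServerBlobAuditingPolicy

# Hypothetical subscription, resource group, server, and storage values.
client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.server_blob_auditing_policies.begin_create_or_update(
    resource_group_name="rg-sql",
    server_name="contoso-sql",
    parameters=ServerBlobAuditingPolicy(
        state="Enabled",
        storage_endpoint="https://contosoauditlogs.blob.core.windows.net",
        storage_account_access_key="<storage-account-key>",
        retention_days=90,  # at least 90 days, per the recommendation above
    ),
)
poller.result()
```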
## Incident Response
initiative definition.
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | |[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | |[Azure Defender for container registries should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc25d9a16-bc35-4e15-a7e5-9db606bf9ed4) |Azure Defender for container registries provides vulnerability scanning of any images pulled within the last 30 days, pushed to your registry, or imported, and exposes detailed findings per image. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainerRegistry_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | |[Azure Defender for Kubernetes should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F523b5cd1-3e23-492f-a539-13118b6d1e3a) |Azure Defender for Kubernetes provides real-time threat protection for containerized environments and generates alerts for suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKubernetesService_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | |[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | |[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
initiative definition.
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | |[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | |[Azure Defender for container registries should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc25d9a16-bc35-4e15-a7e5-9db606bf9ed4) |Azure Defender for container registries provides vulnerability scanning of any images pulled within the last 30 days, pushed to your registry, or imported, and exposes detailed findings per image. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainerRegistry_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | |[Azure Defender for Kubernetes should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F523b5cd1-3e23-492f-a539-13118b6d1e3a) |Azure Defender for Kubernetes provides real-time threat protection for containerized environments and generates alerts for suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKubernetesService_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | |[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) | |[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
initiative definition.
|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ServiceAllowedPorts.json) | |[Kubernetes cluster should not allow privileged containers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, deny, disabled |[6.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilege.json) | |[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) |
+|[Operating system version should be the most current version for your cloud service roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5a913c68-0590-402c-a531-e57e19379da3) |Keeping the operating system (OS) on the most recent supported version for your cloud service roles enhances the system's security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UpdateOsVersion.json) |
|[Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e) |Remote debugging requires inbound ports to be opened on API apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_ApiApp_Audit.json) | |[Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | |[Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on a web application. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
+|[Linux machines should meet requirements for the Azure security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they do not meet the requirements of the Azure security baseline for Linux. |AuditIfNotExists, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | |[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Windows machines should meet requirements of the Azure Security Center baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure Security Center baseline. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) |
### Perform software vulnerability assessments
initiative definition.
|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[Vulnerabilities in Azure Container Registry images should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities on each pushed container image and exposes detailed findings for each image (powered by Qualys). Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | |[Vulnerabilities on your SQL databases should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor Vulnerability Assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[Vulnerabilities on your SQL servers on machine should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL Vulnerability Assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | |[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
initiative definition.
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) | |[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) | |[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
-|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
### Encrypt backup data
initiative definition.
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) | |[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) | |[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
-|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
### Mitigate risk of lost keys
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/31/2021 Last updated : 04/14/2021
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 03/31/2021 Last updated : 04/14/2021
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 03/31/2021 Last updated : 04/14/2021
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/31/2021 Last updated : 04/14/2021
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/31/2021 Last updated : 04/14/2021
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/31/2021 Last updated : 04/14/2021
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/31/2021 Last updated : 04/14/2021
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/31/2021 Last updated : 04/14/2021
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/31/2021 Last updated : 04/14/2021
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/31/2021 Last updated : 04/14/2021
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 R4 description: Details of the NIST SP 800-53 R4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/31/2021 Last updated : 04/14/2021
hdinsight Hdinsight Hadoop Provision Linux Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md Binary files differ
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes.md
HDInsight is gradually migrating to Azure virtual machine scale sets. Network in
The following changes will happen in upcoming releases. ### OS version upgrade
-HDInsight will be upgrading OS version from Ubuntu 16.04 to 18.04. The upgrade will complete before April 2021.
+HDInsight clusters are currently running on Ubuntu 16.04 LTS. As referenced in [Ubuntu's release cycle](https://ubuntu.com/about/release-cycle), the Ubuntu 16.04 kernel will reach End of Life (EOL) in April 2021. We'll start rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 in May 2021. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
+
+HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](https://docs.microsoft.com/azure/hdinsight/hdinsight-component-versioning#supported-hdinsight-versions). Ubuntu 18.04 will not be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
+
+You need to drop and recreate your clusters if you'd like to move existing clusters to Ubuntu 18.04. Please plan to create or recreate your cluster after Ubuntu 18.04 support becomes available. We'll send another notification after the new image becomes available in all regions.
+
+It's highly recommended that you test your script actions and custom applications deployed on edge nodes on an Ubuntu 18.04 virtual machine (VM) in advance. You can [create a simple Ubuntu Linux VM on 18.04-LTS](https://azure.microsoft.com/resources/templates/101-vm-simple-linux/), then create and use a [secure shell (SSH) key pair](https://docs.microsoft.com/azure/virtual-machines/linux/mac-create-ssh-keys#ssh-into-your-vm) on your VM to run and test your script actions and custom applications deployed on edge nodes.
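As a rough sketch of that pre-validation step, the Azure CLI commands below create a throwaway Ubuntu 18.04 LTS VM with an SSH key pair; the resource group, VM name, admin user, and region are placeholders, and the image URN assumes the standard Canonical Ubuntu Server 18.04-LTS marketplace offer.

```azurecli
# Placeholder names; adjust the region, size, and credentials to match your environment.
az group create --name hdi-ubuntu-test-rg --location eastus

az vm create \
  --resource-group hdi-ubuntu-test-rg \
  --name ubuntu1804-test \
  --image Canonical:UbuntuServer:18.04-LTS:latest \
  --admin-username azureuser \
  --generate-ssh-keys

# SSH to the public IP address returned by the create command, then run your script actions there.
ssh azureuser@<public-ip-address>
```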
### Basic support for HDInsight 3.6 starting July 1, 2021 Starting July 1, 2021, Microsoft will offer [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You'll automatically be enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
hdinsight Hdinsight Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-upload-data.md Binary files differ
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
iot-edge How To Access Host Storage From Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-access-host-storage-from-module.md
You can find more details about create options from [docker docs](https://docs.d
## Encrypted data in module storage
-When modules invoke the IoT Edge daemon's workload API to encrypt data, the encryption key is derived using the module ID and module's generation ID. A generation ID is used to protect secrets if a module is removed from the deployment and then another module with the same module ID is later deployed to the same device. You can view a module's generation id using the Azure CLI command [az iot hub module-identity show](/cli/azure/ext/azure-iot/iot/hub/module-identity).
+When modules invoke the IoT Edge daemon's workload API to encrypt data, the encryption key is derived using the module ID and module's generation ID. A generation ID is used to protect secrets if a module is removed from the deployment and then another module with the same module ID is later deployed to the same device. You can view a module's generation id using the Azure CLI command [az iot hub module-identity show](/cli/azure/iot/hub/module-identity).
If you want to share files between modules across generations, they must not contain any secrets or they will fail to be decrypted.
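For reference, a minimal sketch of retrieving that generation ID with the Azure CLI; the hub, device, and module names are placeholders.

```azurecli
# Returns only the generationId property of the module identity.
az iot hub module-identity show \
  --hub-name {hub name} \
  --device-id {device id} \
  --module-id {module id} \
  --query generationId
```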
iot-edge How To Authenticate Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-authenticate-downstream-device.md
When you create the new device identity, provide the following information:
> >You can configure the IoT Edge hub to go back to the previous behavior by setting the environment variable **AuthenticationMode** to the value **CloudAndScope**.
-You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same operation. The following example uses the [az iot hub device-identity](/cli/azure/ext/azure-iot/iot/hub/device-identity) command to create a new IoT device with symmetric key authentication and assign a parent device:
+You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same operation. The following example uses the [az iot hub device-identity](/cli/azure/iot/hub/device-identity) command to create a new IoT device with symmetric key authentication and assign a parent device:
```azurecli az iot hub device-identity create -n {iothub name} -d {new device ID} --pd {existing gateway device ID}
For X.509 self-signed authentication, sometimes referred to as thumbprint authen
* Java: [SendEventX509.java](https://github.com/Azure/azure-iot-sdk-java/tree/master/device/iot-device-samples/send-event-x509) * Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/async-hub-scenarios/send_message_x509.py)
-You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/ext/azure-iot/iot/hub/device-identity) command to create a new IoT device with X.509 self-signed authentication and assigns a parent device:
+You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/iot/hub/device-identity) command to create a new IoT device with X.509 self-signed authentication and assigns a parent device:
```azurecli az iot hub device-identity create -n {iothub name} -d {device ID} --pd {gateway device ID} --am x509_thumbprint --ptp {primary thumbprint} --stp {secondary thumbprint}
This section is based on the instructions detailed in the IoT Hub article [Set u
* Java: [SendEventX509.java](https://github.com/Azure/azure-iot-sdk-java/tree/master/device/iot-device-samples/send-event-x509) * Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/async-hub-scenarios/send_message_x509.py)
-You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/ext/azure-iot/iot/hub/device-identity) command to create a new IoT device with X.509 CA signed authentication and assigns a parent device:
+You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/iot/hub/device-identity) command to create a new IoT device with X.509 CA signed authentication and assigns a parent device:
```azurecli az iot hub device-identity create -n {iothub name} -d {device ID} --pd {gateway device ID} --am x509_ca
iot-edge How To Auto Provision Simulated Device Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-simulated-device-linux.md
Retrieve the provisioning information from your virtual machine, and use that to
When you create an enrollment in DPS, you have the opportunity to declare an **Initial Device Twin State**. In the device twin, you can set tags to group devices by any metric you need in your solution, like region, environment, location, or device type. These tags are used to create [automatic deployments](how-to-deploy-at-scale.md). > [!TIP]
-> In the Azure CLI, you can create an [enrollment](/cli/azure/ext/azure-iot/iot/dps/enrollment) and use the **edge-enabled** flag to specify that a device is an IoT Edge device.
+> In the Azure CLI, you can create an [enrollment](/cli/azure/iot/dps/enrollment) and use the **edge-enabled** flag to specify that a device is an IoT Edge device.
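Following up on the preceding tip, a hedged sketch of that CLI path; the DPS name, resource group, enrollment ID, and endorsement key are placeholders, and the example assumes a TPM-based individual enrollment.

```azurecli
# Creates an individual enrollment that is flagged as an IoT Edge device.
az iot dps enrollment create \
  --dps-name {dps name} \
  --resource-group {resource group} \
  --enrollment-id {enrollment id} \
  --attestation-type tpm \
  --endorsement-key {endorsement key} \
  --edge-enabled true
```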
1. In the [Azure portal](https://portal.azure.com), navigate to your instance of IoT Hub Device Provisioning Service.
iot-edge How To Auto Provision Simulated Device Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-simulated-device-windows.md
Choose the SDK language that you want to use to create the simulated device, and
When you create the individual enrollment, select **True** to declare that the simulated TPM device on your Windows development machine is an **IoT Edge device**. > [!TIP]
-> In the Azure CLI, you can create an [enrollment](/cli/azure/ext/azure-iot/iot/dps/enrollment) or an [enrollment group](/cli/azure/ext/azure-iot/iot/dps/enrollment-group) and use the **edge-enabled** flag to specify that a device, or group of devices, is an IoT Edge device.
+> In the Azure CLI, you can create an [enrollment](/cli/azure/iot/dps/enrollment) or an [enrollment group](/cli/azure/iot/dps/enrollment-group) and use the **edge-enabled** flag to specify that a device, or group of devices, is an IoT Edge device.
Simulated device and individual enrollment guides:
iot-edge How To Auto Provision Symmetric Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-symmetric-keys.md
When you create an enrollment in DPS, you have the opportunity to declare an **I
1. Select **True** to declare that the enrollment is for an IoT Edge device. For a group enrollment, all devices must be IoT Edge devices or none of them can be. > [!TIP]
- > In the Azure CLI, you can create an [enrollment](/cli/azure/ext/azure-iot/iot/dps/enrollment) or an [enrollment group](/cli/azure/ext/azure-iot/iot/dps/enrollment-group) and use the **edge-enabled** flag to specify that a device, or group of devices, is an IoT Edge device.
+ > In the Azure CLI, you can create an [enrollment](/cli/azure/iot/dps/enrollment) or an [enrollment group](/cli/azure/iot/dps/enrollment-group) and use the **edge-enabled** flag to specify that a device, or group of devices, is an IoT Edge device.
1. Accept the default value from the Device Provisioning Service's allocation policy for **how you want to assign devices to hubs** or choose a different value that is specific to this enrollment.
iot-edge How To Auto Provision X509 Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-x509-certs.md
When you create an enrollment in DPS, you have the opportunity to declare an **I
For more information about enrollments in the Device Provisioning Service, see [How to manage device enrollments](../iot-dps/how-to-manage-enrollments.md). > [!TIP]
- > In the Azure CLI, you can create an [enrollment](/cli/azure/ext/azure-iot/iot/dps/enrollment) or an [enrollment group](/cli/azure/ext/azure-iot/iot/dps/enrollment-group) and use the **edge-enabled** flag to specify that a device, or group of devices, is an IoT Edge device.
+ > In the Azure CLI, you can create an [enrollment](/cli/azure/iot/dps/enrollment) or an [enrollment group](/cli/azure/iot/dps/enrollment-group) and use the **edge-enabled** flag to specify that a device, or group of devices, is an IoT Edge device.
1. In the [Azure portal](https://portal.azure.com), navigate to your instance of IoT Hub Device Provisioning Service.
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
You can also create or manage parent/child relationships for existing devices.
# [Azure CLI](#tab/azure-cli)
-The [azure-iot](/cli/azure/ext/azure-iot) extension for the Azure CLI provides commands to manage your IoT resources. You can manage the parent/child relationship of IoT and IoT Edge devices when you create new device identities or by editing existing devices.
+The [azure-iot](/cli/azure/iot) extension for the Azure CLI provides commands to manage your IoT resources. You can manage the parent/child relationship of IoT and IoT Edge devices when you create new device identities or by editing existing devices.
-The [az iot hub device-identity](/cli/azure/ext/azure-iot/iot/hub/device-identity) set of commands allow you to manage the parent/child relationships for a given device.
+The [az iot hub device-identity](/cli/azure/iot/hub/device-identity) set of commands allow you to manage the parent/child relationships for a given device.
The `create` command includes parameters for adding children devices and setting a parent device at the time of device creation.
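For example, a minimal sketch of creating a downstream IoT Edge device with its parent assigned at creation time; the device IDs are placeholders, and the parent gateway device is assumed to exist already.

```azurecli
# Creates an IoT Edge device identity and sets an existing gateway as its parent.
az iot hub device-identity create \
  --hub-name {hub name} \
  --device-id {downstream device id} \
  --edge-enabled \
  --pd {parent gateway device id}
```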
iot-edge How To Continuous Integration Continuous Deployment Classic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-continuous-integration-continuous-deployment-classic.md
This pipeline is now configured to run automatically when you push new code to y
>[!NOTE] >If you wish to use **layered deployments** in your pipeline, layered deployments are not yet supported in Azure IoT Edge tasks in Azure DevOps. >
->However, you can use an [Azure CLI task in Azure DevOps](/azure/devops/pipelines/tasks/deploy/azure-cli) to create your deployment as a layered deployment. For the **Inline Script** value, you can use the [az iot edge deployment create command](/cli/azure/ext/azure-iot/iot/edge/deployment):
+>However, you can use an [Azure CLI task in Azure DevOps](/azure/devops/pipelines/tasks/deploy/azure-cli) to create your deployment as a layered deployment. For the **Inline Script** value, you can use the [az iot edge deployment create command](/cli/azure/iot/edge/deployment):
> > ```azurecli-interactive > az iot edge deployment create -d {deployment_name} -n {hub_name} --content modules_content.json --layered true
iot-edge How To Deploy Cli At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-deploy-cli-at-scale.md
For more information about device twins and tags, see [Understand and use device
You deploy modules to your target devices by creating a deployment that consists of the deployment manifest as well as other parameters.
-Use the [az iot edge deployment create](/cli/azure/ext/azure-iot/iot/edge/deployment#ext-azure-iot-az-iot-edge-deployment-create) command to create a deployment:
+Use the [az iot edge deployment create](/cli/azure/iot/edge/deployment) command to create a deployment:
```azurecli az iot edge deployment create --deployment-id [deployment id] --hub-name [hub name] --content [file path] --labels "[labels]" --target-condition "[target query]" --priority [int]
If you update the target condition, the following updates occur:
You cannot update the content of a deployment, which includes the modules and routes defined in the deployment manifest. If you want to update the content of a deployment, you do so by creating a new deployment that targets the same devices with a higher priority. You can modify certain properties of an existing module, including the target condition, labels, metrics, and priority.
-Use the [az iot edge deployment update](/cli/azure/ext/azure-iot/iot/edge/deployment#ext-azure-iot-az-iot-edge-deployment-update) command to update a deployment:
+Use the [az iot edge deployment update](/cli/azure/iot/edge/deployment) command to update a deployment:
```azurecli az iot edge deployment update --deployment-id [deployment id] --hub-name [hub name] --set [property1.property2='value']
The deployment update command takes the following parameters:
When you delete a deployment, any devices take on their next highest priority deployment. If your devices don't meet the target condition of any other deployment, then the modules are not removed when the deployment is deleted.
-Use the [az iot edge deployment delete](/cli/azure/ext/azure-iot/iot/edge/deployment#ext-azure-iot-az-iot-edge-deployment-delete) command to delete a deployment:
+Use the [az iot edge deployment delete](/cli/azure/iot/edge/deployment) command to delete a deployment:
```azurecli az iot edge deployment delete --deployment-id [deployment id] --hub-name [hub name]
iot-edge How To Monitor Iot Edge Deployments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-monitor-iot-edge-deployments.md
To make changes to your deployment, see [Modify a deployment](how-to-deploy-at-s
## Monitor a deployment with Azure CLI
-Use the [az IoT Edge deployment show](/cli/azure/ext/azure-iot/iot/edge/deployment#ext-azure-iot-az-iot-edge-deployment-show) command to display the details of a single deployment:
+Use the [az iot edge deployment show](/cli/azure/iot/edge/deployment) command to display the details of a single deployment:
```azurecli az iot edge deployment show --deployment-id [deployment id] --hub-name [hub name]
Inspect the deployment in the command window. The **metrics** property lists a
* **reportedSuccessfulCount** - A device metric that specifies the number of IoT Edge devices in the deployment reporting success from the IoT Edge client runtime. * **reportedFailedCount** - A device metric that specifies the number of IoT Edge devices in the deployment reporting failure from the IoT Edge client runtime.
-You can show a list of device IDs or objects for each of the metrics with the [az IoT Edge deployment show-metric](/cli/azure/ext/azure-iot/iot/edge/deployment#ext-azure-iot-az-iot-edge-deployment-show-metric) command:
+You can show a list of device IDs or objects for each of the metrics with the [az iot edge deployment show-metric](/cli/azure/iot/edge/deployment) command:
```azurecli az iot edge deployment show-metric --deployment-id [deployment id] --metric-id [metric id] --hub-name [hub name]
iot-edge How To Monitor Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-monitor-module-twins.md
If you make changes, select **Update Module Twin** above the code in the editor
To see if IoT Edge is running, use the [az iot hub invoke-module-method](how-to-edgeagent-direct-method.md#ping) to ping the IoT Edge agent.
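A hedged example of that ping check; the hub and device names are placeholders, and `$edgeAgent` is the built-in IoT Edge agent module.

```azurecli
# Invokes the ping direct method on the IoT Edge agent; a successful response indicates the runtime is up.
az iot hub invoke-module-method \
  --hub-name {hub name} \
  --device-id {device id} \
  --module-id '$edgeAgent' \
  --method-name ping
```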
-The [az iot hub module-twin](/cli/azure/ext/azure-iot/iot/hub/module-twin) structure provides these commands:
+The [az iot hub module-twin](/cli/azure/iot/hub/module-twin) structure provides these commands:
* **az iot hub module-twin show** - Show a module twin definition. * **az iot hub module-twin update** - Update a module twin definition.
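For instance, a minimal sketch of inspecting the IoT Edge agent's module twin; the hub and device names are placeholders.

```azurecli
# Shows the full module twin, including desired and reported properties, for the $edgeAgent module.
az iot hub module-twin show \
  --hub-name {hub name} \
  --device-id {device id} \
  --module-id '$edgeAgent'
```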
iot-edge How To Register Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-register-device.md
In the output screen, you see the result of the command. The device info is prin
# [Azure CLI](#tab/azure-cli)
-Use the [az iot hub device-identity create](/cli/azure/ext/azure-iot/iot/hub/device-identity#ext-azure-iot-az-iot-hub-device-identity-create) command to create a new device identity in your IoT hub. For example:
+Use the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity) command to create a new device identity in your IoT hub. For example:
```azurecli az iot hub device-identity create --device-id [device id] --hub-name [hub name] --edge-enabled
Currently, the Azure IoT extension for Visual Studio Code doesn't support device
# [Azure CLI](#tab/azure-cli)
-Use the [az iot hub device-identity create](/cli/azure/ext/azure-iot/iot/hub/device-identity#ext-azure-iot-az-iot-hub-device-identity-create) command to create a new device identity in your IoT hub. For example:
+Use the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity) command to create a new device identity in your IoT hub. For example:
```azurecli az iot hub device-identity create --device-id [device id] --hub-name [hub name] --edge-enabled --auth-method x509_thumbprint --primary-thumbprint [SHA thumbprint] --secondary-thumbprint [SHA thumbprint]
You can also select **Get Device Info** from the right-click menu to see all the
### View IoT Edge devices with the Azure CLI
-Use the [az iot hub device-identity list](/cli/azure/ext/azure-iot/iot/hub/device-identity#ext-azure-iot-az-iot-hub-device-identity-list) command to view all devices in your IoT hub. For example:
+Use the [az iot hub device-identity list](/cli/azure/iot/hub/device-identity) command to view all devices in your IoT hub. For example:
```azurecli az iot hub device-identity list --hub-name [hub name]
Any device that is registered as an IoT Edge device will have the property **cap
### Retrieve the connection string with the Azure CLI
-When you're ready to set up your device, you need the connection string that links your physical device with its identity in the IoT hub. Use the [az iot hub device-identity connection-string show](/cli/azure/ext/azure-iot/iot/hub/device-identity/connection-string#ext_azure_iot_az_iot_hub_device_identity_connection_string_show) command to return the connection string for a single device:
+When you're ready to set up your device, you need the connection string that links your physical device with its identity in the IoT hub. Use the [az iot hub device-identity connection-string show](/cli/azure/iot/hub/device-identity/connection-string) command to return the connection string for a single device:
```azurecli az iot hub device-identity connection-string show --device-id [device id] --hub-name [hub name]
iot-edge Nested Virtualization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/nested-virtualization.md
There are two forms of nested virtualization compatible with Azure IoT Edge for
> [!NOTE] >
-> Ensure to enable one [netowrking option](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization#networking-options) for nested virtualization. Failing to do so will result in EFLOW installation errors.
+> Ensure to enable one [networking option](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization#networking-options) for nested virtualization. Failing to do so will result in EFLOW installation errors.
## Deployment on local VM This is the baseline approach for any Windows VM that hosts Azure IoT Edge for Linux on Windows. For this case, nested virtualization needs to be enabled before starting the deployment. Read [Run Hyper-V in a Virtual Machine with Nested Virtualization](https://docs.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization) for more information on how to configure this scenario.
iot-edge Offline Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/offline-capabilities.md
You can declare the parent-child relationship when creating a new device. Or for
#### Option 2: Use the `az` command-line tool
-Using the [Azure command-line interface](/cli/azure/) with [IoT extension](https://github.com/azure/azure-iot-cli-extension) (v0.7.0 or newer), you can manage parent child relationships with the [device-identity](/cli/azure/ext/azure-iot/iot/hub/device-identity) subcommands. The example below uses a query to assign all non-IoT Edge devices in the hub to be child devices of an IoT Edge device.
+Using the [Azure command-line interface](/cli/azure/) with the [IoT extension](https://github.com/azure/azure-iot-cli-extension) (v0.7.0 or newer), you can manage parent/child relationships with the [device-identity](/cli/azure/iot/hub/device-identity/) subcommands. The example below uses a query to assign all non-IoT Edge devices in the hub to be child devices of an IoT Edge device.
```azurecli # Set IoT Edge parent device
iot-edge Tutorial Store Data Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-store-data-sql-server.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
- * ARM devices, like Raspberry Pis, cannot run SQL Server. If you want to use SQL on an ARM device, you can sign up to try [Azure SQL Edge](https://azure.microsoft.com/services/sql-edge/) in preview.
+ * ARM devices, like Raspberry Pis, cannot run SQL Server. If you want to use SQL on an ARM device, you can use [Azure SQL Edge](../azure-sql-edge/overview.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml). * [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools). * [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-endpoints.md
All IoT Hub endpoints use the [TLS](https://tools.ietf.org/html/rfc5246) protoco
## Custom endpoints
-You can link existing Azure services in your subscription to your IoT hub to act as endpoints for message routing. These endpoints act as service endpoints and are used as sinks for message routes. Devices cannot write directly to the additional endpoints. Learn more about [message routing](../iot-hub/iot-hub-devguide-messages-d2c.md).
+You can link existing Azure services in your Azure subscriptions to your IoT hub to act as endpoints for message routing. These endpoints act as service endpoints and are used as sinks for message routes. Devices cannot write directly to the additional endpoints. Learn more about [message routing](../iot-hub/iot-hub-devguide-messages-d2c.md).
IoT Hub currently supports the following Azure services as additional endpoints:
Other reference topics in this IoT Hub developer guide include:
* [IoT Hub query language for device twins, jobs, and message routing](iot-hub-devguide-query-language.md) * [Quotas and throttling](iot-hub-devguide-quotas-throttling.md) * [IoT Hub MQTT support](iot-hub-mqtt-support.md)
-* [Understand your IoT hub IP address](iot-hub-understand-ip-address.md)
+* [Understand your IoT hub IP address](iot-hub-understand-ip-address.md)
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-identity-registry.md
A more complex implementation could include the information from [Azure Monitor]
## Device and module lifecycle notifications
-IoT Hub can notify your IoT solution when an identity is created or deleted by sending lifecycle notifications. To do so, your IoT solution needs to create a route and to set the Data Source equal to *DeviceLifecycleEvents* or *ModuleLifecycleEvents*. By default, no lifecycle notifications are sent, that is, no such routes pre-exist. The notification message includes properties, and body.
+IoT Hub can notify your IoT solution when a device identity is created or deleted by sending lifecycle notifications. To do so, your IoT solution needs to create a route and set the Data Source equal to *DeviceLifecycleEvents*. By default, no lifecycle notifications are sent; that is, no such routes pre-exist. When you create a route with the Data Source equal to *DeviceLifecycleEvents*, lifecycle events are sent for both device identities and module identities; however, the message contents differ depending on whether the events are generated for module identities or device identities. Note that the module identity creation flow is different for IoT Edge modules than for other modules; as a result, for IoT Edge modules the create notification is sent only if the corresponding IoT Edge device is running. For all other modules, lifecycle notifications are sent whenever the module identity is updated on the IoT Hub side. The notification message includes properties and a body.
Properties: Message system properties are prefixed with the `$` symbol.
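Stepping back to the route itself, a hedged sketch of creating it with the Azure CLI; the resource group, hub, and route names are placeholders, and the route sends device lifecycle events to the built-in events endpoint.

```azurecli
# Routes device lifecycle events to the built-in endpoint so your solution can consume them.
az iot hub route create \
  --resource-group {resource group} \
  --hub-name {hub name} \
  --route-name DeviceLifecycleRoute \
  --source-type devicelifecycleevents \
  --endpoint-name events \
  --enabled true
```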
To try out some of the concepts described in this article, see the following IoT
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](https://azure.microsoft.com/documentation/services/iot-dps)
+* [Azure IoT Hub Device Provisioning Service](https://azure.microsoft.com/documentation/services/iot-dps)
iot-hub Iot Hub Java Java Device Management Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-java-java-device-management-getstarted.md Binary files differ
iot-hub Iot Hub Node Node Device Management Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-device-management-get-started.md Binary files differ
iot-hub Iot Hub Node Node Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-schedule-jobs.md Binary files differ
iot-hub Iot Hub Node Node Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-twin-getstarted.md Binary files differ
iot-hub Iot Hub Python Python Device Management Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-python-python-device-management-get-started.md Binary files differ
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
iot-hub Quickstart Device Streams Proxy C https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-c.md Binary files differ
iot-hub Quickstart Device Streams Proxy Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-csharp.md Binary files differ
iot-hub Quickstart Device Streams Proxy Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-nodejs.md Binary files differ
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-portal.md
To add a certificate to the vault, you just need to take a couple of additional
- **Method of Certificate Creation**: Generate. - **Certificate Name**: ExampleCertificate. - **Subject**: CN=ExampleDomain
- - Leave the other values to their defaults. Click **Create**.
+ - Leave the other values to their defaults. (By default, if you don't specify anything special in Advanced policy, it'll be usable as a client auth certificate.)
+ 4. Click **Create**.
Once you receive the message that the certificate has been successfully created, you can click on it in the list. You can then see some of its properties. If you click on the current version, you can see the value you specified in the previous step.
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-powershell.md
tags: azure-resource-manager
-+ Last updated 01/27/2021 #Customer intent:As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
Get-AzKeyVaultCertificate -VaultName "<your-unique-keyvault-name>" -Name "Exampl
Now, you have created a Key Vault, stored a certificate, and retrieved it.
+**Troubleshooting**:
+
+Operation returned an invalid status code 'Forbidden'
+
+If you receive this error, the account accessing the Azure Key Vault does not have the proper permissions to create certificates.
+
+Run the following Azure PowerShell command to assign the proper permissions:
+
+```azurepowershell-interactive
+Set-AzKeyVaultAccessPolicy -VaultName <KeyVaultName> -ObjectId <AzureObjectID> -PermissionsToCertificates get,list,update,create
+```
+ ## Clean up resources [!INCLUDE [Create a key vault](../../../includes/key-vault-powershell-delete-resources.md)]
key-vault Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/backup.md Binary files differ
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/rbac-guide.md Binary files differ
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/best-practices.md
Managed HSM is a cloud service that safeguards encryption keys. As these keys are sensitive and business critical, make sure to secure access to your managed HSMs by allowing only authorized applications and users. This [article](access-control.md) provides an overview of the access model. It explains authentication and authorization, and role-based access control. - Create an [Azure Active Directory Security Group](../../active-directory/fundamentals/active-directory-manage-groups.md) for the HSM Administrators (instead of assigning Administrator role to individuals). This will prevent "administration lock-out" in case of individual account deletion. - Lock down access to your management groups, subscriptions, resource groups and Managed HSMs - Use Azure RBAC to control access to your management groups, subscriptions, and resource groups-- Create per key role assignments using [Managed HSM local RBAC](access-control.md#data-plane-and-managed-hsm-local-rbac)-- Use least privilege access principal to assign roles
+- Create per-key role assignments using [Managed HSM local RBAC](access-control.md#data-plane-and-managed-hsm-local-rbac) (see the sketch after this list).
+- To maintain separation of duties, avoid assigning multiple roles to the same principals.
+- Follow the principle of least privilege when assigning roles.
+- Create custom role definitions with a precise set of permissions.
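+
+As a sketch, a per-key role assignment with the Azure CLI might look like the following; the HSM name, key name, user, and the choice of the built-in "Managed HSM Crypto User" role are placeholders, not requirements:
+
+```azurecli
+# Assign a built-in role scoped to a single key (all names are placeholders)
+az keyvault role assignment create --hsm-name ContosoMHSM --role "Managed HSM Crypto User" --assignee user@contoso.com --scope /keys/myrsakey
+```
+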
## Choose regions that support availability zones
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/overview.md
#Customer intent: As an IT Pro, Decision maker or developer I am trying to learn what Managed HSM is and if it offers anything that could be used in my organization.
-# What is Azure Key Vault Managed HSM (preview)?
+# What is Azure Key Vault Managed HSM?
Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs.
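As a minimal sketch, provisioning a managed HSM with the Azure CLI looks roughly like the following; the resource names, region, and administrator object ID are placeholders, and parameter requirements may vary by CLI version:

```azurecli
# Create a managed HSM; --administrators takes the object ID of the initial administrator
az keyvault create --hsm-name ContosoMHSM --resource-group ContosoResourceGroup --location eastus2 --administrators <admin-object-id> --retention-days 28
```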
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/tutorial-rotation-dual.md Binary files differ
key-vault Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
lab-services How To Manage Classroom Labs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-manage-classroom-labs.md
To set up a classroom lab in a lab account, you must be a member of the **Lab Cr
## Create a classroom lab
-1. Navigate to [Azure Lab Services website](https://labs.azure.com). Internet Explorer 11 is not supported yet.
+1. Navigate to [Azure Lab Services website](https://labs.azure.com).
1. Select **Sign in** and enter your credentials. Select or enter a **user ID** that is a member of the **Lab Creator** role in the lab account, and enter the password. Azure Lab Services supports organizational accounts and Microsoft accounts. 1. Select **New lab**.
lighthouse Onboard Customer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/onboard-customer.md
Get-AzManagedServicesAssignment
# Log in first with az login if you're not using Cloud Shell az account list+
+# Confirm successful onboarding for Azure Lighthouse
+
+az managedservices definition list
+az managedservices assignment list
``` If you need to make changes after the customer has been onboarded, you can [update the delegation](update-delegation.md). You can also [remove access to the delegation](remove-delegation.md) completely.
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
load-balancer Load Balancer Distribution Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-distribution-mode.md Binary files differ
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-outbound-connections.md
To maintain unique flows, the host rewrites the source port of each outbound pac
* Virtual machine without public IP. * Virtual machine without public IP and without standard load balancer.
- ### <a name="scenario1"></a> Scenario 1: Virtual machine with public IP
+ ### <a name="scenario1"></a> Scenario 1: Virtual machine with a public IP, with or without a load balancer
| Associations | Method | IP protocols | | - | | |
- | Public load balancer or stand-alone | [SNAT (Source Network Address Translation)](#snat) </br> not used. | TCP (Transmission Control Protocol) </br> UDP (User Datagram Protocol) </br> ICMP (Internet Control Message Protocol) </br> ESP (Encapsulating Security Payload) |
+ | Public load balancer or stand-alone | [SNAT (Source Network Address Translation)](#snat) </br> is not used. | TCP (Transmission Control Protocol) </br> UDP (User Datagram Protocol) </br> ICMP (Internet Control Message Protocol) </br> ESP (Encapsulating Security Payload) |
#### Description
+ All traffic will return to the requesting client from the virtual machine's public IP address (Instance Level IP).
+
Azure uses the public IP assigned to the IP configuration of the instance's NIC for all outbound flows. The instance has all ephemeral ports available. It doesn't matter whether the VM is load balanced or not. This scenario takes precedence over the others. A public IP assigned to a VM is a 1:1 relationship (rather than 1:many) and is implemented as a stateless 1:1 NAT.
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create). Spe
It can take a few minutes for the VMs to deploy. ++ ### Create the load balancer This section details how you can create and configure the following components of the load balancer:
Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create). Spe
``` It can take a few minutes for the VMs to deploy. ++ ### Create the load balancer This section details how you can create and configure the following components of the load balancer:
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
These VMs are added to the backend pool of the load balancer that was created ea
| Availability zone | **2** | **3** | | Network security group | Select the existing **myNSG**| Select the existing **myNSG** | # [**Basic SKU**](#tab/option-1-create-internal-load-balancer-basic)
These VMs are added to the backend pool of the load balancer that was created ea
| Availability set | Select **myAvailabilitySet** | Select **myAvailabilitySet** | | Network security group | Select the existing **myNSG** | Select the existing **myNSG** | + ### Add virtual machines to the backend pool The VMs created in the previous steps must be added to the backend pool of **myLoadBalancer**.
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
Id Name PSJobTypeName State HasMoreData Location
4 Long Running O… AzureLongRunni… Completed True localhost New-AzVM ``` + # [**Basic SKU**](#tab/option-1-create-load-balancer-basic) >[!NOTE]
Id Name PSJobTypeName State HasMoreData Location
4 Long Running O… AzureLongRunni… Completed True localhost New-AzVM ``` + ## Install IIS
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create):
``` It may take a few minutes for the VMs to deploy. + ## Create a public IP address - Standard To access your web app on the Internet, you need a public IP address for the load balancer.
Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create):
``` It may take a few minutes for the VMs to deploy. + ## Create a public IP address - Basic To access your web app on the Internet, you need a public IP address for the load balancer.
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
These VMs are added to the backend pool of the load balancer that was created ea
| Availability zone | **2** |**3**| | Network security group | Select the existing **myNSG**| Select the existing **myNSG**| + ## Create outbound rule configuration Load balancer outbound rules configure outbound SNAT for VMs in the backend pool.
These VMs are added to the backend pool of the load balancer that was created ea
| Availability set| Select **myAvailabilitySet** | Select **myAvailabilitySet**| | Network security group | Select the existing **myNSG**| Select the existing **myNSG**| + ### Add virtual machines to the backend pool The VMs created in the previous steps must be added to the backend pool of **myLoadBalancer**.
load-balancer Quickstart Load Balancer Standard Public Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-public-powershell.md
Id Name PSJobTypeName State HasMoreData Location
4 Long Running O… AzureLongRunni… Completed True localhost New-AzVM ``` + ## Create outbound rule configuration Load balancer outbound rules configure outbound source network address translation (SNAT) for VMs in the backend pool.
Id Name PSJobTypeName State HasMoreData Location
4 Long Running O… AzureLongRunni… Completed True localhost New-AzVM ``` + ## Install IIS
load-balancer Update Load Balancer With Vm Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/update-load-balancer-with-vm-scale-set.md Binary files differ
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021 ms.suite: integration
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/workflow-definition-language-functions-reference.md
convertFromUtc('<timestamp>', '<destinationTimeZone>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Time Zone Index Values](https://support.microsoft.com/help/973627/microsoft-time-zone-index-values), but you might have to remove any punctuation from the time zone name. |
+| <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Windows Default Time Zones](https://docs.microsoft.com/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. | |||||
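As a hypothetical example (the timestamp value is illustrative), the following expression converts a UTC timestamp to Pacific Standard Time and formats it with the "D" specifier:

```
convertFromUtc('2021-04-14T20:00:00Z', 'Pacific Standard Time', 'D')
```

This returns the converted timestamp in long-date form, for example `"Wednesday, April 14, 2021"`.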
convertTimeZone('<timestamp>', '<sourceTimeZone>', '<destinationTimeZone>', '<fo
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Time Zone Index Values](https://support.microsoft.com/help/973627/microsoft-time-zone-index-values), but you might have to remove any punctuation from the time zone name. |
-| <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Time Zone Index Values](https://support.microsoft.com/help/973627/microsoft-time-zone-index-values), but you might have to remove any punctuation from the time zone name. |
+| <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Windows Default Time Zones](https://docs.microsoft.com/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
+| <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Windows Default Time Zones](https://docs.microsoft.com/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. | |||||
convertToUtc('<timestamp>', '<sourceTimeZone>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Time Zone Index Values](https://support.microsoft.com/help/973627/microsoft-time-zone-index-values), but you might have to remove any punctuation from the time zone name. |
+| <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Windows Default Time Zones](https://docs.microsoft.com/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. | |||||
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-plan-manage-cost.md Binary files differ
machine-learning How To Deploy Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-azure-container-instance.md
For more information on the classes, methods, and parameters used in this exampl
* [Model.deploy](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) * [Webservice.wait_for_deployment](/python/api/azureml-core/azureml.core.webservice%28class%29#wait-for-deployment-show-output-false-)
-### Using the CLI
+### Using the Azure CLI
To deploy using the CLI, use the following command. Replace `mymodel:1` with the name and version of the registered model. Replace `myservice` with the name to give this service:
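A sketch of such a deployment command, assuming the v1 `azure-cli-ml` extension is installed; the inference and deployment configuration file names are placeholders:

```azurecli
# Deploy the registered model to Azure Container Instances (file names are placeholders)
az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json
```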
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-with-triton.md Binary files differ
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-event-grid.md Binary files differ
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
machine-learning Reference Pipeline Yaml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-pipeline-yaml.md Binary files differ
machine-learning Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/create-cluster-portal.md Binary files differ
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/policy-reference.md
Title: Built-in policy definitions for Azure Database for MariaDB description: Lists Azure Policy built-in policy definitions for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
marketplace Determine Your Listing Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/determine-your-listing-type.md
Last updated 01/14/2021
# Introduction to listing options
-When you create an offer type, you choose one or more listing options. These options determine the buttons customers see on the offer listing page in the online stores. The listing options include _Free Trial_, _Test Drive_, _Contact Me_, and _Get It Now_.
+When you create an offer type, you choose one or more listing options. These options determine the buttons that customers see on the offer listing page in the online stores. The listing options include **Free Trial**, **Test Drive**, **Contact Me**, and **Get It Now**.
-This table shows which listing options are available for each offer type.
+This table shows which listing options are available for each offer type:
| Offer type | Free Trial | Test Drive | Contact Me | Get It Now `*` | | | - | - | - | - |
This table shows which listing options are available for each offer type.
| Software as a service | &#10004; | &#10004; | &#10004; | &#10004; | ||||||
-&#42; The Get It Now listing option includes Get It Now (Free), bring your own license (BYOL), Subscription, and Usage-based pricing. For details, see [Get It Now](#get-it-now).
+&#42; The **Get It Now** listing option includes Get It Now (Free), bring your own license (BYOL), Subscription, and Usage-based pricing. For more information, see [Get It Now](#get-it-now).
-## Changing offer type
+## Change the offer type
[!INCLUDE [change-offer-type](./includes/change-offer-type.md)] ## Free Trial
-Use the commercial marketplace to enhance discoverability and automate provisioning of your solution's trial experience. This enables prospective customers to use your software as a service (SaaS), IaaS or Microsoft in-app experience at no cost from 30 days to six months, depending on the offer type.
+Use the commercial marketplace to enhance discoverability and automate provisioning of your solution's trial experience. This enables prospective customers to use your software as a service (SaaS), infrastructure as a service (IaaS), or Microsoft in-app experience at no cost from 30 days to six months, depending on the offer type.
-Customers use the _Free Trial_ button on your offerΓÇÖs listing page to try your offer. If you provide a free trial on multiple plans within the same offer, customers can switch to a free trial on another plan, but the trial period does not restart.
+Customers use the **Free Trial** button on your offer's listing page to try your offer. If you provide a free trial on multiple plans within the same offer, customers can switch to a free trial on another plan, but the trial period doesn't restart.
For virtual machine offers, customers are charged Azure infrastructure costs for using the offer during a trial period. Upon expiration of the trial period, customers are automatically charged for the last plan they tried based on standard rates unless they cancel before the end of the trial period. ## Test Drive
-Customers use the _Test Drive_ button on your offerΓÇÖs listing page to get access to a preconfigured environment for a fixed number of hours. To learn more about test drives, see [What is a test drive?](what-is-test-drive.md)
+Customers use the **Test Drive** button on your offer's listing page to get access to a preconfigured environment for a fixed number of hours. To learn more about test drives, see [What is a test drive?](what-is-test-drive.md).
> [!TIP]
-> A test drive is different from a free trial. You can offer a test drive, free trial, or both. They both provide your customers with your solution for a fixed period-of-time. But a test drive also includes a hands-on, self-guided tour of your productΓÇÖs key features and benefits being demonstrated in a real-world implementation scenario.
+> The Test Drive option is different from the Free Trial. You can offer Test Drive, Free Trial, or both. They both provide your customers with your solution for a fixed time period. However, the Test Drive also includes a hands-on, self-guided tour of your product's key features and benefits being demonstrated in a real-world implementation scenario.
## Contact Me
-Simple listing of your application or service. Customers use the _Contact Me_ button on your offerΓÇÖs listing page to request that you connect with them about your offer.
+This option is a simple listing of your application or service. Customers use the **Contact Me** button on your offer's listing page to request that you connect with them about your offer.
## Get It Now
-This listing option includes transactable offers (subscriptions and user-based pricing), bring your own license offers, and Get It Now (Free). Transactable offers are sold through the commercial marketplace. Microsoft is responsible for billing and collections. Customers use the _Get It Now button_ to get the offer.
+This listing option includes transactable offers (subscriptions or user-based pricing), bring your own license (BYOL) offers, and **Get It Now (Free)**. Transactable offers are sold through the commercial marketplace. Microsoft is responsible for billing and collections. Customers use the **Get It Now** button to get the offer.
-The Get It Now listing option can include the following pricing options, depending on the offer type:
--- Get It Now (Free)-- Bring your own license (BYOL)-- Subscription-- Usage-based pricing-
-This table shows which offer types support the additional pricing options that are included with the Get It Now listing option.
+This table shows which offer types support the pricing options that are included with the **Get It Now** listing option.
| Offer type | Get It Now (Free) | BYOL | Subscription | Usage-based pricing | | | - | - | - | - |
This table shows which offer types support the additional pricing options that a
| Software as a service | &#10004; | | &#10004; | &#10004; | ||||||
-**Legend**
-
-<sup>1</sup> The **Pricing model** column of the **Plan overview** tab shows _Free_ or _BYOL_ but itΓÇÖs not selectable.
+<sup>1</sup> The **Pricing model** column of the **Plan overview** tab shows **Free** or **BYOL**, but it's not selectable.
<sup>2</sup> Priced per hour and billed monthly. ### Get It Now (Free)
-Use this listing option to offer your application for free. Customers use the _Get It Now_ button to get your free offer.
+Use this listing option to offer your application for free. Customers use the **Get It Now** button to get your free offer.
> [!NOTE]
-> Get It Now (Free) offers are not eligible for Marketplace Rewards benefits for transactable offers. Because there is no transaction through the storefront, these are categorized as ΓÇ£Trial.ΓÇ¥ See [Marketplace Rewards](#marketplace-rewards) below.
+> Get It Now (Free) offers aren't eligible for Marketplace Rewards benefits for transactable offers. Because there's no transaction through the storefront, these are categorized as **Trial**. See [Marketplace Rewards](#marketplace-rewards).
### Bring Your Own License (BYOL)
-Use this listing option to let customers deploy your offer using a license purchased outside the commercial marketplace. This option is ideal for on-premises-to-cloud migrations. Customers use the _Get It Now_ button to purchase your offer using a license they pre-purchased from you.
+Use this listing option to let customers deploy your offer using a license purchased outside the commercial marketplace. This option is ideal for on-premises-to-cloud migrations. Customers use the **Get It Now** button to purchase your offer using a license they pre-purchased from you.
> [!NOTE]
-> BYOL offers are not eligible for Marketplace Rewards benefits for transactable offers. Because these require a customer to acquire the license from the partner and there is no transaction through the commercial marketplace storefront, these are categorized as ΓÇ£List.ΓÇ¥ See [Marketplace Rewards](#marketplace-rewards) below.
+> BYOL offers aren't eligible for Marketplace Rewards benefits for transactable offers. Because these require a customer to acquire the license from the partner and there's no transaction through the commercial marketplace storefront, these are categorized as **List**. See [Marketplace Rewards](#marketplace-rewards).
### Subscription You can charge a flat fee for these offer types: -- Azure Application (Managed app) offers support monthly subscriptions.-- SaaS offers support both monthly and annual subscriptions.
+- Azure Application (Managed app) offers support for monthly subscriptions.
+- SaaS offers support for both monthly and annual subscriptions.
### Usage-based pricing The following offer types support usage-based pricing: -- Azure Application (Managed app) offer support metered billing. For more details, see [Managed application metered billing](partner-center-portal/azure-app-metered-billing.md).-- SaaS offers supports Metered billing and per user (per seat) pricing. For more information about metered billing, see [Metered billing for SaaS using the commercial marketplace metering service](partner-center-portal/saas-metered-billing.md).-- Azure virtual machine offers support Per core, Per core size, and Per market and core size pricing. These pricing options are priced per hour and billed monthly.
+- Azure Application (Managed app) offers support for metered billing. For more information, see [Managed application metered billing](partner-center-portal/azure-app-metered-billing.md).
+- SaaS offers support for Metered billing and per user (per seat) pricing. For more information about metered billing, see [Metered billing for SaaS using the commercial marketplace metering service](partner-center-portal/saas-metered-billing.md).
+- Azure virtual machine offers support for **Per core**, **Per core size**, and **Per market and core size** pricing. These options are priced per hour and billed monthly.
-When creating a transactable offer, it is important to understand the pricing, billing, invoicing, and payout considerations before selecting an offer type and creating your offer. To learn more, see [Commercial marketplace online stores](overview.md#commercial-marketplace-online-stores).
+When you create a transactable offer, it's important to understand the pricing, billing, invoicing, and payout considerations before you select an offer type and create your offer. To learn more, see [Commercial marketplace online stores](overview.md#commercial-marketplace-online-stores).
## Sample offer
-After your offer is published, the listing option(s) you chose appear as a button in the upper-left corner of the listing page in the online store(s). For example, the following screen shows an offer listing page in the Microsoft AppSource online store with the **Get It Now** and **Test Drive** buttons:
+After your offer is published, the listing options you chose appear as buttons in the upper-left corner of the listing page in the online store. For example, the following image shows an offer listing page in the Microsoft AppSource online store with the **Get It Now** and **Test Drive** buttons:
## Listing and pricing options by online store
-Based on a variety of criteria, we determine whether your offer is listed on Azure Marketplace, Microsoft AppSource, or both online stores. For more information about the differences between the two online stores, see [Commercial marketplace online stores](overview.md#commercial-marketplace-online-stores).
+Based on various criteria, we determine whether your offer is listed on Azure Marketplace, Microsoft AppSource, or both online stores. For more information about the differences between the two online stores, see [Commercial marketplace online stores](overview.md#commercial-marketplace-online-stores).
-The following table shows the options that are available for different offer types and add-ins and which online stores your offer can be listed on.
+The following table shows the options that are available for different offer types and add-ins, and which online stores your offer can be listed on.
| Offer types and add-ins | Contact Me | Free Trial | Get It Now (Free) | BYOL | Get It Now (Transact) | | | - | - | - | - | - |
The following table shows the options that are available for different offer typ
| Consulting service | Both online stores | | | | | | SaaS | Both online stores | Both online stores | Both online stores | | Both online stores &#42; | | Microsoft 365 App | AppSource | AppSource | | | AppSource &#42;&#42; |
-| Dynamics 365 business central | AppSource | AppSource | | | |
+| Dynamics 365 Business Central | AppSource | AppSource | | | |
| Dynamics 365 for Customer Engagements & PowerApps | AppSource | AppSource | | | |
-| Dynamics 365 for operations | AppSource | AppSource | | | |
+| Dynamics 365 Operations | AppSource | AppSource | | | |
| Power BI App | | | AppSource | | | |||||||
-&#42; SaaS transactable offers in AppSource are currently credit card only.
+&#42; SaaS transactable offers in AppSource only accept credit cards at this time.
-&#42;&#42; Microsoft 365 add-ins are free to install and can be monetized using a SaaS offer. For more information, see [Monetize your Office 365 add-in through the Microsoft commercial marketplace](/office/dev/store/monetize-addins-through-microsoft-commercial-marketplace).
+&#42;&#42; Microsoft 365 add-ins are free to install and can be monetized using a SaaS offer. For more information, see [Monetize your app through the commercial marketplace](/office/dev/store/monetize-addins-through-microsoft-commercial-marketplace).
## Marketplace Rewards
-Your Marketplace Rewards are differentiated based on the listing option you choose. To learn more, see [Your commercial marketplace benefits](gtm-your-marketplace-benefits.md).
+Your Marketplace Rewards benefits depend on the listing option you choose. To learn more, see [Your commercial marketplace benefits](gtm-your-marketplace-benefits.md).
If your offer is transactable, you will earn benefits as you increase your billed sales.
Non-transactable offers earn benefits based on whether or not a free trial is at
## Next steps -- To choose an offer type to create, see [publishing guide by offer type](publisher-guide-by-offer-type.md).
+To choose an offer type to create, see [Publishing guide by offer type](publisher-guide-by-offer-type.md).
marketplace Create New Business Central Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/create-new-business-central-offer.md
Before starting, [Create a Commercial Marketplace account in Partner Center](../
## New offer
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
+Enter an **Offer ID**. This value is a unique identifier for each offer in your account.
- This ID is visible to customers in the web address for the marketplace offer and Azure Resource Manager templates, if applicable. - The Offer ID combined with the Publisher ID must be under 40 characters in length.
Select **Create** to generate the offer and continue.
### Alias
-Enter a descriptive name that we'll use to refer to this offer solely within Partner Center. This name (pre-populated with what your entered when you created the offer) won't be used in the marketplace and is different than the offer name shown to customers. If you want to update the offer name later, go to the [Offer Listing](#offer-listing) page.
+Enter a descriptive name that we'll use to refer to this offer solely within Partner Center. This name (pre-populated with what you entered when you created the offer) won't be used in the marketplace and is different from the offer name shown to customers. If you want to update the offer name later, go to the [Offer Listing](#offer-listing) page.
### Setup details
This page lets you define offer details such as offer name, description, links,
> [!NOTE] > Provide offer listing details in one language only. It is not required to be in English, as long as the offer description begins with the phrase, "This application is available only in [non-English language]." It is also acceptable to provide a *Useful link URL* to offer content in a language other than the one used in the Offer listing content.
-Here's an example of how offer information appears in Microsoft AppSource (any listed prices are for example purposes only and not intended to reflect actual costs):
-<!-- update screen? -->
+Here's an example of how offer information appears in Microsoft AppSource (any listed prices are examples only and not intended to reflect actual costs):
+ :::image type="content" source="media/example-d365-business-central.png" alt-text="Illustrates how this offer appears in Microsoft AppSource.":::
-#### Call-out descriptions
+### Call-out descriptions
1. Logo 2. Products
Provide logos and images that will be used when showing your offer to customers.
[!INCLUDE [logo tips](../includes/graphics-suggestions.md)] >[!Note]
->If you have an issue uploading files, make sure your local network does not block the https://upload.xboxlive.com service used by Partner Center.
+>If you have an issue uploading files, make sure your local network does not block the `https://upload.xboxlive.com` service used by Partner Center.
#### Logos
-Provide a PNG file for the **Large** size logo. Partner Center will use this to create other required sizes. You can optionally replace this with a different image later.
+Provide a PNG file for the **Large** size logo. Partner Center will use this initial file to create other required sizes. You can optionally replace the resized image with your own image later.
These logos are used in different places in the listing:
Add screenshots that show how your offer works. At least three screenshots are r
You can optionally add up to four videos that demonstrate your offer. Videos must be hosted on an external site. For each one, enter the video's name, its address, and a thumbnail image of the video (1280 x 720 pixels).
-For additional marketplace listing resources, see [Best practices for marketplace offer listings](../gtm-offer-listing-best-practices.md).
+For more marketplace listing resources, see [Best practices for marketplace offer listings](../gtm-offer-listing-best-practices.md).
Select **Save draft** before continuing.
Select **Save draft** before continuing.
This page defines the technical details used to connect to your offer. This connection enables us to provision your offer for the end customer if they choose to acquire it.
+Extensions submitted for your offer must meet the requirements specified in the [Technical Validation Checklist](/dynamics365/business-central/dev-itpro/developer/devenv-checklist-submission).
+ ### File upload If you previously selected **Add On**, where you'll upload your offer's package file, along with the package files for any extension on which it has dependencies.
Required if your offer must be installed along with another extension that will
Select **Save draft** before continuing.
-<!-- ## Test drive technical configuration
-
-This page lets you set up a demonstration ("test drive") that allows customers to try your offer before purchasing it. Learn more in [What is test drive](../what-is-test-drive.md).
-
-To enable a test drive, select the **Enable a test drive** check box on the [Offer setup](#test-drive) tab. To remove test drive from your offer, clear this check box.
-
-When you've finished setting up your test drive, select **Save draft** before continuing.
> ## Supplemental content This page lets you provide additional information to help us validate your offer. This information is not shown to customers or published to the marketplace.
After completing all required sections of the offer, select **Review and publish
If it's your first time publishing this offer, you can: - See the completion status for each section of the offer.
- - **Not started** - Section has not been touched and needs to be completed.
- - **Incomplete** - Section has errors that need to be fixed or requires more information. Go back to the section(s) and update it.
- - **Complete** - Section is complete, all required data has been provided and there are no errors. All sections of the offer must be in a complete state before you can submit the offer.
+ - **Not started** - Section has not been touched and needs to be completed.
+ - **Incomplete** - Section has errors that need to be fixed or requires more information. Go back to the section(s) and update it.
+ - **Complete** - Section is complete, all required data has been provided and there are no errors. All sections of the offer must be in a complete state before you can submit the offer.
- In the **Notes for certification** section, provide testing instructions to the certification team to ensure that your app is tested correctly, in addition to any supplementary notes helpful for understanding your app. - Submit the offer for publishing by selecting **Submit**. We will email you when a preview version of the offer is available to review and approve. Return to Partner Center and select **Go-live** to publish your offer to the public. ## Next steps -- [Update an existing offer in the Commercial Marketplace](./update-existing-offer.md)
+- [Update an existing offer in the Commercial Marketplace](./update-existing-offer.md)
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
mysql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-maintenance.md
When specifying preferences for the maintenance schedule, you can pick a day of
> > However, in case of a critical emergency update such as a severe vulnerability, the notification window could be shorter than five days. The critical update may be applied to your server even if a successful scheduled maintenance was performed in the last 30 days.
-You can update scheduling settings at any time. If there is a maintenance scheduled for your Flexible server and you update scheduling preferences, the current event will proceed as scheduled and the scheduling settings change will become effective upon its successful completion.
+You can update scheduling settings at any time. If maintenance is scheduled for your flexible server and you update your scheduling preferences, the current rollout proceeds as scheduled; the change to your scheduling settings takes effect after it completes successfully and applies to the next scheduled maintenance.
-In rare cases, maintenance event can be canceled by the system or may fail to complete successfully. In this case, the system will create a notification about canceled or failed maintenance event respectively. The next attempt to perform maintenance will be scheduled as per current scheduling settings and you will receive notification about it five days in advance.
+You can define a system-managed schedule or a custom schedule for each flexible server in your Azure subscription.
+* With custom schedule, you can specify your maintenance window for the server by choosing the day of the week and a one-hour time window.
+* With system-managed schedule, the system will pick any one-hour window between 11pm and 7am in your server's region time.
+
+As part of rolling out changes, we apply updates to servers configured with the system-managed schedule first, followed by servers with a custom schedule after a minimum gap of seven days within a given region. If you want to receive updates early on a fleet of development and test servers, we recommend configuring the system-managed schedule for those servers. This allows you to receive the latest update first in your dev/test environment for testing and validation. If you encounter any behavior or breaking changes, you have time to address them before the same update is rolled out to production servers that use a custom schedule. The update starts to roll out to custom-schedule flexible servers after seven days and is applied to your server during the defined maintenance window. At this time, there is no option to defer the update after the notification has been sent. The custom schedule is recommended for production environments only.
+
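+As a sketch only, if you manage the server with the Azure CLI, a custom maintenance window can typically be configured with the `--maintenance-window` parameter; the availability of this parameter and the exact day/time value format shown below are assumptions, not confirmed by this article:
+
+```azurecli
+# Set a custom maintenance window of Monday 01:30 (parameter name and value format are assumptions)
+az mysql flexible-server update --resource-group myresourcegroup --name mydemoserver --maintenance-window "Mon:01:30"
+```
+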
+In rare cases, a maintenance event can be canceled by the system or may fail to complete successfully. If the update fails, it is reverted and the previous version of the binaries is restored. In such failed update scenarios, you may still experience a restart of the server during the maintenance window. If the update is canceled or fails, the system creates a notification about the canceled or failed maintenance event. The next attempt to perform maintenance will be scheduled according to your current scheduling settings, and you will receive a notification about it five days in advance.
## Next steps
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/connect-azure-cli.md Binary files differ
mysql How To Configure Audit Log Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-configure-audit-log-cli.md
+
+ Title: Configure audit logs with Azure CLI - Azure Database for MySQL - Flexible Server
+description: This article describes how to configure and access the audit logs in Azure Database for MySQL Flexible Server from the Azure CLI.
++++ Last updated : 03/30/2021++
+# Configure and access audit logs for Azure Database for MySQL - Flexible Server using the Azure CLI
+
+> [!IMPORTANT]
+> Azure Database for MySQL - Flexible Server is currently in public preview.
+
+This article shows you how to configure [audit logs](concepts-audit-logs.md) for your MySQL flexible server using the Azure CLI.
+
+## Prerequisites
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
+- Log in to your Azure account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the ```az account set``` command.
+ ```azurecli
+ az account set --subscription <subscription id>
+ ```
+
+- Create a MySQL Flexible Server if you have not already created one using the ```az mysql flexible-server create``` command.
+
+ ```azurecli
+ az mysql flexible-server create --resource-group myresourcegroup --name myservername
+ ```
+
+## Configure audit logging
+
+>[!IMPORTANT]
+> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted.
+
+Enable and configure audit logging.
+
+```azurecli
+# Enable audit logs
+az mysql flexible-server parameter set \
+--name audit_log_enabled \
+--resource-group myresourcegroup \
+--server-name mydemoserver \
+--value ON
+```
+
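+You can also control which event types are captured through the `audit_log_events` server parameter; the value below is just one example, and the full list of supported values is in the [audit logs concepts](concepts-audit-logs.md) article:
+
+```azurecli
+# Capture connection events only
+az mysql flexible-server parameter set \
+--name audit_log_events \
+--resource-group myresourcegroup \
+--server-name mydemoserver \
+--value CONNECTION
+```
+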
+## Next steps
+- Learn more about [Audit logs](concepts-audit-logs.md)
mysql How To Configure High Availability Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-configure-high-availability-cli.md
+
+ Title: Manage zone redundant high availability - Azure CLI - Azure Database for MySQL Flexible Server
+description: This article describes how to configure zone redundant high availability in Azure Database for MySQL flexible Server with the Azure CLI.
++++ Last updated : 04/1/2021+++
+# Manage zone redundant high availability in Azure Database for MySQL Flexible Server with Azure CLI
+
+> [!NOTE]
+> Azure Database for MySQL Flexible Server is in public preview.
+
+This article describes how you can enable or disable the zone redundant high availability configuration at the time of server creation for your flexible server. You can also disable zone redundant high availability after the server is created. Enabling zone redundant high availability after server creation is not supported.
+
+The high availability feature provisions a physically separate primary and standby replica in different zones. For more information, see the [high availability concepts documentation](./concepts/../concepts-high-availability.md). Enabling or disabling high availability does not change your other settings, including VNET configuration, firewall settings, and backup retention. Disabling high availability does not impact your application connectivity and operations.
+
+> [!IMPORTANT]
+> Zone redundant high availability is available in a limited set of regions. Please review the supported regions [here](https://docs.microsoft.com/azure/mysql/flexible-server/overview#azure-regions).
+
+## Prerequisites
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
+- Log in to your Azure account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the ```az account set``` command.
+ ```azurecli
+ az account set --subscription <subscription id>
+ ```
+
+## Enable high availability during server creation
+You can create a server with high availability only in the General purpose or Memory optimized pricing tiers. You can enable high availability for a server only at create time.
+
+**Usage:**
+
+```azurecli
+az mysql flexible-server create [--high-availability {Disabled, Enabled}]
+ [--resource-group]
+ [--name]
+```
+
+**Example:**
+```azurecli
+az mysql flexible-server create --name myservername --sku-name Standard-D2ds_v4 --resource-group myresourcegroup --high-availability Enabled
+```
+
+## Disable high availability
+
+You can disable high availability by using the [az mysql flexible-server update](/cli/azure/mysql/flexible-server#az_mysql_flexible_server_update) command. Note that disabling high availability is only supported if the server was created with high availability.
+
+```azurecli
+az mysql flexible-server update [--high-availability {Disabled, Enabled}]
+ [--resource-group]
+ [--name]
+```
+
+**Example:**
+```azurecli
+az mysql flexible-server update --resource-group myresourcegroup --name myservername --high-availability Disabled
+```
++
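+To confirm the current high availability setting after a change, you can inspect the server properties; the exact property names in the output may vary:
+
+```azurecli
+az mysql flexible-server show --resource-group myresourcegroup --name myservername
+```
+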
+## Next steps
+
+- Learn about [business continuity](./concepts-business-continuity.md)
+- Learn about [zone redundant high availability](./concepts-high-availability.md)
mysql How To Configure Slow Query Log Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-configure-slow-query-log-cli.md
+
+ Title: Configure slow query logs with Azure CLI - Azure Database for MySQL - Flexible Server
+description: This article describes how to configure and access slow query logs in Azure Database for MySQL Flexible Server from the Azure CLI.
++++ Last updated : 03/30/2021++
+# Configure slow query logs for Azure Database for MySQL - Flexible Server using the Azure CLI
+
+> [!IMPORTANT]
+> Azure Database for MySQL - Flexible Server is currently in public preview.
+
+This article shows you how to configure [slow query logs](concepts-slow-query-logs.md) for your MySQL flexible server using the Azure CLI.
+
+## Prerequisites
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
+- Log in to your Azure account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the ```az account set``` command.
+ ```azurecli
+ az account set --subscription <subscription id>
+ ```
+
+- Create a MySQL Flexible Server if you have not already created one using the ```az mysql flexible-server create``` command.
+
+ ```azurecli
+ az mysql flexible-server create --resource-group myresourcegroup --name myservername
+ ```
+
+## Configure slow query logs
+
+>[!IMPORTANT]
+> To ensure your server's performance is not heavily impacted, it is recommended to log only the queries you need for your troubleshooting purposes.
+
+Enable and configure slow query logs for your server.
+
+```azurecli
+# Set the slow query threshold (long_query_time) to 10 seconds
+# This setting logs all queries that run for more than 10 seconds. Adjust this threshold based on your definition of a slow query.
+az mysql flexible-server parameter set \
+--name long_query_time \
+--resource-group myresourcegroup \
+--server-name mydemoserver \
+--value 10
+
+# Enable Slow query logs
+az mysql flexible-server parameter set \
+--name slow_query_log \
+--resource-group myresourcegroup \
+--server-name mydemoserver \
+--value ON
+
+```
+
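+To verify the configuration, you can read back an individual server parameter, for example:
+
+```azurecli
+# Check the current value of the slow_query_log parameter
+az mysql flexible-server parameter show \
+--name slow_query_log \
+--resource-group myresourcegroup \
+--server-name mydemoserver
+```
+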
+## Next steps
+- Learn about [slow query logs](concepts-slow-query-logs.md)
mysql How To Restart Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-restart-stop-start-server-cli.md
+
+ Title: Restart/stop/start - Azure CLI - Azure Database for MySQL Flexible Server
+description: This article describes how to restart/stop/start operations in Azure Database for MySQL through the Azure CLI.
++++ Last updated : 03/30/2021++
+# Restart/Stop/Start an Azure Database for MySQL - Flexible Server (Preview)
+
+> [!IMPORTANT]
+> Azure Database for MySQL - Flexible Server is currently in public preview.
+
+This article shows you how to restart, stop, and start a flexible server using the Azure CLI.
+
+## Prerequisites
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
+- Log in to your Azure account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the ```az account set``` command.
+ ```azurecli
+ az account set --subscription <subscription id>
+ ```
+
+- Create a MySQL Flexible Server if you have not already created one using the ```az mysql flexible-server create``` command.
+
+ ```azurecli
+ az mysql flexible-server create --resource-group myresourcegroup --name myservername
+ ```
+
+## Stop a running server
+To stop a server, run ```az mysql flexible-server stop``` command. If you are using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
+
+**Usage:**
+```azurecli
+az mysql flexible-server stop [--name]
+ [--resource-group]
+ [--subscription]
+```
+
+**Example without local context:**
+```azurecli
+az mysql flexible-server stop --resource-group myresourcegroup --name myservername
+```
+
+**Example with local context:**
+```azurecli
+az mysql flexible-server stop
+```
+
+## Start a stopped server
+To start a server, run ```az mysql flexible-server start``` command. If you are using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
+
+**Usage:**
+```azurecli
+az mysql flexible-server start [--name]
+ [--resource-group]
+ [--subscription]
+```
+
+**Example without local context:**
+```azurecli
+az mysql flexible-server start --resource-group myresourcegroup --name myservername
+```
+
+**Example with local context:**
+```azurecli
+az mysql flexible-server start
+```
+
+> [!IMPORTANT]
+> Once the server has started successfully, all management operations are available for the flexible server.
+
+## Restart a server
+To restart a server, run ```az mysql flexible-server restart``` command. If you are using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
+
+**Usage:**
+```azurecli
+az mysql flexible-server restart [--name]
+ [--resource-group]
+ [--subscription]
+```
+
+**Example without local context:**
+```azurecli
+az mysql flexible-server restart --resource-group myresourcegroup --name myservername
+```
+
+**Example with local context:**
+```azurecli
+az mysql flexible-server restart
+```
++
+> [!IMPORTANT]
+> Once the server has restarted successfully, all management operations are now available for the flexible server.
+
+## Next steps
+- Learn more about [networking in Azure Database for MySQL Flexible Server](./concepts-networking.md)
+- [Create and manage Azure Database for MySQL Flexible Server virtual network using Azure portal](./how-to-manage-virtual-network-portal.md).
+
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-restore-server-cli.md
+
+ Title: Restore Azure Database for MySQL - Flexible Server with Azure CLI
+description: This article describes how to perform restore operations in Azure Database for MySQL through the Azure CLI.
++++ Last updated : 04/01/2021++
+# Point-in-time restore of an Azure Database for MySQL - Flexible Server with Azure CLI
++
+> [!IMPORTANT]
+> Azure Database for MySQL - Flexible Server is currently in public preview.
+
+This article provides a step-by-step procedure for performing point-in-time recoveries of a flexible server using backups.
+
+## Prerequisites
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
+- Sign in to your Azure account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the ```az account set``` command.
+ ```azurecli
+ az account set --subscription <subscription id>
+ ```
+
+- If you have not already created a MySQL flexible server, create one by using the ```az mysql flexible-server create``` command.
+
+ ```azurecli
+ az mysql flexible-server create --resource-group myresourcegroup --name myservername
+ ```
+
+## Restore a server from backup to a new server
+
+You can run the following command to restore a server to an earlier point in time using an existing backup.
+
+**Usage**
+```azurecli
+az mysql flexible-server restore --restore-time
+ --source-server
+ [--ids]
+ [--location]
+ [--name]
+ [--no-wait]
+ [--resource-group]
+ [--subscription]
+```
+
+**Example:**
+Restore a server to the point in time ```2021-03-03T13:10:00Z``` as a new server.
+
+```azurecli
+az mysql flexible-server restore \
+--name mydemoserver-restored \
+--resource-group myresourcegroup \
+--restore-time "2021-03-03T13:10:00Z" \
+--source-server mydemoserver
+```
+The time taken to restore depends on the size of the data stored in the server.
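+
+Once the restore has completed, you can verify that the new server exists, for example by listing the flexible servers in the resource group. This is a minimal sketch using the example names from above.
+
+```azurecli
+# List flexible servers in the resource group; the restored server should appear
+az mysql flexible-server list --resource-group myresourcegroup --output table
+
+# Or show the restored server directly
+az mysql flexible-server show --resource-group myresourcegroup --name mydemoserver-restored
+```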
+
+## Perform post-restore tasks
+After the restore is completed, you should perform the following tasks to get your users and applications back up and running:
+
+- If the new server is meant to replace the original server, redirect clients and client applications to the new server.
+- Ensure appropriate VNet rules are in place for users to connect. These rules are not copied over from the original server.
+- Ensure appropriate logins and database-level permissions are in place.
+- Configure alerts as appropriate for the newly restored server.
+
+## Next steps
+Learn more about [business continuity](concepts-business-continuity.md)
+
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-restore-server-portal.md
Title: Restore - Azure portal - Azure Database for MySQL - Flexible Server
-description: This article describes how to perform restore operations in Azure Database for MySQL through the Azure portal.
+ Title: Restore an Azure Database for MySQL Flexible Server with the Azure portal
+description: This article describes how to perform restore operations in Azure Database for MySQL Flexible Server through the Azure portal.
Previously updated : 09/21/2020 Last updated : 04/01/2021
-# Point-in-time restore of a Azure Database for MySQL - Flexible Server (Preview)
+# Point-in-time restore of an Azure Database for MySQL - Flexible Server (Preview) using the Azure portal
> [!IMPORTANT]
Follow these steps to restore your flexible server using an earliest existing ba
3. From the overview page, click **Restore**.
- [Placeholder]
- 4. Restore page will be shown with an option to choose between **Latest restore point** and Custom restore point. 5. Select **Latest restore point**. - 6. Provide a new server name in the **Restore to new server** field. :::image type="content" source="./media/concept-backup-restore/restore-blade-latest.png" alt-text="Earliest restore time":::
Follow these steps to restore your flexible server using an earliest existing ba
2. From the overview page, click **Restore**.
- [Placeholder]
- 3. Restore page will be shown with an option to choose between Earliest restore point and Custom restore point. 4. Choose **Custom restore point**.
Follow these steps to restore your flexible server using an earliest existing ba
6. Provide a new server name in the **Restore to new server** field.
-6. Provide a new server name in the **Restore to new server** field.
-
+6. Provide a new server name in the **Restore to new server** field.
+ :::image type="content" source="./media/concept-backup-restore/restore-blade-custom.png" alt-text="view overview":::
-
+ 7. Click **OK**. 8. A notification will be shown that the restore operation has been initiated.
-## Next steps
-Placeholder
+## Perform post-restore tasks
+After the restore is completed, you should perform the following tasks to get your users and applications back up and running:
+
+- If the new server is meant to replace the original server, redirect clients and client applications to the new server.
+- Ensure appropriate VNet rules are in place for users to connect. These rules are not copied over from the original server.
+- Ensure appropriate logins and database level permissions are in place.
+- Configure alerts as appropriate for the newly restored server.
++
+## Next steps
+Learn more about [business continuity](concepts-business-continuity.md)
mysql Tutorial Deploy Wordpress On Aks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/tutorial-deploy-wordpress-on-aks.md Binary files differ
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/policy-reference.md
Title: Built-in policy definitions for Azure Database for MySQL description: Lists Azure Policy built-in policy definitions for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MySQL description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
networking Architecture Guides https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/fundamentals/architecture-guides.md
The following table includes articles that describe how to deploy your applicati
|[IaaS: Web application with relational database](/azure/architecture/high-availability/ref-arch-iaas-web-and-db) | Describes how to use resources spread across multiple zones to provide a high availability architecture for hosting an Infrastructure as a Service (IaaS) web application and SQL Server database. | |[Sharing location in real time using low-cost serverless Azure services](/azure/architecture/example-scenario/signalr/#azure-front-door) | Uses Azure Front Door to provide higher availability for your applications than deploying to a single region. If a regional outage affects the primary region, you can use Front Door to fail over to the secondary region. | |[Highly available network virtual appliances](/azure/architecture/reference-architectures/dmz/nva-ha) | Shows how to deploy a set of network virtual appliances (NVAs) for high availability in Azure. |
+|[Multi-region load balancing with Traffic Manager and Application Gateway](/azure/architecture/high-availability/reference-architecture-traffic-manager-application-gateway) | Describes how to deploy resilient multi-tier applications in multiple Azure regions, in order to achieve availability and a robust disaster recovery infrastructure. |
## Secure your network resources
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
networking Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
openshift Howto Create A Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-create-a-backup.md Binary files differ
openshift Responsibility Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/responsibility-matrix.md
+
+ Title: Azure Red Hat OpenShift Responsibility Assignment Matrix
+description: Learn about the ownership of responsibilities for the operation of an Azure Red Hat OpenShift cluster
++ Last updated : 4/12/2021++
+keywords: aro, openshift, az aro, red hat, cli, RACI, support
++
+# Overview of responsibilities for Azure Red Hat OpenShift
+
+This document outlines the responsibilities of Microsoft, Red Hat, and customers for Azure Red Hat OpenShift clusters. For more information about Azure Red Hat OpenShift and its components, see the Azure Red Hat OpenShift Service Definition.
+
+Microsoft and Red Hat manage the Azure Red Hat OpenShift service, and the customer shares responsibility for the functionality of their cluster. Azure Red Hat OpenShift clusters are hosted on Azure resources in customer Azure subscriptions, but they are accessed remotely. Underlying platform and data security is owned by Microsoft and Red Hat.
+
+## Overview
+<table>
+ <tr>
+ <td><strong>Resource</strong>
+ </td>
+ <td><strong><a href="#incident-and-operations-management">Incident and Operations Management</a></strong>
+ </td>
+ <td><strong><a href="#change-management">Change Management</a></strong>
+ </td>
+ <td><strong><a href="#identity-and-access-management">Identity and Access Management</a></strong>
+ </td>
+ <td><strong><a href="#security-and-regulation-compliance">Security and Regulation Compliance</a></strong>
+ </td>
+ </tr>
+ <tr>
+ <td><a href="#customer-data-and-applications">Customer data</a>
+ </td>
+ <td>Customer
+ </td>
+ <td>Customer
+ </td>
+ <td>Customer
+ </td>
+ <td>Customer
+ </td>
+ </tr>
+ <tr>
+ <td><a href="#customer-data-and-applications">Customer applications</a>
+ </td>
+ <td>Customer
+ </td>
+ <td>Customer
+ </td>
+ <td>Customer
+ </td>
+ <td>Customer
+ </td>
+ </tr>
+ <tr>
+ <td><a href="#customer-data-and-applications">Developer services </a>
+ </td>
+ <td>Customer
+ </td>
+ <td>Customer
+ </td>
+ <td>Customer
+ </td>
+ <td>Customer
+ </td>
+ </tr>
+ <tr>
+ <td>Platform monitoring
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ </tr>
+ <tr>
+ <td>Logging
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Shared
+ </td>
+ <td>Shared
+ </td>
+ <td>Shared
+ </td>
+ </tr>
+ <tr>
+ <td>Application networking
+ </td>
+ <td>Shared
+ </td>
+ <td>Shared
+ </td>
+ <td>Shared
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ </tr>
+ <tr>
+ <td>Cluster networking
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Shared
+ </td>
+ <td>Shared
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ </tr>
+ <tr>
+ <td>Virtual networking
+ </td>
+ <td>Shared
+ </td>
+ <td>Shared
+ </td>
+ <td>Shared
+ </td>
+ <td>Shared
+ </td>
+ </tr>
+ <tr>
+ <td>Control plane nodes
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ </tr>
+ <tr>
+ <td>Worker nodes
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ </tr>
+ <tr>
+ <td>Cluster Version
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Shared
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ </tr>
+ <tr>
+ <td>Capacity Management
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Shared
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ </tr>
+ <tr>
+ <td>Virtual Storage
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ </tr>
+ <tr>
+ <td>Physical Infrastructure and Security
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ <td>Microsoft and Red Hat
+ </td>
+ </tr>
+</table>
++
+Table 1. Responsibilities by resource
++
+## Tasks for shared responsibilities by area
+
+### Incident and operations management
+
+The customer and Microsoft and Red Hat share responsibility for the monitoring and maintenance of an Azure Red Hat OpenShift cluster. The customer is responsible for incident and operations management of [customer application data](#customer-data-and-applications) and any custom networking the customer may have configured.
+
+<table>
+ <tr>
+ <td><strong>Resource</strong>
+ </td>
+ <td><strong>Microsoft and Red Hat responsibilities</strong>
+ </td>
+ <td><strong>Customer responsibilities</strong>
+ </td>
+ </tr>
+ <tr>
+ <td>Application networking
+ </td>
+ <td>
+<ul>
+
+<li>Monitor cloud load balancer(s) and native OpenShift router service, and respond to alerts.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Monitor health of service load balancer endpoints.
+
+<li>Monitor health of application routes, and the endpoints behind them.
+
+<li>Report outages to Microsoft and Red Hat.
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Virtual networking
+ </td>
+ <td>
+<ul>
+
+<li>Monitor cloud load balancers, subnets, and Azure cloud components necessary for default platform networking, and respond to alerts.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Monitor network traffic that is optionally configured via VNet to VNet connection, VPN connection, or Private Link connection for potential issues or security threats.
+</li>
+</ul>
+ </td>
+ </tr>
+</table>
++
+Table 2. Shared responsibilities for incident and operations management
++
+### Change management
+
+Microsoft and Red Hat are responsible for enabling changes to the cluster infrastructure and services that the customer will control, as well as maintaining versions available for the master nodes, infrastructure services, and worker nodes. The customer is responsible for initiating infrastructure changes and installing and maintaining optional services and networking configurations on the cluster, as well as all changes to customer data and customer applications.
++
+<table>
+ <tr>
+ <td><strong>Resource</strong>
+ </td>
+ <td><strong>Microsoft and Red Hat responsibilities</strong>
+ </td>
+ <td><strong>Customer responsibilities</strong>
+ </td>
+ </tr>
+ <tr>
+ <td>Logging
+ </td>
+ <td>
+<ul>
+
+<li>Centrally aggregate and monitor platform audit logs.
+
+<li>Provide documentation for the customer to enable application logging using Log Analytics through Azure Monitor for containers.
+
+<li>Provide audit logs upon customer request.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Install the optional default application logging operator on the cluster.
+
+<li>Install, configure, and maintain any optional app logging solutions, such as logging sidecar containers or third-party logging applications.
+
+<li>Tune size and frequency of application logs being produced by customer applications if they are affecting the stability of the cluster.
+
+<li>Request platform audit logs through a support case for researching specific incidents.
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Application networking
+ </td>
+ <td>
+<ul>
+
+<li>Set up public cloud load balancers
+
+<li>Set up native OpenShift router service. Provide the ability to set the router as private and add up to one additional router shard.
+
+<li>Install, configure, and maintain OpenShift SDN components for default internal pod traffic.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Configure non-default pod network permissions for project and pod networks, pod ingress, and pod egress using NetworkPolicy objects.
+
+<li>Request and configure any additional service load balancers for specific services.
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Cluster networking
+ </td>
+ <td>
+<ul>
+
+<li>Set up cluster management components, such as public or private service endpoints and necessary integration with virtual networking components.
+
+<li>Set up internal networking components required for internal cluster communication between worker and master nodes.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Provide optional non-default IP address ranges for machine CIDR, service CIDR, and pod CIDR if needed through OpenShift Cluster Manager when the cluster is provisioned.
+
+<li>Request that the API service endpoint be made public or private on cluster creation or after cluster creation through Azure CLI.
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Virtual networking
+ </td>
+ <td>
+<ul>
+
+<li>Set up and configure virtual networking components required to provision the cluster, including virtual private cloud, subnets, load balancers, internet gateways, NAT gateways, etc.
+
+<li>Provide the ability for the customer to manage VPN connectivity with on-premises resources, VNet to VNet connectivity, and Private Link connectivity as required through OpenShift Cluster Manager.
+
+<li>Enable customers to create and deploy public cloud load balancers for use with service load balancers.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Set up and maintain optional public cloud networking components, such as VNet to VNet connection, VPN connection, or Private Link connection.
+
+<li>Request and configure any additional service load balancers for specific services.
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Cluster Version
+ </td>
+ <td>
+<ul>
+
+<li>Communicate schedule and status of upgrades for minor and maintenance versions
+
+<li>Publish changelogs and release notes for minor and maintenance upgrades
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Initiate Upgrade of cluster
+
+<li>Test customer applications on minor and maintenance versions to ensure compatibility
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Capacity Management
+ </td>
+ <td>
+<ul>
+
+<li>Monitor utilization of control plane (master nodes)
+
+<li>Scale and/or resize control plane nodes to maintain quality of service
+
+<li>Monitor utilization of customer resources including Network, Storage, and Compute capacity. Where autoscaling features are not enabled, alert the customer about any changes required to cluster resources (for example, new compute nodes to scale, additional storage, etc.)
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Use the provided OpenShift Cluster Manager controls to add or remove additional worker nodes as required.
+
+<li>Respond to Microsoft and Red Hat notifications regarding cluster resource requirements.
+</li>
+</ul>
+ </td>
+ </tr>
+</table>
++
+Table 3. Shared responsibilities for change management
++
+### Identity and Access Management
+
+Identity and Access management includes all responsibilities for ensuring that only proper individuals have access to cluster, application, and infrastructure resources. This includes tasks such as providing access control mechanisms, authentication, authorization, and managing access to resources.
++
+<table>
+ <tr>
+ <td><strong>Resource</strong>
+ </td>
+ <td><strong>Microsoft and Red Hat responsibilities</strong>
+ </td>
+ <td><strong>Customer responsibilities</strong>
+ </td>
+ </tr>
+ <tr>
+ <td>Logging
+ </td>
+ <td>
+<ul>
+
+<li>Adhere to an industry standards-based tiered internal access process for platform audit logs.
+
+<li>Provide native OpenShift RBAC capabilities.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Configure OpenShift RBAC to control access to projects and by extension a project's application logs.
+
+<li>For third-party or custom application logging solutions, the customer is responsible for access management.
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Application networking
+ </td>
+ <td>
+<ul>
+
+<li>Provide native OpenShift RBAC and dedicated-admin capabilities.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Configure OpenShift dedicated-admins and RBAC to control access to route configuration as required.
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Cluster networking
+ </td>
+ <td>
+<ul>
+
+<li>Provide native OpenShift RBAC and dedicated-admin capabilities.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Manage Red Hat organization membership of Red Hat accounts.
+
+<li>Manage Org Admins for Red Hat organization to grant access to OpenShift Cluster Manager.
+
+<li>Configure OpenShift dedicated-admins and RBAC to control access to route configuration as required.
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Virtual networking
+ </td>
+ <td>
+<ul>
+
+<li>Provide customer access controls through OpenShift Cluster Manager.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Manage optional user access to public cloud components through OpenShift Cluster Manager.
+</li>
+</ul>
+ </td>
+ </tr>
+</table>
++
+Table 4. Shared responsibilities for identity and access management
++
+### Security and regulation compliance
+
+Security and compliance includes any responsibilities and controls that ensure compliance with relevant laws, policies, and regulations.
++
+<table>
+ <tr>
+ <td><strong>Resource</strong>
+ </td>
+ <td><strong>Microsoft and Red Hat responsibilities</strong>
+ </td>
+ <td><strong>Customer responsibilities</strong>
+ </td>
+ </tr>
+ <tr>
+ <td>Logging
+ </td>
+ <td>
+<ul>
+
+<li>Send cluster audit logs to a Microsoft and Red Hat SIEM to analyze for security events. Retain audit logs for a defined period of time to support forensic analysis.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Analyze application logs for security events. Send application logs to an external endpoint through logging sidecar containers or third-party logging applications if longer retention is required than is offered by the default logging stack.
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Virtual networking
+ </td>
+ <td>
+<ul>
+
+<li>Monitor virtual networking components for potential issues and security threats.
+
+<li>Use additional public Microsoft and Red Hat Azure tools for additional monitoring and protection.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Monitor optionally configured virtual networking components for potential issues and security threats.
+
+<li>Configure any necessary firewall rules or data center protections as required.
+</li>
+</ul>
+ </td>
+ </tr>
+</table>
++
+Table 5. Shared responsibilities for security and regulation compliance
++
+## Customer responsibilities when using Azure Red Hat OpenShift
++
+### Customer data and applications
+
+The customer is responsible for the applications, workloads, and data that they deploy to Azure Red Hat OpenShift. However, Microsoft and Red Hat provide various tools to help the customer manage data and applications on the platform.
++
+<table>
+ <tr>
+ <td><strong>Resource</strong>
+ </td>
+ <td><strong>How Microsoft and Red Hat helps</strong>
+ </td>
+ <td><strong>Customer responsibilities</strong>
+ </td>
+ </tr>
+ <tr>
+ <td>Customer Data
+ </td>
+ <td>
+<ul>
+
+<li>Maintain platform-level standards for data encryption as defined by industry security and compliance standards.
+
+<li>Provide OpenShift components to help manage application data, such as secrets.
+
+<li>Enable integration with third-party data services (such as Azure SQL) to store and manage data outside of the cluster and/or Microsoft and Red Hat Azure.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Maintain responsibility for all customer data stored on the platform and how customer applications consume and expose this data.
+
+<li>Etcd encryption
+</li>
+</ul>
+ </td>
+ </tr>
+ <tr>
+ <td>Customer Applications
+ </td>
+ <td>
+<ul>
+
+<li>Provision clusters with OpenShift components installed so that customers can access the OpenShift and Kubernetes APIs to deploy and manage containerized applications.
+
+<li>Provide access to OpenShift APIs that a customer can use to set up Operators to add community, third-party, Microsoft and Red Hat, and Red Hat services to the cluster.
+
+<li>Provide storage classes and plug-ins to support persistent volumes for use with customer applications.
+</li>
+</ul>
+ </td>
+ <td>
+<ul>
+
+<li>Maintain responsibility for customer and third-party applications, data, and their complete lifecycle.
+
+<li>If a customer adds Red Hat, community, third party, their own, or other services to the cluster by using Operators or external images, the customer is responsible for these services and for working with the appropriate provider (including Red Hat) to troubleshoot any issues.
+
+<li>Use the provided tools and features to <a href="https://docs.openshift.com/dedicated/4/architecture/understanding-development.html#application-types">configure and deploy</a>; <a href="https://docs.openshift.com/dedicated/4/applications/deployments/deployment-strategies.html">keep up-to-date</a>; <a href="https://docs.openshift.com/dedicated/4/applications/working-with-quotas.html">set up resource requests and limits</a>; <a href="https://docs.openshift.com/dedicated/4/getting_started/scaling-your-cluster.html">size the cluster to have enough resources to run apps</a>; <a href="https://docs.openshift.com/dedicated/4/administering_a_cluster/dedicated-admin-role.html#dedicated-admin-granting-permissions_dedicated-administrator">set up permissions</a>; integrate with other services; <a href="https://docs.openshift.com/dedicated/4/openshift_images/images-understand.html">manage any image streams or templates that the customer deploys</a>; <a href="https://docs.openshift.com/dedicated/4/cloud_infrastructure_access/dedicated-aws-private-cluster.html">externally serve</a>; save, back up, and restore data; and otherwise manage their highly available and resilient workloads.
+
+<li>Maintain responsibility for monitoring the applications run on Azure Red Hat OpenShift; including installing and operating software to gather metrics and create alerts.
+</li>
+</ul>
+ </td>
+ </tr>
+</table>
++
+Table 7. Customer responsibilities for customer data, customer applications, and services
openshift Tutorial Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/tutorial-delete-cluster.md Binary files differ
postgresql Concepts Hyperscale Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-backup.md
size, user roles, PostgreSQL version, and version of the Citus extension.
Firewall settings and PostgreSQL server parameters are not preserved from the original server group; they are reset to default values. The firewall will prevent all connections. You will need to manually adjust these settings after
-restore.
-
-> [!IMPORTANT]
-> You'll need to open a support request to perform point-in-time restore of
-> your Hyperscale (Citus) cluster.
-
-### Post-restore tasks
-
-After a restore from either recovery mechanism, you should do the
-following to get your users and applications back up and running:
-
-* If the new server is meant to replace the original server, redirect clients
- and client applications to the new server
-* Ensure appropriate server-level firewall is in place for
- users to connect. These rules aren't copied from the original server group.
-* Adjust PostgreSQL server parameters as needed. The parameters aren't copied
- from the original server group.
-* Ensure appropriate logins and database level permissions are in place
-* Configure alerts, as appropriate
+restore. In general, see our list of suggested [post-restore
+tasks](howto-hyperscale-restore-portal.md#post-restore-tasks).
## Next steps
+* See the steps to [restore a server group](howto-hyperscale-restore-portal.md)
+ in the Azure portal.
* Learn about [Azure availability zones](../availability-zones/az-overview.md).
-* SetΓÇ»[suggested alerts](./howto-hyperscale-alert-on-metric.md#suggested-alerts) on Hyperscale (Citus) server groups.
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-maintenance.md
When specifying preferences for the maintenance schedule, you can pick a day of
> > However, in case of a critical emergency update such as a severe vulnerability, the notification window could be shorter than five days. The critical update may be applied to your server even if a successful scheduled maintenance was performed in the last 30 days.
-You can update scheduling settings at any time. If there is a maintenance scheduled for your Flexible server and you update scheduling preferences, the current event will proceed as scheduled and the scheduling settings change will become effective upon its successful completion.
+You can update scheduling settings at any time. If there is maintenance scheduled for your flexible server and you update scheduling preferences, the current rollout will proceed as scheduled, and the change to your scheduling settings will become effective upon its successful completion, applying to the next scheduled maintenance.
+
+You can define a system-managed schedule or a custom schedule for each flexible server in your Azure subscription.
+* With a custom schedule, you can specify the maintenance window for the server by choosing the day of the week and a one-hour time window.
+* With a system-managed schedule, the system will pick any one-hour window between 11pm and 7am in your server region's local time.
+
+As part of rolling out changes, we apply the updates to servers configured with the system-managed schedule first, followed by servers with a custom schedule after a minimum gap of seven days within a given region. If you want to receive early updates on your fleet of development and test servers, we recommend that you configure the system-managed schedule for servers used in development and test environments. This allows you to receive the latest update first in your dev/test environment for testing and evaluation. If you encounter any behavior or breaking changes, you will have time to address them before the same update is rolled out to production servers that use a custom schedule. The update starts to roll out to custom-schedule flexible servers after seven days and is applied to your server in the defined maintenance window. At this time, there is no option to defer the update after the notification has been sent. The custom schedule is recommended for production environments only.
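+
+As an illustration, the maintenance schedule can also be set from the Azure CLI when updating a server. The ```--maintenance-window``` parameter and its ```Day:Hour:Minute``` value format shown below are assumptions that may differ by CLI version; check ```az postgres flexible-server update --help``` for the exact syntax.
+
+```azurecli
+# Assumed syntax: set a custom maintenance window of Monday at 01:30 (server region time)
+az postgres flexible-server update --resource-group myresourcegroup --name myservername --maintenance-window "Mon:1:30"
+
+# Assumed syntax: switch back to a system-managed schedule
+az postgres flexible-server update --resource-group myresourcegroup --name myservername --maintenance-window "Disabled"
+```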
+
+In rare cases, a maintenance event can be canceled by the system or may fail to complete successfully. If the update fails, it is reverted, and the previous version of the binaries is restored. In such failed update scenarios, you may still experience a restart of the server during the maintenance window. If the update is canceled or fails, the system creates a notification about the canceled or failed maintenance event. The next attempt to perform maintenance will be scheduled as per your current scheduling settings, and you will receive a notification about it five days in advance.
-If maintenance event is canceled by the system or fails to complete successfully, the system will create a notification about canceled or failed maintenance event respectively. The next attempt to perform maintenance will be scheduled as per current scheduling settings and you will receive notification about it five days in advance.
## Next steps
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/connect-azure-cli.md Binary files differ
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/tutorial-django-aks-database.md Binary files differ
postgresql Howto Hyperscale Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-restore-portal.md
+
+ Title: Restore - Hyperscale (Citus) - Azure Database for PostgreSQL - Azure portal
+description: This article describes how to perform restore operations in Azure Database for PostgreSQL - Hyperscale (Citus) through the Azure portal.
+++++ Last updated : 04/13/2021++
+# Point-in-time restore of a Hyperscale (Citus) server group
+
+This article provides step-by-step procedures to perform [point-in-time
+recoveries](concepts-hyperscale-backup.md#point-in-time-restore-pitr) for a
+Hyperscale (Citus) server group using backups. You can restore either to the
+earliest backup or to a custom restore point within your retention period.
+
+## Restoring to the earliest restore point
+
+Follow these steps to restore your Hyperscale (Citus) server group to its
+earliest existing backup.
+
+1. In the [Azure portal](https://portal.azure.com/), choose the server group
+ that you want to restore.
+
+2. Click **Overview** from the left panel and click **Restore**.
+
+ > [!IMPORTANT]
+ > If the **Restore** button is not yet present for your server group,
+ > please open an Azure support request.
+
+3. The restore page will ask you to choose between the **Earliest** and a
+ **Custom** restore point, and will display the earliest date.
+
+4. Select **Earliest restore point**.
+
+5. Provide a new server group name in the **Restore to new server** field. The
+ other fields (subscription, resource group, and location) are displayed but
+ not editable.
+
+6. Click **OK**.
+
+7. A notification will be shown that the restore operation has been initiated.
+
+Finally, follow the [post-restore tasks](#post-restore-tasks).
+
+## Restoring to a custom restore point
+
+Follow these steps to restore your Hyperscale (Citus) server group to a date
+and time of your choosing.
+
+1. In the [Azure portal](https://portal.azure.com/), choose the server group
+ that you want to restore.
+
+2. Click **Overview** from the left panel and click **Restore**.
+
+ > [!IMPORTANT]
+ > If the **Restore** button is not yet present for your server group,
+ > please open an Azure support request.
+
+3. The restore page will ask you to choose between the **Earliest** and a
+ **Custom** restore point, and will display the earliest date.
+
+4. Choose **Custom restore point**.
+
+5. Select date and time for **Restore point (UTC)**, and provide a new server
+ group name in the **Restore to new server** field. The other fields
+ (subscription, resource group, and location) are displayed but not editable.
+
+6. Click **OK**.
+
+7. A notification will be shown that the restore operation has been
+ initiated.
+
+Finally, follow the [post-restore tasks](#post-restore-tasks).
+
+## Post-restore tasks
+
+After a restore, you should do the following to get your users and applications
+back up and running:
+
+* If the new server is meant to replace the original server, redirect clients
+ and client applications to the new server
+* Ensure an appropriate server-level firewall is in place for
+ users to connect. These rules aren't copied from the original server group.
+* Adjust PostgreSQL server parameters as needed. The parameters aren't copied
+ from the original server group.
+* Ensure appropriate logins and database level permissions are in place.
+* Configure alerts, as appropriate.
+
+## Next steps
+
+* Learn more about [backup and restore](concepts-hyperscale-backup.md) in
+ Hyperscale (Citus).
+* Set [suggested
+ alerts](./howto-hyperscale-alert-on-metric.md#suggested-alerts) on Hyperscale
+ (Citus) server groups.
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/policy-reference.md
Title: Built-in policy definitions for Azure Database for PostgreSQL description: Lists Azure Policy built-in policy definitions for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-overview.md
A private link resource is the destination target of a given private endpoint. T
|**Azure Event Grid** | Microsoft.EventGrid/topics | topic | |**Azure Event Grid** | Microsoft.EventGrid/domains | domain | |**Azure App Service** | Microsoft.Web/sites | sites |
+|**Azure App Service Slots** | Microsoft.Web/sites | sites-`<slot name>` |
|**Azure Machine Learning** | Microsoft.MachineLearningServices/workspaces | amlworkspace | |**SignalR** | Microsoft.SignalRService/SignalR | signalR | |**Azure Monitor** | Microsoft.Insights/privateLinkScopes | azuremonitor |
private-link Tutorial Private Endpoint Sql Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/tutorial-private-endpoint-sql-cli.md Binary files differ
role-based-access-control Custom Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/custom-roles.md
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group, subscription, and resource group scopes.
+If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group (in preview only), subscription, and resource group scopes.
Custom roles can be shared between subscriptions that trust the same Azure AD directory. There is a limit of **5,000** custom roles per directory. (For Azure Germany and Azure China 21Vianet, the limit is 2,000 custom roles.) Custom roles can be created using the Azure portal, Azure PowerShell, Azure CLI, or the REST API.
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
role-based-access-control Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure RBAC description: Lists Azure Policy Regulatory Compliance controls available for Azure role-based access control (Azure RBAC). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
search Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Search description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
security-center Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/policy-reference.md
Title: Built-in policy definitions for Azure Security Center description: Lists Azure Policy built-in policy definitions for Azure Security Center. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
service-bus-messaging Deprecate Service Bus Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/deprecate-service-bus-management.md
Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subs
| **EventHubsCrud-ListEventHubsAsync**<br/>[List Event Hubs](/rest/api/eventhub/list-event-hubs)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/eventhubs?$skip={skip}&$top={top}``` | &nbsp; | [list](/rest/api/servicebus/stable/eventhubs/listbynamespace) | &nbsp; | | **EventHubsCrud-GetEventHubAsync**<br/>[Get Event Hubs](/rest/api/eventhub/get-event-hub)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/eventhubs/{eventHubPath}``` | &nbsp; | [get](/rest/api/eventhub/get-event-hub) | &nbsp; | | **NamespaceAuthorizationRules-DeleteNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay<br/>```DELETE https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` | [deleteauthorizationrule](/rest/api/servicebus/stable/namespaces%20-%20authorization%20rules/deleteauthorizationrule) | [deleteauthorizationrule](/rest/api/eventhub/stable/authorization%20rules%20-%20namespaces/deleteauthorizationrule) | [deleteauthorizationrule](/rest/api/relay/namespaces/deleteauthorizationrule) |
-| **NamespaceAuthorizationRules-GetNamespaceAuthorizationRulesAsync**<br/>Service Bus/EventHub/Relay<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules``` | [listauthorizationrules](/rest/api/relay/namespaces/listauthorizationrules) | [listauthorizationrules](/rest/api/eventhub/stable/authorization%20rules%20-%20namespaces/listauthorizationrules) | [listauthorizationrules](/rest/api/relay/namespaces/listauthorizationrules) |
+| **NamespaceAuthorizationRules-GetNamespaceAuthorizationRulesAsync**<br/>Service Bus/EventHub/Relay<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules``` | [listauthorizationrules](/rest/api/servicebus/stable/namespaces%20-%20authorization%20rules/listauthorizationrules) | [listauthorizationrules](/rest/api/eventhub/stable/authorization%20rules%20-%20namespaces/listauthorizationrules) | [listauthorizationrules](/rest/api/relay/namespaces/listauthorizationrules) |
| **NamespaceAvailability-IsNamespaceAvailable**<br/>[Service Bus Namespace Availability](/rest/api/servicebus/check-namespace-availability)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/CheckNamespaceAvailability/?namespace=<namespaceValue>``` | [checknameavailability](/rest/api/servicebus/stable/namespaces%20-%20checkname%20availability/checknameavailability) | [checknameavailability](/rest/api/eventhub/stable/check%20name%20availability%20-%20namespaces/checknameavailability) | [checknameavailability](/rest/api/relay/namespaces/checknameavailability) | | **Namespaces-CreateOrUpdateNamespaceAsync**<br/>Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdate](/rest/api/eventhub/stable/namespaces/createorupdate) | [createorupdate](/rest/api/relay/namespaces/createorupdate) | | **Topics-GetTopicAsync**<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/topics/{topicPath}``` | [get](/rest/api/servicebus/stable/topics/get) | &nbsp; | &nbsp; |
service-bus-messaging Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/duplicate-detection.md
Title: Azure Service Bus duplicate message detection | Microsoft Docs description: This article explains how you can detect duplicates in Azure Service Bus messages. The duplicate message can be ignored and dropped. Previously updated : 01/13/2021 Last updated : 04/14/2021 # Duplicate detection
The *MessageId* can always be some GUID, but anchoring the identifier to the bus
## Enable duplicate detection
-In the portal, the feature is turned on during entity creation with the **Enable duplicate detection** check box, which is off by default. The setting for creating new topics is equivalent.
+Apart from just enabling duplicate detection, you can also configure the size of the duplicate detection history time window during which message-ids are retained.
+This value defaults to 10 minutes for queues and topics, with a minimum value of 20 seconds and a maximum value of 7 days.
+
+Enabling duplicate detection and the size of the window directly impact the queue (and topic) throughput, since all recorded message-ids must be matched against the newly submitted message identifier.
+
+Keeping the window small means that fewer message-ids must be retained and matched, and throughput is impacted less. For high throughput entities that require duplicate detection, you should keep the window as small as possible.
+
+### Using the portal
+
+In the portal, the duplicate detection feature is turned on during entity creation with the **Enable duplicate detection** check box, which is off by default. The setting for creating new topics is equivalent.
![Screenshot of the Create queue dialog box with the Enable duplicate detection option selected and outlined in red.][1] > [!IMPORTANT] > You can't enable/disable duplicate detection after the queue is created. You can only do so at the time of creating the queue.
-Programmatically, you set the flag with the [QueueDescription.requiresDuplicateDetection](/dotnet/api/microsoft.servicebus.messaging.queuedescription.requiresduplicatedetection#Microsoft_ServiceBus_Messaging_QueueDescription_RequiresDuplicateDetection) property on the full framework .NET API. With the Azure Resource Manager API, the value is set with the [queueProperties.requiresDuplicateDetection](/azure/templates/microsoft.servicebus/namespaces/queues#property-values) property.
-
-The duplicate detection time history defaults to 10 minutes for queues and topics, with a minimum value of 20 seconds to maximum value of 7 days. You can change this setting in the queue and topic properties window in the Azure portal.
+The duplicate detection history time window can be changed in the queue and topic properties window in the Azure portal.
![Screenshot of the Service Bus feature with the Properties setting highlighted and the Duplicate detection history option outlined in red.][2]
-Programmatically, you can configure the size of the duplicate detection window during which message-ids are retained, using the [QueueDescription.DuplicateDetectionHistoryTimeWindow](/dotnet/api/microsoft.servicebus.messaging.queuedescription.duplicatedetectionhistorytimewindow#Microsoft_ServiceBus_Messaging_QueueDescription_DuplicateDetectionHistoryTimeWindow) property with the full .NET Framework API. With the Azure Resource Manager API, the value is set with the [queueProperties.duplicateDetectionHistoryTimeWindow](/azure/templates/microsoft.servicebus/namespaces/queues#property-values) property.
+### Using SDKs
-Enabling duplicate detection and the size of the window directly impact the queue (and topic) throughput, since all recorded message-ids must be matched against the newly submitted message identifier.
+You can use any of our SDKs across .NET, Java, JavaScript, Python, and Go to enable the duplicate detection feature when creating queues and topics. You can also change the duplicate detection history time window.
+The properties to update when creating queues and topics to achieve this are:
+- `RequiresDuplicateDetection`
+- `DuplicateDetectionHistoryTimeWindow`
-Keeping the window small means that fewer message-ids must be retained and matched, and throughput is impacted less. For high throughput entities that require duplicate detection, you should keep the window as small as possible.
+Note that while the property names are given in Pascal case here, the JavaScript and Python SDKs use camel case and snake case, respectively.
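+
+For example, a minimal sketch with the .NET SDK (the `Azure.Messaging.ServiceBus.Administration` namespace) is shown below; the queue name and connection string are placeholders.
+
+```csharp
+using System;
+using Azure.Messaging.ServiceBus.Administration;
+
+var adminClient = new ServiceBusAdministrationClient("<connection-string>");
+
+var options = new CreateQueueOptions("myqueue")
+{
+    // Duplicate detection must be enabled at creation time; it can't be changed later.
+    RequiresDuplicateDetection = true,
+    // Keep the history window as small as your scenario allows to limit the throughput impact.
+    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
+};
+
+await adminClient.CreateQueueAsync(options);
+```
+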
## Next steps
service-bus-messaging Message Sequencing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/message-sequencing.md
Title: Azure Service Bus message sequencing and timestamps | Microsoft Docs description: This article explains how to preserve sequencing and ordering (with timestamps) of Azure Service Bus messages. Previously updated : 06/23/2020 Last updated : 04/14/2021 # Message sequencing and timestamps
-Sequencing and timestamping are two features that are always enabled on all Service Bus entities and surface through the [SequenceΓÇïNumber](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.sequencenumber) and [EnqueuedTimeUtc](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.enqueuedtimeutc) properties of received or browsed messages.
+Sequencing and timestamping are two features that are always enabled on all Service Bus entities and surface through the `SequenceNumber` and `EnqueuedTimeUtc` properties of received or browsed messages.
For those cases in which absolute order of messages is significant and/or in which a consumer needs a trustworthy unique identifier for messages, the broker stamps messages with a gap-free, increasing sequence number relative to the queue or topic. For partitioned entities, the sequence number is issued relative to the partition.
You can submit messages to a queue or topic for delayed processing; for example,
Scheduled messages do not materialize in the queue until the defined enqueue time. Before that time, scheduled messages can be canceled. Cancellation deletes the message.
-You can schedule messages either by setting the [ScheduledΓÇïEnqueueΓÇïTimeΓÇïUtc](/dotnet/api/microsoft.azure.servicebus.message.scheduledenqueuetimeutc) property when sending a message through the regular send path, or explicitly with the [ScheduleMessageAsync](/dotnet/api/microsoft.azure.servicebus.queueclient.schedulemessageasync#Microsoft_Azure_ServiceBus_QueueClient_ScheduleMessageAsync_Microsoft_Azure_ServiceBus_Message_System_DateTimeOffset_) API. The latter immediately returns the scheduled message's **SequenceNumber**, which you can later use to cancel the scheduled message if needed. Scheduled messages and their sequence numbers can also be discovered using [message browsing](message-browsing.md).
+You can schedule messages using any of our clients in two ways:
+- Use the regular send API, but set the `ScheduledEnqueueTimeUtc` property on the message before sending.
+- Use the schedule message API, passing both the message and the scheduled time. This returns the scheduled message's **SequenceNumber**, which you can later use to cancel the scheduled message if needed.
+
+Scheduled messages and their sequence numbers can also be discovered using [message browsing](message-browsing.md).
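+
+As an illustration, here is a minimal sketch of both approaches using the current .NET SDK (`Azure.Messaging.ServiceBus`); the queue name and connection string are placeholders, and note that in this SDK the property is named `ScheduledEnqueueTime`.
+
+```csharp
+using System;
+using Azure.Messaging.ServiceBus;
+
+await using var client = new ServiceBusClient("<connection-string>");
+ServiceBusSender sender = client.CreateSender("myqueue");
+
+// Option 1: set the scheduled enqueue time on the message and send it through the regular send path.
+var message = new ServiceBusMessage("deferred work")
+{
+    ScheduledEnqueueTime = DateTimeOffset.UtcNow.AddMinutes(30)
+};
+await sender.SendMessageAsync(message);
+
+// Option 2: use the schedule API, which returns the scheduled message's sequence number.
+long sequenceNumber = await sender.ScheduleMessageAsync(
+    new ServiceBusMessage("deferred work"),
+    DateTimeOffset.UtcNow.AddMinutes(30));
+
+// The sequence number can be used to cancel the scheduled message before it is enqueued.
+await sender.CancelScheduledMessageAsync(sequenceNumber);
+```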
The **SequenceNumber** for a scheduled message is only valid while the message is in this state. As the message transitions to the active state, the message is appended to the queue as if had been enqueued at the current instant, which includes assigning a new **SequenceNumber**.
To learn more about Service Bus messaging, see the following topics:
* [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md) * [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md)
-* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
+* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
service-bus-messaging Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Service Bus Messaging description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/31/2021 Last updated : 04/14/2021
service-bus-messaging Service Bus Amqp Protocol Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-amqp-protocol-guide.md
Title: AMQP 1.0 in Azure Service Bus and Event Hubs protocol guide | Microsoft Docs description: Protocol guide to expressions and description of AMQP 1.0 in Azure Service Bus and Event Hubs Previously updated : 06/23/2020 Last updated : 04/14/2021 # AMQP 1.0 in Azure Service Bus and Event Hubs protocol guide
AMQP calls the communicating programs *containers*; those contain *nodes*, which
The network connection is thus anchored on the container. It is initiated by the container in the client role making an outbound TCP socket connection to a container in the receiver role, which listens for and accepts inbound TCP connections. The connection handshake includes negotiating the protocol version, declaring or negotiating the use of Transport Level Security (TLS/SSL), and an authentication/authorization handshake at the connection scope that is based on SASL.
-Azure Service Bus requires the use of TLS at all times. It supports connections over TCP port 5671, whereby the TCP connection is first overlaid with TLS before entering the AMQP protocol handshake, and also supports connections over TCP port 5672 whereby the server immediately offers a mandatory upgrade of connection to TLS using the AMQP-prescribed model. The AMQP WebSockets binding creates a tunnel over TCP port 443 that is then equivalent to AMQP 5671 connections.
+Azure Service Bus always requires the use of TLS. It supports connections over TCP port 5671, whereby the TCP connection is first overlaid with TLS before entering the AMQP protocol handshake, and also supports connections over TCP port 5672 whereby the server immediately offers a mandatory upgrade of connection to TLS using the AMQP-prescribed model. The AMQP WebSockets binding creates a tunnel over TCP port 443 that is then equivalent to AMQP 5671 connections.
After setting up the connection and TLS, Service Bus offers two SASL mechanism options:
Connections, channels, and sessions are ephemeral. If the underlying connection
### AMQP outbound port requirements
-Clients that use AMQP connections over TCP require ports 5671 and 5672 to be opened in the local firewall. Along with these ports, it might be necessary to open additional ports if the [EnableLinkRedirect](/dotnet/api/microsoft.servicebus.messaging.amqp.amqptransportsettings.enablelinkredirect) feature is enabled. `EnableLinkRedirect` is a new messaging feature that helps skip one-hop while receiving messages, thus helping to boost throughput. The client would start communicating directly with the back-end service over port range 104XX as shown in the following image.
+Clients that use AMQP connections over TCP require ports 5671 and 5672 to be opened in the local firewall. Along with these ports, it might be necessary to open extra ports if the [EnableLinkRedirect](/dotnet/api/microsoft.servicebus.messaging.amqp.amqptransportsettings.enablelinkredirect) feature is enabled. `EnableLinkRedirect` is a newer messaging feature that skips one hop when receiving messages, which boosts throughput. With it enabled, the client communicates directly with the back-end service over the 104XX port range, as shown in the following image.
![List of destination ports][4]
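If opening the 104XX range is not possible, the redirect behavior can be disabled on the client so traffic stays on the gateway ports. This is only a sketch against the older WindowsAzure.ServiceBus (Microsoft.ServiceBus.Messaging) client, with placeholder namespace and key values:

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;
using Microsoft.ServiceBus.Messaging.Amqp;

// Sketch only: namespace, key name, and key are hypothetical placeholders.
var settings = new MessagingFactorySettings
{
    TransportType = TransportType.Amqp,
    TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("<key-name>", "<key>"),
    // Keep traffic on ports 5671/5672 instead of redirecting to the 104XX back-end range.
    AmqpTransportSettings = new AmqpTransportSettings { EnableLinkRedirect = false }
};

MessagingFactory factory = MessagingFactory.Create(
    new Uri("sb://<namespace>.servicebus.windows.net/"), settings);
```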
In the simplest case, the sender can choose to send messages "pre-settled," mean
The regular case is that messages are being sent unsettled, and the receiver then indicates acceptance or rejection using the *disposition* performative. Rejection occurs when the receiver cannot accept the message for any reason, and the rejection message contains information about the reason, which is an error structure defined by AMQP. If messages are rejected due to internal errors inside of Service Bus, the service returns extra information inside that structure that can be used for providing diagnostics hints to support personnel if you are filing support requests. You learn more details about errors later.
-A special form of rejection is the *released* state, which indicates that the receiver has no technical objection to the transfer, but also no interest in settling the transfer. That case exists, for example, when a message is delivered to a Service Bus client, and the client chooses to "abandon" the message because it cannot perform the work resulting from processing the message; the message delivery itself is not at fault. A variation of that state is the *modified* state, which allows changes to the message as it is released. That state is not used by Service Bus at present.
+A special form of rejection is the *released* state, which indicates that the receiver has no technical objection to the transfer, but also no interest in settling the transfer. That case exists, for example, when a message is delivered to a Service Bus client, and the client chooses to "abandon" the message because it cannot perform the work resulting from processing the message; the message delivery itself is not at fault. A variation of that state is the *modified* state, which allows changes to the message as it is released. Currently, this state is not used by Service Bus.
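To relate these outcomes to client code: in the Azure.Messaging.ServiceBus receiver API (used here as an assumption, with a placeholder queue name), completing a message settles it as *accepted*, while abandoning it gives it back to the service without settling the work, the case described above via the *released* state:

```csharp
using System;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");
ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>");

ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();

try
{
    // ... process the message ...
    await receiver.CompleteMessageAsync(message);   // settled with the "accepted" outcome
}
catch (Exception)
{
    // The delivery itself is fine, but this client cannot do the work right now.
    await receiver.AbandonMessageAsync(message);    // message becomes available for redelivery
}
```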
The AMQP 1.0 specification defines a further disposition state called *received*, that specifically helps to handle link recovery. Link recovery allows reconstituting the state of a link and any pending deliveries on top of a new connection and session, when the prior connection and session were lost.
The arrows in the following table show the performative flow direction.
| Client | Service Bus | | | |
-| --> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source={entity name},<br/>target={client link ID}<br/>) |Client attaches to entity as receiver |
-| Service Bus replies attaching its end of the link |<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={entity name},<br/>target={client link ID}<br/>) |
+| `--> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source={entity name},<br/>target={client link ID}<br/>)` |Client attaches to entity as receiver |
+| Service Bus replies attaching its end of the link |`<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={entity name},<br/>target={client link ID}<br/>)` |
#### Create message sender | Client | Service Bus | | | |
-| --> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={client link ID},<br/>target={entity name}<br/>) |No action |
-| No action |<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source={client link ID},<br/>target={entity name}<br/>) |
+| `--> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={client link ID},<br/>target={entity name}<br/>)` |No action |
+| No action |`<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source={client link ID},<br/>target={entity name}<br/>)` |
#### Create message sender (error) | Client | Service Bus | | | |
-| --> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={client link ID},<br/>target={entity name}<br/>) |No action |
-| No action |<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source=null,<br/>target=null<br/>)<br/><br/><-- detach(<br/>handle={numeric handle},<br/>closed=**true**,<br/>error={error info}<br/>) |
+| `--> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={client link ID},<br/>target={entity name}<br/>)` |No action |
+| No action |`<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source=null,<br/>target=null<br/>)<br/><br/><-- detach(<br/>handle={numeric handle},<br/>closed=**true**,<br/>error={error info}<br/>)` |
#### Close message receiver/sender | Client | Service Bus | | | |
-| --> detach(<br/>handle={numeric handle},<br/>closed=**true**<br/>) |No action |
-| No action |<-- detach(<br/>handle={numeric handle},<br/>closed=**true**<br/>) |
+| `--> detach(<br/>handle={numeric handle},<br/>closed=**true**<br/>)` |No action |
+| No action |`<-- detach(<br/>handle={numeric handle},<br/>closed=**true**<br/>)` |
#### Send (success) | Client | Service Bus | | | |
-| --> transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,,more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |No action |
-| No action |<-- disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**accepted**<br/>) |
+| `--> transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |No action |
+| No action |`<-- disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**accepted**<br/>)` |
#### Send (error) | Client | Service Bus | | | |
-| --> transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,,more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |No action |
-| No action |<-- disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**rejected**(<br/>error={error info}<br/>)<br/>) |
+| `--> transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |No action |
+| No action |`<-- disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**rejected**(<br/>error={error info}<br/>)<br/>)` |
#### Receive | Client | Service Bus | | | |
-| --> flow(<br/>link-credit=1<br/>) |No action |
-| No action |< transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |
-| --> disposition(<br/>role=**receiver**,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**accepted**<br/>) |No action |
+| `--> flow(<br/>link-credit=1<br/>)` |No action |
+| No action |`< transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |
+| `--> disposition(<br/>role=**receiver**,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**accepted**<br/>)` |No action |
#### Multi-message receive | Client | Service Bus | | | |
-| --> flow(<br/>link-credit=3<br/>) |No action |
-| No action |< transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |
-| No action |< transfer(<br/>delivery-id={numeric handle+1},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |
-| No action |< transfer(<br/>delivery-id={numeric handle+2},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |
-| --> disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID+2},<br/>settled=**true**,<br/>state=**accepted**<br/>) |No action |
+| `--> flow(<br/>link-credit=3<br/>)` |No action |
+| No action |`< transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |
+| No action |`< transfer(<br/>delivery-id={numeric handle+1},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |
+| No action |`< transfer(<br/>delivery-id={numeric handle+2},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |
+| `--> disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID+2},<br/>settled=**true**,<br/>state=**accepted**<br/>)` |No action |
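The same credit-then-transfer-then-disposition flow can be exercised from a raw AMQP 1.0 client. This sketch uses the AMQPNetLite library purely as one possible illustration; the library choice, address, and entity name are assumptions:

```csharp
using System;
using Amqp;   // AMQPNetLite

// Sketch only: namespace, credentials, and queue name are placeholders.
var address = new Address("amqps://<key-name>:<url-encoded-key>@<namespace>.servicebus.windows.net");
var connection = new Connection(address);                                    // open + begin
var session = new Session(connection);
var receiver = new ReceiverLink(session, "receiver-link", "<queue-name>");   // attach(role=receiver)

// Issuing link credit (flow) lets the service transfer messages; Receive() handles this internally.
Message message = receiver.Receive(TimeSpan.FromSeconds(10));                // transfer
if (message != null)
{
    receiver.Accept(message);                                                // disposition(state=accepted)
}

receiver.Close();                                                            // detach(closed=true)
session.Close();
connection.Close();
```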
### Messages The following sections explain which properties from the standard AMQP message sections are used by Service Bus and how they map to the Service Bus API set.
-Any property that application needs to defines should be mapped to AMQP's `application-properties` map.
+Any property that the application needs to define should be mapped to AMQP's `application-properties` map.
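For example, with the Azure.Messaging.ServiceBus client (an assumption here), custom values set on the message surface as entries in the AMQP `application-properties` section:

```csharp
using Azure.Messaging.ServiceBus;

var message = new ServiceBusMessage("order payload");

// Each entry below travels in the AMQP application-properties map; the keys are hypothetical.
message.ApplicationProperties["orderId"] = 42;
message.ApplicationProperties["region"] = "west-europe";
```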
#### header
Any property that application needs to defines should be mapped to AMQP's `appli
| message-id |Application-defined, free-form identifier for this message. Used for duplicate detection. |[MessageId](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) | | user-id |Application-defined user identifier, not interpreted by Service Bus. |Not accessible through the Service Bus API. | | to |Application-defined destination identifier, not interpreted by Service Bus. |[To](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
-| subject |Application-defined message purpose identifier, not interpreted by Service Bus. |[Label](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
+| subject |Application-defined message purpose identifier, not interpreted by Service Bus. |[Label](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
| reply-to |Application-defined reply-path indicator, not interpreted by Service Bus. |[ReplyTo](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) | | correlation-id |Application-defined correlation identifier, not interpreted by Service Bus. |[CorrelationId](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) | | content-type |Application-defined content-type indicator for the body, not interpreted by Service Bus. |[ContentType](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
There are few other service bus message properties, which are not part of AMQP m
| x-opt-sequence-number | Service-defined unique number assigned to a message. | [SequenceNumber](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.sequencenumber) | | x-opt-offset | Service-defined enqueued sequence number of the message. | [EnqueuedSequenceNumber](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.enqueuedsequencenumber) | | x-opt-locked-until | Service-defined. The date and time until which the message will be locked in the queue/subscription. | [LockedUntilUtc](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.lockeduntilutc) |
-| x-opt-deadletter-source | Service-Defined. If the message is received from dead letter queue, the source of the original message. | [DeadLetterSource](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deadlettersource) |
+| x-opt-deadletter-source | Service-defined. If the message is received from the dead letter queue, this is the source of the original message. | [DeadLetterSource](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deadlettersource) |
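On the receive side these service-defined annotations surface as read-only properties. The brief sketch below uses the newer Azure.Messaging.ServiceBus received message type rather than the BrokeredMessage API linked in the table (an assumption), but the mapping is the same:

```csharp
using System;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");
ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>");
ServiceBusReceivedMessage received = await receiver.ReceiveMessageAsync();

Console.WriteLine(received.SequenceNumber);          // x-opt-sequence-number
Console.WriteLine(received.EnqueuedSequenceNumber);  // x-opt-offset
Console.WriteLine(received.LockedUntil);             // x-opt-locked-until
Console.WriteLine(received.DeadLetterSource);        // x-opt-deadletter-source (null unless dead-lettered)
```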
### Transaction capability
This section covers advanced capabilities of Azure Service Bus that are based on
### AMQP management
-The AMQP management specification is the first of the draft extensions discussed in this article. This specification defines a set of protocols layered on top of the AMQP protocol that allow management interactions with the messaging infrastructure over AMQP. The specification defines generic operations such as *create*, *read*, *update*, and *delete* for managing entities inside a messaging infrastructure and a set of query operations.
+The AMQP management specification is the first of the draft extensions discussed in this article. This specification defines a set of protocols layered on top of the AMQP protocol that allows management interactions with the messaging infrastructure over AMQP. The specification defines generic operations such as *create*, *read*, *update*, and *delete* for managing entities inside a messaging infrastructure and a set of query operations.
All those gestures require a request/response interaction between the client and the messaging infrastructure, and therefore the specification defines how to model that interaction pattern on top of AMQP: the client connects to the messaging infrastructure, initiates a session, and then creates a pair of links. On one link, the client acts as sender and on the other it acts as receiver, thus creating a pair of links that can act as a bi-directional channel. | Logical Operation | Client | Service Bus | | | | |
-| Create Request Response Path |--> attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**sender**,<br/>source=**null**,<br/>target="myentity/$management"<br/>) |No action |
-| Create Request Response Path |No action |\<-- attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**receiver**,<br/>source=null,<br/>target="myentity"<br/>) |
-| Create Request Response Path |--> attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**receiver**,<br/>source="myentity/$management",<br/>target="myclient$id"<br/>) | |
-| Create Request Response Path |No action |\<-- attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**sender**,<br/>source="myentity",<br/>target="myclient$id"<br/>) |
+| Create Request Response Path |`--> attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**sender**,<br/>source=**null**,<br/>target="myentity/$management"<br/>)` |No action |
+| Create Request Response Path |No action |`<-- attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**receiver**,<br/>source=null,<br/>target="myentity"<br/>)` |
+| Create Request Response Path |`--> attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**receiver**,<br/>source="myentity/$management",<br/>target="myclient$id"<br/>)` | |
+| Create Request Response Path |No action |`<-- attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**sender**,<br/>source="myentity",<br/>target="myclient$id"<br/>)` |
Having that pair of links in place, the request/response implementation is straightforward: a request is a message sent to an entity inside the messaging infrastructure that understands this pattern. In that request-message, the *reply-to* field in the *properties* section is set to the *target* identifier for the link onto which to deliver the response. The handling entity processes the request, and then delivers the reply over the link whose *target* identifier matches the indicated *reply-to* identifier.
The *name* property identifies the entity with which the token shall be associat
| Token Type | Token Description | Body Type | Notes | | | | | |
-| amqp:jwt |JSON Web Token (JWT) |AMQP Value (string) |Not yet available. |
-| amqp:swt |Simple Web Token (SWT) |AMQP Value (string) |Only supported for SWT tokens issued by AAD/ACS |
-| servicebus.windows.net:sastoken |Service Bus SAS Token |AMQP Value (string) |- |
+| `jwt` |JSON Web Token (JWT) |AMQP Value (string) |Not yet available. |
+| `servicebus.windows.net:sastoken` |Service Bus SAS Token |AMQP Value (string) |- |
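A Service Bus SAS token of the kind referenced by `servicebus.windows.net:sastoken` can be produced with plain HMAC-SHA256 over the URL-encoded resource URI and an expiry timestamp; a minimal sketch, assuming placeholder key name, key, and resource URI:

```csharp
using System;
using System.Globalization;
using System.Net;
using System.Security.Cryptography;
using System.Text;

static string CreateSasToken(string resourceUri, string keyName, string key, TimeSpan ttl)
{
    // Expiry as seconds since the Unix epoch.
    string expiry = DateTimeOffset.UtcNow.Add(ttl).ToUnixTimeSeconds()
        .ToString(CultureInfo.InvariantCulture);

    // The string to sign is the URL-encoded resource URI, a newline, and the expiry.
    string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;

    using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
    string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

    return string.Format(CultureInfo.InvariantCulture,
        "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
        WebUtility.UrlEncode(resourceUri), WebUtility.UrlEncode(signature), expiry, keyName);
}

// Example usage (placeholders):
// string token = CreateSasToken("https://<namespace>.servicebus.windows.net/<entity>",
//                               "<key-name>", "<key>", TimeSpan.FromMinutes(20));
```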
-Tokens confer rights. Service Bus knows about three fundamental rights: "Send" enables sending, "Listen" enables receiving, and "Manage" enables manipulating entities. SWT tokens issued by AAD/ACS explicitly include those rights as claims. Service Bus SAS tokens refer to rules configured on the namespace or entity, and those rules are configured with rights. Signing the token with the key associated with that rule thus makes the token express the respective rights. The token associated with an entity using *put-token* permits t