Updates from: 03/18/2021 04:09:06
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
Previously updated : 03/16/2021 Last updated : 03/17/2021 zone_pivot_groups: b2c-policy-type
When using custom domains, consider the following:
- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-service-limits) for Azure Front Door.
- Azure Front Door is a separate Azure service, so additional charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).
-- Currently, the Azure Front Door [Web Application Firewall](../web-application-firewall/afds/afds-overview.md) feature is not supported.
+- To use Azure Front Door [Web Application Firewall](../web-application-firewall/afds/afds-overview.md), you need to confirm your firewall configuration and rules work correctly with your Azure AD B2C user flows.
- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com* (unless you're using a custom policy and you [block access](#block-access-to-the-default-domain-name)).
- If you have multiple applications, migrate them all to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.
Azure Front Door passes the user's original IP address. This is the IP address t
### Can I use a third-party web application firewall (WAF) with B2C?
-Currently, Azure AD B2C supports a custom domain through the use of Azure Front Door only. Don't add another WAF in front of Azure Front Door.
-
+To use your own web application firewall in front of Azure Front Door, you need to configure and validate that everything works correctly with your Azure AD B2C user flows.
## Next steps
active-directory-b2c Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/faq.md
No, Azure AD Connect is not designed to work with Azure AD B2C. Consider using t
### Can my app open up Azure AD B2C pages within an iFrame?
-No, for security reasons, Azure AD B2C pages cannot be opened within an iFrame. Our service communicates with the browser to prohibit iFrames. The security community in general and the OAUTH2 specification, recommend against using iFrames for identity experiences due to the risk of click-jacking.
+This feature is in public preview. For details, see [Embedded sign-in experience](https://docs.microsoft.com/azure/active-directory-b2c/embedded-login).
### Does Azure AD B2C work with CRM systems such as Microsoft Dynamics?
Yes, see [language customization](language-customization.md). We provide transla
### Can I use my own URLs on my sign-up and sign-in pages that are served by Azure AD B2C? For instance, can I change the URL from contoso.b2clogin.com to login.contoso.com?
-Not currently. This feature is on our roadmap. Verifying your domain in the **Domains** tab in the Azure portal does not accomplish this goal. However, with b2clogin.com, we offer a [neutral top level domain](b2clogin.md), and thus the external appearance can be implemented without the mention of Microsoft.
+This feature is available in public preview. For details, see [Azure AD B2C custom domains](https://docs.microsoft.com/azure/active-directory-b2c/custom-domain?pivots=b2c-user-flow).
### How do I delete my Azure AD B2C tenant?
active-directory-b2c Identity Provider Amazon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-amazon.md
Previously updated : 03/15/2021 Last updated : 03/17/2021 zone_pivot_groups: b2c-policy-type
To enable sign-in for users with an Amazon account in Azure Active Directory B2C
## Add Amazon identity provider to a user flow
+At this point, the Amazon identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Amazon identity provider to a user flow:
+
1. In your Azure AD B2C tenant, select **User flows**.
1. Select the user flow to which you want to add the Amazon identity provider.
1. Under **Social identity providers**, select **Amazon**.
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
Previously updated : 03/15/2021 Last updated : 03/17/2021
If you want to get the `family_name` and `given_name` claims from Azure AD, you
## Add Azure AD identity provider to a user flow
+At this point, the Azure AD identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Azure AD identity provider to a user flow:
+
1. In your Azure AD B2C tenant, select **User flows**.
1. Select the user flow to which you want to add the Azure AD identity provider.
1. Under **Social identity providers**, select **Contoso Azure AD**.
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-facebook.md
Previously updated : 03/15/2021 Last updated : 03/17/2021
To enable sign-in for users with a Facebook account in Azure Active Directory B2
## Add Facebook identity provider to a user flow
+At this point, the Facebook identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Facebook identity provider to a user flow:
+
1. In your Azure AD B2C tenant, select **User flows**.
1. Select the user flow to which you want to add the Facebook identity provider.
1. Under **Social identity providers**, select **Facebook**.
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-github.md
Previously updated : 03/15/2021 Last updated : 03/17/2021
To enable sign-in with a GitHub account in Azure Active Directory B2C (Azure AD
## Add GitHub identity provider to a user flow
+At this point, the GitHub identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the GitHub identity provider to a user flow:
+
1. In your Azure AD B2C tenant, select **User flows**.
1. Select the user flow to which you want to add the GitHub identity provider.
1. Under **Social identity providers**, select **GitHub**.
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-google.md
Previously updated : 03/15/2021 Last updated : 03/17/2021
Enter a **Name** for your application. Enter *b2clogin.com* in the **Authorized
## Add Google identity provider to a user flow
+At this point, the Google identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Google identity provider to a user flow:
+
1. In your Azure AD B2C tenant, select **User flows**.
1. Select the user flow to which you want to add the Google identity provider.
1. Under **Social identity providers**, select **Google**.
active-directory-b2c Identity Provider Linkedin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-linkedin.md
Previously updated : 03/15/2021 Last updated : 03/17/2021
To enable sign-in for users with a LinkedIn account in Azure Active Directory B2
## Add LinkedIn identity provider to a user flow
+At this point, the LinkedIn identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the LinkedIn identity provider to a user flow:
+
1. In your Azure AD B2C tenant, select **User flows**.
1. Select the user flow to which you want to add the LinkedIn identity provider.
1. Under **Social identity providers**, select **LinkedIn**.
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-microsoft-account.md
Previously updated : 03/15/2021 Last updated : 03/17/2021
To enable sign-in for users with a Microsoft account in Azure Active Directory B
## Add Microsoft identity provider to a user flow
+At this point, the Microsoft identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Microsoft identity provider to a user flow:
+
1. In your Azure AD B2C tenant, select **User flows**.
1. Select the user flow to which you want to add the Microsoft identity provider.
1. Under **Social identity providers**, select **Microsoft Account**.
active-directory-b2c Identity Provider Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-salesforce.md
Previously updated : 03/15/2021 Last updated : 03/17/2021
To enable sign-in for users with a Salesforce account in Azure Active Directory
## Add Salesforce identity provider to a user flow
+At this point, the Salesforce identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Salesforce identity provider to a user flow:
+
1. In your Azure AD B2C tenant, select **User flows**.
1. Select the user flow to which you want to add the Salesforce identity provider.
1. Under **Social identity providers**, select **Salesforce**.
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-twitter.md
Previously updated : 03/15/2021 Last updated : 03/17/2021
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
## Add Twitter identity provider to a user flow
+At this point, the Twitter identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Twitter identity provider to a user flow:
+
1. In your Azure AD B2C tenant, select **User flows**.
1. Select the user flow to which you want to add the Twitter identity provider.
1. Under **Social identity providers**, select **Twitter**.
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/session-behavior.md
You can configure the Azure AD B2C session behavior, including:
- **Tenant** - This setting is the default. Using this setting allows multiple applications and user flows in your B2C tenant to share the same user session. For example, once a user signs into an application, the user can also seamlessly sign into another one upon accessing it.
- **Application** - This setting allows you to maintain a user session exclusively for an application, independent of other applications. For example, you can use this setting if you want the user to sign in to Contoso Pharmacy regardless of whether the user is already signed into Contoso Groceries.
- **Policy** - This setting allows you to maintain a user session exclusively for a user flow, independent of the applications using it. For example, if the user has already signed in and completed a multi-factor authentication (MFA) step, the user can be given access to higher-security parts of multiple applications, as long as the session tied to the user flow doesn't expire.
- - **Disabled** - This setting forces the user to run through the entire user flow upon every execution of the policy.
+ - **Suppressed** - This setting forces the user to run through the entire user flow upon every execution of the policy.
- **Keep me signed in (KMSI)** - Extends the session lifetime through the use of a persistent cookie. If this feature is enabled and the user selects it, the session remains active even after the user closes and reopens the browser. The session is revoked only when the user signs out. The KMSI feature only applies to sign-in with local accounts. The KMSI feature takes precedence over the session lifetime.

::: zone pivot="b2c-user-flow"
Upon a sign-out request, Azure AD B2C:
::: zone-end

::: zone pivot="b2c-custom-policy"

3. Attempts to sign out from federated identity providers:
- - OpenId Connect - If the identity provider well-known configuration endpoint specifies an `end_session_endpoint` location.
+ - OpenId Connect - If the identity provider well-known configuration endpoint specifies an `end_session_endpoint` location. The sign-out request doesn't pass the `id_token_hint` parameter. If the federated identity provider requires this parameter, the sign-out request will fail.
 - OAuth2 - If the [identity provider metadata](oauth2-technical-profile.md#metadata) contains the `end_session_endpoint` location.
 - SAML - If the [identity provider metadata](identity-provider-generic-saml.md) contains the `SingleLogoutService` location.
4. Optionally, signs out from other applications. For more information, see the [Single sign-out](#single-sign-out) section.
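The OpenId Connect check above can be sketched in a few lines: a relying party reads the provider's discovery document and looks for an advertised `end_session_endpoint`. This is an illustrative helper (the function name is ours, not part of Azure AD B2C):

```python
import json
from typing import Optional

def find_end_session_endpoint(discovery_doc: str) -> Optional[str]:
    """Return the end_session_endpoint from an OIDC discovery document, if any."""
    config = json.loads(discovery_doc)
    # Providers that support RP-initiated sign-out publish this endpoint
    # in their /.well-known/openid-configuration response.
    return config.get("end_session_endpoint")

doc = '{"issuer": "https://idp.example.com", "end_session_endpoint": "https://idp.example.com/logout"}'
print(find_end_session_endpoint(doc))  # https://idp.example.com/logout
```

If the provider omits the endpoint, the helper returns `None`, which mirrors the conditional behavior described above: no sign-out request is attempted.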
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/customize-application-attributes.md
Previously updated : 02/08/2021 Last updated : 03/17/2021
Applications and systems that support customization of the attribute list includ
- ServiceNow
- Workday to Active Directory / Workday to Azure Active Directory
- SuccessFactors to Active Directory / SuccessFactors to Azure Active Directory
-- Azure Active Directory ([Azure AD Graph API default attributes](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#user-entity) and custom directory extensions are supported)
+- Azure Active Directory ([Azure AD Graph API default attributes](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#user-entity) and custom directory extensions are supported). Learn more about [creating extensions](https://docs.microsoft.com/azure/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping#create-an-extension-attribute-on-a-cloud-only-user) and [known limitations](https://docs.microsoft.com/azure/active-directory/app-provisioning/known-issues).
- Apps that support [SCIM 2.0](https://tools.ietf.org/html/rfc7643)
- For Azure Active Directory writeback to Workday or SuccessFactors, it is supported to update relevant metadata for supported attributes (XPATH and JSONPath), but it is not supported to add new Workday or SuccessFactors attributes beyond those included in the default schema

> [!NOTE]
-> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_IAM_forceSchemaEditorEnabled=true . You can then navigate to your application to view the attribute list as described [above](#editing-the-list-of-supported-attributes).
+> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems and have first-hand knowledge of how their custom attributes have been defined, or when a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_IAM_forceSchemaEditorEnabled=true . You can then navigate to your application to view the attribute list as described [above](#editing-the-list-of-supported-attributes).
When editing the list of supported attributes, the following properties are provided:
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Title: Synchronize attributes to Azure AD for mapping
-description: Learn how to synchronize attributes from your on-premises Active Directory to Azure AD. When configuring user provisioning to SaaS apps, use the directory extension feature to add source attributes that aren't synchronized by default.
+description: When configuring user provisioning to SaaS apps, use the directory extension feature to add source attributes that aren't synchronized by default.
Previously updated : 03/12/2021 Last updated : 03/17/2021
-# Sync an attribute from your on-premises Active Directory to Azure AD for provisioning to an application
+# Syncing extension attributes
-When customizing attribute mappings for user provisioning, you might find that the attribute you want to map doesn't appear in the **Source attribute** list. This article shows you how to add the missing attribute by synchronizing it from your on-premises Active Directory (AD) to Azure Active Directory (Azure AD).
+When customizing attribute mappings for user provisioning, you might find that the attribute you want to map doesn't appear in the **Source attribute** list. This article shows you how to add the missing attribute by synchronizing it from your on-premises Active Directory (AD) to Azure Active Directory (Azure AD) or by creating the extension attributes in Azure AD for a cloud-only user.
-Azure AD must contain all the data required to create a user profile when provisioning user accounts from Azure AD to a SaaS app. In some cases, to make the data available you might need synchronize attributes from your on-premises AD to Azure AD. Azure AD Connect automatically synchronizes certain attributes to Azure AD, but not all attributes. Furthermore, some attributes (such as SAMAccountName) that are synchronized by default might not be exposed using the Microsoft Graph API. In these cases, you can use the Azure AD Connect directory extension feature to synchronize the attribute to Azure AD. That way, the attribute will be visible to the Microsoft Graph API and the Azure AD provisioning service.
+Azure AD must contain all the data required to create a user profile when provisioning user accounts from Azure AD to a SaaS app. In some cases, to make the data available you might need to synchronize attributes from your on-premises AD to Azure AD. Azure AD Connect automatically synchronizes certain attributes to Azure AD, but not all attributes. Furthermore, some attributes (such as SAMAccountName) that are synchronized by default might not be exposed using the Azure AD Graph API. In these cases, you can use the Azure AD Connect directory extension feature to synchronize the attribute to Azure AD. That way, the attribute will be visible to the Azure AD Graph API and the Azure AD provisioning service. If the data you need for provisioning is in Active Directory but isn't available for provisioning because of the reasons described above, you can use Azure AD Connect to create extension attributes.
+
+While most users are likely hybrid users that are synchronized from Active Directory, you can also create extensions on cloud-only users without using Azure AD Connect. Using PowerShell or Microsoft Graph, you can extend the schema of a cloud-only user.
-If the data you need for provisioning is in Active Directory but isn't available for provisioning because of the reasons described above, you can use Azure AD Connect or PowerShell to create extension attributes.
-
## Create an extension attribute using Azure AD Connect 1. Open the Azure AD Connect wizard, choose Tasks, and then choose **Customize synchronization options**.
If the data you need for provisioning is in Active Directory but isn't available
> [!NOTE] > The ability to provision reference attributes from on-premises AD, such as **managedby** or **DN/DistinguishedName**, is not supported today. You can request this feature on [User Voice](https://feedback.azure.com/forums/169401-azure-active-directory).
-## Create an extension attribute using PowerShell
+## Create an extension attribute on a cloud-only user
+Customers can use Microsoft Graph and PowerShell to extend the user schema. These extension attributes are automatically discovered in most cases, but customers with more than 1,000 service principals may find extensions missing in the source attribute list. If an attribute that you create using the steps below doesn't automatically appear in the source attribute list, verify by using Microsoft Graph that the extension attribute was successfully created, and then add it to your schema [manually](https://docs.microsoft.com/azure/active-directory/app-provisioning/customize-application-attributes#editing-the-list-of-supported-attributes). When making the Graph requests below, select **Learn more** to verify the permissions required to make the requests. You can use [Graph Explorer](https://docs.microsoft.com/graph/graph-explorer/graph-explorer-overview) to make the requests.
+
+### Create an extension attribute on a cloud-only user using Microsoft Graph
+You will need to use an application to extend the schema of your users. List the apps in your tenant to identify the id of the application that you would like to use to extend the user schema. [Learn more.](https://docs.microsoft.com/graph/api/application-list?view=graph-rest-1.0&tabs=http)
+
+```http
+GET https://graph.microsoft.com/v1.0/applications
+```
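Tenants with many applications get this list back in pages; a client follows each `@odata.nextLink` until none is returned. A minimal, hypothetical pager is sketched below (the `fetch` callable stands in for an authenticated HTTP GET that returns the parsed JSON page):

```python
from typing import Callable, Dict, Iterator

def iter_graph_items(url: str, fetch: Callable[[str], Dict]) -> Iterator[Dict]:
    """Yield every item from a paged Microsoft Graph collection."""
    while url:
        page = fetch(url)
        yield from page.get("value", [])
        # Graph signals that more results exist with an @odata.nextLink URL;
        # its absence means the collection is exhausted.
        url = page.get("@odata.nextLink")
```

With a real token, `fetch` would call `https://graph.microsoft.com/v1.0/applications` with an `Authorization: Bearer` header and return the response body as a dictionary.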
+
+Create the extension attribute. Replace `{id}` in the request below with the **id** retrieved in the previous step. Use the **id** property, not the **appId**. [Learn more.](https://docs.microsoft.com/graph/api/application-post-extensionproperty?view=graph-rest-1.0&tabs=http)
+```http
+POST https://graph.microsoft.com/v1.0/applications/{id}/extensionProperties
+Content-type: application/json
+
+{
+ "name": "extensionName",
+ "dataType": "string",
+ "targetObjects": [
+ "User"
+ ]
+}
+```
+
+The previous request created an extension attribute with the format "extension_appId_extensionName", where *appId* is the application's **appId** with the hyphens removed. Update a user with the extension attribute. [Learn more.](https://docs.microsoft.com/graph/api/user-update?view=graph-rest-1.0&tabs=http)
+```http
+PATCH https://graph.microsoft.com/v1.0/users/{id}
+Content-type: application/json
+
+{
+ "extension_inputAppId_extensionName": "extensionValue"
+}
+```
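The naming convention described above is mechanical enough to compute locally. A small illustrative helper (the function name is ours; Graph itself derives the property name from the application's **appId** with its hyphens removed):

```python
def extension_property_name(app_id: str, attribute_name: str) -> str:
    """Build the directory extension property name that Graph generates."""
    # Graph names extensions "extension_{appId}_{name}", stripping the
    # hyphens from the appId GUID.
    return "extension_{}_{}".format(app_id.replace("-", ""), attribute_name)

print(extension_property_name("25883231-668a-43a7-80b2-5685c3f874bc", "jobGroup"))
# extension_25883231668a43a780b25685c3f874bc_jobGroup
```

Computing the name this way is handy when writing the PATCH request above, since the user update must reference the fully qualified property name.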
+Check the user to ensure the attribute was successfully updated. [Learn more.](https://docs.microsoft.com/graph/api/user-get?view=graph-rest-1.0&tabs=http#example-3-users-request-using-select)
+
+```http
+GET https://graph.microsoft.com/v1.0/users/{id}?$select=displayName,extension_inputAppId_extensionName
+```
++
+### Create an extension attribute on a cloud-only user using PowerShell
Create a custom extension using PowerShell and assign a value to a user. ```
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-prerequisites.md
Previously updated : 03/02/2021 Last updated : 03/17/2021
You need the following to use Azure AD Connect cloud sync:
A group Managed Service Account is a managed domain account that provides automatic password management, simplified service principal name (SPN) management, the ability to delegate the management to other administrators, and also extends this functionality over multiple servers. Azure AD Connect Cloud Sync supports and uses a gMSA for running the agent. You will be prompted for administrative credentials during setup, in order to create this account. The account will appear as (domain\provAgentgMSA$). For more information on a gMSA, see [Group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview).

### Prerequisites for gMSA:
-1. The Active Directory schema in the gMSA domain's forest needs to be updated to Windows Server 2012.
+1. The Active Directory schema in the gMSA domain's forest needs to be updated to Windows Server 2016.
2. [PowerShell RSAT modules](/windows-server/remote/remote-server-administration-tools) on a domain controller
-3. At least one domain controller in the domain must be running Windows Server 201.
-4. A domain joined server where the agent is being installed needs to be either Windows Server 2012 or later.
+3. At least one domain controller in the domain must be running Windows Server 2016.
+4. A domain joined server where the agent is being installed needs to be either Windows Server 2016 or later.
### Custom gMSA account

If you are creating a custom gMSA account, you need to ensure that the account has the following permissions.
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 02/10/2021 Last updated : 03/17/2021
Within a Conditional Access policy, an administrator can make use of signals from conditions like risk, device platform, or location to enhance their policy decisions.
-[ ![Define a Conditional Access policy and specify conditions](./media/concept-conditional-access-conditions/conditional-access-conditions.png)](./media/concept-conditional-access-conditions/conditional-access-conditions.png#lightbox)
+[![Define a Conditional Access policy and specify conditions](./media/concept-conditional-access-conditions/conditional-access-conditions.png)](./media/concept-conditional-access-conditions/conditional-access-conditions.png#lightbox)
Multiple conditions can be combined to create fine-grained and specific Conditional Access policies.
This setting has an impact on access attempts made from the following mobile app
- When creating a policy assigned to Exchange ActiveSync clients, **Exchange Online** should be the only cloud application assigned to the policy.
- Organizations can narrow the scope of this policy to specific platforms using the **Device platforms** condition.
-If the access control assigned to the policy uses **Require approved client app**, the user is directed to install and use the Outlook mobile client. In the case that **Multi-factor authentication** is required, affected users are blocked, because basic authentication does not support multi-factor authentication.
+If the access control assigned to the policy uses **Require approved client app**, the user is directed to install and use the Outlook mobile client. In the case that **Multi-factor authentication**, **Terms of use**, or **custom controls** are required, affected users are blocked, because basic authentication does not support these controls.
For more information, see the following articles:
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Previously updated : 11/24/2020 Last updated : 03/17/2021
By default Conditional Access requires all selected controls.
Selecting this checkbox will require users to perform Azure AD Multi-Factor Authentication. More information about deploying Azure AD Multi-Factor Authentication can be found in the article [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md).
+[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) satisfies the requirement for multi-factor authentication in Conditional Access policies.
+
### Require device to be marked as compliant

Organizations who have deployed Microsoft Intune can use the information returned from their devices to identify devices that meet specific compliance requirements. This policy compliance information is forwarded from Intune to Azure AD where Conditional Access can make decisions to grant or block access to resources. For more information about compliance policies, see the article [Set rules on devices to allow access to resources in your organization using Intune](/intune/protect/device-compliance-get-started).
active-directory Concept Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-policies.md
Previously updated : 10/16/2020 Last updated : 03/17/2021
The grant control can trigger enforcement of one or more controls.
- Require Hybrid Azure AD joined device
- Require approved client app
- Require app protection policy
+- Require password change
+- Require terms of use
Administrators can choose to require one of the previous controls or all selected controls using the following options. The default for multiple controls is to require all.
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
Previously updated : 03/04/2021 Last updated : 03/17/2021
# Conditional Access: Users and groups
-A Conditional Access policy must include a user assignment as one of the signals in the decision process. Users can be included or excluded from Conditional Access policies. Azure Active Directory evaluates all policies and ensures that all requirements are met before granting access to the user. In addition to this article, we have a video on [how to include or exclude users from conditional access policies](https://www.youtube.com/watch?v=5DsW1hB3Jqs) that walks you through the process outlined below.
+A Conditional Access policy must include a user assignment as one of the signals in the decision process. Users can be included or excluded from Conditional Access policies. Azure Active Directory evaluates all policies and ensures that all requirements are met before granting access to the user.
-![User as a signal in the decisions made by Conditional Access](./media/concept-conditional-access-users-groups/conditional-access-users-and-groups.png)
+> [!VIDEO https://www.youtube.com/embed/5DsW1hB3Jqs]
## Include users
By default the policy will provide an option to exclude the current user from th
![Warning, don't lock yourself out!](./media/concept-conditional-access-users-groups/conditional-access-users-and-groups-lockout-warning.png)
-[What to do if you are locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-you-are-locked-out-of-the-azure-portal)
+If you do find yourself locked out, see [What to do if you are locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-you-are-locked-out-of-the-azure-portal)
## Next steps
active-directory Msal Android Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-android-shared-devices.md
Title: Shared device mode for Android devices
-description: Learn how to enable shared device mode to allow Firstline Workers to share an Android device
+description: Learn how to enable shared device mode to allow Frontline Workers to share an Android device
# Shared device mode for Android devices
-Firstline Workers such as retail associates, flight crew members, and field service workers often use a shared mobile device to do their work. That becomes problematic when they start sharing passwords or pin numbers to access customer and business data on the shared device.
+Frontline Workers such as retail associates, flight crew members, and field service workers often use a shared mobile device to do their work. That becomes problematic when they start sharing passwords or pin numbers to access customer and business data on the shared device.
Shared device mode allows you to configure an Android device so that it can be easily shared by multiple employees. Employees can sign in and access customer information quickly. When they are finished with their shift or task, they can sign out of the device and it will be immediately ready for the next employee to use.
The following differences apply depending on whether your app is running on a sh
## Why you may want to only support single-account mode
-If you're writing an app that will only be used for firstline workers using a shared device, we recommend that you write your application to only support single-account mode. This includes most applications that are task focused such as medical records apps, invoice apps, and most line-of-business apps. Only supporting single-account mode simplifies development because you won't need to implement the additional features that are part of multiple-account apps.
+If you're writing an app that will only be used for frontline workers using a shared device, we recommend that you write your application to only support single-account mode. This includes most applications that are task focused such as medical records apps, invoice apps, and most line-of-business apps. Only supporting single-account mode simplifies development because you won't need to implement the additional features that are part of multiple-account apps.
## What happens when the device mode changes
The following diagram shows the overall app lifecycle and common events that may
## Next steps
-Try the [Use shared-device mode in your Android application](tutorial-v2-shared-device-mode.md) tutorial that shows how to run a firstline worker app on a shared mode Android device.
+Try the [Use shared-device mode in your Android application](tutorial-v2-shared-device-mode.md) tutorial that shows how to run a frontline worker app on a shared mode Android device.
active-directory Msal Ios Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-ios-shared-devices.md
Title: Shared device mode for iOS devices
-description: Learn how to enable shared device mode to allow Firstline Workers to share an iOS device
+description: Learn how to enable shared device mode to allow Frontline Workers to share an iOS device
>[!IMPORTANT] > This feature [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
-Firstline Workers such as retail associates, flight crew members, and field service workers often use a shared mobile device to perform their work. These shared devices can present security risks if your users share their passwords or PINs, intentionally or not, to access customer and business data on the shared device.
+Frontline Workers such as retail associates, flight crew members, and field service workers often use a shared mobile device to perform their work. These shared devices can present security risks if your users share their passwords or PINs, intentionally or not, to access customer and business data on the shared device.
Shared device mode allows you to configure an iOS 13 or higher device to be more easily and securely shared by employees. Employees can sign in and access customer information quickly. When they're finished with their shift or task, they can sign out of the device and it's immediately ready for use by the next employee.
On a user change, you should ensure both the previous user's data is cleared and
### Detect shared device mode
-Detecting shared device mode is important for your application. Many applications will require a change in their user experience (UX) when the application is used on a shared device. For example, your application might have a "Sign-Up" feature, which isn't appropriate for a Firstline Worker because they likely already have an account. You may also want to add extra security to your application's handling of data if it's in shared device mode.
+Detecting shared device mode is important for your application. Many applications will require a change in their user experience (UX) when the application is used on a shared device. For example, your application might have a "Sign-Up" feature, which isn't appropriate for a Frontline Worker because they likely already have an account. You may also want to add extra security to your application's handling of data if it's in shared device mode.
Use the `getDeviceInformationWithParameters:completionBlock:` API in the `MSALPublicClientApplication` to determine if an app is running on a device in shared device mode.
signoutParameters.signoutFromBrowser = YES; // Only needed for Public Preview.
## Next steps
-To see shared device mode in action, the following code sample on GitHub includes an example of running a Firstline Worker app on an iOS device in shared device mode:
+To see shared device mode in action, the following code sample on GitHub includes an example of running a Frontline Worker app on an iOS device in shared device mode:
[MSAL iOS Swift Microsoft Graph API Sample](https://github.com/Azure-Samples/ms-identity-mobile-apple-swift-objc)
active-directory Msal Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-shared-devices.md
Title: Shared device mode overview
-description: Learn about shared device mode to enable device sharing for your Firstline Workers.
+description: Learn about shared device mode to enable device sharing for your Frontline Workers.
# Overview of shared device mode
-Shared device mode is a feature of Azure Active Directory that allows you to build applications that support Firstline Workers and enable shared device mode on the devices deployed to them.
+Shared device mode is a feature of Azure Active Directory that allows you to build applications that support Frontline Workers and enable shared device mode on the devices deployed to them.
>[!IMPORTANT] > This feature [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
-## What are Firstline Workers?
+## What are Frontline Workers?
-Firstline Workers are retail employees, maintenance and field agents, medical personnel, and other users that don't sit in front of a computer or use corporate email for collaboration. The following sections introduce the aspects and challenges of supporting Firstline Workers, followed by an introduction to the features provided by Microsoft that enable your application for use by an organization's Firstline Workers.
+Frontline Workers are retail employees, maintenance and field agents, medical personnel, and other users that don't sit in front of a computer or use corporate email for collaboration. The following sections introduce the aspects and challenges of supporting Frontline Workers, followed by an introduction to the features provided by Microsoft that enable your application for use by an organization's Frontline Workers.
-### Challenges of supporting Firstline Workers
+### Challenges of supporting Frontline Workers
-Enabling Firstline Worker workflows includes challenges not usually presented by typical information workers. Such challenges can include high turnover rate and less familiarity with an organization's core productivity tools. To empower their Firstline Workers, organizations are adopting different strategies. Some are adopting a bring-your-own-device (BYOD) strategy in which their employees use business apps on their personal phone, while others provide their employees with shared devices like iPads or Android tablets.
+Enabling Frontline Worker workflows includes challenges not usually presented by typical information workers. Such challenges can include high turnover rate and less familiarity with an organization's core productivity tools. To empower their Frontline Workers, organizations are adopting different strategies. Some are adopting a bring-your-own-device (BYOD) strategy in which their employees use business apps on their personal phone, while others provide their employees with shared devices like iPads or Android tablets.
### Supporting multiple users on devices designed for one user
Azure Active Directory enables these scenarios with a feature called **shared de
As mentioned, shared device mode is a feature of Azure Active Directory that enables you to:
-* Build applications that support Firstline Workers
-* Deploy devices to Firstline Workers and turn on shared device mode
+* Build applications that support Frontline Workers
+* Deploy devices to Frontline Workers and turn on shared device mode
-### Build applications that support Firstline Workers
+### Build applications that support Frontline Workers
-You can support Firstline Workers in your applications by using the Microsoft Authentication Library (MSAL) and [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md) to enable a device state called *shared device mode*. When a device is in shared device mode, Microsoft provides your application with information to allow it to modify its behavior based on the state of the user on the device, protecting user data.
+You can support Frontline Workers in your applications by using the Microsoft Authentication Library (MSAL) and [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md) to enable a device state called *shared device mode*. When a device is in shared device mode, Microsoft provides your application with information to allow it to modify its behavior based on the state of the user on the device, protecting user data.
Supported features are:
Your users depend on you to ensure their data isn't leaked to another user. Shar
For details on how to modify your applications to support shared device mode, see the [Next steps](#next-steps) section at the end of this article.
-### Deploy devices to Firstline Workers and turn on shared device mode
+### Deploy devices to Frontline Workers and turn on shared device mode
-Once your applications support shared device mode and include the required data and security changes, you can advertise them as being usable by Firstline Workers.
+Once your applications support shared device mode and include the required data and security changes, you can advertise them as being usable by Frontline Workers.
An organization's device administrators are able to deploy their devices and your applications to their stores and workplaces through a mobile device management (MDM) solution like Microsoft Intune. Part of the provisioning process is marking the device as a *Shared Device*. Administrators configure shared device mode by deploying the [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md) and setting shared device mode through configuration parameters. After performing these steps, all applications that support shared device mode will use the Microsoft Authenticator application to manage user state and provide security features for the device and organization. ## Next steps
-We support iOS and Android platforms for shared device mode. Review the documentation below for your platform to begin supporting Firstline Workers in your applications.
+We support iOS and Android platforms for shared device mode. Review the documentation below for your platform to begin supporting Frontline Workers in your applications.
* [Supporting shared device mode for iOS](msal-ios-shared-devices.md) * [Supporting shared device mode for Android](msal-android-shared-devices.md)
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-register-app.md
To configure application settings based on the platform or device you're targeti
| **Single-page application** | Enter a **Redirect URI** for your app. This URI is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication.<br/><br/>Select this platform if you're building a client-side web app by using JavaScript or a framework like Angular, Vue.js, React.js, or Blazor WebAssembly. | | **iOS / macOS** | Enter the app **Bundle ID**. Find it in **Build Settings** or in Xcode in *Info.plist*.<br/><br/>A redirect URI is generated for you when you specify a **Bundle ID**. | | **Android** | Enter the app **Package name**. Find it in the *AndroidManifest.xml* file. Also generate and enter the **Signature hash**.<br/><br/>A redirect URI is generated for you when you specify these settings. |
- | **Mobile and desktop applications** | Select one of the **Suggested redirect URIs**. Or specify a **Custom redirect URI**.<br/><br/>For desktop applications, we recommend<br/>`https://login.microsoftonline.com/common/oauth2/nativeclient`<br/><br/>Select this platform for mobile applications that aren't using the latest Microsoft Authentication Library (MSAL) or aren't using a broker. Also select this platform for desktop applications. |
+ | **Mobile and desktop applications** | Select one of the **Suggested redirect URIs**. Or specify a **Custom redirect URI**.<br/><br/>For desktop applications using an embedded browser, we recommend<br/>`https://login.microsoftonline.com/common/oauth2/nativeclient`<br/><br/>For desktop applications using the system browser, we recommend<br/>`http://localhost`<br/><br/>Select this platform for mobile applications that aren't using the latest Microsoft Authentication Library (MSAL) or aren't using a broker. Also select this platform for desktop applications. |
1. Select **Configure** to complete the platform configuration. ### Redirect URI restrictions
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
In this quickstart, you download and run a code sample that demonstrates how an
> 1. Open the *appsettings.json* file and modify the following code: > > ```json
+> "Domain": "Enter the domain of your tenant, e.g. contoso.onmicrosoft.com",
> "ClientId": "Enter_the_Application_Id_here", > "TenantId": "common", > ```
active-directory Scenario Desktop App Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-app-registration.md
If your desktop application uses interactive authentication, you can sign in use
The redirect URIs to use in a desktop application depend on the flow you want to use. -- If you use interactive authentication or device code flow, use `https://login.microsoftonline.com/common/oauth2/nativeclient`. To achieve this configuration, select the corresponding URL in the **Authentication** section for your application.
+Specify the redirect URI for your app by [configuring the platform settings](quickstart-register-app.md#add-a-redirect-uri) for the app in **App registrations** in the Azure portal.
+
+- For apps that use interactive authentication:
+ - Apps that use embedded browsers: `https://login.microsoftonline.com/common/oauth2/nativeclient`
+ - Apps that use system browsers: `http://localhost`
> [!IMPORTANT]
- > Using `https://login.microsoftonline.com/common/oauth2/nativeclient` as the redirect URI is recommended as a security best practice. If no redirect URI is specified, MSAL.NET uses `urn:ietf:wg:oauth:2.0:oob` by default which is not recommended. This default will be updated as a breaking change in the next major release.
+ > As a security best practice, we recommend explicitly setting `https://login.microsoftonline.com/common/oauth2/nativeclient` or `http://localhost` as the redirect URI. Some authentication libraries like MSAL.NET use a default value of `urn:ietf:wg:oauth:2.0:oob` when no other redirect URI is specified, which is not recommended. This default will be updated as a breaking change in the next major release.
- If you build a native Objective-C or Swift app for macOS, register the redirect URI based on your application's bundle identifier in the following format: `msauth.<your.app.bundle.id>://auth`. Replace `<your.app.bundle.id>` with your application's bundle identifier. - If your app uses only Integrated Windows Authentication or a username and a password, you don't need to register a redirect URI for your application. These flows do a round trip to the Microsoft identity platform v2.0 endpoint. Your application won't be called back on any specific URI.
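To make the role of the redirect URI concrete, here is a minimal sketch (Python, standard library only) of the authorization request a system-browser desktop app would open. The tenant and client ID are placeholders, not values from any real registration:

```python
from urllib.parse import urlencode

# Placeholder values for illustration only; substitute your tenant
# and the client ID from your own app registration.
tenant = "common"
client_id = "00000000-0000-0000-0000-000000000000"
redirect_uri = "http://localhost"  # recommended for system-browser desktop apps

params = {
    "client_id": client_id,
    "response_type": "code",
    "redirect_uri": redirect_uri,
    "scope": "openid profile",
}
authorize_url = (
    f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?"
    + urlencode(params)
)
print(authorize_url)
```

The `redirect_uri` parameter in this request must exactly match one of the URIs registered for the app, which is why the registration step above matters.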
active-directory Hybrid Azuread Join Federated Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-federated-domains.md
Hybrid Azure AD join requires devices to have access to the following Microsoft
Beginning with Windows 10 1803, if the instantaneous hybrid Azure AD join for a federated environment by using AD FS fails, we rely on Azure AD Connect to sync the computer object in Azure AD that's subsequently used to complete the device registration for hybrid Azure AD join. Verify that Azure AD Connect has synced the computer objects of the devices you want to be hybrid Azure AD joined to Azure AD. If the computer objects belong to specific organizational units (OUs), you must also configure the OUs to sync in Azure AD Connect. To learn more about how to sync computer objects by using Azure AD Connect, see [Configure filtering by using Azure AD Connect](../hybrid/how-to-connect-sync-configure-filtering.md#organizational-unitbased-filtering).
+> [!NOTE]
+> For the device registration sync join to succeed, do not exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about the default device attributes synced to Azure AD, see [Attributes synchronized by Azure AD Connect](https://docs.microsoft.com/azure/active-directory/hybrid/reference-connect-sync-attributes-synchronized#windows-10).
+ If your organization requires access to the internet via an outbound proxy, Microsoft recommends [implementing Web Proxy Auto-Discovery (WPAD)](/previous-versions/tn-archive/cc995261(v%3dtechnet.10)) to enable Windows 10 computers for device registration with Azure AD. If you encounter issues configuring and managing WPAD, see [Troubleshoot automatic detection](/previous-versions/tn-archive/cc302643(v=technet.10)). If you don't use WPAD and want to configure proxy settings on your computer, you can do so beginning with Windows 10 1709. For more information, see [Configure WinHTTP settings by using a group policy object (GPO)](/archive/blogs/netgeeks/winhttp-proxy-settings-deployed-by-gpo).
If you experience issues with completing hybrid Azure AD join for domain-joined
Learn how to [manage device identities by using the Azure portal](device-management-azure-portal.md). <!--Image references-->
-[1]: ./media/active-directory-conditional-access-automatic-device-registration-setup/12.png
+[1]: ./media/active-directory-conditional-access-automatic-device-registration-setup/12.png
active-directory Hybrid Azuread Join Managed Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-managed-domains.md
Familiarize yourself with these articles:
Verify that Azure AD Connect has synced the computer objects of the devices you want to be hybrid Azure AD joined to Azure AD. If the computer objects belong to specific organizational units (OUs), configure the OUs to sync in Azure AD Connect. To learn more about how to sync computer objects by using Azure AD Connect, see [Organizational unitΓÇôbased filtering](../hybrid/how-to-connect-sync-configure-filtering.md#organizational-unitbased-filtering).
+> [!NOTE]
+> For the device registration sync join to succeed, do not exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about the default device attributes synced to Azure AD, see [Attributes synchronized by Azure AD Connect](https://docs.microsoft.com/azure/active-directory/hybrid/reference-connect-sync-attributes-synchronized#windows-10).
+ Beginning with version 1.1.819.0, Azure AD Connect includes a wizard to configure hybrid Azure AD join. The wizard significantly simplifies the configuration process. The wizard configures the service connection points (SCPs) for device registration. The configuration steps in this article are based on using the wizard in Azure AD Connect.
If you experience issues completing hybrid Azure AD join for domain-joined Windo
Advance to the next article to learn how to manage device identities by using the Azure portal. > [!div class="nextstepaction"]
-> [Manage device identities](device-management-azure-portal.md)
+> [Manage device identities](device-management-azure-portal.md)
active-directory Hybrid Azuread Join Manual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-manual.md
For Windows 10 devices on version 1703 or earlier, if your organization requires
Beginning with Windows 10 1803, even if a hybrid Azure AD join attempt by a device in a federated domain through AD FS fails, and if Azure AD Connect is configured to sync the computer/device objects to Azure AD, the device will try to complete the hybrid Azure AD join by using the synced computer/device.
+> [!NOTE]
+> For the device registration sync join to succeed, do not exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about the default device attributes synced to Azure AD, see [Attributes synchronized by Azure AD Connect](https://docs.microsoft.com/azure/active-directory/hybrid/reference-connect-sync-attributes-synchronized#windows-10).
+ To verify if the device is able to access the above Microsoft resources under the system account, you can use [Test Device Registration Connectivity](/samples/azure-samples/testdeviceregconnectivity/testdeviceregconnectivity/) script. ## Verify configuration steps
If you experience issues completing hybrid Azure AD join for domain-joined Windo
* [Introduction to device management in Azure Active Directory](overview.md) <!--Image references-->
-[1]: ./media/hybrid-azuread-join-manual/12.png
+[1]: ./media/hybrid-azuread-join-manual/12.png
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Use Event Viewer logs to locate the phase and errorcode for the join failures.
### Step 5: Collect logs and contact Microsoft Support
-Download the file Auth.zip from [https://github.com/CSS-Windows/WindowsDiag/tree/master/ADS/AUTH](https://github.com/CSS-Windows/WindowsDiag/tree/master/ADS/AUTH)
+Download the file Auth.zip from [https://github.com/CSS-Identity/DRS/tree/main/Auth](https://github.com/CSS-Identity/DRS/tree/main/Auth)
1. Unzip the files and rename the included files **start-auth.txt** and **stop-auth.txt** to **start-auth.cmd** and **stop-auth.cmd**. 1. From an elevated command prompt, run **start-auth.cmd**.
If the values are **NO**, it could be due:
Continue [troubleshooting devices using the dsregcmd command](troubleshoot-device-dsregcmd.md)
-For questions, see the [device management FAQ](faq.md)
+For questions, see the [device management FAQ](faq.md)
active-directory Troubleshoot Hybrid Join Windows Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-legacy.md
This article provides you with troubleshooting guidance on how to resolve potent
**What you should know:** - Hybrid Azure AD join for downlevel Windows devices works slightly differently than it does in Windows 10. Many customers do not realize that they need AD FS (for federated domains) or Seamless SSO configured (for managed domains).-- Seamless SSO doesn't work in private browsing mode on Firefox and Microsoft Edge browsers. It also doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode.
+- Seamless SSO doesn't work in private browsing mode on Firefox and Microsoft Edge browsers. It also doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode or if Enhanced Security Configuration is enabled.
- For customers with federated domains, if the Service Connection Point (SCP) was configured such that it points to the managed domain name (for example, contoso.onmicrosoft.com, instead of contoso.com), then Hybrid Azure AD Join for downlevel Windows devices will not work. - The same physical device appears multiple times in Azure AD when multiple domain users sign-in the downlevel hybrid Azure AD joined devices. For example, if *jdoe* and *jharnett* sign-in to a device, a separate registration (DeviceID) is created for each of them in the **USER** info tab. - You can also get multiple entries for a device on the user info tab because of a reinstallation of the operating system or a manual re-registration.
active-directory Domains Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/domains-manage.md
Previously updated : 12/20/2020 Last updated : 03/12/2021
You can change the primary domain name for your organization to be any verified
## Add custom domain names to your Azure AD organization
-You can add up to 900 managed domain names. If you're configuring all your domains for federation with on-premises Active Directory, you can add up to 450 domain names in each organization.
+You can add up to 5000 managed domain names. If you're configuring all your domains for federation with on-premises Active Directory, you can add up to 2500 domain names in each organization.
## Add subdomains of a custom domain
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/add-users-administrator.md
If a guest user has not yet redeemed their invitation, you can resend the invita
3. Under **Manage**, select **Users**. 5. Select the user account. 6. Under **Manage**, select **Profile**.
-7. If the user has not yet accepted the invitation, a **Resend invitation** option is available. Select this button to resend.
-
- ![Resend invitation option in the user profile](./media/add-users-administrator/b2b-user-resend-invitation.png)
+7. If the user has not yet accepted the invitation, in the **Identity** section, **Invitation accepted** will be set to **No**. To resend the invitation, select **(manage)**. Then on the **Manage invitations** page, next to **Resend invite?**, select **Yes**, and then select **Done**.
> [!NOTE] > If you resend an invitation that originally directed the user to a specific app, understand that the link in the new invitation takes the user to the top-level Access Panel instead.
+> Additionally, only users with inviting permissions will be able to resend invitations.
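An invitation can also be resent programmatically: with Microsoft Graph, creating a new invitation for the same email address re-sends the invitation email when `sendInvitationMessage` is true. A hedged sketch follows; the guest email and redirect URL are placeholders, and actually sending requires posting the body to the Graph `/invitations` endpoint with an access token that has invite permissions (not shown):

```python
import json

# Hypothetical request body for POST https://graph.microsoft.com/v1.0/invitations.
# Posting this again for an already-invited guest re-sends the invitation email.
invitation = {
    "invitedUserEmailAddress": "guest@fabrikam.com",       # placeholder guest
    "inviteRedirectUrl": "https://myapps.microsoft.com",   # placeholder redirect
    "sendInvitationMessage": True,
}
body = json.dumps(invitation)
print(body)
```

As with the portal flow, the caller must have permission to invite guests in the tenant.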
## Next steps - To learn how non-Azure AD admins can add B2B guest users, see [How do information workers add B2B collaboration users?](add-users-information-worker.md)-- For information about the invitation email, see [The elements of the B2B collaboration invitation email](invitation-email-elements.md).
+- For information about the invitation email, see [The elements of the B2B collaboration invitation email](invitation-email-elements.md).
active-directory Active Directory How Subscriptions Associated Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
Title: Add an existing Azure subscription to your tenant - Azure AD
-description: Instructions about how to add an existing Azure subscription to your Azure Active Directory tenant.
+description: Instructions about how to add an existing Azure subscription to your Azure Active Directory (Azure AD) tenant.
Previously updated : 09/01/2020 Last updated : 03/05/2021
Before you can associate or add your subscription, do the following tasks:
- Review the following list of changes that will occur after you associate or add your subscription, and how you might be affected:
- - Users that have been assigned roles using Azure RBAC will lose their access
- - Service Administrator and Co-Administrators will lose access
- - If you have any key vaults, they'll be inaccessible and you'll have to fix them after association
- - If you have any managed identities for resources such as Virtual Machines or Logic Apps, you must re-enable or recreate them after the association
- - If you have a registered Azure Stack, you'll have to re-register it after association
+ - Users that have been assigned roles using Azure RBAC will lose their access.
+ - Service Administrator and Co-Administrators will lose access.
+ - If you have any key vaults, they'll be inaccessible and you'll have to fix them after association.
+ - If you have any managed identities for resources such as Virtual Machines or Logic Apps, you must re-enable or recreate them after the association.
+ - If you have a registered Azure Stack, you'll have to re-register it after association.
- For more information, see [Transfer an Azure subscription to a different Azure AD directory](../../role-based-access-control/transfer-subscription.md). - Sign in using an account that:
Before you can associate or add your subscription, do the following tasks:
- Has an [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment for the subscription. For information about how to assign the Owner role, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). - Exists in both the current directory and in the new directory. The current directory is associated with the subscription. You'll associate the new directory with the subscription. For more information about getting access to another directory, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../external-identities/add-users-administrator.md). -- Make sure you're not using an Azure Cloud Service Providers (CSP) subscription (MS-AZR-0145P, MS-AZR-0146P, MS-AZR-159P), a Microsoft Internal subscription (MS-AZR-0015P), or a Microsoft Imagine subscription (MS-AZR-0144P).
+- Make sure that you're not using an Azure Cloud Service Providers (CSP) subscription (MS-AZR-0145P, MS-AZR-0146P, MS-AZR-159P), a Microsoft Internal subscription (MS-AZR-0015P), or a Microsoft Azure for Students Starter subscription (MS-AZR-0144P).
## Associate a subscription to a directory<a name="to-associate-an-existing-subscription-to-your-azure-ad-directory"></a>
To associate an existing subscription to your Azure AD directory, follow these s
1. Select **Change directory**.
- ![Subscriptions page, with Change directory option highlighted](media/active-directory-how-subscriptions-associated-directory/change-directory-in-azure-subscriptions.png)
+ :::image type="content" source="media/active-directory-how-subscriptions-associated-directory/change-directory-in-azure-subscriptions.png" alt-text="Screenshot that shows the Subscriptions page, with the Change directory option highlighted.":::
1. Review any warnings that appear, and then select **Change**.
- ![Change the directory page, showing the directory to change to](media/active-directory-how-subscriptions-associated-directory/edit-directory-ui.png)
+ :::image type="content" source="media/active-directory-how-subscriptions-associated-directory/edit-directory-ui.png" alt-text="Screenshot that shows the Change the directory page with a sample directory and the Change button highlighted.":::
After the directory is changed for the subscription, you will get a success message. 1. Select **Switch directories** on the subscription page to go to your new directory.
- ![Directory switcher page, with sample information](media/active-directory-how-subscriptions-associated-directory/directory-switcher.png)
+ :::image type="content" source="media/active-directory-how-subscriptions-associated-directory/directory-switcher.png" alt-text="Screenshot that shows the Directory switcher page with sample information.":::
It can take several hours for everything to show up properly. If it seems to be taking too long, check the **Global subscription filter**. Make sure the moved subscription isn't hidden. You may need to sign out of the Azure portal and sign back in to see the new directory.
After you associate a subscription to a different directory, you might need to do the following tasks:
- If you have any key vaults, you must change the key vault tenant ID. For more information, see [Change a key vault tenant ID after a subscription move](../../key-vault/general/move-subscription.md). -- If you used system-assigned Managed Identities for resources, you must re-enable these identities. If you used user-assigned Managed Identities, you must re-create these identities. After re-enabling or recreating the Managed Identities, you must re-establish the permissions assigned to those identities. For more information, see [What is managed identities for Azure resources?](../managed-identities-azure-resources/overview.md).
+- If you used system-assigned Managed Identities for resources, you must re-enable these identities. If you used user-assigned Managed Identities, you must re-create these identities. After re-enabling or recreating the Managed Identities, you must re-establish the permissions assigned to those identities. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md).
-- If you've registered an Azure Stack using this subscription, you must re-register. For more information, see [Register Azure Stack with Azure](/azure-stack/operator/azure-stack-registration).
+- If you've registered an Azure Stack using this subscription, you must re-register. For more information, see [Register Azure Stack Hub with Azure](/azure-stack/operator/azure-stack-registration).
- For more information, see [Transfer an Azure subscription to a different Azure AD directory](../../role-based-access-control/transfer-subscription.md).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
Customers can work around this requirement for testing purposes by using a featu
-### Public Preview - Customize and configure Android shared devices for Firstline Workers at scale
+### Public Preview - Customize and configure Android shared devices for Frontline Workers at scale
**Type:** New feature **Service category:** Device Registration and Management **Product capability:** Identity Security & Protection
-Azure AD and Microsoft Endpoint Manager teams have combined to bring the capability to customize, scale, and secure your Firstline Worker devices.
+Azure AD and Microsoft Endpoint Manager teams have combined to bring the capability to customize, scale, and secure your Frontline Worker devices.
The following preview capabilities will allow you to:

- Provision Android shared devices at scale with Microsoft Endpoint Manager
- Secure your access for shift workers using device-based conditional access
- Customize sign-in experiences for the shift workers with Managed Home Screen
-To learn more, refer to [Customize and configure shared devices for Firstline Workers at scale](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/customize-and-configure-shared-devices-for-firstline-workers-at/ba-p/1751708).
+To learn more, refer to [Customize and configure shared devices for Frontline Workers at scale](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/customize-and-configure-shared-devices-for-firstline-workers-at/ba-p/1751708).
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
Ensure that the following prerequisites are in place.
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.

- If your firewall or proxy lets you add DNS entries to an allowlist, add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
+ - If you have an outgoing HTTP proxy, make sure the URL **autologon.microsoftazuread-sso.com** is on the allowlist. Specify this URL explicitly, because the proxy might not accept wildcard entries.
- Your Authentication Agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well.
- For certificate validation, unblock the following URLs: **crl3.digicert.com:80**, **crl4.digicert.com:80**, **ocsp.digicert.com:80**, **www\.d-trust.net:80**, **root-c3-ca2-2009.ocsp.d-trust.net:80**, **crl.microsoft.com:80**, **oneocsp.microsoft.com:80**, and **ocsp.msocsp.com:80**. Because these URLs are used for certificate validation with other Microsoft products, you may already have them unblocked.
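
If your outbound traffic is controlled by a proxy auto-config (PAC) file, the guidance above might be sketched as follows. This is a minimal illustration, not a production PAC file: the proxy address is hypothetical, and the host list is taken from the endpoints named in this section (note the explicit entry for **autologon.microsoftazuread-sso.com**, since wildcard entries might not be accepted for it).

```javascript
// Minimal PAC sketch: send Azure AD authentication traffic direct,
// route everything else through the proxy. Adjust to your environment.
function FindProxyForURL(url, host) {
    if (host == "autologon.microsoftazuread-sso.com" ||  // must be explicit
        dnsDomainIs(host, ".msappproxy.net") ||
        dnsDomainIs(host, ".servicebus.windows.net") ||
        host == "login.windows.net" ||
        host == "login.microsoftonline.com") {
        return "DIRECT";
    }
    return "PROXY proxy.contoso.com:8080"; // hypothetical proxy address
}
```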
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
Ensure that the following prerequisites are in place:
>[!NOTE]
>Azure AD Connect versions 1.1.557.0, 1.1.558.0, 1.1.561.0, and 1.1.614.0 have a problem related to password hash synchronization. If you _don't_ intend to use password hash synchronization in conjunction with Pass-through Authentication, read the [Azure AD Connect release notes](./reference-connect-version-history.md) to learn more.
+
+ >[!NOTE]
+ >If you have an outgoing HTTP proxy, make sure the URL **autologon.microsoftazuread-sso.com** is on the allowlist. Specify this URL explicitly, because the proxy might not accept wildcard entries.
* **Use a supported Azure AD Connect topology**: Ensure that you are using one of Azure AD Connect's supported topologies described [here](plan-connect-topologies.md).
active-directory Pim How To Start Security Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-start-security-review.md
Previously updated : 10/22/2019 Last updated : 3/16/2021
This article describes how to create one or more access reviews for privileged Azure AD roles.
![Azure AD roles - Access reviews list showing the status of all reviews](./media/pim-how-to-start-security-review/access-reviews.png)
+1. Click **New** to create a new access review.
+
+1. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers.
+
+ ![Create an access review - Review name and description](./media/pim-how-to-start-security-review/name-description.png)
+
+1. Set the **Start date**. By default, an access review occurs once, starts at the same time it's created, and ends in one month. You can change the start and end dates to have an access review start in the future and last however many days you want.
+
+ ![Start date, frequency, duration, end, number of times, and end date](./media/pim-how-to-start-security-review/start-end-dates.png)
+
+1. To make the access review recurring, change the **Frequency** setting from **One time** to **Weekly**, **Monthly**, **Quarterly**, **Annually**, or **Semi-annually**. Use the **Duration** slider or text box to define how many days each review of the recurring series will be open for input from reviewers. For example, the maximum duration that you can set for a monthly review is 27 days, to avoid overlapping reviews.
+
+1. Use the **End** setting to specify how the recurring access review series ends. The series can end in one of three ways: it can run indefinitely, end by a specific date, or end after a defined number of occurrences is completed. You, another User administrator, or another Global administrator can stop the series after it's created by changing the date in **Settings** so that it ends on that date.
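+
+For automation, a recurring schedule like the one described above can also be expressed through the Microsoft Graph access reviews API. The following JSON is a hedged sketch of an `accessReviewScheduleDefinition` request body: verify the property names against the current Graph reference before use, and treat all angle-bracketed values, names, and dates as placeholders.
+
+```json
+{
+  "displayName": "Quarterly access review of a privileged role",
+  "scope": {
+    "query": "<scope-query-for-the-role-under-review>",
+    "queryType": "MicrosoftGraph"
+  },
+  "reviewers": [
+    { "query": "/users/<reviewer-object-id>", "queryType": "MicrosoftGraph" }
+  ],
+  "settings": {
+    "instanceDurationInDays": 25,
+    "recurrence": {
+      "pattern": { "type": "absoluteMonthly", "interval": 3 },
+      "range": {
+        "type": "numbered",
+        "numberOfOccurrences": 4,
+        "startDate": "2021-04-01"
+      }
+    }
+  }
+}
+```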
+
+1. In the **Users** section, select one or more roles that you want to review membership of.
+
+ ![Users scope to review role membership of](./media/pim-how-to-start-security-review/users.png)
+
+ > [!NOTE]
+ > - Roles selected here include both [permanent and eligible roles](../privileged-identity-management/pim-how-to-add-role-to-user.md).
+ > - Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.
+ > - For roles with groups assigned to them, the access of each group linked with the role under review will be reviewed as a part of the access review.
+ If you are creating an access review of **Azure AD roles**, the following image shows an example of the Review membership list.
+
+ ![Review membership pane listing Azure AD roles you can select](./media/pim-how-to-start-security-review/review-membership.png)
+
+ If you are creating an access review of **Azure resource roles**, the following image shows an example of the Review membership list.
+
+ ![Review membership pane listing Azure resource roles you can select](./media/pim-how-to-start-security-review/review-membership-azure-resource-roles.png)
+
+1. In the **Reviewers** section, select one or more people to review all the users. Or you can select to have the members review their own access.
+
+ ![Reviewers list of selected users or members (self)](./media/pim-how-to-start-security-review/reviewers.png)
+
+ - **Selected users** - Use this option when you don't know who needs access. With this option, you can assign the review to a resource owner or group manager to complete.
+ - **Members (self)** - Use this option to have the users review their own role assignments. Groups assigned to the role will not be a part of the review when this option is selected.
+ - **Manager** - Use this option to have the user's manager review their role assignment. Upon selecting Manager, you will also have the option to specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory. Groups assigned to the role will be reviewed by the fallback reviewer if one is selected.
+
+### Upon completion settings
+
+1. To specify what happens after a review completes, expand the **Upon completion settings** section.
+
+ ![Upon completion settings to auto apply and should review not respond](./media/pim-how-to-start-security-review/upon-completion-settings.png)
+
+1. If you want to automatically remove access for users that were denied, set **Auto apply results to resource** to **Enable**. If you want to manually apply the results when the review completes, set the switch to **Disable**.
+
+1. Use the **Should reviewer not respond** list to specify what happens for users who are not reviewed within the review period. This setting does not impact users who have been reviewed manually. If the final reviewer's decision is Deny, the user's access will be removed.
+
+ - **No change** - Leave user's access unchanged
+ - **Remove access** - Remove user's access
+ - **Approve access** - Approve user's access
+ - **Take recommendations** - Take the system's recommendation on denying or approving the user's continued access
+
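+
+The settings above can be summed up as a small decision rule: a manual review decision always wins, and the **Should reviewer not respond** setting only applies when no decision was recorded. The sketch below illustrates that behavior; it is not an Azure API, and the function and value names are hypothetical.
+
+```python
+def review_outcome(reviewer_decision, no_response_setting, recommendation):
+    """Return the final decision for a user when the review period ends.
+
+    Illustrative sketch of the behavior described above, not an Azure API.
+    """
+    if reviewer_decision is not None:
+        # A manual decision always wins; "Deny" means access is removed.
+        return reviewer_decision
+    # The reviewer did not respond: apply the configured fallback.
+    if no_response_setting == "No change":
+        return "NotReviewed"           # access is left unchanged
+    if no_response_setting == "Remove access":
+        return "Deny"
+    if no_response_setting == "Approve access":
+        return "Approve"
+    if no_response_setting == "Take recommendations":
+        return recommendation          # system-suggested "Approve" or "Deny"
+    raise ValueError(f"unknown setting: {no_response_setting}")
+```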
+### Advanced settings
+
+1. To specify additional settings, expand the **Advanced settings** section.
+
+ ![Advanced settings for show recommendations, require reason on approval, mail notifications, and reminders](./media/pim-how-to-start-security-review/advanced-settings.png)
+
+1. Set **Show recommendations** to **Enable** to show the reviewers the system recommendations based on the user's access information.
+
+1. Set **Require reason on approval** to **Enable** to require the reviewer to supply a reason for approval.
+
+1. Set **Mail notifications** to **Enable** to have Azure AD send email notifications to reviewers when an access review starts, and to administrators when a review completes.
+
+1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who have not completed their review.
+1. The content of the email sent to reviewers is autogenerated from the review details, such as the review name, resource name, and due date. If you need to communicate additional information, such as instructions or contact details, specify it in **Additional content for reviewer email**. This information is included in the invitation and reminder emails sent to assigned reviewers, in the highlighted section shown below.
+
+ ![Content of the email sent to reviewers with highlights](./media/pim-how-to-start-security-review/email-info.png)
## Start the access review
active-directory Pim Resource Roles Start Access Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-start-access-review.md
na
ms.devlang: na Previously updated : 03/09/2021 Last updated : 03/16/2021
The need for access to privileged Azure resource roles by employees changes over time.
![Azure resources - Access reviews list showing the status of all reviews](./media/pim-resource-roles-start-access-review/access-reviews.png)
+1. Click **New** to create a new access review.
+
+1. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers.
+
+ ![Create an access review - Review name and description](./media/pim-resource-roles-start-access-review/name-description.png)
+
+1. Set the **Start date**. By default, an access review occurs once, starts at the same time it's created, and ends in one month. You can change the start and end dates to have an access review start in the future and last however many days you want.
+
+ ![Start date, frequency, duration, end, number of times, and end date](./media/pim-resource-roles-start-access-review/start-end-dates.png)
+
+1. To make the access review recurring, change the **Frequency** setting from **One time** to **Weekly**, **Monthly**, **Quarterly**, **Annually**, or **Semi-annually**. Use the **Duration** slider or text box to define how many days each review of the recurring series will be open for input from reviewers. For example, the maximum duration that you can set for a monthly review is 27 days, to avoid overlapping reviews.
+
+1. Use the **End** setting to specify how the recurring access review series ends. The series can end in one of three ways: it can run indefinitely, end by a specific date, or end after a defined number of occurrences is completed. You, another User administrator, or another Global administrator can stop the series after it's created by changing the date in **Settings** so that it ends on that date.
+
+1. In the **Users** section, select one or more roles that you want to review membership of.
+
+ ![Users scope to review role membership of](./media/pim-resource-roles-start-access-review/users.png)
+
+ > [!NOTE]
+ > - Roles selected here include both [permanent and eligible roles](../privileged-identity-management/pim-how-to-add-role-to-user.md).
+ > - Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.
+ If you are creating an access review of **Azure AD roles**, the following image shows an example of the Review membership list.
+
+ ![Review membership pane listing Azure AD roles you can select](./media/pim-resource-roles-start-access-review/review-membership.png)
+
+ If you are creating an access review of **Azure resource roles**, the following image shows an example of the Review membership list.
+
+ ![Review membership pane listing Azure resource roles you can select](./media/pim-resource-roles-start-access-review/review-membership-azure-resource-roles.png)
+
+1. In the **Reviewers** section, select one or more people to review all the users. Or you can select to have the members review their own access.
+
+ ![Reviewers list of selected users or members (self)](./media/pim-resource-roles-start-access-review/reviewers.png)
+
+ - **Selected users** - Use this option when you don't know who needs access. With this option, you can assign the review to a resource owner or group manager to complete.
+ - **Members (self)** - Use this option to have the users review their own role assignments.
+ - **Manager** - Use this option to have the user's manager review their role assignment. Upon selecting Manager, you will also have the option to specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory.
+
+### Upon completion settings
+
+1. To specify what happens after a review completes, expand the **Upon completion settings** section.
+
+ ![Upon completion settings to auto apply and should review not respond](./media/pim-resource-roles-start-access-review/upon-completion-settings.png)
+
+1. If you want to automatically remove access for users that were denied, set **Auto apply results to resource** to **Enable**. If you want to manually apply the results when the review completes, set the switch to **Disable**.
+
+1. Use the **Should reviewer not respond** list to specify what happens for users who are not reviewed within the review period. This setting does not impact users who have been reviewed manually. If the final reviewer's decision is Deny, the user's access will be removed.
+
+ - **No change** - Leave user's access unchanged
+ - **Remove access** - Remove user's access
+ - **Approve access** - Approve user's access
+ - **Take recommendations** - Take the system's recommendation on denying or approving the user's continued access
+
+### Advanced settings
+
+1. To specify additional settings, expand the **Advanced settings** section.
+
+ ![Advanced settings for show recommendations, require reason on approval, mail notifications, and reminders](./media/pim-resource-roles-start-access-review/advanced-settings.png)
+
+1. Set **Show recommendations** to **Enable** to show the reviewers the system recommendations based on the user's access information.
+
+1. Set **Require reason on approval** to **Enable** to require the reviewer to supply a reason for approval.
+
+1. Set **Mail notifications** to **Enable** to have Azure AD send email notifications to reviewers when an access review starts, and to administrators when a review completes.
+
+1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who have not completed their review.
+1. The content of the email sent to reviewers is autogenerated from the review details, such as the review name, resource name, and due date. If you need to communicate additional information, such as instructions or contact details, specify it in **Additional content for reviewer email**. This information is included in the invitation and reminder emails sent to assigned reviewers, in the highlighted section shown below.
+
+ ![Content of the email sent to reviewers with highlights](./media/pim-resource-roles-start-access-review/email-info.png)
## Start the access review
active-directory Brightspace Desire2learn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/brightspace-desire2learn-tutorial.md
Previously updated : 02/08/2019 Last updated : 03/08/2021 # Tutorial: Azure Active Directory integration with Brightspace by Desire2Learn
-In this tutorial, you learn how to integrate Brightspace by Desire2Learn with Azure Active Directory (Azure AD).
-Integrating Brightspace by Desire2Learn with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Brightspace by Desire2Learn with Azure Active Directory (Azure AD). When you integrate Brightspace by Desire2Learn with Azure AD, you can:
-* You can control in Azure AD who has access to Brightspace by Desire2Learn.
-* You can enable your users to be automatically signed-in to Brightspace by Desire2Learn (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Brightspace by Desire2Learn.
+* Enable your users to be automatically signed-in to Brightspace by Desire2Learn with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Brightspace by Desire2Learn, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Brightspace by Desire2Learn single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Brightspace by Desire2Learn single sign-on enabled subscription.
> [!NOTE]
> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
To configure Azure AD integration with Brightspace by Desire2Learn, you need the
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Brightspace by Desire2Learn supports **IDP** initiated SSO
+* Brightspace by Desire2Learn supports **IDP** initiated SSO.
-## Adding Brightspace by Desire2Learn from the gallery
+## Add Brightspace by Desire2Learn from the gallery
To configure the integration of Brightspace by Desire2Learn into Azure AD, you need to add Brightspace by Desire2Learn from the gallery to your list of managed SaaS apps.
-**To add Brightspace by Desire2Learn from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Brightspace by Desire2Learn**, select **Brightspace by Desire2Learn** from result panel then click **Add** button to add the application.
-
- ![Brightspace by Desire2Learn in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Brightspace by Desire2Learn** in the search box.
+1. Select **Brightspace by Desire2Learn** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Brightspace by Desire2Learn based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Brightspace by Desire2Learn needs to be established.
+## Configure and test Azure AD SSO for Brightspace by Desire2Learn
-To configure and test Azure AD single sign-on with Brightspace by Desire2Learn, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Brightspace by Desire2Learn using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Brightspace by Desire2Learn.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Brightspace by Desire2Learn Single Sign-On](#configure-brightspace-by-desire2learn-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Brightspace by Desire2Learn test user](#create-brightspace-by-desire2learn-test-user)** - to have a counterpart of Britta Simon in Brightspace by Desire2Learn that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Brightspace by Desire2Learn, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Brightspace by Desire2Learn SSO](#configure-brightspace-by-desire2learn-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Brightspace by Desire2Learn test user](#create-brightspace-by-desire2learn-test-user)** - to have a counterpart of B.Simon in Brightspace by Desire2Learn that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Brightspace by Desire2Learn, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Brightspace by Desire2Learn** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Brightspace by Desire2Learn** application integration page, find the **Manage** section and select **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
- ![Brightspace by Desire2Learn Domain and URLs single sign-on information](common/idp-intiated.png)
-
- a. In the **Identifier** text box, type a URL using the following pattern:
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
```http https://<companyname>.tenants.brightspace.com/samlLogin
To configure Azure AD single sign-on with Brightspace by Desire2Learn, perform t
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Brightspace by Desire2Learn Single Sign-On
-
-To configure single sign-on on **Brightspace by Desire2Learn** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Brightspace by Desire2Learn support team](https://www.d2l.com/contact/). They set this setting to have the SAML SSO connection set properly on both sides.
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
+In this section, you'll create a test user in the Azure portal called B.Simon.
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Brightspace by Desire2Learn.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Brightspace by Desire2Learn**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Brightspace by Desire2Learn.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Brightspace by Desire2Learn**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the **Default Access** role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-2. In the applications list, select **Brightspace by Desire2Learn**.
+## Configure Brightspace by Desire2Learn SSO
- ![The Brightspace by Desire2Learn link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Brightspace by Desire2Learn** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Brightspace by Desire2Learn support team](https://www.d2l.com/contact/). They configure this setting to have the SAML SSO connection set properly on both sides.
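Before sending the file, you can sanity-check what the Federation Metadata XML contains. A sketch, assuming Python is available; the XML below is an illustrative fragment, not real tenant data:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment of the Federation Metadata XML downloaded from the
# Azure portal; a real file is larger and carries a signing certificate.
metadata = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
  entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleSignOnService
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
      Location="https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

ns = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}
root = ET.fromstring(metadata)
entity_id = root.get("entityID")  # Azure AD identifier the SP will trust
sso_url = root.find("md:IDPSSODescriptor/md:SingleSignOnService", ns).get("Location")
```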
### Create Brightspace by Desire2Learn test user
In this section, you create a user called Britta Simon in Brightspace by Desire2Learn.

> [!NOTE]
> You can use any other Brightspace by Desire2Learn user account creation tools or APIs provided by Brightspace by Desire2Learn to provision Azure Active Directory user accounts.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the Brightspace by Desire2Learn tile in the Access Panel, you should be automatically signed in to the Brightspace by Desire2Learn for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Brightspace by Desire2Learn for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Brightspace by Desire2Learn tile in My Apps, you should be automatically signed in to the Brightspace by Desire2Learn for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Brightspace by Desire2Learn, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Cisco Umbrella Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cisco-umbrella-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Cisco Umbrella | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Cisco Umbrella.
+ Title: 'Tutorial: Azure Active Directory integration with Cisco Umbrella Admin SSO | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Cisco Umbrella Admin SSO.
Previously updated : 02/09/2021 Last updated : 03/16/2021
-# Tutorial: Azure Active Directory integration with Cisco Umbrella
+# Tutorial: Azure Active Directory integration with Cisco Umbrella Admin SSO
-In this tutorial, you'll learn how to integrate Cisco Umbrella with Azure Active Directory (Azure AD). When you integrate Cisco Umbrella with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Cisco Umbrella Admin SSO with Azure Active Directory (Azure AD). When you integrate Cisco Umbrella Admin SSO with Azure AD, you can:
-* Control in Azure AD who has access to Cisco Umbrella.
-* Enable your users to be automatically signed-in to Cisco Umbrella with their Azure AD accounts.
+* Control in Azure AD who has access to Cisco Umbrella Admin SSO.
+* Enable your users to be automatically signed-in to Cisco Umbrella Admin SSO with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate Cisco Umbrella with Azure Active Directory (Azure AD).
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Cisco Umbrella single sign-on (SSO) enabled subscription.
+* Cisco Umbrella Admin SSO single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Cisco Umbrella supports **SP and IDP** initiated SSO.
+* Cisco Umbrella Admin SSO supports **SP and IDP** initiated SSO.
-## Add Cisco Umbrella from the gallery
+## Add Cisco Umbrella Admin SSO from the gallery
-To configure the integration of Cisco Umbrella into Azure AD, you need to add Cisco Umbrella from the gallery to your list of managed SaaS apps.
+To configure the integration of Cisco Umbrella Admin SSO into Azure AD, you need to add Cisco Umbrella Admin SSO from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Cisco Umbrella** in the search box.
-1. Select **Cisco Umbrella** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Cisco Umbrella Admin SSO** in the search box.
+1. Select **Cisco Umbrella Admin SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Cisco Umbrella
+## Configure and test Azure AD SSO for Cisco Umbrella Admin SSO
-Configure and test Azure AD SSO with Cisco Umbrella using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cisco Umbrella.
+Configure and test Azure AD SSO with Cisco Umbrella Admin SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cisco Umbrella Admin SSO.
-To configure and test Azure AD SSO with Cisco Umbrella, perform the following steps:
+To configure and test Azure AD SSO with Cisco Umbrella Admin SSO, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
    1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
    1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Cisco Umbrella SSO](#configure-cisco-umbrella-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Cisco Umbrella test user](#create-cisco-umbrella-test-user)** - to have a counterpart of B.Simon in Cisco Umbrella that is linked to the Azure AD representation of user.
+1. **[Configure Cisco Umbrella Admin SSO SSO](#configure-cisco-umbrella-admin-sso-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Cisco Umbrella Admin SSO test user](#create-cisco-umbrella-admin-sso-test-user)** - to have a counterpart of B.Simon in Cisco Umbrella Admin SSO that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Cisco Umbrella** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Cisco Umbrella Admin SSO** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/metadataxml.png)
-6. On the **Set up Cisco Umbrella** section, copy the appropriate URL(s) as per your requirement.
+6. On the **Set up Cisco Umbrella Admin SSO** section, copy the appropriate URL(s) as per your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cisco Umbrella.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cisco Umbrella Admin SSO.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Cisco Umbrella**.
+1. In the applications list, select **Cisco Umbrella Admin SSO**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Cisco Umbrella SSO
+## Configure Cisco Umbrella Admin SSO SSO
-1. In a different browser window, sign-on to your Cisco Umbrella company site as administrator.
+1. In a different browser window, sign-on to your Cisco Umbrella Admin SSO company site as administrator.
2. From the left side of the menu, click **Admin**, navigate to **Authentication**, and then click **SAML**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cisco Umbrella Admin SSO.
![The Other](./media/cisco-umbrella-tutorial/other.png)
-4. On the **Cisco Umbrella Metadata**, page, click **NEXT**.
+4. On the **Cisco Umbrella Admin SSO Metadata**, page, click **NEXT**.
![The metadata](./media/cisco-umbrella-tutorial/metadata.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cisco Umbrella Admin SSO.
8. Click **SAVE**.
-### Create Cisco Umbrella test user
+### Create Cisco Umbrella Admin SSO test user
-To enable Azure AD users to log in to Cisco Umbrella, they must be provisioned into Cisco Umbrella.
-In the case of Cisco Umbrella, provisioning is a manual task.
+To enable Azure AD users to log in to Cisco Umbrella Admin SSO, they must be provisioned into Cisco Umbrella Admin SSO.
+In the case of Cisco Umbrella Admin SSO, provisioning is a manual task.
**To provision a user account, perform the following steps:**
-1. In a different browser window, sign-on to your Cisco Umbrella company site as administrator.
+1. In a different browser window, sign-on to your Cisco Umbrella Admin SSO company site as administrator.
2. From the left side of the menu, click **Admin** and navigate to **Accounts**.
In this section, you test your Azure AD single sign-on configuration with the following options.
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Cisco Umbrella Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Cisco Umbrella Admin SSO Sign on URL where you can initiate the login flow.
-* Go to Cisco Umbrella Sign-on URL directly and initiate the login flow from there.
+* Go to Cisco Umbrella Admin SSO Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Cisco Umbrella for which you set up the SSO.
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Cisco Umbrella Admin SSO for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Cisco Umbrella tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Cisco Umbrella for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Cisco Umbrella Admin SSO tile in My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow, and if configured in IDP mode, you should be automatically signed in to the Cisco Umbrella Admin SSO for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Cisco Umbrella you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Cisco Umbrella Admin SSO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Ciscocloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ciscocloud-tutorial.md
Previously updated : 02/14/2019 Last updated : 03/08/2021

# Tutorial: Azure Active Directory integration with Cisco Cloud
-In this tutorial, you learn how to integrate Cisco Cloud with Azure Active Directory (Azure AD).
-Integrating Cisco Cloud with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Cisco Cloud with Azure Active Directory (Azure AD). When you integrate Cisco Cloud with Azure AD, you can:
-* You can control in Azure AD who has access to Cisco Cloud.
-* You can enable your users to be automatically signed-in to Cisco Cloud (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Cisco Cloud.
+* Enable your users to be automatically signed-in to Cisco Cloud with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Cisco Cloud, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Cisco Cloud single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Cisco Cloud single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Cisco Cloud supports **SP and IDP** initiated SSO
+* Cisco Cloud supports **SP and IDP** initiated SSO.
-## Adding Cisco Cloud from the gallery
+## Add Cisco Cloud from the gallery
To configure the integration of Cisco Cloud into Azure AD, you need to add Cisco Cloud from the gallery to your list of managed SaaS apps.
-**To add Cisco Cloud from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Cisco Cloud**, select **Cisco Cloud** from result panel then click **Add** button to add the application.
-
- ![Cisco Cloud in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Cisco Cloud** in the search box.
+1. Select **Cisco Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
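As an aside, gallery apps can also be added programmatically through the Microsoft Graph `applicationTemplates/{id}/instantiate` endpoint. A sketch only, with a placeholder template id (this tutorial uses the portal instead):

```python
import json

# Sketch: adding a gallery app via Microsoft Graph.
# POST https://graph.microsoft.com/v1.0/applicationTemplates/{template-id}/instantiate
# The template id is a placeholder; look up the real one with
# GET /applicationTemplates?$filter=displayName eq 'Cisco Cloud'
template_id = "<gallery-template-id>"
url = f"https://graph.microsoft.com/v1.0/applicationTemplates/{template_id}/instantiate"
body = json.dumps({"displayName": "Cisco Cloud"})
```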
-In this section, you configure and test Azure AD single sign-on with Cisco Cloud based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Cisco Cloud needs to be established.
+## Configure and test Azure AD SSO for Cisco Cloud
-To configure and test Azure AD single sign-on with Cisco Cloud, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Cisco Cloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cisco Cloud.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Cisco Cloud Single Sign-On](#configure-cisco-cloud-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Cisco Cloud test user](#create-cisco-cloud-test-user)** - to have a counterpart of Britta Simon in Cisco Cloud that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Cisco Cloud, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Cisco Cloud SSO](#configure-cisco-cloud-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Cisco Cloud test user](#create-cisco-cloud-test-user)** - to have a counterpart of B.Simon in Cisco Cloud that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Cisco Cloud, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Cisco Cloud** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Cisco Cloud** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where Identifier and Reply U R L values appear.](common/idp-intiated.png)
- a. In the **Identifier** text box, type a URL using the following pattern: `<subdomain>.cisco.com`
To configure Azure AD single sign-on with Cisco Cloud, perform the following steps:
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<subdomain>.cloudapps.cisco.com`
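The patterns above are plain substitutions of your tenant's subdomain. A sketch, using `contoso` as a placeholder subdomain:

```python
# Sketch: filling in the Basic SAML Configuration patterns for Cisco Cloud.
# "contoso" is a placeholder subdomain -- use the value for your own tenant.
subdomain = "contoso"
identifier = f"{subdomain}.cisco.com"                     # Identifier (Entity ID)
sign_on_url = f"https://{subdomain}.cloudapps.cisco.com"  # Sign-on URL (SP initiated)
```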
To configure Azure AD single sign-on with Cisco Cloud, perform the following steps:
![The Certificate download link](common/copy-metadataurl.png)
-### Configure Cisco Cloud Single Sign-On
-
-To configure single sign-on on **Cisco Cloud** side, you need to send the **App Federation Metadata Url** to [Cisco Cloud support team](mailto:cpr-ops@cisco.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
+In this section, you'll create a test user in the Azure portal called B.Simon.
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Cisco Cloud.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Cisco Cloud**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cisco Cloud.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Cisco Cloud**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-2. In the applications list, select **Cisco Cloud**.
+## Configure Cisco Cloud SSO
- ![The Cisco Cloud link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+To configure single sign-on on the **Cisco Cloud** side, you need to send the **App Federation Metadata Url** to the [Cisco Cloud support team](mailto:cpr-ops@cisco.com). They configure this setting to have the SAML SSO connection set properly on both sides.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create Cisco Cloud test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called Britta Simon in Cisco Cloud. Work with [Cisco Cloud support team](mailto:cpr-ops@cisco.com) to add the users in the Cisco Cloud platform. Users must be created and activated before you use single sign-on.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create Cisco Cloud test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in Cisco Cloud. Work with [Cisco Cloud support team](mailto:cpr-ops@cisco.com) to add the users in the Cisco Cloud platform. Users must be created and activated before you use single sign-on.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in Azure portal. This will redirect to Cisco Cloud Sign on URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to Cisco Cloud Sign-on URL directly and initiate the login flow from there.
-When you click the Cisco Cloud tile in the Access Panel, you should be automatically signed in to the Cisco Cloud for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Cisco Cloud for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Cisco Cloud tile in My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow, and if configured in IDP mode, you should be automatically signed in to the Cisco Cloud for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Cisco Cloud, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Exium Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/exium-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Exium | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Exium.
+ Last updated : 03/16/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Exium
+
+In this tutorial, you'll learn how to integrate Exium with Azure Active Directory (Azure AD). When you integrate Exium with Azure AD, you can:
+
+* Control in Azure AD who has access to Exium.
+* Enable your users to be automatically signed-in to Exium with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Exium single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Exium supports **SP** initiated SSO.
+
+## Adding Exium from the gallery
+
+To configure the integration of Exium into Azure AD, you need to add Exium from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Exium** in the search box.
+1. Select **Exium** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Exium
+
+Configure and test Azure AD SSO with Exium using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Exium.
+
+To configure and test Azure AD SSO with Exium, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Exium SSO](#configure-exium-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Exium test user](#create-exium-test-user)** - to have a counterpart of B.Simon in Exium that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Exium** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://subapi.exium.net/saml/<WORKSPACE_ID>/metadata`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://subapi.exium.net/saml/<WORKSPACE_ID>/acs`
+
+ c. In the **Sign on URL** text box, type the URL:
+ `https://service.exium.net/sign-in`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Exium Client support team](mailto:support@exium.net) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
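For illustration, with a hypothetical workspace ID of `ws-12345`, the filled-in values would look like this (your actual workspace ID comes from the Exium support team):

```
Identifier (Entity ID): https://subapi.exium.net/saml/ws-12345/metadata
Reply URL:              https://subapi.exium.net/saml/ws-12345/acs
Sign on URL:            https://service.exium.net/sign-in
```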
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the **Copy** button to copy the **App Federation Metadata Url**, and then save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Exium.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Exium**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the **Default Access** role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Exium SSO
+
+1. Sign in to Exium company site as an administrator.
+
+1. In the **Admin Console**, select the **Company Profile** panel.
+
+ ![screenshot for admin console](./media/exium-tutorial/company-profile.png)
+
+1. In the **Profile** section, click **SSO Settings**, and then click **Edit**.
+
+1. Perform the following steps in the **SSO Settings** section:
+
+ ![screenshot for SSO Settings](./media/exium-tutorial/update.png)
+
+    a. From the **SSO Type** dropdown, select **AzureAD**.
+
+    b. Paste the **App Federation Metadata Url** value into the **SAML 2.0 IDP Metadata URL** field.
+
+    c. Copy the **SAML 2.0 SSO URL** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+    d. Copy the **SAML 2.0 SP Entity ID** value and paste it into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+    e. Click **Update**.
+
+### Create Exium test user
+
+1. Sign in to Exium company site as an administrator.
+
+1. Go to **User Management -> Users** and click **Add User**.
+
+ ![screenshot for create test user](./media/exium-tutorial/add-user.png)
+
+1. Fill in the required fields on the following page and click **Save**.
+
+ ![screenshot for create test user fields with save button](./media/exium-tutorial/add-user-2.png)
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. You'll be redirected to the Exium sign-on URL, where you can initiate the login flow.
+
+* Go to the Exium sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Exium tile in My Apps, you'll be redirected to the Exium sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+## Next steps
+
+Once you configure Exium, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Maverics Identity Orchestrator Saml Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md
Previously updated : 08/12/2020 Last updated : 03/17/2021
+# Integrate Azure AD single sign-on with Maverics Identity Orchestrator SAML Connector
-# Tutorial: Integrate Azure AD single sign-on with Maverics Identity Orchestrator SAML Connector
+Strata's Maverics Identity Orchestrator provides a simple way to integrate on-premises applications with Azure Active Directory (Azure AD) for authentication and access control. The Maverics Orchestrator is capable of modernizing authentication and authorization for apps that currently rely on headers, cookies, and other proprietary authentication methods. Maverics Orchestrator instances can be deployed on-premises or in the cloud.
-Strata provides a simple way to integrate on-premises applications with Azure Active Directory (Azure AD) for authentication and access control.
+This hybrid access tutorial demonstrates how to migrate an on-premises web application that's currently protected by a legacy web access management product to use Azure AD for authentication and access control. Here are the basic steps:
-This article walks you through how to configure Maverics Identity Orchestrator to:
-* Incrementally migrate users from an on-premises identity system into Azure AD during login to a legacy on-premises application.
-* Route login requests from a legacy web-access management product, such as CA SiteMinder or Oracle Access Manager, to Azure AD.
-* Authenticate users to on-premises applications that are protected by using HTTP headers or proprietary session cookies after authenticating the user against Azure AD.
-
-Strata provides software that you can deploy on-premises or in the cloud. It helps you discover, connect, and orchestrate across identity providers to create distributed identity management for hybrid and multi-cloud enterprises.
-
-This tutorial demonstrates how to migrate an on-premises web application that's currently protected by a legacy web access management product (CA SiteMinder) to use Azure AD for authentication and access control. Here are the basic steps:
-1. Install Maverics Identity Orchestrator.
-2. Register your enterprise application with Azure AD, and configure it to use Maverics Azure AD SAML Zero Code Connector for SAML-based single sign-on (SSO).
-3. Integrate Maverics with SiteMinder and the Lightweight Directory Access Protocol (LDAP) user store.
-4. Set up an Azure key vault, and configure Maverics to use it as its secrets management provider.
-5. Demonstrate user migration and session abstraction by using Maverics to provide access to an on-premises Java web application.
-
-For additional installation and configuration instructions, go to the [Strata website](https://www.strata.io).
+1. Set up the Maverics Orchestrator
+1. Proxy an application
+1. Register an enterprise application in Azure AD
+1. Authenticate via Azure and authorize access to the application
+1. Add headers for seamless application access
+1. Work with multiple applications
## Prerequisites

-- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- A Maverics Identity Orchestrator SAML Connector SSO-enabled
-subscription. To obtain the Maverics software, contact [Strata sales](mailto:sales@strata.io).
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A Maverics Identity Orchestrator SAML Connector SSO-enabled subscription. To get the Maverics software, contact [Strata sales](mailto:sales@strata.io).
+* At least one application that uses header-based authentication. The examples work against an application called Sonar, which is hosted at https://app.sonarsystems.com, and an application called Connectulum, hosted at https://app.connectulum.com.
+* A Linux machine to host the Maverics Orchestrator
+ * OS: RHEL 7.7 or higher, CentOS 7+
+ * Disk: >= 10 GB
+ * Memory: >= 4 GB
+ * Ports: 22 (SSH/SCP), 443, 7474
+ * Root access for install/administrative tasks
+ * Network egress from the server hosting the Maverics Identity Orchestrator to your protected application
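As a quick sanity check before installing, you can verify that the host meets these requirements. This is only a sketch using standard Linux tools; adjust the thresholds to your own sizing:

```shell
# Preflight sketch for the Maverics host (standard Linux tools).
# Memory: expect at least 4 GB (~4000000 kB reported by /proc/meminfo).
grep MemTotal /proc/meminfo

# Disk: expect at least 10 GB available on the install volume.
df -h /

# OS release: expect RHEL 7.7+ or CentOS 7+.
cat /etc/os-release
```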
-## Install Maverics Identity Orchestrator
+## Step 1: Set up the Maverics Orchestrator
-To get started with the Maverics Identity Orchestrator installation, see the [installation instructions](https://www.strata.io).
+### Install Maverics
-### System requirements
-* Supported operating systems
- * RHEL 7+
- * CentOS 7+
+1. Get the latest Maverics RPM. Copy the package to the system on which you want to install the Maverics software.
-* Dependencies
- * systemd
+1. Install the Maverics package, substituting your file name in place of `maverics.rpm`.
-### Installation
+ `sudo rpm -Uvf maverics.rpm`
-1. Obtain the latest Maverics Redhat Package Manager (RPM) package. Copy the package to the system on which you want to install the Maverics software.
+ After you install Maverics, it will run as a service under `systemd`. To verify that the service is running, execute the following command:
-2. Install the Maverics package, substituting your file name in place of `maverics.rpm`.
+ `sudo systemctl status maverics`
- `sudo rpm -Uvf maverics.rpm`
+1. To restart the Orchestrator and follow the logs, you can run the following command:
-3. After you install Maverics, it will run as a service under `systemd`. To verify that the service is running, execute the following command:
+ `sudo service maverics restart; sudo journalctl --identifier=maverics -f`
- `sudo systemctl status maverics`
-
-By default, Maverics is installed in the */usr/local/bin* directory.
-
-After you install Maverics, the default *maverics.yaml* file is created in the */etc/maverics* directory. Before you edit your configuration to include `workflows` and `connectors`, your configuration file will look like this:
+After you install Maverics, the default `maverics.yaml` file is created in the `/etc/maverics` directory. Before you edit your configuration to include `appgateways` and `connectors`, your configuration file will look like this:
```yaml
# © Strata Identity Inc. 2020. All Rights Reserved. Patents Pending.
version: 0.1
listenAddress: ":7474"
```
-## Configuration options
-### Version
-The `version` field declares which version of the configuration file is being used. If the version isn't specified, the most recent configuration version will be used.
-
-```yaml
-version: 0.1
-```
-### listenAddress
-`listenAddress` declares which address Orchestrator will listen on. If the host section of the address is blank, Orchestrator will listen on all available unicast and anycast IP addresses of the local system. If the port section of the address is blank, a port number is chosen automatically.
-
-```yaml
-listenAddress: ":453"
-```
-### TLS
-
-The `tls` field declares a map of Transport Layer Security (TLS) objects. The TLS objects can be used by connectors and the Orchestrator server. For all available TLS options, see the `transport` package documentation.
-
-Microsoft Azure requires communication over TLS when you're using SAML-based SSO. For information about generating certificates, go to the [Let's Encrypt website](https://letsencrypt.org/getting-started/).
-The `maverics` key is reserved for the Orchestrator server. All other keys are available and can be used to inject a TLS object into a given connector.
-
-```yaml
-tls:
- maverics:
- certFile: /etc/maverics/maverics.cert
- keyFile: /etc/maverics/maverics.key
-```
-### Include files
-
-You can define `connectors` and `workflows` in their own, separate configuration files and reference them in the *maverics.yaml* file by using `includeFiles`, per the following example:
-
-```yaml
-includeFiles:
- - workflow/sessionAbstraction.yaml
- - connector/AzureAD-saml.yaml
- - connector/siteminder.yaml
- ```
+### Configure DNS
-This tutorial uses a single *maverics.yaml* configuration file.
+DNS will be helpful so that you don't have to remember the Orchestrator server's IP.
-## Use Azure Key Vault as your secrets provider
+Edit the browser machine's (your laptop's) hosts file, using a hypothetical Orchestrator IP of 12.34.56.78. On Linux-based operating systems, this file is located at `/etc/hosts`. On Windows, it's at `C:\windows\system32\drivers\etc\hosts`.
-### Manage secrets
-
-To load secrets, Maverics can integrate with various secret management solutions. The current integrations include a file, Hashicorp Vault, and Azure Key Vault. If no secret management solution is specified, Maverics defaults to loading secrets in plain text out of the *maverics.yaml* file.
-
-To declare a value as a secret in a *maverics.yaml* config file, enclose the secret in angle brackets:
+```
+12.34.56.78 sonar.maverics.com
+12.34.56.78 connectulum.maverics.com
+```
- ```yaml
- connectors:
- - name: AzureAD
- type: AzureAD
- apiToken: <AzureADAPIToken>
- oauthClientID: <AzureADOAuthClientID>
- oauthClientSecret: <AzureADOAuthClientSecret>
- ```
+To confirm that DNS is configured as expected, you can make a request to the Orchestrator's status endpoint. From your browser, request http://sonar.maverics.com:7474/status.
-### Load secrets from a file
+### Configure TLS
-1. To load secrets from a file, add the environment variable `MAVERICS_SECRET_PROVIDER` in the */etc/maverics/maverics.env* file by using:
+Communicating with your Orchestrator over secure channels is critical for maintaining security. Add a certificate/key pair in your `tls` section to achieve this.
- `MAVERICS_SECRET_PROVIDER=secretfile:///<PATH TO SECRETS FILE>`
+To generate a self-signed certificate and key for the Orchestrator server, run the following command from within the `/etc/maverics` directory:
-2. Restart the Maverics service by running:
+`openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out maverics.crt -keyout maverics.key`
- `sudo systemctl restart maverics`
+> [!NOTE]
+> For production environments, you'll likely want to use a certificate signed by a known CA to avoid warnings in the browser. [Let's Encrypt](https://letsencrypt.org/) is a good and free option if you're looking for a trusted CA.
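Before wiring the pair into the config, it can help to confirm that the certificate and key actually match. One way (a sketch; it assumes the `openssl` command above was run in the current directory) is to compare the public keys derived from each file:

```shell
# Sketch: confirm maverics.crt and maverics.key are a matching pair by
# comparing the public key embedded in each file.
openssl x509 -in maverics.crt -noout -pubkey > cert.pub
openssl pkey -in maverics.key -pubout > key.pub
diff cert.pub key.pub && echo "certificate and key match"
```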
-The *secrets.yaml* file contents can be filled with any number of `secrets`.
+Now, use the newly generated certificate and key for the Orchestrator. Your config file should now contain this code:
```yaml
-secrets:
- AzureADAPIToken: aReallyGoodToken
- AzureADOAuthClientID: aReallyUniqueID
- AzureADOAuthClientSecret: aReallyGoodSecret
+version: 0.1
+listenAddress: ":443"
+
+tls:
+ maverics:
+ certFile: /etc/maverics/maverics.crt
+ keyFile: /etc/maverics/maverics.key
```
-### Set up an Azure key vault
-You can set up an Azure key vault by using either the Azure portal or the Azure CLI.
+To confirm that TLS is configured as expected, restart the Maverics service, and make a request to the status endpoint. From your browser, request https://sonar.maverics.com/status.
-**Use the Azure portal**
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. [Create a new key vault](../../key-vault/general/quick-create-portal.md).
-1. [Add the secrets to the key vault](../../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault).
-1. [Register an application with Azure AD](../develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal).
-1. [Authorize an application to use a secret](../../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault).
+## Step 2: Proxy an application
-**Use the Azure CLI**
+Next, configure basic proxying in the Orchestrator by using `appgateways`. This step helps you validate that the Orchestrator has the necessary connectivity to the protected application.
-1. Open the [Azure CLI](/cli/azure/install-azure-cli), and then enter the following command:
+Your config file should now contain this code:
- ```azurecli
- az login
- ```
+```yaml
+version: 0.1
+listenAddress: ":443"
-1. Create a new key vault by running the following command:
- ```azurecli
- az keyvault create --name "[VAULT_NAME]" --resource-group "[RESOURCE_GROUP]" --location "[REGION]"
- ```
+tls:
+ maverics:
+ certFile: /etc/maverics/maverics.crt
+ keyFile: /etc/maverics/maverics.key
-1. Add the secrets to the key vault by running the following command:
- ```azurecli
- az keyvault secret set --vault-name "[VAULT_NAME]" --name "[SECRET_NAME]" --value "[SECRET_VALUE]"
- ```
+appgateways:
+ - name: sonar
+ location: /
+ # Replace https://app.sonarsystems.com with the address of your protected application
+ upstream: https://app.sonarsystems.com
+```
-1. Register an application with Azure AD by running the following command:
- ```azurecli
- az ad sp create-for-rbac -n "MavericsKeyVault" --skip-assignment > azure-credentials.json
- ```
+To confirm that proxying is working as expected, restart the Maverics service, and make a request to the application through the Maverics proxy. From your browser, request https://sonar.maverics.com. You can optionally make a request to specific application resources, for example, `https://sonar.maverics.com/RESOURCE`, where `RESOURCE` is a valid application resource of the protected upstream app.
-1. Authorize an application to use a secret by running the following command:
- ```azurecli
- az keyvault set-policy --name "[VAULT_NAME]" --spn [APPID] --secret-permissions list get
- #APPID can be found in the azure-credentials.json
- generated in the previous step
- ```
+## Step 3: Register an enterprise application in Azure AD
-1. To load secrets from your Azure key vault, set the environment variable `MAVERICS_SECRET_PROVIDER` in the */etc/maverics/maverics.env* file by using the credentials found in the *azure-credentials.json* file, in the following format:
-
- `MAVERICS_SECRET_PROVIDER='azurekeyvault://<KEYVAULT NAME>.vault.azure.net?clientID=<APPID>&clientSecret=<PASSWORD>&tenantID=<TENANT>'`
+Now, create a new enterprise application in Azure AD that will be used for authenticating end users.
-1. Restart the Maverics service:
-`sudo systemctl restart maverics`
+> [!NOTE]
+> When you use Azure AD features like Conditional Access, it's important to create an enterprise application per on-premises application. This permits per-app Conditional Access, per-app risk evaluation, per-app assigned permissions, and so on. Generally, an enterprise application in Azure AD maps to an Azure connector in Maverics.
-## Configure your application in Azure AD for SAML-based SSO
+To register an enterprise application in Azure AD:
-1. In your Azure AD tenant, go to **Enterprise applications**, search for **Maverics Identity Orchestrator SAML Connector**, and then select it.
+1. In your Azure AD tenant, go to **Enterprise applications**, and then select **New Application**. In the Azure AD gallery, search for **Maverics Identity Orchestrator SAML Connector**, and then select it.
-1. On the Maverics Identity Orchestrator SAML Connector **Properties** pane, set **User assignment required?** to **No** to enable the application to work for newly migrated users.
+1. On the Maverics Identity Orchestrator SAML Connector **Properties** pane, set **User assignment required?** to **No** to enable the application to work for all users in your directory.
1. On the Maverics Identity Orchestrator SAML Connector **Overview** pane, select **Set up single sign-on**, and then select **SAML**.
![Screenshot of the "Basic SAML Configuration" Edit button.](common/edit-urls.png)
-1. Enter the **Entity ID** by typing a URL in the following format: `https://<SUBDOMAIN>.maverics.org`. The Entity ID must be unique across the apps in the tenant. Save the value entered here to be included in the configuration of Maverics.
+1. Enter an **Entity ID** of `https://sonar.maverics.com`. The entity ID must be unique across the apps in the tenant, and it can be an arbitrary value. You'll use this value when you define the `samlEntityID` field for your Azure connector in the next section.
-1. Enter the **Reply URL** in the following format: `https://<AZURECOMPANY.COM>/<MY_APP>/`.
+1. Enter a **Reply URL** of `https://sonar.maverics.com/acs`. You'll use this value when you define the `samlConsumerServiceURL` field for your Azure connector in the next section.
-1. Enter the **Sign on URL** in the following format: `https://<AZURE-COMPANY.COM>/<MY_APP>/<LOGIN PAGE>`.
+1. Enter a **Sign on URL** of `https://sonar.maverics.com/`. This field won't be used by Maverics, but it is required in Azure AD to enable users to get access to the application through the Azure AD My Apps portal.
1. Select **Save**.
-1. In the **SAML Signing Certificate** section, select the **Copy** button to copy the **App Federation Metadata URL**, and then save it to your computer.
-
- ![Screenshot of the "SAML Signing Certificate" Copy button.](common/copy-metadataurl.png)
-
-## Configure Maverics Identity Orchestrator Azure AD SAML Connector
-
-Maverics Identity Orchestrator Azure AD Connector supports OpenID Connect and SAML Connect. To configure the connector, do the following:
-
-1. To enable SAML-based SSO, set `authType: saml`.
-
-1. Create the value for `samlMetadataURL` in the following format: `samlMetadataURL:https://login.microsoftonline.com/<TENANT ID>/federationmetadata/2007-06/federationmetadata.xml?appid=<APP ID>`.
-
-1. Define the URL that Azure will be redirected back to in your app after users have logged in with their Azure credentials. Use the following format:
-`samlRedirectURL: https://<AZURECOMPANY.COM>/<MY_APP>`.
+1. In the **SAML Signing Certificate** section, select the **Copy** button to copy the **App Federation Metadata URL** value, and then save it to your computer.
-1. Copy the value from the previously configured EntityID:
-`samlEntityID: https://<SUBDOMAIN>.maverics.org`.
+ ![Screenshot of the "SAML Signing Certificate" Copy button.](common/copy-metadataurl.png)
-1. Copy the value from the Reply URL that Azure AD will use to post the SAML response:
-`samlConsumerServiceURL: https://<AZURE-COMPANY.COM>/<MY_APP>`.
+## Step 4: Authenticate via Azure and authorize access to the application
-1. Generate a JSON Web Token (JWT) signing key, which is used to protect the Maverics Identity Orchestrator session information, by using the [OpenSSL tool](https://www.openssl.org/source/):
+Next, put the enterprise application you just created to use by configuring the Azure connector in Maverics. This `connectors` configuration paired with the `idps` block allows the Orchestrator to authenticate users.
- ```console
- openssl rand 64 | base64
- ```
-1. Copy the response to the `jwtSigningKey` config property:
-`jwtSigningKey: TBHPvTtu6NUqU84H3Q45grcv9WDJLHgTioqRhB8QGiVzghKlu1mHgP1QHVTAZZjzLlTBmQwgsSoWxGHRcT4Bcw==`.
-
-## Attributes and attribute mapping
-Attribute mapping is used to define the mapping of user attributes from a source on-premises user directory into an Azure AD tenant after the user is set up.
-
-Attributes determine which user data might be returned to an application in a claim, passed into session cookies, or passed to the application in HTTP header variables.
-
-## Configure the Maverics Identity Orchestrator Azure AD SAML Connector YAML file
-
-Your Maverics Identity Orchestrator Azure AD Connector configuration will look like this:
+Your config file should now contain the following code. Be sure to replace `METADATA_URL` with the App Federation Metadata URL value from the preceding step.
```yaml
-- name: AzureAD
- type: azure
- authType: saml
- samlMetadataURL: https://login.microsoftonline.com/<TENANT ID>/federationmetadata/2007-06/federationmetadata.xml?appid=<APP ID>
- samlRedirectURL: https://<AZURECOMPANY.COM>/<MY_APP>
- samlConsumerServiceURL: https://<AZURE-COMPANY.COM>/<MY_APP>
- jwtSigningKey: <SIGNING KEY>
- samlEntityID: https://<SUBDOMAIN>.maverics.org
- attributeMapping:
- displayName: username
- mailNickname: givenName
- givenName: givenName
- surname: sn
- userPrincipalName: mail
- password: password
-```
-
-## Migrate users to an Azure AD tenant
-
-Follow this configuration to incrementally migrate users from a web access management product, such as CA SiteMinder, Oracle Access Manager, or IBM Tivoli. You can also migrate them from a Lightweight Directory Access Protocol (LDAP) directory or a SQL database.
-
-### Configure your application permissions in Azure AD to create users
+version: 0.1
+listenAddress: ":443"
-1. In your Azure AD tenant, go to `App registrations` and select the **Maverics Identity Orchestrator SAML Connector** application.
+tls:
+ maverics:
+ certFile: /etc/maverics/maverics.crt
+ keyFile: /etc/maverics/maverics.key
-1. On the **Maverics Identity Orchestrator SAML Connector | Certificates & secrets** pane, select `New client secret` and then select on expiration option. Select the **Copy** button to copy the secret and save it to your computer.
+idps:
+ - name: azureSonarApp
-1. On the **Maverics Identity Orchestrator SAML Connector | API permissions** pane, select **Add permission** and then, on the **Request API permissions** pane, select **Microsoft Graph** and **Application permissions**.
+appgateways:
+ - name: sonar
+ location: /
+ # Replace https://app.sonarsystems.com with the address of your protected application
+ upstream: https://app.sonarsystems.com
-1. On the next screen, select **User.ReadWrite.All**, and then select **Add permissions**.
+ policies:
+ - resource: /
+ allowIf:
+ - equal: ["{{azureSonarApp.authenticated}}", "true"]
-1. Back on the **API permissions** pane, select **Grant admin consent**.
+connectors:
+ - name: azureSonarApp
+ type: azure
+ authType: saml
+ # Replace METADATA_URL with the App Federation Metadata URL
+ samlMetadataURL: METADATA_URL
+ samlConsumerServiceURL: https://sonar.maverics.com/acs
+ samlEntityID: https://sonar.maverics.com
+```
-### Configure the Maverics Identity Orchestrator SAML Connector YAML file for user migration
+To confirm that authentication is working as expected, restart the Maverics service, and make a request to an application resource through the Maverics proxy. You should be redirected to Azure for authentication before accessing the resource.
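The `policies` list isn't limited to a single root resource. As a sketch reusing only the constructs shown above, you could gate a hypothetical `/admin` subtree with its own rule (the path is illustrative; check the Maverics documentation for how unmatched resources are treated before relying on this):

```yaml
  policies:
    # Illustrative only: /admin is a hypothetical path, and the rule
    # reuses the allowIf/equal construct from the example above.
    - resource: /admin
      allowIf:
        - equal: ["{{azureSonarApp.authenticated}}", "true"]
```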
-To enable the user migration workflow, add these additional properties to the configuration file:
-1. Enter the **Azure Graph URL** in the following format: `graphURL: https://graph.microsoft.com`.
-1. Enter the **OAuth Token URL** in the following format:
-`oauthTokenURL: https://login.microsoftonline.com/<TENANT ID>/federationmetadata/2007-06/federationmetadata.xml?appid=<APP ID>`.
-1. Enter the previously generated client secret in the following format: `oauthClientSecret: <CLIENT SECRET>`.
+## Step 5: Add headers for seamless application access
+You aren't sending headers to the upstream application yet. Let's add `headers` to the request as it passes through the Maverics proxy to enable the upstream application to identify the user.
-Your final Maverics Identity Orchestrator Azure AD Connector configuration file will look like this:
+Your config file should now contain this code:
```yaml
-- name: AzureAD
- type: azure
- authType: saml
- samlMetadataURL: https://login.microsoftonline.com/<TENANT ID>/federationmetadata/2007-06/federationmetadata.xml?appid=<APP ID>
- samlRedirectURL: https://<AZURECOMPANY.COM>/<MY_APP>
- samlConsumerServiceURL: https://<AZURE-COMPANY.COM>/<MY_APP>
- jwtSigningKey: TBHPvTtu6NUqU84H3Q45grcv9WDJLHgTioqRhB8QGiVzghKlu1mHgP1QHVTAZZjzLlTBmQwgsSoWxGHRcT4Bcw==
- samlEntityID: https://<SUBDOMAIN>.maverics.org
- graphURL: https://graph.microsoft.com
- oauthTokenURL: https://login.microsoftonline.com/<TENANT ID>/oauth2/v2.0/token
- oauthClientID: <APP ID>
- oauthClientSecret: <NEW CLIENT SECRET>
- attributeMapping:
- displayName: username
- mailNickname: givenName
- givenName: givenName
- surname: sn
- userPrincipalName: mail
- password: password
-```
-
-### Configure Maverics Zero Code Connector for SiteMinder
+version: 0.1
+listenAddress: ":443"
-You use the SiteMinder connector to migrate users to an Azure AD tenant. You log the users in to legacy on-premises applications that are protected by SiteMinder by using the newly created Azure AD identities and credentials.
+tls:
+ maverics:
+ certFile: /etc/maverics/maverics.crt
+ keyFile: /etc/maverics/maverics.key
-For this tutorial, SiteMinder has been configured to protect the legacy application by using forms-based authentication and the `SMSESSION` cookie. To integrate with an app that consumes authentication and session information through HTTP headers, you need to add the header emulation configuration to the connector.
+idps:
+ - name: azureSonarApp
-This example maps the `username` attribute to the `SM_USER` HTTP header:
+appgateways:
+ - name: sonar
+ location: /
+ # Replace https://app.sonarsystems.com with the address of your protected application
+ upstream: https://app.sonarsystems.com
-```yaml
- headers:
- SM_USER: username
-```
+ policies:
+ - resource: /
+ allowIf:
+ - equal: ["{{azureSonarApp.authenticated}}", "true"]
-Set `proxyPass` to the location that requests are proxied to. Typically, this location is the host of the protected application.
+ headers:
+ email: azureSonarApp.name
+ firstname: azureSonarApp.givenname
+ lastname: azureSonarApp.surname
-`loginPage` should match the URL of the login form that's currently used by SiteMinder when it redirects users for authentication.
-
-```yaml
connectors:
-- name: siteminder-login-form
- type: siteminder
- loginType: form
- loginPage: /siteminderagent/forms/login.fcc
- proxyPass: http://host.company.com
+ - name: azureSonarApp
+ type: azure
+ authType: saml
+ # Replace METADATA_URL with the App Federation Metadata URL
+ samlMetadataURL: METADATA_URL
+ samlConsumerServiceURL: https://sonar.maverics.com/acs
+ samlEntityID: https://sonar.maverics.com
```
-### Configure Maverics Zero Code Connector for LDAP
-
-When applications are protected by a web access management (WAM) product such as SiteMinder, user identities and attributes are typically stored in an LDAP directory.
+To confirm that authentication is working as expected, make a request to an application resource through the Maverics proxy. The protected application should now be receiving headers on the request.
-This connector configuration demonstrates how to connect to the LDAP directory. The connector is configured as the user store for SiteMinder so that the correct user profile information can be collected during the migration workflow and a corresponding user can be created in Azure AD.
+Feel free to edit the header keys if your application expects different headers. All claims that come back from Azure AD as part of the SAML flow are available to use in headers. For example, you can include another header of `secondary_email: azureSonarApp.email`, where `azureSonarApp` is the connector name and `email` is a claim returned from Azure AD.
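As a sketch of that customization (assuming the `azureSonarApp` connector and claims from the configuration above; the `secondary_email` key name is purely illustrative), the modified `headers` block might look like:

```yaml
headers:
  email: azureSonarApp.name
  firstname: azureSonarApp.givenname
  lastname: azureSonarApp.surname
  # Illustrative extra header, populated from the "email" claim returned by Azure AD
  secondary_email: azureSonarApp.email
```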
-* `baseDN` specifies the location in the directory against which to perform the LDAP search.
+## Step 6: Work with multiple applications
-* `url` is the address and port of the LDAP server to connect to.
+Let's now take a look at what's required to proxy to multiple applications that are on different hosts. To achieve this, configure another App Gateway, another enterprise application in Azure AD, and another connector.
-* `serviceAccountUsername` is the username that's used to connect to the LDAP server, usually expressed as a bind DN (for example, `CN=Directory Manager`).
-
-* `serviceAccountPassword` is the password that's used to connect to the LDAP server. This value is stored in the previously configured Azure key vault instance.
-
-* `userAttributes` defines the list of user-related attributes to query for. These attributes are later mapped into corresponding Azure AD attributes.
+Your config file should now contain this code:
-```yaml
-- name: company-ldap
- type: ldap
- url: "ldap://ldap.company.com:389"
- baseDN: ou=People,o=company,c=US
- serviceAccountUsername: uid=admin,ou=Admins,o=company,c=US
- serviceAccountPassword: <vaulted-password>
- userAttributes:
- - uid
- - cn
- - givenName
- - sn
- - mail
- - mobile
-```
-
-### Configure the migration workflow
-
-The migration workflow configuration determines how Maverics migrates users from SiteMinder or LDAP to Azure AD.
-
-This workflow:
-- Uses the SiteMinder connector to proxy the SiteMinder login. User credentials are validated through SiteMinder authentication and then passed to subsequent steps of the workflow.
-- Retrieves user profile attributes from the SiteMinder user store.
-- Makes a request to the Microsoft Graph API to create the user in your Azure AD tenant.
-
-To configure the migration workflow, do the following:
-
-1. Give the workflow a name (for example, **SiteMinder to Azure AD Migration**).
-1. Specify the `endpoint`, which is an HTTP path on which the workflow is exposed, triggering the `actions` of that workflow in response to requests. The `endpoint` typically corresponds to the app that's proxied (for example, `/my_app`). The value must include both the leading and trailing slashes.
-1. Add the appropriate `actions` to the workflow.
-
- a. Define the `login` method for the SiteMinder connector. The connector value must match the name value in the connector configuration.
-
- b. Define the `getprofile` method for the LDAP connector.
-
- c. Define the `createuser` method for the AzureAD connector.
-
- ```yaml
- workflows:
- - name: SiteMinder to Azure AD Migration
- endpoint: /my_app/
- actions:
- - connector: siteminder-login-form
- method: login
- - connector: company-ldap
- method: getprofile
- - connector: AzureAD
- method: createuser
- ```
-### Verify the migration workflow
+version: 0.1
+listenAddress: ":443"
-1. If the Maverics service is not already running, start it by executing the following command:
+tls:
+ maverics:
+ certFile: /etc/maverics/maverics.crt
+ keyFile: /etc/maverics/maverics.key
- `sudo systemctl start maverics`
+idps:
+ - name: azureSonarApp
+ - name: azureConnectulumApp
+
+appgateways:
+ - name: sonar
+ host: sonar.maverics.com
+ location: /
+ # Replace https://app.sonarsystems.com with the address of your protected application
+ upstream: https://app.sonarsystems.com
+
+ policies:
+ - resource: /
+ allowIf:
+ - equal: ["{{azureSonarApp.authenticated}}", "true"]
+
+ headers:
+ email: azureSonarApp.name
+ firstname: azureSonarApp.givenname
+ lastname: azureSonarApp.surname
+
+ - name: connectulum
+ host: connectulum.maverics.com
+ location: /
+ # Replace https://app.connectulum.com with the address of your protected application
+ upstream: https://app.connectulum.com
+
+ policies:
+ - resource: /
+ allowIf:
+ - equal: ["{{azureConnectulumApp.authenticated}}", "true"]
+
+ headers:
+ email: azureConnectulumApp.name
+ firstname: azureConnectulumApp.givenname
+ lastname: azureConnectulumApp.surname
-1. Go to the proxied login URL, `http://host.company.com/my_app`.
-1. Provide the user credentials that are used to log in to the application while it's protected by SiteMinder.
-4. Go to **Home** > **Users | All Users** to verify that the user is created in the Azure AD tenant.
+connectors:
+ - name: azureSonarApp
+ type: azure
+ authType: saml
+ # Replace METADATA_URL with the App Federation Metadata URL
+ samlMetadataURL: METADATA_URL
+ samlConsumerServiceURL: https://sonar.maverics.com/acs
+ samlEntityID: https://sonar.maverics.com
+
+ - name: azureConnectulumApp
+ type: azure
+ authType: saml
+ # Replace METADATA_URL with the App Federation Metadata URL
+ samlMetadataURL: METADATA_URL
+ samlConsumerServiceURL: https://connectulum.maverics.com/acs
+ samlEntityID: https://connectulum.maverics.com
+```
-### Configure the session abstraction workflow
+You might have noticed that the code adds a `host` field to your App Gateway definitions. The `host` field enables the Maverics Orchestrator to distinguish which upstream host to proxy traffic to.
-The session abstraction workflow moves authentication and access control for the legacy on-premises web application to the Azure AD tenant.
+To confirm that the newly added App Gateway is working as expected, make a request to https://connectulum.maverics.com.
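For instance (assuming DNS for both hostnames resolves to the Maverics Orchestrator in your deployment), you could request each host and confirm the responses come from different upstream applications:

```console
# Host-based routing: each request should be proxied to a different upstream
curl -v https://sonar.maverics.com/
curl -v https://connectulum.maverics.com/
```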
-The Azure connector uses the `login` method to redirect the user to the login URL, assuming that no session exists.
+## Advanced scenarios
-After it's authenticated, the session token that's created as a result is passed to Maverics. The SiteMinder connector's `emulate` method is used to emulate the cookie-based session or the header-based session and then decorate the request with any additional attributes required by the application.
+### Identity migration
-1. Give the workflow a name (for example, **SiteMinder Session Abstraction**).
-1. Specify the `endpoint`, which corresponds to the app that's being proxied. The value must include both leading and trailing slashes (for example, `/my_app/`).
-1. Add the appropriate `actions` to the workflow.
+Can't stand your end-of-life'd web access management tool, but you don't have a way to migrate your users without mass password resets? The Maverics Orchestrator supports identity migration by using `migrationgateways`.
- a. Define the `login` method for the Azure connector. The `connector` value must match the `name` value in the connector configuration.
+### Web server gateways
- b. Define the `emulate` method for the SiteMinder connector.
+Don't want to rework your network and proxy traffic through the Maverics Orchestrator? Not a problem. The Maverics Orchestrator can be paired with web server gateways (modules) to offer the same solutions without proxying.
- ```yaml
- - name: SiteMinder Session Abstraction
- endpoint: /my_app/
- actions:
- - connector: azure
- method: login
- - connector: siteminder-login-form
- method: emulate
- ```
-### Verify the session abstraction workflow
+## Wrap-up
-1. Go to the proxied application URL, `https://<AZURECOMPANY.COM>/<MY_APP>`.
-
- You're redirected to the proxied login page.
+At this point, you've installed the Maverics Orchestrator, created and configured an enterprise application in Azure AD, and configured the Orchestrator to proxy to a protected application while requiring authentication and enforcing policy. To learn more about how the Maverics Orchestrator can be used for distributed identity management use cases, [contact Strata](mailto:sales@strata.io).
-1. Enter the Azure AD user credentials.
+## Next steps
- You should be redirected to the application as though you were authenticated directly by SiteMinder.
+- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
active-directory Solarwinds Orion Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/solarwinds-orion-tutorial.md
Previously updated : 07/24/2020 Last updated : 03/01/2021
In this tutorial, you'll learn how to integrate SolarWinds Orion with Azure Acti
* Enable your users to be automatically signed-in to SolarWinds Orion with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* SolarWinds Orion supports **SP and IDP** initiated SSO
-* Once you configure SolarWinds Orion you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+* SolarWinds Orion supports **SP and IDP** initiated SSO.
-## Adding SolarWinds Orion from the gallery
+## Add SolarWinds Orion from the gallery
To configure the integration of SolarWinds Orion into Azure AD, you need to add SolarWinds Orion from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
To configure the integration of SolarWinds Orion into Azure AD, you need to add
Configure and test Azure AD SSO with SolarWinds Orion using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SolarWinds Orion.
-To configure and test Azure AD SSO with SolarWinds Orion, complete the following building blocks:
+To configure and test Azure AD SSO with SolarWinds Orion, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with SolarWinds Orion, complete the following
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **SolarWinds Orion** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **SolarWinds Orion** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SolarWinds Orion.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SolarWinds Orion.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **SolarWinds Orion**.
+1. In the applications list, select **SolarWinds Orion**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure SolarWinds Orion SSO
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the SolarWinds Orion tile in the Access Panel, you should be automatically signed in to the SolarWinds Orion for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect you to the SolarWinds Orion Sign-on URL, where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to SolarWinds Orion Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the SolarWinds Orion instance for which you set up SSO.
-- [Try SolarWinds Orion with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in either mode. When you click the SolarWinds Orion tile in My Apps: if the app is configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you're automatically signed in to the SolarWinds Orion instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect SolarWinds Orion with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure SolarWinds Orion, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Wdesk Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/wdesk-tutorial.md
Previously updated : 04/02/2020 Last updated : 03/08/2021

# Tutorial: Azure Active Directory single sign-on (SSO) integration with Wdesk
In this tutorial, you'll learn how to integrate Wdesk with Azure Active Director
* Enable your users to be automatically signed-in to Wdesk with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Wdesk supports **SP** and **IDP** initiated SSO
-* Once you configure Wdesk you can enforce Session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+* Wdesk supports **SP** and **IDP** initiated SSO.
-## Adding Wdesk from the gallery
+## Add Wdesk from the gallery
To configure the integration of Wdesk into Azure AD, you need to add Wdesk from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Wdesk** in the search box.
1. Select **Wdesk** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for Wdesk
In this section, you configure and test Azure AD single sign-on with Wdesk based on a test user called **Britta Simon**. For single sign-on to work, a link relationship between an Azure AD user and the related user in Wdesk needs to be established.
-To configure and test Azure AD SSO with Wdesk, complete the following building blocks:
+To configure and test Azure AD SSO with Wdesk, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Wdesk, complete the following building b
1. **[Create Wdesk test user](#create-wdesk-test-user)** - to have a counterpart of B.Simon in Wdesk that is linked to the Azure AD representation of the user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
-
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with Wdesk, perform the following steps:
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Wdesk** application integration page, select **Single sign-on**.
+## Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **Wdesk** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
- a. In the **Identifier** text box, type a URL using the following pattern: `https://<subdomain>.wdesk.com/auth/saml/sp/metadata/<instancename>`
To configure Azure AD single sign-on with Wdesk, perform the following steps:
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<subdomain>.wdesk.com/auth/login/saml/<instancename>`
To configure Azure AD single sign-on with Wdesk, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
+
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Wdesk.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Wdesk**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Wdesk**.
-
- ![The Wdesk link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Wdesk.
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Wdesk**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
## Configure Wdesk SSO
In this section, you enable Britta Simon to use Azure single sign-on by granting
1. In the bottom left, click **Admin** and choose **Account Admin**:
- ![Screenshot shows Account Admin selected from the Admin menu.](./media/wdesk-tutorial/tutorial_wdesk_ssoconfig1.png)
+ ![Screenshot shows Account Admin selected from the Admin menu.](./media/wdesk-tutorial/account.png)
1. In Wdesk Admin, navigate to **Security**, then **SAML** > **SAML Settings**:
- ![Screenshot shows SAML Settings selected from the SAML tab.](./media/wdesk-tutorial/tutorial_wdesk_ssoconfig2.png)
+ ![Screenshot shows SAML Settings selected from the SAML tab.](./media/wdesk-tutorial/settings.png)
1. Under **SAML User ID Settings**, check **SAML User ID is Wdesk Username**.
In this section, you enable Britta Simon to use Azure single sign-on by granting
4. Under **General Settings**, check the **Enable SAML Single Sign On**:
- ![Screenshot shows Edit SAML Settings where you can select Enable SAML Single Sign-On.](./media/wdesk-tutorial/tutorial_wdesk_ssoconfig3.png)
+ ![Screenshot shows Edit SAML Settings where you can select Enable SAML Single Sign-On.](./media/wdesk-tutorial/user-settings.png)
5. Under **Service Provider Details**, perform the following steps:
- ![Screenshot shows Service Provider Details where you can enter the values described.](./media/wdesk-tutorial/tutorial_wdesk_ssoconfig4.png)
+ ![Screenshot shows Service Provider Details where you can enter the values described.](./media/wdesk-tutorial/service-provider.png)
1. Copy the **Login URL** and paste it in **Sign-on Url** textbox on Azure portal.
In this section, you enable Britta Simon to use Azure single sign-on by granting
1. Click **Configure IdP Settings** to open **Edit IdP Settings** dialog. Click **Choose File** to locate the **Metadata.xml** file you saved from Azure portal, then upload it.
- ![Screenshot shows Edit I d P Settings where you can upload metadata.](./media/wdesk-tutorial/tutorial_wdesk_ssoconfig5.png)
+ ![Screenshot shows Edit I d P Settings where you can upload metadata.](./media/wdesk-tutorial/metadata.png)
1. Click **Save changes**.
- ![Screenshot shows the Save changes button.](./media/wdesk-tutorial/tutorial_wdesk_ssoconfigsavebutton.png)
+ ![Screenshot shows the Save changes button.](./media/wdesk-tutorial/save.png)
### Create Wdesk test user
To enable Azure AD users to sign in to Wdesk, they must be provisioned into Wdes
2. Navigate to **Admin** > **Account Admin**.
- ![Screenshot shows Account Admin selected from the Admin menu.](./media/wdesk-tutorial/tutorial_wdesk_ssoconfig1.png)
+ ![Screenshot shows Account Admin selected from the Admin menu.](./media/wdesk-tutorial/account.png)
3. Click **Members** under **People**.
4. Now click **Add Member** to open the **Add Member** dialog box.
- ![Screenshot shows the Members tab where you can select Add Member.](./media/wdesk-tutorial/createuser1.png)
+ ![Screenshot shows the Members tab where you can select Add Member.](./media/wdesk-tutorial/create-user-1.png)
5. In **User** text box, enter the username of user like b.simon@contoso.com and click **Continue** button.
- ![Screenshot shows the Add Member dialog box where you can enter a user.](./media/wdesk-tutorial/createuser3.png)
+ ![Screenshot shows the Add Member dialog box where you can enter a user.](./media/wdesk-tutorial/create-user-3.png)
6. Enter the details as shown below:
- ![Screenshot shows the Add Member dialog box where you can add Basic Information for a user.](./media/wdesk-tutorial/createuser4.png)
+ ![Screenshot shows the Add Member dialog box where you can add Basic Information for a user.](./media/wdesk-tutorial/create-user-4.png)
a. In **E-mail** text box, enter the email of user like b.simon@contoso.com.
To enable Azure AD users to sign in to Wdesk, they must be provisioned into Wdes
7. Click **Save Member** button.
- ![Screenshot shows the Send welcome email with the Save Member button.](./media/wdesk-tutorial/createuser5.png)
+ ![Screenshot shows the Send welcome email with the Save Member button.](./media/wdesk-tutorial/create-user-5.png)
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
-### Test SSO
+#### SP initiated:
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Click on **Test this application** in the Azure portal. This will redirect you to the Wdesk Sign-on URL, where you can initiate the login flow.
-When you click the Wdesk tile in the Access Panel, you should be automatically signed in to the Wdesk for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to Wdesk Sign-on URL directly and initiate the login flow from there.
-## Additional Resources
+#### IDP initiated:
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Wdesk instance for which you set up SSO.
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in either mode. When you click the Wdesk tile in My Apps: if the app is configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you're automatically signed in to the Wdesk instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+Once you configure Wdesk, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory My Staff Team Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-staff-team-manager.md
Title: Manage passwords and phone numbers with My Staff (preview) - Azure AD | Microsoft Docs
+ Title: Manage passwords and phone numbers with My Staff - Azure AD | Microsoft Docs
description: Manage passwords and phone numbers for your users with My Staff documentationcenter: ''
Previously updated : 04/14/2020 Last updated : 03/17/2021
-# Delegate user management with My Staff (preview)
+# Delegate user management with My Staff
Your organization can use **My Staff** to delegate user management tasks to figures of authority, such as a store manager or team leader, to help their staff members access the applications that they need. If your team member can't access an application because they forget a password, productivity is lost. This also drives up support costs and causes a bottleneck in your administrative processes. With My Staff, a team member who can't access their account can regain access in just a couple of clicks, with no administrator help required.
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/availability-zones.md
description: Learn how to create a cluster that distributes nodes across availab
Previously updated : 09/04/2020 Last updated : 03/16/2021
You need the Azure CLI version 2.0.76 or later installed and configured. Run `a
AKS clusters can currently be created using availability zones in the following regions: * Australia East
+* Brazil South
* Canada Central * Central US * East US
Name: aks-nodepool1-28993262-vmss000004
We now have two additional nodes in zones 1 and 2. You can deploy an application consisting of three replicas. We will use NGINX as an example: ```console
-kubectl create deployment nginx --image=nginx
+kubectl create deployment nginx --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
kubectl scale deployment nginx --replicas=3 ```
aks Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-ad-rbac.md
description: Learn how to use Azure Active Directory group membership to restrict access to cluster resources using Kubernetes role-based access control (Kubernetes RBAC) in Azure Kubernetes Service (AKS) Previously updated : 07/21/2020 Last updated : 03/17/2021
az role assignment create \
With two example groups created in Azure AD for our application developers and SREs, now let's create two example users. To test the Kubernetes RBAC integration at the end of the article, you sign in to the AKS cluster with these accounts.
+Set the user principal name (UPN) and password for the application developers. The following command prompts you for the UPN and sets it to *AAD_DEV_UPN* for use in a later command (remember that the commands in this article are entered into a BASH shell). The UPN must include the verified domain name of your tenant, for example `aksdev@contoso.com`.
+
+```azurecli-interactive
+echo "Please enter the UPN for application developers: " && read AAD_DEV_UPN
+```
+
+The following command prompts you for the password and sets it to *AAD_DEV_PW* for use in a later command.
+
+```azurecli-interactive
+echo "Please enter the secure password for application developers: " && read AAD_DEV_PW
+```
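Because `read` echoes input by default, a password typed at these prompts stays visible on screen. A minimal alternative sketch, assuming a Bash shell as above, that uses `read -s` to suppress echo and flags an empty entry:

```shell
# Prompt for the password without echoing it (-s), then confirm a value was entered.
echo "Please enter the secure password for application developers: "
read -s AAD_DEV_PW

if [ -z "$AAD_DEV_PW" ]; then
  echo "No password entered." >&2
else
  echo "Password captured (${#AAD_DEV_PW} characters)."
fi
```

`read -s` keeps the credential out of the terminal scrollback; the variable is then passed as `--password $AAD_DEV_PW` exactly as in the commands that follow.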
+ Create the first user account in Azure AD using the [az ad user create][az-ad-user-create] command.
-The following example creates a user with the display name *AKS Dev* and the user principal name (UPN) of `aksdev@contoso.com`. Update the UPN to include a verified domain for your Azure AD tenant (replace *contoso.com* with your own domain), and provide your own secure `--password` credential:
+The following example creates a user with the display name *AKS Dev* and the UPN and secure password using the values in *AAD_DEV_UPN* and *AAD_DEV_PW*:
```azurecli-interactive AKSDEV_ID=$(az ad user create \ --display-name "AKS Dev" \
- --user-principal-name aksdev@contoso.com \
- --password P@ssw0rd1 \
+ --user-principal-name $AAD_DEV_UPN \
+ --password $AAD_DEV_PW \
--query objectId -o tsv) ```
Now add the user to the *appdev* group created in the previous section using the
az ad group member add --group appdev --member-id $AKSDEV_ID ```
-Create a second user account. The following example creates a user with the display name *AKS SRE* and the user principal name (UPN) of `akssre@contoso.com`. Again, update the UPN to include a verified domain for your Azure AD tenant (replace *contoso.com* with your own domain), and provide your own secure `--password` credential:
+Set the UPN and password for SREs. The following command prompts you for the UPN and sets it to *AAD_SRE_UPN* for use in a later command (remember that the commands in this article are entered into a BASH shell). The UPN must include the verified domain name of your tenant, for example `akssre@contoso.com`.
+
+```azurecli-interactive
+echo "Please enter the UPN for SREs: " && read AAD_SRE_UPN
+```
+
+The following command prompts you for the password and sets it to *AAD_SRE_PW* for use in a later command.
+
+```azurecli-interactive
+echo "Please enter the secure password for SREs: " && read AAD_SRE_PW
+```
+
+Create a second user account. The following example creates a user with the display name *AKS SRE* and the UPN and secure password using the values in *AAD_SRE_UPN* and *AAD_SRE_PW*:
```azurecli-interactive # Create a user for the SRE role AKSSRE_ID=$(az ad user create \ --display-name "AKS SRE" \
- --user-principal-name akssre@contoso.com \
- --password P@ssw0rd1 \
+ --user-principal-name $AAD_SRE_UPN \
+ --password $AAD_SRE_PW \
--query objectId -o tsv) # Add the user to the opssre Azure AD group
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --ov
Schedule a basic NGINX pod using the [kubectl run][kubectl-run] command in the *dev* namespace: ```console
-kubectl run nginx-dev --image=nginx --namespace dev
+kubectl run nginx-dev --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace dev
``` At the sign-in prompt, enter the credentials for your own `appdev@contoso.com` account created at the start of the article. Once you are successfully signed in, the account token is cached for future `kubectl` commands. The NGINX pod is successfully scheduled, as shown in the following example output: ```console
-$ kubectl run nginx-dev --image=nginx --namespace dev
+$ kubectl run nginx-dev --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace dev
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code B24ZD6FP8 to authenticate.
Error from server (Forbidden): pods is forbidden: User "aksdev@contoso.com" cann
In the same way, try to schedule a pod in a different namespace, such as the *sre* namespace. The user's group membership does not align with a Kubernetes Role and RoleBinding to grant these permissions, as shown in the following example output: ```console
-$ kubectl run nginx-dev --image=nginx --namespace sre
+$ kubectl run nginx-dev --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace sre
Error from server (Forbidden): pods is forbidden: User "aksdev@contoso.com" cannot create resource "pods" in API group "" in the namespace "sre" ```
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --ov
Try to schedule and view pods in the assigned *sre* namespace. When prompted, sign in with your own `opssre@contoso.com` credentials created at the start of the article: ```console
-kubectl run nginx-sre --image=nginx --namespace sre
+kubectl run nginx-sre --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace sre
kubectl get pods --namespace sre ``` As shown in the following example output, you can successfully create and view the pods: ```console
-$ kubectl run nginx-sre --image=nginx --namespace sre
+$ kubectl run nginx-sre --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace sre
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code BM4RHP3FD to authenticate.
Now, try to view or schedule pods outside of assigned SRE namespace:
```console kubectl get pods --all-namespaces
-kubectl run nginx-sre --image=nginx --namespace dev
+kubectl run nginx-sre --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace dev
``` These `kubectl` commands fail, as shown in the following example output. The user's group membership and Kubernetes Role and RoleBindings don't grant permissions to create or manage resources in other namespaces:
These `kubectl` commands fail, as shown in the following example output. The use
$ kubectl get pods --all-namespaces Error from server (Forbidden): pods is forbidden: User "akssre@contoso.com" cannot list pods at the cluster scope
-$ kubectl run nginx-sre --image=nginx --namespace dev
+$ kubectl run nginx-sre --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --namespace dev
Error from server (Forbidden): pods is forbidden: User "akssre@contoso.com" cannot create pods in the namespace "dev" ```
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-kubenet.md
az ad sp create-for-rbac --skip-assignment
The following example output shows the application ID and password for your service principal. These values are used in additional steps to assign a role to the service principal and then create the AKS cluster:
-```azurecli
-az ad sp create-for-rbac --skip-assignment
-```
- ```output { "appId": "476b3636-5eda-4c0e-9751-849e70b5cfad",
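Rather than copying the `appId` and `password` fields by hand, they can be captured into variables. A minimal sketch that parses a stand-in JSON string with `python3` (the values here are placeholders, not real credentials):

```shell
# Placeholder JSON standing in for the real `az ad sp create-for-rbac` output.
SP_OUTPUT='{"appId": "476b3636-5eda-4c0e-9751-849e70b5cfad", "password": "<placeholder>"}'

# Extract individual fields with python3's json module (jq would also work if installed).
SP_ID=$(printf '%s' "$SP_OUTPUT" | python3 -c 'import json,sys; print(json.load(sys.stdin)["appId"])')
SP_PASSWORD=$(printf '%s' "$SP_OUTPUT" | python3 -c 'import json,sys; print(json.load(sys.stdin)["password"])')

echo "Service principal ID: $SP_ID"
```

Alternatively, the Azure CLI's global `--query` argument (for example, `--query appId -o tsv`) returns a single field directly, with no parsing step.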
aks Egress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/egress.md
description: Learn how to create and use a static public IP address for egress traffic in an Azure Kubernetes Service (AKS) cluster Previously updated : 03/04/2019 Last updated : 03/16/2021 #Customer intent: As an cluster operator, I want to define the egress IP address to control the flow of traffic from a known, defined address.
Last updated 03/04/2019
# Use a static public IP address for egress traffic with a *Basic* SKU load balancer in Azure Kubernetes Service (AKS)
-By default, the egress IP address from an Azure Kubernetes Service (AKS) cluster is randomly assigned. This configuration is not ideal when you need to identify an IP address for access to external services, for example. Instead, you may need to assign a static IP address to be added to an allow list for service access.
+By default, the egress IP address from an Azure Kubernetes Service (AKS) cluster is randomly assigned. This configuration is not ideal when you need to identify an IP address for access to external services, for example. Instead, you may need to assign a static IP address to be added to an allowlist for service access.
This article shows you how to create and use a static public IP address for use with egress traffic in an AKS cluster.
To verify that the static public IP address is being used, you can use DNS look-
Start and attach to a basic *Debian* pod: ```console
-kubectl run -it --rm aks-ip --image=debian
+kubectl run -it --rm aks-ip --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
``` To access a web site from within the container, use `apt-get` to install `curl` into the container.
aks Ingress Internal Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-internal-ip.md
description: Learn how to install and configure an NGINX ingress controller for an internal, private network in an Azure Kubernetes Service (AKS) cluster. Previously updated : 08/17/2020 Last updated : 03/16/2021
ingress.extensions/hello-world-ingress created
To test the routes for the ingress controller, browse to the two applications with a web client. If needed, you can quickly test this internal-only functionality from a pod on the AKS cluster. Create a test pod and attach a terminal session to it: ```console
-kubectl run -it --rm aks-ingress-test --image=debian --namespace ingress-basic
+kubectl run -it --rm aks-ingress-test --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --namespace ingress-basic
``` Install `curl` in the pod using `apt-get`:
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-cluster-security.md
metadata:
spec: containers: - name: hello
- image: busybox
+ image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ] ```
Deploy the sample pod using the [kubectl apply][kubectl-apply] command:
kubectl apply -f aks-apparmor.yaml ```
-With the pod deployed, use the [kubectl exec][kubectl-exec] command to write to a file. The command can't be executed, as shown in the following example output:
+With the pod deployed, use `kubectl get pods` to verify that the *hello-apparmor* pod shows as *blocked*:
```
-$ kubectl exec hello-apparmor touch /tmp/test
+$ kubectl get pods
-touch: /tmp/test: Permission denied
-command terminated with exit code 1
+NAME READY STATUS RESTARTS AGE
+aks-ssh 1/1 Running 0 4m2s
+hello-apparmor 0/1 Blocked 0 50s
``` For more information about AppArmor, see [AppArmor profiles in Kubernetes][k8s-apparmor].
metadata:
spec: containers: - name: chmod
- image: busybox
+ image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
command: - "chmod" args:
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/planned-maintenance.md
You can also use a JSON file to create a maintenance window instead of using parame
"notAllowedTime": [ { "start": "2021-05-26T03:00:00Z",
- "end": "2021-05-30T012:00:00Z"
+ "end": "2021-05-30T12:00:00Z"
} ] }
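Timestamps in the maintenance-window file must be valid UTC ISO 8601 values, so a quick local check can catch malformed entries before the file is submitted. A minimal sketch assuming GNU `date` (as in Azure Cloud Shell):

```shell
# Check that each maintenance-window timestamp parses as UTC ISO 8601.
for ts in "2021-05-26T03:00:00Z" "2021-05-30T12:00:00Z"; do
  if date -u -d "$ts" +"%Y-%m-%dT%H:%M:%SZ" >/dev/null 2>&1; then
    echo "OK: $ts"
  else
    echo "Invalid timestamp: $ts" >&2
  fi
done
```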
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
aks Ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ssh.md
To create an SSH connection to an AKS node, you run a helper pod in your AKS clu
1. Run a `debian` container image and attach a terminal session to it. This container can be used to create an SSH session with any node in the AKS cluster: ```console
- kubectl run -it --rm aks-ssh --image=debian
+ kubectl run -it --rm aks-ssh --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
``` > [!TIP] > If you use Windows Server nodes, add a node selector to the command to schedule the Debian container on a Linux node: > > ```console
- > kubectl run -it --rm aks-ssh --image=debian --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"beta.kubernetes.io/os":"linux"}}}'
+ > kubectl run -it --rm aks-ssh --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"beta.kubernetes.io/os":"linux"}}}'
> ``` 1. Once the terminal session is connected to the container, install an SSH client using `apt-get`:
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/troubleshooting.md
spec:
```yaml initContainers: - name: volume-mount
- image: busybox
+ image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
command: ["sh", "-c", "chown -R 100:100 /data"] volumeMounts: - name: <your data volume>
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-network-policies.md
description: Learn how to secure traffic that flows in and out of pods by using Kubernetes network policies in Azure Kubernetes Service (AKS) Previously updated : 05/06/2019 Last updated : 03/16/2021
Calico networking policies with Windows nodes is currently in preview.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-```azurecli
-PASSWORD_WIN="P@ssw0rd1234"
+Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following commands prompt you for a username and set it to *WINDOWS_USERNAME* for use in a later command (remember that the commands in this article are entered into a BASH shell).
+```azurecli-interactive
+echo "Please enter the username to use as administrator credentials for Windows Server containers on your cluster: " && read WINDOWS_USERNAME
+```
+
+```azurecli
az aks create \ --resource-group $RESOURCE_GROUP_NAME \ --name $CLUSTER_NAME \
az aks create \
--vnet-subnet-id $SUBNET_ID \ --service-principal $SP_ID \ --client-secret $SP_PASSWORD \
- --windows-admin-password $PASSWORD_WIN \
- --windows-admin-username azureuser \
+ --windows-admin-username $WINDOWS_USERNAME \
--vm-set-type VirtualMachineScaleSets \ --kubernetes-version 1.20.2 \ --network-plugin azure \
az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAM
## Deny all inbound traffic to a pod
-Before you define rules to allow specific network traffic, first create a network policy to deny all traffic. This policy gives you a starting point to begin to create an allow list for only the desired traffic. You can also clearly see that traffic is dropped when the network policy is applied.
+Before you define rules to allow specific network traffic, first create a network policy to deny all traffic. This policy gives you a starting point to begin to create an allowlist for only the desired traffic. You can also clearly see that traffic is dropped when the network policy is applied.
For the sample application environment and traffic rules, let's first create a namespace called *development* to run the example pods:
kubectl label namespace/development purpose=development
Create an example back-end pod that runs NGINX. This back-end pod can be used to simulate a sample back-end web-based application. Create this pod in the *development* namespace, and open port *80* to serve web traffic. Label the pod with *app=webapp,role=backend* so that we can target it with a network policy in the next section: ```console
-kubectl run backend --image=nginx --labels app=webapp,role=backend --namespace development --expose --port 80
+kubectl run backend --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --labels app=webapp,role=backend --namespace development --expose --port 80
``` Create another pod and attach a terminal session to test that you can successfully reach the default NGINX webpage: ```console
-kubectl run --rm -it --image=alpine network-policy --namespace development
+kubectl run --rm -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 network-policy --namespace development
``` At the shell prompt, use `wget` to confirm that you can access the default NGINX webpage:
kubectl apply -f backend-policy.yaml
Let's see if you can use the NGINX webpage on the back-end pod again. Create another test pod and attach a terminal session: ```console
-kubectl run --rm -it --image=alpine network-policy --namespace development
+kubectl run --rm -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 network-policy --namespace development
``` At the shell prompt, use `wget` to see if you can access the default NGINX webpage. This time, set a timeout value to *2* seconds. The network policy now blocks all inbound traffic, so the page can't be loaded, as shown in the following example:
kubectl apply -f backend-policy.yaml
Schedule a pod that is labeled as *app=webapp,role=frontend* and attach a terminal session: ```console
-kubectl run --rm -it frontend --image=alpine --labels app=webapp,role=frontend --namespace development
+kubectl run --rm -it frontend --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --labels app=webapp,role=frontend --namespace development
``` At the shell prompt, use `wget` to see if you can access the default NGINX webpage:
exit
The network policy allows traffic from pods labeled *app: webapp,role: frontend*, but should deny all other traffic. Let's test to see whether another pod without those labels can access the back-end NGINX pod. Create another test pod and attach a terminal session: ```console
-kubectl run --rm -it --image=alpine network-policy --namespace development
+kubectl run --rm -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 network-policy --namespace development
``` At the shell prompt, use `wget` to see if you can access the default NGINX webpage. The network policy blocks the inbound traffic, so the page can't be loaded, as shown in the following example:
kubectl label namespace/production purpose=production
Schedule a test pod in the *production* namespace that is labeled as *app=webapp,role=frontend*. Attach a terminal session: ```console
-kubectl run --rm -it frontend --image=alpine --labels app=webapp,role=frontend --namespace production
+kubectl run --rm -it frontend --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --labels app=webapp,role=frontend --namespace production
``` At the shell prompt, use `wget` to confirm that you can access the default NGINX webpage:
kubectl apply -f backend-policy.yaml
Schedule another pod in the *production* namespace and attach a terminal session: ```console
-kubectl run --rm -it frontend --image=alpine --labels app=webapp,role=frontend --namespace production
+kubectl run --rm -it frontend --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --labels app=webapp,role=frontend --namespace production
``` At the shell prompt, use `wget` to see that the network policy now denies traffic:
exit
With traffic denied from the *production* namespace, schedule a test pod back in the *development* namespace and attach a terminal session: ```console
-kubectl run --rm -it frontend --image=alpine --labels app=webapp,role=frontend --namespace development
+kubectl run --rm -it frontend --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --labels app=webapp,role=frontend --namespace development
``` At the shell prompt, use `wget` to see that the network policy allows the traffic:
aks Virtual Nodes Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/virtual-nodes-cli.md
description: Learn how to use the Azure CLI to create an Azure Kubernetes Services (AKS) cluster that uses virtual nodes to run pods. Previously updated : 05/06/2019 Last updated : 03/16/2021
The pod is assigned an internal IP address from the Azure virtual network subnet
To test the pod running on the virtual node, browse to the demo application with a web client. As the pod is assigned an internal IP address, you can quickly test this connectivity from another pod on the AKS cluster. Create a test pod and attach a terminal session to it: ```console
-kubectl run -it --rm testvk --image=debian
+kubectl run -it --rm testvk --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
``` Install `curl` in the pod using `apt-get`:
aks Virtual Nodes Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/virtual-nodes-portal.md
Title: Create virtual nodes using the portal in Azure Kubernetes Services (AKS)
description: Learn how to use the Azure portal to create an Azure Kubernetes Services (AKS) cluster that uses virtual nodes to run pods. Previously updated : 05/06/2019 Last updated : 03/15/2021
In the top left-hand corner of the Azure portal, select **Create a resource** >
On the **Basics** page, configure the following options: - *PROJECT DETAILS*: Select an Azure subscription, then select or create an Azure resource group, such as *myResourceGroup*. Enter a **Kubernetes cluster name**, such as *myAKSCluster*.-- *CLUSTER DETAILS*: Select a region, Kubernetes version, and DNS name prefix for the AKS cluster.
+- *CLUSTER DETAILS*: Select a region and Kubernetes version for the AKS cluster.
- *PRIMARY NODE POOL*: Select a VM size for the AKS nodes. The VM size **cannot** be changed once an AKS cluster has been deployed. - Select the number of nodes to deploy into the cluster. For this article, set **Node count** to *1*. Node count **can** be adjusted after the cluster has been deployed.
-Click **Next: Scale**.
+Click **Next: Node Pools**.
-On the **Scale** page, select *Enabled* under **Virtual nodes**.
+On the **Node Pools** page, select *Enable virtual nodes*.
-![Create AKS cluster and enable the virtual nodes](media/virtual-nodes-portal/enable-virtual-nodes.png)
By default, a cluster identity is created. This cluster identity is used for cluster communication and integration with other Azure services. By default, this cluster identity is a managed identity. For more information, see [Use managed identities](use-managed-identity.md). You can also use a service principal as your cluster identity.
The pod is assigned an internal IP address from the Azure virtual network subnet
To test the pod running on the virtual node, browse to the demo application with a web client. As the pod is assigned an internal IP address, you can quickly test this connectivity from another pod on the AKS cluster. Create a test pod and attach a terminal session to it: ```console
-kubectl run -it --rm virtual-node-test --image=debian
+kubectl run -it --rm virtual-node-test --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
``` Install `curl` in the pod using `apt-get`:
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-python.md
Title: Configure Linux Python apps description: Learn how to configure the Python container in which web apps are run, using both the Azure portal and the Azure CLI. Previously updated : 02/01/2021 Last updated : 03/16/2021
The following sections provide additional guidance for specific issues.
- [App doesn't appear - "service unavailable" message](#service-unavailable) - [Could not find setup.py or requirements.txt](#could-not-find-setuppy-or-requirementstxt) - [ModuleNotFoundError on startup](#modulenotfounderror-when-app-starts)
+- [Database is locked](#database-is-locked)
- [Passwords don't appear in SSH session when typed](#other-issues) - [Commands in the SSH session appear to be cut off](#other-issues) - [Static assets don't appear in a Django app](#other-issues)
The following sections provide additional guidance for specific issues.
If you see an error like `ModuleNotFoundError: No module named 'example'`, this means that Python could not find one or more of your modules when the application started. This most often occurs if you deploy your virtual environment with your code. Virtual environments are not portable, so a virtual environment should not be deployed with your application code. Instead, let Oryx create a virtual environment and install your packages on the web app by creating an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, and setting it to `1`. This will force Oryx to install your packages whenever you deploy to App Service. For more information, please see [this article on virtual environment portability](https://azure.github.io/AppService/2020/12/11/cicd-for-python-apps.html).
+### Database is locked
+
+When attempting to run database migrations with a Django app, you may see "sqlite3.OperationalError: database is locked." The error indicates that your application is using a SQLite database, which Django is configured to use by default, rather than a cloud database such as PostgreSQL for Azure.
+
+Check the `DATABASES` variable in the app's *settings.py* file to ensure that your app is using a cloud database instead of SQLite.
+
+If you're encountering this error with the sample in [Tutorial: Deploy a Django web app with PostgreSQL](tutorial-python-postgresql-app.md), check that you completed the steps in [Configure environment variables to connect the database](tutorial-python-postgresql-app.md#42-configure-environment-variables-to-connect-the-database).
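One quick way to confirm which engine is in use is to search *settings.py* for the SQLite backend string. A minimal sketch (the settings fragment below is a hypothetical example written to a temporary file, not the tutorial's actual file):

```shell
# Hypothetical settings.py fragment, written out only so the check below has input.
cat > /tmp/settings_example.py <<'EOF'
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    }
}
EOF

# A match means the app is still on Django's default SQLite backend.
if grep -q "django.db.backends.sqlite3" /tmp/settings_example.py; then
  echo "Still configured for SQLite; point DATABASES at your cloud database instead."
fi
```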
+ #### Other issues - **Passwords don't appear in the SSH session when typed**: For security reasons, the SSH session keeps your password hidden as you type. The characters are being recorded, however, so type your password as usual and press **Enter** when done.
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-instances-health-check.md
Large enterprise development teams often need to adhere to security requirements
After providing your application's Health check path, you can monitor the health of your site using Azure Monitor. From the **Health check** blade in the Portal, click the **Metrics** in the top toolbar. This will open a new blade where you can see the site's historical health status and create a new alert rule. For more information on monitoring your sites, [see the guide on Azure Monitor](web-sites-monitor.md).
+## Limitations
+
+Health check should not be enabled on Premium Functions sites. Due to the rapid scaling of Premium Functions, the health check requests can cause unnecessary fluctuations in HTTP traffic. Premium Functions have their own internal health probes that are used to inform scaling decisions.
+ ## Next steps - [Create an Activity Log Alert to monitor all Autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/monitor-autoscale-alert) - [Create an Activity Log Alert to monitor all failed Autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/monitor-autoscale-failed-alert) [1]: ./media/app-service-monitor-instances-health-check/health-check-success-diagram.png [2]: ./media/app-service-monitor-instances-health-check/health-check-failure-diagram.png
-[3]: ./media/app-service-monitor-instances-health-check/azure-portal-navigation-health-check.png
+[3]: ./media/app-service-monitor-instances-health-check/azure-portal-navigation-health-check.png
app-service Overview Inbound Outbound Ips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-inbound-outbound-ips.md
## How IP addresses work in App Service
-An App Service app runs in an App Service plan, and App Service plans are deployed into one of the deployment units in the Azure infrastructure (internally called a webspace). Each deployment unit is assigned up to five virtual IP addresses, which includes one public inbound IP address and four outbound IP addresses. All App Service plans in the same deployment unit, and app instances that run in them, share the same set of virtual IP addresses. For an App Service Environment (an App Service plan in [Isolated tier](https://azure.microsoft.com/pricing/details/app-service/)), the App Service plan is the deployment unit itself, so the virtual IP addresses are dedicated to it as a result.
+An App Service app runs in an App Service plan, and App Service plans are deployed into one of the deployment units in the Azure infrastructure (internally called a webspace). Each deployment unit is assigned a set of virtual IP addresses, which includes one public inbound IP address and a set of [outbound IP addresses](#find-outbound-ips). All App Service plans in the same deployment unit, and app instances that run in them, share the same set of virtual IP addresses. For an App Service Environment (an App Service plan in [Isolated tier](https://azure.microsoft.com/pricing/details/app-service/)), the App Service plan is the deployment unit itself, so the virtual IP addresses are dedicated to it as a result.
Because you're not allowed to move an App Service plan between deployment units, the virtual IP addresses assigned to your app usually remain the same, but there are exceptions.
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python.md
You can also inspect the log files from the browser at `https://<app-name>.scm.a
To stop log streaming at any time, press **Ctrl**+**C** in the terminal.
-Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting); otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
## Manage the Azure app
app-service Resources Kudu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/resources-kudu.md
+
+ Title: Kudu service overview
+description: Learn about the engine that powers continuous deployment in App Service and its features.
Last updated : 03/17/2021++
+# Kudu service overview
+
+Kudu is the engine behind a number of features in [Azure App Service](overview.md) related to source-control-based deployment and other deployment methods, like Dropbox and OneDrive sync.
+
+## Access Kudu for your app
+Anytime you create an app, App Service creates a companion app for it that's secured by HTTPS. This Kudu app is accessible at:
+
+- App not in Isolated tier: `https://<app-name>.scm.azurewebsites.net`
+- App in Isolated tier (App Service Environment): `https://<app-name>.scm.<ase-name>.p.azurewebsites.net`
+
+For more information, see [Accessing the kudu service](https://github.com/projectkudu/kudu/wiki/Accessing-the-kudu-service).
+
+## Kudu features
+
+Kudu gives you helpful information about your App Service app, such as:
+
+- App settings
+- Connection strings
+- Environment variables
+- Server variables
+- HTTP headers
+
+It also provides other features, such as:
+
+- Run commands in the [Kudu console](https://github.com/projectkudu/kudu/wiki/Kudu-console).
+- Download IIS diagnostic dumps or Docker logs.
+- Manage IIS processes and site extensions.
+- Add deployment webhooks for Windows apps.
+- Allow ZIP deployment through a UI at `/ZipDeployUI`.
+- Generate [custom deployment scripts](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
+- Allow access with the [REST API](https://github.com/projectkudu/kudu/wiki/REST-API).
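As a rough sketch of how those endpoints are addressed (assuming the host-name patterns listed above; `api/zipdeploy` is the REST endpoint behind ZIP deployment, and the app and ASE names below are placeholders):

```python
def kudu_url(app_name: str, path: str, ase_name: str = None) -> str:
    """Build the Kudu (SCM) URL for an App Service app.

    Apps in an App Service Environment use the
    <app>.scm.<ase>.p.azurewebsites.net host; all others use
    <app>.scm.azurewebsites.net, as listed above.
    """
    host = (f"{app_name}.scm.{ase_name}.p.azurewebsites.net"
            if ase_name else f"{app_name}.scm.azurewebsites.net")
    return f"https://{host}/{path.lstrip('/')}"

# Hypothetical app and ASE names:
print(kudu_url("contoso-app", "api/zipdeploy"))
print(kudu_url("contoso-app", "api/zipdeploy", ase_name="myase"))
```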
+
+## More resources
+
+Kudu is an [open source project](https://github.com/projectkudu/kudu), and has its documentation at [Kudu Wiki](https://github.com/projectkudu/kudu/wiki).
+
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-overview.md
As mentioned previously, Azure classifies services into three categories: founda
> | Azure Data Lake Storage Gen2 | Azure Active Directory Domain Services |
> | Azure ExpressRoute | Azure Bastion |
> | Azure Public IP | Azure Cache for Redis |
-> | Azure SQL Database | Azure Cognitive Search |
-> | Azure SQL : Managed Instance | Azure Cognitive Services |
-> | Disk Storage | Azure Cognitive
-> | Event Hubs | Azure Cognitive
-> | Key Vault | Azure Cognitive
-> | Load balancer | Azure Cognitive
-> | Service Bus | Azure Cognitive
-> | Service Fabric | Azure Cognitive
-> | Storage: Hot/Cool Blob Storage Tiers | Azure Cognitive
-> | Storage: Managed Disks | Azure Cognitive
-> | Virtual Machine Scale Sets | Azure Data Explorer |
-> | Virtual Machines | Azure Data Share |
-> | Virtual Machines: Azure Dedicated Host | Azure Database for MySQL |
-> | Virtual Machines: Av2-Series | Azure Database for PostgreSQL |
-> | Virtual Machines: Bs-Series | Azure DDoS Protection |
-> | Virtual Machines: DSv2-Series | Azure Firewall |
-> | Virtual Machines: DSv3-Series | Azure Firewall Manager |
-> | Virtual Machines: Dv2-Series | Azure Functions |
-> | Virtual Machines: Dv3-Series | Azure IoT Hub |
-> | Virtual Machines: ESv3-Series | Azure Kubernetes Service (AKS) |
-> | Virtual Machines: Ev3-Series | Azure Machine Learning |
-> | Virtual Network | Azure Monitor: Application Insights |
-> | VPN Gateway | Azure Monitor: Log Analytics |
+> | Azure SQL Database: Business Critical & Premium Tiers | Azure Cognitive Search |
+> | Disk Storage | Azure Cognitive Services |
+> | Event Hubs | Azure Cognitive
+> | Key Vault | Azure Cognitive
+> | Load balancer | Azure Cognitive
+> | Service Bus | Azure Cognitive
+> | Service Fabric | Azure Cognitive
+> | Storage: Hot/Cool Blob Storage Tiers | Azure Cognitive
+> | Storage: Managed Disks | Azure Cognitive
+> | Virtual Machine Scale Sets | Azure Cognitive
+> | Virtual Machines | Azure Data Explorer |
+> | Virtual Machines: Azure Dedicated Host | Azure Data Share |
+> | Virtual Machines: Av2-Series | Azure Database for MySQL |
+> | Virtual Machines: Bs-Series | Azure Database for PostgreSQL |
+> | Virtual Machines: DSv2-Series | Azure DDoS Protection |
+> | Virtual Machines: DSv3-Series | Azure Firewall |
+> | Virtual Machines: Dv2-Series | Azure Firewall Manager |
+> | Virtual Machines: Dv3-Series | Azure Functions |
+> | Virtual Machines: ESv3-Series | Azure IoT Hub |
+> | Virtual Machines: Ev3-Series | Azure Kubernetes Service (AKS) |
+> | Virtual Network | Azure Machine Learning |
+> | VPN Gateway | Azure Monitor: Application Insights |
+> | | Azure Monitor: Log Analytics |
> | | Azure Private Link |
> | | Azure Red Hat OpenShift |
> | | Azure Site Recovery |
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021 #
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
For information on workarounds to known issues running .NET isolated process func
## Next steps + [Learn more about triggers and bindings](functions-triggers-bindings.md)
-+ [Learn more about best practices for Azure Functions](functions-best-practices.md)
++ [Learn more about best practices for Azure Functions](functions-best-practices.md)
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-timer.md
The timer trigger uses a storage lock to ensure that there is only one timer ins
Unlike the queue trigger, the timer trigger doesn't retry after a function fails. When a function fails, it isn't called again until the next time on the schedule.
+## Manually invoke a timer trigger
+
+The timer trigger for Azure Functions provides an HTTP webhook that can be invoked to manually trigger the function. This can be useful in the following scenarios:
+
+* Integration testing
+* Slot swaps as part of a smoke test or warmup activity
+* Initial deployment of a function to immediately populate a cache or lookup table in a database
+
+For details on how to manually invoke a timer-triggered function, see [Manually run a non HTTP-triggered function](./functions-manually-run-non-http.md).
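As a hedged sketch of what that manual invocation looks like (the admin endpoint and `x-functions-key` header follow the linked article; the app name, function name, and key below are placeholders), this builds the request without sending it:

```python
import json
import urllib.request

def build_admin_invoke_request(app_name: str, function_name: str, master_key: str):
    """Build (but don't send) the POST that manually triggers a non-HTTP
    function through the admin endpoint described in the linked article."""
    url = f"https://{app_name}.azurewebsites.net/admin/functions/{function_name}"
    return urllib.request.Request(
        url,
        data=json.dumps({}).encode(),       # the endpoint expects a JSON body
        headers={
            "Content-Type": "application/json",
            "x-functions-key": master_key,  # the function app's master key
        },
        method="POST",
    )

req = build_admin_invoke_request("contoso-func", "MyTimerFunction", "<master-key>")
print(req.method, req.full_url)
```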
+
## Troubleshooting
For information about what to do when the timer trigger doesn't work as expected, see [Investigating and reporting issues with timer triggered functions not firing](https://github.com/Azure/azure-functions-host/wiki/Investigating-and-reporting-issues-with-timer-triggered-functions-not-firing).
azure-functions Functions Recover Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-recover-storage-account.md
Your function app must be able to access the storage account. Common issues that
* The function app is deployed to your App Service Environment (ASE) without the correct network rules to allow traffic to and from the storage account. * The storage account firewall is enabled and not configured to allow traffic to and from functions. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
+* Verify that the `allowSharedKeyAccess` setting is set to `true`, which is its default value. For more information, see [Prevent Shared Key authorization for an Azure Storage account](https://docs.microsoft.com/azure/storage/common/shared-key-authorization-prevent?tabs=portal#verify-that-shared-key-access-is-not-allowed).
## Daily execution quota is full
azure-government Documentation Government Stig Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-stig-linux-vm.md
Previously updated : 03/11/2021 Last updated : 03/16/2021 # Deploy STIG-compliant Linux Virtual Machines (Preview)
-Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal. For questions about this offering, contact [Azure STIG support](mailto:azurestigsupport@microsoft.com).
+Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal.
This quickstart shows how to deploy a STIG-compliant Linux virtual machine (Preview) on Azure or Azure Government using the corresponding portal.
azure-government Documentation Government Stig Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-stig-windows-vm.md
Previously updated : 03/11/2021 Last updated : 03/16/2021 # Deploy STIG-compliant Windows Virtual Machines (Preview)
-Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal. For questions about this offering, contact [Azure STIG support](mailto:azurestigsupport@microsoft.com).
+Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal.
This quickstart shows how to deploy a STIG-compliant Windows virtual machine (Preview) on Azure or Azure Government using the corresponding portal.
azure-monitor Azure Monitor Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-install.md
You can install the Azure Monitor agent on Azure virtual machines and on Azure A
Use the following PowerShell commands to install the Azure Monitor agent on Azure virtual machines. # [Windows](#tab/PowerShellWindows) ```powershell
-Set-AzVMExtension -Name AMAWindows -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location>
+Set-AzVMExtension -Name AMAWindows -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0
``` # [Linux](#tab/PowerShellLinux) ```powershell
-Set-AzVMExtension -Name AMALinux -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location>
+Set-AzVMExtension -Name AMALinux -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0
```
az connectedmachine machine-extension create --name AzureMonitorLinuxAgent --pub
## Next steps -- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
description: Overview of the Azure Monitor agent (AMA), which collects monitorin
Previously updated : 08/10/2020- Last updated : 03/16/2021+ # Azure Monitor agent overview (preview)
The following limitations apply during public preview of the Azure Monitor Agent
- *.control.monitor.azure.com
+## Supported regions
+Azure Monitor agent currently supports resources in the following regions:
+
+- East Asia
+- Southeast Asia
+- Australia Central
+- Australia East
+- Australia Southeast
+- Canada Central
+- North Europe
+- West Europe
+- France Central
+- Germany West Central
+- Central India
+- Japan East
+- Korea Central
+- South Africa North
+- Switzerland North
+- UK South
+- UK West
+- Central US
+- East US
+- East US 2
+- North Central US
+- South Central US
+- West US
+- West US 2
+- West Central US
+
## Coexistence with other agents
The Azure Monitor agent can coexist with the existing agents so that you can continue to use their existing functionality during evaluation or migration. This is particularly important because of the limitations in public preview in supporting existing solutions. Be careful, though, about collecting duplicate data, since it can skew query results and result in additional charges for data ingestion and retention.
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-troubleshoot.md
Azure Monitor alerts proactively notify you when important conditions are found
If you have a problem with an alert firing or not firing when expected, refer to the articles below. You can see "fired" alerts in the Azure portal. - [Troubleshooting Azure Monitor Metric Alerts in Microsoft Azure](alerts-troubleshoot-metric.md) -- [Troubleshooting Azure Monitor Log Alerts in Microsoft Azure](alerts-troubleshoot-metric.md)
+- [Troubleshooting Azure Monitor Log Alerts in Microsoft Azure](alerts-troubleshoot-log.md)
If the alert fires as intended according to the Azure portal but the proper notifications do not occur, use the information in the rest of this article to troubleshoot that problem.
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ilogger.md
Modify Program.cs and appsettings.json as follows:
} ```
-This code is required only when you use a standalone logging provider. For regular Application Insights monitoring, the instrumentation key is loaded automatically from the configuration path *ApplicationInsights: Instrumentationkey*. Appsettings.json should look like this:
+This code is required only when you use a standalone logging provider. For regular Application Insights monitoring, the instrumentation key is loaded automatically from the configuration path *ApplicationInsights: InstrumentationKey*. Appsettings.json should look like this:
```json { "ApplicationInsights": {
- "Instrumentationkey":"putrealikeyhere"
+ "InstrumentationKey":"putrealikeyhere"
} } ```
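The colon in *ApplicationInsights:InstrumentationKey* is the .NET configuration convention for walking nested sections. A small illustrative Python sketch (not the actual .NET implementation, which is also case-insensitive):

```python
def config_value(settings: dict, path: str):
    """Resolve a colon-delimited configuration path (e.g.
    "ApplicationInsights:InstrumentationKey") against nested JSON settings.
    Unlike real .NET configuration, this sketch is case-sensitive."""
    node = settings
    for part in path.split(":"):
        node = node[part]
    return node

appsettings = {"ApplicationInsights": {"InstrumentationKey": "putrealikeyhere"}}
print(config_value(appsettings, "ApplicationInsights:InstrumentationKey"))
```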
azure-monitor Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/faq.md
If you have configured Azure Monitor with a Log Analytics workspace using the *F
Under this condition, you will be prompted with the **Try Now** option when you open the VM and select **Insights** from the left-hand pane, even after it has been installed already on the VM. However, you are not prompted with options as would normally occur if this VM were not onboarded to VM insights.
+## SQL insights (preview)
+
+### What versions of SQL Server are supported?
+See [Supported versions](insights/sql-insights-overview.md#supported-versions) for the supported versions of SQL Server.
+
+### What SQL resource types are supported?
+
+- Azure SQL Database. Single database only, not databases in an Elastic Pool.
+- Azure SQL Managed Instance
+- Azure SQL virtual machines ([Windows](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md#get-started-with-sql-server-vms), [Linux](../azure-sql/virtual-machines/linux/sql-server-on-linux-vm-what-is-iaas-overview.md#create)) and Azure virtual machines that SQL Server is installed on.
+
+### What operating systems for the machine running SQL Server are supported?
+Any operating system that supports running a supported version of SQL Server.
+
+### What operating systems are supported for the remote monitoring server?
+
+Ubuntu 18.04 is currently the only operating system supported.
+
+### Where will the monitoring data be stored in Log Analytics?
+All of the monitoring data is stored in the **InsightsMetrics** table. The **Origin** column has the value *solutions.azm.ms/telegraf/SqlInsights*. The **Namespace** column has values that start with *sqlserver_*.
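The filter described above can be sketched in Python against hypothetical rows (only the **Origin** value and the *sqlserver_* prefix come from the text; the row shapes and metric names are made up):

```python
# Hypothetical rows shaped loosely like the InsightsMetrics table.
rows = [
    {"Origin": "solutions.azm.ms/telegraf/SqlInsights",
     "Namespace": "sqlserver_performance", "Name": "batch_requests"},
    {"Origin": "solutions.azm.ms/telegraf/SqlInsights",
     "Namespace": "sqlserver_waitstats", "Name": "wait_time_ms"},
    {"Origin": "some.other.solution",
     "Namespace": "container_cpu", "Name": "usage"},
]

# Keep only rows written by SQL insights.
sql_insights_rows = [
    r for r in rows
    if r["Origin"] == "solutions.azm.ms/telegraf/SqlInsights"
    and r["Namespace"].startswith("sqlserver_")
]
print(len(sql_insights_rows))
```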
+
+### How often is data collected?
+See [Data collected by SQL insights](insights/sql-insights-overview.md#data-collected-by-sql-insights) for details on how frequently different data is collected.
## Next steps
If your question isn't answered here, you can refer to the following forums for additional questions and answers.
azure-monitor Sql Insights Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-alerts.md
+
+ Title: Create alerts with SQL insights (preview)
+description: Create alerts with SQL insights in Azure Monitor
+++ Last updated : 03/12/2021++
+# Create alerts with SQL insights (preview)
+SQL insights includes a set of alert rule templates you can use to create [alert rules in Azure Monitor](../alerts/alerts-overview.md) for common SQL issues. The alert rules in SQL insights are log alert rules based on performance data stored in the *InsightsMetrics* table in Azure Monitor Logs.
+
+> [!NOTE]
+> If you have requests for more SQL insights alert rule templates, please send feedback using the link at the bottom of this page or using the SQL insights feedback link in the Azure portal.
+
+## Enable alert rules
+Use the following steps to enable the alerts in Azure Monitor from the Azure portal. The alert rules that are created will be scoped to all of the SQL resources monitored under the selected monitoring profile. When an alert rule is triggered, it will trigger on the specific SQL instance or database.
+
+> [!NOTE]
+> You can also create custom [log alert rules](../alerts/alerts-log.md) by running queries on the data sets in the *InsightsMetrics* table and then saving those queries as an alert rule.
+
+Select **SQL (preview)** from the **Insights** section of the Azure Monitor menu in the Azure portal. Click **Alerts**.
++
+The **Alerts** pane opens on the right side of the page. By default, it will display fired alerts for SQL resources in the selected monitoring profile based on the alert rules you've already created. Select **Alert templates**, which will display the list of available templates you can use to create an alert rule.
++
+On the **Create Alert rule** page, review the default settings for the rule and edit them as needed. You can also select an [action group](../alerts/action-groups.md) to create notifications and actions when the alert rule is triggered. Click **Enable alert rule** to create the alert rule once you've verified all of its properties.
+++
+To deploy the alert rule immediately, click **Deploy alert rule**. Click **View Template** if you want to view the rule template before actually deploying it.
++
+If you choose to view the templates, select **Deploy** from the template page to create the alert rule.
+++
+## Next steps
+
+Learn more about [alerts in Azure Monitor](../alerts/alerts-overview.md).
+
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
+
+ Title: Enable SQL insights
+description: Enable SQL insights in Azure Monitor
+++ Last updated : 03/15/2021++
+# Enable SQL insights (preview)
+This article describes how to enable [SQL insights](sql-insights-overview.md) to monitor your SQL deployments. Monitoring is performed from an Azure virtual machine that makes a connection to your SQL deployments and uses Dynamic Management Views (DMVs) to gather monitoring data. You can control what datasets are collected and the frequency of collection using a monitoring profile.
+
+## Create Log Analytics workspace
+SQL insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
+
+## Create monitoring user
+You need a user on the SQL deployments that you want to monitor. Follow the procedures below for different types of SQL deployments.
+
+### Azure SQL database
+Open Azure SQL Database with [SQL Server Management Studio](../../azure-sql/database/connect-query-ssms.md) or [Query Editor (preview)](../../azure-sql/database/connect-query-portal.md) in the Azure portal.
+
+Run the following script to create a user with the required permissions. Replace *user* with a username and *mystrongpassword* with a password.
+
+```sql
+CREATE USER [user] WITH PASSWORD = N'mystrongpassword';
+GO
+GRANT VIEW DATABASE STATE TO [user];
+GO
+```
++
+Verify the user was created.
++
+### Azure SQL Managed Instance
+Log into your Azure SQL Managed Instance and use [SSMS](../../azure-sql/database/connect-query-ssms.md) or a similar tool to run the following script to create the monitoring user with the permissions needed. Replace *user* with a username and *mystrongpassword* with a password.
+
+
+```sql
+USE master;
+GO
+CREATE LOGIN [user] WITH PASSWORD = N'mystrongpassword';
+GO
+GRANT VIEW SERVER STATE TO [user];
+GO
+GRANT VIEW ANY DEFINITION TO [user];
+GO
+```
+
+### SQL Server
+Log into your Azure virtual machine running SQL Server and use [SQL Server Management Studio](../../azure-sql/database/connect-query-ssms.md) or a similar tool to run the following script to create the monitoring user with the permissions needed. Replace *user* with a username and *mystrongpassword* with a password.
+
+
+```sql
+USE master;
+GO
+CREATE LOGIN [user] WITH PASSWORD = N'mystrongpassword';
+GO
+GRANT VIEW SERVER STATE TO [user];
+GO
+GRANT VIEW ANY DEFINITION TO [user];
+GO
+```
+
+## Create Azure Virtual Machine
+You need to create one or more Azure virtual machines to collect the data used to monitor SQL.
+
+> [!NOTE]
+> The [monitoring profile](#create-sql-monitoring-profile) specifies what data to collect from the different types of SQL you want to monitor. Each monitoring virtual machine can have only one monitoring profile associated with it. If you need multiple monitoring profiles, create a virtual machine for each.
+
+### Azure virtual machine requirements
+The Azure virtual machine has the following requirements.
+
+- Operating system: Ubuntu 18.04
+- Recommended Azure virtual machine size: Standard_B2s (2 vCPUs, 4 GiB memory)
+- Supported regions: Any [region supported by the Azure Monitor agent](../agents/azure-monitor-agent-overview.md#supported-regions)
+
+> [!NOTE]
+> The Standard_B2s (2 vCPUs, 4 GiB memory) virtual machine size supports up to 100 connection strings. Don't allocate more than 100 connections to a single virtual machine.
+
+The virtual machines need to be placed in the same VNet as your SQL systems so they can make network connections to collect monitoring data. If you use the monitoring virtual machine to monitor SQL running on Azure virtual machines or on an Azure SQL Managed Instance, consider placing the monitoring virtual machine in an application security group or the same virtual network as those resources so that you don't need to provide a public network endpoint for monitoring the SQL server.
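Given the 100-connection-per-VM guidance in the note above, the number of monitoring virtual machines needed can be sketched as a simple chunking calculation (the connection names are placeholders):

```python
def plan_monitoring_vms(connection_strings: list, max_per_vm: int = 100) -> list:
    """Chunk connection strings into groups of at most max_per_vm, one group
    per monitoring virtual machine (the Standard_B2s limit noted above)."""
    return [connection_strings[i:i + max_per_vm]
            for i in range(0, len(connection_strings), max_per_vm)]

# 250 hypothetical connection strings need 3 monitoring VMs.
groups = plan_monitoring_vms([f"conn-{i}" for i in range(250)])
print(len(groups), len(groups[-1]))
```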
+
+## Configure network settings
+Each type of SQL offers methods for your monitoring virtual machine to securely access SQL. The sections below cover the options based on the type of SQL.
+
+### Azure SQL Databases
+
+[Tutorial - Connect to an Azure SQL server using an Azure Private Endpoint - Azure portal](../../private-link/tutorial-private-endpoint-sql-portal.md) provides an example of how to set up a private endpoint that you can use to access your database. If you use this method, ensure your monitoring virtual machine is in the same VNet and subnet that you will be using for the private endpoint. You can then create the private endpoint on your database if you haven't already done so.
+
+If you use a [firewall setting](../../azure-sql/database/firewall-configure.md) to provide access to your SQL Database, you need to add a firewall rule to provide access from the public IP address of the monitoring virtual machine. You can access the firewall settings from the **Azure SQL Database Overview** page in the portal.
+++
+### Azure SQL Managed Instances
+
+If your monitoring virtual machine will be in the same VNet as your SQL MI resources, then see [Connect inside the same VNet](https://docs.microsoft.com/azure/azure-sql/managed-instance/connect-application-instance#connect-inside-the-same-vnet). If your monitoring virtual machine will be in the different VNet than your SQL MI resources, then see [Connect inside a different VNet](https://docs.microsoft.com/azure/azure-sql/managed-instance/connect-application-instance#connect-inside-a-different-vnet).
++
+### Azure virtual machine and Azure SQL virtual machine
+If your monitoring virtual machine is in the same VNet as your SQL virtual machine resources, then see [Connect to SQL Server within a virtual network](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql#connect-to-sql-server-within-a-virtual-network). If your monitoring virtual machine will be in the different VNet than your SQL virtual machine resources, then see [Connect to SQL Server over the internet](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql#connect-to-sql-server-over-the-internet).
+
+## Store monitoring password in Key Vault
+You should store your SQL user connection passwords in a Key Vault rather than entering them directly into your monitoring profile connection strings.
+
+When setting up your profile for SQL monitoring, you need one of the following permissions on the Key Vault resource you intend to use:
+
+- Microsoft.Authorization/roleAssignments/write
+- Microsoft.Authorization/roleAssignments/delete
+
+Roles such as User Access Administrator or Owner include these permissions.
+
+A new access policy is automatically created as part of creating your SQL monitoring profile that uses the Key Vault you specified. Use *Allow access from all networks* for the Key Vault networking settings.
++
+## Create SQL monitoring profile
+Open SQL insights by selecting **SQL (preview)** from the **Insights** section of the **Azure Monitor** menu in the Azure portal. Click **Create new profile**.
++
+The profile will store the information that you want to collect from your SQL systems. It has specific settings for:
+
+- Azure SQL Database
+- Azure SQL Managed Instances
+- SQL Server running on virtual machines
+
+For example, you might create one profile named *SQL Production* and another named *SQL Staging* with different settings for frequency of data collection, what data to collect, and which workspace to send the data to.
+
+The profile is stored as a [data collection rule](../agents/data-collection-rule-overview.md) resource in the subscription and resource group you select. Each profile needs the following:
+
+- Name. Cannot be edited once created.
+- Location. This is an Azure region.
+- Log Analytics workspace to store the monitoring data.
+- Collection settings for the frequency and type of SQL monitoring data to collect.
+
+> [!NOTE]
+> The location of the profile should be in the same location as the Log Analytics workspace you plan to send the monitoring data to.
+++
+Click **Create monitoring profile** once you've entered the details for your monitoring profile. It can take up to a minute for the profile to be deployed. If you don't see the new profile listed in the **Monitoring profile** combo box, click the refresh button; it should appear once the deployment is completed. Once you've selected the new profile, select the **Manage profile** tab to add a monitoring machine that will be associated with the profile.
+
+### Add monitoring machine
+Select **Add monitoring machine** to open a context panel where you choose the virtual machine to set up to monitor your SQL instances and provide the connection strings.
+
+Select the subscription and name of your monitoring virtual machine. If you're using Key Vault to store your password for the monitoring user, select the Key Vault resources with these secrets and enter the URL and secret name to be used in the connection strings. See the next section for details on identifying the connection string for different SQL deployments.
+
+### Add connection strings
+The connection string specifies the username that SQL insights should use when logging into SQL to run the Dynamic Management Views. If you're using a Key Vault to store the password for your monitoring user, provide the URL and name of the secret to use.
+
+The connection string will vary for each type of SQL resource:
+
+#### Azure SQL Databases
+Enter the connection string in the form:
+
+```
+"sqlAzureConnections": [
+  "Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=$username;Password=$password;"
+]
+```
+
+Get the details from the **Connection strings** menu item for the database.
+
+To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string.
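+
+For example, a connection string targeting a readable secondary might look like the following (the server and database names here are placeholders):
+
+```
+"Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=$username;Password=$password;ApplicationIntent=ReadOnly;"
+```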
+
+#### Azure virtual machines running SQL Server
+Enter the connection string in the form:
+
+```
+"sqlVmConnections": [
+ "Server=MyServerIPAddress;Port=1433;User Id=$username;Password=$password;"
+]
+```
+
+If your monitoring virtual machine is in the same virtual network, use the private IP address of the server. Otherwise, use the public IP address. If you're using an Azure SQL virtual machine, you can see which port to use on the **Security** page for the resource.
+
+To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string.
+
+#### Azure SQL Managed Instances
+Enter the connection string in the form:
+
+```
+"sqlManagedInstanceConnections": [
+  "Server=mysqlserver.database.windows.net;Port=1433;User Id=$username;Password=$password;"
+]
+```
+Get the details from the **Connection strings** menu item for the managed instance.
+
+To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string.
+
+## Profile created
+Select **Add monitoring virtual machine** to configure the virtual machine to collect data from your SQL deployments. Do not return to the **Overview** tab. In a few minutes, the Status column should change to *Collecting*, and you should see data for the systems you have chosen to monitor.
+
+If you do not see data, see [Troubleshooting SQL insights](sql-insights-troubleshoot.md) to identify the issue.
+
+## Next steps
+
+- See [Troubleshooting SQL insights](sql-insights-troubleshoot.md) if SQL insights isn't working properly after being enabled.
azure-monitor Sql Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-overview.md
+
+ Title: Monitor your SQL deployments with SQL insights (preview)
+description: Overview of SQL insights in Azure Monitor
+++ Last updated : 03/15/2021++
+# Monitor your SQL deployments with SQL insights (preview)
+SQL insights monitors the performance and health of your SQL deployments. It can help deliver predictable performance and availability of vital workloads you have built around a SQL backend by identifying performance bottlenecks and issues. SQL insights stores its data in [Azure Monitor Logs](../logs/data-platform-logs.md), which allows it to deliver powerful aggregation and filtering and to analyze data trends over time. You can view this data from Azure Monitor in the views we ship as part of this offering and you can delve directly into the Log data to run queries and analyze trends.
+
+SQL insights does not install anything on your SQL IaaS deployments. Instead, it uses dedicated monitoring virtual machines to remotely collect data for both SQL PaaS and SQL IaaS deployments. The SQL insights monitoring profile allows you to manage the data sets to be collected based upon the type of SQL, including Azure SQL Database, Azure SQL Managed Instance, and SQL Server running on an Azure virtual machine.
+
+## Pricing
+
+There's no direct cost for SQL insights, but you're charged for its activity in the Log Analytics workspace. Based on the pricing that's published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/), SQL insights is billed for:
+
+- Data ingested from agents and stored in the workspace.
+- Alert rules based on log data.
+- Notifications sent from alert rules.
+
+The log size varies by the string lengths of the data collected, and it can increase with the amount of database activity.
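+
+To gauge how much SQL insights is ingesting, you can query the workspace yourself. For example, the following Log Analytics query (a sketch; it assumes the data lands in the default *InsightsMetrics* table) sums the billed size of the data collected over the last day:
+
+```
+InsightsMetrics
+| where TimeGenerated > ago(1d)
+| summarize BilledDataMB = sum(_BilledSize) / (1024 * 1024)
+```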
+
+## Supported versions
+SQL insights supports the following versions of SQL Server:
+
+- SQL Server 2012 and newer
+
+SQL insights supports SQL Server running in the following environments:
+
+- Azure SQL Database
+- Azure SQL Managed Instance
+- Azure SQL VMs
+- Azure VMs
+
+SQL insights has no support or limited support for the following:
+
+- SQL Server running on virtual machines outside of Azure: Not currently supported.
+- Azure SQL Database elastic pools: Limited support during the public preview. Elastic pools will be fully supported at general availability.
+- Azure SQL Database serverless deployments: Like Active Geo-DR, the current monitoring agents will prevent serverless deployments from going to sleep. This could cause higher than expected costs from serverless deployments.
+- Readable secondaries: Currently, only deployment types with a single readable secondary endpoint (Business Critical or Hyperscale) are supported. When Hyperscale deployments support named replicas, SQL insights will be able to support multiple readable secondary endpoints for a single logical database.
+- Azure Active Directory: Currently, only SQL logins are supported for the monitoring agent. Azure Active Directory support is planned for an upcoming release, and there is no current support for SQL VM authentication using Active Directory on a bespoke domain controller.
+
+## Open SQL insights
+Open SQL insights by selecting **SQL (preview)** from the **Insights** section of the **Azure Monitor** menu in the Azure portal. Click on a tile to load the experience for the type of SQL you are monitoring.
+
+## Enable SQL insights
+See [Enable SQL insights](sql-insights-enable.md) for the detailed procedure to enable SQL insights in addition to steps for troubleshooting.
+
+## Data collected by SQL insights
+In the public preview, SQL insights only supports the remote method of monitoring; the Telegraf agent is not installed on the SQL Server itself. SQL insights uses the SQL Server input plugin for Telegraf and uses three groups of queries for the different types of SQL it monitors: Azure SQL DB, Azure SQL Managed Instance, and SQL Server running on an Azure VM.
+
+The following tables summarize:
+
+- Name of the query in the sqlserver Telegraf plugin
+- Dynamic management views the query calls
+- Namespace the data appears under in the *InsightsMetrics* table
+- Whether the data is collected by default
+- How often the data is collected by default
+
+You can modify which queries are run and data collection frequency when you create your monitoring profile.
+
+### Azure SQL DB data
+
+| Query Name | DMV | Namespace | Enabled by Default | Default collection frequency |
+|:|:|:|:|:|
+| AzureSQLDBWaitStats | sys.dm_db_wait_stats | sqlserver_azuredb_waitstats | No | NA |
+| AzureSQLDBResourceStats | sys.dm_db_resource_stats | sqlserver_azure_db_resource_stats | Yes | 60 seconds |
+| AzureSQLDBResourceGovernance | sys.dm_user_db_resource_governance | sqlserver_db_resource_governance | Yes | 60 seconds |
+| AzureSQLDBDatabaseIO | sys.dm_io_virtual_file_stats<br>sys.database_files<br>tempdb.sys.database_files | sqlserver_database_io | Yes | 60 seconds |
+| AzureSQLDBServerProperties | sys.dm_os_job_object<br>sys.database_files<br>sys.[databases]<br>sys.[database_service_objectives] | sqlserver_server_properties | Yes | 60 seconds |
+| AzureSQLDBOsWaitstats | sys.dm_os_wait_stats | sqlserver_waitstats | Yes | 60 seconds |
+| AzureSQLDBMemoryClerks | sys.dm_os_memory_clerks | sqlserver_memory_clerks | Yes | 60 seconds |
+| AzureSQLDBPerformanceCounters | sys.dm_os_performance_counters<br>sys.databases | sqlserver_performance | Yes | 60 seconds |
+| AzureSQLDBRequests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | sqlserver_requests | No | NA |
+| AzureSQLDBSchedulers | sys.dm_os_schedulers | sqlserver_schedulers | No | NA |
+
+### Azure SQL managed instance data
+
+| Query Name | DMV | Namespace | Enabled by Default | Default collection frequency |
+|:|:|:|:|:|
+| AzureSQLMIResourceStats | sys.server_resource_stats | sqlserver_azure_db_resource_stats | Yes | 60 seconds |
+| AzureSQLMIResourceGovernance | sys.dm_instance_resource_governance | sqlserver_instance_resource_governance | Yes | 60 seconds |
+| AzureSQLMIDatabaseIO | sys.dm_io_virtual_file_stats<br>sys.master_files | sqlserver_database_io | Yes | 60 seconds |
+| AzureSQLMIServerProperties | sys.server_resource_stats | sqlserver_server_properties | Yes | 60 seconds |
+| AzureSQLMIOsWaitstats | sys.dm_os_wait_stats | sqlserver_waitstats | Yes | 60 seconds |
+| AzureSQLMIMemoryClerks | sys.dm_os_memory_clerks | sqlserver_memory_clerks | Yes | 60 seconds |
+| AzureSQLMIPerformanceCounters | sys.dm_os_performance_counters<br>sys.databases | sqlserver_performance | Yes | 60 seconds |
+| AzureSQLMIRequests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | sqlserver_requests | No | NA |
+| AzureSQLMISchedulers | sys.dm_os_schedulers | sqlserver_schedulers | No | NA |
+
+### SQL Server data
+
+| Query Name | DMV | Namespace | Enabled by Default | Default collection frequency |
+|:|:|:|:|:|
+| SQLServerPerformanceCounters | sys.dm_os_performance_counters | sqlserver_performance | Yes | 60 seconds |
+| SQLServerWaitStatsCategorized | sys.dm_os_wait_stats | sqlserver_waitstats | Yes | 60 seconds |
+| SQLServerDatabaseIO | sys.dm_io_virtual_file_stats<br>sys.master_files | sqlserver_database_io | Yes | 60 seconds |
+| SQLServerProperties | sys.dm_os_sys_info | sqlserver_server_properties | Yes | 60 seconds |
+| SQLServerMemoryClerks | sys.dm_os_memory_clerks | sqlserver_memory_clerks | Yes | 60 seconds |
+| SQLServerSchedulers | sys.dm_os_schedulers | sqlserver_schedulers | No | NA |
+| SQLServerRequests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | sqlserver_requests | No | NA |
+| SQLServerVolumeSpace | sys.master_files | sqlserver_volume_space | Yes | 60 seconds |
+| SQLServerCpu | sys.dm_os_ring_buffers | sqlserver_cpu | Yes | 60 seconds |
+| SQLServerAvailabilityReplicaStates | sys.dm_hadr_availability_replica_states<br>sys.availability_replicas<br>sys.availability_groups<br>sys.dm_hadr_availability_group_states | sqlserver_hadr_replica_states | | 60 seconds |
+| SQLServerDatabaseReplicaStates | sys.dm_hadr_database_replica_states<br>sys.availability_replicas | sqlserver_hadr_dbreplica_states | | 60 seconds |
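+
+Once collection is running, you can inspect the collected data in the *InsightsMetrics* table. For example, the following Log Analytics query (a sketch) summarizes recent performance-counter samples by SQL instance:
+
+```
+InsightsMetrics
+| where TimeGenerated > ago(1h) and Namespace == "sqlserver_performance"
+| extend Tags = todynamic(Tags)
+| extend SqlInstance = tostring(Tags.sql_instance)
+| summarize count() by SqlInstance, Name
+```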
+
+## Next steps
+
+- See [Enable SQL insights](sql-insights-enable.md) for the detailed procedure to enable SQL insights.
+- See [Frequently asked questions](../faq.md#sql-insights-preview) for frequently asked questions about SQL insights.
azure-monitor Sql Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-troubleshoot.md
+
+ Title: Troubleshooting SQL insights (preview)
+description: Troubleshooting SQL insights in Azure Monitor
+++ Last updated : 03/04/2021++
+# Troubleshooting SQL insights (preview)
+To troubleshoot data collection issues in SQL insights, check the status of the monitoring machine on the **Manage profile** tab. The machine will have one of the following states:
+
+- Collecting
+- Not collecting
+- Collecting with errors
+
+Click the **Status** to drill in to see logs and further details, which may help you resolve the problem.
+
+## Not collecting state
+The monitoring machine has a state of *Not collecting* if there's no data in *InsightsMetrics* for SQL in the last 10 minutes.
+
+SQL insights uses the following query to retrieve this information:
+
+```
+InsightsMetrics
+    | extend Tags = todynamic(Tags)
+    | extend SqlInstance = tostring(Tags.sql_instance)
+    | where TimeGenerated > ago(10m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'
+```
+
+Check whether there are any logs from Telegraf that help identify the root cause of the issue. If there are log entries, you can click *Not collecting* and check the logs and troubleshooting info for common problems.
+
+If there are no logs, then you must check the logs on the monitoring virtual machine for the following services installed by two virtual machine extensions:
+
+- Microsoft.Azure.Monitor.AzureMonitorLinuxAgent
+ - Service: mdsd
+- Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension
+ - Service: wli
+ - Service: ms-telegraf
+ - Service: td-agent-bit-wli
+ - Extension log to check install failures: /var/log/azure/Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension/wlilogs.log
+
+### wli service logs
+
+Service logs: `/var/log/wli.log`
+
+To see recent logs: `tail -n 100 -f /var/log/wli.log`
+
+
+If you see the following error log, it indicates a problem with the **mdsd** service.
+
+```
+2021-01-27T06:09:28Z [Error] Failed to get config data. Error message: dial unix /var/run/mdsd/default_fluent.socket: connect: no such file or directory
+```
+
+### Telegraf service logs
+
+Service logs: `/var/log/ms-telegraf/telegraf.log`
+
+To see recent logs: `tail -n 100 -f /var/log/ms-telegraf/telegraf.log`
+
+To see recent error and warning logs: `tail -n 1000 /var/log/ms-telegraf/telegraf.log | grep "E\!\|W!"`
+
+The configuration used by Telegraf is generated by the wli service and placed in: `/etc/ms-telegraf/telegraf.d/wli`
+
+If a bad configuration is generated, the ms-telegraf service may fail to start. Check whether the ms-telegraf service is running with the command: `service ms-telegraf status`
+
+To see error messages from the Telegraf service, run it manually with the following command:
+
+```
+/usr/bin/ms-telegraf --config /etc/ms-telegraf/telegraf.conf --config-directory /etc/ms-telegraf/telegraf.d/wli --test
+```
+
+### mdsd service logs
+
+Check [Current Limitations](../agents/azure-monitor-agent-overview.md#current-limitations) for the Azure Monitor agent.
+
+Service logs:
+- `/var/log/mdsd.err`
+- `/var/log/mdsd.warn`
+- `/var/log/mdsd.info`
+
+To see recent errors: `tail -n 100 -f /var/log/mdsd.err`
+
+ If you need to contact support, collect the following information:
+
+- Logs in `/var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/`
+- Log in `/var/log/waagent.log`
+- Logs in `/var/log/mdsd*`
+- Files in `/etc/mdsd.d/`
+- File `/etc/default/mdsd`
+
+### Invalid monitoring virtual machine configuration
+
+One cause of the *Not Collecting* state is when you have an invalid monitoring virtual machine configuration. Following is the default configuration:
+
+```json
+{
+ "version": 1,
+ "secrets": {
+ "telegrafPassword": {
+ "keyvault": "https://mykeyvault.vault.azure.net/",
+ "name": "sqlPassword"
+ }
+ },
+ "parameters": {
+ "sqlAzureConnections": [
+ "Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=telegraf;Password=$telegrafPassword;"
+ ],
+ "sqlVmConnections": [
+ ],
+ "sqlManagedInstanceConnections": [
+ ]
+ }
+}
+```
+
+This configuration specifies the replacement tokens to be used in the profile configuration on your monitoring virtual machine. It also allows you to reference secrets from Azure Key Vault so that you don't have to keep secret values in any configuration, which is strongly recommended.
+
+#### Secrets
+Secrets are tokens whose values are retrieved at run time from an Azure Key Vault. A secret is defined by a pair of a Key Vault reference and a secret name. This allows Azure Monitor to get the dynamic value of the secret and use it in downstream config references.
+
+You can define as many secrets as needed in the configuration, including secrets stored in separate Key Vaults.
+
+```json
+ "secrets": {
+ "<secret-token-name-1>": {
+ "keyvault": "<key-vault-uri>",
+ "name": "<key-vault-secret-name>"
+ },
+ "<secret-token-name-2>": {
+ "keyvault": "<key-vault-uri-2>",
+ "name": "<key-vault-secret-name-2>"
+ }
+ }
+```
+
+Permission to access the Key Vault is granted to a managed service identity on the monitoring virtual machine. Azure Monitor expects the Key Vault to grant at least the *get* permission for secrets to the virtual machine. You can enable it from the Azure portal, PowerShell, CLI, or a Resource Manager template.
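+
+For example, using Azure CLI, an access policy granting the *get* secret permission could be added like this (a sketch; `mykeyvault` is a placeholder vault name and `<principal-id>` is the object ID of the monitoring virtual machine's managed identity):
+
+```azurecli
+az keyvault set-policy --name mykeyvault --object-id <principal-id> --secret-permissions get
+```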
+
+#### Parameters
+Parameters are tokens that can be referenced in the profile configuration via JSON templating. Parameters have a name and a value. Values can be any JSON type including objects and arrays. A parameter is referenced in the profile config using its name in this convention `.Parameters.<name>`.
+
+Parameters can reference secrets in Key Vault using the same convention. For example, `sqlAzureConnections` references the secret `telegrafPassword` using the convention `$telegrafPassword`.
+
+At run time, all parameters and secrets will be resolved and merged with the profile configuration to construct the actual configuration to be used on the machine.
+
+> [!NOTE]
+> The parameter names `sqlAzureConnections`, `sqlVmConnections`, and `sqlManagedInstanceConnections` are all required in the configuration, even if you don't provide connection strings for some of them.
+
+## Collecting with errors state
+The monitoring machine will be in the *Collecting with errors* state if there's at least one *InsightsMetrics* log but there are also errors in the *Operation* table.
+
+SQL insights uses the following queries to retrieve this information:
+
+```
+InsightsMetrics
+    | extend Tags = todynamic(Tags)
+    | extend SqlInstance = tostring(Tags.sql_instance)
+    | where TimeGenerated > ago(240m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'
+```
+
+```
+Operation
+ | where OperationCategory == "WorkloadInsights"
+ | summarize Errors = countif(OperationStatus == 'Error')
+```
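+
+To look at the individual errors rather than just the count, you can run a query like the following (a sketch):
+
+```
+Operation
+| where OperationCategory == "WorkloadInsights" and OperationStatus == "Error"
+| project TimeGenerated, Detail
+| order by TimeGenerated desc
+```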
+
+For common cases, we provide troubleshooting knowledge in our logs view.
+
+## Next steps
+
+- Get details on [enabling SQL insights](sql-insights-enable.md).
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-percept Quickstart Percept Dk Unboxing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-unboxing.md
Once you have received your Azure Percept DK, reference this guide for informati
1. Connect the devkit components. > [!NOTE]
- > The power adapter port is located on the right side of the carrier board. The remaining ports (2x USB-A, 1x USB-C, 1x HDMI, and 1x Ethernet) and the reset button are located on the left side of the carrier board.
+ > The power adapter port is located on the right side of the carrier board. The remaining ports (2x USB-A, 1x USB-C, and 1x Ethernet) and the power button are located on the left side of the carrier board.
1. Hand screw both Wi-Fi antennas into the carrier board.
Once you have received your Azure Percept DK, reference this guide for informati
1. Connect the power adapter/cable to the carrier board and a wall outlet. To fully secure the power cable connector to the carrier board, use a P7 screwdriver (not included in the devkit) to tighten the connector screws.
- 1. After plugging the power cable into a wall outlet, the device will automatically power on. The reset button on the left side of the carrier board will be illuminated. Please allow some time for the device to boot up.
+ 1. After plugging the power cable into a wall outlet, the device will automatically power on. The power button on the left side of the carrier board will be illuminated. Please allow some time for the device to boot up.
> [!NOTE]
- > The reset button is for powering off or resetting the device while connected to a power outlet. In the event of a power outage, the device will automatically reset and power back on.
+ > The power button is for powering off or restarting the device while connected to a power outlet. In the event of a power outage, the device will automatically restart.
## Next steps
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-dev-kit.md
To redirect any output to a .txt file for further analysis, use the following sy
sudo [command] > [file name].txt ```
+Change the permissions of the .txt file so it can be copied:
+
+```console
+sudo chmod 666 [file name].txt
+```
+ After redirecting output to a .txt file, copy the file to your host PC via SCP: ```console
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
Title: Resource providers by Azure services description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Previously updated : 12/01/2020 Last updated : 03/16/2021 # Resource providers for Azure services
-This article shows how resource provider namespaces map to Azure services.
+This article shows how resource provider namespaces map to Azure services. If you don't know the resource provider, see [Find resource provider](#find-resource-provider).
## Match resource provider to service
The resource providers above that are marked with **- registered** are register
> [!IMPORTANT] > Only register a resource provider when you're ready to use it. The registration step enables you to maintain least privileges within your subscription. A malicious user can't use resource providers that aren't registered.
+## Find resource provider
+
+If you have existing infrastructure in Azure, but aren't sure which resource provider is used, you can use either Azure CLI or PowerShell to find the resource provider. Specify the name of the resource group that contains the resources to find.
+
+The following example uses Azure CLI:
+
+```azurecli-interactive
+az resource list -g examplegroup
+```
+
+The results include the resource type. The resource provider namespace is the first part of the resource type. The following example shows the **Microsoft.KeyVault** resource provider.
+
+```json
+[
+ {
+ ...
+ "type": "Microsoft.KeyVault/vaults"
+ }
+]
+```
+
+The following example uses PowerShell:
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName examplegroup
+```
+
+The results include the resource type. The resource provider namespace is the first part of the resource type. The following example shows the **Microsoft.KeyVault** resource provider.
+
+```azurepowershell
+Name : examplekey
+ResourceGroupName : examplegroup
+ResourceType : Microsoft.KeyVault/vaults
+...
+```
+ ## Next steps For more information about resource providers, including how to register a resource provider, see [Azure resource providers and types](resource-providers-and-types.md).
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | > | - | -- | - |
-> | flexibleServers | Yes | Yes |
+> | flexibleServers | No | No |
> | servers | Yes | Yes | ## Microsoft.DBforPostgreSQL
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-resource-manager Bicep Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-install.md
Title: Setup Bicep development and deployment environments description: How to configure Bicep development and deployment environments Previously updated : 03/09/2021 Last updated : 03/17/2021 # Setup Bicep development and deployment environment
Learn how to setup Bicep development and deployment environments.
To get the best Bicep authoring experience, you need two components: -- **Bicep extension for Visual Studio Code**. To create Bicep files, you need a good Bicep editor. We recommend [Visual Studio Code](https://code.visualstudio.com/) with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). These tools provide language support and resource autocompletion. They help create and validate Bicep files. For more information, see [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md).
+- **Bicep extension for Visual Studio Code**. To create Bicep files, you need a good Bicep editor. We recommend [Visual Studio Code](https://code.visualstudio.com/) with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). These tools provide language support and resource autocompletion. They help create and validate Bicep files. For more information about using Visual Studio Code and the Bicep extension, see [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md).
- **Bicep CLI**. Use Bicep CLI to compile Bicep files to ARM JSON templates, and decompile ARM JSON templates to Bicep files. For more information, see [Install Bicep CLI](#install-bicep-cli). ## Deployment environment
azure-resource-manager Bicep Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-modules.md
+
+ Title: Bicep modules
+description: Describes how to define and consume a module, and how to use module scopes.
+ Last updated : 03/17/2021++
+# Use Bicep modules
+
+Bicep enables you to break down a complex solution into modules. A Bicep module is a set of one or more resources to be deployed together. Modules abstract away complex details of the raw resource declaration, which can increase readability. You can reuse these modules and share them with other people. Combined with [template specs](./template-specs.md), modules create a way to achieve modularity and code reuse. For a tutorial, see [Tutorial: Add Bicep modules](./bicep-tutorial-add-modules.md).
+
+## Define modules
+
+Every Bicep file can be consumed as a module. A module exposes only parameters and outputs as its contract to other Bicep files. Both parameters and outputs are optional.
+
+The following Bicep file can be deployed directly to create a storage account or be used as a module. The next section shows you how to consume modules:
+
+```bicep
+@minLength(3)
+@maxLength(11)
+param storagePrefix string
+
+@allowed([
+ 'Standard_LRS'
+ 'Standard_GRS'
+ 'Standard_RAGRS'
+ 'Standard_ZRS'
+ 'Premium_LRS'
+ 'Premium_ZRS'
+ 'Standard_GZRS'
+ 'Standard_RAGZRS'
+])
+param storageSKU string = 'Standard_LRS'
+param location string
+
+var uniqueStorageName = '${storagePrefix}${uniqueString(resourceGroup().id)}'
+
+resource stg 'Microsoft.Storage/storageAccounts@2019-04-01' = {
+ name: uniqueStorageName
+ location: location
+ sku: {
+ name: storageSKU
+ }
+ kind: 'StorageV2'
+ properties: {
+ supportsHttpsTrafficOnly: true
+ }
+}
+
+output storageEndpoint object = stg.properties.primaryEndpoints
+```
+
+Output is used to pass values to the parent Bicep file.
+
+## Consume modules
+
+Use the _module_ keyword to consume a module. The following Bicep file deploys the resource defined in the module file being referenced:
+
+```bicep
+@minLength(3)
+@maxLength(11)
+param namePrefix string
+param location string = resourceGroup().location
+
+module stgModule './storageAccount.bicep' = {
+ name: 'storageDeploy'
+ params: {
+ storagePrefix: namePrefix
+ location: location
+ }
+}
+
+output storageEndpoint object = stgModule.outputs.storageEndpoint
+```
+
+- **module**: Keyword.
+- **symbolic name** (stgModule): Identifier for the module.
+- **module file**: The path to the module in this example is specified using a relative path (./storageAccount.bicep). All paths in Bicep must be specified using the forward slash (/) directory separator to ensure consistent compilation cross-platform. The Windows backslash (\\) character is unsupported.
+- The **_name_** property (storageDeploy) is required when consuming a module. When Bicep generates the template IL, this field is used as the name of the nested deployment resource, which is generated for the module:
+
+ ```json
+ ...
+ ...
+ "resources": [
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2019-10-01",
+ "name": "storageDeploy",
+ "properties": {
+ ...
+ }
+ }
+ ]
+ ...
+ ```
+
+To get an output value from a module, retrieve the property value with syntax like: `stgModule.outputs.storageEndpoint` where `stgModule` is the identifier of the module.
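+
+For example, the parent file could surface just one property of the module's output object (a sketch based on the files above; `blob` is one of the endpoints in the storage account's `primaryEndpoints` object):
+
+```bicep
+output blobEndpoint string = stgModule.outputs.storageEndpoint.blob
+```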
+
+## Configure module scopes
+
+When declaring a module, you can supply a _scope_ property to set the scope at which to deploy the module:
+
+```bicep
+module stgModule './storageAccount.bicep' = {
+ name: 'storageDeploy'
+ scope: resourceGroup('someOtherRg') // pass in a scope to a different resourceGroup
+ params: {
+ storagePrefix: namePrefix
+ location: location
+ }
+}
+```
+
+The _scope_ property can be omitted when the module's target scope and the parent's target scope are the same. When the scope property is not provided, the module is deployed at the parent's target scope.
+
+The following Bicep file shows how to create a resource group and deploy a module to the resource group:
+
+```bicep
+// set the target scope for this file
+targetScope = 'subscription'
+
+@minLength(3)
+@maxLength(11)
+param namePrefix string
+
+param location string = deployment().location
+
+var resourceGroupName = '${namePrefix}rg'
+resource myResourceGroup 'Microsoft.Resources/resourceGroups@2020-01-01' = {
+ name: resourceGroupName
+ location: location
+ scope: subscription()
+}
+
+module stgModule './storageAccount.bicep' = {
+ name: 'storageDeploy'
+ scope: myResourceGroup
+ params: {
+ storagePrefix: namePrefix
+ location: location
+ }
+}
+
+output storageEndpoint object = stgModule.outputs.storageEndpoint
+```
+
+## Next steps
+
+- To go through a tutorial, see [Tutorial: Add Bicep modules](./bicep-tutorial-add-modules.md).
azure-resource-manager Bicep Tutorial Add Outputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-add-outputs.md
Title: Tutorial - add outputs to Azure Resource Manager Bicep file description: Add outputs to your Bicep file to simplify the syntax. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-resource-manager Bicep Tutorial Create First Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-create-first-bicep.md
Title: Tutorial - Create & deploy Azure Resource Manager Bicep files description: Create your first Bicep file for deploying Azure resources. In the tutorial, you learn about the Bicep file syntax and how to deploy a storage account. Previously updated : 03/10/2021 Last updated : 03/17/2021
Let's start by making sure you have the tools you need to create and deploy Bice
### Editor
-To create Bicep files, you need a good editor. We recommend Visual Studio Code with the Bicep extension. If you need to install these tools, see [Quickstart: Create Bicep files with Visual Studio Code](quickstart-create-bicep-use-visual-studio-code.md).
+To create Bicep files, you need a good editor. We recommend Visual Studio Code with the Bicep extension. If you need to install these tools, see [Configure Bicep development environment](./bicep-install.md#development-environment).
### Command-line deployment
-You also need either the latest Azure PowerShell or the latest Azure CLI to deploy the Bicep file. For the installation instructions, see:
+You can deploy Bicep files by using Azure CLI or Azure PowerShell. For Azure CLI, you need version 2.20.0 or later; for Azure PowerShell, you need version 5.6.0 or later. For the installation instructions, see:
- [Install Azure PowerShell](/powershell/azure/install-az-ps)
- [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows)
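As a quick sketch of checking those minimums (the installed version below is a placeholder; on a real machine you would read it from `az version --query '"azure-cli"' -o tsv`), a portable dotted-version comparison looks like:

```shell
# Sketch: verify tool versions meet the documented minimums before deploying.
min_cli="2.20.0"
installed_cli="2.21.0"   # placeholder value for illustration

version_ge() {
  # Succeeds when $1 >= $2, comparing as dotted version numbers
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if version_ge "$installed_cli" "$min_cli"; then
  echo "Azure CLI version OK"
else
  echo "Azure CLI too old; need $min_cli or later" >&2
fi
```

The same helper works for the Azure PowerShell 5.6.0 minimum by swapping in the module version.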
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
Previously updated : 03/09/2021 Last updated : 03/17/2021 # Auditing for Azure SQL Database and Azure Synapse Analytics
AzureDiagnostics
### <a id="audit-storage-destination"></a>Audit to storage destination
-To configure writing audit logs to a storage account, select **Storage** and open **Storage details**. Select the Azure storage account where logs will be saved, and then select the retention period. Then click **OK**. Logs older than the retention period are deleted.
+To configure writing audit logs to a storage account, select **Storage** when you get to the **Auditing** section. Select the Azure storage account where logs will be saved, and then select the retention period by opening **Advanced properties**. Then click **Save**. Logs older than the retention period are deleted.
-- The default value for retention period is 0 (unlimited retention). You can change this value by moving the **Retention (Days)** slider in **Storage settings** when configuring the storage account for auditing.
+- The default value for retention period is 0 (unlimited retention). You can change this value by moving the **Retention (Days)** slider in **Advanced properties** when configuring the storage account for auditing.
- If you change the retention period from 0 (unlimited retention) to any other value, note that retention applies only to logs written after the value was changed (logs written while retention was set to unlimited are preserved, even after retention is enabled).

![storage account](./media/auditing-overview/auditing_select_storage.png)

### <a id="audit-log-analytics-destination"></a>Audit to Log Analytics destination
-To configure writing audit logs to a Log Analytics workspace, select **Log Analytics** and open **Log Analytics details**. Select or create the Log Analytics workspace where logs will be written and then click **OK**.
+To configure writing audit logs to a Log Analytics workspace, select **Log Analytics** and open **Log Analytics details**. Select the Log Analytics workspace where logs will be written and then click **OK**. If you have not created a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../../azure-monitor/logs/quick-create-workspace.md).
![LogAnalyticsworkspace](./media/auditing-overview/auditing_select_oms.png)
For more details about Azure Monitor Log Analytics workspace, see [Designing you
### <a id="audit-event-hub-destination"></a>Audit to Event Hub destination
-To configure writing audit logs to an event hub, select **Event Hub** and open **Event Hub details**. Select the event hub where logs will be written and then click **OK**. Be sure that the event hub is in the same region as your database and server.
+To configure writing audit logs to an event hub, select **Event Hub**. Select the event hub where logs will be written and then click **Save**. Be sure that the event hub is in the same region as your database and server.
![Eventhub](./media/auditing-overview/auditing_select_event_hub.png)
If you chose to write audit logs to an Azure storage account, there are several
- Use the [Azure portal](https://portal.azure.com). Open the relevant database. At the top of the database's **Auditing** page, click **View audit logs**.
- ![Screenshot that shows the View audit logs button highlighted on the database auditing page.](./media/auditing-overview/7_auditing_get_started_blob_view_audit_logs.png)
+ ![view audit logs](./media/auditing-overview/auditing-view-audit-logs.png)
**Audit records** opens, from which you'll be able to view the logs.

- You can view specific dates by clicking **Filter** at the top of the **Audit records** page.
- You can switch between audit records that were created by the *server audit policy* and the *database audit policy* by toggling **Audit Source**.
- - You can view only SQL injection related audit records by checking **Show only audit records for SQL injections** checkbox.
![Screenshot that shows the options for viewing the audit records.]( ./media/auditing-overview/8_auditing_get_started_blob_audit_records.png)
With geo-replicated databases, when you enable auditing on the primary database
In production, you are likely to refresh your storage keys periodically. When writing audit logs to Azure storage, you need to resave your auditing policy when refreshing your keys. The process is as follows:
-1. Open **Storage Details**. In the **Storage Access Key** box, select **Secondary**, and click **OK**. Then click **Save** at the top of the auditing configuration page.
+1. Open **Advanced properties** under **Storage**. In the **Storage Access Key** box, select **Secondary**. Then click **Save** at the top of the auditing configuration page.
![Screenshot that shows the process for selecting a secondary storage access key.](./media/auditing-overview/5_auditing_get_started_storage_key_regeneration.png)

2. Go to the storage configuration page and regenerate the primary access key.
azure-sql Connect Query Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-content-reference-guide.md
Previously updated : 05/29/2020 Last updated : 03/17/2021 # Azure SQL Database and Azure SQL Managed Instance connect and query articles [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
The following table lists examples of object-relational mapping (ORM) frameworks
- [Connect and query using Java](connect-query-java.md)
- [Connect and query using Python](connect-query-python.md)
- [Connect and query using Ruby](connect-query-ruby.md)
+ - [Install sqlcmd and bcp, the SQL Server command-line tools, on Linux](/sql/linux/sql-server-linux-setup-tools)
- For Linux users, try connecting to Azure SQL Database or Azure SQL Managed Instance using [sqlcmd](/sql/ssms/scripting/sqlcmd-use-the-utility).
- Retry logic code examples:
  - [Connect resiliently with ADO.NET][step-4-connect-resiliently-to-sql-with-ado-net-a78n]
  - [Connect resiliently with PHP][step-4-connect-resiliently-to-sql-with-php-p42h]
azure-sql Job Automation Managed Instances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/job-automation-managed-instances.md
SQL Agent Job steps are sequences of actions that SQL Agent should execute. Ever
SQL Agent enables you to create different types of job steps, such as Transact-SQL job steps that execute a single Transact-SQL batch against the database, or OS command/PowerShell steps that can execute custom OS script, [SSIS job steps](../../data-factory/how-to-invoke-ssis-package-managed-instance-agent.md) that enable you to load data using SSIS runtime, or [replication](../managed-instance/replication-transactional-overview.md) steps that can publish changes from your database to other databases.

> [!Note]
-> For more information on leveraging the Azure SSIS Integration Runtime with SSISDB hosted by Azure SQL Managed Instance, see [Use Azure SQL Managed Instance with SQL Server Integration Services (SSIS) in Azure Data Factory](/../azure/data-factory/how-to-use-sql-managed-instance-with-ir.md).
+> For more information on leveraging the Azure SSIS Integration Runtime with SSISDB hosted by Azure SQL Managed Instance, see [Use Azure SQL Managed Instance with SQL Server Integration Services (SSIS) in Azure Data Factory](../../data-factory/how-to-use-sql-managed-instance-with-ir.md).
[Transactional replication](../managed-instance/replication-transactional-overview.md) can replicate the changes from your tables into other databases in Azure SQL Managed Instance, Azure SQL Database, or SQL Server. For information, see [Configure replication in Azure SQL Managed Instance](../../azure-sql/managed-instance/replication-between-two-instances-configure-tutorial.md).
GRANT EXECUTE ON master.dbo.xp_sqlagent_notify TO [login_name];
- [What is Azure SQL Managed Instance?](../managed-instance/sql-managed-instance-paas-overview.md)
- [What's new in Azure SQL Database & SQL Managed Instance?](../../azure-sql/database/doc-changes-updates-release-notes.md?tabs=managed-instance)
- [Azure SQL Managed Instance T-SQL differences from SQL Server](../../azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md#sql-server-agent)
-- [Features comparison: Azure SQL Database and Azure SQL Managed Instance](../../azure-sql/database/features-comparison.md)
+- [Features comparison: Azure SQL Database and Azure SQL Managed Instance](../../azure-sql/database/features-comparison.md)
azure-sql Monitor Tune Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/monitor-tune-overview.md
Previously updated : 09/30/2020 Last updated : 03/17/2021 # Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
In the Azure portal, Azure SQL Database and Azure SQL Managed Instance provide m
> [!NOTE]
> Databases with extremely low usage may show in the portal with less than actual usage. Due to the way telemetry is emitted when converting a double value to the nearest integer, certain usage amounts less than 0.5 will be rounded to 0, which causes a loss in granularity of the emitted telemetry. For details, see [Low database and elastic pool metrics rounding to zero](#low-database-and-elastic-pool-metrics-rounding-to-zero).
+### Monitor with SQL insights
+
+[Azure Monitor SQL insights](../../azure-monitor/insights/sql-insights-overview.md) is a tool for monitoring Azure SQL managed instances, Azure SQL databases, and SQL Server instances in Azure SQL VMs. This service uses a remote agent to capture data from dynamic management views (DMVs) and routes the data to Azure Log Analytics, where it can be monitored and analyzed. You can view this data from [Azure Monitor](../../azure-monitor/overview.md) in provided views, or access the Log data directly to run queries and analyze trends. To start using Azure Monitor SQL insights, see [Enable SQL insights](../../azure-monitor/insights/sql-insights-enable.md).
+ ### Azure SQL Database and Azure SQL Managed Instance resource monitoring

You can quickly monitor a variety of resource metrics in the Azure portal in the **Metrics** view. These metrics enable you to see if a database is reaching 100% of processor, memory, or IO resources. High DTU or processor percentage, as well as high IO percentage, indicates that your workload might need more CPU or IO resources. It might also indicate queries that need to be optimized.
azure-sql Monitoring With Dmvs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/monitoring-with-dmvs.md
Previously updated : 1/14/2021 Last updated : 03/15/2021 # Monitoring Microsoft Azure SQL Database and Azure SQL Managed Instance performance using dynamic management views [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
Microsoft Azure SQL Database and Azure SQL Managed Instance partially support th
For detailed information on dynamic management views, see [Dynamic Management Views and Functions (Transact-SQL)](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views).
+## Monitor with SQL insights
+
+[Azure Monitor SQL insights](../../azure-monitor/insights/sql-insights-overview.md) is a tool for monitoring Azure SQL managed instances, Azure SQL databases, and SQL Server instances in Azure SQL VMs. This service uses a remote agent to capture data from dynamic management views (DMVs) and routes the data to Azure Log Analytics, where it can be monitored and analyzed. You can view this data from [Azure Monitor](../../azure-monitor/overview.md) in provided views, or access the Log data directly to run queries and analyze trends. To start using Azure Monitor SQL insights, see [Enable SQL insights](../../azure-monitor/insights/sql-insights-enable.md).
+ ## Permissions

In Azure SQL Database, querying a dynamic management view requires **VIEW DATABASE STATE** permissions. The **VIEW DATABASE STATE** permission returns information about all objects within the current database.
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
azure-sql Sql Database Vulnerability Assessment Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-vulnerability-assessment-rules.md
Previously updated : 12/14/2020 Last updated : 03/17/2021 # SQL Vulnerability Assessment rules reference guide
SQL Vulnerability Assessment rules have five categories, which are in the follow
|Rule ID |Rule Title |Rule Severity |Rule Description |Platform |
|---|---|---|---|---|
| VA1017 |Execute permissions on xp_cmdshell from all users (except dbo) should be revoked |High |The xp_cmdshell extended stored procedure spawns a Windows command shell, passing in a string for execution. This rule checks that no users (other than users with the CONTROL SERVER permission like members of the sysadmin server role) have permission to execute the xp_cmdshell extended stored procedure. |<nobr>SQL Server 2012+<sup>1</sup><nobr/> |
-|VA1020 |Database user GUEST should not be a member of any role |High |The guest user permits access to a database for any logins that are not mapped to a specific database user. This rule checks that no database roles are assigned to the Guest user. |<nobr>SQL Server 2012+<nobr/> |
+|VA1020 |Database user GUEST should not be a member of any role |High |The guest user permits access to a database for any logins that are not mapped to a specific database user. This rule checks that no database roles are assigned to the Guest user. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Database |
|VA1042 |Database ownership chaining should be disabled for all databases except for `master`, `msdb`, and `tempdb` |High |Cross database ownership chaining is an extension of ownership chaining, except it does cross the database boundary. This rule checks that this option is disabled for all databases except for `master`, `msdb`, and `tempdb`. For `master`, `msdb`, and `tempdb`, cross database ownership chaining is enabled by default. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA1043 |Principal GUEST should not have access to any user database |Medium |The guest user permits access to a database for any logins that are not mapped to a specific database user. This rule checks that the guest user cannot connect to any database. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA1046 |CHECK_POLICY should be enabled for all SQL logins |Low |CHECK_POLICY option enables verifying SQL logins against the domain policy. This rule checks that CHECK_POLICY option is enabled for all SQL logins. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
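As an illustration of what a VA1020-style check inspects (a sketch of an equivalent catalog query, not the exact query the assessment service runs), you can list the database roles assigned to the guest user yourself:

```sql
-- Illustrative only: list database roles assigned to the guest user.
-- A VA1020-compliant database returns no rows here.
SELECT r.name AS role_name
FROM sys.database_role_members AS rm
JOIN sys.database_principals AS r
    ON rm.role_principal_id = r.principal_id
JOIN sys.database_principals AS m
    ON rm.member_principal_id = m.principal_id
WHERE m.name = 'guest';
```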
azure-sql Machine Learning Services Differences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/machine-learning-services-differences.md
Title: Key differences for Machine Learning Services (preview)
+ Title: Key differences for Machine Learning Services
description: This article describes key differences between Machine Learning Services in Azure SQL Managed Instance and SQL Server Machine Learning Services.
Previously updated : 10/26/2020 Last updated : 03/17/2021 # Key differences between Machine Learning Services in Azure SQL Managed Instance and SQL Server
-The functionality of [Machine Learning Services in Azure SQL Managed Instance (preview)](machine-learning-services-overview.md) is nearly identical to [SQL Server Machine Learning Services](/sql/advanced-analytics/what-is-sql-server-machine-learning). Following are some key differences.
+This article describes the key differences in functionality between [Machine Learning Services in Azure SQL Managed Instance](machine-learning-services-overview.md) and [SQL Server Machine Learning Services](/sql/advanced-analytics/what-is-sql-server-machine-learning).
-> [!IMPORTANT]
-> Machine Learning Services in Azure SQL Managed Instance is currently in public preview. To sign up, see [Sign up for the preview](machine-learning-services-overview.md#signup).
-
-## Preview limitations
+## Language support
-During the preview, the service has the following limitations:
+Machine Learning Services in both SQL Managed Instance and SQL Server support the Python and R [extensibility framework](/sql/machine-learning/concepts/extensibility-framework). The key differences in SQL Managed Instance are:
-- Loopback connections do not work (see [Loopback connection to SQL Server from a Python or R script](/sql/machine-learning/connect/loopback-connection)).
-- External Resource pools are not supported.
-- Only Python and R are supported. External languages such as Java cannot be added.
-- Scenarios using the [Message Passing Interface](/message-passing-interface/microsoft-mpi) (MPI) are not supported.
-In case of a Service Level Objective (SLO) update, update the SLO and raise a support ticket to re-enable the dedicated resource limits for R/Python.
+- The initial versions of Python and R are different:
-## Language support
+ | Platform | Python runtime version | R runtime versions |
+ |-|-|--|
+ | Azure SQL Managed Instance | 3.7.2 | 3.5.2 |
+ | SQL Server 2019 | 3.7.1 | 3.5.2 |
+ | SQL Server 2017 | 3.5.2 and 3.7.2 (CU22 and later) | 3.3.3 and 3.5.2 (CU22 and later) |
+ | SQL Server 2016 | Not available | 3.2.2 and 3.5.2 (SP2 CU14 and later) |
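To confirm which runtime versions are installed on a given instance (a sketch that assumes extensibility is already enabled), you can run a one-line external script per language:

```sql
-- Print the installed Python runtime version
EXECUTE sp_execute_external_script
    @language = N'Python',
    @script = N'import sys; print(sys.version)';

-- Print the installed R runtime version
EXECUTE sp_execute_external_script
    @language = N'R',
    @script = N'print(R.version.string)';
```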
-Machine Learning Services in SQL Managed Instance and SQL Server support both Python and R [extensibility framework](/sql/advanced-analytics/concepts/extensibility-framework). The key differences are:
+## Python and R Packages
-- The initial versions of Python and R are different between Machine Learning Services in SQL Managed Instance and SQL Server:
+There is no support in SQL Managed Instance for packages that depend on external runtimes (like Java) or need access to OS APIs for installation or usage.
- | System | Python | R |
- |-|--|-|
- | SQL Managed Instance | 3.7.1 | 3.5.2 |
- | SQL Server | 3.5.2 | 3.3.3 |
+For more information about managing Python and R packages, see:
-- There is no need to configure `external scripts enabled` via `sp_configure`. Once you are [signed up](machine-learning-services-overview.md#signup) for the preview, machine learning is enabled for Azure SQL Managed Instance.
+- [Get Python package information](/sql/machine-learning/package-management/python-package-information?context=/azure/azure-sql/managed-instance/context/ml-context&view=azuresqldb-mi-current&preserve-view=true)
+- [Get R package information](/sql/machine-learning/package-management/r-package-information?context=/azure/azure-sql/managed-instance/context/ml-context&view=azuresqldb-mi-current&preserve-view=true)
-## Packages
+## Resource governance
-Python and R package management work differently between SQL Managed Instance and SQL Server. These differences are:
+In SQL Managed Instance, it's not possible to limit R resources through [Resource Governor](/sql/relational-databases/resource-governor/resource-governor?view=azuresqldb-mi-current&preserve-view=true), and external resource pools are not supported.
-- There is no support for packages that depend on external runtimes (like Java) or need access to OS APIs for installation or usage.
-- Packages can perform outbound network calls (change from earlier in the preview). You can set the right outbound security rules at the [Network Security Group](../../virtual-network/network-security-groups-overview.md) level to enable outbound network calls.
+By default, R resources are set to a maximum of 20% of the available SQL Managed Instance resources when extensibility is enabled. To change this default percentage, create an Azure support ticket at [https://azure.microsoft.com/support/create-ticket/](https://azure.microsoft.com/support/create-ticket/).
-For more information about managing Python and R packages, see:
+Extensibility is enabled with the following SQL commands (SQL Managed Instance will restart and be unavailable for a few seconds):
-- [Get Python package information](/sql/machine-learning/package-management/python-package-information?context=/azure/azure-sql/managed-instance/context/ml-context&view=azuresqldb-mi-current&preserve-view=true)
-- [Get R package information](/sql/machine-learning/package-management/r-package-information?context=/azure/azure-sql/managed-instance/context/ml-context&view=azuresqldb-mi-current&preserve-view=true)
+```sql
+sp_configure 'external scripts enabled', 1;
+RECONFIGURE WITH OVERRIDE;
+```
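To confirm the setting took effect, a quick check (assuming permission to view server configurations) is to query `sys.configurations`:

```sql
-- Verify that external scripts are enabled; value_in_use should be 1
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'external scripts enabled';
```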
-## Resource governance
+To disable extensibility and restore 100% of memory and CPU resources to SQL Server, use the following commands:
-It is not possible to limit R resources through [Resource Governor](/sql/relational-databases/resource-governor/resource-governor) and external resource pools.
+```sql
+sp_configure 'external scripts enabled', 0;
+RECONFIGURE WITH OVERRIDE;
+```
-During the public preview, R resources are set to a maximum of 20% of the SQL Managed Instance resources, and depend on which service tier you choose. For more information, see [Azure SQL Database purchasing models](../database/purchasing-models.md).
+The total resources available to SQL Managed Instance depend on which service tier you choose. For more information, see [Azure SQL Database purchasing models](../database/purchasing-models.md).
### Insufficient memory error
-If there is insufficient memory available for R, you will get an error message. Common error messages are:
+Memory usage depends on how much is used in your R scripts and the number of parallel queries being executed. If there is insufficient memory available for R, you'll get an error message. Common error messages are:
- `Unable to communicate with the runtime for 'R' script for request id: *******. Please check the requirements of 'R' runtime`
- `'R' script error occurred during execution of 'sp_execute_external_script' with HRESULT 0x80004004. ...an external script error occurred: "..could not allocate memory (0 Mb) in C function 'R_AllocStringBuffer'"`
- `An external script error occurred: Error: cannot allocate vector of size.`
-Memory usage depends on how much is used in your R scripts and the number of parallel queries being executed. If you receive the errors above, you can scale your database to a higher service tier to resolve this.
+If you receive one of these errors, you can resolve it by scaling your database to a higher service tier.
+
+## SQL Managed Instance pools
+
+Machine Learning Services is currently not supported on [Azure SQL Managed Instance pools (preview)](instance-pools-overview.md).
## Next steps

- See the overview, [Machine Learning Services in Azure SQL Managed Instance](machine-learning-services-overview.md).
- To learn how to use Python in Machine Learning Services, see [Run Python scripts](/sql/machine-learning/tutorials/quickstart-python-create-script?context=/azure/azure-sql/managed-instance/context/ml-context&view=azuresqldb-mi-current&preserve-view=true).
-- To learn how to use R in Machine Learning Services, see [Run R scripts](/sql/machine-learning/tutorials/quickstart-r-create-script?context=/azure/azure-sql/managed-instance/context/ml-context&view=azuresqldb-mi-current&preserve-view=true).
+- To learn how to use R in Machine Learning Services, see [Run R scripts](/sql/machine-learning/tutorials/quickstart-r-create-script?context=/azure/azure-sql/managed-instance/context/ml-context&view=azuresqldb-mi-current&preserve-view=true).
azure-sql Machine Learning Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/machine-learning-services-overview.md
Title: Machine Learning Services in Azure SQL Managed Instance (preview)
+ Title: Machine Learning Services in Azure SQL Managed Instance
description: This article provides an overview of Machine Learning Services in Azure SQL Managed Instance.
Previously updated : 06/03/2020 Last updated : 03/17/2021
-# Machine Learning Services in Azure SQL Managed Instance (preview)
+# Machine Learning Services in Azure SQL Managed Instance
-Machine Learning Services is a feature of Azure SQL Managed Instance (preview) that provides in-database machine learning, supporting both Python and R scripts. The feature includes Microsoft Python and R packages for high-performance predictive analytics and machine learning. The relational data can be used in scripts through stored procedures, T-SQL script containing Python or R statements, or Python or R code containing T-SQL.
-
-> [!IMPORTANT]
-> Machine Learning Services is a feature of Azure SQL Managed Instance that's currently in public preview.
-> This preview functionality is initially available in a limited number of regions in the US, Asia Europe, and Australia with additional regions being added later.
->
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> [Sign up for the preview](#signup) below.
+Machine Learning Services is a feature of Azure SQL Managed Instance that provides in-database machine learning, supporting both Python and R scripts. The feature includes Microsoft Python and R packages for high-performance predictive analytics and machine learning. The relational data can be used in scripts through stored procedures, T-SQL script containing Python or R statements, or Python or R code containing T-SQL.
## What is Machine Learning Services?
Use Machine Learning Services with R/Python support in Azure SQL Managed Instanc
- **Deploy your models and scripts into production in stored procedures** - The scripts and trained models can be operationalized simply by embedding them in T-SQL stored procedures. Apps connecting to Azure SQL Managed Instance can benefit from predictions and intelligence in these models by just calling a stored procedure. You can also use the native T-SQL PREDICT function to operationalize models for fast scoring in highly concurrent real-time scoring scenarios.
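As a hedged sketch of the native scoring path (the table, column, and model names below are hypothetical; `PREDICT` requires a model serialized in a supported format, for example with RevoScaleR's `rxSerializeModel`), the T-SQL `PREDICT` function scores rows against a stored model:

```sql
-- Hypothetical names for illustration: dbo.models, dbo.new_orders, predicted_total
DECLARE @model varbinary(max) = (
    SELECT model_data FROM dbo.models WHERE model_name = 'sales_forecast'
);

SELECT d.*, p.predicted_total
FROM PREDICT(MODEL = @model, DATA = dbo.new_orders AS d)
WITH (predicted_total float) AS p;
```

Because `PREDICT` runs inside the database engine, no external runtime is invoked per call, which is what makes it suitable for highly concurrent real-time scoring.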
-Base distributions of Python and R are included in Machine Learning Services. You can install and use open-source packages and frameworks, such as PyTorch, TensorFlow, and scikit-learn, in addition to the Microsoft packages [revoscalepy](/sql/advanced-analytics/python/ref-py-revoscalepy) and [microsoftml](/sql/advanced-analytics/python/ref-py-microsoftml) for Python, and [RevoScaleR](/sql/advanced-analytics/r/ref-r-revoscaler), [MicrosoftML](/sql/advanced-analytics/r/ref-r-microsoftml), [olapR](/sql/advanced-analytics/r/ref-r-olapr), and [sqlrutils](/sql/advanced-analytics/r/ref-r-sqlrutils) for R.
-
-<a name="signup"></a>
-
-## Sign up for the preview
-
-This limited public preview is subject to the [Azure preview terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-If you're interested in joining the preview program and accept these terms, then you can request enrollment by creating an Azure support ticket at [**https://azure.microsoft.com/support/create-ticket/**](https://azure.microsoft.com/support/create-ticket/).
-
-1. On the **Create a support ticket** page, click **Create an Incident**.
-
-1. On the **Help + support** page, click **New support request** to create a new ticket.
-
-1. Select the following options:
- - Issue type - **Technical**
- - Subscription - *select your subscription*
- - Service - **SQL Managed Instance**
- - Resource - *select your managed instance*
- - Summary - *enter a brief description of your request*
- - Problem type - **Machine Learning Services for SQL Managed Instance (Preview)**
- - Problem subtype - **Other issue or "How To" questions**
-
-1. Click **Next: Solutions**.
-
-1. Read the information about the preview, and then click **Next: Details**.
+Base distributions of Python and R are included in Machine Learning Services. You can install and use open-source packages and frameworks, such as PyTorch, TensorFlow, and scikit-learn, in addition to the Microsoft packages [revoscalepy](/sql/machine-learning/python/ref-py-revoscalepy) and [microsoftml](/sql/machine-learning/python/ref-py-microsoftml) for Python, and [RevoScaleR](/sql/machine-learning/r/ref-r-revoscaler), [MicrosoftML](/sql/machine-learning/r/ref-r-microsoftml), [olapR](/sql/machine-learning/r/ref-r-olapr), and [sqlrutils](/sql/machine-learning/r/ref-r-sqlrutils) for R.
-1. On this page:
- - For the question **Are you trying to sign up for the Preview?**, select **Yes**.
- - For **Description**, enter the specifics of your request, including the logical server name, region, and subscription ID that you want to enroll in the preview. Enter other details as appropriate.
- - Select your preferred contact method.
+## How to enable Machine Learning Services
-1. When you're finished, click **Next: Review + create**, and then click **Create**.
+You can enable Machine Learning Services in Azure SQL Managed Instance by enabling extensibility with the following SQL commands (SQL Managed Instance will restart and be unavailable for a few seconds):
-Once you're enrolled in the program, Microsoft will onboard you to the public preview and enable Machine Learning Services for your existing or new database.
+```sql
+sp_configure 'external scripts enabled', 1;
+RECONFIGURE WITH OVERRIDE;
+```
-Machine Learning Services in SQL Managed Instance is not recommended for production workloads during the public preview.
+For details on how this command affects SQL Managed Instance resources, see [Resource Governance](machine-learning-services-differences.md#resource-governance).
## Next steps

- See the [key differences from SQL Server Machine Learning Services](machine-learning-services-differences.md).
-- To learn how to use Python in Machine Learning Services, see [Run Python scripts](/sql/machine-learning/tutorials/quickstart-python-create-script?context=%2fazure%2fazure-sql%2fmanaged-instance%2fcontext%2fml-context&view=sql-server-ver15).
-- To learn how to use R in Machine Learning Services, see [Run R scripts](/sql/machine-learning/tutorials/quickstart-r-create-script?context=%2fazure%2fazure-sql%2fmanaged-instance%2fcontext%2fml-context&view=sql-server-ver15).
-- For more information about machine learning on other SQL platforms, see the [SQL machine learning documentation](/sql/machine-learning/).
+- To learn how to use Python in Machine Learning Services, see [Run Python scripts](/sql/machine-learning/tutorials/quickstart-python-create-script?context=/azure/azure-sql/managed-instance/context/ml-context&view=azuresqldb-mi-current&preserve-view=true).
+- To learn how to use R in Machine Learning Services, see [Run R scripts](/sql/machine-learning/tutorials/quickstart-r-create-script?context=/azure/azure-sql/managed-instance/context/ml-context&view=azuresqldb-mi-current&preserve-view=true).
+- For more information about machine learning on other SQL platforms, see the [SQL machine learning documentation](/sql/machine-learning/index).
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
Previously updated : 08/14/2020 Last updated : 01/14/2021 # What is Azure SQL Managed Instance?
The [vCore-based purchasing model](../database/service-tiers-vcore.md) for SQL M
In the vCore model, you can choose between generations of hardware.

-- **Gen4** logical CPUs are based on Intel® E5-2673 v3 (Haswell) 2.4 GHz processors, attached SSD, physical cores, 7-GB RAM per core, and compute sizes between 8 and 24 vCores.
-- **Gen5** logical CPUs are based on Intel® E5-2673 v4 (Broadwell) 2.3 GHz, Intel® SP-8160 (Skylake), and Intel® 8272CL (Cascade Lake) 2.5 GHz processors, fast NVMe SSD, hyper-threaded logical core, and compute sizes between 4 and 80 cores.
+- **Gen4** logical CPUs are based on Intel&reg; E5-2673 v3 (Haswell) 2.4 GHz processors, attached SSD, physical cores, 7-GB RAM per core, and compute sizes between 8 and 24 vCores.
+- **Gen5** logical CPUs are based on Intel&reg; E5-2673 v4 (Broadwell) 2.3 GHz, Intel&reg; SP-8160 (Skylake), and Intel&reg; 8272CL (Cascade Lake) 2.5 GHz processors, fast NVMe SSD, hyper-threaded logical core, and compute sizes between 4 and 80 cores.
Find more information about the difference between hardware generations in [SQL Managed Instance resource limits](resource-limits.md#hardware-generation-characteristics).
azure-sql Storage Migrate To Ultradisk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/storage-migrate-to-ultradisk.md
To enable compatibility, follow these steps:
:::image type="content" source="media/storage-migrate-to-ultradisk/additional-disks-settings-azure-portal.png" alt-text="Select additional settings for Disks under Settings in the Azure portal":::
-1. Select **Yes** to **Enable Ultra Disk compatibility**.
+1. Select **Yes** to **Enable Ultra disk compatibility**.
- :::image type="content" source="../../../virtual-machines/media/virtual-machines-disks-getting-started-ultra-ssd/ultra-options-yes-enable.png" alt-text="Screenshot that shows the Yes option.":::
+ :::image type="content" source="../../../virtual-machines/media/virtual-machines-disks-getting-started-ultra-ssd/enable-ultra-disks-existing-vm.png" alt-text="Screenshot that shows the Yes option.":::
1. Select **Save**.
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Last updated 03/16/2021
# Platform updates for Azure VMware Solution
-Important updates to Azure VMware Solution will be applied starting in March 2021. You'll receive notification through Azure Service Health that includes the timeline of the maintenance. In this article, you'll learn what to expect during this maintenance operation and changes to your private cloud.
+Important updates to Azure VMware Solution will be applied starting in March 2021. You'll receive notification through Azure Service Health that includes the timeline of the maintenance. For more details about the key upgrade processes and features in Azure VMware Solution, see [Azure VMware Solution private cloud updates and upgrades](concepts-upgrades.md).
## March 15, 2021

-- Azure VMware Solution service will be performing maintenance work to update vCenter server in your private cloud to vCenter Server 6.7 Update 3l version through March 19, 2021.
+- Azure VMware Solution service will perform maintenance work through March 19, 2021, to update vCenter Server in your private cloud to the vCenter Server 6.7 Update 3l version.
-- During this time, VMware vCenter will be unavailable, and you won't be able to manage VMs (stop, start, create, delete). VMware High Availability (HA) will continue to operate to provide protection for existing VMs. Private cloud scaling (adding/removing servers and clusters) will also be unavailable.
+- During this time, VMware vCenter will be unavailable, and you won't be able to manage VMs (stop, start, create, delete). Private cloud scaling (adding/removing servers and clusters) will also be unavailable. VMware High Availability (HA) will continue to operate to provide protection for existing VMs.
For more information on this vCenter version, see [VMware vCenter Server 6.7 Update 3l Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3l-release-notes.html).

## March 4, 2021

-- Azure VMware Solutions will apply patches to ESXi in existing private clouds to [VMware ESXi 6.7, Patch Release ESXi670-202011002](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202011002.html) through March 15, 2021.
+- Azure VMware Solution will apply patches through March 15, 2021, to ESXi in existing private clouds to [VMware ESXi 6.7, Patch Release ESXi670-202011002](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202011002.html).
- Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://www.vmware.com/security/advisories/VMSA-2021-0002.html), will also be applied through March 15, 2021.
Once complete, newer versions of VMware components appear. If you notice any iss
++
azure-vmware Concepts Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-upgrades.md
Title: Concepts - Private cloud updates and upgrades description: Learn about the key upgrade processes and features in Azure VMware Solution. Previously updated : 02/16/2021 Last updated : 03/17/2021 # Azure VMware Solution private cloud updates and upgrades
Azure VMware Solution also takes a configuration backup of the following VMware
At times of failure, Azure VMware Solution can restore these components from the configuration backup.
-For more information on VMware software versions, see the [private clouds and clusters concept article](concepts-private-clouds-clusters.md) and the [FAQ](faq.yml).
+## VMware software versions
+ ## Next steps
azure-vmware Ecosystem Back Up Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/ecosystem-back-up-vms.md
Title: Backup solutions for Azure VMware Solution virtual machines description: Learn about leading backup and restore solutions for your Azure VMware Solution virtual machines. Previously updated : 01/11/2021 Last updated : 03/17/2021 # Backup solutions for Azure VMware Solution virtual machines (VMs)
You can find more information on these backup solutions here:
- [Veritas](https://vrt.as/nb4avs)
- [Veeam](https://www.veeam.com/kb4012)
- [Cohesity](https://www.cohesity.com/resource-assets/solution-brief/Cohesity-Azure-Solution-Brief.pdf)
+- [Dell Technologies](https://www.delltechnologies.com/resources/en-us/asset/briefs-handouts/solutions/dell-emc-data-protection-for-avs.pdf)
azure-vmware Production Ready Deployment Steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/production-ready-deployment-steps.md
For more information, see the [Azure VMware Solution private cloud and clusters]
>[!TIP]
>You can always extend the cluster later if you need to go beyond the initial deployment number.
-## vCenter admin password
-Define the vCenter admin password. During the deployment, you'll create a vCenter admin password. The password is assigned to the cloudadmin@vsphere.local admin account during the vCenter build. You'll use these credentials to sign in to vCenter.
-
-## NSX-T admin password
-Define the NSX-T admin password. During the deployment, you'll create an NSX-T admin password. The password is assigned to the admin user in the NSX account during the NSX build. You'll use these credentials to sign in to NSX-T Manager.
## IP address segment for private cloud management

The first step in planning the deployment is to plan out the IP segmentation. Azure VMware Solution requires a /22 CIDR network. This address space is carved up into smaller network segments (subnets) used for vCenter, VMware HCX, NSX-T, and vMotion functionality.
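As a planning aid, the /22 carve-up can be sketched with Python's standard `ipaddress` module. The `10.10.0.0/22` block and the even four-way split are illustrative assumptions only; your actual address range and per-function subnet sizes will differ:

```python
import ipaddress

# Example /22 management block (assumption; use your own non-overlapping range).
block = ipaddress.ip_network("10.10.0.0/22")

# Carve the /22 into four /24 segments, e.g. one each for
# vCenter, VMware HCX, NSX-T, and vMotion.
segments = dict(zip(["vCenter", "HCX", "NSX-T", "vMotion"],
                    block.subnets(new_prefix=24)))

for name, subnet in segments.items():
    print(f"{name}: {subnet} ({subnet.num_addresses} addresses)")
```

The main point the sketch makes is that a /22 provides 1,024 addresses total, so the segments you plan must fit inside it without overlapping.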
azure-vmware Reset Vsphere Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reset-vsphere-credentials.md
Last updated 03/16/2021
# Reset vSphere credentials for Azure VMware Solution
-In this article, we'll walk through the steps to reset the vSphere credentials for your Azure VMware Solution private cloud. This will allow you to ensure the HCX connector has the latest vSphere credentials.
+In this article, we'll walk through the steps to reset the vCenter Server and NSX-T Manager credentials for your Azure VMware Solution private cloud. This will allow you to ensure the HCX connector has the latest vCenter Server credentials.
-## Reset your vSphere credentials
+## Reset your Azure VMware Solution credentials
- First let's reset your vSphere credentials. Your vCenter CloudAdmin and NSX-T admin credentials don't expire; however, you can follow these steps to generate new passwords for these accounts.
+ First let's reset your Azure VMware Solution component credentials. Your vCenter Server CloudAdmin and NSX-T admin credentials don't expire; however, you can follow these steps to generate new passwords for these accounts.
> [!NOTE]
-> If you use your CloudAdmin credentials for connected services like HCX, vCenter Orchestrator, vCloud Director, or vRealize, your connections will stop working once you update your password. These services should be stopped before initiating the password rotation. Failure to do so may result in temporary locks on your vCenter CloudAdmin and NSX-T admin accounts, as these services will continuously call using your old credentials. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](https://docs.microsoft.com/azure/azure-vmware/concepts-identity).
+> If you use your CloudAdmin credentials for connected services like HCX, vRealize Orchestrator, vRealize Operations Manager, or VMware Horizon, your connections will stop working once you update your password. These services should be stopped before initiating the password rotation. Failure to do so may result in temporary locks on your vCenter CloudAdmin and NSX-T admin accounts, as these services will continuously call using your old credentials. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](https://docs.microsoft.com/azure/azure-vmware/concepts-identity).
-1. In your Azure VMware Solutions portal, open a command line.
+1. From the Azure portal, open an Azure Cloud Shell session.
2. Run the following command to update your vCenter CloudAdmin password. You will need to replace {SubscriptionID}, {ResourceGroup}, and {PrivateCloudName} with the actual values of the private cloud that the CloudAdmin account belongs to.
az resource invoke-action --action rotateVcenterPassword --ids "/subscriptions/{
az resource invoke-action --action rotateNSXTPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview"
```
-## Ensure the HCX connector has your latest vSphere credentials
+## Ensure the HCX connector has your latest vCenter Server credentials
Now that you've reset your credentials, follow these steps to ensure the HCX connector has your updated credentials.
Now that you've reset your credentials, follow these steps to ensure the HCX con
:::image type="content" source="media/reset-vsphere-credentials/hcx-site-pairing.png" alt-text="Screenshot of VMware HCX Dashboard with Site Pairing highlighted.":::
-3. Select the correct connection to AVS (if there is more than one) and select **Edit Connection**.
+3. Select the correct connection to Azure VMware Solution (if there is more than one) and select **Edit Connection**.
-4. Provide the new vSphere credentials and select **Edit**, which saves the credentials. Save should show successful.
+4. Provide the new vCenter Server CloudAdmin user credentials and select **Edit**, which saves the credentials. Save should show successful.
## Next steps
-Now that you've covered resetting vSphere credentials for Azure VMware Solution, you may want to learn about:
+Now that you've covered resetting vCenter Server and NSX-T Manager credentials for Azure VMware Solution, you may want to learn about:
- [Configuring NSX network components in Azure VMware Solution](configure-nsx-network-components-azure-portal.md). - [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
backup Backup Azure Backup Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-backup-faq.md
Yes. To move a subscription (that contains a vault) to a different Azure Active
>[!IMPORTANT]
>Ensure that you perform the following actions after moving the subscription:<ul><li>Role-based access control permissions and custom roles are not transferable. You must recreate the permissions and roles in the new Azure AD.</li><li>You must recreate the Managed Identity (MI) of the vault by disabling and enabling it again. Also, you must evaluate and recreate the MI permissions.</li><li>If the vault uses features which leverage MI, such as [Private Endpoints](private-endpoints.md#before-you-start) and [Customer Managed Keys](encryption-at-rest-with-cmk.md#before-you-start), you must reconfigure the features.</li></ul>
+### Can I move a subscription that contains a Recovery Services Vault to a different tenant?
+
+Yes. Ensure that you do the following:
+
+>[!IMPORTANT]
+>Ensure that you perform the following actions after moving the subscription:<ul><li>If the vault uses CMK (customer managed keys), you must update the vault. This enables the vault to recreate and reconfigure the vault managed identity and CMK (which will reside in the new tenant), otherwise the backup/restore operations will fail.</li><li>You must reconfigure the RBAC permissions in the subscription as the existing permissions can't be moved.</li></ul>
+ ## Azure Backup agent ### Where can I find common questions about the Azure Backup agent for Azure VM backup?
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
backup Tutorial Backup Sap Hana Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-backup-sap-hana-db.md
The command output should display the {SID}{DBNAME} key, with the user shown as
>[!NOTE]
> Make sure you have a unique set of SSFS files under `/usr/sap/{SID}/home/.hdb/`. There should be only one folder in this path.
+Here is a summary of the steps required to complete the pre-registration script run.
+
|Who |From |What to run |Comments |
|---|---|---|---|
|`<sid>`adm (OS) | HANA OS | Read the tutorial and download the pre-registration script | Read the [prerequisites above](#prerequisites) and download the pre-registration script from [here](https://aka.ms/scriptforpermsonhana) |
|`<sid>`adm (OS) and SYSTEM user (HANA) | HANA OS | Run the hdbuserstore Set command | For example: `hdbuserstore Set SYSTEM <hostname>:3<Instance#>13 SYSTEM <password>` **Note:** Make sure to use the hostname instead of the IP address or FQDN |
|`<sid>`adm (OS) | HANA OS | Run the hdbuserstore List command | Check that the result includes the default store, such as: `KEY SYSTEM ENV : <hostname>:3<Instance#>13 USER: SYSTEM` |
|Root (OS) | HANA OS | Run the Azure Backup HANA pre-registration script | `./msawb-plugin-config-com-sap-hana.sh -a --sid <SID> -n <Instance#> --system-key SYSTEM` |
|`<sid>`adm (OS) | HANA OS | Run the hdbuserstore List command | Check that the result includes the new entry, such as: `KEY AZUREWLBACKUPHANAUSER ENV : localhost:3<Instance#>13 USER: AZUREWLBACKUPHANAUSER` |
+
+After running the pre-registration script successfully and verifying the results, you can proceed to check [the connectivity requirements](#set-up-network-connectivity) and then [configure backup](#discover-the-databases) from the Recovery Services vault.
+ ## Create a Recovery Services vault A Recovery Services vault is an entity that stores the backups and recovery points created over time. The Recovery Services vault also contains the backup policies that are associated with the protected virtual machines.
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/best-practices.md
This article discusses a collection of best practices and useful tips for using
### Pool configuration and naming -- **Pool allocation mode**
- When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable an important, but small subset of scenarios. You can read more about user subscription mode at [Additional configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode).
+- **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable an important, but small subset of scenarios. You can read more about user subscription mode at [Additional configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode).
-- **'virtualMachineConfiguration' or 'cloudServiceConfiguration'.**
+- **'virtualMachineConfiguration' or 'cloudServiceConfiguration':**
While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md). -- **Consider job and task run time when determining job to pool mapping.**
- If you have jobs comprised primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job is not long, do not allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job.
+- **Consider job and task run time when determining job to pool mapping:** If you have jobs comprised primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job is not long, do not allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job.
-- **Pools should have more than one compute node.**
- Individual nodes are not guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes.
+- **Pools should have more than one compute node:** Individual nodes are not guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes.
-- **Do not reuse resource names.**
- Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. This can be done by using a GUID (either as the entire resource name, or as a part of it) or embedding the time the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can be used to give a resource a human readable name even if the actual resource ID is something that isn't that human friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
+- **Do not reuse resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. This can be done by using a GUID (either as the entire resource name, or as a part of it) or embedding the time the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can be used to give a resource a human readable name even if the actual resource ID is something that isn't that human friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
-- **Continuity during pool maintenance and failure.**
- It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that your jobs won't run if something goes wrong with the pool. This is especially important for time-sensitive workloads. To fix this, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool.
+- **Continuity during pool maintenance and failure:** It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that your jobs won't run if something goes wrong with the pool. This is especially important for time-sensitive workloads. To fix this, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool.
-- **Business continuity during pool maintenance and failure**
- There are many possible causes that may prevent a pool from growing to the required size you desire, such as internal errors, capacity constraints, etc. For this reason, you should be ready to retarget jobs at a different pool (possibly with a different VM size - Batch supports this via [UpdateJob](/dotnet/api/microsoft.azure.batch.protocol.joboperationsextensions.update)) if necessary. Avoid using a static pool ID with the expectation that it will never be deleted and never change.
+- **Business continuity during pool maintenance and failure:** There are many reasons why a pool may not grow to the size you desire, such as internal errors, capacity constraints, etc. For this reason, you should be ready to retarget jobs at a different pool (possibly with a different VM size - Batch supports this via [UpdateJob](/dotnet/api/microsoft.azure.batch.protocol.joboperationsextensions.update)) if necessary. Avoid using a static pool ID with the expectation that it will never be deleted and never change.
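The "do not reuse resource names" guidance above can be sketched as a small helper that embeds both a creation timestamp and a GUID fragment in each resource ID. The `render` prefix and exact ID shape are illustrative assumptions, not a Batch convention:

```python
import uuid
from datetime import datetime, timezone

def unique_resource_id(prefix: str) -> str:
    """Build a unique Batch resource ID (pool, job, ...) by embedding the
    UTC creation time and a short GUID fragment after a readable prefix."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{prefix}-{stamp}-{uuid.uuid4().hex[:8]}"

print(unique_resource_id("render"))  # e.g. render-20210317T120000Z-1a2b3c4d
```

Batch IDs allow letters, digits, hyphens, and underscores up to 64 characters, so keep the prefix short and put the human-friendly description in `DisplayName` instead.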
### Pool lifetime and billing Pool lifetime can vary depending upon the method of allocation and options applied to the pool configuration. Pools can have an arbitrary lifetime and a varying number of compute nodes in the pool at any point in time. It's your responsibility to manage the compute nodes in the pool either explicitly, or through features provided by the service ([autoscale](nodes-and-pools.md#automatic-scaling-policy) or [autopool](nodes-and-pools.md#autopools)). -- **Keep pools fresh.**
- Resize your pools to zero every few months to ensure you get the [latest node agent updates and bug fixes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md). Your pool won't receive node agent updates unless it's recreated, or resized to 0 compute nodes. Before you recreate or resize your pool, it's recommended to download any node agent logs for debugging purposes, as discussed in the [Nodes](#nodes) section.
+- **Keep pools fresh:** Resize your pools to zero every few months to ensure you get the [latest node agent updates and bug fixes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md). Your pool won't receive node agent updates unless it's recreated, or resized to 0 compute nodes. Before you recreate or resize your pool, it's recommended to download any node agent logs for debugging purposes, as discussed in the [Nodes](#nodes) section.
-- **Pool re-creation**
- On a similar note, it's not recommended to delete and re-create your pools on a daily basis. Instead, create a new pool, update your existing jobs to point to the new pool. Once all of the tasks have been moved to the new pool, then delete the old pool.
+- **Pool re-creation:** On a similar note, it's not recommended to delete and re-create your pools on a daily basis. Instead, create a new pool, update your existing jobs to point to the new pool. Once all of the tasks have been moved to the new pool, then delete the old pool.
-- **Pool efficiency and billing**
- Batch itself incurs no extra charges, but you do incur charges for the compute resources used. You're billed for every compute node in the pool, regardless of the state it's in. This includes any charges required for the node to run such as storage and networking costs. To learn more best practices, see [Cost analysis and budgets for Azure Batch](budget.md).
+- **Pool efficiency and billing:** Batch itself incurs no extra charges, but you do incur charges for the compute resources used. You're billed for every compute node in the pool, regardless of the state it's in. This includes any charges required for the node to run such as storage and networking costs. To learn more best practices, see [Cost analysis and budgets for Azure Batch](budget.md).
### Pool allocation failures
Pools can be created using third-party images published to Azure Marketplace. Wi
### Azure region dependency
-It's advised to not depend on a single Azure region if you have a time-sensitive or production workload. While rare, there are issues that can affect an entire region. For example, if your processing needs to start at a specific time, consider scaling up the pool in your primary region *well before your start time*. If that pool scale fails, you can fall back to scaling up a pool in a backup region (or regions). Pools across multiple accounts in different regions provide a ready, easily accessible backup if something goes wrong with another pool. For more information, see [Design your application for high availability](high-availability-disaster-recovery.md).
+You shouldn't rely on a single Azure region if you have a time-sensitive or production workload. While rare, there are issues that can affect an entire region. For example, if your processing needs to start at a specific time, consider scaling up the pool in your primary region *well before your start time*. If that pool scale fails, you can fall back to scaling up a pool in a backup region (or regions). Pools across multiple accounts in different regions provide a ready, easily accessible backup if something goes wrong with another pool. For more information, see [Design your application for high availability](high-availability-disaster-recovery.md).
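The fall-back-to-a-backup-region pattern described above is plain control flow; a minimal sketch, where `try_scale` is a hypothetical callable wrapping your actual Batch pool resize call and reporting success:

```python
def scale_with_fallback(pool_ids, target_nodes, try_scale):
    """Try to scale pools in preference order (primary region first) and
    return the ID of the first pool that reaches the target size.

    try_scale(pool_id, target_nodes) -> bool is a hypothetical wrapper
    around a Batch pool resize operation.
    """
    for pool_id in pool_ids:
        if try_scale(pool_id, target_nodes):
            return pool_id
    raise RuntimeError("No pool in any region could be scaled")

# Example with a stand-in try_scale that fails for the primary region:
chosen = scale_with_fallback(
    ["pool-eastus", "pool-westus"], 100,
    lambda pool_id, n: pool_id != "pool-eastus")
print(chosen)  # pool-westus
```

In production the callable would resize a real pool and poll its allocation state; the point is simply to keep the pool choice dynamic rather than hard-coded.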
## Jobs
A common example is a task to copy files to a compute node. A simple approach is
### Avoid short execution time
-Tasks that only run for one to two seconds are not ideal. You should try to do a significant amount of work in an individual task (10 second minimum, going up to hours or days). If each task is executing for one minute (or more), then the scheduling overhead as a fraction of overall compute time is small.
+Tasks that only run for one to two seconds are not ideal. Try to do a significant amount of work in an individual task (10 second minimum, going up to hours or days). If each task is executing for one minute (or more), then the scheduling overhead as a fraction of overall compute time is small.
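To see why, compare an assumed fixed per-task scheduling overhead against different task durations. The two-second overhead figure is an illustrative assumption, not a documented Batch constant:

```python
def overhead_fraction(task_seconds, overhead_seconds=2.0):
    """Fraction of total wall-clock time lost to per-task scheduling
    overhead (assumed constant per task)."""
    return overhead_seconds / (task_seconds + overhead_seconds)

for t in (1, 10, 60, 3600):
    print(f"{t:>5}s task -> {overhead_fraction(t):7.2%} overhead")
```

With these assumed numbers, a one-second task loses about two-thirds of its wall time to overhead, while a one-minute task loses only a few percent.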
### Use pool scope for short tasks on Windows nodes
batch Monitor Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/monitor-application-insights.md
A sample C# solution with code to accompany this article is available on [GitHub
* Use the Azure portal to create an Application Insights *resource*. Select the *General* **Application type**. * Copy the [instrumentation
-key](../azure-monitor/app/create-new-resource.md #copy-the-instrumentation-key) from the portal. It is required later in this article.
+key](../azure-monitor/app/create-new-resource.md#copy-the-instrumentation-key) from the portal. It is required later in this article.
> [!NOTE] > You may be [charged](https://azure.microsoft.com/pricing/details/application-insights/) for the data stored in Application Insights.
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
cognitive-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-configure-openssl-linux.md
Last updated 01/16/2020
+zone_pivot_groups: programming-languages-set-two
# Configure OpenSSL for Linux
Set environment variable `SSL_CERT_FILE` to point at that file before running a
```bash export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt ```+
+## Certificate Revocation Checks
+When connecting to the Speech Service, the Speech SDK verifies that the TLS certificate used by the Speech Service has not been revoked. To conduct this check, the Speech SDK needs access to the CRL distribution points for the Certificate Authorities used by Azure. A list of possible CRL download locations can be found in [this document](https://docs.microsoft.com/azure/security/fundamentals/tls-certificate-changes). If a certificate has been revoked, or the CRL cannot be downloaded, the Speech SDK aborts the connection and raises the Canceled event.
+
+If the network where the Speech SDK is used does not permit access to the CRL download locations, the CRL check can either be disabled or configured not to fail when the CRL cannot be retrieved. This configuration is done through the configuration object used to create a Recognizer object.
+
+To continue with the connection when a CRL cannot be retrieved, set the property OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE to "true".
++
+```csharp
+config.SetProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
+```
+++
+```C++
+config->SetProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
+```
+++
+```java
+config.setProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
+```
+++
+```Python
+speech_config.set_property_by_name("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true")
+```
+++
+```ObjectiveC
+[config setPropertyTo:@"true" byName:@"OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE"];
+```
+
+When set to "true", an attempt is made to retrieve the CRL. If the retrieval succeeds, the certificate is checked for revocation; if the retrieval fails, the connection is allowed to continue.
+
+To completely disable certificate revocation checks, set the property OPENSSL_DISABLE_CRL_CHECK to "true".
+
+```csharp
+config.SetProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
+```
+++
+```C++
+config->SetProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
+```
+++
+```java
+config.setProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
+```
+++
+```Python
+speech_config.set_property_by_name("OPENSSL_DISABLE_CRL_CHECK", "true")
+```
+++
+```ObjectiveC
+[config setPropertyTo:@"true" byName:@"OPENSSL_DISABLE_CRL_CHECK"];
+```
+++ > [!NOTE] > It is also worth noting that some distributions of Linux do not have a TMP or TMPDIR environment variable defined. This will cause the Speech SDK to download the Certificate Revocation List (CRL) every time, rather than caching the CRL to disk for reuse until they expire. To improve initial connection performance you can [create an environment variable named TMPDIR and set it to the path of your chosen temporary directory.](https://help.ubuntu.com/community/EnvironmentVariables).
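As a minimal sketch of the TMPDIR advice in the note above (the helper name is ours, not part of the Speech SDK): set the variable from application code before creating the Speech configuration, so downloaded CRLs can be cached to disk.

```python
import os
import tempfile

def ensure_tmpdir(path=None):
    """Ensure TMPDIR is set so the Speech SDK can cache downloaded CRLs
    instead of re-downloading them on every connection."""
    if "TMPDIR" not in os.environ:
        os.environ["TMPDIR"] = path or tempfile.gettempdir()
    return os.environ["TMPDIR"]

# Call this before constructing the SpeechConfig / Recognizer objects.
crl_cache_dir = ensure_tmpdir()
```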
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
To create your first project, select the **Text-to-Speech/Custom Voice** tab, th
## How to migrate to Custom Neural Voice
-If you are using the non-neural (or standard) Custom Voice, consider to migrate to Custom Neural Voice immediately following the steps below. Moving to Custom Neural Voice will help you develop more realistic voices for even more natural conversational interfaces and enable your customers and end users to benefit from the latest Text-to-Speech technology, in a responsible way.
+The standard (non-neural) training tiers of Custom Voice (adaptive, statistical parametric, and concatenative) are being deprecated. The announcement was sent to all existing Speech subscriptions before 2/28/2021. During the deprecation period (3/1/2021 - 2/29/2024), existing standard tier users can continue to use the non-neural models they have created. All new users and new Speech resources should move to the neural tier (Custom Neural Voice). After 2/29/2024, standard/non-neural custom voices will no longer be supported.
+
+If you are using non-neural/standard Custom Voice, migrate to Custom Neural Voice immediately following the steps below. Moving to Custom Neural Voice will help you develop more realistic voices for even more natural conversational interfaces and enable your customers and end users to benefit from the latest Text-to-Speech technology, in a responsible way.
1. Learn more about our [policy on the limit access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply here](https://aka.ms/customneural). Note that the access to the Custom Neural Voice service is subject to MicrosoftΓÇÖs sole discretion based on our eligibility criteria. Customers may gain access to the technology only after their application is reviewed and they have committed to using it in alignment with our [Responsible AI principles](https://microsoft.com/ai/responsible-ai) and the [code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext). 2. Once your application is approved, you will be provided with the access to the "neural" training feature. Make sure you log in to the [Custom Voice portal](https://speech.microsoft.com/customvoice) using the same Azure subscription that you provide in your application.
If you are using the non-neural (or standard) Custom Voice, consider to migrate
- [Prepare Custom Voice data](how-to-custom-voice-prepare-data.md) - [Create a Custom Voice](how-to-custom-voice-create-voice.md)-- [Guide: Record your voice samples](record-custom-voice-samples.md)
+- [Guide: Record your voice samples](record-custom-voice-samples.md)
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
See the full Speech-to-text REST API v3.0 Reference [here](https://centralus.dev
## Speech-to-text REST API for short audio
-As an alternative to the [Speech SDK](speech-sdk.md), the Speech service allows you to convert Speech-to-text using a REST API. Each accessible endpoint is associated with a region. Your application requires a subscription key for the endpoint you plan to use. The REST API for short audio is very limited, and it should only be used in cases were the [Speech SDK](speech-sdk.md) cannot.
+As an alternative to the [Speech SDK](speech-sdk.md), the Speech service allows you to convert Speech-to-text using a REST API.
+The REST API for short audio is very limited, and it should only be used in cases where the [Speech SDK](speech-sdk.md) cannot.
Before using the Speech-to-text REST API for short audio, consider the following:
A typical response for recognition with pronunciation assessment:
- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/) - [Customize acoustic models](./how-to-custom-speech-train-model.md) - [Customize language models](./how-to-custom-speech-train-model.md)-- [Get familiar with Batch transcription](batch-transcription.md)
+- [Get familiar with Batch transcription](batch-transcription.md)
+
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/authentication.md
Every client interaction with Azure Communication Services needs to be authentic
Another type of authentication uses *user access tokens* to authenticate against services that require user participation. For example, the chat or calling service utilizes *user access tokens* to allow users to be added in a thread and have conversations with each other.
-## Authentication Options:
+## Authentication Options
The following table shows the Azure Communication Services client libraries and their authentication options:
The following table shows the Azure Communication Services client libraries and
Each authorization option is briefly described below: -- **Access Key** authentication is suitable for service applications running in a trusted service environment. The access key can be found in Azure Communication Services portal and the service application uses it as the credential to initialize the corresponding client libraries. See an example with how it is used in the [Identity client library](../quickstarts/access-tokens.md). Since the access key is part of the connection string of your resource, authentication with a connection string is equivalent to authentication with an access key.
+### Access Key
-- **Managed Identity** authentication provides superior security and ease of use over other authorization options. For example, by using Azure AD, you avoid having to store your account access key with your code, as you do with Access Key authorization. While you can continue to use Access Key authorization with communication services applications, Microsoft recommends moving to Azure AD where possible. To set up a managed identity, [create a registered application from the Azure CLI](../quickstarts/managed-identity-from-cli.md). Then, the endpoint and credentials can be used to authenticate the client libraries. See examples of how [managed identity](../quickstarts/managed-identity.md) is used.
+Access key authentication is suitable for service applications running in a trusted service environment. Your access key can be found in the Azure Communication Services portal. The service application uses it as a credential to initialize the corresponding client libraries. See an example of how it is used in the [Identity client library](../quickstarts/access-tokens.md).
-- **User Access Tokens** are generated using the Identity client library and are associated with users created in the Identity client library. See an example of how to [create users and generate tokens](../quickstarts/access-tokens.md). Then, user access tokens are used to authenticate participants added to conversations in the Chat or Calling SDK. For more information, see [add chat to your app](../quickstarts/chat/get-started.md). User access token authentication is different compared to access key and managed identity authentication in that it is used to authenticate a user rather than a secured Azure resource.
+Since the access key is part of the connection string of your resource, authentication with a connection string is equivalent to authentication with an access key.
+
+If you wish to call the Azure Communication Services APIs manually using an access key, you will need to sign the request. Signing the request is explained in detail in a [tutorial](../tutorials/hmac-header-tutorial.md).
+
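As a rough, unofficial sketch of the HMAC signing pattern the linked tutorial describes (the header names, string-to-sign layout, path, and api-version here are illustrative and should be verified against the tutorial):

```python
import base64
import hashlib
import hmac

def sign_request(access_key, verb, path_and_query, host, utc_date, body=b""):
    # Hash the request body and base64-encode it (sent as x-ms-content-sha256).
    content_hash = base64.b64encode(hashlib.sha256(body).digest()).decode()
    # Build the string to sign from the verb, path, and the signed header values.
    string_to_sign = f"{verb}\n{path_and_query}\n{utc_date};{host};{content_hash}"
    # HMAC-SHA256 over the string to sign, keyed with the base64-decoded access key.
    key = base64.b64decode(access_key)
    signature = base64.b64encode(
        hmac.new(key, string_to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    authorization = (
        "HMAC-SHA256 SignedHeaders=x-ms-date;host;x-ms-content-sha256"
        f"&Signature={signature}"
    )
    return content_hash, authorization

# Hypothetical usage with a placeholder key, host, and api-version.
content_hash, auth_header = sign_request(
    access_key=base64.b64encode(b"\x00" * 32).decode(),
    verb="POST",
    path_and_query="/identities?api-version=2021-03-07",
    host="contoso.communication.azure.com",
    utc_date="Mon, 01 Mar 2021 00:00:00 GMT",
    body=b"{}",
)
```

The resulting `auth_header` would be sent as the `Authorization` header, alongside `x-ms-date` and `x-ms-content-sha256` headers carrying the same values used in the string to sign.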
+### Managed Identity
+
+Managed identity provides superior security and ease of use over other authorization options. For example, by using Azure AD, you avoid having to store your account access key within your code, as you do with Access Key authorization. While you can continue to use Access Key authorization with communication services applications, Microsoft recommends moving to Azure AD where possible.
+
+To set up a managed identity, [create a registered application from the Azure CLI](../quickstarts/managed-identity-from-cli.md). Then, the endpoint and credentials can be used to authenticate the client libraries. See examples of how [managed identity](../quickstarts/managed-identity.md) is used.
+
+### User Access Tokens
+
+User access tokens are generated using the Identity client library and are associated with users created in the Identity client library. See an example of how to [create users and generate tokens](../quickstarts/access-tokens.md). Then, user access tokens are used to authenticate participants added to conversations in the Chat or Calling SDK. For more information, see [add chat to your app](../quickstarts/chat/get-started.md). User access token authentication is different compared to access key and managed identity authentication in that it is used to authenticate a user rather than a secured Azure resource.
## Next steps
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-virtual-network-concepts.md
Last updated 08/11/2020
This article provides background about virtual network scenarios, limitations, and resources. For deployment examples using the Azure CLI, see [Deploy container instances into an Azure virtual network](container-instances-vnet.md).
+> [!IMPORTANT]
+> Container group deployment to a virtual network is generally available for Linux containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability](container-instances-region-availability.md).
+ ## Scenarios Container groups deployed into an Azure virtual network enable scenarios like:
Container groups deployed into an Azure virtual network enable scenarios like:
[!INCLUDE [container-instances-restart-ip](../../includes/container-instances-restart-ip.md)]
-## Where to deploy
-
-The following regions and maximum resources are available to deploy a container group in an Azure virtual network.
-- ## Required network resources There are three Azure Virtual Network resources required for deploying container groups to a virtual network: the [virtual network](#virtual-network) itself, a [delegated subnet](#subnet-delegated) within the virtual network, and a [network profile](#network-profile).
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
container-registry Container Registry Transfer Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-transfer-images.md
az resource delete \
* Not all artifacts, or none, are transferred. Confirm spelling of artifacts in export run, and name of blob in export and import runs. Confirm you are transferring a maximum of 50 artifacts. * Pipeline run might not have completed. An export or import run can take some time. * For other pipeline issues, provide the deployment [correlation ID](../azure-resource-manager/templates/deployment-history.md) of the export run or import run to the Azure Container Registry team.-
+* **Problems pulling the image in a physically isolated environment**
+ * If you see errors regarding foreign layers or attempts to resolve mcr.microsoft.com when attempting to pull an image in a physically isolated environment, your image manifest likely has non-distributable layers. Due to the nature of a physically isolated environment, these images will often fail to pull. You can confirm that this is the case by checking the image manifest for any references to external registries. If this is the case, you will need to push the non-distributable layers to your public cloud ACR prior to deploying an export pipeline-run for that image. For guidance on how to do this, see [How do I push non-distributable layers to a registry?](./container-registry-faq.md#how-do-i-push-non-distributable-layers-to-a-registry)
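As the linked FAQ describes, pushing non-distributable layers typically also requires allowing them in the Docker daemon configuration. A sketch of a `daemon.json` fragment under that assumption (the registry name is a placeholder):

```json
{
  "allow-nondistributable-artifacts": ["myregistry.azurecr.io"]
}
```

After editing `daemon.json`, restart the Docker daemon so the setting takes effect before pushing the image.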
## Next steps
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
description: Learn about Azure Cosmos DB transactional (row-based) and analytica
Previously updated : 11/30/2020 Last updated : 03/16/2021
Auto-Sync refers to the fully managed capability of Azure Cosmos DB where the in
The auto-sync capability along with analytical store provides the following key benefits:
-#### Scalability & elasticity
+### Scalability & elasticity
By using horizontal partitioning, Azure Cosmos DB transactional store can elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it is 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store.
-#### <a id="analytical-schema"></a>Automatically handle schema updates
+### <a id="analytical-schema"></a>Automatically handle schema updates
Azure Cosmos DB transactional store is schema-agnostic, and it allows you to iterate on your transactional applications without having to deal with schema or index management. In contrast to this, Azure Cosmos DB analytical store is schematized to optimize for analytical query performance. With the auto-sync capability, Azure Cosmos DB manages the schema inference over the latest updates from the transactional store. It also manages the schema representation in the analytical store out-of-the-box which, includes handling nested data types. As your schema evolves, and new properties are added over time, the analytical store automatically presents a unionized schema across all historical schemas in the transactional store.
-##### Schema constraints
+#### Schema constraints
The following constraints are applicable on the operational data in Azure Cosmos DB when you enable analytical store to automatically infer and represent the schema correctly:
-* You can have a maximum of 200 properties at any nesting level in the schema and a maximum nesting depth of 5.
+* You can have a maximum of 1000 properties at any nesting level in the schema and a maximum nesting depth of 127.
+ * Only the first 1000 properties are represented in the analytical store.
+ * Only the first 127 nested levels are represented in the analytical store.
+
+* While JSON documents (and Cosmos DB collections/containers) are case sensitive from the uniqueness perspective, analytical store is not.
+
+ * In the same document: Property names in the same level should be unique when compared case insensitively. For example, the following JSON document has "Name" and "name" in the same level of the document. While it's a valid JSON document, it doesn't satisfy analytical store constraint and hence will not be fully represented in the analytical store. In this case, "Name" and "name" are the same when compared in a case insensitive manner. Only "Name" will be represented in analytical store, because it is the first occurrence. And `"name": "john"` won't be represented at all.
+
+
+ ```json
+ {"id": 1, "Name": "fred", "name": "john"}
+ ```
- * An item with 201 properties at the top level doesnΓÇÖt satisfy this constraint and hence it will not be represented in the analytical store.
- * An item with more than five nested levels in the schema also doesnΓÇÖt satisfy this constraint and hence it will not be represented in the analytical store. For example, the following item doesn't satisfy the requirement:
+ * In different documents: Properties in the same level and with the same name, but in different cases, will be represented with the first occurrence. For example, the following documents have "Name" and "name" in the same level. Since the first document has "Name", this format will be used to represent this property in analytical store. In other words, the column name in analytical store will be "Name". Both `"fred"` and `"john"` will be represented in the "Name" column.
++
+ ```json
+ {"id": 1, "Name": "fred"}
+ {"id": 2, "name": "john"}
+ ```
++
+* The first document of the collection defines the initial analytical store schema.
+ * Properties in the first level of the document will be represented as columns.
+ * Documents with more properties than the initial schema will generate new columns in analytical store.
+ * Columns can't be removed.
+ * The deletion of all documents in a collection doesn't reset the analytical store schema.
+ * There is no schema versioning. The last version inferred from the transactional store is what you will see in the analytical store.
- `{"level1": {"level2":{"level3":{"level4":{"level5":{"too many":12}}}}}}`
+* Currently we do not support Azure Synapse Spark reading column names that contain blanks (white spaces).
-* Property names should be unique when compared case insensitively. For example, the following items do not satisfy this constraint and hence will not be represented in the analytical store:
+* Expect different behavior in regard to `NULL` values:
+ * Spark pools in Azure Synapse will read these values as 0 (zero).
+ * SQL serverless pools in Azure Synapse will read these values as `NULL`.
- `{"Name": "fred"} {"name": "john"}` ΓÇô "Name" and "name" are the same when compared in a case insensitive manner.
+* Expect different behavior in regard to missing columns:
+ * Spark pools in Azure Synapse will represent these columns as `undefined`.
+ * SQL serverless pools in Azure Synapse will represent these columns as `NULL`.
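The case-insensitivity and first-occurrence rules above can be illustrated with a small sketch (an illustration only, not the actual sync engine):

```python
def infer_analytical_columns(documents):
    """Sketch of the case-insensitive, first-occurrence column rules:
    the first-seen casing wins the column name, and later same-level
    duplicates within a document are dropped."""
    columns = {}  # lowercase name -> first-seen casing
    rows = []
    for doc in documents:
        row = {}
        for name, value in doc.items():
            key = name.lower()
            if key not in columns:
                columns[key] = name          # first occurrence wins the casing
            column = columns[key]
            if column not in row:            # later duplicates are not represented
                row[column] = value
        rows.append(row)
    return list(columns.values()), rows
```

Running it on the examples above, `{"id": 1, "Name": "fred", "name": "john"}` yields only the "Name" column with `"fred"`, while two documents using "Name" and "name" respectively both land in a single "Name" column.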
-##### Schema representation
+#### Schema representation
There are two modes of schema representation in the analytical store. These modes have tradeoffs between the simplicity of a columnar representation, handling the polymorphic schemas, and simplicity of query experience:
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-synapse-link.md
Azure Synapse Link is available for Azure Cosmos DB SQL API containers or for Az
* [Enable Synapse Link for your Azure Cosmos DB accounts](#enable-synapse-link) * [Create an analytical store enabled Azure Cosmos DB container](#create-analytical-ttl)
+* [Optional - Update analytical store ttl for an Azure Cosmos DB container](#update-analytical-ttl)
* [Connect your Azure Cosmos DB database to a Synapse workspace](#connect-to-cosmos-database) * [Query the analytical store using Synapse Spark](#query-analytical-store-spark) * [Query the analytical store using serverless SQL pool](#query-analytical-store-sql-on-demand)
Azure Synapse Link is available for Azure Cosmos DB SQL API containers or for Az
> [!NOTE] > Turning on Synapse Link does not turn on the analytical store automatically. Once you enable Synapse Link on the Cosmos DB account, enable analytical store on containers when you create them, to start replicating your operation data to analytical store.
+### Azure CLI
+
+The following links show how to enable Synapse Link by using Azure CLI:
+
+* [Create a new Azure Cosmos DB account with Synapse Link enabled](https://docs.microsoft.com/cli/azure/cosmosdb?view=azure-cli-latest#az_cosmosdb_create-optional-parameters&preserve-view=true)
+* [Update an existing Azure Cosmos DB account to enable Synapse Link](https://docs.microsoft.com/cli/azure/cosmosdb?view=azure-cli-latest#az_cosmosdb_update-optional-parameters&preserve-view=true)
+
+### PowerShell
+
+The following links show how to enable Synapse Link by using PowerShell:
+
+* [Create a new Azure Cosmos DB account with Synapse Link enabled](https://docs.microsoft.com/powershell/module/az.cosmosdb/new-azcosmosdbaccount?view=azps-5.5.0#description&preserve-view=true)
+* [Update an existing Azure Cosmos DB account to enable Synapse Link](https://docs.microsoft.com/powershell/module/az.cosmosdb/update-azcosmosdbaccount?view=azps-5.5.0&preserve-view=true)
++
+ ## <a id="create-analytical-ttl"></a> Create an Azure Cosmos container with analytical store You can turn on analytical store on an Azure Cosmos container while creating the container. You can use the Azure portal or configure the `analyticalTTL` property during container creation by using the Azure Cosmos DB SDKs.
except exceptions.CosmosResourceExistsError:
print('A container with already exists') ```
-### <a id="update-analytical-ttl"></a> Update the analytical store time to live
+### Azure CLI
+
+The following links show how to create analytical store enabled containers by using Azure CLI:
-After the analytical store is enabled with a particular TTL value, you can update it to a different valid value later. You can update the value by using the Azure portal or SDKs. For information on the various Analytical TTL config options, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article.
+* [Azure Cosmos DB API for Mongo DB](https://docs.microsoft.com/cli/azure/cosmosdb/mongodb/collection?view=azure-cli-latest#az_cosmosdb_mongodb_collection_create-examples&preserve-view=true)
+* [Azure Cosmos DB SQL API](https://docs.microsoft.com/cli/azure/cosmosdb/sql/container?view=azure-cli-latest#az_cosmosdb_sql_container_create&preserve-view=true)
-#### Azure portal
+### PowerShell
+
+The following links show how to create analytical store enabled containers by using PowerShell:
+
+* [Azure Cosmos DB API for Mongo DB](https://docs.microsoft.com/powershell/module/az.cosmosdb/new-azcosmosdbmongodbcollection?view=azps-5.5.0#description&preserve-view=true)
+* [Azure Cosmos DB SQL API](https://docs.microsoft.com/cli/azure/cosmosdb/sql/container?view=azure-cli-latest#az_cosmosdb_sql_container_create&preserve-view=true)
++
+## <a id="update-analytical-ttl"></a> Optional - Update the analytical store time to live
+
+After the analytical store is enabled with a particular TTL value, you may want to update it to a different valid value later. You can update the value by using the Azure portal, Azure CLI, PowerShell, or Cosmos DB SDKs. For information on the various Analytical TTL config options, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article.
++
+### Azure portal
If you created an analytical store enabled container through the Azure portal, it contains a default analytical TTL of -1. Use the following steps to update this value:
If you created an analytical store enabled container through the Azure portal, i
* Select **On (no default)** or select **On** and set a TTL value * Click **Save** to save the changes.
-#### .NET SDK
+### .NET SDK
The following code shows how to update the TTL for analytical store by using the .NET SDK:
containerResponse.Resource. AnalyticalStorageTimeToLiveInSeconds = 60 * 60 * 24
await client.GetContainer("database", "container").ReplaceContainerAsync(containerResponse.Resource); ```
-#### Java V4 SDK
+### Java V4 SDK
The following code shows how to update the TTL for analytical store by using the Java V4 SDK:
containerProperties.setAnalyticalStoreTimeToLiveInSeconds (60 * 60 * 24 * 180 );
container.replace(containerProperties).block(); ```
+### Python V4 SDK
+
+Currently not supported.
++
+### Azure CLI
+
+The following links show how to update a container's analytical TTL by using Azure CLI:
+
+* [Azure Cosmos DB API for Mongo DB](https://docs.microsoft.com/cli/azure/cosmosdb/mongodb/collection?view=azure-cli-latest#az_cosmosdb_mongodb_collection_update&preserve-view=true)
+* [Azure Cosmos DB SQL API](https://docs.microsoft.com/cli/azure/cosmosdb/sql/container?view=azure-cli-latest#az_cosmosdb_sql_container_update&preserve-view=true)
+
+### PowerShell
+
+The following links show how to update a container's analytical TTL by using PowerShell:
+
+* [Azure Cosmos DB API for Mongo DB](https://docs.microsoft.com/powershell/module/az.cosmosdb/update-azcosmosdbmongodbcollection?view=azps-5.5.0&preserve-view=true)
+* [Azure Cosmos DB SQL API](https://docs.microsoft.com/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainer?view=azps-5.5.0&preserve-view=true)
++ ## <a id="connect-to-cosmos-database"></a> Connect to a Synapse workspace Use the instructions in [Connect to Azure Synapse Link](../synapse-analytics/synapse-link/how-to-connect-synapse-link-cosmos-db.md) on how to access an Azure Cosmos DB database from Azure Synapse Analytics Studio with Azure Synapse Link.
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-rbac.md
description: Learn how to configure role-based access control with Azure Active
Previously updated : 03/03/2021 Last updated : 03/17/2021
The way you create a `TokenCredential` instance is beyond the scope of this arti
- [in .NET](https://docs.microsoft.com/dotnet/api/overview/azure/identity-readme#credential-classes) - [in Java](https://docs.microsoft.com/java/api/overview/azure/identity-readme#credential-classes)
+- [in JavaScript](https://docs.microsoft.com/javascript/api/overview/azure/identity-readme#credential-classes)
The examples below use a service principal with a `ClientSecretCredential` instance. ### In .NET
-> [!NOTE]
-> You must use the `preview` version of the Azure Cosmos DB .NET SDK to access this feature.
+The Azure Cosmos DB RBAC is currently supported in the `preview` version of the [.NET SDK V3](sql-api-sdk-dotnet-standard.md).
```csharp TokenCredential servicePrincipal = new ClientSecretCredential(
CosmosClient client = new CosmosClient("<account-endpoint>", servicePrincipal);
### In Java
+The Azure Cosmos DB RBAC is currently supported in the [Java SDK V4](sql-api-sdk-java-v4.md).
+ ```java TokenCredential ServicePrincipal = new ClientSecretCredentialBuilder() .authorityHost("https://login.microsoftonline.com")
CosmosAsyncClient Client = new CosmosClientBuilder()
.build(); ```
+### In JavaScript
+
+The Azure Cosmos DB RBAC is currently supported in the [JavaScript SDK V3](sql-api-sdk-node.md).
+
+```javascript
+const servicePrincipal = new ClientSecretCredential(
+ "<azure-ad-tenant-id>",
+ "<client-application-id>",
+ "<client-application-secret>");
+const client = new CosmosClient({
+  endpoint: "<account-endpoint>",
+ aadCredentials: servicePrincipal
+});
+```
+ ## Auditing data requests When using the Azure Cosmos DB RBAC, [diagnostic logs](cosmosdb-monitor-resource-logs.md) get augmented with identity and authorization information for each data operation. This lets you perform detailed auditing and retrieve the AAD identity used for every data request sent to your Azure Cosmos DB account.
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-release-notes.md
This article shows the Azure Cosmos DB Emulator release notes with a list of feature updates in each release.
## Download
-| | |
+| |Links |
|||
|**MSI download**|[Microsoft Download Center](https://aka.ms/cosmosdb-emulator)|
|**Get started**|[Develop locally with Azure Cosmos DB Emulator](local-emulator.md)|
cosmos-db Mongodb Feature Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-feature-support.md
The following operators are supported, with corresponding examples of their use. Consider this sample document:
} ```
-Operator | Example |
- | |
-$eq | `{ "Volcano Name": { $eq: "Rainier" } }` | | -
-$gt | `{ "Elevation": { $gt: 4000 } }` | | -
-$gte | `{ "Elevation": { $gte: 4392 } }` | | -
-$lt | `{ "Elevation": { $lt: 5000 } }` | | -
-$lte | `{ "Elevation": { $lte: 5000 } }` | | -
-$ne | `{ "Elevation": { $ne: 1 } }` | | -
-$in | `{ "Volcano Name": { $in: ["St. Helens", "Rainier", "Glacier Peak"] } }` | | -
-$nin | `{ "Volcano Name": { $nin: ["Lassen Peak", "Hood", "Baker"] } }` | | -
-$or | `{ $or: [ { Elevation: { $lt: 4000 } }, { "Volcano Name": "Rainier" } ] }` | | -
-$and | `{ $and: [ { Elevation: { $gt: 4000 } }, { "Volcano Name": "Rainier" } ] }` | | -
-$not | `{ "Elevation": { $not: { $gt: 5000 } } }`| | -
-$nor | `{ $nor: [ { "Elevation": { $lt: 4000 } }, { "Volcano Name": "Baker" } ] }` | | -
-$exists | `{ "Status": { $exists: true } }`| | -
-$type | `{ "Status": { $type: "string" } }`| | -
-$mod | `{ "Elevation": { $mod: [ 4, 0 ] } }` | | -
-$regex | `{ "Volcano Name": { $regex: "^Rain"} }`| | -
+| Operator | Example |
+| | |
+| $eq | `{ "Volcano Name": { $eq: "Rainier" } }` |
+| $gt | `{ "Elevation": { $gt: 4000 } }` |
+| $gte | `{ "Elevation": { $gte: 4392 } }` |
+| $lt | `{ "Elevation": { $lt: 5000 } }` |
+| $lte | `{ "Elevation": { $lte: 5000 } }` |
+| $ne | `{ "Elevation": { $ne: 1 } }` |
+| $in | `{ "Volcano Name": { $in: ["St. Helens", "Rainier", "Glacier Peak"] } }` |
+| $nin | `{ "Volcano Name": { $nin: ["Lassen Peak", "Hood", "Baker"] } }` |
+| $or | `{ $or: [ { Elevation: { $lt: 4000 } }, { "Volcano Name": "Rainier" } ] }` |
+| $and | `{ $and: [ { Elevation: { $gt: 4000 } }, { "Volcano Name": "Rainier" } ] }` |
+| $not | `{ "Elevation": { $not: { $gt: 5000 } } }`|
+| $nor | `{ $nor: [ { "Elevation": { $lt: 4000 } }, { "Volcano Name": "Baker" } ] }` |
+| $exists | `{ "Status": { $exists: true } }`|
+| $type | `{ "Status": { $type: "string" } }`|
+| $mod | `{ "Elevation": { $mod: [ 4, 0 ] } }` |
+| $regex | `{ "Volcano Name": { $regex: "^Rain"} }`|
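The operator semantics above can be sanity-checked with a tiny hand-rolled matcher in plain Python (an illustration only, not a MongoDB driver; note that in Python the operator keys must be quoted strings, unlike in the shell examples above):

```python
import re

# Sample document shaped like the volcano examples in the table above.
doc = {"Volcano Name": "Rainier", "Elevation": 4392, "Status": "Historical"}

# Handlers for a few of the query operators listed in the table.
OPS = {
    "$eq":  lambda v, a: v == a,
    "$ne":  lambda v, a: v != a,
    "$gt":  lambda v, a: v > a,
    "$gte": lambda v, a: v >= a,
    "$lt":  lambda v, a: v < a,
    "$lte": lambda v, a: v <= a,
    "$in":  lambda v, a: v in a,
    "$nin": lambda v, a: v not in a,
    "$regex": lambda v, a: re.search(a, v) is not None,
}

def matches(document, flt):
    """Return True if the document satisfies every {field: {op: arg}} condition."""
    return all(
        OPS[op](document.get(field), arg)
        for field, cond in flt.items()
        for op, arg in cond.items()
    )

print(matches(doc, {"Volcano Name": {"$eq": "Rainier"}}))   # True
print(matches(doc, {"Elevation": {"$gt": 4000}}))           # True
print(matches(doc, {"Volcano Name": {"$regex": "^Rain"}}))  # True
```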
### Notes
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
cosmos-db Sql Api Sdk Async Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-async-java.md
The SQL API Async Java SDK differs from the SQL API Java SDK by providing asynchronous operations.
> This is *not* the latest Java SDK for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](sql-api-sdk-java-v4.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide. >
-| | |
+| | Links |
|||
| **SDK Download** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) |
|**API documentation** |[Java API reference documentation](/java/api/com.microsoft.azure.cosmosdb.rx.asyncdocumentclient) |
cosmos-db Sql Api Sdk Bulk Executor Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-bulk-executor-dot-net.md
> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md) > * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
-| | |
+| | Link/notes |
|||
| **Description**| The .NET bulk executor library allows client applications to perform bulk operations on Azure Cosmos DB accounts. This library provides BulkImport, BulkUpdate, and BulkDelete namespaces. The BulkImport module can bulk ingest documents in an optimized way such that the throughput provisioned for a collection is consumed to its maximum extent. The BulkUpdate module can bulk update existing data in Azure Cosmos containers as patches. The BulkDelete module can bulk delete documents in an optimized way such that the throughput provisioned for a collection is consumed to its maximum extent.|
|**SDK download**| [NuGet](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.BulkExecutor/) |
cosmos-db Sql Api Sdk Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-bulk-executor-java.md
> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md) > * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
-| | |
+| | Link/notes |
|||
|**Description**|The bulk executor library allows client applications to perform bulk operations in Azure Cosmos DB accounts. The bulk executor library provides BulkImport and BulkUpdate namespaces. The BulkImport module can bulk ingest documents in an optimized way such that the throughput provisioned for a collection is consumed to its maximum extent. The BulkUpdate module can bulk update existing data in Azure Cosmos containers as patches.|
|**SDK download**|[Maven](https://search.maven.org/#search%7Cga%7C1%7Cdocumentdb-bulkexecutor)|
cosmos-db Sql Api Sdk Dotnet Changefeed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-dotnet-changefeed.md
> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md) > * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
-| | |
+| | Links |
|||
|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.ChangeFeedProcessor/)|
|**API documentation**|[Change Feed Processor library API reference documentation](/dotnet/api/microsoft.azure.documents.changefeedprocessor)|
cosmos-db Sql Api Sdk Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-dotnet-core.md
> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md) > * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
-| | |
+| | Links |
|||
|**SDK download**| [NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/)|
|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
cosmos-db Sql Api Sdk Dotnet Standard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-dotnet-standard.md
> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md) > * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
-| | |
+| | Links |
|||
|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/)|
|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
cosmos-db Sql Api Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-dotnet.md
> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md) > * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
-| | |
+| | Links |
|||
|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/)|
|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
cosmos-db Sql Api Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java.md
This is the original Azure Cosmos DB Sync Java SDK v2 for SQL API, which supports synchronous operations.
> This is *not* the latest Java SDK for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](sql-api-sdk-java-v4.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide. >
-| | |
+| | Links |
|||
|**SDK Download**|[Maven](https://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.microsoft.azure%22%20AND%20a%3A%22azure-documentdb%22)|
|**API documentation**|[Java API reference documentation](/java/api/com.microsoft.azure.documentdb)|
cosmos-db Sql Api Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-python.md
ms.devlang: python Last updated 08/12/2020-+ # Azure Cosmos DB Python SDK for SQL API: Release notes and resources
> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md) > * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
-| | |
+| Page| Link |
|||
|**Download SDK**|[PyPI](https://pypi.org/project/azure-cosmos)|
-|**API documentation**|[Python API reference documentation](/python/api/azure-cosmos/)|
+|**API documentation**|[Python API reference documentation](https://docs.microsoft.com/python/api/azure-cosmos/azure.cosmos?view=azure-python&preserve-view=true)|
|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)|
|**Get started**|[Get started with the Python SDK](create-sql-api-python.md)|
|**Current supported platform**|[Python 2.7](https://www.python.org/downloads/) and [Python 3.5.3+](https://www.python.org/downloads/)|

## Release history
-### 4.1.0 (2020-08-10)
+## 4.2.0
+
+**Bug fixes**
+- Fixed bug where continuation token is not honored when query_iterable is used to get results by page.
+- Fixed bug where resource tokens were not honored for document reads and deletes.
+
+**New features**
+- Added support for passing `partitionKey` while querying Change-Feed.
+
+## 4.1.0
- Added deprecation warning for "lazy" indexing mode. The backend no longer allows creating containers with this mode and will set them to consistent instead.
- Added the ability to set the analytical storage TTL when creating a new container.

**Bug fixes**
-- Fixed support for dicts as inputs for get_client APIs.
+- Fixed support for `dicts` as inputs for get_client APIs.
- Fixed Python 2/3 compatibility in query iterators.
-- Fixed type hint error (Issue #12570).
-- Fixed bug where options headers were not added to upsert_item function. Issue #11791 - thank you @aalapatirvbd.
-- Fixed error raised when a non string ID is used in an item. It now raises TypeError rather than AttributeError (Issue #11793).
+- Fixed type hint error.
+- Fixed bug where options headers were not added to upsert_item function.
+- Fixed error raised when a non-string ID is used in an item. It now raises TypeError rather than AttributeError.
+
-### 4.0.0
+## 4.0.0
* Stable release.
* Added HttpLoggingPolicy to pipeline to enable passing in a custom logger for request and response headers.
* Added query Distinct, Offset, and Limit support.
* Default document query execution context now used for
- * ChangeFeed queries
- * single partition queries (partitionkey, partitionKeyRangeId is present in options)
+ * Change Feed queries
+ * single partition queries (`partitionkey`, `partitionKeyRangeId` is present in options)
   * Non-document queries
* Errors out for aggregates on multiple partitions, with enable cross partition query set to true, but no "value" keyword present
Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version.
| Version | Release Date | Retirement Date |
| | | |
+| [4.2.0](#420) |Oct 09, 2020 | |
+| [4.1.0](#410) |Aug 10, 2020 | |
| [4.0.0](#400) |May 20, 2020 | |
| [3.0.2](#302) |Nov 15, 2018 | |
| [3.0.1](#301) |Oct 04, 2018 | |
cosmos-db Sql Query Datetimeadd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-datetimeadd.md
DateTimeAdd (<DateTimePart> , <numeric_expr> ,<DateTime>)
*DateTime* UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
- |Format|Description|
- |-|-|
- |YYYY|four-digit year|
- |MM|two-digit month (01 = January, etc.)|
- |DD|two-digit day of month (01 through 31)|
- |T|signifier for beginning of time elements|
- |hh|two-digit hour (00 through 23)|
- |mm|two-digit minutes (00 through 59)|
- |ss|two-digit seconds (00 through 59)|
- |.fffffff|seven-digit fractional seconds|
- |Z|UTC (Coordinated Universal Time) designator||
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
DateTimeAdd (<DateTimePart> , <numeric_expr> ,<DateTime>)
Returns a UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
- |Format|Description|
- |-|-|
- |YYYY|four-digit year|
- |MM|two-digit month (01 = January, etc.)|
- |DD|two-digit day of month (01 through 31)|
- |T|signifier for beginning of time elements|
- |hh|two-digit hour (00 through 23)|
- |mm|two-digit minutes (00 through 59)|
- |ss|two-digit seconds (00 through 59)|
- |.fffffff|seven-digit fractional seconds|
- |Z|UTC (Coordinated Universal Time) designator||
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
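As a point of comparison (a sketch, not part of the SDK), the equivalent of `DateTimeAdd("dd", 1, <DateTime>)` can be approximated in Python by parsing this format — trimming the seven fractional digits to the six that `datetime` supports — and adding a `timedelta`; the timestamp literal is just a sample value:

```python
from datetime import datetime, timedelta

def parse_cosmos_dt(s: str) -> datetime:
    # Trim the 7-digit fractional seconds to Python's 6-digit microseconds, drop 'Z'.
    head, frac = s.rstrip("Z").split(".")
    return datetime.strptime(head, "%Y-%m-%dT%H:%M:%S").replace(microsecond=int(frac[:6]))

dt = parse_cosmos_dt("2020-07-09T23:20:13.4575530Z")
print(dt + timedelta(days=1))  # 2020-07-10 23:20:13.457553
```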
## Remarks
cosmos-db Sql Query Datetimediff https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-datetimediff.md
DateTimeDiff (<DateTimePart> , <StartDate> , <EndDate>)
*StartDate* UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
- |Format|Description|
- |-|-|
- |YYYY|four-digit year|
- |MM|two-digit month (01 = January, etc.)|
- |DD|two-digit day of month (01 through 31)|
- |T|signifier for beginning of time elements|
- |hh|two-digit hour (00 through 23)|
- |mm|two-digit minutes (00 through 59)|
- |ss|two-digit seconds (00 through 59)|
- |.fffffff|seven-digit fractional seconds|
- |Z|UTC (Coordinated Universal Time) designator||
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
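For intuition (an illustrative sketch, not the engine's implementation, and assuming the function counts boundary crossings rather than full elapsed units): for the `"dd"` part this amounts to comparing calendar dates:

```python
from datetime import datetime

def parse_cosmos_dt(s: str) -> datetime:
    # Trim the 7-digit fractional seconds to Python's 6-digit microseconds, drop 'Z'.
    head, frac = s.rstrip("Z").split(".")
    return datetime.strptime(head, "%Y-%m-%dT%H:%M:%S").replace(microsecond=int(frac[:6]))

def datetime_diff_days(start: str, end: str) -> int:
    """Day boundaries crossed between two timestamps, in the style of DateTimeDiff("dd", ...)."""
    return (parse_cosmos_dt(end).date() - parse_cosmos_dt(start).date()).days

# One day boundary is crossed even though less than two hours elapse:
print(datetime_diff_days("2020-01-01T23:00:00.0000000Z", "2020-01-02T01:00:00.0000000Z"))  # 1
```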
cosmos-db Sql Query Datetimefromparts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-datetimefromparts.md
DateTimeFromParts(<numberYear>, <numberMonth>, <numberDay> [, numberHour] [, numberMinute] [, numberSecond] [, numberOfFractionsOfSecond])
Returns a UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
- |Format|Description|
- |-|-|
- |YYYY|four-digit year|
- |MM|two-digit month (01 = January, etc.)|
- |DD|two-digit day of month (01 through 31)|
- |T|signifier for beginning of time elements|
- |hh|two-digit hour (00 through 23)|
- |mm|two-digit minutes (00 through 59)|
- |ss|two-digit seconds (00 through 59)|
- |.fffffff|seven-digit fractional seconds|
- |Z|UTC (Coordinated Universal Time) designator||
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
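A minimal Python counterpart (an illustration, not the SQL implementation) shows how numeric parts map onto that string shape; `fraction` here stands in for the optional seven-digit fractional-seconds argument:

```python
def datetime_from_parts(year, month, day, hour=0, minute=0, second=0, fraction=0):
    """Build a YYYY-MM-DDThh:mm:ss.fffffffZ string from numeric parts."""
    return f"{year:04d}-{month:02d}-{day:02d}T{hour:02d}:{minute:02d}:{second:02d}.{fraction:07d}Z"

print(datetime_from_parts(2020, 9, 4))  # 2020-09-04T00:00:00.0000000Z
```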
cosmos-db Sql Query Datetimetotimestamp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-datetimetotimestamp.md
DateTimeToTimestamp (<DateTime>)
*DateTime* UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
- |Format|Description|
- |-|-|
- |YYYY|four-digit year|
- |MM|two-digit month (01 = January, etc.)|
- |DD|two-digit day of month (01 through 31)|
- |T|signifier for beginning of time elements|
- |hh|two-digit hour (00 through 23)|
- |mm|two-digit minutes (00 through 59)|
- |ss|two-digit seconds (00 through 59)|
- |.fffffff|seven-digit fractional seconds|
- |Z|UTC (Coordinated Universal Time) designator||
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
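The conversion the function performs — a UTC timestamp in this format down to milliseconds since the Unix epoch — can be sketched in Python (illustrative only):

```python
from datetime import datetime, timezone

def datetime_to_timestamp(dt: datetime) -> int:
    """Milliseconds elapsed since the Unix epoch for a timezone-aware UTC datetime."""
    return int(dt.timestamp() * 1000)

print(datetime_to_timestamp(datetime(2020, 7, 1, tzinfo=timezone.utc)))  # 1593561600000
```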
cosmos-db Sql Query Getcurrentdatetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-getcurrentdatetime.md
GetCurrentDateTime ()
## Return types
- Returns the current UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+Returns the current UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
- |Format|Description|
- |-|-|
- |YYYY|four-digit year|
- |MM|two-digit month (01 = January, etc.)|
- |DD|two-digit day of month (01 through 31)|
- |T|signifier for beginning of time elements|
- |hh|two-digit hour (00 through 23)|
- |mm|two-digit minutes (00 through 59)|
- |ss|two-digit seconds (00 through 59)|
- |.fffffff|seven-digit fractional seconds|
- |Z|UTC (Coordinated Universal Time) designator||
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
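The same `YYYY-MM-DDThh:mm:ss.fffffffZ` shape can be reproduced in Python (a sketch, not how the engine produces it) by padding `datetime`'s six microsecond digits out to seven:

```python
from datetime import datetime, timezone

def to_cosmos_iso8601(dt: datetime) -> str:
    """Format a datetime as YYYY-MM-DDThh:mm:ss.fffffffZ (seven fractional digits)."""
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond:06d}0Z"

print(to_cosmos_iso8601(datetime.now(timezone.utc)))
```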
cosmos-db Sql Query Round https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-round.md
ROUND(<numeric_expr>)
## Remarks
- The rounding operation performed follows midpoint rounding away from zero. If the input is a numeric expression which falls exactly between two integers then the result will be the closest integer value away from zero. This system function will benefit from a [range index](index-policy.md#includeexclude-strategy).
+The rounding operation performed follows midpoint rounding away from zero. If the input is a numeric expression which falls exactly between two integers then the result will be the closest integer value away from zero. This system function will benefit from a [range index](index-policy.md#includeexclude-strategy).
- |<numeric_expr>|Rounded|
- |-|-|
- |-6.5000|-7|
- |-0.5|-1|
- |0.5|1|
- |6.5000|7||
+|<numeric_expr>|Rounded|
+|-|-|
+|-6.5000|-7|
+|-0.5|-1|
+|0.5|1|
+|6.5000|7|
## Examples
- The following example rounds the following positive and negative numbers to the nearest integer.
+The following example rounds the following positive and negative numbers to the nearest integer.
```sql
SELECT ROUND(2.4) AS r1, ROUND(2.6) AS r2, ROUND(2.5) AS r3, ROUND(-2.4) AS r4, ROUND(-2.6) AS r5
```
- Here is the result set.
+Here is the result set.
```json
[{r1: 2, r2: 3, r3: 3, r4: -2, r5: -3}]
```
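Python's built-in `round` uses banker's rounding (`round(2.5)` is `2`), so the midpoint-away-from-zero behavior described above has to be spelled out explicitly — a small sketch for comparison:

```python
import math

def round_half_away_from_zero(x: float) -> int:
    """Round to the nearest integer, breaking .5 ties away from zero (like ROUND)."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

print([round_half_away_from_zero(v) for v in (2.4, 2.6, 2.5, -2.4, -2.6)])  # [2, 3, 3, -2, -3]
```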
cosmos-db Sql Query Tickstodatetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-tickstodatetime.md
A signed numeric value, the current number of 100 nanosecond ticks that have elapsed since the Unix epoch.
Returns the UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
- |Format|Description|
- |-|-|
- |YYYY|four-digit year|
- |MM|two-digit month (01 = January, etc.)|
- |DD|two-digit day of month (01 through 31)|
- |T|signifier for beginning of time elements|
- |hh|two-digit hour (00 through 23)|
- |mm|two-digit minutes (00 through 59)|
- |ss|two-digit seconds (00 through 59)|
- |.fffffff|seven-digit fractional seconds|
- |Z|UTC (Coordinated Universal Time) designator||
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
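Assuming ticks are 100-nanosecond intervals counted from the Unix epoch, the conversion can be sketched in Python; the sample tick count is illustrative:

```python
from datetime import datetime, timedelta, timezone

UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def ticks_to_datetime(ticks: int) -> datetime:
    """Convert 100-nanosecond ticks since the Unix epoch to a UTC datetime."""
    return UNIX_EPOCH + timedelta(microseconds=ticks // 10)

print(ticks_to_datetime(15943368134575530))  # 2020-07-09 23:20:13.457553+00:00
```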
cosmos-db Sql Query Timestamptodatetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-timestamptodatetime.md
A signed numeric value, the current number of milliseconds that have elapsed since the Unix epoch.
Returns the UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
- |Format|Description|
- |-|-|
- |YYYY|four-digit year|
- |MM|two-digit month (01 = January, etc.)|
- |DD|two-digit day of month (01 through 31)|
- |T|signifier for beginning of time elements|
- |hh|two-digit hour (00 through 23)|
- |mm|two-digit minutes (00 through 59)|
- |ss|two-digit seconds (00 through 59)|
- |.fffffff|seven-digit fractional seconds|
- |Z|UTC (Coordinated Universal Time) designator||
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
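Likewise illustrative: milliseconds since the Unix epoch convert to a UTC datetime with a plain `timedelta` (integer arithmetic avoids float rounding):

```python
from datetime import datetime, timedelta, timezone

UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def timestamp_to_datetime(ms: int) -> datetime:
    """Convert milliseconds since the Unix epoch to a UTC datetime."""
    return UNIX_EPOCH + timedelta(milliseconds=ms)

print(timestamp_to_datetime(1594227912345))  # 2020-07-08 17:05:12.345000+00:00
```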
cosmos-db Synapse Link Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-frequently-asked-questions.md
When planning to configure a multi-region Azure Cosmos DB account with analytica
When Azure Synapse Link is enabled for a multi-region account, the analytical store is created in all regions. The underlying data is optimized for throughput and transactional consistency in the transactional store.
+### Is analytical store supported in all Azure Cosmos DB regions?
+
+Yes.
### Is backup and restore supported for Azure Synapse Link enabled accounts?

For the containers with analytical store turned on, automatic backup and restore of your data in the analytical store is not supported at this time.
Currently, this feature is not available.
Currently, Spark structured streaming support for Azure Cosmos DB is implemented using the change feed functionality of the transactional store and it's not yet supported from analytical store.
+### Is streaming supported?
+
+We do not support streaming of data from the analytical store.
## Azure Synapse Studio

### In the Azure Synapse Studio, how do I recognize if I'm connected to an Azure Cosmos DB container with the analytics store enabled?
cosmos-db Table Sdk Dotnet Standard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-dotnet-standard.md
> * [Node.js](table-sdk-nodejs.md) > * [Python](table-sdk-python.md)
-| | |
+| | Links |
|||
|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table)|
|**Sample**|[Cosmos DB Table API .NET Sample](https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started)|
cosmos-db Table Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-dotnet.md
> * [Node.js](table-sdk-nodejs.md) > * [Python](table-sdk-python.md)
-| | |
+| | Links |
|||
|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table)|
|**Quickstart**|[Azure Cosmos DB: Build an app with .NET and the Table API](create-table-dotnet.md)|
cosmos-db Table Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-java.md
> * [Python](table-sdk-python.md)
-| | |
+| | Links |
|||
|**SDK download**|[Download Options](https://github.com/azure/azure-storage-java#download)|
|**API documentation**|[Java API reference documentation](https://azure.github.io/azure-storage-java/)|
cosmos-db Table Sdk Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-nodejs.md
> * [Python](table-sdk-python.md)
-| | |
+| | Links |
|||
|**SDK download**|[NPM](https://www.npmjs.com/package/azure-storage)|
|**API documentation**|[Node.js API reference documentation](https://azure.github.io/azure-storage-node/)|
cosmos-db Table Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-python.md
> * [Python](table-sdk-python.md)
-| | |
+| | Links |
|||
|**SDK download**|[PyPI](https://pypi.python.org/pypi/azure-cosmosdb-table/)|
|**API documentation**|[Python API reference documentation](/python/api/overview/azure/cosmosdb)|
cost-management-billing View Reservations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/view-reservations.md
Previously updated : 02/24/2021 Last updated : 03/17/2021
To allow other people to manage reservations, you have two options:
1. Select the user, and then select **Save**.
- Add a user as billing administrator to an Enterprise Agreement or a Microsoft Customer Agreement:
- - For an Enterprise Agreement, add users with the _Enterprise Administrator_ role to view and manage all reservation orders that apply to the Enterprise Agreement. Users with the _Enterprise Administrator (read only)_ role can only view the reservation. Department admins and account owners can't view reservations _unless_ they're explicitly added to them using Access control (IAM). For more information, see [Managing Azure Enterprise roles](../manage/understand-ea-roles.md).
+ - For an Enterprise Agreement, add users with the _Enterprise Administrator_ role to view and manage all reservation orders that apply to the Enterprise Agreement. Enterprise administrators view and manage reservations in **Cost Management + Billing**, not **Reservations**. Users with the _Enterprise Administrator (read only)_ role can only view the reservation. Department admins and account owners can't view reservations _unless_ they're explicitly added to them using Access control (IAM). For more information, see [Managing Azure Enterprise roles](../manage/understand-ea-roles.md).
_Enterprise Administrators can take ownership of a reservation order and they can add other users to a reservation using Access control (IAM)._
- For a Microsoft Customer Agreement, users with the billing profile owner role or the billing profile contributor role can manage all reservation purchases made using the billing profile. Billing profile readers and invoice managers can view all reservations that are paid for with the billing profile. However, they can't make changes to reservations.
data-factory Concepts Pipeline Execution Triggers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipeline-execution-triggers.md
Tumbling window triggers are a type of trigger that fires at a periodic time interval from a specified start time, while retaining state.
For more information about tumbling window triggers and, for examples, see [Create a tumbling window trigger](how-to-create-tumbling-window-trigger.md).
-## Event-based trigger
-
-An event-based trigger runs pipelines in response to an event, such as the arrival of a file, or the deletion of a file, in Azure Blob Storage.
-
-For more information about event-based triggers, see [Create a trigger that runs a pipeline in response to an event](how-to-create-event-trigger.md).
## Examples of trigger recurrence schedules

This section provides examples of recurrence schedules. It focuses on the **schedule** object and its elements. The examples assume that the **interval** value is 1 and that the **frequency** value is correct according to the schedule definition. For example, you can't have a **frequency** value of "day" and also have a **monthDays** modification in the **schedule** object. These kinds of restrictions are described in the table in the preceding section.
The examples assume that the **interval** value is 1 and that the **frequency**
| `{"minutes":[15,45], "hours":[5,17], "monthlyOccurrences":[{"day":"wednesday", "occurrence":3}]}` | Run at 5:15 AM, 5:45 AM, 5:15 PM, and 5:45 PM on the third Wednesday of every month. | ## Trigger type comparison+ The tumbling window trigger and the schedule trigger both operate on time heartbeats. How are they different? > [!NOTE]
The following table provides a comparison of the tumbling window trigger and schedule trigger:
| **System variables** | Along with @trigger().scheduledTime and @trigger().startTime, it also supports the use of the **WindowStart** and **WindowEnd** system variables. Users can access `trigger().outputs.windowStartTime` and `trigger().outputs.windowEndTime` as trigger system variables in the trigger definition. The values are used as the window start time and window end time, respectively. For example, for a tumbling window trigger that runs every hour, for the window 1:00 AM to 2:00 AM, the definition is `trigger().outputs.windowStartTime = 2017-09-01T01:00:00Z` and `trigger().outputs.windowEndTime = 2017-09-01T02:00:00Z`. | Only supports default @trigger().scheduledTime and @trigger().startTime variables. | | **Pipeline-to-trigger relationship** | Supports a one-to-one relationship. Only one pipeline can be triggered. | Supports many-to-many relationships. Multiple triggers can kick off a single pipeline. A single trigger can kick off multiple pipelines. |
+## Event-based trigger
+
+An event-based trigger runs pipelines in response to an event. There are two flavors of event-based triggers.
+
+* _Storage event trigger_ runs a pipeline in response to events in a storage account, such as the arrival of a file, or the deletion of a file, in an Azure Blob Storage account.
+* _Custom event trigger_ processes and handles [custom topics](../event-grid/custom-topics.md) in Event Grid.
+
+For more information about event-based triggers, see [Storage Event Trigger](how-to-create-event-trigger.md) and [Custom Event Trigger](how-to-create-custom-event-trigger.md).
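To make the first flavor concrete, a storage event trigger definition looks roughly like the following (a hedged sketch based on the `BlobEventsTrigger` schema; the trigger name, paths, pipeline name, and scope are placeholders, and the `<...>` values must be filled in for a real deployment):

```json
{
    "name": "MyStorageEventTrigger",
    "properties": {
        "type": "BlobEventsTrigger",
        "typeProperties": {
            "blobPathBeginsWith": "/mycontainer/blobs/",
            "blobPathEndsWith": ".csv",
            "ignoreEmptyBlobs": true,
            "events": [ "Microsoft.Storage.BlobCreated" ],
            "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
        },
        "pipelines": [
            {
                "pipelineReference": { "referenceName": "MyPipeline", "type": "PipelineReference" }
            }
        ]
    }
}
```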
## Next steps

See the following tutorials:

- [Quickstart: Create a data factory by using the .NET SDK](quickstart-create-data-factory-dot-net.md)
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
Previously updated : 02/10/2021 Last updated : 03/16/2021 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory
If the requirements aren't met, Azure Data Factory checks the settings and automatically falls back to the staged copy for data movement.
4. `nullValue` is left as default or set to **empty string** (""), and `treatEmptyAsNull` is left as default or set to true. 5. `encodingName` is left as default or set to **utf-8**. 6. `quoteChar`, `escapeChar`, and `skipLineCount` aren't specified. PolyBase supports skipping the header row, which can be configured as `firstRowAsHeader` in ADF.
- 7. `compression` can be **no compression**, **GZip**, or **Deflate**.
+ 7. `compression` can be **no compression**, **``GZip``**, or **Deflate**.
3. If your source is a folder, `recursive` in copy activity must be set to true.
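To make the PolyBase path concrete, here is a hedged sketch of a copy activity sink that enables PolyBase; the reject settings shown are illustrative values, not recommendations from this article:

```json
"sink": {
    "type": "SqlDWSink",
    "allowPolyBase": true,
    "polyBaseSettings": {
        "rejectType": "percentage",
        "rejectValue": 10.0,
        "rejectSampleValue": 100,
        "useTypeDefault": true
    }
}
```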
Using COPY statement supports the following configuration:
2. Format settings are with the following:
- 1. For **Parquet**: `compression` can be **no compression**, **Snappy**, or **GZip**.
+ 1. For **Parquet**: `compression` can be **no compression**, **Snappy**, or **``GZip``**.
2. For **ORC**: `compression` can be **no compression**, **```zlib```**, or **Snappy**. 3. For **Delimited text**: 1. `rowDelimiter` is explicitly set as **single character** or "**\r\n**"; the default value is not supported.
Using COPY statement supports the following configuration:
3. `encodingName` is left as default or set to **utf-8 or utf-16**. 4. `escapeChar` must be the same as `quoteChar`, and must not be empty. 5. `skipLineCount` is left as default or set to 0.
- 6. `compression` can be **no compression** or **GZip**.
+ 6. `compression` can be **no compression** or **``GZip``**.
3. If your source is a folder, `recursive` in copy activity must be set to true, and `wildcardFilename` needs to be `*`.
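For comparison with the PolyBase configuration, a minimal sketch of a copy activity sink that uses the COPY statement instead (property names follow the connector's JSON schema; the sink shown here is illustrative):

```json
"sink": {
    "type": "SqlDWSink",
    "allowCopyCommand": true
}
```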
Settings specific to Azure Synapse Analytics are available in the **Settings** t
- Recreate: The table will get dropped and recreated. Required if creating a new table dynamically. - Truncate: All rows from the target table will get removed.
-**Enable staging:** Determines whether or not to use [PolyBase](/sql/relational-databases/polybase/polybase-guide) when writing to Azure Synapse Analytics. The staging storage is configured in [Execute Data Flow activity](control-flow-execute-data-flow-activity.md).
+**Enable staging:** This enables loading into Azure Synapse Analytics SQL Pools using the copy command and is recommended for most Synapse sinks. The staging storage is configured in [Execute Data Flow activity](control-flow-execute-data-flow-activity.md).
- When you use managed identity authentication for your storage linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively. - If your Azure Storage is configured with VNet service endpoint, you must use managed identity authentication with "allow trusted Microsoft service" enabled on storage account, refer to [Impact of using VNet Service Endpoints with Azure storage](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage).
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-http.md
description: Learn how to copy data from a cloud or on-premises HTTP source to s
Previously updated : 12/10/2019 Last updated : 03/16/2021 # Copy data from an HTTP endpoint by using Azure Data Factory
The following properties are supported for the HTTP linked service:
| type | The **type** property must be set to **HttpServer**. | Yes | | url | The base URL to the web server. | Yes | | enableServerCertificateValidation | Specify whether to enable server TLS/SSL certificate validation when you connect to an HTTP endpoint. If your HTTPS server uses a self-signed certificate, set this property to **false**. | No<br /> (the default is **true**) |
-| authenticationType | Specifies the authentication type. Allowed values are **Anonymous**, **Basic**, **Digest**, **Windows**, and **ClientCertificate**. <br><br> See the sections that follow this table for more properties and JSON samples for these authentication types. | Yes |
+| authenticationType | Specifies the authentication type. Allowed values are **Anonymous**, **Basic**, **Digest**, **Windows**, and **ClientCertificate**. User-based OAuth isn't supported. You can additionally configure authentication headers in the `authHeader` property. See the sections that follow this table for more properties and JSON samples for these authentication types. | Yes |
+| authHeaders | Additional HTTP request headers for authentication.<br/> For example, to use API key authentication, you can select the authentication type as "Anonymous" and specify the API key in the header. | No |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to use to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, the default Azure Integration Runtime is used. |No | ### Using Basic, Digest, or Windows authentication
If you use **certThumbprint** for authentication and the certificate is installe
} ```
+### Using authentication headers
+
+In addition, you can configure request headers for authentication along with the built-in authentication types.
+
+**Example: Using API key authentication**
+
+```json
+{
+ "name": "HttpLinkedService",
+ "properties": {
+ "type": "HttpServer",
+ "typeProperties": {
+ "url": "<HTTP endpoint>",
+ "authenticationType": "Anonymous",
+ "authHeader": {
+ "x-api-key": {
+ "type": "SecureString",
+ "value": "<API key>"
+ }
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+ ## Dataset properties For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article.
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
description: Learn how to copy data from a cloud or on-premises REST source to s
Previously updated : 12/08/2020 Last updated : 03/16/2021 # Copy data from and to a REST endpoint by using Azure Data Factory
The following properties are supported for the REST linked service:
| type | The **type** property must be set to **RestService**. | Yes | | url | The base URL of the REST service. | Yes | | enableServerCertificateValidation | Whether to validate server-side TLS/SSL certificate when connecting to the endpoint. | No<br /> (the default is **true**) |
-| authenticationType | Type of authentication used to connect to the REST service. Allowed values are **Anonymous**, **Basic**, **AadServicePrincipal**, and **ManagedServiceIdentity**. Refer to corresponding sections below on more properties and examples respectively. | Yes |
+| authenticationType | Type of authentication used to connect to the REST service. Allowed values are **Anonymous**, **Basic**, **AadServicePrincipal**, and **ManagedServiceIdentity**. User-based OAuth isn't supported. You can additionally configure authentication headers in the `authHeader` property. Refer to corresponding sections below on more properties and examples respectively.| Yes |
+| authHeaders | Additional HTTP request headers for authentication.<br/> For example, to use API key authentication, you can select the authentication type as "Anonymous" and specify the API key in the header. | No |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to use to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, this property uses the default Azure Integration Runtime. |No | ### Use basic authentication
Set the **authenticationType** property to **ManagedServiceIdentity**. In additi
} ```
+### Using authentication headers
+
+In addition, you can configure request headers for authentication along with the built-in authentication types.
+
+**Example: Using API key authentication**
+
+```json
+{
+ "name": "RESTLinkedService",
+ "properties": {
+ "type": "RestService",
+ "typeProperties": {
+ "url": "<REST endpoint>",
+ "authenticationType": "Anonymous",
+ "authHeader": {
+ "x-api-key": {
+ "type": "SecureString",
+ "value": "<API key>"
+ }
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+ ## Dataset properties This section provides a list of properties that the REST dataset supports.
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sftp.md
Previously updated : 08/28/2020 Last updated : 03/16/2021 # Copy data from and to the SFTP server by using Azure Data Factory
The SFTP connector is supported for the following activities:
Specifically, the SFTP connector supports: -- Copying files from and to the SFTP server by using *Basic* or *SshPublicKey* authentication.
+- Copying files from and to the SFTP server by using **Basic**, **SSH public key**, or **multi-factor** authentication.
- Copying files as is or by parsing or generating files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). ## Prerequisites
The following properties are supported for the SFTP linked service:
| port | The port on which the SFTP server is listening.<br/>The allowed value is an integer, and the default value is *22*. |No | | skipHostKeyValidation | Specify whether to skip host key validation.<br/>Allowed values are *true* and *false* (default). | No | | hostKeyFingerprint | Specify the fingerprint of the host key. | Yes, if the "skipHostKeyValidation" is set to false. |
-| authenticationType | Specify the authentication type.<br/>Allowed values are *Basic* and *SshPublicKey*. For more properties, see the [Use basic authentication](#use-basic-authentication) section. For JSON examples, see the [Use SSH public key authentication](#use-ssh-public-key-authentication) section. |Yes |
+| authenticationType | Specify the authentication type.<br/>Allowed values are *Basic*, *SshPublicKey*, and *MultiFactor*. For more properties, see the [Use basic authentication](#use-basic-authentication) section. For JSON examples, see the [Use SSH public key authentication](#use-ssh-public-key-authentication) section. |Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. To learn more, see the [Prerequisites](#prerequisites) section. If the integration runtime isn't specified, the service uses the default Azure Integration Runtime. |No | ### Use basic authentication
To use basic authentication, set the *authenticationType* property to *Basic*, a
```json { "name": "SftpLinkedService",
- "type": "linkedservices",
"properties": { "type": "Sftp", "typeProperties": {
To use SSH public key authentication, set "authenticationType" property as **Ssh
```json { "name": "SftpLinkedService",
- "type": "Linkedservices",
"properties": { "type": "Sftp", "typeProperties": {
To use SSH public key authentication, set "authenticationType" property as **Ssh
} ```
+### Use multi-factor authentication
+
+To use multi-factor authentication, which is a combination of basic and SSH public key authentication, specify the user name, password, and private key info described in the sections above.
+
+**Example: multi-factor authentication**
+
+```json
+{
+ "name": "SftpLinkedService",
+ "properties": {
+ "type": "Sftp",
+ "typeProperties": {
+ "host": "<host>",
+ "port": 22,
+ "authenticationType": "MultiFactor",
+ "userName": "<username>",
+ "password": {
+ "type": "SecureString",
+ "value": "<password>"
+ },
+ "privateKeyContent": {
+ "type": "SecureString",
+ "value": "<base64 encoded private key content>"
+ },
+ "passPhrase": {
+ "type": "SecureString",
+ "value": "<passphrase for private key>"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of integration runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+ ## Dataset properties For a full list of sections and properties that are available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article.
data-factory Continuous Integration Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment.md
if ($predeployment -eq $true) {
#Stop all triggers Write-Host "Stopping deployed triggers`n" $triggersToStop | ForEach-Object {
- if ($_.TriggerType -eq "BlobEventsTrigger") {
+ if ($_.TriggerType -eq "BlobEventsTrigger" -or $_.TriggerType -eq "CustomEventsTrigger") {
Write-Host "Unsubscribing" $_.Name "from events" $status = Remove-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name while ($status.Status -ne "Disabled"){
else {
Write-Host "Deleting trigger " $_.Name $trig = Get-AzDataFactoryV2Trigger -name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName if ($trig.RuntimeState -eq "Started") {
- if ($_.TriggerType -eq "BlobEventsTrigger") {
+ if ($_.TriggerType -eq "BlobEventsTrigger" -or $_.TriggerType -eq "CustomEventsTrigger") {
Write-Host "Unsubscribing trigger" $_.Name "from events" $status = Remove-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name while ($status.Status -ne "Disabled"){
else {
#Start active triggers - after cleanup efforts Write-Host "Starting active triggers" $triggersToStart | ForEach-Object {
- if ($_.TriggerType -eq "BlobEventsTrigger") {
+ if ($_.TriggerType -eq "BlobEventsTrigger" -or $_.TriggerType -eq "CustomEventsTrigger") {
Write-Host "Subscribing" $_.Name "to events" $status = Add-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name while ($status.Status -ne "Enabled"){
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-system-variables.md
These system variables can be referenced anywhere in the trigger JSON for trigge
| @triggerBody().folderName |Path to the folder that contains the file specified by `@triggerBody().fileName`. The first segment of the folder path is the name of the Azure Blob Storage container. | | @trigger().startTime |Time at which the trigger fired to invoke the pipeline run. |
+## Custom event trigger scope
+
+These system variables can be referenced anywhere in the trigger JSON for triggers of type [CustomEventsTrigger](concepts-pipeline-execution-triggers.md#event-based-trigger).
+
+>[!NOTE]
+>Azure Data Factory expects custom events to be formatted with the [Azure Event Grid event schema](../event-grid/event-schema.md).
+
+| Variable Name | Description |
+| | |
+| @triggerBody().event.eventType | Type of event that triggered the Custom Event Trigger run. The event type is a customer-defined field and can take any string value. |
+| @triggerBody().event.subject | Subject of the custom event that caused the trigger to fire. |
+| @triggerBody().event.data._keyName_ | The _data_ field in a custom event is a free-form JSON blob, which customers can use to send messages and data. Use _data.keyName_ to reference each field. For example, @triggerBody().event.data.callback returns the value for the _callback_ field stored under _data_. |
+| @trigger().startTime | Time at which the trigger fired to invoke the pipeline run. |
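As an illustrative, non-authoritative sketch, these variables are typically consumed by mapping them to pipeline parameters inside the trigger definition (the parameter names and the `callback` key below are hypothetical):

```json
"parameters": {
    "eventType": "@triggerBody().event.eventType",
    "subject": "@triggerBody().event.subject",
    "callbackUrl": "@triggerBody().event.data.callback"
}
```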
+ ## Next steps * For information about how these variables are used in expressions, see [Expression language & functions](control-flow-expression-language-functions.md).
data-factory Control Flow Webhook Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-webhook-activity.md
Last updated 03/25/2019
A webhook activity can control the execution of pipelines through your custom code. With the webhook activity, customers' code can call an endpoint and pass it a callback URL. The pipeline run waits for the callback invocation before it proceeds to the next activity.
+> [!IMPORTANT]
+> The webhook activity now allows you to surface error status and custom messages back to the activity and pipeline. Set _reportStatusOnCallBack_ to true, and include _StatusCode_ and _Error_ in the callback payload. For more information, see the [Additional Notes](#additional-notes) section.
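A hedged sketch of a callback body that surfaces an error back to the activity; the exact property casing and the error fields shown are assumptions for illustration, not confirmed by this article:

```json
{
    "output": {
        "testProp": "testPropValue"
    },
    "statusCode": "403",
    "error": {
        "ErrorCode": "Unauthorized",
        "Message": "Token is not valid."
    }
}
```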
+ ## Syntax ```json
A webhook activity can control the execution of pipelines through your custom co
"key": "value" }, "timeout": "00:03:00",
+ "reportStatusOnCallBack": false,
"authentication": { "type": "ClientCertificate", "pfx": "****",
data-factory Copy Activity Performance Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance-troubleshooting.md
When the copy performance doesn't meet your expectation, to troubleshoot single
- Consider to gradually tune the [parallel copies](copy-activity-performance-features.md), note that too many parallel copies may even hurt the performance.
-## Connector and IR performance
+## Connector and IR performance
This section explores some performance troubleshooting guides for particular connector type or integration runtime.
This section explores some performance troubleshooting guides for particular con
Activity execution time varies when the dataset is based on different Integration Runtime. -- **Symptoms**: Simply toggling the Linked Service dropdown in the dataset performs the same pipeline activities, but has drastically different run-times. When the dataset is based on the Managed Virtual Network Integration Runtime, it takes more than 2 minutes on average to complete the run, but it takes approximately 20 seconds to complete when based on the Default Integration Runtime.
+- **Symptoms**: Simply toggling the Linked Service dropdown in the dataset performs the same pipeline activities, but has drastically different run-times. When the dataset is based on the Managed Virtual Network Integration Runtime, it takes more time on average than the run when based on the Default Integration Runtime.
+
+- **Cause**: Checking the details of pipeline runs, you can see that the slow pipeline is running on Managed VNet (Virtual Network) IR while the normal one is running on Azure IR. By design, Managed VNet IR takes longer queue time than Azure IR as we are not reserving one compute node per data factory, so there is a warm up for each copy activity to start, and it occurs primarily on VNet join rather than Azure IR.
-- **Cause**: Checking the details of pipeline runs, you can see that the slow pipeline is running on Managed VNet (Virtual Network) IR while the normal one is running on Azure IR. By design, Managed VNet IR takes longer queue time than Azure IR as we are not reserving one compute node per data factory, so there is a warm up around 2 minutes for each copy activity to start, and it occurs primarily on VNet join rather than Azure IR.

### Low performance when loading data into Azure SQL Database
data-factory Data Factory Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-tutorials.md
+
+ Title: Azure Data Factory tutorials
+description: A list of tutorials demonstrating Azure Data Factory concepts
++++ Last updated : 03/16/2021++
+# Azure Data Factory tutorials
++
+Below is a list of tutorials to help explain and walk through a series of Data Factory concepts and scenarios.
+
+## Copy and ingest data
+
+[Copy data tool](tutorial-copy-data-tool.md)
+
+[Copy activity in pipeline](tutorial-copy-data-portal.md)
+
+[Copy data from on-premises to the cloud](tutorial-hybrid-copy-data-tool.md)
+
+[Amazon S3 to ADLS Gen2](load-azure-data-lake-storage-gen2.md)
+
+[Incremental copy pattern overview](tutorial-incremental-copy-overview.md)
+
+[Incremental pattern with change tracking](tutorial-incremental-copy-change-tracking-feature-portal.md)
+
+[Incremental SQL DB single table](tutorial-incremental-copy-portal.md)
+
+[Incremental SQL DB multiple tables](tutorial-incremental-copy-multiple-tables-portal.md)
+
+[CDC copy pipeline with SQL MI](tutorial-incremental-copy-change-data-capture-feature-portal.md)
+
+[Copy from SQL DB to Synapse SQL Pools](load-azure-sql-data-warehouse.md)
+
+[Copy SAP BW to ADLS Gen2](load-sap-bw-data.md)
+
+[Copy Office 365 to Azure Blob Store](load-office-365-data.md)
+
+[Bulk copy multiple tables](tutorial-bulk-copy-portal.md)
+
+[Copy pipeline with managed VNet](tutorial-copy-data-portal-private.md)
+
+## Data flows
+
+[Data flow tutorial videos](data-flow-tutorials.md)
+
+[Code-free data transformation at scale](tutorial-data-flow.md)
+
+[Delta lake transformations](tutorial-data-flow-delta-lake.md)
+
+[Data wrangling with Power Query](wrangling-tutorial.md)
+
+[Data flows inside managed VNet](tutorial-data-flow-private.md)
+
+## External data services
+
+[Azure Databricks notebook activity](transform-data-using-databricks-notebook.md)
+
+[HDI Spark transformations](tutorial-transform-data-spark-portal.md)
+
+[Hive transformations](tutorial-transform-data-hive-virtual-network-portal.md)
+
+## Pipelines
+
+[Control flow](tutorial-control-flow-portal.md)
+
+## SSIS
+
+[SSIS integration runtime](tutorial-deploy-ssis-packages-azure.md)
+
+## Data share
+
+[Data integration with Azure Data Share](lab-data-flow-data-share.md)
+
+## Data lineage
+
+[Azure Purview](turorial-push-lineage-to-purview.md)
+
+## Next steps
+Learn more about Data Factory [pipelines](concepts-pipelines-activities.md) and [data flows](concepts-data-flow-overview.md).
data-factory How To Create Custom Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-custom-event-trigger.md
+
+ Title: Create custom event triggers in Azure Data Factory
+description: Learn how to create a custom trigger in Azure Data Factory that runs a pipeline in response to a custom event published to Event Grid.
+++++ Last updated : 03/11/2021++
+# Create a trigger that runs a pipeline in response to a custom event (Preview)
++
+This article describes the Custom Event Triggers that you can create in your Data Factory pipelines.
+
+Event-driven architecture (EDA) is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require Data Factory customers to trigger pipelines based on certain events happening. Data Factory native integration with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) now covers [Custom Events](../event-grid/custom-topics.md): customers send arbitrary events to an event grid topic, and Data Factory subscribes and listens to the topic and triggers pipelines accordingly.
+
+> [!NOTE]
+> The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more info, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the `Microsoft.EventGrid/eventSubscriptions/*` action. This action is part of the EventGrid EventSubscription Contributor built-in role.
+
+Furthermore, by combining pipeline parameters and the Custom Event Trigger, customers can parse and reference the custom _data_ payload in pipeline runs. The _data_ field in the custom event payload is a free-form JSON key-value structure, giving customers maximum control over event-driven pipeline runs.
+
+> [!IMPORTANT]
+> If a key referenced in parameterization is missing from the custom event payload, the _trigger run_ fails with an error stating that the expression cannot be evaluated because property _keyName_ doesn't exist. In that case, __no__ _pipeline run_ is triggered by the event.
+
+## Set up an Event Grid custom topic
+
+To use the Custom Event Trigger in Data Factory, you need to _first_ set up a [custom topic in Event Grid](../event-grid/custom-topics.md). The workflow is different from the Storage Event Trigger, where Data Factory sets up the topic for you. Here you need to navigate to Azure Event Grid and create the topic yourself. For more information on how to create the custom topic, see the Azure Event Grid [Portal Tutorials](../event-grid/custom-topics.md#azure-portal-tutorials) and [CLI Tutorials](../event-grid/custom-topics.md#azure-cli-tutorials).
+
+Data Factory expects events to follow the [Event Grid event schema](../event-grid/event-schema.md). Make sure event payloads have the following fields.
+
+```json
+[
+ {
+ "topic": string,
+ "subject": string,
+ "id": string,
+ "eventType": string,
+ "eventTime": string,
+ "data":{
+ object-unique-to-each-publisher
+ },
+ "dataVersion": string,
+ "metadataVersion": string
+ }
+]
+```
+
+## Data Factory UI
+
+This section shows you how to create a custom event trigger within the Azure Data Factory User Interface.
+
+1. Switch to the **Edit** tab, shown with a pencil symbol.
+
+1. Select **Trigger** on the menu, then select **New/Edit**.
+
+1. On the **Add Triggers** page, select **Choose trigger...**, then select **+New**.
+
+1. Select trigger type **Custom Events**
+
+ :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-1-creation.png" alt-text="Screenshot of Author page to create a new custom event trigger in Data Factory UI." lightbox="media/how-to-create-custom-event-trigger/custom-event-1-creation-expanded.png":::
+
+1. Select your custom topic from the Azure subscription dropdown or manually enter the event topic scope.
+
+ > [!NOTE]
+ > To create a new or modify an existing Custom Event Trigger, the Azure account used to log into Data Factory and publish the custom event trigger must have appropriate role-based access control (Azure RBAC) permission on the topic. No additional permission is required: the Service Principal for the Azure Data Factory does _not_ need special permission to Event Grid. For more information about access control, see the [Role-based access control](#role-based-access-control) section.
+
+1. The **Subject begins with** and **Subject ends with** properties allow you to filter the events for which you want to trigger the pipeline. Both properties are optional.
+
+1. Use **+ New** to add the **Event Types** you want to filter on. The Custom Event Trigger employs an OR relationship for the list: if a custom event has an _eventType_ property that matches any listed here, it triggers a pipeline run. The event type is case insensitive. For instance, in the screenshot below, the trigger matches all _copycompleted_ or _copysucceeded_ events with a subject that starts with _factories_.
+
+ :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-2-properties.png" alt-text="Screenshot of Edit Trigger page to explain Event Types and Subject filtering in Data Factory UI.":::
+
+1. The Custom Event Trigger can parse and send a custom _data_ payload to your pipeline. First create the pipeline parameters, and fill in the values on the **Parameters** page. Use the format **@triggerBody().event.data._keyName_** to parse the data payload and pass values to pipeline parameters. For a detailed explanation, see [Reference Trigger Metadata in Pipelines](how-to-use-trigger-parameterization.md) and [System Variables in Custom Event Trigger](control-flow-system-variables.md#custom-event-trigger-scope).
+
+ :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-4-trigger-values.png" alt-text="Screenshot of pipeline Parameters setting.":::
+
+ :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-3-parameters.png" alt-text="Screenshot of Parameters page to reference data payload in custom event.":::
+
+1. Click **OK** once you are done.
+
+## JSON schema
+
+The following table provides an overview of the schema elements that are related to custom event triggers:
+
+| **JSON Element** | **Description** | **Type** | **Allowed Values** | **Required** |
+| - | | -- | | |
+| **scope** | The Azure Resource Manager resource ID of the event grid topic. | String | Azure Resource Manager ID | Yes |
+| **events** | The type of events that cause this trigger to fire. | Array of strings | | Yes, at least one value is expected |
+| **subjectBeginsWith** | Subject field must begin with the pattern provided for the trigger to fire. For example, `factories` only fires the trigger for event subject starting with `factories`. | String | | No |
+| **subjectEndsWith** | Subject field must end with the pattern provided for the trigger to fire. | String | | No |
+
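Put together, a minimal sketch of a custom event trigger using these schema elements (the subscription, resource group, topic, event type, and pipeline names below are placeholders):

```json
{
    "name": "MyCustomEventTrigger",
    "properties": {
        "type": "CustomEventsTrigger",
        "typeProperties": {
            "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name>",
            "events": ["copycompleted"],
            "subjectBeginsWith": "factories"
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "MyPipeline",
                    "type": "PipelineReference"
                }
            }
        ]
    }
}
```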
+## Role-based access control
+
+Azure Data Factory uses Azure role-based access control (Azure RBAC) to ensure that unauthorized access to listen to, subscribe to updates from, and trigger pipelines linked to custom events is strictly prohibited.
+
+* To successfully create a new or update an existing Custom Event Trigger, the Azure account signed into Data Factory needs to have appropriate access to the relevant Event Grid topic. Otherwise, the operation fails with _Access Denied_.
+* Data Factory needs no special permission to your Event Grid, and you do _not_ need to assign special Azure RBAC permission to Data Factory service principal for the operation.
+
+Specifically, the customer needs _Microsoft.EventGrid/EventSubscriptions/Write_ permission on _/subscriptions/####/resourceGroups/####/providers/Microsoft.EventGrid/topics/someTopics_.
+
+## Next steps
+
+* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution).
+* To learn how to reference trigger metadata in pipelines, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md).
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
This section shows you how to create a storage event trigger within the Azure Da
> The Storage Event Trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. Due to an Azure Event Grid limitation, Azure Data Factory only supports a maximum of 500 storage event triggers per storage account. > [!NOTE]
- > To create and modify a new Storage Event Trigger, the Azure account used to log into Data Factory and publish the storage event trigger must have appropriate role based access control (Azure RBAC) permission on the storage account. No additional permission is required: Service Principal for the Azure Data Factory does _not_ need special permission to either the Storage account or Event Grid. For more information about access control, see [Role based access control](#role-based-access-control) section.
+ > To create a new or modify an existing Storage Event Trigger, the Azure account used to log into Data Factory and publish the storage event trigger must have appropriate role based access control (Azure RBAC) permission on the storage account. No additional permission is required: Service Principal for the Azure Data Factory does _not_ need special permission to either the Storage account or Event Grid. For more information about access control, see [Role based access control](#role-based-access-control) section.
1. The **Blob path begins with** and **Blob path ends with** properties allow you to specify the containers, folders, and blob names for which you want to receive events. Your storage event trigger requires at least one of these properties to be defined. You can use variety of patterns for both **Blob path begins with** and **Blob path ends with** properties, as shown in the examples later in this article.
This section shows you how to create a storage event trigger within the Azure Da
:::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image3.png" alt-text="Screenshot of storage event trigger preview page.":::
-1. To attach a pipeline to this trigger, go to the pipeline canvas and click **Add trigger** and select **New/Edit**. When the side nav appears, click on the **Choose trigger...** dropdown and select the trigger you created. Click **Next: Data preview** to confirm the configuration is correct and then **Next** to validate the Data preview is correct.
+1. To attach a pipeline to this trigger, go to the pipeline canvas and click **Trigger** and select **New/Edit**. When the side nav appears, click on the **Choose trigger...** dropdown and select the trigger you created. Click **Next: Data preview** to confirm the configuration is correct and then **Next** to validate the Data preview is correct.
1. If your pipeline has parameters, you can specify them on the trigger runs parameter side nav. The storage event trigger captures the folder path and file name of the blob into the properties `@triggerBody().folderPath` and `@triggerBody().fileName`. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After mapping the properties to parameters, you can access the values captured by the trigger through the `@pipeline().parameters.parameterName` expression throughout the pipeline. For a detailed explanation, see [Reference Trigger Metadata in Pipelines](how-to-use-trigger-parameterization.md). :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image4.png" alt-text="Screenshot of storage event trigger mapping properties to pipeline parameters.":::
-
+ In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder _event-testing_ in the container _sample-data_. The **folderPath** and **fileName** properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path sample-data/event-testing, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `moviesDB.csv`. These values are mapped, in the example, to the pipeline parameters `sourceFolder` and `sourceFile`, which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile` respectively. 1. Click **Finish** once you are done.
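The folder/file split described above can be sketched locally. This is an illustration only (plain Python string handling, not Data Factory's actual implementation) of how a blob path yields the `folderPath` and `fileName` values the trigger captures:

```python
def split_blob_path(blob_path: str):
    """Split a blob path into (folder path, file name), mirroring how the
    storage event trigger populates @triggerBody().folderPath and
    @triggerBody().fileName. Illustrative sketch only."""
    folder, _, name = blob_path.rpartition("/")
    return folder, name

folder_path, file_name = split_blob_path("sample-data/event-testing/moviesDB.csv")
print(folder_path)  # sample-data/event-testing
print(file_name)    # moviesDB.csv
```

The same values would then flow into the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile` after the parameter mapping shown above.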
data-factory How To Use Trigger Parameterization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-use-trigger-parameterization.md
This article describes how trigger metadata, such as trigger start time, can be
A pipeline sometimes needs to read metadata from the trigger that invokes it. For instance, with a Tumbling Window Trigger run, the pipeline processes different data slices or folders based on the window start and end time. In Azure Data Factory, we use parameterization and [System Variable](control-flow-system-variables.md) to pass metadata from trigger to pipeline.
-This pattern is especially useful for [Tumbling Window Trigger](how-to-create-tumbling-window-trigger.md), where trigger provides window start and end time.
+This pattern is especially useful for [Tumbling Window Trigger](how-to-create-tumbling-window-trigger.md), where the trigger provides window start and end time, and [Custom Event Trigger](how-to-create-custom-event-trigger.md), where the trigger parses and processes values in the [custom defined _data_ field](../event-grid/event-schema.md).
> [!NOTE] > Different trigger types provide different metadata. For more information, see [System Variable](control-flow-system-variables.md)
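To make the window metadata concrete, here is a minimal sketch (plain Python, not Data Factory code) of how a time range breaks into the contiguous, non-overlapping windows whose start and end times a tumbling window trigger hands to the pipeline:

```python
from datetime import datetime, timedelta

def tumbling_windows(start, end, interval):
    """Yield contiguous (window_start, window_end) pairs, analogous to the
    windowStart/windowEnd metadata a tumbling window trigger passes."""
    window_start = start
    while window_start < end:
        window_end = min(window_start + interval, end)
        yield window_start, window_end
        window_start = window_end

# Three one-hour windows over a three-hour range:
windows = list(tumbling_windows(datetime(2021, 3, 1, 0), datetime(2021, 3, 1, 3), timedelta(hours=1)))
```

Each `(window_start, window_end)` pair corresponds to one trigger run, which the pipeline would use to select the matching data slice or folder.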
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/policy-reference.md
Previously updated : 03/10/2021 Last updated : 03/17/2021 # Azure Policy built-in definitions for Data Factory (Preview)
data-factory Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/samples-powershell.md
Previously updated : 01/16/2018 Last updated : 03/16/2021 # Azure PowerShell samples for Azure Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
digital-twins How To Create Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-azure-function.md
You can set up security access for the function app using either the Azure CLI o
# [CLI](#tab/cli) You can run these commands in [Azure Cloud Shell](https://shell.azure.com) or a [local Azure CLI installation](/cli/azure/install-azure-cli).
+You can use the function app's system-managed identity to give it the _**Azure Digital Twins Data Owner**_ role for your Azure Digital Twins instance. This will give the function app permission in the instance to perform data plane activities. Then, make the URL of your Azure Digital Twins instance accessible to your function by setting an environment variable.
### Assign access role + The function skeleton from earlier examples requires that a bearer token be passed to it, in order to authenticate with Azure Digital Twins. To make sure that this bearer token is passed, you'll need to set up [Managed Service Identity (MSI)](../active-directory/managed-identities-azure-resources/overview.md) permissions for the function app to access Azure Digital Twins. This only needs to be done once for each function app.
-You can use the function app's system-managed identity to give it the _**Azure Digital Twins Data Owner**_ role for your Azure Digital Twins instance. This will give the function app permission in the instance to perform data plane activities. Then, make the URL of Azure Digital Twins instance accessible to your function by setting an environment variable.
1. Use the following command to see the details of the system-managed identity for the function. Take note of the _principalId_ field in the output.
Complete the following steps in the [Azure portal](https://portal.azure.com/).
### Assign access role + A system assigned managed identity enables Azure resources to authenticate to cloud services (for example, Azure Key Vault) without storing credentials in code. Once enabled, all necessary permissions can be granted via Azure role-based access control. The lifecycle of this type of managed identity is tied to the lifecycle of this resource. Additionally, each resource can only have one system assigned managed identity. 1. In the [Azure portal](https://portal.azure.com/), search for your function app by typing its name into the search bar. Select your app from the results.
digital-twins How To Provision Using Device Provisioning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
Add the setting with this Azure CLI command:
az functionapp config appsettings set --settings "ADT_SERVICE_URL=https://<Azure Digital Twins instance _host name_>" -g <resource group> -n <your App Service (function app) name> ```
-Ensure that the permissions and Managed Identity role assignment are configured correctly for the function app, as described in the section [*Assign permissions to the function app*](tutorial-end-to-end.md#assign-permissions-to-the-function-app) in the end-to-end tutorial.
+Ensure that the permissions and Managed Identity role assignment are configured correctly for the function app, as described in the section [*Assign permissions to the function app*](tutorial-end-to-end.md#configure-permissions-for-the-function-app) in the end-to-end tutorial.
### Create Device Provisioning enrollment
Next, you will need to configure the function environment variable for connectin
az functionapp config appsettings set --settings "EVENTHUB_CONNECTIONSTRING=<Event Hubs SAS connection string Listen>" -g <resource group> -n <your App Service (function app) name> ```
-Ensure that the permissions and Managed Identity role assignment are configured correctly for the function app, as described in the section [*Assign permissions to the function app*](tutorial-end-to-end.md#assign-permissions-to-the-function-app) in the end-to-end tutorial.
+Ensure that the permissions and Managed Identity role assignment are configured correctly for the function app, as described in the section [*Assign permissions to the function app*](tutorial-end-to-end.md#configure-permissions-for-the-function-app) in the end-to-end tutorial.
### Create an IoT Hub route for lifecycle events
digital-twins How To Use Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-apis-sdks.md
To use the control plane APIs:
- [**Java**](https://search.maven.org/search?q=a:azure-mgmt-digitaltwins) ([reference [auto-generated]](/java/api/overview/azure/digitaltwins)) ([source](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/digitaltwins)) - [**JavaScript**](https://www.npmjs.com/package/@azure/arm-digitaltwins) ([source](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/digitaltwins/arm-digitaltwins)) - [**Python**](https://pypi.org/project/azure-mgmt-digitaltwins/) ([source](https://github.com/Azure/azure-sdk-for-python/tree/release/v3/sdk/digitaltwins/azure-mgmt-digitaltwins))
- - [**Go**](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/digitaltwins/mgmt/2020-10-31/digitaltwins) ([source](https://github.com/Azure/azure-sdk-for-go/tree/master/services/digitaltwins/mgmt/2020-10-31/digitaltwins))
+ - [**Go**](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/services/digitaltwins/mgmt) ([source](https://github.com/Azure/azure-sdk-for-go/tree/master/services/digitaltwins/mgmt))
You can also exercise control plane APIs by interacting with Azure Digital Twins through the [Azure portal](https://portal.azure.com) and [CLI](how-to-use-cli.md).
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-end-to-end.md
Back in your Visual Studio window where the _**AdtE2ESample**_ project is open,
[!INCLUDE [digital-twins-publish-azure-function.md](../../includes/digital-twins-publish-azure-function.md)]
-For your function app to be able to access Azure Digital Twins, it will need to have a system-managed identity with permissions to access your Azure Digital Twins instance. You'll set that up next.
+For your function app to be able to access Azure Digital Twins, it will need to have permissions to access your Azure Digital Twins instance and the instance's host name. You'll configure these next.
-### Assign permissions to the function app
+### Configure permissions for the function app
-To enable the function app to access Azure Digital Twins, the next step is to configure an app setting, assign the app a system-managed Azure AD identity, and give this identity the *Azure Digital Twins Data Owner* role in the Azure Digital Twins instance. This role is required for any user or function that wants to perform many data plane activities on the instance. You can read more about security and role assignments in [*Concepts: Security for Azure Digital Twins solutions*](concepts-security.md).
+Two settings need to be configured for the function app to access your Azure Digital Twins instance. You can set both with commands in the [Azure Cloud Shell](https://shell.azure.com).
-In Azure Cloud Shell, use the following command to set an application setting which your function app will use to reference your Azure Digital Twins instance. Fill in the placeholders with the details of your resources (remember that your Azure Digital Twins instance URL is its host name preceded by *https://*).
+#### Assign access role
-```azurecli-interactive
-az functionapp config appsettings set -g <your-resource-group> -n <your-App-Service-(function-app)-name> --settings "ADT_SERVICE_URL=<your-Azure-Digital-Twins-instance-URL>"
-```
+The first setting gives the function app the **Azure Digital Twins Data Owner** role in the Azure Digital Twins instance. This role is required for any user or function that wants to perform many data plane activities on the instance. You can read more about security and role assignments in [*Concepts: Security for Azure Digital Twins solutions*](concepts-security.md).
-The output is the list of settings for the Azure Function, which should now contain an entry called **ADT_SERVICE_URL**.
+1. Use the following command to see the details of the system-managed identity for the function. Take note of the **principalId** field in the output.
-Use the following command to create the system-managed identity. Look for the **principalId** field in the output.
+ ```azurecli-interactive
+ az functionapp identity show -g <your-resource-group> -n <your-App-Service-(function-app)-name>
+ ```
-```azurecli-interactive
-az functionapp identity assign -g <your-resource-group> -n <your-App-Service-(function-app)-name>
-```
+ >[!NOTE]
+ > If the result is empty instead of showing details of an identity, create a new system-managed identity for the function using this command:
+ >
+ >```azurecli-interactive
+ >az functionapp identity assign -g <your-resource-group> -n <your-App-Service-(function-app)-name>
+ >```
+ >
+ > The output will then display details of the identity, including the **principalId** value required for the next step.
+
+1. Use the **principalId** value in the following command to assign the function app's identity to the **Azure Digital Twins Data Owner** role for your Azure Digital Twins instance.
-Use the **principalId** value from the output in the following command, to assign the function app's identity to the *Azure Digital Twins Data Owner* role for your Azure Digital Twins instance.
+ ```azurecli-interactive
+ az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<principal-ID>" --role "Azure Digital Twins Data Owner"
+ ```
+This command outputs information about the role assignment you've created. The function app now has permissions to access data in your Azure Digital Twins instance.
+
+#### Configure application settings
+
+The second setting creates an **environment variable** for the function with the URL of your Azure Digital Twins instance. The function code will use this to refer to your instance. For more information about environment variables, see [*Manage your function app*](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal).
+
+Run the command below, filling in the placeholders with the details of your resources.
```azurecli-interactive
-az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<principal-ID>" --role "Azure Digital Twins Data Owner"
+az functionapp config appsettings set -g <your-resource-group> -n <your-App-Service-(function-app)-name> --settings "ADT_SERVICE_URL=https://<your-Azure-Digital-Twins-instance-hostname>"
```
-The result of this command is outputted information about the role assignment you've created. The function app now has permissions to access your Azure Digital Twins instance.
+The output is the list of settings for the Azure Function, which should now contain an entry called **ADT_SERVICE_URL**.
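Inside the function, an app setting like **ADT_SERVICE_URL** surfaces as an environment variable. A minimal sketch of how function code might read it (Python for illustration; the placeholder value stands in for your instance's host name and the helper function is hypothetical):

```python
import os

def get_adt_service_url() -> str:
    """Read the ADT_SERVICE_URL app setting, as function code would.
    Raises if the setting was never configured on the function app."""
    url = os.environ.get("ADT_SERVICE_URL")
    if not url:
        raise RuntimeError("ADT_SERVICE_URL app setting is not configured")
    return url

# Placeholder standing in for your Azure Digital Twins instance host name:
os.environ["ADT_SERVICE_URL"] = "https://<your-instance-hostname>"
print(get_adt_service_url())
```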
+ ## Process simulated telemetry from an IoT Hub device
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
event-hubs Event Hubs Availability And Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-availability-and-consistency.md
We recommend sending events to an event hub without setting partition informatio
In this section, you learn how to send events to a specific partition using different programming languages. ### [.NET](#tab/dotnet)
-To send events to a specific partition, create the batch using the [EventHubProducerClient.CreateBatchAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.createbatchasync#Azure_Messaging_EventHubs_Producer_EventHubProducerClient_CreateBatchAsync_Azure_Messaging_EventHubs_Producer_CreateBatchOptions_System_Threading_CancellationToken_) method by specifying either the `PartitionId` or the `PartitionKey` in [CreateBatchOptions](//dotnet/api/azure.messaging.eventhubs.producer.createbatchoptions). The following code sends a batch of events to a specific partition by specifying a partition key.
+To send events to a specific partition, create the batch using the [EventHubProducerClient.CreateBatchAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.createbatchasync#Azure_Messaging_EventHubs_Producer_EventHubProducerClient_CreateBatchAsync_Azure_Messaging_EventHubs_Producer_CreateBatchOptions_System_Threading_CancellationToken_) method by specifying either the `PartitionId` or the `PartitionKey` in [CreateBatchOptions](/dotnet/api/azure.messaging.eventhubs.producer.createbatchoptions). The following code sends a batch of events to a specific partition by specifying a partition key. Event Hubs ensures that all events sharing a partition key value are stored together and delivered in order of arrival.
```csharp var batchOptions = new CreateBatchOptions { PartitionKey = "cities" };
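The partition hash is internal to Event Hubs; the sketch below (Python, with SHA-256 standing in for the real hash, which is an assumption) only demonstrates the property the paragraph relies on: a given partition key always maps to the same partition, so events sharing that key stay together and keep their relative order:

```python
import hashlib

def assign_partition(partition_key: str, partition_count: int) -> int:
    """Map a partition key to a partition index with a stable hash.
    Not Event Hubs' actual algorithm; illustrates key-to-partition stability."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % partition_count

# Every event keyed "cities" lands on the same partition of a 4-partition hub,
# so ordering among those events is preserved within that partition.
p = assign_partition("cities", 4)
```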
event-hubs Event Hubs Dedicated Cluster Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dedicated-cluster-create-portal.md
In this article, you created an Event Hubs cluster. For step-by-step instruction
- [.NET Core](event-hubs-dotnet-standard-getstarted-send.md) - [Java](event-hubs-java-get-started-send.md) - [Python](event-hubs-python-get-started-send.md)
- - [JavaScript](event-hubs-java-get-started-send.md)
+ - [JavaScript](event-hubs-node-get-started-send.md)
- [Use Azure portal to enable Event Hubs Capture](event-hubs-capture-enable-through-portal.md) - [Use Azure Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md)
event-hubs Event Hubs Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-diagnostic-logs.md
Customer-managed key user log JSON includes elements listed in the following tab
- [.NET Core](event-hubs-dotnet-standard-getstarted-send.md) - [Java](event-hubs-java-get-started-send.md) - [Python](event-hubs-python-get-started-send.md)
- - [JavaScript](event-hubs-java-get-started-send.md)
+ - [JavaScript](event-hubs-node-get-started-send.md)
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-features.md
For more information about Event Hubs, visit the following links:
- [.NET](event-hubs-dotnet-standard-getstarted-send.md) - [Java](event-hubs-java-get-started-send.md) - [Python](event-hubs-python-get-started-send.md)
- - [JavaScript](event-hubs-java-get-started-send.md)
+ - [JavaScript](event-hubs-node-get-started-send.md)
* [Event Hubs programming guide](event-hubs-programming-guide.md) * [Availability and consistency in Event Hubs](event-hubs-availability-and-consistency.md) * [Event Hubs FAQ](event-hubs-faq.md)
event-hubs Event Hubs Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-geo-dr.md
For more information about Event Hubs, visit the following links:
- [.NET Core](event-hubs-dotnet-standard-getstarted-send.md) - [Java](event-hubs-java-get-started-send.md) - [Python](event-hubs-python-get-started-send.md)
- - [JavaScript](event-hubs-java-get-started-send.md)
+ - [JavaScript](event-hubs-node-get-started-send.md)
* [Event Hubs FAQ](event-hubs-faq.md) * [Sample applications that use Event Hubs](https://github.com/Azure/azure-event-hubs/tree/master/samples)
event-hubs Event Hubs Node Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-node-get-started-send.md
Be sure to record the connection string and container name for later use in the
// Subscribe to the events, and specify handlers for processing the events and errors. const subscription = consumerClient.subscribe({ processEvents: async (events, context) => {
+ if (events.length === 0) {
+ console.log(`No events received within wait time. Waiting for next interval`);
+ return;
+ }
+
for (const event of events) { console.log(`Received event: '${event.body}' from partition: '${context.partitionId}' and consumer group: '${context.consumerGroup}'`); }
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
expressroute Expressroute Erdirect About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-erdirect-about.md
Title: 'About Azure ExpressRoute Direct'
-description: Learn about key features of Azure ExpressRoute Direct and information needed to onboard to ExpressRoute Direct, like available SKUs and technical requirements.
+description: Learn about key features of Azure ExpressRoute Direct and information needed to onboard to ExpressRoute Direct, like available SKUs, and technical requirements.
- Previously updated : 08/12/2019 Last updated : 03/17/2021 -- # About ExpressRoute Direct
-ExpressRoute Direct gives you the ability to connect directly into MicrosoftΓÇÖs global network at peering locations strategically distributed across the world. ExpressRoute Direct provides dual 100 Gbps or 10 Gbps connectivity, which supports Active/Active connectivity at scale.
+ExpressRoute Direct gives you the ability to connect directly into Microsoft's global network at peering locations strategically distributed around the world. ExpressRoute Direct provides dual 100-Gbps or 10-Gbps connectivity, which supports Active/Active connectivity at scale.
Key features that ExpressRoute Direct provides include, but aren't limited to:
Key features that ExpressRoute Direct provides include, but aren't limited to:
## Onboard to ExpressRoute Direct
-Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, send an Email to <ExpressRouteDirect@microsoft.com> with your subscription ID, including the following details:
+Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, run the following commands using Azure PowerShell:
+
+1. Sign in to Azure and select the subscription you wish to enroll.
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+
+ Select-AzSubscription -Subscription "<SubscriptionID or SubscriptionName>"
+ ```
+
+1. Register your subscription for Public Preview using the following command:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -FeatureName AllowExpressRoutePorts -ProviderNamespace Microsoft.Network
+ ```
+
+Once enrolled, verify that the **Microsoft.Network** resource provider is registered to your subscription. Registering a resource provider configures your subscription to work with the resource provider.
+
+1. Access your subscription settings as described in [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
+
+1. In your subscription, under **Resource Providers**, verify that the **Microsoft.Network** provider shows a **Registered** status. If the Microsoft.Network resource provider isn't present in the list of registered providers, add it.
-* Scenarios you're looking to accomplish with **ExpressRoute Direct**
-* Location preferences - see [Partners and peering locations](expressroute-locations-providers.md) for a complete list of all locations
-* Timeline for implementation
-* Any other questions
+If you begin to use ExpressRoute Direct and notice that there are no available ports in your chosen peering location, email ExpressRouteDirect@microsoft.com to request more inventory.
## ExpressRoute using a service provider and ExpressRoute Direct | **ExpressRoute using a service provider** | **ExpressRoute Direct** | | | |
-| Utilizes service providers to enable fast onboarding and connectivity into existing infrastructure | Requires 100 Gbps/10 Gbps infrastructure and full management of all layers
+| Uses service providers to enable fast onboarding and connectivity into existing infrastructure | Requires 100 Gbps/10 Gbps infrastructure and full management of all layers
| Integrates with hundreds of providers including Ethernet and MPLS | Direct/Dedicated capacity for regulated industries and massive data ingestion |
-| Circuits SKUs from 50 Mbps to 10 Gbps | Customer may select a combination of the following circuit SKUs on 100 Gbps ExpressRoute Direct: <ul><li>5 Gbps</li><li>10 Gbps</li><li>40 Gbps</li><li>100 Gbps</li></ul> Customer may select a combination of the following circuit SKUs on 10 Gbps ExpressRoute Direct:<ul><li>1 Gbps</li><li>2 Gbps</li><li>5 Gbps</li><li>10 Gbps</li></ul>
+| Circuits SKUs from 50 Mbps to 10 Gbps | Customer may select a combination of the following circuit SKUs on 100-Gbps ExpressRoute Direct: <ul><li>5 Gbps</li><li>10 Gbps</li><li>40 Gbps</li><li>100 Gbps</li></ul> Customer may select a combination of the following circuit SKUs on 10-Gbps ExpressRoute Direct:<ul><li>1 Gbps</li><li>2 Gbps</li><li>5 Gbps</li><li>10 Gbps</li></ul>
| Optimized for single tenant | Optimized for single tenant with multiple business units and multiple work environments ## ExpressRoute Direct circuits
-Microsoft Azure ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, and Microsoft 365.
+Microsoft Azure ExpressRoute allows you to extend your on-premises network into the Microsoft cloud over a private connection facilitated by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure and Microsoft 365.
-Each peering location has access to MicrosoftΓÇÖs global network and can access any region in a geopolitical zone by default and can access all global regions with a premium circuit.
+Each peering location has access to Microsoft's global network and can access any region in a geopolitical zone by default. You can access all global regions with a premium circuit.
-The functionality in most scenarios is equivalent to circuits that utilize an ExpressRoute service provider to operate. To support further granularity and new capabilities offered using ExpressRoute Direct, there are certain key capabilities that exist on ExpressRoute Direct Circuits.
+The functionality in most scenarios is equivalent to circuits that operate through an ExpressRoute service provider. To support further granularity and new capabilities offered using ExpressRoute Direct, there are certain key capabilities that exist on ExpressRoute Direct Circuits.
## Circuit SKUs
-ExpressRoute Direct supports massive data ingestion scenarios into Azure storage and other big data services. ExpressRoute circuits on 100 Gbps ExpressRoute Direct now also support **40 Gbps** and **100 Gbps** circuit SKUs. The physical port pairs are **100 or 10 Gbps** only and can have multiple virtual circuits. Circuit sizes:
+ExpressRoute Direct supports massive data ingestion scenarios into Azure storage and other big data services. ExpressRoute circuits on 100-Gbps ExpressRoute Direct now also support **40-Gbps** and **100-Gbps** circuit SKUs. The physical port pairs are **100 Gbps or 10 Gbps** only and can have multiple virtual circuits. Circuit sizes:
-| **100 Gbps ExpressRoute Direct** | **10 Gbps ExpressRoute Direct** |
+| **100-Gbps ExpressRoute Direct** | **10-Gbps ExpressRoute Direct** |
| | | | **Subscribed Bandwidth**: 200 Gbps | **Subscribed Bandwidth**: 20 Gbps | | <ul><li>5 Gbps</li><li>10 Gbps</li><li>40 Gbps</li><li>100 Gbps</li></ul> | <ul><li>1 Gbps</li><li>2 Gbps</li><li>5 Gbps</li><li>10 Gbps</li></ul>
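As a quick check against the SKU table above, here is a hypothetical helper (not part of any Azure SDK or CLI) that tells whether a circuit SKU is offered on a given ExpressRoute Direct port size:

```python
# Circuit SKUs offered per ExpressRoute Direct port size, per the table above.
ALLOWED_CIRCUIT_SKUS_GBPS = {
    100: {5, 10, 40, 100},  # 100-Gbps ExpressRoute Direct
    10: {1, 2, 5, 10},      # 10-Gbps ExpressRoute Direct
}

def is_valid_circuit_sku(port_size_gbps: int, circuit_sku_gbps: int) -> bool:
    """Return True if the requested circuit SKU is offered on the port size."""
    return circuit_sku_gbps in ALLOWED_CIRCUIT_SKUS_GBPS.get(port_size_gbps, set())

print(is_valid_circuit_sku(100, 40))  # True
print(is_valid_circuit_sku(10, 40))   # False
```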
ExpressRoute Direct supports massive data ingestion scenarios into Azure storage
## Technical Requirements * Microsoft Enterprise Edge Router (MSEE) Interfaces:
- * Dual 10 or 100 Gigabit Ethernet ports only across router pair
+ * Dual 10-Gigabit or 100-Gigabit Ethernet ports only across router pair
* Single Mode LR Fiber connectivity * IPv4 and IPv6 * IP MTU 1500 bytes
-* Switch/Router Layer 2/Layer 3 Connectivity:
+* Switch/Router Layer 2/Layer 3 Connectivity:
* Must support 1 802.1Q (Dot1Q) tag or two Tag 802.1Q (QinQ) tag encapsulation * Ethertype = 0x8100 * Must add the outer VLAN tag (STAG) based on the VLAN ID specified by Microsoft - *applicable only on QinQ* * Must support multiple BGP sessions (VLANs) per port and device
- * IPv4 and IPv6 connectivity. *For IPv6 no additional sub-interface will be created. IPv6 address will be added to existing sub-interface*.
+ * IPv4 and IPv6 connectivity. *For IPv6, no extra subinterface will be created. The IPv6 address will be added to the existing subinterface*.
* Optional: [Bidirectional Forwarding Detection (BFD)](./expressroute-bfd.md) support, which is configured by default on all Private Peerings on ExpressRoute circuits ## VLAN Tagging ExpressRoute Direct supports both QinQ and Dot1Q VLAN tagging.
-* **QinQ VLAN Tagging** allows for isolated routing domains on a per ExpressRoute circuit basis. Azure dynamically allocates an S-Tag at circuit creation and cannot be changed. Each peering on the circuit (Private and Microsoft) will utilize a unique C-Tag as the VLAN. The C-Tag is not required to be unique across circuits on the ExpressRoute Direct ports.
+* **QinQ VLAN Tagging** allows for isolated routing domains on a per ExpressRoute circuit basis. Azure dynamically assigns an S-Tag at circuit creation, which can't be changed. Each peering on the circuit (Private and Microsoft) will use a unique C-Tag as the VLAN. The C-Tag isn't required to be unique across circuits on the ExpressRoute Direct ports.
* **Dot1Q VLAN Tagging** allows for a single tagged VLAN on a per ExpressRoute Direct port pair basis. A C-Tag used on a peering must be unique across all circuits and peerings on the ExpressRoute Direct port pair.
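To visualize the tag stacking, here is an illustrative sketch (plain Python; real frames also carry MAC addresses, payload, and FCS) of the two stacked 802.1Q tags in a QinQ frame, using the 0x8100 Ethertype noted in the requirements above for both the outer S-Tag and the inner C-Tag:

```python
def qinq_tags(s_tag_vlan: int, c_tag_vlan: int) -> bytes:
    """Build the two stacked 802.1Q tags of a QinQ frame: the outer S-Tag
    (VLAN ID specified by Microsoft) followed by the inner C-Tag.
    Sketch only; PCP/DEI bits are left at zero."""
    def dot1q(vlan_id: int) -> bytes:
        tpid = (0x8100).to_bytes(2, "big")           # Ethertype for 802.1Q
        tci = (vlan_id & 0x0FFF).to_bytes(2, "big")  # 12-bit VLAN ID
        return tpid + tci
    return dot1q(s_tag_vlan) + dot1q(c_tag_vlan)

# Hypothetical VLAN IDs for illustration:
tags = qinq_tags(s_tag_vlan=300, c_tag_vlan=100)
```

With Dot1Q, only the inner-style single tag would be present, which is why its C-Tag must be unique across all circuits on the port pair.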
ExpressRoute Direct supports both QinQ and Dot1Q VLAN tagging.
## SLA
-ExpressRoute Direct provides the same enterprise-grade SLA with Active/Active redundant connections into the Microsoft Global Network. ExpressRoute infrastructure is redundant and connectivity into the Microsoft Global Network is redundant and diverse and scales accordingly with customer requirements.
+ExpressRoute Direct provides the same enterprise-grade SLA with Active/Active redundant connections into the Microsoft Global Network. ExpressRoute infrastructure is redundant, and connectivity into the Microsoft Global Network is redundant and diverse, scaling with customer requirements.
## Next steps
-[Configure ExpressRoute Direct](expressroute-howto-erdirect.md)
+[Configure ExpressRoute Direct](expressroute-howto-erdirect.md)
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-faqs.md
If your service provider offers ExpressRoute at both sites, you can work with yo
Yes. You can have multiple ExpressRoute circuits with the same or different service providers. If the metro has multiple ExpressRoute peering locations and the circuits are created at different peering locations, you can link them to the same virtual network. If the circuits are created at the same peering location, you can link up to four circuits to the same virtual network.
-### How do I connect my virtual networks to an ExpressRoute circuit
+### How do I connect my virtual networks to an ExpressRoute circuit?
The basic steps are:
You will also have to follow up with your connectivity provider to ensure that t
You can update the bandwidth of the ExpressRoute circuit using the REST API or PowerShell cmdlet.
+### I received a notification about maintenance on my ExpressRoute circuit. What is the technical impact of this maintenance?
+
+You should experience minimal to no impact during maintenance if you operate your circuit in [active-active mode](https://docs.microsoft.com/azure/expressroute/designing-for-high-availability-with-expressroute#active-active-connections). We perform maintenance on the primary and secondary connections of your circuit separately. Scheduled maintenance will usually be performed outside of business hours in the time zone of the peering location, and you cannot select a maintenance time.
+
+### I received a notification about a software upgrade or maintenance on my ExpressRoute gateway. What is the technical impact of this maintenance?
+
+You should experience minimal to no impact during a software upgrade or maintenance on your gateway. The ExpressRoute gateway consists of multiple instances, and during upgrades, instances are taken offline one at a time. While this may cause your gateway to temporarily support lower network throughput to the virtual network, the gateway itself will not experience any downtime.
++ ## ExpressRoute premium ### What is ExpressRoute premium?
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Canberra** | [CDC](https://cdcdatacentres.com.au/content/about-cdc) | 1 | Australia Central | 10G, 100G | CDC | | **Canberra2** | [CDC](https://cdcdatacentres.com.au/content/about-cdc) | 1 | Australia Central 2| 10G, 100G | CDC, Equinix | | **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | 10G | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, Teraco |
-| **Chennai** | Tata Communications | 2 | South India | 10G | Global CloudXchange (GCX), SIFY, Tata Communications, VodafoneIdea |
+| **Chennai** | Tata Communications | 2 | South India | 10G | BSNL, Global CloudXchange (GCX), SIFY, Tata Communications, VodafoneIdea |
| **Chennai2** | Airtel | 2 | South India | 10G | Airtel | | **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo | | **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | 10G | Interxion |
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | 10G, 100G | Aryaka Networks, AT&T NetBond, Cologix, Equinix, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo|
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | 10G, 100G | Aryaka Networks, AT&T NetBond, Cologix, Equinix, Internet2, Level 3 Communications, Megaport, Neutrona Networks, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo|
| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | n/a | CoreSite, Megaport, Zayo | | **Dubai** | [PCCS](https://www.pacificcontrols.net/cloudservices/https://docsupdatetracker.net/index.html) | 3 | UAE North | n/a | Etisalat UAE | | **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, Megaport, Orange, Orixcom |
The following table shows connectivity locations and the service providers for e
| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | 10G | Colt, Equinix, Fastweb, IRIDEOS, Retelit | | **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) | 1 | n/a | 10G, 100G | Cologix, Megaport | | **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | 10G, 100G | Bell Canada, Cologix, Fibrenoire, Megaport, Telus, Zayo |
-| **Mumbai** | Tata Communications | 2 | West India | 10G | DE-CIX, Global CloudXchange (GCX), Reliance Jio, Sify, Tata Communications, Verizon |
+| **Mumbai** | Tata Communications | 2 | West India | 10G | BSNL, DE-CIX, Global CloudXchange (GCX), Reliance Jio, Sify, Tata Communications, Verizon |
| **Mumbai2** | Airtel | 2 | West India | 10G | Airtel, Sify, Vodafone Idea | | **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | 10G | DE-CIX | | **New York** | [Equinix NY9](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny9/) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Colt, Coresite, DE-CIX, Equinix, InterCloud, Megaport, Packet, Zayo |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[BCX](https://www.bcx.co.za/solutions/connectivity/data-networks)** |Supported |Supported |Cape Town, Johannesburg| | **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** |Supported |Supported |Montreal, Toronto, Quebec City | | **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** |Supported |Supported |Amsterdam, Amsterdam2, Chicago, Hong Kong SAR, Johannesburg, London, London2, Newport(Wales), Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
+| **[BSNL](https://www.bsnl.co.in/opencms/bsnl/BSNL/services/enterprises/cloudway.html)** |Supported |Supported |Chennai, Mumbai |
| **[C3ntro](https://www.c3ntro.com/data1/express-route1.php)** |Supported |Supported |Miami | | **CDC** | Supported | Supported | Canberra, Canberra2 | | **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |Amsterdam2, Chicago, Dublin, Frankfurt, Hong Kong, Las Vegas, London2, New York, Paris, San Antonio, Silicon Valley, Tokyo, Toronto, Washington DC, Washington DC2 |
The following table shows locations by service provider. If you want to view ava
| **[Optus](https://www.optus.com.au/enterprise/)** |Supported |Supported |Melbourne, Sydney | | **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported |Amsterdam, Amsterdam2, Dubai2, Frankfurt, Hong Kong SAR, Johannesburg, London, Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC | | **[Orixcom](https://www.orixcom.com/cloud-solutions/)** | Supported | Supported | Dubai2 |
-| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported |Chicago, Las Vegas, Silicon Valley, Washington DC |
+| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported |Chicago, Dallas, Las Vegas, Silicon Valley, Washington DC |
| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported |Chicago, Hong Kong, Hong Kong2, London, Singapore2 | | **[REANNZ](https://www.reannz.co.nz/products-and-services/cloud-connect/)** | Supported | Supported | Auckland | | **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan |
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-features.md
Untrusted customer signed certificates|Customer signed certificates are not trus
|Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|Fix scheduled for GA.| |IDPS Bypass|IDPS Bypass doesn't work for TLS terminated traffic, and Source IP address and Source IP Groups aren't supported.|Fix scheduled for GA.| |TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.|--
+|KeyVault Private Endpoint|KeyVault supports Private Endpoint access to limit its network exposure. Trusted Azure Services can bypass this limitation if an exception is configured as described in the [KeyVault documentation](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). Azure Firewall is not currently listed as a trusted service and can't access the Key Vault.|Fix scheduled for GA.|
## Next steps
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-custom-domain-https.md
Grant Azure Front Door permission to access the certificates in your Azure Key
Azure Front Door lists the following information: - The key vault accounts for your subscription ID. - The certificates (secrets) under the selected key vault.
- - The available certificate versions.
+ - The available certificate versions.
+
+> [!NOTE]
+> Leaving the certificate version blank leads to:
+> - The latest version of the certificate getting selected.
+> - Automatic rotation of certificates to the latest version, when a newer version of the certificate is available in your Key Vault.
5. When you use your own certificate, domain validation is not required. Proceed to [Wait for propagation](#wait-for-propagation).
In this tutorial, you learned how to:
* Upload a certificate to Key Vault. * Validate a domain.
-* Enable HTTPS for your custom domain.
+* Enable HTTPS for your custom domain.
-To learn how to set up a geo-filtering policy for your Front Door, continue to the next tutorial.
+To learn how to set up a geo-filtering policy for your Front Door, continue to the next tutorial.
> [!div class="nextstepaction"]
-> [Set up a geo-filtering policy](front-door-geo-filtering.md)
+> [Set up a geo-filtering policy](front-door-geo-filtering.md)
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
After you create an Azure Front Door Standard/Premium profile, the default front
## Add a new custom domain
+> [!NOTE]
+> While in Public Preview, using Azure DNS to create apex domains is not supported on Azure Front Door Standard/Premium. Other DNS providers that support CNAME flattening or DNS chasing allow apex domains to be used with Azure Front Door Standard/Premium.
+ A custom domain is managed in the **Domains** section of the portal. A custom domain can be created and validated before being associated with an endpoint. A custom domain and its subdomains can be associated with only a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Front Doors. You can also map custom domains with different subdomains to the same Front Door endpoint. 1. Under Settings for your Azure Front Door profile, select *Domains* and then the **Add a domain** button.
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/assignment-structure.md
Title: Details of the policy assignment structure description: Describes the policy assignment definition used by Azure Policy to relate policy definitions and parameters to resources for evaluation. Previously updated : 01/29/2021 Last updated : 03/17/2021 # Azure Policy assignment structure
You use JSON to create a policy assignment. The policy assignment contains eleme
- non-compliance messages - parameters
-For example, the following JSON shows a policy assignment in _DoNotEnforce_ mode with dynamic parameters:
+For example, the following JSON shows a policy assignment in _DoNotEnforce_ mode with dynamic
+parameters:
```json {
often assigned together, to use an [initiative](./initiative-definition-structur
## Non-compliance messages
-To set a custom message that describe why a resource is non-compliant with the policy or initiative
+To set a custom message that describes why a resource is non-compliant with the policy or initiative
definition, set `nonComplianceMessages` in the assignment definition. This node is an array of `message` entries. This custom message is in addition to the default error message for non-compliance and is optional.
+> [!IMPORTANT]
+> Custom messages for non-compliance are only supported on definitions or initiatives with
+> [Resource Manager modes](./definition-structure.md#resource-manager-modes) definitions.
+ ```json "nonComplianceMessages": [ {
governance Initiative Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/initiative-definition-structure.md
Title: Details of the initiative definition structure description: Describes how policy initiative definitions are used to group policy definitions for deployment to Azure resources in your organization. Previously updated : 10/07/2020 Last updated : 03/16/2021 # Azure Policy initiative definition structure
This information is:
- Displayed in the Azure portal on the overview of a **control** on a Regulatory Compliance initiative. - Available via REST API. See the `Microsoft.PolicyInsights` resource provider and the
- [policyMetadata operation group](/rest/api/policy-insights/policymetadata/getresource).
+ [policyMetadata operation group](/rest/api/policy/policymetadata/getresource).
- Available via Azure CLI. See the [az policy metadata](/cli/azure/policy/metadata) command. > [!IMPORTANT]
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/get-compliance-data.md
Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources. Previously updated : 10/05/2020 Last updated : 03/16/2021 # Get compliance data of Azure resources
updated and the frequency and events that trigger an evaluation cycle.
The results of a completed evaluation cycle are available in the `Microsoft.PolicyInsights` Resource Provider through `PolicyStates` and `PolicyEvents` operations. For more information about the operations of the Azure Policy Insights REST API, see
-[Azure Policy Insights](/rest/api/policy-insights/).
+[Azure Policy Insights](/rest/api/policy/).
Evaluations of assigned policies and initiatives happen as the result of various events:
the reason a resource is **non-compliant** or to find the change responsible, se
The same information available in the portal can be retrieved with the REST API (including with [ARMClient](https://github.com/projectkudu/ARMClient)), Azure PowerShell, and Azure CLI. For full
-details on the REST API, see the [Azure Policy Insights](/rest/api/policy-insights/) reference. The
-REST API reference pages have a green 'Try It' button on each operation that allows you to try it
-right in the browser.
+details on the REST API, see the [Azure Policy](/rest/api/policy/) reference. The REST API reference
+pages have a green 'Try It' button on each operation that allows you to try it right in the browser.
Use ARMClient or a similar tool to handle authentication to Azure for the REST API examples.
Use ARMClient or a similar tool to handle authentication to Azure for the REST A
With the REST API, summarization can be performed by container, definition, or assignment. Here is an example of summarization at the subscription level using Azure Policy Insight's [Summarize For
-Subscription](/rest/api/policy-insights/policystates/summarizeforsubscription):
+Subscription](/rest/api/policy/policystates/summarizeforsubscription):
```http POST https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/summarize?api-version=2019-10-01
Your results resemble the following example:
``` For more information about querying policy events, see the
-[Azure Policy Events](/rest/api/policy-insights/policyevents) reference article.
+[Azure Policy Events](/rest/api/policy/policyevents) reference article.
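The Summarize For Subscription request shown earlier can be composed programmatically. A minimal sketch of building the request URI; the subscription ID below is a placeholder, not a real subscription:

```python
# Build the Azure Policy summarize-for-subscription request URI shown above.

def summarize_uri(subscription_id, api_version="2019-10-01"):
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        "/providers/Microsoft.PolicyInsights/policyStates/latest/summarize"
        f"?api-version={api_version}"
    )

print(summarize_uri("00000000-0000-0000-0000-000000000000"))
```

The URI is then POSTed with an authenticated tool such as ARMClient, as described above.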
### Azure CLI
$policyEvents = Get-AzPolicyEvent -Filter "ResourceType eq '/Microsoft.Network/v
$policyEvents | ConvertTo-Csv | Out-File 'C:\temp\policyEvents.csv' ```
-The output of the `$policyEvents` object looks like the following:
+The output of the `$policyEvents` object resembles the following output:
```output Timestamp : 9/19/2020 5:18:53 AM
governance Programmatically Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/programmatically-create.md
Title: Programmatically create policies description: This article walks you through programmatically creating and managing policies for Azure Policy with Azure CLI, Azure PowerShell, and REST API. Previously updated : 08/17/2020 Last updated : 03/16/2021 # Programmatically create policies
Review the following articles for more information about the commands and querie
- [Azure REST API Resources](/rest/api/resources/) - [Azure PowerShell Modules](/powershell/module/az.resources/#policy) - [Azure CLI Policy Commands](/cli/azure/policy)-- [Azure Policy Insights resource provider REST API reference](/rest/api/policy-insights)
+- [Azure Policy resource provider REST API reference](/rest/api/policy)
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 03/10/2021 Last updated : 03/17/2021
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 03/10/2021 Last updated : 03/17/2021
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-managed-application](../../../../includes/policy/reference/bycat/policies-managed-application.md)]
+## Migrate
++ ## Monitoring [!INCLUDE [azure-policy-reference-policies-monitoring](../../../../includes/policy/reference/bycat/policies-monitoring.md)]
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-developer.md
A DTDL model can be a _no-component_ or a _multi-component_ model:
- No-component model: A simple model doesn't use embedded or cascaded components. All the telemetry, properties, and commands are defined in a single _default component_. For an example, see the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) model. - Multi-component model: A more complex model that includes two or more components. These components include a single default component, and one or more additional nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model.
-To learn more, see [IoT Plug and Play components in models](../../iot-pnp/concepts-components.md)
+To learn more, see [IoT Plug and Play modeling guide](../../iot-pnp/concepts-modeling-guide.md)
### Conventions
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-operator.md
+
+ Title: Azure IoT Central operator guide
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This article provides an overview of the operator role in IoT Central.
+ Last updated : 03/17/2021
+# IoT Central operator guide
+
+This article provides an overview of the operator role in IoT Central.
+
+An _operator_ manages the devices connected to the application.
+
+As an operator, you can:
+
+* Use the **Devices** page to view, add, and delete devices connected to your Azure IoT Central application.
+* Import and export devices in bulk.
+* Maintain an up-to-date inventory of your devices.
+* Keep your device metadata up to date by changing the values stored in the device properties from your views.
+* Control the behavior of your devices by updating a setting on a specific device from your views.
+
+## IoT Central homepage
+
+The [IoT Central homepage](https://aka.ms/iotcentral-get-started) is where you can learn about the latest news and features available on IoT Central, create new applications, and see and launch your existing applications.
++
+## View your devices
+
+To view an individual device:
+
+1. Choose **Devices** on the left pane. Here you see a list of all devices and your device templates.
+
+1. Choose a device template.
+
+1. In the right-hand pane of the **Devices** page, you see a list of devices created from that device template. Choose an individual device to see the device details page for that device:
+
+ ![Screenshot of Device Details Page](./media/overview-iot-central-operator/device-list.png)
+
+## Add a device
+
+To add a device to your Azure IoT Central application:
+
+1. Choose **Devices** on the left pane.
+
+1. Choose the device template from which you want to create a device.
+
+1. Choose + **New**.
+
+1. Turn the **Simulated** toggle to **On** or **Off**. A real device corresponds to a physical device that you connect to your Azure IoT Central application. A simulated device has sample data generated for you by Azure IoT Central.
+
+1. Select **Create**.
+
+1. This device now appears in your device list for this template. Select the device to see the device details page that contains all views for the device.
+
+## Import devices
+
+To connect a large number of devices to your application, you can bulk import devices from a CSV file. The CSV file should have the following columns and headers:
+
+* **IOTC_DeviceID** - the device ID can contain letters, numbers, and the `-` character.
+* **IOTC_DeviceName** - this column is optional.
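The import file layout above can be generated with a few lines of code. A minimal sketch using the standard library; the device IDs and names are made-up examples:

```python
import csv

# Write an import file with the two columns IoT Central expects.
# The device IDs and names below are made-up examples.
rows = [
    {"IOTC_DeviceID": "thermostat-01", "IOTC_DeviceName": "Lobby thermostat"},
    {"IOTC_DeviceID": "thermostat-02", "IOTC_DeviceName": ""},  # name is optional
]

with open("devices.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["IOTC_DeviceID", "IOTC_DeviceName"])
    writer.writeheader()
    writer.writerows(rows)
```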
+
+To bulk-register devices in your application:
+
+1. Choose **Devices** on the left pane.
+
+1. On the left panel, choose the device template for which you want to bulk create the devices.
+
+ > [!NOTE]
+ > If you don't have a device template yet then you can import devices under **All devices** and register them without a template. After devices have been imported, you can then migrate them to a template.
+
+1. Select **Import**.
+
+ ![Screenshot of Import Action](./media/overview-iot-central-operator/bulk-import-1-a.png)
++
+1. Select the CSV file that has the list of Device IDs to be imported.
+
+1. Device import starts once the file has been uploaded. You can track the import status in the Device Operations panel. This panel appears automatically after the import starts, or you can access it through the bell icon in the top right-hand corner.
+
+1. Once the import completes, a success message is shown in the Device Operations panel.
+
+ ![Screenshot of Import Success](./media/overview-iot-central-operator/bulk-import-3-a.png)
+
+If the device import operation fails, you see an error message on the Device Operations panel. A log file capturing all the errors is generated that you can download.
+
+## Migrate devices to a template
+
+If you register devices by starting the import under **All devices**, then the devices are created without any device template association. Devices must be associated with a template to explore the data and other details about the device. Follow these steps to associate devices with a template:
+
+1. Choose **Devices** on the left pane.
+
+1. On the left panel, choose **All devices**:
+
+ ![Screenshot of Unassociated Devices](./media/overview-iot-central-operator/unassociated-devices-2-a.png)
+
+1. Use the filter on the grid to determine if the value in the **Device Template** column is **Unassociated** for any of your devices.
+
+1. Select the devices you want to associate with a template:
+
+1. Select **Migrate**:
+
+ ![Screenshot of Associate Devices](./media/overview-iot-central-operator/unassociated-devices-1-a.png)
+
+1. Choose the template from the list of available templates and select **Migrate**.
+
+1. The selected devices are associated with the device template you chose.
+
+## Export devices
+
+To connect a real device to IoT Central, you need its connection string. You can export device details in bulk to get the information you need to create device connection strings. The export process creates a CSV file with the device identity, device name, and keys for all the selected devices.
+
+To bulk export devices from your application:
+
+1. Choose **Devices** on the left pane.
+
+1. On the left pane, choose the device template from which you want to export the devices.
+
+1. Select the devices that you want to export and then select the **Export** action.
+
+ ![Screenshot of Export](./media/overview-iot-central-operator/export-1-a.png)
+
+1. The export process starts. You can track the status using the Device Operations panel.
+
+1. When the export completes, a success message is shown along with a link to download the generated file.
+
+1. Select the **Download File** link to download the file to a local folder on the disk.
+
+ ![Screenshot of Export Success](./media/overview-iot-central-operator/export-2-a.png)
+
+1. The exported CSV file contains the following columns: device ID, device name, device keys, and X509 certificate thumbprints:
+
+ * IOTC_DEVICEID
+ * IOTC_DEVICENAME
+ * IOTC_SASKEY_PRIMARY
+ * IOTC_SASKEY_SECONDARY
+ * IOTC_X509THUMBPRINT_PRIMARY
+ * IOTC_X509THUMBPRINT_SECONDARY
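A sketch of turning an exported row into a device connection string. The `HostName=...;DeviceId=...;SharedAccessKey=...` shape is the standard IoT Hub device connection-string format, but the hub hostname is not part of the export, so the hostname and keys below are fabricated assumptions (IoT Central devices normally provision through DPS):

```python
import csv
import io

# Fabricated sample of an exported row; a real export comes from the
# downloaded CSV file described above.
exported = io.StringIO(
    "IOTC_DEVICEID,IOTC_DEVICENAME,IOTC_SASKEY_PRIMARY,IOTC_SASKEY_SECONDARY\n"
    "thermostat-01,Lobby thermostat,primaryKey==,secondaryKey==\n"
)

def connection_string(hub_hostname, row):
    # Standard IoT Hub device connection-string shape; the hostname
    # passed in is an assumption, not something the export contains.
    return (f"HostName={hub_hostname};DeviceId={row['IOTC_DEVICEID']};"
            f"SharedAccessKey={row['IOTC_SASKEY_PRIMARY']}")

for row in csv.DictReader(exported):
    print(connection_string("example-hub.azure-devices.net", row))
```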
+
+For more information about connection strings and connecting real devices to your IoT Central application, see [Device connectivity in Azure IoT Central](concepts-get-connected.md).
+
+## Delete a device
+
+To delete either a real or simulated device from your Azure IoT Central application:
+
+1. Choose **Devices** on the left pane.
+
+1. Choose the device template of the device you want to delete.
+
+1. Use the filter tools to filter and search for your devices. Check the box next to the devices to delete.
+
+1. Choose **Delete**. You can track the status of this deletion in your Device Operations panel.
+
+## Change a property
+
+Cloud properties are the device metadata associated with the device, such as city and serial number. Cloud properties only exist in the IoT Central application and aren't synchronized to your devices. Writeable properties control the behavior of a device and let you set the state of a device remotely, for example by setting the target temperature of a thermostat device. Device properties are set by the device and are read-only within IoT Central. You can view and update properties on the **Device Details** views for your device.
+
+1. Choose **Devices** on the left pane.
+
+1. Choose the device template of the device whose properties you want to change and select the target device.
+
+1. Choose the view that contains properties for your device. This view enables you to input values and select **Save** at the top of the page. Here you see the properties your device has and their current values. Cloud properties and writeable properties have editable fields, while device properties are read-only. For writeable properties, you can see their sync status at the bottom of the field.
+
+1. Modify the properties to the values you need. You can modify multiple properties at a time and update them all at the same time.
+
+1. Choose **Save**. If you saved writeable properties, the values are sent to your device. When the device confirms the change for the writeable property, the status returns back to **synced**. If you saved a cloud property, the value is updated.
+
+## Next steps
+
+Now that you've learned how to manage devices in your Azure IoT Central application, the suggested next step is to learn how to [Configure rules](howto-configure-rules.md) for your devices.
iot-hub Iot Hub Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-ip-filtering.md
IP filter rules are *allow* rules and applied without ordering. Only IP addresse
For example, if you want to accept addresses in the range `192.168.100.0/22` and reject everything else, you only need to add one rule in the grid with address range `192.168.100.0/22`.
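The effect of that single allow rule can be checked with the standard library. A quick sketch:

```python
import ipaddress

# IoT Hub IP filter rules are allow rules: a single rule for
# 192.168.100.0/22 accepts addresses in that range (192.168.100.0 -
# 192.168.103.255) and everything else is rejected.
allowed = ipaddress.ip_network("192.168.100.0/22")

def is_accepted(address):
    return ipaddress.ip_address(address) in allowed

print(is_accepted("192.168.100.7"))   # True  - inside the /22
print(is_accepted("192.168.104.1"))   # False - outside the /22
```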
-### Azure portal
-
-IP filter rules are also applied when using IoT Hub through Azure portal. This is because API calls to the IoT Hub service are made directly using your browser with your credentials, which is consistent with other Azure services. To access IoT Hub using Azure portal when IP filter is enabled, add your computer's IP address to the allow list.
- ## Retrieve and update IP filters using Azure CLI Your IoT Hub's IP filters can be retrieved and updated through [Azure CLI](/cli/azure/).
iot-hub Iot Hub Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-public-network-access.md
To turn on public network access, select **All networks**, then **Save**.
## Accessing the IoT Hub after disabling public network access
-After public network access is disabled, the IoT Hub is only accessible through [its VNet private endpoint using Azure private link](virtual-network-support.md). This restriction includes accessing through Azure portal, because API calls to the IoT Hub service are made directly using your browser with your credentials.
+After public network access is disabled, the IoT Hub is only accessible through [its VNet private endpoint using Azure private link](virtual-network-support.md).
## IoT Hub endpoint, IP address, and ports after disabling public network access
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
iot-hub Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/security-baseline.md
description: The Azure IoT Hub security baseline provides procedural guidance an
Previously updated : 09/03/2020 Last updated : 03/16/2021
# Azure security baseline for Azure IoT Hub
-The Azure Security Baseline for Microsoft Azure IoT Hub contains recommendations that will help you improve the security posture of your deployment. The baseline for this service is drawn from the [Azure Security Benchmark version 1.0](../security/benchmarks/overview.md), which provides recommendations on how you can secure your cloud solutions on Azure with our best practices guidance. For more information, see [Azure Security Baselines overview](../security/benchmarks/security-baselines-overview.md).
+This security
+baseline applies guidance from the [Azure Security Benchmark version
+1.0](../security/benchmarks/overview-v1.md) to Microsoft Azure IoT Hub. The Azure Security Benchmark
+provides recommendations on how you can secure your cloud solutions on Azure.
+The content is grouped by the **security controls** defined by the Azure
+Security Benchmark and the related guidance applicable to Azure IoT Hub. **Controls** not applicable to Azure IoT Hub have been excluded.
-## Network security
+
+To see how Azure IoT Hub completely maps to the Azure
+Security Benchmark, see the [full Azure IoT Hub security baseline mapping
+file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Offer%20Security%20Baselines).
+
+## Network Security
-*For more information, see the [Azure Security Benchmark: Network security](../security/benchmarks/security-control-network-security.md).*
+*For more information, see the [Azure Security Benchmark: Network Security](../security/benchmarks/security-control-network-security.md).*
### 1.1: Protect Azure resources within virtual networks
-**Guidance**: By default, IoT Hub's hostnames map to a public endpoint with a publicly routable IP address over the internet. Different customers share this IoT Hub public endpoint, and IoT devices in over wide-area networks and on-premises networks can all access it.
+**Guidance**: IoT Hub is a multi-tenant Platform-as-a-Service (PaaS); different customers share the same pool of compute, networking, and storage hardware resources. IoT Hub's hostnames map to a public endpoint with a publicly routable IP address over the internet. Different customers share this IoT Hub public endpoint, and IoT devices over wide-area networks and on-premises networks can all access it. Microsoft designed the service for complete isolation between each tenant's data, and works continuously to ensure this result.
IoT Hub features including message routing, file upload, and bulk device import/export also require connectivity from IoT Hub to a customer-owned Azure resource over its public endpoint. These connectivity paths collectively make up the egress traffic from IoT Hub to customer resources.
-Recommend restricting connectivity to your Azure resources (including Azure IoT Hub) through a virtual network that you own and operate to reduce connectivity exposure in an isolated network and enable on-premises network connectivity directly to Azure backbone network. Use Azure Private Link and Azure Private Endpoint, where feasible, to enable private access to your services from other virtual networks.
+We recommend restricting connectivity to your Azure resources (including Azure IoT Hub) through a virtual network that you own and operate. This reduces connectivity exposure to an isolated network and enables on-premises network connectivity directly to the Azure backbone network. Use Azure Private Link and Azure Private Endpoint, where feasible, to enable private access to your services from other virtual networks.
+
+Once private access is established, disable public network access for the IoT hub for additional security. This network-level control is enforced on a specific IoT hub resource, ensuring isolation. To keep the service active for other customer resources using the public path, its public endpoint remains resolvable, IP addresses discoverable, and ports open. This is not a cause for concern, because Microsoft integrates multiple layers of security to ensure complete isolation between tenants.
Keep open hardware ports in your devices to a bare minimum to avoid unwanted access. Additionally, build mechanisms to prevent or detect physical tampering of the device. - [IoT virtual networks support](virtual-network-support.md)-- [loT networking best practice](../iot-fundamentals/security-recommendations.md?context=azure%2fiot-hub%2frc%2frc#networking)+
+- [Manage public network access for IoT hub](iot-hub-public-network-access.md)
+
+- [Tenant isolation in Azure](https://docs.microsoft.com/azure/security/fundamentals/isolation-choices#tenant-level-isolation)
+
+- [IoT networking best practice](https://docs.microsoft.com/azure/iot-fundamentals/security-recommendations#networking)
+ - [Azure Private Link overview](../private-link/private-link-overview.md)-- [Azure network security group](../virtual-network/network-security-groups-overview.md)
-**Azure Security Center monitoring**: Yes
+- [Azure network security group](../virtual-network/network-security-groups-overview.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.2: Monitor and log the configuration and traffic of virtual networks, subnets, and NICs **Guidance**: Use Azure Security Center and follow the network protection recommendations to help secure your Azure network resources. Enable network security group flow logs and send the logs to an Azure Storage account for auditing. You can also send the flow logs to a Log Analytics workspace and then use Traffic Analytics to provide insights into traffic patterns in your Azure cloud. Some advantages of Traffic Analytics are the ability to visualize network activity, identify hot spots and security threats, understand traffic flow patterns, and pinpoint network misconfigurations.
Keep open hardware ports in your devices to a bare minimum to avoid unwanted acc
- [Understand network security provided by Azure Security Center](../security-center/security-center-network-recommendations.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.3: Protect critical web applications **Guidance**: Not applicable; this recommendation is intended for web applications running on Azure App Service or compute resources.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Not Applicable
+**Azure Security Center monitoring**: None
+ ### 1.4: Deny communications with known malicious IP addresses
-**Guidance**: Block known malicious IPs with IoT Hub IP filter rules . Malicious attempts are also recorded and alerted via Azure Security Center for IoT.
+**Guidance**: Block known malicious IPs with IoT Hub IP filter rules. Malicious attempts are also recorded and alerted via Azure Security Center for IoT.
Azure DDoS Protection Basic is already enabled and available for no additional cost as part of IoT Hub. Always-on traffic monitoring, and real-time mitigation of common network-level attacks, provide the same defenses utilized by Microsoft's online services. The entire scale of Azure's global network can be used to distribute and mitigate attack traffic across regions. - [IoT Hub IP filter](iot-hub-ip-filtering.md) -- [Azure Security Center for IoT suspicious IP address communication](../defender-for-iot/concept-security-alerts.md)
+- [Azure Security Center for IoT suspicious IP address communication](/azure/asc-for-iot/concept-security-alerts)
- [Manage Azure DDoS Protection Basic](../ddos-protection/ddos-protection-overview.md) - [Threat protection in Azure Security Center](../security-center/azure-defender.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.5: Record network packets **Guidance**: Not applicable; this recommendation is intended for offerings that produce network packets that can be recorded and viewed by customers. IoT Hub does not produce network packets that are customer facing, and is not designed to deploy directly into Azure virtual networks.
-**Azure Security Center monitoring**: No
- **Responsibility**: Not Applicable
+**Azure Security Center monitoring**: None
+ ### 1.6: Deploy network-based intrusion detection/intrusion prevention systems (IDS/IPS) **Guidance**: Select an offer from Azure Marketplace that supports IDS/IPS functionality with payload inspection capabilities. When payload inspection is not a requirement, Azure Firewall threat intelligence can be used. Azure Firewall threat intelligence-based filtering is used to alert on and/or block traffic to and from known malicious IP addresses and domains. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed.
Deploy the firewall solution of your choice at each of your organization's netwo
- [How to configure alerts with Azure Firewall](../firewall/threat-intel.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.7: Manage traffic to web applications **Guidance**: Not applicable; this recommendation is intended for web applications running on Azure App Service or compute resources.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Not Applicable
+**Azure Security Center monitoring**: None
+ ### 1.8: Minimize complexity and administrative overhead of network security rules **Guidance**: For resources that need access to your Azure IoT Hub, use Virtual Network service tags to define network access controls on network security Groups or Azure Firewall. You can use service tags in place of specific IP addresses when creating security rules. By specifying the service tag name (for example, AzureIoTHub) in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change.
Deploy the firewall solution of your choice at each of your organization's netwo
- [How to use service tags for Azure IoT](iot-hub-understand-ip-address.md) - [For more information about using service tags](../virtual-network/service-tags-overview.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.9: Maintain standard security configurations for network devices **Guidance**: Define and implement standard security configurations for network resources associated with your Azure IoT Hub namespaces with Azure Policy. Use Azure Policy aliases in the "Microsoft.Devices" and "Microsoft.Network" namespaces to create custom policies to audit or enforce the network configuration of your Machine Learning namespaces. - [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.10: Document traffic configuration rules **Guidance**: Use tags for network resources associated with your Azure IoT Hub deployment in order to logically organize them into a taxonomy. - [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.11: Use automated tools to monitor network resource configurations and detect changes **Guidance**: Use Azure Activity Log to monitor network resource configurations and detect changes for network resources related to Azure IoT Hub. Create alerts within Azure Monitor that will trigger when changes to critical network resources take place. -- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
+- [How to view and retrieve Azure Activity Log events](https://docs.microsoft.com/azure/azure-monitor/essentials/activity-log#view-the-activity-log)
- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
-## Logging and monitoring
-
-*For more information, see the [Azure Security Benchmark: Logging and monitoring](../security/benchmarks/security-control-logging-monitoring.md).*
-
-### 2.1: Use approved time synchronization sources
-
-**Guidance**: Microsoft maintains the time source used for Azure resources such as Azure IoT Hub for timestamps in the logs.
+**Azure Security Center monitoring**: None
-**Azure Security Center monitoring**: Not Applicable
+## Logging and Monitoring
-**Responsibility**: Microsoft
+*For more information, see the [Azure Security Benchmark: Logging and Monitoring](../security/benchmarks/security-control-logging-monitoring.md).*
### 2.2: Configure central security log management **Guidance**: Ingest logs via Azure Monitor to aggregate security data generated by Azure IoT Hub. In Azure Monitor, use Log Analytics workspaces to query and perform analytics, and use storage accounts for long-term/archival storage. Alternatively, you can enable and on-board data to Azure Sentinel or a third-party Security Incident and Event Management (SIEM). -- [Set up Azure IoT logs](monitor-iot-hub-reference.md#resource-logs)-- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
+- [Set up Azure IoT logs](https://docs.microsoft.com/azure/iot-hub/monitor-iot-hub-reference#resource-logs)
-**Azure Security Center monitoring**: Yes
+- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.3: Enable audit logging for Azure resources **Guidance**: Enable Azure IoT diagnostic settings on Azure resources for access to audit, security, and resource logs. Activity logs, which are automatically available, include event source, date, user, timestamp, source addresses, destination addresses, and other useful elements. -- [Set up Azure IoT Hub logs](monitor-iot-hub-reference.md#resource-logs)
+- [Set up Azure IoT Hub logs](https://docs.microsoft.com/azure/iot-hub/monitor-iot-hub-reference#resource-logs)
- [How to collect platform logs and metrics with Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) - [Understand logging and different log types in Azure](../azure-monitor/essentials/platform-logs-overview.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: The [Azure Security Benchmark](../governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](../security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](../security-center/azure-defender.md) plan for the related services.
+
+**Azure Policy built-in definitions - Microsoft.Devices**:
++ ### 2.4: Collect security logs from operating systems **Guidance**: Not applicable; this recommendation is intended for compute resources.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Not Applicable
+**Azure Security Center monitoring**: None
+ ### 2.5: Configure security log storage retention **Guidance**: In Azure Monitor, set the log retention period for Log Analytics workspaces associated with your Azure IoT Hub instances according to your organization's compliance regulations. -- [How to set log retention parameters](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)-
-**Azure Security Center monitoring**: Not Applicable
+- [How to set log retention parameters](https://docs.microsoft.com/azure/azure-monitor/logs/manage-cost-storage#change-the-data-retention-period)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.6: Monitor and review Logs **Guidance**: Analyze and monitor logs for anomalous behavior and regularly review the results from your Azure IoT Hub. Use Azure Monitor and a Log Analytics workspace to review logs and perform queries on log data.
Deploy the firewall solution of your choice at each of your organization's netwo
Alternatively, you can enable and on-board data to Azure Sentinel or a third-party SIEM. - [Monitor Azure IoT health](monitor-iot-hub.md)+ - [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md) - [Getting started with Log Analytics queries](../azure-monitor/logs/log-analytics-tutorial.md) - [How to perform custom queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.7: Enable alerts for anomalous activities **Guidance**: Use Azure Security Center for IoT with a Log Analytics workspace for monitoring and alerting on anomalous activity found in security logs and events. Alternatively, you can enable and on-board data to Azure Sentinel. You can also define operational alerts with Azure Monitor that may have security implications, such as when traffic drops unexpectedly. - [Monitor Azure IoT Hub health](monitor-iot-hub.md)+ - [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)-- [Azure Security Center for IoT alerts](../defender-for-iot/concept-security-alerts.md) -- [How to alert on log analytics log data](../azure-monitor/alerts/tutorial-response.md)
+- [Azure Security Center for IoT alerts](/azure/asc-for-iot/concept-security-alerts)
-**Azure Security Center monitoring**: Yes
+- [How to alert on log analytics log data](../azure-monitor/alerts/tutorial-response.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.8: Centralize anti-malware logging **Guidance**: Not applicable; Azure IoT Hub does not process or produce anti-malware related logs.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Not Applicable
+**Azure Security Center monitoring**: None
+ ### 2.9: Enable DNS query logging **Guidance**: Not applicable; Azure IoT Hub does not process or produce DNS-related logs.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Not Applicable
+**Azure Security Center monitoring**: None
+ ### 2.10: Enable command-line audit logging **Guidance**: Not applicable; this recommendation is intended for compute resources.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Not Applicable
-## Identity and access control
+**Azure Security Center monitoring**: None
-*For more information, see the [Azure Security Benchmark: Identity and access control](../security/benchmarks/security-control-identity-access-control.md).*
+## Identity and Access Control
-### 3.1: Maintain an inventory of administrative accounts
+*For more information, see the [Azure Security Benchmark: Identity and Access Control](../security/benchmarks/security-control-identity-access-control.md).*
-**Guidance**: Azure role-based access control (Azure RBAC) allows you to manage access to Azure IoT hub through role assignments. You can assign these roles to users, groups service principals, and managed identities. There are pre-defined built-in roles for certain resources, and these roles can be inventoried or queried through tools such as Azure CLI, or Azure PowerShell, or the Azure portal.
+### 3.1: Maintain an inventory of administrative accounts
-- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)
+**Guidance**: Azure role-based access control (Azure RBAC) allows you to manage access to Azure IoT Hub through role assignments. You can assign these roles to users, groups, service principals, and managed identities. There are pre-defined built-in roles for certain resources, and these roles can be inventoried or queried through tools such as the Azure CLI, Azure PowerShell, or the Azure portal.
-- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0)
+- [How to get a directory role in Azure Active Directory (Azure AD) with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole)
-**Azure Security Center monitoring**: Yes
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.2: Change default passwords where applicable **Guidance**: Access management to Azure IoT Hub resources is controlled through Azure Active Directory (Azure AD). Azure AD does not have the concept of default passwords.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.3: Use dedicated administrative accounts **Guidance**: Create standard operating procedures around the use of dedicated administrative accounts.
-You can also enable just-in-time access to administrative accounts by using Azure AD Privileged Identity Management and Azure Resource Manager.
--- [Learn more about Privileged Identity Management](../active-directory/privileged-identity-management/index.yml)
+You can also enable just-in-time access to administrative accounts by using Azure Active Directory (Azure AD) Privileged Identity Management and Azure Resource Manager.
-**Azure Security Center monitoring**: Yes
+- [Learn more about Privileged Identity Management](/azure/active-directory/privileged-identity-management/)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.4: Use single sign-on (SSO) with Azure Active Directory
-**Guidance**: For users accessing IoT Hub, use Azure Active Directory SSO. Use Azure Security Center identity and access recommendations.
+**Guidance**: For users accessing IoT Hub, use Azure Active Directory (Azure AD) SSO. Use Azure Security Center identity and access recommendations.
- [Understand SSO with Azure AD](../active-directory/manage-apps/what-is-single-sign-on.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.5: Use multi-factor authentication for all Azure Active Directory based access
-**Guidance**:
-Enable Azure AD MFA to protect your overall Azure tenant, benefiting all services. IoT Hub service doesn't have MFA support.
+**Guidance**: Enable Azure Active Directory (Azure AD) multifactor authentication to protect your overall Azure tenant, benefiting all services. IoT Hub service doesn't have multifactor authentication support.
-- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
+- [How to enable multifactor authentication in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
- [How to monitor identity and access within Azure Security Center](../security-center/security-center-identity-access.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
-### 3.6: Use dedicated machines (Privileged Access Workstations) for all administrative tasks
+**Azure Security Center monitoring**: None
-**Guidance**: Use a secure, Azure-managed workstation (also known as a Privileged Access Workstation, or PAW) for administrative tasks that require elevated privileges.
+### 3.6: Use dedicated machines (Privileged Access Workstations) for all administrative tasks
-- [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
+**Guidance**: Use a secure privileged access workstation (PAW) for administrative tasks that require elevated privileges.
-- [How to enable Azure AD MFA](../active-directory/authentication/howto-mfa-getstarted.md)
+- [Understand secure, privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
-**Azure Security Center monitoring**: Not Applicable
+- [How to enable Azure Active Directory (Azure AD) multifactor authentication](../active-directory/authentication/howto-mfa-getstarted.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.7: Log and alert on suspicious activities from administrative accounts
-**Guidance**: Use Azure Active Directory security reports and monitoring to detect when suspicious or unsafe activity occurs in the environment. Use Azure Security Center to monitor identity and access activity.
+**Guidance**: Use Azure Active Directory (Azure AD) security reports and monitoring to detect when suspicious or unsafe activity occurs in the environment. Use Azure Security Center to monitor identity and access activity.
- [How to identify Azure AD users flagged for risky activity](../active-directory/identity-protection/overview-identity-protection.md)-- [How to monitor users' identity and access activity in Azure Security Center](../security-center/security-center-identity-access.md)
-**Azure Security Center monitoring**: Yes
+- [How to monitor users' identity and access activity in Azure Security Center](../security-center/security-center-identity-access.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.8: Manage Azure resources only from approved locations
-**Guidance**:
-For users accessing IoT Hub, conditional access isn't supported. To mitigate this, use Azure AD named locations to allow access only from specific logical groupings of IP address ranges or countries/regions for your overall Azure tenant, benefitting all services including IoT Hub.
+**Guidance**: For users accessing IoT Hub, conditional access isn't supported. To mitigate this, use Azure Active Directory (Azure AD) named locations to allow access only from specific logical groupings of IP address ranges or countries/regions for your overall Azure tenant, benefitting all services including IoT Hub.
- [How to configure Azure AD named locations](../active-directory/reports-monitoring/quickstart-configure-named-locations.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.9: Use Azure Active Directory **Guidance**: For user access to IoT Hub, use Azure Active Directory (Azure AD) as the central authentication and authorization system. Azure AD protects data by using strong encryption for data at rest and in transit. Azure AD also salts, hashes, and securely stores user credentials.
For users accessing IoT Hub, conditional access isn't supported. To mitigate thi
For device and service access, IoT Hub uses security tokens and Shared Access Signature (SAS) tokens to authenticate devices and services to avoid sending keys over the network. - [How to create and configure an Azure AD instance](../active-directory/fundamentals/active-directory-access-create-new-tenant.md)-- [IoT Hub security tokens](../iot-fundamentals/iot-security-deployment.md#iot-hub-security-tokens)-
-**Azure Security Center monitoring**: Not Applicable
+- [IoT Hub security tokens](https://docs.microsoft.com/azure/iot-fundamentals/iot-security-deployment#iot-hub-security-tokens)
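An IoT Hub SAS token is built by signing the URL-encoded resource URI plus an expiry timestamp with the base64-decoded shared access key (HMAC-SHA256). A minimal standard-library sketch — the hub name and key below are placeholders, not real credentials:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, key_b64, policy_name=None, ttl_seconds=3600):
    """Sign '<url-encoded uri>\n<expiry>' with the base64-decoded key
    and assemble the SharedAccessSignature string."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{encoded_uri}\n{expiry}".encode()
    signature = base64.b64encode(
        hmac.new(base64.b64decode(key_b64), to_sign, hashlib.sha256).digest()
    )
    token = (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote(signature, safe='')}&se={expiry}"
    )
    if policy_name:  # included for shared access policies, omitted for device keys
        token += f"&skn={policy_name}"
    return token

# Placeholder key for demonstration only
demo_key = base64.b64encode(b"not-a-real-key").decode()
print(generate_sas_token("myhub.azure-devices.net/devices/device1", demo_key))
```

Device SDKs generate and refresh these tokens automatically; building one by hand is mainly useful for testing.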
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.10: Regularly review and reconcile user access
-**Guidance**: Azure AD provides logs to help discover stale accounts. In addition, use Azure AD identity and access reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access.
+**Guidance**: Azure Active Directory (Azure AD) provides logs to help discover stale accounts. In addition, use Azure AD identity and access reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access.
Use Azure AD Privileged Identity Management (PIM) for generation of logs and alerts when suspicious or unsafe activity occurs in the environment. -- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
+- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+ - [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)-- [Deploy Azure AD Privileged Identity Management (PIM)](../active-directory/privileged-identity-management/pim-deployment-plan.md)
-**Azure Security Center monitoring**: Yes
+- [Deploy Azure AD Privileged Identity Management (PIM)](/azure/active-directory/privileged-identity-management/pim-deployment-plan)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.11: Monitor attempts to access deactivated credentials
-**Guidance**:
-You have access to Azure AD sign-in activity, audit, and risk event log sources, which allow you to integrate with any SIEM/monitoring tool.
+**Guidance**: You have access to Azure Active Directory (Azure AD) sign-in activity, audit, and risk event log sources, which allow you to integrate with any SIEM/monitoring tool.
-You can streamline this process by creating diagnostic settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics workspace. You can configure desired alerts within Log Analytics workspace.
+You can streamline this process by creating diagnostic settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics workspace. You can configure desired alerts within Log Analytics workspace.
Use Azure Monitor resource logs to monitor unauthorized connection attempts in the Connections category. -- [How to integrate Azure activity logs with Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)--- [Configure resource logs for IoT hub](monitor-iot-hub.md#collection-and-routing)
+- [How to integrate Azure activity logs with Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
-**Azure Security Center monitoring**: Not Applicable
+- [Configure resource logs for IoT hub](https://docs.microsoft.com/azure/iot-hub/monitor-iot-hub#collection-and-routing)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
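Once the sign-in logs are flowing into a Log Analytics workspace, an alert query along these lines can surface attempts to use deactivated credentials. This is a sketch: the error code shown (50057, "User account is disabled") is our assumption of the relevant code, so verify it against the current Azure AD sign-in error code documentation before alerting on it.

```kusto
// Sign-in attempts against disabled (deactivated) accounts.
SigninLogs
| where ResultType == "50057"   // assumed code for "User account is disabled"
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress, ResultDescription
| order by TimeGenerated desc
```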
### 3.12: Alert on account login behavior deviation

**Guidance**: Use Azure Active Directory (Azure AD) Identity Protection features to configure automated responses to detected suspicious actions related to user identities. You can also ingest data into Azure Sentinel for further investigation.

- [How to view Azure AD risky sign-ins](../active-directory/identity-protection/overview-identity-protection.md)

- [How to configure and enable Identity Protection risk policies](../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)

- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 3.13: Provide Microsoft with access to relevant customer data during support scenarios

**Guidance**: In support scenarios where Microsoft needs to access customer data, it will be requested directly from the customer.

**Responsibility**: Customer

**Azure Security Center monitoring**: None

## Data Protection
*For more information, see the [Azure Security Benchmark: Data Protection](../security/benchmarks/security-control-data-protection.md).*
### 4.1: Maintain an inventory of sensitive Information

- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 4.2: Isolate systems storing or processing sensitive information

**Guidance**: Implement isolation using separate subscriptions and management groups for individual security domains such as environment type and data sensitivity level. You can restrict the level of access to your Azure resources that your applications and enterprise environments demand. You can control access to Azure resources via Azure RBAC.

- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)

- [How to create management groups](../governance/management-groups/create-management-group-portal.md)

- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 4.3: Monitor and block unauthorized transfer of sensitive information

**Guidance**: Use a third-party solution from Azure Marketplace in network perimeters to monitor for unauthorized transfer of sensitive information and block such transfers while alerting information security professionals.

For the underlying platform managed by Microsoft, Microsoft treats all customer data as sensitive and guards against customer data loss and exposure.

- [Understand customer data protection in Azure](../security/fundamentals/protection-customer-data.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 4.4: Encrypt all sensitive information in transit

**Guidance**: IoT Hub uses Transport Layer Security (TLS) to secure connections from IoT devices and services. Three versions of the TLS protocol are currently supported, namely versions 1.0, 1.1, and 1.2. It is strongly recommended that you use TLS 1.2 as the preferred TLS version when connecting to IoT Hub.

Follow Azure Security Center recommendations for encryption at rest and encryption in transit, where applicable.

- [TLS support in IoT Hub](iot-hub-tls-support.md)

- [Understand encryption in transit with Azure](https://docs.microsoft.com/azure/security/fundamentals/encryption-overview#encryption-of-data-in-transit)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
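To illustrate the recommendation, a client written against the Python standard library can refuse any protocol older than TLS 1.2 before it ever opens a connection to an IoT Hub endpoint. This is a minimal sketch; the device SDKs configure TLS for you, and the hub hostname below is a placeholder.

```python
import ssl

# Client-side TLS context with certificate and hostname verification on,
# pinned so that nothing older than TLS 1.2 can be negotiated.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Placeholder hub name; IoT Hub speaks MQTT over TLS on port 8883.
# import socket
# with socket.create_connection(("contoso-hub.azure-devices.net", 8883)) as sock:
#     with context.wrap_socket(sock, server_hostname="contoso-hub.azure-devices.net") as tls:
#         print(tls.version())  # TLS 1.2 or newer
```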
### 4.5: Use an active discovery tool to identify sensitive data

**Guidance**: Data identification, classification, and loss prevention features are not yet available for Azure IoT Hub. Implement a third-party solution if required for compliance purposes.

For the underlying Azure platform managed by Microsoft, Microsoft treats all customer data as sensitive and guards against customer data loss and exposure.

- [Understand customer data protection in Azure](../security/fundamentals/protection-customer-data.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 4.6: Use Azure RBAC to manage access to resources

**Guidance**: For control plane user access to IoT Hub, use Azure RBAC to control access. For data plane access to IoT Hub, use shared access policies for IoT Hub.

- [Control access to IoT Hub](iot-hub-devguide-security.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 4.9: Log and alert on changes to critical Azure resources
- [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None

## Vulnerability Management

*For more information, see the [Azure Security Benchmark: Vulnerability Management](../security/benchmarks/security-control-vulnerability-management.md).*
### 5.3: Deploy an automated patch management solution for third-party software titles

**Guidance**: Not applicable; this guideline is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 5.4: Compare back-to-back vulnerability scans

**Guidance**: Not applicable; this guideline is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 5.5: Use a risk-rating process to prioritize the remediation of discovered vulnerabilities

**Guidance**: Not applicable; this guideline is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None

## Inventory and Asset Management

*For more information, see the [Azure Security Benchmark: Inventory and Asset Management](../security/benchmarks/security-control-inventory-asset-management.md).*
### 6.1: Use automated asset discovery solution

**Guidance**: Not applicable; this guideline is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 6.2: Maintain asset metadata

**Guidance**: Apply tags to Azure resources (not all resources support tags, but most do) to logically organize them into a taxonomy.

- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 6.3: Delete unauthorized Azure resources

**Guidance**: Use tagging, management groups, and separate subscriptions where appropriate, to organize and track assets. Reconcile inventory on a regular basis and ensure unauthorized resources are deleted from the subscription in a timely manner.

- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)

- [How to create management groups](../governance/management-groups/create-management-group-portal.md)

- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 6.4: Define and maintain an inventory of approved Azure resources

**Guidance**: Create an inventory of approved Azure resources and approved software for compute resources as per your organizational needs.

Each IoT Hub has an identity registry that can be used to create per-device resources in the service. Individual or groups of device identities can be added to an allowlist, or a blocklist, enabling complete control over device access.

- [IoT Hub identity registry](iot-hub-devguide-identity-registry.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 6.5: Monitor for unapproved Azure resources

**Guidance**: Use Azure Policy to put restrictions on the type of resources that can be created in your subscriptions.

Use Azure Resource Graph to query for and discover resources within your subscriptions. Ensure that all Azure resources present in the environment are approved.

- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
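As a sketch of the reconciliation step, a Resource Graph query (runnable in Resource Graph Explorer) can list every IoT hub visible to you so the results can be compared against the approved inventory:

```kusto
// All IoT hubs across the subscriptions you can see.
Resources
| where type =~ 'microsoft.devices/iothubs'
| project name, resourceGroup, subscriptionId, location, tags
```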
### 6.6: Monitor for unapproved software applications within compute resources

**Guidance**: Not applicable; this recommendation is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 6.7: Remove unapproved Azure resources and software applications

**Guidance**: Not applicable; this recommendation is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 6.8: Use only approved applications

**Guidance**: Not applicable; this recommendation is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 6.9: Use only approved Azure services

**Guidance**: Use Azure Policy to put restrictions on the type of resources that can be created in customer subscriptions using the following built-in policy definitions:

- Not allowed resource types

- Allowed resource types

In addition, use the Azure Resource Graph to query/discover resources within the subscriptions.

- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)

- [How to create queries with Azure Graph](../governance/resource-graph/first-query-portal.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 6.10: Maintain an inventory of approved software titles

**Guidance**: Not applicable; this recommendation is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 6.11: Limit users' ability to interact with Azure Resource Manager

**Guidance**: Use Azure Active Directory (Azure AD) Conditional Access to limit users' ability to interact with Azure Resource Manager by configuring "Block access" for the "Microsoft Azure Management" App.

- [How to configure Conditional Access to block access to Azure Resource Manager](../role-based-access-control/conditional-access-azure-management.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 6.12: Limit users' ability to execute scripts in compute resources

**Guidance**: Not applicable; this recommendation is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 6.13: Physically or logically segregate high risk applications

**Guidance**: Not applicable; this recommendation is intended for web applications running on Azure App Service or compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None

## Secure Configuration

*For more information, see the [Azure Security Benchmark: Secure Configuration](../security/benchmarks/security-control-secure-configuration.md).*
### 7.1: Establish secure configurations for all Azure resources
Azure Resource Manager has the ability to export the template in JavaScript Object Notation (JSON), which should be reviewed to ensure that the configurations meet or exceed the security requirements for your organization.

You can also use the recommendations from Azure Security Center as a secure configuration baseline for your Azure resources.

- [How to view available Azure Policy aliases](/powershell/module/az.resources/get-azpolicyalias)

- [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md)

- [Security recommendations - a reference guide](../security-center/recommendations-reference.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 7.2: Establish secure operating system configurations

**Guidance**: Not applicable; this guideline is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 7.3: Maintain secure Azure resource configurations

**Guidance**: Use Azure Policy [deny] and [deploy if not exist] effects to enforce secure settings across your Azure resources. In addition, you can use Azure Resource Manager templates to maintain the security configuration of your Azure resources required by your organization.

- [Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md)

- [Azure Resource Manager templates overview](../azure-resource-manager/templates/overview.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 7.4: Maintain secure operating system configurations

**Guidance**: Not applicable; this guideline is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 7.5: Securely store configuration of Azure resources

**Guidance**: If using custom Azure Policy definitions for your Azure IoT Hub or related resources, use Azure Repos to securely store and manage your code.

- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow)

- [Azure Repos Documentation](/azure/devops/repos)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 7.6: Securely store custom operating system images

**Guidance**: Not applicable; this guideline is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 7.7: Deploy configuration management tools for Azure resources

**Guidance**: Use Azure Policy aliases in the "Microsoft.Devices" namespace to create custom policies to alert, audit, and enforce system configurations. Additionally, develop a process and pipeline for managing policy exceptions.

- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)

- [How to use aliases](https://docs.microsoft.com/azure/governance/policy/concepts/definition-structure#aliases)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
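As an illustration of a custom policy in this namespace, the definition below audits IoT hubs that leave public network access enabled. The alias used is an assumption for the sake of the example; enumerate the real "Microsoft.Devices" aliases (for example with `Get-AzPolicyAlias -NamespaceMatch 'Microsoft.Devices'`) before relying on it.

```json
{
  "mode": "All",
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.Devices/IotHubs" },
        { "field": "Microsoft.Devices/IotHubs/properties.publicNetworkAccess", "notEquals": "Disabled" }
      ]
    },
    "then": { "effect": "audit" }
  }
}
```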
### 7.8: Deploy configuration management tools for operating systems

**Guidance**: Not applicable; this guideline is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 7.9: Implement automated configuration monitoring for Azure resources

**Guidance**: Use Azure Security Center to perform baseline scans for your Azure resources. Additionally, use Azure Policy to alert and audit Azure resource configurations.

- [How to remediate recommendations in Azure Security Center](../security-center/security-center-remediate-recommendations.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 7.10: Implement automated configuration monitoring for operating systems

**Guidance**: Not applicable; this guideline is intended for compute resources.

**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None
### 7.11: Manage Azure secrets securely

**Guidance**: IoT Hub uses security tokens and Shared Access Signature (SAS) tokens to authenticate devices and services to avoid sending keys on the network. Use managed identities in conjunction with Azure Key Vault to simplify secret management for your cloud applications.

- [IoT Hub security tokens](https://docs.microsoft.com/azure/iot-fundamentals/iot-security-deployment#iot-hub-security-tokens)

- [How to use managed identities for IoT Hub](https://docs.microsoft.com/azure/iot-hub/virtual-network-support#turn-on-managed-identity-for-iot-hub)

- [How to create a key vault](../key-vault/general/quick-create-portal.md)

- [How to provide Key Vault authentication with a managed identity](../key-vault/general/assign-access-policy-portal.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
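The SAS tokens mentioned above follow a documented shape: an HMAC-SHA256 signature over the URL-encoded resource URI and an expiry timestamp, signed with the shared access key. The sketch below mirrors that format for illustration; in practice the device and service SDKs generate and renew tokens for you, and the hub name and key here are placeholders, not real credentials.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse
from typing import Optional

def generate_sas_token(resource_uri: str, b64_key: str,
                       policy_name: Optional[str] = None,
                       ttl_seconds: int = 3600) -> str:
    """Sign the URL-encoded resource URI plus an expiry with HMAC-SHA256."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(base64.b64decode(b64_key), to_sign, hashlib.sha256).digest())
    token = (f"SharedAccessSignature sr={encoded_uri}"
             f"&sig={urllib.parse.quote(signature, safe='')}&se={expiry}")
    if policy_name:  # service tokens carry the shared access policy name
        token += f"&skn={policy_name}"
    return token

# Placeholder hub and a throwaway base64 key -- not a real credential.
sample_token = generate_sas_token(
    "contoso-hub.azure-devices.net/devices/device-1",
    base64.b64encode(b"not-a-real-key").decode())
```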
### 7.12: Manage identities securely and automatically

**Guidance**: IoT Hub uses security tokens and Shared Access Signature (SAS) tokens to authenticate devices and services to avoid sending keys on the network.

Use managed identities to provide Azure services with an automatically managed identity in Azure Active Directory (Azure AD). Managed identities allow you to authenticate to any service that supports Azure AD authentication, including Key Vault, without any credentials in your code.

- [IoT Hub security tokens](https://docs.microsoft.com/azure/iot-fundamentals/iot-security-deployment#iot-hub-security-tokens)

- [How to configure managed identities for IoT Hub](https://docs.microsoft.com/azure/iot-hub/virtual-network-support#turn-on-managed-identity-for-iot-hub)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 7.13: Eliminate unintended credential exposure

**Guidance**: Implement Credential Scanner to identify credentials within code. Credential Scanner will also encourage moving discovered credentials to more secure locations such as Azure Key Vault.

- [How to set up Credential Scanner](https://secdevtools.azurewebsites.net/helpcredscan.html)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
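To make the idea concrete, a toy scanner along the lines below flags strings that look like IoT Hub connection strings or shared access keys checked into code. This is a hedged sketch, not Credential Scanner itself: the patterns are illustrative and a real tool ships far broader rules.

```python
import re

# Illustrative patterns only; a production scanner has many more rules.
PATTERNS = {
    "iot-hub-connection-string": re.compile(
        r"HostName=[^;]+;(?:DeviceId=[^;]+;)?(?:SharedAccessKeyName=[^;]+;)?"
        r"SharedAccessKey=[A-Za-z0-9+/=]+"),
    "shared-access-key": re.compile(r"SharedAccessKey=[A-Za-z0-9+/=]{24,}"),
}

def scan(text: str):
    """Return (rule_name, matched_text) pairs for anything credential-shaped."""
    return [(name, m.group(0))
            for name, pattern in PATTERNS.items()
            for m in pattern.finditer(text)]

# Fake connection string for demonstration; both rules fire on it.
sample = ('conn = "HostName=contoso-hub.azure-devices.net;DeviceId=device-1;'
          'SharedAccessKey=ZmFrZWtleWZha2VrZXlmYWtla2V5"')
findings = scan(sample)
```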
## Malware Defense

*For more information, see the [Azure Security Benchmark: Malware Defense](../security/benchmarks/security-control-malware-defense.md).*
### 8.2: Pre-scan files to be uploaded to non-compute Azure resources
Microsoft anti-malware is enabled on the underlying host that supports Azure services (for example, Azure App Service), however it does not run on customer content.

It is your responsibility to pre-scan any content being uploaded to non-compute Azure resources. Microsoft cannot access customer data, and therefore cannot conduct anti-malware scans of customer content on your behalf.

**Responsibility**: Customer

**Azure Security Center monitoring**: None
## Data Recovery

*For more information, see the [Azure Security Benchmark: Data Recovery](../security/benchmarks/security-control-data-recovery.md).*
### 9.1: Ensure regular automated back ups
- [How to clone IoT Hub](iot-hub-how-to-clone.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 9.2: Perform complete system backups and backup any customer-managed keys

**Guidance**: Azure IoT Hub recommends that the secondary IoT hub contain all device identities that can connect to the solution. The solution should keep geo-replicated backups of device identities, and upload them to the secondary IoT hub before switching the active endpoint for the devices. The device identity export functionality of IoT Hub is useful in this context.

- [IoT Hub high availability and disaster recovery](https://docs.microsoft.com/azure/iot-hub/iot-hub-ha-dr#achieve-cross-region-ha)

- [IoT Hub device identity export](iot-hub-bulk-identity-mgmt.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 9.3: Validate all backups including customer-managed keys

**Guidance**: Azure IoT Hub recommends that the secondary IoT hub contain all device identities that can connect to the solution. The solution should keep geo-replicated backups of device identities, and upload them to the secondary IoT hub before switching the active endpoint for the devices. The device identity export functionality of IoT Hub is useful in this context.

Periodically perform data restoration of content in backup. Ensure that you can restore backed-up customer-managed keys.

- [IoT Hub high availability and disaster recovery](https://docs.microsoft.com/azure/iot-hub/iot-hub-ha-dr#achieve-cross-region-ha)

- [IoT Hub device identity export](iot-hub-bulk-identity-mgmt.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 9.4: Ensure protection of backups and customer-managed keys

**Guidance**: Enable soft delete and purge protection in Key Vault to protect keys against accidental or malicious deletion. If Azure Storage is used to store backups, enable soft delete to save and recover your data when blobs or blob snapshots are deleted.

- [Understand Azure RBAC](../role-based-access-control/overview.md)

- [Soft delete for Azure Blob storage](../storage/blobs/soft-delete-blob-overview.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None

## Incident Response

*For more information, see the [Azure Security Benchmark: Incident Response](../security/benchmarks/security-control-incident-response.md).*
### 10.1: Create an incident response guide
- [Use NIST's Computer Security Incident Handling Guide to aid in the creation of your own incident response plan](https://csrc.nist.gov/publications/detail/sp/800-61/rev-2/final)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 10.2: Create an incident scoring and prioritization procedure

**Guidance**: Azure Security Center assigns a severity to each alert to help you prioritize which alerts should be investigated first. The severity is based on how confident Security Center is in the finding or the analytic used to issue the alert, as well as the confidence level that there was malicious intent behind the activity that led to the alert.

- [Use tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 10.3: Test security response procedures

**Guidance**: Conduct exercises to test your systems' incident response capabilities on a regular cadence to help protect your Azure resources. Identify weak points and gaps and then revise your response plan as needed.

- [NIST's publication: Guide to Test, Training, and Exercise Programs for IT Plans and Capabilities](https://csrc.nist.gov/publications/detail/sp/800-84/final)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
### 10.4: Provide security incident contact details and configure alert notifications for security incidents

**Guidance**: Security incident contact information will be used by Microsoft to contact you if the Microsoft Security Response Center (MSRC) discovers that your data has been accessed by an unlawful or unauthorized party. Review incidents after the fact to ensure that issues are resolved.

- [How to set the Azure Security Center security contact](../security-center/security-center-provide-security-contact-details.md)

**Responsibility**: Customer

**Azure Security Center monitoring**: None
+
### 10.5: Incorporate security alerts into your incident response system

**Guidance**: Export your Azure Security Center alerts and recommendations using the continuous export feature to help identify risks to Azure resources. Continuous export allows you to export alerts and recommendations either manually or in an ongoing, continuous fashion. You can use the Azure Security Center data connector to stream the alerts to Azure Sentinel.
Periodically perform data restoration of content in backup. Ensure that you can
- [ How to stream alerts into Azure Sentinel](../sentinel/connect-azure-security-center.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 10.6: Automate the response to security alerts

**Guidance**: Use the workflow automation feature of Azure Security Center to automatically trigger responses to security alerts and recommendations to protect your Azure resources.

- [How to configure workflow automation in Security Center](../security-center/workflow-automation.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
-## Penetration tests and red team exercises
+**Azure Security Center monitoring**: None
-*For more information, see the [Azure Security Benchmark: Penetration tests and red team exercises](../security/benchmarks/security-control-penetration-tests-red-team-exercises.md).*
+## Penetration Tests and Red Team Exercises
+
+*For more information, see the [Azure Security Benchmark: Penetration Tests and Red Team Exercises](../security/benchmarks/security-control-penetration-tests-red-team-exercises.md).*
### 11.1: Conduct regular penetration testing of your Azure resources and ensure remediation of all critical security findings
Periodically perform data restoration of content in backup. Ensure that you can
- [Microsoft Cloud Red Teaming](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Shared
+**Azure Security Center monitoring**: None
+
## Next steps
-- See the [Azure security benchmark](../security/benchmarks/overview.md)
-- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
+- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
+- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
iot-pnp Concepts Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/concepts-components.md
- Title: Understand components in IoT Plug and Play models | Microsoft Docs
-description: Understand difference between IoT Plug and Play DTDL models that use components and models that don't use components.
-- Previously updated : 07/07/2020----
-# As a device builder, I want to understand difference between DTDL models that use components and models that don't use components.
--
-# IoT Plug and Play components in models
-
-In the IoT Plug and Play conventions, a device is an IoT Plug and Play device if it presents its digital twins definition language (DTDL) model ID when it connects to an IoT hub.
-
-The following snippet shows some example model IDs:
-
-```json
- "@id": "dtmi:com:example:TemperatureController;1"
- "@id": "dtmi:com:example:Thermostat;1",
-```
-
-## No components
-
-A simple model doesn't use embedded or cascaded components. It includes header information and a contents section to define telemetry, properties, and commands.
-
-The following example shows part of a simple model that doesn't use components:
-
-```json
-{
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:com:example:Thermostat;1",
- "@type": "Interface",
- "displayName": "Thermostat",
- "description": "Reports current temperature and provides desired temperature control.",
- "contents": [
- {
- "@type": [
- "Telemetry",
- "Temperature"
- ],
- "name": "temperature",
- "displayName": "Temperature",
- "description": "Temperature in degrees Celsius.",
- "schema": "double",
- "unit": "degreeCelsius"
- },
- {
- "@type": [
- "Property",
-...
-```
-
-Although the model doesn't explicitly define a component, it behaves as if there is a single, _default component_, with all the telemetry, property, and command definitions.
-
-The following screenshot shows how the model displays in the Azure IoT explorer tool:
--
-The model ID is stored in a device twin property as the following screenshot shows:
--
-A DTDL model without components is a useful simplification for a device or IoT Edge module with a single set of telemetry, properties, and commands. A model that doesn't use components makes it easy to migrate an existing device or module to be an IoT Plug and Play device or module - you create a DTDL model that describes your actual device or module without the need to define any components.
-
-> [!TIP]
-> A module can be a device [module](../iot-hub/iot-hub-devguide-module-twins.md) or an [IoT Edge module](../iot-edge/about-iot-edge.md).
-
-## Multiple components
-
-Components let you build a model interface as an assembly of other interfaces.
-
-For example, the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) interface is defined as a model. You can incorporate this interface as one or more components when you define the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json). In the following example, these components are called `thermostat1` and `thermostat2`.
-
-For a DTDL model with multiple components, there are two or more component sections. Each section has `@type` set to `Component` and explicitly refers to a schema as shown in the following snippet:
-
-```json
-{
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:com:example:TemperatureController;1",
- "@type": "Interface",
- "displayName": "Temperature Controller",
- "description": "Device with two thermostats and remote reboot.",
- "contents": [
-...
- {
- "@type" : "Component",
- "schema": "dtmi:com:example:Thermostat;1",
- "name": "thermostat1",
- "displayName": "Thermostat One",
- "description": "Thermostat One of Two."
- },
- {
- "@type" : "Component",
- "schema": "dtmi:com:example:Thermostat;1",
- "name": "thermostat2",
- "displayName": "Thermostat Two",
- "description": "Thermostat Two of Two."
- },
- {
- "@type": "Component",
- "schema": "dtmi:azure:DeviceManagement:DeviceInformation;1",
- "name": "deviceInformation",
- "displayName": "Device Information interface",
- "description": "Optional interface with basic device hardware information."
- }
-...
-```
-
-This model has three components defined in the contents section - two `Thermostat` components and a `DeviceInformation` component. There's also a default component.
-
-## Next steps
-
-Now that you've learned about model components, here are some additional resources:
-- [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md)
-- [Digital Twins Definition Language v2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
-- [Model repositories](./concepts-model-repository.md)
iot-pnp Concepts Convention https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/concepts-convention.md
You describe the telemetry, properties, and commands that an IoT Plug and Play d
- **No component** - A model with no components. The model declares telemetry, properties, and commands as top-level properties in the contents section of the main interface. In the Azure IoT explorer tool, this model appears as a single _default component_.
- **Multiple components** - A model composed of two or more interfaces. A main interface, which appears as the _default component_, with telemetry, properties, and commands. One or more interfaces declared as components with additional telemetry, properties, and commands.
-For more information, see [IoT Plug and Play components in models](concepts-components.md).
+For more information, see [IoT Plug and Play modeling guide](concepts-modeling-guide.md).
## Identify the model
Now that you've learned about IoT Plug and Play conventions, here are some addit
- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
- [C device SDK](/azure/iot-hub/iot-c-sdk-ref/)
- [IoT REST API](/rest/api/iothub/device)
-- [Model components](./concepts-components.md)
+- [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
iot-pnp Concepts Developer Guide Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/concepts-developer-guide-device.md
This guide describes the basic steps required to create a device, module, or IoT
To build an IoT Plug and Play device, module, or IoT Edge module, follow these steps:

1. Ensure your device is using either the MQTT or MQTT over WebSockets protocol to connect to Azure IoT Hub.
-1. Create a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-components.md).
+1. Create a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md).
1. Update your device or module to announce the `model-id` as part of the device connection.
1. Implement telemetry, properties, and commands using the [IoT Plug and Play conventions](concepts-convention.md).
Now that you've learned about IoT Plug and Play device development, here are som
- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
- [C device SDK](/azure/iot-hub/iot-c-sdk-ref/)
- [IoT REST API](/rest/api/iothub/device)
-- [Model components](concepts-components.md)
+- [Understand components in IoT Plug and Play models](concepts-modeling-guide.md)
- [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md) - [IoT Plug and Play service developer guide](concepts-developer-guide-service.md)
iot-pnp Concepts Developer Guide Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/concepts-developer-guide-service.md
Now that you've learned about device modeling, here are some additional resource
- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
- [C device SDK](/azure/iot-hub/iot-c-sdk-ref/)
- [IoT REST API](/rest/api/iothub/device)
-- [Model components](./concepts-components.md)
+- [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
- [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md)
iot-pnp Concepts Modeling Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/concepts-modeling-guide.md
+
+ Title: Understand IoT Plug and Play device models | Microsoft Docs
+description: Understand the Digital Twins Definition Language (DTDL) modeling language for IoT Plug and Play devices. The article describes primitive and complex datatypes, reuse patterns that use components and inheritance, and semantic types. The article provides guidance on the choice of device twin model identifier and tooling support for model authoring.
++ Last updated : 03/09/2021++++
+# As a device builder, I want to understand how to design and author a DTDL model for an IoT Plug and Play device.
+++
+# IoT Plug and Play modeling guide
+
+At the core of IoT Plug and Play is a device _model_ that describes a device's capabilities to an IoT Plug and Play-enabled application. This model is structured as a set of interfaces that define:
+
+- _Properties_ that represent the read-only or writable state of a device or other entity. For example, a device serial number may be a read-only property and a target temperature on a thermostat may be a writable property.
+- _Telemetry_ fields that define the data emitted by a device, whether the data is a regular stream of sensor readings, an occasional error, or an information message.
+- _Commands_ that describe a function or operation that can be done on a device. For example, a command could reboot a gateway or take a picture using a remote camera.
+
+To learn more about how IoT Plug and Play uses device models, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md) and [IoT Plug and Play service developer guide](concepts-developer-guide-service.md).
+
+To define a model, you use the Digital Twins Definition Language (DTDL). DTDL uses a JSON variant called [JSON-LD](https://json-ld.org/). The following snippet shows the model for a thermostat device that:
+
+- Has a unique model ID: `dtmi:com:example:Thermostat;1`.
+- Sends temperature telemetry.
+- Has a writable property to set the target temperature.
+- Has a read-only property to report the maximum temperature since the last reboot.
+- Responds to a command that requests maximum, minimum and average temperatures over a time period.
+
+```json
+{
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:Thermostat;1",
+ "@type": "Interface",
+ "displayName": "Thermostat",
+ "description": "Reports current temperature and provides desired temperature control.",
+ "contents": [
+ {
+ "@type": [
+ "Telemetry",
+ "Temperature"
+ ],
+ "name": "temperature",
+ "displayName": "Temperature",
+ "description": "Temperature in degrees Celsius.",
+ "schema": "double",
+ "unit": "degreeCelsius"
+ },
+ {
+ "@type": [
+ "Property",
+ "Temperature"
+ ],
+ "name": "targetTemperature",
+ "schema": "double",
+ "displayName": "Target Temperature",
+ "description": "Allows to remotely specify the desired target temperature.",
+ "unit": "degreeCelsius",
+ "writable": true
+ },
+ {
+ "@type": [
+ "Property",
+ "Temperature"
+ ],
+ "name": "maxTempSinceLastReboot",
+ "schema": "double",
+ "unit": "degreeCelsius",
+ "displayName": "Max temperature since last reboot.",
+ "description": "Returns the max temperature since last device reboot."
+ },
+ {
+ "@type": "Command",
+ "name": "getMaxMinReport",
+ "displayName": "Get Max-Min report.",
+ "description": "This command returns the max, min and average temperature from the specified time to the current time.",
+ "request": {
+ "name": "since",
+ "displayName": "Since",
+ "description": "Period to return the max-min report.",
+ "schema": "dateTime"
+ },
+ "response": {
+ "name": "tempReport",
+ "displayName": "Temperature Report",
+ "schema": {
+ "@type": "Object",
+ "fields": [
+ {
+ "name": "maxTemp",
+ "displayName": "Max temperature",
+ "schema": "double"
+ },
+ {
+ "name": "minTemp",
+ "displayName": "Min temperature",
+ "schema": "double"
+ },
+ {
+ "name": "avgTemp",
+ "displayName": "Average Temperature",
+ "schema": "double"
+ },
+ {
+ "name": "startTime",
+ "displayName": "Start Time",
+ "schema": "dateTime"
+ },
+ {
+ "name": "endTime",
+ "displayName": "End Time",
+ "schema": "dateTime"
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+
+The thermostat model has a single interface. Later examples in this article show more complex models that use components and inheritance.
+
+This article describes how to design and author your own models and covers topics such as data types, model structure, and tools.
+
+To learn more, see the [Digital Twins Definition Language v2](https://github.com/Azure/opendigitaltwins-dtdl) specification.
+
+## Model structure
+
+Properties, telemetry, and commands are grouped into interfaces. This section describes how you can use interfaces to describe simple and complex models by using components and inheritance.
+
+### Model IDs
+
+Every interface has a unique digital twin model identifier (DTMI). Complex models use DTMIs to identify components. Applications can use the DTMIs that devices send to locate model definitions in a repository.
+
+DTMIs should follow the naming convention required by the [IoT Plug and Play model repository](https://github.com/Azure/iot-plugandplay-models):
+
+- The DTMI prefix is `dtmi:`.
+- The DTMI suffix is the version number for the model, such as `;2`.
+- The body of the DTMI maps to the folder and file in the model repository where the model is stored. The version number is part of the file name.
+
+For example, the model identified by the DTMI `dtmi:com:example:Thermostat;2` is stored in the *dtmi/com/example/thermostat-2.json* file.
+
+The following snippet shows the outline of an interface definition with its unique DTMI:
+
+```json
+{
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:Thermostat;2",
+ "@type": "Interface",
+ "displayName": "Thermostat",
+ "description": "Reports current temperature and provides desired temperature control.",
+ "contents": [
+ ...
+ ]
+}
+```
+
+### No components
+
+A simple model, such as the thermostat shown previously, doesn't use embedded or cascaded components. Telemetry, properties, and commands are defined in the `contents` node of the interface.
+
+The following example shows part of a simple model that doesn't use components:
+
+```json
+{
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:Thermostat;1",
+ "@type": "Interface",
+ "displayName": "Thermostat",
+ "description": "Reports current temperature and provides desired temperature control.",
+ "contents": [
+ {
+ "@type": [
+ "Telemetry",
+ "Temperature"
+ ],
+ "name": "temperature",
+ "displayName": "Temperature",
+ "description": "Temperature in degrees Celsius.",
+ "schema": "double",
+ "unit": "degreeCelsius"
+ },
+ {
+ "@type": [
+ "Property",
+...
+```
+
+Tools such as Azure IoT Explorer and the IoT Central device template designer label a standalone interface like the thermostat as a _default component_.
+
+The following screenshot shows how the model displays in the Azure IoT explorer tool:
++
+The following screenshot shows how the model displays as the default component in the IoT Central device template designer. Select **View identity** to see the DTMI of the model:
++
+The model ID is stored in a device twin property as the following screenshot shows:
++
+A DTDL model without components is a useful simplification for a device or an IoT Edge module with a single set of telemetry, properties, and commands. A model that doesn't use components makes it easy to migrate an existing device or module to be an IoT Plug and Play device or module - you create a DTDL model that describes your actual device or module without the need to define any components.
+
+> [!TIP]
+> A module can be a device [module](../iot-hub/iot-hub-devguide-module-twins.md) or an [IoT Edge module](../iot-edge/about-iot-edge.md).
+
+### Reuse
+
+There are two ways to reuse interface definitions. Use multiple components in a model to reference other interface definitions. Use inheritance to extend existing interface definitions.
+
+### Multiple components
+
+Components let you build a model interface as an assembly of other interfaces.
+
+For example, the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) interface is defined as a model. You can incorporate this interface as one or more components when you define the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json). In the following example, these components are called `thermostat1` and `thermostat2`.
+
+For a DTDL model with multiple components, there are two or more component sections. Each section has `@type` set to `Component` and explicitly refers to a schema as shown in the following snippet:
+
+```json
+{
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:TemperatureController;1",
+ "@type": "Interface",
+ "displayName": "Temperature Controller",
+ "description": "Device with two thermostats and remote reboot.",
+ "contents": [
+ {
+ "@type": [
+ "Telemetry",
+ "DataSize"
+ ],
+ "name": "workingSet",
+ "displayName": "Working Set",
+ "description": "Current working set of the device memory in KiB.",
+ "schema": "double",
+ "unit": "kibibyte"
+ },
+ {
+ "@type": "Property",
+ "name": "serialNumber",
+ "displayName": "Serial Number",
+ "description": "Serial number of the device.",
+ "schema": "string"
+ },
+ {
+ "@type": "Command",
+ "name": "reboot",
+ "displayName": "Reboot",
+ "description": "Reboots the device after waiting the number of seconds specified.",
+ "request": {
+ "name": "delay",
+ "displayName": "Delay",
+ "description": "Number of seconds to wait before rebooting the device.",
+ "schema": "integer"
+ }
+ },
+ {
+ "@type" : "Component",
+ "schema": "dtmi:com:example:Thermostat;1",
+ "name": "thermostat1",
+ "displayName": "Thermostat One",
+ "description": "Thermostat One of Two."
+ },
+ {
+ "@type" : "Component",
+ "schema": "dtmi:com:example:Thermostat;1",
+ "name": "thermostat2",
+ "displayName": "Thermostat Two",
+ "description": "Thermostat Two of Two."
+ },
+ {
+ "@type": "Component",
+ "schema": "dtmi:azure:DeviceManagement:DeviceInformation;1",
+ "name": "deviceInformation",
+ "displayName": "Device Information interface",
+ "description": "Optional interface with basic device hardware information."
+ }
+ ]
+}
+```
+
+This model has three components defined in the contents section - two `Thermostat` components and a `DeviceInformation` component. The contents section also includes property, telemetry, and command definitions.
+
+The following screenshots show how this model appears in IoT Central. The property, telemetry, and command definitions in the temperature controller appear in the top-level **Default component**. The property, telemetry, and command definitions for each thermostat appear in the component definitions:
+++
+To learn how to write device code that interacts with components, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md).
+
+To learn how to write service code that interacts with components on a device, see [IoT Plug and Play service developer guide](concepts-developer-guide-service.md).
+
+### Inheritance
+
+Inheritance lets you reuse the capabilities defined in a base interface to extend the capabilities of another interface. For example, several device models can share common capabilities such as a serial number:
++
+The following snippet shows a DTDL model that uses the `extends` keyword to define the inheritance relationship shown in the previous diagram:
+
+```json
+[
+ {
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:Thermostat;1",
+ "@type": "Interface",
+ "contents": [
+ {
+ "@type": "Telemetry",
+ "name": "temperature",
+ "schema": "double",
+ "unit": "degreeCelsius"
+ },
+ {
+ "@type": "Property",
+ "name": "targetTemperature",
+ "schema": "double",
+ "unit": "degreeCelsius",
+ "writable": true
+ }
+ ],
+ "extends": [
+ "dtmi:com:example:baseDevice;1"
+ ]
+ },
+ {
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:baseDevice;1",
+ "@type": "Interface",
+ "contents": [
+ {
+ "@type": "Property",
+ "name": "SerialNumber",
+ "schema": "double",
+ "writable": false
+ }
+ ]
+ }
+]
+```
+
+The following screenshot shows this model in the IoT Central device template environment:
++
+When you write device or service-side code, your code doesn't need to do anything special to handle inherited interfaces. In the example shown in this section, your device code reports the serial number as if it's part of the thermostat interface.
+
+### Tips
+
+You can combine components and inheritance when you create a model. The following diagram shows a `thermostat` model inheriting from a `baseDevice` interface. The `baseDevice` interface has a component that itself inherits from another interface:
++
+The following snippet shows a DTDL model that uses the `extends` and `component` keywords to define the inheritance relationship and component usage shown in the previous diagram:
+
+```json
+[
+ {
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:Thermostat;1",
+ "@type": "Interface",
+ "contents": [
+ {
+ "@type": "Telemetry",
+ "name": "temperature",
+ "schema": "double",
+ "unit": "degreeCelsius"
+ },
+ {
+ "@type": "Property",
+ "name": "targetTemperature",
+ "schema": "double",
+ "unit": "degreeCelsius",
+ "writable": true
+ }
+ ],
+ "extends": [
+ "dtmi:com:example:baseDevice;1"
+ ]
+ },
+ {
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:baseDevice;1",
+ "@type": "Interface",
+ "contents": [
+ {
+ "@type": "Property",
+ "name": "SerialNumber",
+ "schema": "double",
+ "writable": false
+ },
+ {
+ "@type" : "Component",
+ "schema": "dtmi:com:example:baseComponent;1",
+ "name": "baseComponent"
+ }
+ ]
+ }
+]
+```
+
+## Data types
+
+Use data types to define telemetry, properties, and command parameters. Data types can be primitive or complex. Complex datatypes use primitives or other complex types. The maximum depth for complex types is five levels.
+
+### Primitive types
+
+The following table shows the set of primitive types you can use:
+
+| Primitive type | Description |
+| | |
+| `boolean` | A boolean value |
+| `date` | A full-date as defined in [section 5.6 of RFC 3339](https://tools.ietf.org/html/rfc3339#section-5.6) |
+| `dateTime` | A date-time as defined in [RFC 3339](https://tools.ietf.org/html/rfc3339) |
+| `double` | An IEEE 8-byte floating point |
+| `duration` | A duration in ISO 8601 format |
+| `float` | An IEEE 4-byte floating point |
+| `integer` | A signed 4-byte integer |
+| `long` | A signed 8-byte integer |
+| `string` | A UTF8 string |
+| `time` | A full-time as defined in [section 5.6 of RFC 3339](https://tools.ietf.org/html/rfc3339#section-5.6) |
+
+The following snippet shows an example telemetry definition that uses the `double` type in the `schema` field:
+
+```json
+{
+ "@type": "Telemetry",
+ "name": "temperature",
+ "displayName": "Temperature",
+ "schema": "double"
+}
+```
+
+### Complex datatypes
+
+Complex datatypes are one of *array*, *enumeration*, *map*, *object*, or one of the geospatial types.
+
+#### Arrays
+
+An array is an indexable data type where all elements are the same type. The element type can be a primitive or complex type.
+
+The following snippet shows an example telemetry definition that uses the `Array` type in the `schema` field. The elements of the array are booleans:
+
+```json
+{
+ "@type": "Telemetry",
+ "name": "ledState",
+ "schema": {
+ "@type": "Array",
+ "elementSchema": "boolean"
+ }
+}
+```
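+
+The element type can also be a complex type. The following sketch is not part of the official samples; the `temperatureHistory` name and its fields are hypothetical. It defines an array whose elements are objects:
+
+```json
+{
+  "@type": "Telemetry",
+  "name": "temperatureHistory",
+  "schema": {
+    "@type": "Array",
+    "elementSchema": {
+      "@type": "Object",
+      "fields": [
+        {
+          "name": "timestamp",
+          "schema": "dateTime"
+        },
+        {
+          "name": "value",
+          "schema": "double"
+        }
+      ]
+    }
+  }
+}
+```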
+
+#### Enumerations
+
+An enumeration describes a type with a set of named labels that map to values. The values can be either integers or strings, but the labels are always strings.
+
+The following snippet shows an example telemetry definition that uses the `Enum` type in the `schema` field. The values in the enumeration are integers:
+
+```json
+{
+ "@type": "Telemetry",
+ "name": "state",
+ "schema": {
+ "@type": "Enum",
+ "valueSchema": "integer",
+ "enumValues": [
+ {
+ "name": "offline",
+ "displayName": "Offline",
+ "enumValue": 1
+ },
+ {
+ "name": "online",
+ "displayName": "Online",
+ "enumValue": 2
+ }
+ ]
+ }
+}
+```
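+
+Because enumeration values can also be strings, the same `state` telemetry could instead be modeled with string values, as in this hypothetical variant of the example above:
+
+```json
+{
+  "@type": "Telemetry",
+  "name": "state",
+  "schema": {
+    "@type": "Enum",
+    "valueSchema": "string",
+    "enumValues": [
+      {
+        "name": "offline",
+        "displayName": "Offline",
+        "enumValue": "offline"
+      },
+      {
+        "name": "online",
+        "displayName": "Online",
+        "enumValue": "online"
+      }
+    ]
+  }
+}
+```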
+
+#### Maps
+
+A map is a type with key-value pairs where the values all have the same type. The key in a map must be a string. The values in a map can be any type, including another complex type.
+
+The following snippet shows an example property definition that uses the `Map` type in the `schema` field. The values in the map are strings:
+
+```json
+{
+ "@type": "Property",
+ "name": "modules",
+ "writable": true,
+ "schema": {
+ "@type": "Map",
+ "mapKey": {
+ "name": "moduleName",
+ "schema": "string"
+ },
+ "mapValue": {
+ "name": "moduleState",
+ "schema": "string"
+ }
+ }
+}
+```
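+
+Map values can themselves be complex types. In the following hypothetical variant of the example above, each module name maps to an object that holds the module's state and version:
+
+```json
+{
+  "@type": "Property",
+  "name": "modules",
+  "writable": true,
+  "schema": {
+    "@type": "Map",
+    "mapKey": {
+      "name": "moduleName",
+      "schema": "string"
+    },
+    "mapValue": {
+      "name": "moduleDetail",
+      "schema": {
+        "@type": "Object",
+        "fields": [
+          {
+            "name": "state",
+            "schema": "string"
+          },
+          {
+            "name": "version",
+            "schema": "string"
+          }
+        ]
+      }
+    }
+  }
+}
+```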
+
+#### Objects
+
+An object type is made up of named fields. The fields in an object can be primitive or complex types.
+
+The following snippet shows an example telemetry definition that uses the `Object` type in the `schema` field. The fields in the object are `dateTime`, `duration`, and `string` types:
+
+```json
+{
+ "@type": "Telemetry",
+ "name": "monitor",
+ "schema": {
+ "@type": "Object",
+ "fields": [
+ {
+ "name": "start",
+ "schema": "dateTime"
+ },
+ {
+ "name": "interval",
+ "schema": "duration"
+ },
+ {
+ "name": "status",
+ "schema": "string"
+ }
+ ]
+ }
+}
+```
+
+#### Geospatial types
+
+DTDL provides a set of geospatial types, based on [GeoJSON](https://geojson.org/), for modeling geographic data structures: `point`, `multiPoint`, `lineString`, `multiLineString`, `polygon`, and `multiPolygon`. These types are predefined nested structures of arrays, objects, and enumerations.
+
+The following snippet shows an example telemetry definition that uses the `point` type in the `schema` field:
+
+```json
+{
+ "@type": "Telemetry",
+ "name": "location",
+ "schema": "point"
+}
+```
+
+Because the geospatial types are array-based, they can't currently be used in property definitions.
+
+## Semantic types
+
+The datatype of a property or telemetry definition specifies the format of the data that a device exchanges with a service. The semantic type provides information about telemetry and properties that an application can use to determine how to process or display a value. Each semantic type has one or more associated units. For example, celsius and fahrenheit are units for the temperature semantic type. IoT Central dashboards and analytics can use the semantic type information to determine how to plot telemetry or property values and display units. To learn how you can use the model parser to read the semantic types, see [Understand the digital twins model parser](concepts-model-parser.md).
+
+The following snippet shows an example telemetry definition that includes semantic type information. The semantic type `Temperature` is added to the `@type` array, and the `unit` value, `degreeCelsius`, is one of the valid units for the semantic type:
+
+```json
+{
+ "@type": [
+ "Telemetry",
+ "Temperature"
+ ],
+ "name": "temperature",
+ "schema": "double",
+ "unit": "degreeCelsius"
+}
+```
+
+## Localization
+
+Applications, such as IoT Central, use information in the model to dynamically build a UI around the data that's exchanged with an IoT Plug and Play device. For example, tiles on a dashboard can display names and descriptions for telemetry, properties, and commands.
+
+The optional `description` and `displayName` fields in the model hold strings intended for use in a UI. These fields can hold localized strings that an application can use to render a localized UI.
+
+The following snippet shows an example temperature telemetry definition that includes localized strings:
+
+```json
+{
+ "@type": [
+ "Telemetry",
+ "Temperature"
+ ],
+ "description": {
+ "en": "Temperature in degrees Celsius.",
+ "it": "Temperatura in gradi Celsius."
+ },
+ "displayName": {
+ "en": "Temperature",
+ "it": "Temperatura"
+ },
+ "name": "temperature",
+ "schema": "double",
+ "unit": "degreeCelsius"
+}
+```
+
+Adding localized strings is optional. The following example has only a single, default language:
+
+```json
+{
+ "@type": [
+ "Telemetry",
+ "Temperature"
+ ],
+ "description": "Temperature in degrees Celsius.",
+ "displayName": "Temperature",
+ "name": "temperature",
+ "schema": "double",
+ "unit": "degreeCelsius"
+}
+```
+
+## Lifecycle and tools
+
+The four lifecycle stages for a device model are authoring, publication, use, and versioning:
+
+### Author
+
+DTDL device models are JSON documents that you can create in a text editor. However, in IoT Central you can use the device template GUI environment to create a DTDL model. In IoT Central you can:
+
+- Create interfaces that define properties, telemetry, and commands.
+- Use components to assemble multiple interfaces together.
+- Define inheritance relationships between interfaces.
+- Import and export DTDL model files.
+
+To learn more, see [Define a new IoT device type in your Azure IoT Central application](../iot-central/core/howto-set-up-template.md).
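+
+As an illustration of assembling interfaces with components, the following hedged sketch shows how a root interface might reference a thermostat interface as a component. The DTMIs here are hypothetical:
+
+```json
+{
+  "@context": "dtmi:dtdl:context;2",
+  "@id": "dtmi:com:example:TemperatureController;1",
+  "@type": "Interface",
+  "displayName": "Temperature Controller",
+  "contents": [
+    {
+      "@type": "Component",
+      "name": "thermostat1",
+      "schema": "dtmi:com:example:Thermostat;1"
+    }
+  ]
+}
+```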
+
+The [DTDL editor for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) gives you a text-based editing environment with syntax validation and autocomplete for finer control over the model authoring experience.
+
+### Publish
+
+To make your DTDL models shareable and discoverable, you publish them in a device models repository.
+
+Before you publish a model in the public repository, you can use the `dmr-client` tools to validate your model.
+
+To learn more, see [Device models repository](concepts-model-repository.md).
+
+### Use
+
+Applications, such as IoT Central, use device models. In IoT Central, a model is part of the device template that describes the capabilities of the device. IoT Central uses the device template to dynamically build a UI for the device, including dashboards and analytics.
+
+A custom solution can use the [digital twins model parser](concepts-model-parser.md) to understand the capabilities of a device that implements the model. To learn more, see [Use IoT Plug and Play models in an IoT solution](concepts-model-discovery.md).
+
+### Version
+
+To ensure devices and server-side solutions that use models continue to work, published models are immutable.
+
+The DTMI includes a version number that you can use to create multiple versions of a model. Devices and server-side solutions can use the specific version they were designed to use.
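+
+As a hedged sketch, the version is the number after the semicolon at the end of the DTMI. The following snippet uses a hypothetical identifier to show version `2` of a thermostat model:
+
+```json
+{
+  "@context": "dtmi:dtdl:context;2",
+  "@id": "dtmi:com:example:Thermostat;2",
+  "@type": "Interface",
+  "displayName": "Thermostat"
+}
+```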
+
+IoT Central implements more versioning rules for device models. If you version a device template and its model in IoT Central, you can migrate devices from previous versions to later versions. However, migrated devices can't use new capabilities without a firmware upgrade. To learn more, see [Create a new device template version](../iot-central/core/howto-version-device-template.md).
+
+## Limits and constraints
+
+The following list summarizes some key constraints and limits on models:
+
+- Currently, the maximum depth for arrays, maps, and objects is five levels.
+- You can't use arrays in property definitions.
+- You can extend interfaces to a depth of 10 levels.
+- An interface can extend at most two other interfaces.
+- A component can't contain another component.
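+
+To illustrate interface extension, the following hedged sketch (with hypothetical DTMIs) shows an interface that extends another by using `extends`:
+
+```json
+{
+  "@context": "dtmi:dtdl:context;2",
+  "@id": "dtmi:com:example:ConnectedThermostat;1",
+  "@type": "Interface",
+  "extends": "dtmi:com:example:Thermostat;1",
+  "contents": [
+    {
+      "@type": "Property",
+      "name": "connectionState",
+      "schema": "boolean"
+    }
+  ]
+}
+```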
+
+## Next steps
+
+Now that you've learned about device modeling, here are some additional resources:
+
+- [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md)
+- [Digital Twins Definition Language v2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
+- [Model repositories](./concepts-model-repository.md)
key-vault How To Export Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/how-to-export-certificate.md
$pfxFileByte = $x509Cert.Export($type, $password)
[System.IO.File]::WriteAllBytes("KeyVault.pfx", $pfxFileByte)
```
-This command exports the entire chain of certificates with private key. The certificate is password protected.
-For more information on the **Get-AzKeyVaultCertificate** command and parameters, see [Get-AzKeyVaultCertificate - Example 2](/powershell/module/az.keyvault/Get-AzKeyVaultCertificate).
+This command exports the entire chain of certificates with the private key (that is, the same as it was imported). The certificate is password protected.
+For more information on the **Get-AzKeyVaultCertificate** command and parameters, see [Get-AzKeyVaultCertificate - Example 2](/powershell/module/az.keyvault/Get-AzKeyVaultCertificate?view=azps-4.4.0).
# [Portal](#tab/azure-portal)
key-vault Key Vault Integrate Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/key-vault-integrate-kubernetes.md
Last updated 09/25/2020
> [!IMPORTANT]
> Secrets Store CSI Driver is an open source project that is not supported by Azure technical support. Report all feedback and issues related to CSI Driver Key Vault integration on the GitHub link at the bottom of the page. This tool is provided for users to self-install into clusters and gather feedback from our community.
-In this tutorial, you access and retrieve secrets from your Azure key vault by using the Secrets Store Container Storage Interface (CSI) driver to mount the secrets into Kubernetes pods.
+In this tutorial, you access and retrieve secrets from your Azure key vault by using the Secrets Store Container Storage Interface (CSI) driver to mount the secrets into Kubernetes pods as a volume.
In this tutorial, you learn how to:
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/private-link-service.md
az network private-dns link vnet create --resource-group {RG} --virtual-network
```
-### Add Private DNS Records
-```azurecli
-# https://docs.microsoft.com/en-us/azure/dns/private-dns-getstarted-cli#create-an-additional-dns-record
-az network private-dns zone list -g $rg_name
-az network private-dns record-set a add-record -g $rg_name -z "privatelink.vaultcore.azure.net" -n $vault_name -a $kv_network_interface_private_ip
-az network private-dns record-set list -g $rg_name -z "privatelink.vaultcore.azure.net"
-
-# From home/public network, you wil get a public IP. If inside a vnet with private zone, nslookup will resolve to the private ip.
-nslookup $vault_name.vault.azure.net
-nslookup $vault_name.privatelink.vaultcore.azure.net
-```
### Create a Private Endpoint (Automatically Approve)
```azurecli
-az network private-endpoint create --resource-group {RG} --vnet-name {vNet NAME} --subnet {subnet NAME} --name {Private Endpoint Name} --private-connection-resource-id "/subscriptions/{AZURE SUBSCRIPTION ID}/resourceGroups/{RG}/providers/Microsoft.KeyVault/vaults/ {KEY VAULT NAME}" --group-ids vault --connection-name {Private Link Connection Name} --location {AZURE REGION}
+az network private-endpoint create --resource-group {RG} --vnet-name {vNet NAME} --subnet {subnet NAME} --name {Private Endpoint Name} --private-connection-resource-id "/subscriptions/{AZURE SUBSCRIPTION ID}/resourceGroups/{RG}/providers/Microsoft.KeyVault/vaults/{KEY VAULT NAME}" --group-ids vault --connection-name {Private Link Connection Name} --location {AZURE REGION}
```

### Create a Private Endpoint (Manually Request Approval)
```azurecli
-az network private-endpoint create --resource-group {RG} --vnet-name {vNet NAME} --subnet {subnet NAME} --name {Private Endpoint Name} --private-connection-resource-id "/subscriptions/{AZURE SUBSCRIPTION ID}/resourceGroups/{RG}/providers/Microsoft.KeyVault/vaults/ {KEY VAULT NAME}" --group-ids vault --connection-name {Private Link Connection Name} --location {AZURE REGION} --manual-request
+az network private-endpoint create --resource-group {RG} --vnet-name {vNet NAME} --subnet {subnet NAME} --name {Private Endpoint Name} --private-connection-resource-id "/subscriptions/{AZURE SUBSCRIPTION ID}/resourceGroups/{RG}/providers/Microsoft.KeyVault/vaults/{KEY VAULT NAME}" --group-ids vault --connection-name {Private Link Connection Name} --location {AZURE REGION} --manual-request
```

### Manage Private Link Connections
az keyvault private-endpoint-connection reject --rejection-description {"OPTIONA
az keyvault private-endpoint-connection delete --resource-group {RG} --vault-name {KEY VAULT NAME} --name {PRIVATE LINK CONNECTION NAME}
```
+### Add Private DNS Records
+```azurecli
+# Determine the Private Endpoint IP address
+az network private-endpoint show -g {RG} -n {PE NAME} # note the id under the networkInterfaces property; use its value for {PE NIC} below.
+az network nic show --ids {PE NIC} # note the privateIpAddress under ipConfigurations; use its value for {NIC IP} below.
+
+# https://docs.microsoft.com/en-us/azure/dns/private-dns-getstarted-cli#create-an-additional-dns-record
+az network private-dns zone list -g {RG}
+az network private-dns record-set a add-record -g {RG} -z "privatelink.vaultcore.azure.net" -n {KEY VAULT NAME} -a {NIC IP}
+az network private-dns record-set list -g {RG} -z "privatelink.vaultcore.azure.net"
+
+# From a home/public network, you will get a public IP. If inside a vnet with the private zone, nslookup will resolve to the private IP.
+nslookup {KEY VAULT NAME}.vault.azure.net
+nslookup {KEY VAULT NAME}.privatelink.vaultcore.azure.net
+```
+ ## Validate that the private link connection works
For more, see [Azure Private Link service: Limitations](../../private-link/priva
## Next Steps

- Learn more about [Azure Private Link](../../private-link/private-link-service-overview.md)
-- Learn more about [Azure Key Vault](overview.md)
+- Learn more about [Azure Key Vault](overview.md)
key-vault Vault Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/vault-create-template.md
tags: azure-resource-manager
Previously updated : 10/5/2020 Last updated : 3/14/2021 #Customer intent: As a security admin who's new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
You can deploy access policies to an existing key vault without redeploying the
"permissions": { "keys": "[parameters('keysPermissions')]", "secrets": "[parameters('secretsPermissions')]",
- "certificates": "[parameters('certificatesPermissions')]"
+ "certificates": "[parameters('certificatePermissions')]"
} } ]
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/quick-create-template.md
Two resources are defined in the template:
More Azure Key Vault template samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Keyvault&pageNumber=1&sort=Popular).
+## Deploy the template
+You can use the [Azure portal](https://docs.microsoft.com/azure/azure-resource-manager/templates/deploy-portal), Azure PowerShell, the Azure CLI, or the REST API. To learn about deployment methods, see [Deploy templates](https://docs.microsoft.com/azure/azure-resource-manager/templates/deploy-powershell).
+
## Review deployed resources

You can either use the Azure portal to check the key vault and the key, or use the following Azure CLI or Azure PowerShell script to list the key created.
In this quickstart, you created a key vault and a key using an ARM template, and
- Read an [Overview of Azure Key Vault](../general/overview.md)

- Learn more about [Azure Resource Manager](../../azure-resource-manager/management/overview.md)
-- Review the [Key Vault security overview](../general/security-overview.md)
+- Review the [Key Vault security overview](../general/security-overview.md)
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
load-balancer Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/security-baseline.md
description: The Azure Load Balancer security baseline provides procedural guida
Previously updated : 09/28/2020 Last updated : 03/16/2021
# Azure security baseline for Azure Load Balancer
-The Azure Security Baseline for Microsoft Azure Load Balancer contains recommendations that will help you improve the security posture of your deployment. The baseline for this service is drawn from the [Azure Security Benchmark version 1.0](../security/benchmarks/overview.md), which provides recommendations on how you can secure your cloud solutions on Azure with our best practices guidance. For more information, see [Azure Security Baselines overview](../security/benchmarks/security-baselines-overview.md).
+This security baseline applies guidance from the [Azure Security Benchmark version 1.0](../security/benchmarks/overview-v1.md) to Microsoft Azure Load Balancer. The Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure. The content is grouped by the **security controls** defined by the Azure Security Benchmark and the related guidance applicable to Azure Load Balancer. **Controls** not applicable to Azure Load Balancer have been excluded.
-## Network security
+
+To see how Azure Load Balancer completely maps to the Azure Security Benchmark, see the [full Azure Load Balancer security baseline mapping file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Offer%20Security%20Baselines).
+
+## Network Security
-*For more information, see the [Azure Security Benchmark: Network security](../security/benchmarks/security-control-network-security.md).*
+*For more information, see the [Azure Security Benchmark: Network Security](../security/benchmarks/security-control-network-security.md).*
### 1.1: Protect Azure resources within virtual networks

**Guidance**: Use internal Azure Load Balancers to only allow traffic to backend resources from within certain virtual networks or peered virtual networks without exposure to the internet. Implement an external Load Balancer with Source Network Address Translation (SNAT) to masquerade the IP addresses of backend resources for protection from direct internet exposure. Azure offers two types of Load Balancer offerings, Standard and Basic. Use the Standard Load Balancer for all production workloads. Implement network
-security groups and only allow access to your application's trusted ports and IP address ranges. In cases where there is no network security group assigned to the backend subnet or NIC of the backend virtual machines, traffic will not be not allowed to these resources from the load balancer. With Standard Load Balancers, provide outbound rules to define outbound NAT with a network security group. Review these outbound rules to tune the behavior of your outbound connections.
+security groups and only allow access to your application's trusted ports and IP address ranges. In cases where there is no network security group assigned to the backend subnet or NIC of the backend virtual machines, traffic will not be allowed to these resources from the load balancer. With Standard Load Balancers, provide outbound rules to define outbound NAT with a network security group. Review these outbound rules to tune the behavior of your outbound connections.
-Using a Standard Load Balancer is recommended for your production workloads and typically the Basic Load Balancer is only used for testing since the basic type is open to connections from the internet by default, and doesn't require network security groups for operation.
+Using a Standard Load Balancer is recommended for your production workloads and typically the Basic Load Balancer is only used for testing since the basic type is open to connections from the internet by default, and doesn't require network security groups for operation.
- [Outbound connections in Azure](load-balancer-outbound-connections.md)

-- [Upgrade Azure Public Load Balancer](./upgrade-basic-standard.md)
-
-**Azure Security Center monitoring**: Yes
+- [Upgrade Azure Public Load Balancer](upgrade-basic-standard.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+
+**Azure Policy built-in definitions - Microsoft.Network**:
++
### 1.2: Monitor and log the configuration and traffic of virtual networks, subnets, and NICs

**Guidance**: The Load Balancer is a pass-through service as it relies on the network security group rules applied to backend resources and the configured outbound rules to control internet access.
-Review the outbound rules configured for your Standard Load Balancer through the Outbound Rules blade of your Load Balancer and the Load Balancing Rules blade where you may have Implicit outbound rules enabled.
+Review the outbound rules configured for your Standard Load Balancer through the Outbound Rules blade of your Load Balancer and the load-balancing rules blade, where you may have implicit outbound rules enabled.
Monitor the count of your outbound connections to track how often your resources are reaching out to the internet.
Also send the flow logs to a Log Analytics workspace and then use Traffic Analyt
- [Understand network security provided by Azure Security Center](../security-center/security-center-network-recommendations.md)

-- [How do I check my outbound connection statistics](./load-balancer-standard-diagnostics.md#how-do-i-check-my-outbound-connection-statistics)
-
-**Azure Security Center monitoring**: Yes
+- [How do I check my outbound connection statistics](https://docs.microsoft.com/azure/load-balancer/load-balancer-standard-diagnostics#how-do-i-check-my-outbound-connection-statistics)
**Responsibility**: Customer
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+
+**Azure Policy built-in definitions - Microsoft.Network**:
++
### 1.3: Protect critical web applications

**Guidance**: Explicitly define internet connectivity and valid source IPs through outbound rules and network security groups with your Load Balancer to use Microsoft's threat intelligence for protecting your web applications.

- [Integrate the Azure Firewall](../firewall/integrate-lb.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 1.4: Deny communications with known malicious IP addresses

**Guidance**: Enable Azure Distributed Denial of Service (DDoS) Standard protection on your Azure Virtual Network to guard against DDoS attacks.
Use Security Center's Adaptive Network Hardening feature to recommend network se
- [Integrate Azure Firewall with your Load Balancer](../firewall/overview.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+
+**Azure Policy built-in definitions - Microsoft.Network**:
++
### 1.5: Record network packets

**Guidance**: Enable Network Watcher packet capture to investigate anomalous activities.

- [How to create a Network Watcher instance](../network-watcher/network-watcher-create.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+
+**Azure Policy built-in definitions - Microsoft.Network**:
++
### 1.6: Deploy network-based intrusion detection/intrusion prevention systems (IDS/IPS)

**Guidance**: Implement an offer from the Azure Marketplace that supports IDS/IPS functionality with payload inspection capabilities to the environment of your Load Balancer.
-Use Azure Firewall threat intelligence If payload inspection is not a requirement.
+Use Azure Firewall threat intelligence if payload inspection is not a requirement.
Azure Firewall threat intelligence-based filtering is used to alert on and/or block traffic to and from known malicious IP addresses and domains. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed. Deploy the firewall solution of your choice at each of your organization's network boundaries to detect and/or block malicious traffic.
Deploy the firewall solution of your choice at each of your organization's netwo
- [How to configure alerts with Azure Firewall](../firewall/threat-intel.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 1.7: Manage traffic to web applications

**Guidance**: Explicitly define internet connectivity and valid source IPs through outbound rules and network security groups with your Load Balancer to use Microsoft's threat intelligence features to protect your web applications.

- [Integrate the Azure Firewall](../firewall/integrate-lb.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 1.8: Minimize complexity and administrative overhead of network security rules

**Guidance**: Use service tags in place of specific IP addresses when creating security rules. Specify the service tag name in the source or destination field of a rule to allow or deny the traffic for the corresponding service.
By default, every network security group includes the service tag AzureLoadBalan
Refer to Azure documentation for all the service tags available for use in network security group rules.

-- [Available service tags](../virtual-network/service-tags-overview.md#available-service-tags)
-
-**Azure Security Center monitoring**: Yes
+- [Available service tags](https://docs.microsoft.com/azure/virtual-network/service-tags-overview#available-service-tags)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 1.9: Maintain standard security configurations for network devices

**Guidance**: Define and implement standard security configurations for network resources with Azure Policy.
Apply the blueprint to new subscriptions, and fine-tune control and management t
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)

-- [Azure Policy samples for networking](../governance/policy/samples/built-in-policies.md#network)
+- [Azure Policy samples for networking](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#network)
- [How to create an Azure Blueprint](../governance/blueprints/create-blueprint-portal.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 1.10: Document traffic configuration rules

**Guidance**: Use resource tags for network security groups and other resources related to network security and traffic flow.
Use Azure PowerShell or Azure CLI to look up or perform actions on resources bas
- [How to filter network traffic with network security group rules](../virtual-network/tutorial-filter-network-traffic.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 1.11: Use automated tools to monitor network resource configurations and detect changes

**Guidance**: Use Azure Activity log to monitor resource configurations and detect changes to your Azure resources. Create alerts in Azure Monitor to notify you when critical resources are changed.

-- [How to view and retrieve Azure Activity log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
+- [How to view and retrieve Azure Activity log events](https://docs.microsoft.com/azure/azure-monitor/essentials/activity-log#view-the-activity-log)
- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
-## Logging and monitoring
+**Azure Security Center monitoring**: None
-*For more information, see the [Azure Security Benchmark: Logging and monitoring](../security/benchmarks/security-control-logging-monitoring.md).*
+## Logging and Monitoring
+
+*For more information, see the [Azure Security Benchmark: Logging and Monitoring](../security/benchmarks/security-control-logging-monitoring.md).*
### 2.2: Configure central security log management
Enable and on-board this data to Azure Sentinel or a third-party SIEM based on y
- [Platform Activity logs](../azure-monitor/essentials/activity-log.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 2.3: Enable audit logging for Azure resources

**Guidance**: Review the Control and Management Plane logging and audit information captured with Activity logs for the Basic Load Balancer. These capture settings are enabled by default.
Enable and on-board data to Azure Sentinel or a third-party SIEM based on your b
- [Review this article with step-by-step instructions for each method detailed in the Audit operations with Resource Manager](../azure-resource-manager/management/view-activity-logs.md)

-- [Azure Monitor logs for public Basic Load Balancer](./load-balancer-monitor-log.md)
+- [Azure Monitor logs for public Basic Load Balancer](load-balancer-monitor-log.md)
- [View activity logs to monitor actions on resources](../azure-resource-manager/management/view-activity-logs.md)

-- [Retrieve multi-dimensional metrics programmatically via APIs](./load-balancer-standard-diagnostics.md#retrieve-multi-dimensional-metrics-programmatically-via-apis)
+- [Retrieve multi-dimensional metrics programmatically via APIs](https://docs.microsoft.com/azure/load-balancer/load-balancer-standard-diagnostics#retrieve-multi-dimensional-metrics-programmatically-via-apis)
- [How to get started with Azure Monitor and third-party SIEM integration](https://azure.microsoft.com/blog/use-azure-monitor-to-integrate-with-siem-tools)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 2.5: Configure security log storage retention

**Guidance**: The Activity log is enabled by default and is preserved for 90 days in Azure's Event Logs store. Set your Log Analytics workspace retention period according to your organization's compliance regulations in Azure Monitor. Use Azure Storage accounts for long-term and archival storage.

- [View activity logs to monitor actions on resources article](../azure-resource-manager/management/view-activity-logs.md)

-- [Change the data retention period in Log Analytics](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
-
-- [How to configure retention policy for Azure Storage account logs](../storage/common/manage-storage-analytics-logs.md#configure-logging)
+- [Change the data retention period in Log Analytics](https://docs.microsoft.com/azure/azure-monitor/logs/manage-cost-storage#change-the-data-retention-period)
-**Azure Security Center monitoring**: Yes
+- [How to configure retention policy for Azure Storage account logs](https://docs.microsoft.com/azure/storage/common/manage-storage-analytics-logs#configure-logging)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 2.6: Monitor and review logs

**Guidance**: Monitor, manage, and troubleshoot Standard Load Balancer resources using the Load Balancer page in the Azure portal and the Resource Health page under Azure Monitor. Available metrics for security include information on Source Network Address Translation (SNAT) connections and ports. Additionally, metrics on SYN (synchronize) packets and packet counters are available.
Use Microsoft Power BI with the Azure Audit Logs content pack and analyze your d
Stream logs to an event hub or a Log Analytics workspace. They can also be extracted from Azure blob storage and viewed in different tools, such as Excel and Power BI. You can enable and on-board data to Azure Sentinel or a third-party SIEM.

-- [Load Balancer health probes](./load-balancer-custom-probe-overview.md)
+- [Load Balancer health probes](load-balancer-custom-probe-overview.md)
- [Azure Monitor REST API](/rest/api/monitor)

- [How to retrieve metrics via REST API](/rest/api/monitor/metrics/list)

-- [Standard Load Balancer diagnostics with metrics, alerts, and resource health](./load-balancer-standard-diagnostics.md)
-
-- [Azure Monitor logs for public Basic Load Balancer](./load-balancer-monitor-log.md)
+- [Standard Load Balancer diagnostics with metrics, alerts, and resource health](load-balancer-standard-diagnostics.md)
-- [View your load balancer metrics in the Azure portal](./load-balancer-standard-diagnostics.md#view-your-load-balancer-metrics-in-the-azure-portal)
+- [Azure Monitor logs for public Basic Load Balancer](load-balancer-monitor-log.md)
-**Azure Security Center monitoring**: Yes
+- [View your load balancer metrics in the Azure portal](https://docs.microsoft.com/azure/load-balancer/load-balancer-standard-diagnostics#view-your-load-balancer-metrics-in-the-azure-portal)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 2.7: Enable alerts for anomalous activities

**Guidance**: Use Security Center with Log Analytics workspace for monitoring and alerting on anomalous activity related to Load Balancer in security logs and events.
Enable and on-board data to Azure Sentinel or a third-party SIEM tool.
- [How to alert on log analytics log data](../azure-monitor/alerts/tutorial-response.md)
-**Azure Security Center monitoring**: Yes
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Identity and Access Control
+
+*For more information, see the [Azure Security Benchmark: Identity and Access Control](../security/benchmarks/security-control-identity-access-control.md).*
+
+### 3.1: Maintain an inventory of administrative accounts
+
+**Guidance**: Azure role-based access control (Azure RBAC) allows you to manage access to Azure resources such as your Load Balancer through role assignments. Assign these roles to users, groups, service principals, and managed identities.
+
+Inventory the pre-defined and built-in roles for certain resources with tools like Azure CLI, Azure PowerShell, or the Azure portal.
+
+- [How to get a directory role in Azure Active Directory (Azure AD) with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole)
+
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 3.5: Use multi-factor authentication for all Azure Active Directory based access
+
+**Guidance**: Enable Azure Active Directory (Azure AD) multifactor authentication and follow Security Center's Identity and Access Management recommendations.
+
+- [How to enable multifactor authentication in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
+
+- [How to monitor identity and access within Azure Security Center](../security-center/security-center-identity-access.md)
**Responsibility**: Customer
-### 2.8: Centralize anti-malware logging
+**Azure Security Center monitoring**: None
+
+### 3.6: Use dedicated machines (Privileged Access Workstations) for all administrative tasks
-**Guidance**: Not applicable to Azure Load Balancer. This recommendation is intended for compute resources.
+**Guidance**: Use Privileged Access Workstations (PAW) with multifactor authentication configured to manage and access Azure network resources.
-**Azure Security Center monitoring**: Not applicable
+- [Learn about Privileged Access Workstations](/security/compass/privileged-access-devices)
+
+- [How to enable multifactor authentication in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
**Responsibility**: Customer
-### 2.9: Enable DNS query logging
+**Azure Security Center monitoring**: None
+
+### 3.8: Manage Azure resources only from approved locations
-**Guidance**:
-Not applicable as Azure Load Balancer is a core networking service that does not make DNS queries.
+**Guidance**: Use Conditional Access named locations to allow access from only specific logical groupings of IP address ranges or countries/regions.
-**Azure Security Center monitoring**: Not applicable
+- [How to configure named locations in Azure](../active-directory/reports-monitoring/quickstart-configure-named-locations.md)
**Responsibility**: Customer
-### 2.10: Enable command-line audit logging
+**Azure Security Center monitoring**: None
-**Guidance**: Not applicable to Azure Load Balancer as this recommendation applies to compute resources.
+### 3.9: Use Azure Active Directory
-**Azure Security Center monitoring**: Not applicable
+**Guidance**: Use Azure Active Directory (Azure AD) as a central authentication and authorization system for your services. Azure AD protects data by using strong encryption for data at rest and in transit and also salts, hashes, and securely stores user credentials.
+
+- [How to create and configure an Azure AD instance](../active-directory-domain-services/tutorial-create-instance.md)
**Responsibility**: Customer
-## Identity and access control
+**Azure Security Center monitoring**: None
-*For more information, see the [Azure Security Benchmark: Identity and access control](../security/benchmarks/security-control-identity-access-control.md).*
+### 3.10: Regularly review and reconcile user access
-### 3.1: Maintain an inventory of administrative accounts
+**Guidance**: Use Azure Active Directory (Azure AD) to provide logs to help discover stale accounts.
-**Guidance**: Azure role-based access control (Azure RBAC) allows you to manage access to Azure resources such as your Load Balancer through role assignments. Assign these roles to users, groups service principals, and managed identities.
+Azure Identity Access Reviews can be performed to efficiently manage group memberships, access to enterprise applications, and role assignments. User access should be reviewed on a regular basis to make sure only the active users have continued access.
-Inventory Pre-defined and built-in roles for certain resources with tools like Azure CLI, Azure PowerShell or the Azure portal.
+- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+
+- [How to use Azure Identity Access Reviews](../active-directory/governance/access-reviews-overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 3.11: Monitor attempts to access deactivated credentials
+
+**Guidance**: Integrate Azure Active Directory (Azure AD) sign-in activity, audit, and risk event log sources with any SIEM or monitoring tool, based on your access.
+
+Streamline this process by creating Diagnostic Settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics Workspace. Any desired alerts can be configured within Log Analytics Workspace.
+
+- [How to integrate Azure Activity Logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
-- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)
+### 3.12: Alert on account login behavior deviation
-- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0)
+**Guidance**: Use the Risk and Identity Protection features of Azure Active Directory (Azure AD) to configure automated responses to detected suspicious actions related to user identities. Ingest data into Azure Sentinel for further investigation.
-**Azure Security Center monitoring**: Yes
+- [How to view Azure AD risky sign-ins](/azure/active-directory/reports-monitoring/concept-risky-sign-ins)
+
+- [How to configure and enable Identity Protection risk policies](../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
+
+- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
**Responsibility**: Customer
-## Data protection
+**Azure Security Center monitoring**: None
+
+## Data Protection
+
+*For more information, see the [Azure Security Benchmark: Data Protection](../security/benchmarks/security-control-data-protection.md).*
-*For more information, see the [Azure Security Benchmark: Data protection](../security/benchmarks/security-control-data-protection.md).*
+### 4.4: Encrypt all sensitive information in transit
+
+**Guidance**: Ensure that any clients connecting to your Azure resources are able to negotiate TLS 1.2 or greater.
+
+Follow Azure Security Center recommendations for encryption at rest and encryption in transit, where applicable.
+
+- [Understand encryption in transit with Azure](https://docs.microsoft.com/azure/security/fundamentals/encryption-overview#encryption-of-data-in-transit)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
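The client-side half of this control can be enforced in code. A minimal sketch using Python's standard-library `ssl` module (the check is illustrative; service-side minimum TLS versions are configured on the individual Azure resources):

```python
import ssl

# Build a client context that refuses to negotiate anything older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any socket wrapped with this context will fail the handshake against
# endpoints that only offer TLS 1.0/1.1.
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

Applying the same floor on every client is what makes the "negotiate TLS 1.2 or greater" requirement testable rather than aspirational.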
### 4.6: Use Azure RBAC to manage access to resources
Inventory Pre-defined and built-in roles for certain resources with tools like A
- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 4.7: Use host-based data loss prevention to enforce access control **Guidance**: Load Balancer is a pass-through service that does not store customer data. It is a part of the underlying platform that is managed by Microsoft.
To ensure customer data in Azure remains secure, Microsoft has implemented and m
- [Understand customer data protection in Azure](../security/fundamentals/protection-customer-data.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Shared
+**Azure Security Center monitoring**: None
+ ### 4.9: Log and alert on changes to critical Azure resources **Guidance**: Use Azure Monitor with the Azure Activity log to create alerts when changes take place to critical Azure resources, such as Load Balancers used for important production workloads. - [How to create alerts for Azure Activity log events](../azure-monitor/alerts/alerts-activity-log.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
-## Inventory and asset management
+**Azure Security Center monitoring**: None
+
+## Inventory and Asset Management
-*For more information, see the [Azure Security Benchmark: Inventory and asset management](../security/benchmarks/security-control-inventory-asset-management.md).*
+*For more information, see the [Azure Security Benchmark: Inventory and Asset Management](../security/benchmarks/security-control-inventory-asset-management.md).*
### 6.1: Use automated asset discovery solution
-**Guidance**: Use Azure Resource Graph to query for and discover all resources (such as compute, storage, network, ports, protocols, and so on) in your subscriptions. Azure Resource Manager is recommended to create and use current resources.
+**Guidance**: Use Azure Resource Graph to query for and discover all resources (such as compute, storage, network, ports, protocols, and so on) in your subscriptions. Azure Resource Manager is recommended to create and use current resources.
Ensure appropriate (read) permissions in your tenant and enumerate all Azure subscriptions and resources in your subscriptions. - [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) -- [How to view your Azure subscriptions](/powershell/module/az.accounts/get-azsubscription?view=azps-3.0.0)
+- [How to view your Azure subscriptions](/powershell/module/az.accounts/get-azsubscription)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
-### 6.2: Maintain asset metadata
+**Azure Security Center monitoring**: None
-**Guidance**:
-Apply tags to Azure resources with metadata to logically organize according to a taxonomy.
+### 6.2: Maintain asset metadata
-- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
+**Guidance**: Apply tags to Azure resources with metadata to logically organize according to a taxonomy.
-**Azure Security Center monitoring**: Not applicable
+- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
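Enforcing such a taxonomy is easy to automate. A hypothetical helper (the required tag names below are assumptions, not a prescribed taxonomy) that flags resources missing mandated tags:

```python
# Hypothetical organizational taxonomy: every resource must carry these tags.
REQUIRED_TAGS = {"environment", "costCenter", "owner"}

def missing_tags(resource_tags):
    """Return the required tag keys absent from a resource's tag dictionary."""
    return sorted(REQUIRED_TAGS - set(resource_tags))

# A resource tagged only with its environment is missing two required tags.
print(missing_tags({"environment": "prod"}))  # ['costCenter', 'owner']
```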
+ ### 6.3: Delete unauthorized Azure resources **Guidance**: Use tagging, management groups, and separate subscriptions where appropriate, to organize and track assets.
Reconcile inventory on a regular basis and ensure unauthorized resources are del
- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
-### 6.4: Define and maintain an inventory of approved Azure resources
+**Azure Security Center monitoring**: None
-**Guidance**: Create a list of approved Azure resources per your organizational needs which you can leverage as a allow list mechanism. This will allow your organization to onboard any newly available Azure services after they are formally reviewed and approved by your organization's typical security evaluation processes.
+### 6.4: Define and maintain an inventory of approved Azure resources
-**Azure Security Center monitoring**: Not applicable
+**Guidance**: Create a list of approved Azure resources per your organizational needs which you can leverage as an allowlist mechanism. This will allow your organization to onboard any newly available Azure services after they are formally reviewed and approved by your organization's typical security evaluation processes.
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
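An allowlist check of this kind can be sketched in a few lines of Python (the approved types below are examples only, not a recommendation):

```python
# Hypothetical organizational allowlist of approved Azure resource types.
APPROVED_TYPES = {
    "Microsoft.Network/loadBalancers",
    "Microsoft.Network/virtualNetworks",
    "Microsoft.Storage/storageAccounts",
}

def unapproved(resources):
    """Yield (name, type) pairs whose resource type is not on the allowlist."""
    for r in resources:
        if r["type"] not in APPROVED_TYPES:
            yield r["name"], r["type"]

inventory = [
    {"name": "lb-prod", "type": "Microsoft.Network/loadBalancers"},
    {"name": "vm-test", "type": "Microsoft.Compute/virtualMachines"},
]
print(list(unapproved(inventory)))  # [('vm-test', 'Microsoft.Compute/virtualMachines')]
```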
+ ### 6.5: Monitor for unapproved Azure resources **Guidance**: Use Azure Policy to put restrictions on the type of resources that can be created in your subscriptions.
Ensure all Azure resources present in the environment are approved.
- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md)
-**Azure Security Center monitoring**: Not applicable
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 6.9: Use only approved Azure services
+
+**Guidance**: Use Azure Policy to put restrictions on the type of resources that can be created in customer subscriptions using the following built-in policy definitions:
+- Not allowed resource types
+- Allowed resource types
+
+- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
+
+- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+
+- [Azure policy sample built-ins for virtual network](/azure/virtual-network/policy-samples)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
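The shape of the "Allowed resource types" built-in definition can be sketched as follows, here as a Python rendering of the policy rule's JSON (the parameter name is an assumption drawn from the built-in definition; treat the fragment as illustrative):

```python
# Illustrative rendering of the policy rule inside the built-in
# "Allowed resource types" definition: deny any resource whose type
# is not in the approved list (parameter name assumed).
allowed_resource_types_rule = {
    "if": {
        "not": {
            "field": "type",
            "in": "[parameters('listOfResourceTypesAllowed')]",
        }
    },
    "then": {"effect": "deny"},
}
print(allowed_resource_types_rule["then"]["effect"])  # deny
```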
+ ### 6.11: Limit users' ability to interact with Azure Resource Manager
-**Guidance**: Use Azure AD Conditional Access to limit users' ability to interact with Azure Resource Manager by configuring "Block access" for the "Microsoft Azure Management" App.
+**Guidance**: Use Azure Active Directory (Azure AD) Conditional Access to limit users' ability to interact with Azure Resource Manager by configuring "Block access" for the "Microsoft Azure Management" App.
- [How to configure Conditional Access to block access to Azure Resources Manager](../role-based-access-control/conditional-access-azure-management.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 6.13: Physically or logically segregate high risk applications **Guidance**: Software that is required for business operations, but may incur higher risk for the organization, should be isolated within its own virtual machine and/or virtual network and sufficiently secured with either an Azure Firewall or a network security group.
Ensure all Azure resources present in the environment are approved.
- [How to create a network security group with a security config](../virtual-network/tutorial-filter-network-traffic.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
-## Secure configuration
+**Azure Security Center monitoring**: None
-*For more information, see the [Azure Security Benchmark: Secure configuration](../security/benchmarks/security-control-secure-configuration.md).*
+## Secure Configuration
+
+*For more information, see the [Azure Security Benchmark: Secure Configuration](../security/benchmarks/security-control-secure-configuration.md).*
### 7.1: Establish secure configurations for all Azure resources
Ensure all Azure resources present in the environment are approved.
Azure Resource Manager has the ability to export the template in JavaScript Object Notation (JSON), which should be reviewed to ensure that the configurations meet the security requirements for your organization.
-Export Azure Resource Manager templates into JavaScript Object Notation (JSON) formats, and periodically review them to ensure that the configurations meet your organizational security requirements.
+Export Azure Resource Manager templates into JavaScript Object Notation (JSON) formats, and periodically review them to ensure that the configurations meet your organizational security requirements.
-Implement recommendations from Security Center as a secure configuration baseline for your Azure resources.
+Implement recommendations from Security Center as a secure configuration baseline for your Azure resources.
-- [How to view available Azure Policy aliases](/powershell/module/az.resources/get-azpolicyalias?view=azps-3.3.0)
+- [How to view available Azure Policy aliases](/powershell/module/az.resources/get-azpolicyalias)
- [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md)
Implement recommendations from Security Center as a secure configuration baselin
- [Security recommendations - a reference guide](../security-center/recommendations-reference.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
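Once a template is exported to JSON, part of the review can be automated. A minimal sketch (the template fragment and the property being checked are hypothetical examples):

```python
import json

# Hypothetical fragment of an exported ARM template.
template = json.loads("""
{
  "resources": [
    {"type": "Microsoft.Storage/storageAccounts",
     "name": "stcontoso",
     "properties": {"supportsHttpsTrafficOnly": false}}
  ]
}
""")

# Flag storage accounts in the export that still allow plain-HTTP traffic.
findings = [
    r["name"]
    for r in template["resources"]
    if r["type"] == "Microsoft.Storage/storageAccounts"
    and not r.get("properties", {}).get("supportsHttpsTrafficOnly", True)
]
print(findings)  # ['stcontoso']
```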
+ ### 7.3: Maintain secure Azure resource configurations **Guidance**: Use Azure Policy [deny] and [deploy if not exist] to enforce secure settings across your Azure resources. Also, you can use Azure Resource Manager templates to maintain the security configuration of your Azure resources required by your organization.
Implement recommendations from Security Center as a secure configuration baselin
- [Azure Resource Manager templates overview](../azure-resource-manager/templates/overview.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 7.5: Securely store configuration of Azure resources
-**Guidance**: Use Azure DevOps to securely store and manage your code like custom Azure Policy definitions, Azure Resource Manager templates, and desired state configuration scripts.
+**Guidance**: Use Azure DevOps to securely store and manage your code like custom Azure Policy definitions, Azure Resource Manager templates, and desired state configuration scripts.
-Grant or deny permissions to specific users, built-in security groups, or groups defined in Azure Active Directory (Azure AD) if it is integrated with Azure DevOps, or in Active Directory if integrated with TFS.
+Grant or deny permissions to specific users, built-in security groups, or groups defined in Azure Active Directory (Azure AD) if it is integrated with Azure DevOps, or in Azure AD if integrated with TFS.
-- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?view=azure-devops)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow)
- [About permissions and groups in Azure DevOps](/azure/devops/organizations/security/about-permissions)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 7.7: Deploy configuration management tools for Azure resources **Guidance**: Define and implement standard security configurations for Azure resources using Azure Policy. Use Azure Policy aliases to create custom policies to audit or enforce the network configuration of your Azure resources. Implement built-in policy definitions related to your specific Azure Load Balancer resources. Also, use Azure Automation to deploy configuration changes to your Azure Load Balancer resources. - [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to use aliases](../governance/policy/concepts/definition-structure.md#aliases)-
-**Azure Security Center monitoring**: Yes
+- [How to use aliases](https://docs.microsoft.com/azure/governance/policy/concepts/definition-structure#aliases)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 7.9: Implement automated configuration monitoring for Azure resources **Guidance**: Use Security Center to perform baseline scans for your Azure Resources and Azure Policy to alert and audit resource configurations. - [How to remediate recommendations in Azure Security Center](../security-center/security-center-remediate-recommendations.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
-## Incident response
+**Azure Security Center monitoring**: None
+
+## Incident Response
+
+*For more information, see the [Azure Security Benchmark: Incident Response](../security/benchmarks/security-control-incident-response.md).*
+
+### 10.1: Create an incident response guide
+
+**Guidance**: Build out an incident response guide for your organization. Ensure that there are written incident response plans that define all roles of personnel as well as phases of incident handling/management from detection to post-incident review.
+
+- [How to configure Workflow Automations within Azure Security Center](../security-center/security-center-planning-and-operations-guide.md)
+
+- [Guidance on building your own security incident response process](https://msrc-blog.microsoft.com/2019/07/01/inside-the-msrc-building-your-own-security-incident-response-process/)
+
+- [Microsoft Security Response Center's Anatomy of an Incident](https://msrc-blog.microsoft.com/2019/07/01/inside-the-msrc-building-your-own-security-incident-response-process/)
+
+- [Customers may also leverage NIST's Computer Security Incident Handling Guide to aid in the creation of their own incident response plans](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf)
+
+**Responsibility**: Customer
-*For more information, see the [Azure Security Benchmark: Incident response](../security/benchmarks/security-control-incident-response.md).*
+**Azure Security Center monitoring**: None
### 10.2: Create an incident scoring and prioritization procedure
It is your responsibility to prioritize the remediation of alerts based on the c
- [Use tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md)
-**Azure Security Center monitoring**: Yes
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
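A prioritization procedure can be as simple as ordering alerts by Security Center severity weighted by resource criticality. A sketch under stated assumptions (the severity weights and the `critical` flag are hypothetical, not part of any Azure API):

```python
# Assumed weights for a hypothetical triage procedure.
SEVERITY_WEIGHT = {"High": 3, "Medium": 2, "Low": 1}

def priority(alert):
    """Higher score = remediate first; 'critical' marks business-critical resources."""
    return SEVERITY_WEIGHT[alert["severity"]] * (2 if alert.get("critical") else 1)

alerts = [
    {"name": "open-mgmt-port", "severity": "Medium", "critical": True},
    {"name": "missing-tag", "severity": "Low"},
    {"name": "exposed-endpoint", "severity": "High", "critical": True},
]
queue = sorted(alerts, key=priority, reverse=True)
print([a["name"] for a in queue])  # ['exposed-endpoint', 'open-mgmt-port', 'missing-tag']
```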
+
+### 10.3: Test security response procedures
+
+**Guidance**: Conduct exercises to test your systems' incident response capabilities on a regular cadence to help protect your Azure resources. Identify weak points and gaps and then revise your response plan as needed.
+
+- [NIST's publication--Guide to Test, Training, and Exercise Programs for IT Plans and Capabilities](https://csrc.nist.gov/publications/detail/sp/800-84/final)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
+### 10.4: Provide security incident contact details and configure alert notifications for security incidents
+
+**Guidance**: Security incident contact information will be used by Microsoft to contact you if the Microsoft Security Response Center (MSRC) discovers that your data has been accessed by an unlawful or unauthorized party. Review incidents after the fact to ensure that issues are resolved.
+
+- [How to set the Azure Security Center security contact](../security-center/security-center-provide-security-contact-details.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+ ### 10.5: Incorporate security alerts into your incident response system **Guidance**: Export your Security Center alerts and recommendations using the continuous export feature to help identify risks to Azure resources.
Utilize the Security Center data connector to stream the alerts to Azure Sentine
- [How to stream alerts into Azure Sentinel](../sentinel/connect-azure-security-center.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 10.6: Automate the response to security alerts **Guidance**: Use the Workflow Automation feature in Security Center to automatically trigger responses to security alerts and recommendations to protect your Azure resources. - [How to configure workflow automation in Security Center](../security-center/workflow-automation.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
-## Penetration tests and red team exercises
+**Azure Security Center monitoring**: None
+
+## Penetration Tests and Red Team Exercises
-*For more information, see the [Azure Security Benchmark: Penetration tests and red team exercises](../security/benchmarks/security-control-penetration-tests-red-team-exercises.md).*
+*For more information, see the [Azure Security Benchmark: Penetration Tests and Red Team Exercises](../security/benchmarks/security-control-penetration-tests-red-team-exercises.md).*
### 11.1: Conduct regular penetration testing of your Azure resources and ensure remediation of all critical security findings
Utilize the Security Center data connector to stream the alerts to Azure Sentine
- [Microsoft Cloud Red Teaming](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e)
-**Azure Security Center monitoring**: Not applicable
+**Responsibility**: Customer
-**Responsibility**: Shared
+**Azure Security Center monitoring**: None
## Next steps -- See the [Azure security benchmark](../security/benchmarks/overview.md)-- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
+- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
+- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021 ms.suite: integration
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-auto-train-forecast.md
You can also use the `forecast_destination` parameter in the `forecast()` functi
```python label_query = test_labels.copy().astype(np.float) label_query.fill(np.nan)
-label_fcst, data_trans = fitted_pipeline.forecast(
+label_fcst, data_trans = fitted_model.forecast(
test_data, label_query, forecast_destination=pd.Timestamp(2019, 1, 8)) ```
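The pattern behind `label_query` above — copy the labels and overwrite them with NaN so the model knows which periods to forecast — can be shown in isolation with the standard library only (the label values below are made up):

```python
import math

# Hypothetical known labels for the periods we want the model to forecast.
test_labels = [310.0, 295.5, 330.2]

# Same idea as label_query.fill(np.nan): one NaN per forecast period.
label_query = [math.nan] * len(test_labels)

print(all(math.isnan(v) for v in label_query))  # True
```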
day_datetime,store,week_of_year
01/01/2019,A,1 ```
-Repeat the necessary steps to load this future data to a dataframe and then run `best_run.predict(test_data)` to predict future values.
+Repeat the necessary steps to load this future data to a dataframe and then run `best_run.forecast(test_data)` to predict future values.
> [!NOTE] > In-sample predictions are not supported for forecasting with automated ML when `target_lags` and/or `target_rolling_window_size` are enabled.
machine-learning How To Compute Cluster Instance Os Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-compute-cluster-instance-os-upgrade.md
+
+ Title: Upgrade host OS for compute cluster and instance
+
+description: Upgrade the host OS for compute cluster and compute instance from Ubuntu 16.04 LTS to 18.04 LTS.
++++++ Last updated : 03/03/2021+++++
+# Upgrade compute instance and compute cluster host OS
+
+Azure Machine Learning __compute cluster__ and __compute instance__ are managed compute infrastructure. As a managed service, Microsoft manages the host OS and the packages and software versions that are installed.
+
+The host OS for compute cluster and compute instance has been Ubuntu 16.04 LTS. On **April 30, 2021**, Ubuntu is ending support for 16.04. Starting on __March 15, 2021__, Microsoft will automatically update the host OS to Ubuntu 18.04 LTS. Updating to 18.04 will ensure continued security updates and support from the Ubuntu community. For more information on Ubuntu ending support for 16.04, see the [Ubuntu release blog](https://wiki.ubuntu.com/Releases).
+
+> [!TIP]
+> * The host OS is not the OS version you might specify for an [environment](how-to-use-environments.md) when training or deploying a model. Environments run inside Docker. Docker runs on the host OS.
+> * If you are currently using Ubuntu 16.04 based environments for training or deployment, Microsoft recommends that you switch to using Ubuntu 18.04 based images. For more information, see [How to use environments](how-to-use-environments.md) and the [Azure Machine Learning containers repository](https://github.com/Azure/AzureML-Containers/tree/master/base).
+> * When using an Azure Machine Learning compute instance based on Ubuntu 18.04, the default Python version is _Python 3.8_.
+## Creating new resources
+
+Compute cluster or compute instances created after __March 15, 2021__ use Ubuntu 18.04 LTS as the host OS by default. You cannot select a different host OS.
+
+## Upgrade existing resources
+
+If you have existing compute clusters or compute instances created before __March 15, 2021__, you need to take action to upgrade the host OS to Ubuntu 18.04:
+
+* __Azure Machine Learning compute cluster__:
+
+ * If the cluster is configured with __min nodes = 0__, it will automatically be upgraded when all jobs are completed and it reduces to zero nodes.
+ * If __min nodes > 0__, temporarily change the minimum nodes to zero and allow the cluster to reduce to zero nodes.
+
+ For more information on changing the minimum nodes, see the [az ml computetarget update amlcompute](https://docs.microsoft.com/cli/azure/ext/azure-cli-ml/ml/computetarget/update#ext_azure_cli_ml_az_ml_computetarget_update_amlcompute) Azure CLI command, or the [AmlCompute.update()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute#update-min-nodes-none--max-nodes-none--idle-seconds-before-scaledown-none-) SDK reference.
+
+* __Azure Machine Learning compute instance__: Create a new compute instance (which will use Ubuntu 18.04) and delete the old instance.
+
+ * Any notebook stored in the workspace file share, data stores, or datasets will be accessible from the new compute instance.
+ * If you have created custom conda environments, you can export those environments from the existing instance and import on the new instance. For information on conda export and import, see [Conda documentation](https://docs.conda.io/) at docs.conda.io.
+
+ For more information, see the [What is compute instance](concept-compute-instance.md) and [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) articles.
+
+## Check host OS version
+
+For information on checking the host OS version, see the Ubuntu community wiki page on [checking your Ubuntu version](https://help.ubuntu.com/community/CheckingYourUbuntuVersion).
+
+> [!TIP]
+> To use the `lsb_release -a` command from the wiki, you can [use a terminal session on a compute instance](how-to-access-terminal.md).
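The same check can also be scripted. A sketch that parses the `/etc/os-release` key=value format, shown here against a sample string rather than the live file (the sample mimics an Ubuntu 18.04 host):

```python
# Minimal parser for the os-release key=value format; the sample below
# mimics /etc/os-release on an Ubuntu 18.04 host.
sample = 'NAME="Ubuntu"\nVERSION_ID="18.04"\nPRETTY_NAME="Ubuntu 18.04.5 LTS"\n'

os_release = {}
for line in sample.splitlines():
    key, _, value = line.partition("=")
    os_release[key] = value.strip('"')

print(os_release["VERSION_ID"])  # 18.04
```

On a real compute instance you would read `/etc/os-release` instead of the sample string.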
+## Next steps
+
+If you have any further questions or concerns, contact us at [ubuntu18azureml@service.microsoft.com](mailto:ubuntu18azureml@service.microsoft.com).
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-workspace-vnet.md
Previously updated : 10/06/2020 Last updated : 03/17/2021
For more information on setting up a Private Link workspace, see [How to configu
Azure Machine Learning supports storage accounts configured to use either service endpoints or private endpoints. In this section, you learn how to secure an Azure storage account using service endpoints. For private endpoints, see the next section.
-> [!IMPORTANT]
-> You can place the both the _default storage account_ for Azure Machine Learning, or _non-default storage accounts_ in a virtual network.
->
-> The default storage account is automatically provisioned when you create a workspace.
->
-> For non-default storage accounts, the `storage_account` parameter in the [`Workspace.create()` function](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-) allows you to specify a custom storage account by Azure resource ID.
- To use an Azure storage account for the workspace in a virtual network, use the following steps: 1. In the Azure portal, go to the storage service you want to use in your workspace. [![The storage that's attached to the Azure Machine Learning workspace](./media/how-to-enable-virtual-network/workspace-storage.png)](./media/how-to-enable-virtual-network/workspace-storage.png#lightbox)
-1. On the storage service account page, select __Firewalls and virtual networks__.
+1. On the storage service account page, select __Networking__.
- ![The "Firewalls and virtual networks" area on the Azure Storage page in the Azure portal](./media/how-to-enable-virtual-network/storage-firewalls-and-virtual-networks.png)
+ ![The networking area on the Azure Storage page in the Azure portal](./media/how-to-enable-virtual-network/storage-firewalls-and-virtual-networks.png)
-1. On the __Firewalls and virtual networks__ page, do the following actions:
+1. On the __Firewalls and virtual networks__ tab, do the following actions:
1. Select __Selected networks__. 1. Under __Virtual networks__, select the __Add existing virtual network__ link. This action adds the virtual network where your compute resides (see step 1).
To use Azure Container Registry inside a virtual network, you must meet the foll
Once those requirements are fulfilled, use the following steps to enable Azure Container Registry.
+> [!TIP]
+> If you did not use an existing Azure Container Registry when creating the workspace, one may not exist. By default, the workspace will not create an ACR instance until it needs one. To force the creation of one, train or deploy a model using your workspace before using the steps in this section.
+ 1. Find the name of the Azure Container Registry for your workspace, using one of the following methods: __Azure portal__
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/10/2021 Last updated : 03/17/2021
machine-learning Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/security-baseline.md
Title: Azure security baseline for Azure Machine Learning
description: The Azure Machine Learning security baseline provides procedural guidance and resources for implementing the security recommendations specified in the Azure Security Benchmark. - Previously updated : 08/19/2020 Last updated : 03/16/2021
# Azure security baseline for Azure Machine Learning
-The Azure Security Baseline for Microsoft Azure Machine Learning contains recommendations that will help you improve the security posture of your deployment. The baseline for this service is drawn from the [Azure Security Benchmark version 1.0](../security/benchmarks/overview.md), which provides recommendations on how you can secure your cloud solutions on Azure with our best practices guidance. For more information, see [Azure Security Baselines overview](../security/benchmarks/security-baselines-overview.md).
+This security
+baseline applies guidance from the [Azure Security Benchmark version
+1.0](../security/benchmarks/overview-v1.md) to Microsoft Azure Machine Learning. The Azure Security Benchmark
+provides recommendations on how you can secure your cloud solutions on Azure.
+The content is grouped by the **security controls** defined by the Azure
+Security Benchmark and the related guidance applicable to Azure Machine Learning. **Controls** not applicable to Azure Machine Learning have been excluded.
-## Network security
+
+To see how Azure Machine Learning completely maps to the Azure
+Security Benchmark, see the [full Azure Machine Learning security baseline mapping
+file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Offer%20Security%20Baselines).
+
+## Network Security
-*For more information, see the [Azure Security Benchmark: Network security](../security/benchmarks/security-control-network-security.md).*
+*For more information, see the [Azure Security Benchmark: Network Security](../security/benchmarks/security-control-network-security.md).*
### 1.1: Protect Azure resources within virtual networks
Azure Firewall can be used to control access to your Azure Machine Learning work
- [Use workspace behind Azure Firewall for Azure Machine Learning](how-to-access-azureml-behind-firewall.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.2: Monitor and log the configuration and traffic of virtual networks, subnets, and NICs **Guidance**: Azure Machine Learning relies on other Azure services for compute resources. Assign network security groups to the networks that are created as your Machine Learning deployment.
Enable network security group flow logs and send the logs to an Azure Storage ac
- [Understand network security provided by Azure Security Center](../security-center/security-center-network-recommendations.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.3: Protect critical web applications **Guidance**: You can enable HTTPS to secure communication with web services deployed by Azure Machine Learning. Web services are deployed on Azure Kubernetes Services (AKS) or Azure Container Instances (ACI) and secure the data submitted by clients. You can also use private IP with AKS to restrict scoring, so that only clients behind a virtual network can access the web service.
Enable network security group flow logs and send the logs to an Azure Storage ac
- [Virtual network isolation and privacy overview](how-to-network-security-overview.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.4: Deny communications with known malicious IP addresses **Guidance**: Enable DDoS Protection Standard on the virtual networks associated with your Machine Learning instance to guard against distributed denial-of-service (DDoS) attacks. Use Azure Security Center Integrated threat detection to detect communications with known malicious or unused Internet IP addresses.
Deploy Azure Firewall at each of the organization's network boundaries with thre
- [For more information about the Azure Security Center threat detection](../security-center/azure-defender.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.5: Record network packets **Guidance**: For any VMs with the proper extension installed in your Azure Machine Learning services, you can enable Network Watcher packet capture to investigate anomalous activities. - [How to create a Network Watcher instance](../network-watcher/network-watcher-create.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.6: Deploy network-based intrusion detection/intrusion prevention systems (IDS/IPS) **Guidance**: Deploy the firewall solution of your choice at each of your organization's network boundaries to detect and/or block malicious traffic.
Select an offer from Azure Marketplace that supports IDS/IPS functionality with
- [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/?term=Firewall)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.7: Manage traffic to web applications **Guidance**: Not applicable; this recommendation is intended for web applications running on Azure App Service or compute resources.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Not Applicable
+**Azure Security Center monitoring**: None
+ ### 1.8: Minimize complexity and administrative overhead of network security rules **Guidance**: For resources that need access to your Azure Machine Learning account, use Virtual Network service tags to define network access controls on network security groups or Azure Firewall. You can use service tags in place of specific IP addresses when creating security rules. By specifying the service tag name (for example, AzureMachineLearning) in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change.
Azure Machine Learning service documents a list of service tags for its compute
- [Virtual network isolation and privacy overview](how-to-network-security-overview.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
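A service tag resolves to a Microsoft-published list of IP prefixes, so a rule that names `AzureMachineLearning` matches any address currently listed for the service. A minimal sketch of that matching logic; the prefixes below are invented for illustration, not the real service-tag ranges (download the current list from the Service Tags discovery API or the weekly JSON file):

```python
import ipaddress

# Illustrative only: these prefixes are made up, not the real
# AzureMachineLearning service-tag ranges.
SERVICE_TAG_PREFIXES = {
    "AzureMachineLearning": ["40.66.0.0/17", "52.224.0.0/16"],
}

def ip_in_service_tag(ip: str, tag: str) -> bool:
    """Return True if the address falls inside any prefix of the tag."""
    addr = ipaddress.ip_address(ip)
    return any(
        addr in ipaddress.ip_network(prefix)
        for prefix in SERVICE_TAG_PREFIXES[tag]
    )

print(ip_in_service_tag("40.66.12.34", "AzureMachineLearning"))  # True
print(ip_in_service_tag("10.0.0.1", "AzureMachineLearning"))     # False
```

Because Microsoft updates the prefixes behind a tag automatically, a rule written against the tag name never needs this kind of manual list maintenance; the sketch only shows what the platform is doing on your behalf.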
+ ### 1.9: Maintain standard security configurations for network devices **Guidance**: Define and implement standard security configurations for network resources associated with your Azure Machine Learning namespaces with Azure Policy. Use Azure Policy aliases in the "Microsoft.MachineLearning" and "Microsoft.Network" namespaces to create custom policies to audit or enforce the network configuration of your Machine Learning namespaces. - [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 1.10: Document traffic configuration rules **Guidance**: Use tags for network resources associated with your Azure Machine Learning deployment in order to logically organize them according to a taxonomy.
For a resource in your Azure Machine Learning virtual network that support the D
- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
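A tag taxonomy like the one described above can be checked mechanically before (or after) deployment. The required tag keys below are hypothetical examples, not a prescribed set:

```python
# Hypothetical taxonomy: the required keys are examples only.
REQUIRED_TAGS = {"environment", "dataSensitivity", "owner"}

def missing_tags(resource_tags: dict) -> set:
    """Tag keys a resource still needs to satisfy the taxonomy."""
    return REQUIRED_TAGS - resource_tags.keys()

vnet_tags = {"environment": "prod", "owner": "ml-team"}
print(missing_tags(vnet_tags))  # {'dataSensitivity'}
```

In practice the same check is usually enforced declaratively with an Azure Policy tag rule rather than script code; the sketch just makes the taxonomy concrete.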
+ ### 1.11: Use automated tools to monitor network resource configurations and detect changes **Guidance**: Use Azure Activity Log to monitor network resource configurations and detect changes for network resources related to Azure Machine Learning. Create alerts within Azure Monitor that will trigger when changes to critical network resources take place. -- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)--- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
+- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
-**Azure Security Center monitoring**: Not Applicable
+- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
**Responsibility**: Customer
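The alert described above is an Activity Log alert whose condition matches administrative write operations on network resources. A simplified sketch of the `properties` body for a `microsoft.insights/activityLogAlerts` resource, with the scope and action-group IDs left as placeholders:

```python
import json

# Simplified sketch; <subscription-id> and <action-group-id> are
# placeholders, and only a minimal subset of properties is shown.
alert_properties = {
    "scopes": ["/subscriptions/<subscription-id>"],
    "condition": {
        "allOf": [
            {"field": "category", "equals": "Administrative"},
            {"field": "operationName",
             "equals": "Microsoft.Network/networkSecurityGroups/write"},
        ]
    },
    "actions": {"actionGroups": [{"actionGroupId": "<action-group-id>"}]},
}

print(json.dumps(alert_properties, indent=2))
```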
-## Logging and monitoring
-
-*For more information, see the [Azure Security Benchmark: Logging and monitoring](../security/benchmarks/security-control-logging-monitoring.md).*
+**Azure Security Center monitoring**: None
-### 2.1: Use approved time synchronization sources
+## Logging and Monitoring
-**Guidance**: Microsoft maintains the time source used for Azure resources such as Azure Machine Learning for timestamps in the logs.
-
-**Azure Security Center monitoring**: Not Applicable
-
-**Responsibility**: Microsoft
+*For more information, see the [Azure Security Benchmark: Logging and Monitoring](../security/benchmarks/security-control-logging-monitoring.md).*
### 2.2: Configure central security log management
logs via Azure Monitor to aggregate security data generated by Azure Machine
Learning. In Azure Monitor, use Log Analytics workspaces to query and perform analytics, and use Azure Storage accounts for long term and archival storage. Alternatively, you may enable, and on-board data to Azure Sentinel or a third-party Security Incident and Event Management (SIEM). -- [How to configure diagnostic logs for Azure Machine Learning](monitor-azure-machine-learning.md#configuration)
+- [How to configure diagnostic logs for Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/monitor-azure-machine-learning#configuration)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.3: Enable audit logging for Azure resources **Guidance**: Enable diagnostic settings on Azure resources for access to audit, security, and diagnostic logs. Activity logs, which are automatically available, include event source, date, user, timestamp, source addresses, destination addresses, and other useful elements. You can also correlate Machine Learning service operation logs for security and compliance purposes. -- [How to collect platform logs and metrics with Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md)
+- [How to collect platform logs and metrics with Azure Monitor](/azure/azure-monitor/platform/diagnostic-settings)
-- [Understand logging and different log types in Azure](../azure-monitor/essentials/platform-logs-overview.md)
+- [Understand logging and different log types in Azure](/azure/azure-monitor/platform/platform-logs-overview)
-- [Enable logging in Azure Machine Learning](./how-to-track-experiments.md)
+- [Enable logging in Azure Machine Learning](how-to-track-experiments.md)
- [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.4: Collect security logs from operating systems **Guidance**: If the compute resource is owned by Microsoft, then Microsoft is responsible for collecting and monitoring it. Azure Machine Learning has varying support across different compute resources and even your own compute resources. For any compute resources that are owned by your organization, use Azure Security Center to monitor the operating system. -- [How to collect Azure Virtual Machine internal host logs with Azure Monitor](../azure-monitor/vm/quick-collect-azurevm.md)
+- [How to collect Azure Virtual Machine internal host logs with Azure Monitor](/azure/azure-monitor/learn/quick-collect-azurevm)
- [Understand Azure Security Center data collection](../security-center/security-center-enable-data-collection.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Shared
+**Azure Security Center monitoring**: None
+ ### 2.5: Configure security log storage retention **Guidance**: In Azure Monitor, set the log retention period for Log Analytics workspaces associated with your Azure Machine Learning instances according to your organization's compliance regulations. -- [How to set log retention parameters](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)-
-**Azure Security Center monitoring**: Not Applicable
+- [How to set log retention parameters](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.6: Monitor and review Logs **Guidance**: Analyze and monitor logs for anomalous behavior and regularly review the results from your Azure Machine Learning. Use Azure Monitor and a Log Analytics workspace to review logs and perform queries on log data. Alternatively, you can enable and on-board data to Azure Sentinel or a third-party SIEM. -- [How to perform queries for Azure Machine Learning in Log Analytics Workspaces](monitor-azure-machine-learning.md#analyzing-log-data)
+- [How to perform queries for Azure Machine Learning in Log Analytics Workspaces](https://docs.microsoft.com/azure/machine-learning/monitor-azure-machine-learning#analyzing-log-data)
-- [Enable logging in Azure Machine Learning](./how-to-track-experiments.md)
+- [Enable logging in Azure Machine Learning](how-to-track-experiments.md)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md) - [Getting started with Log Analytics queries](../azure-monitor/logs/log-analytics-tutorial.md) -- [How to perform custom queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)-
-**Azure Security Center monitoring**: Not Applicable
+- [How to perform custom queries in Azure Monitor](/azure/azure-monitor/log-query/get-started-queries)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.7: Enable alerts for anomalous activities **Guidance**: In Azure Monitor, configure logs related to Azure Machine Learning within the Activity Log, and Machine Learning diagnostic settings to send logs into a Log Analytics workspace to be queried or into a storage account for long-term archival storage. Use Log Analytics workspace to create alerts for anomalous activity found in security logs and events. Alternatively, you may enable and on-board data to Azure Sentinel. -- [For more information on Azure Machine Learning alerts](monitor-azure-machine-learning.md#alerts)
+- [For more information on Azure Machine Learning alerts](https://docs.microsoft.com/azure/machine-learning/monitor-azure-machine-learning#alerts)
-- [How to alert on Log Analytics workspace log data](../azure-monitor/alerts/tutorial-response.md)
+- [How to alert on Log Analytics workspace log data](/azure/azure-monitor/learn/tutorial-response)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.8: Centralize anti-malware logging
-**Guidance**: If the compute resource is owned by Microsoft, then Microsoft is responsible for Antimalware deployment of Azure Machine Learning service.
+**Guidance**: If the compute resource is owned by Microsoft, then Microsoft is responsible for Antimalware deployment of Azure Machine Learning service.
Azure Machine Learning has varying support across different compute resources and even your own compute resources. For compute resources that are owned by your organization, enable antimalware event collection for Microsoft Antimalware for Azure Cloud Services and Virtual Machines. -- [How to configure Microsoft Antimalware for a virtual machine](/powershell/module/servicemanagement/azure.service/set-azurevmmicrosoftantimalwareextension)--- [How to configure the Microsoft Antimalware extension for cloud services](/powershell/module/servicemanagement/azure.service/set-azureserviceantimalwareextension?view=azuresmps-4.0.0)
+- [How to configure Microsoft Antimalware for a virtual machine](/powershell/module/servicemanagement/azure.service/set-azurevmmicrosoftantimalwareextension)
- [Understand Microsoft Antimalware](../security/fundamentals/antimalware.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Shared
+**Azure Security Center monitoring**: None
+ ### 2.9: Enable DNS query logging **Guidance**: Not applicable; Azure Machine Learning does not process or produce DNS-related logs.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Not Applicable
+**Azure Security Center monitoring**: None
+ ### 2.10: Enable command-line audit logging **Guidance**: Azure Machine Learning has varying support across different compute resources and even your own compute resources. For compute resources that are owned by your organization, use Azure Security Center to enable security event log monitoring for Azure virtual machines. Azure Security Center provisions the Log Analytics agent on all supported Azure VMs, and any new ones that are created if automatic provisioning is enabled. Or you can install the agent manually. The agent enables the process creation event 4688 and the commandline field inside event 4688. New processes created on the VM are recorded by event log and monitored by Security Center's detection services.
-**Azure Security Center monitoring**: Yes
+- [Data collection in Azure Security Center](https://docs.microsoft.com/azure/security-center/security-center-enable-data-collection#data-collection-tier)
**Responsibility**: Customer
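Once event 4688 auditing is enabled, each process creation surfaces as a `SecurityEvent` record with the command line captured. A sketch of extracting that field from a record; the field names follow the `SecurityEvent` table, but the sample values are invented:

```python
from typing import Optional

# Illustrative sample of a SecurityEvent row; values are invented.
sample_event = {
    "EventID": 4688,
    "Computer": "aml-compute-vm",
    "NewProcessName": r"C:\Windows\System32\cmd.exe",
    "CommandLine": "cmd.exe /c whoami",
}

def audited_command(event: dict) -> Optional[str]:
    """Return the audited command line for process-creation events."""
    if event.get("EventID") == 4688:
        return event.get("CommandLine")
    return None

print(audited_command(sample_event))  # cmd.exe /c whoami
```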
-## Identity and access control
+**Azure Security Center monitoring**: None
-*For more information, see the [Azure Security Benchmark: Identity and access control](../security/benchmarks/security-control-identity-access-control.md).*
+## Identity and Access Control
+
+*For more information, see the [Azure Security Benchmark: Identity and Access Control](../security/benchmarks/security-control-identity-access-control.md).*
### 3.1: Maintain an inventory of administrative accounts
You can also use the Azure AD PowerShell module to perform adhoc queries to disc
- [Understand Azure role-based access control in Azure Machine Learning](how-to-assign-roles.md) -- [How to get a directory role in Azure Active Directory with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)-
-**Azure Security Center monitoring**: Yes
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.2: Change default passwords where applicable **Guidance**: Access management to Machine Learning resources is controlled through Azure Active Directory (Azure AD). Azure AD does not have the concept of default passwords.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.3: Use dedicated administrative accounts **Guidance**: Azure Machine Learning comes with three default roles when a new workspace is created. Create standard operating procedures around the use of owner accounts.
-You can also enable a just-in-time access to administrative accounts by using Azure AD Privileged Identity Management and Azure Resource Manager.
+You can also enable just-in-time access to administrative accounts by using Azure Active Directory (Azure AD) Privileged Identity Management and Azure Resource Manager.
-- [To learn more Machine Learning default roles](how-to-assign-roles.md#default-roles)
+- [Learn more about Machine Learning default roles](https://docs.microsoft.com/azure/machine-learning/how-to-assign-roles#default-roles)
-- [Learn more about Privileged Identity Management](../active-directory/privileged-identity-management/index.yml)-
-**Azure Security Center monitoring**: Yes
+- [Learn more about Privileged Identity Management](/azure/active-directory/privileged-identity-management/index)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.4: Use single sign-on (SSO) with Azure Active Directory
-**Guidance**: Machine Learning is integrated with Azure Active Directory, use Azure Active Directory SSO instead of configuring individual stand-alone credentials per-service. Use Azure Security Center identity and access recommendations.
+**Guidance**: Machine Learning is integrated with Azure Active Directory (Azure AD). Use Azure AD SSO instead of configuring individual stand-alone credentials per service. Use Azure Security Center identity and access recommendations.
- [Understand SSO with Azure AD](../active-directory/manage-apps/what-is-single-sign-on.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.5: Use multi-factor authentication for all Azure Active Directory based access
-**Guidance**: Enable Azure Active Directory Multi-Factor Authentication and follow Azure Security Center identity and access recommendations.
+**Guidance**: Enable Azure Active Directory (Azure AD) multifactor authentication and follow Azure Security Center identity and access recommendations.
-- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
+- [How to enable multifactor authentication in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
- [How to monitor identity and access within Azure Security Center](../security-center/security-center-identity-access.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.6: Use dedicated machines (Privileged Access Workstations) for all administrative tasks
-**Guidance**: Use
-a secure, Azure-managed workstation (also known as a Privileged Access Workstation,
-or PAW) for administrative tasks that require elevated privileges.
+**Guidance**: Use a secure, Azure-managed workstation (also known as a Privileged Access Workstation, or PAW) for administrative tasks that require elevated privileges.
- [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) -- [How to enable Azure AD MFA](../active-directory/authentication/howto-mfa-getstarted.md)-
-**Azure Security Center monitoring**: Not Applicable
+- [How to enable Azure Active Directory (Azure AD) multifactor authentication](../active-directory/authentication/howto-mfa-getstarted.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.7: Log and alert on suspicious activities from administrative accounts
-**Guidance**: Use Azure Active Directory security reports and monitoring to detect when suspicious or unsafe activity occurs in the environment. Use Azure Security Center to monitor identity and access activity.
+**Guidance**: Use Azure Active Directory (Azure AD) security reports and monitoring to detect when suspicious or unsafe activity occurs in the environment. Use Azure Security Center to monitor identity and access activity.
- [How to identify Azure AD users flagged for risky activity](../active-directory/identity-protection/overview-identity-protection.md) - [How to monitor users' identity and access activity in Azure Security Center](../security-center/security-center-identity-access.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.8: Manage Azure resources only from approved locations
-**Guidance**: Use Azure AD named locations to allow access only from specific logical groupings of IP address ranges or countries/regions.
-
-
-
-- [How to configure Azure AD named locations](../active-directory/reports-monitoring/quickstart-configure-named-locations.md)
+**Guidance**: Use Azure Active Directory (Azure AD) named locations to allow access only from specific logical groupings of IP address ranges or countries/regions.
-**Azure Security Center monitoring**: Not Applicable
+- [How to configure Azure AD named locations](../active-directory/reports-monitoring/quickstart-configure-named-locations.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.9: Use Azure Active Directory **Guidance**: Use Azure Active Directory (Azure AD) as the central authentication and authorization system. Azure AD protects data by using strong encryption for data at rest and in transit. Azure AD also salts, hashes, and securely stores user credentials.
Role access can be scoped to multiple levels in Azure. For Machine Learning, rol
- [How to create and configure an Azure AD instance](../active-directory/fundamentals/active-directory-access-create-new-tenant.md)
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.10: Regularly review and reconcile user access
-**Guidance**: Azure AD provides logs to help discover stale accounts. In addition, use Azure AD identity and access reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access.
-
-Use Azure Active Directory (Azure AD) Privileged Identity Management (PIM) for generation of logs and alerts when suspicious or unsafe activity occurs in the environment.
+**Guidance**: Azure Active Directory (Azure AD) provides logs to help discover stale accounts. In addition, use Azure AD identity and access reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access.
+
+Use Azure AD Privileged Identity Management (PIM) for generation of logs and alerts when suspicious or unsafe activity occurs in the environment.
-- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
+- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md) - [Deploy Azure AD Privileged Identity Management (PIM)](../active-directory/privileged-identity-management/pim-deployment-plan.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.11: Monitor attempts to access deactivated credentials
-**Guidance**: You have access to Azure AD sign-in activity, audit, and risk event log sources, which allow you to integrate with any SIEM/monitoring tool.
+**Guidance**: You have access to Azure Active Directory (Azure AD) sign-in activity, audit, and risk event log sources, which allow you to integrate with any SIEM/monitoring tool.
You can streamline this process by creating diagnostic settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics workspace. You can configure desired alerts within Log Analytics workspace.
-
-
-- [How to integrate Azure activity logs with Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
-**Azure Security Center monitoring**: Not Applicable
+- [How to integrate Azure activity logs with Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
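Routing Azure AD audit and sign-in logs to a Log Analytics workspace is a diagnostic-settings configuration. A sketch of the settings body, with the workspace resource ID left as a placeholder; `AuditLogs` and `SignInLogs` are the standard Azure AD log category names:

```python
import json

# Sketch only: the workspace resource ID is a placeholder, and a
# real diagnostic-settings body may carry additional properties.
diagnostic_settings = {
    "workspaceId": "<log-analytics-workspace-resource-id>",
    "logs": [
        {"category": "AuditLogs", "enabled": True},
        {"category": "SignInLogs", "enabled": True},
    ],
}

print(json.dumps(diagnostic_settings, indent=2))
```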
+ ### 3.12: Alert on account login behavior deviation
-**Guidance**: Use Azure AD Identity Protection features to configure automated responses to detected suspicious actions related to user identities. You can also ingest data into Azure Sentinel for further investigation.
-
+**Guidance**: Use Azure Active Directory (Azure AD) Identity Protection features to configure automated responses to detected suspicious actions related to user identities. You can also ingest data into Azure Sentinel for further investigation.
+ - [How to view Azure AD risky sign-ins](../active-directory/identity-protection/overview-identity-protection.md)
-
+ - [How to configure and enable Identity Protection risk policies](../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
-
-- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-**Azure Security Center monitoring**: Not Applicable
+- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.13: Provide Microsoft with access to relevant customer data during support scenarios **Guidance**: Not applicable; Azure Machine Learning service doesn't support customer lockbox.
-**Azure Security Center monitoring**: Not Applicable
- **Responsibility**: Not Applicable
-## Data protection
+**Azure Security Center monitoring**: None
+
+## Data Protection
-*For more information, see the [Azure Security Benchmark: Data protection](../security/benchmarks/security-control-data-protection.md).*
+*For more information, see the [Azure Security Benchmark: Data Protection](../security/benchmarks/security-control-data-protection.md).*
### 4.1: Maintain an inventory of sensitive Information
You can streamline this process by creating diagnostic settings for Azure AD use
- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 4.2: Isolate systems storing or processing sensitive information

**Guidance**: Implement isolation using separate subscriptions and management groups for individual security domains such as environment type and data sensitivity level. You can restrict the level of access to your Azure resources that your applications and enterprise environments demand. You can control access to Azure resources via Azure RBAC.
- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)

- [How to create management groups](../governance/management-groups/create-management-group-portal.md)

- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 4.3: Monitor and block unauthorized transfer of sensitive information

**Guidance**: Use a third-party solution from Azure Marketplace in network perimeters to monitor for unauthorized transfer of sensitive information and block such transfers while alerting information security professionals.

For the underlying platform, which is managed by Microsoft, Microsoft treats all customer content as sensitive and guards against customer data loss and exposure.
- [Understand customer data protection in Azure](../security/fundamentals/protection-customer-data.md)
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 4.4: Encrypt all sensitive information in transit

**Guidance**: Web services deployed through Azure Machine Learning support only TLS version 1.2, which enforces data encryption in transit.

- [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
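The same TLS 1.2 floor can be mirrored on the client side. The sketch below, using Python's standard `ssl` module, only configures a local context and makes no network calls; the server-side enforcement is handled by the service itself.

```python
import ssl

# Sketch: build a client-side SSL context that refuses anything older than
# TLS 1.2, matching the transport security the web services enforce.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake attempted with this context now rejects TLS 1.0/1.1.
assert context.minimum_version == ssl.TLSVersion.TLSv1_2
```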
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 4.5: Use an active discovery tool to identify sensitive data

**Guidance**: Data identification, classification, and loss prevention features are not yet available for Azure Machine Learning. Implement a third-party solution if necessary for compliance purposes.

For the underlying platform, which is managed by Microsoft, Microsoft treats all customer content as sensitive and guards against customer data loss and exposure.
- [Understand customer data protection in Azure](../security/fundamentals/protection-customer-data.md)
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 4.6: Use Azure RBAC to manage access to resources

**Guidance**: Azure Machine Learning supports using Azure Active Directory (Azure AD) to authorize requests to Machine Learning resources. With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user or an application service principal.

- [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md)
- [Use Azure RBAC for Kubernetes authorization](../aks/manage-azure-rbac.md)
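The scope model behind Azure RBAC can be illustrated with a short sketch. The principals, roles, and scopes below are hypothetical, and real authorization also involves role definitions and inheritance rules beyond this simplification.

```python
# Hypothetical role assignments: (principal, role, scope). An assignment at a
# broader scope (subscription) is inherited by narrower scopes (resource group,
# workspace) - modeled here as a simple path-prefix check.
ROLE_ASSIGNMENTS = [
    ("alice@contoso.example", "Reader", "/subscriptions/sub1"),
    ("ml-pipeline-sp", "Contributor", "/subscriptions/sub1/resourceGroups/ml-rg"),
]

def has_role(principal: str, role: str, target_scope: str) -> bool:
    """True if some assignment grants `role` at `target_scope` or an ancestor scope."""
    return any(
        p == principal and r == role and target_scope.startswith(scope)
        for p, r, scope in ROLE_ASSIGNMENTS
    )

# The subscription-level Reader assignment is inherited by the workspace scope.
assert has_role("alice@contoso.example", "Reader",
                "/subscriptions/sub1/resourceGroups/ml-rg")
assert not has_role("ml-pipeline-sp", "Contributor", "/subscriptions/sub2")
```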
**Responsibility**: Customer
**Azure Security Center monitoring**: None

### 4.8: Encrypt sensitive information at rest

**Guidance**: Azure Machine Learning stores snapshots, output, and logs in the Azure Blob storage account that's tied to the Azure Machine Learning workspace and your subscription. All the data stored in Azure Blob storage is encrypted at rest with Microsoft-managed keys. You can also encrypt data stored in Azure Blob storage with your own keys in the Machine Learning service.

- [Azure Machine Learning data encryption at rest](https://docs.microsoft.com/azure/machine-learning/concept-enterprise-security#encryption-at-rest)
- [Understand encryption at rest in Azure](../security/fundamentals/encryption-atrest.md)

- [How to configure customer-managed encryption keys](../storage/common/customer-managed-keys-configure-key-vault.md)
**Responsibility**: Customer
**Azure Security Center monitoring**: None

### 4.9: Log and alert on changes to critical Azure resources

**Guidance**: Use Azure Monitor with the Azure Activity log to create alerts for when changes take place to production instances of Azure Machine Learning and other critical or related resources.

- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
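Conceptually, an Activity Log alert rule fires when an event matches all of the rule's condition criteria. A rough sketch of that evaluation follows; the event and field names are illustrative, not the actual Azure Monitor schema.

```python
# Sketch: an alert condition is a set of field/value pairs that must all match
# the incoming activity log event for the alert to fire.
def alert_fires(event: dict, condition: dict) -> bool:
    return all(event.get(field) == value for field, value in condition.items())

event = {
    "operationName": "Microsoft.MachineLearningServices/workspaces/delete",
    "status": "Succeeded",
}
condition = {"operationName": "Microsoft.MachineLearningServices/workspaces/delete"}
assert alert_fires(event, condition)
assert not alert_fires(event, {"status": "Failed"})
```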
**Responsibility**: Customer
**Azure Security Center monitoring**: None

## Vulnerability Management

*For more information, see the [Azure Security Benchmark: Vulnerability Management](../security/benchmarks/security-control-vulnerability-management.md).*
### 5.1: Run automated vulnerability scanning tools
Azure Machine Learning has varying support across different compute resources and even your own compute resources.
- [How to implement Azure Security Center vulnerability assessment recommendations](../security-center/deploy-vulnerability-assessment-vm.md)
**Responsibility**: Shared

**Azure Security Center monitoring**: None

### 5.2: Deploy automated operating system patch management solution

**Guidance**: If the compute resource is owned by Microsoft, then Microsoft is responsible for patch management of the Azure Machine Learning service.
Azure Machine Learning has varying support across different compute resources and even your own compute resources.
- [Understand Azure security policies monitored by Security Center](../security-center/policy-reference.md)
**Responsibility**: Shared

**Azure Security Center monitoring**: None

### 5.3: Deploy an automated patch management solution for third-party software titles

**Guidance**: Azure Machine Learning has varying support across different compute resources and even your own compute resources. For compute resources that are owned by your organization, use a third-party patch management solution. Customers already using Configuration Manager in their environment can also use System Center Updates Publisher, allowing them to publish custom updates into Windows Server Update Services. This allows Update Management to patch machines that use Configuration Manager as their update repository with third-party software.
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 5.4: Compare back-to-back vulnerability scans

**Guidance**: Azure Machine Learning has varying support across different compute resources and even your own compute resources. For compute resources that are owned by your organization, follow recommendations from Azure Security Center for performing vulnerability assessments on your Azure virtual machines, container images, and SQL servers. Export scan results at consistent intervals and compare the results with previous scans to verify that vulnerabilities have been remediated.

When using vulnerability management recommendations suggested by Azure Security Center, you can pivot into the selected solution's portal to view historical scan data.

- [How to implement Azure Security Center vulnerability assessment recommendations](../security-center/deploy-vulnerability-assessment-vm.md)
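Comparing back-to-back scan exports amounts to a set difference over finding identifiers. A minimal sketch follows; the CVE IDs and the flat export format are hypothetical.

```python
# Sketch: given two exports of finding IDs, classify each finding as
# remediated (gone), new (appeared), or persisting (still present).
def compare_scans(previous: set, current: set) -> dict:
    return {
        "remediated": previous - current,
        "new": current - previous,
        "persisting": previous & current,
    }

report = compare_scans(
    previous={"CVE-2021-0001", "CVE-2021-0002"},
    current={"CVE-2021-0002", "CVE-2021-0003"},
)
assert report["remediated"] == {"CVE-2021-0001"}
assert report["new"] == {"CVE-2021-0003"}
assert report["persisting"] == {"CVE-2021-0002"}
```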
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 5.5: Use a risk-rating process to prioritize the remediation of discovered vulnerabilities

**Guidance**: Not applicable; this guideline is intended for compute resources.
**Responsibility**: Not Applicable

**Azure Security Center monitoring**: None

## Inventory and Asset Management

*For more information, see the [Azure Security Benchmark: Inventory and Asset Management](../security/benchmarks/security-control-inventory-asset-management.md).*
### 6.1: Use automated asset discovery solution
**Guidance**: Use Azure Resource Graph to query for and discover resources (such as compute, storage, network, ports, and protocols) in your subscriptions. Ensure appropriate (read) permissions in your tenant and enumerate all Azure subscriptions as well as resources in your subscriptions.

Although classic Azure resources can be discovered via Azure Resource Graph Explorer, it is highly recommended to create and use Azure Resource Manager resources going forward.

- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md)
- [How to view your Azure subscriptions](/powershell/module/az.accounts/get-azsubscription)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
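Resource Graph queries are written in KQL. The sketch below assembles such a query string in Python before it would be handed to the CLI or SDK; the filter value shown is an example resource type, and the query is never executed here.

```python
from typing import Optional

def inventory_query(resource_type: Optional[str] = None) -> str:
    """Build a KQL query over the Resources table, optionally filtered by type.

    The projection columns are standard Resource Graph fields; the result is
    just a string and is not run against Azure.
    """
    projection = "| project name, type, location, subscriptionId"
    if resource_type:
        return f"Resources | where type =~ '{resource_type}' {projection}"
    return f"Resources {projection}"

query = inventory_query("microsoft.machinelearningservices/workspaces")
assert query.startswith("Resources | where type =~")
```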
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 6.2: Maintain asset metadata

**Guidance**: Apply tags to Azure resources, adding metadata to logically organize them into a taxonomy.

- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
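A tag taxonomy can also be enforced programmatically. A minimal sketch follows, assuming a hypothetical set of required tag keys.

```python
# Hypothetical required tag keys for the organization's taxonomy.
REQUIRED_TAGS = {"environment", "dataSensitivity", "owner"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys a resource is missing."""
    return REQUIRED_TAGS - set(resource_tags)

# A resource tagged with only two of the three required keys is flagged.
assert missing_tags({"environment": "prod", "owner": "ml-team"}) == {"dataSensitivity"}
assert missing_tags({"environment": "dev", "dataSensitivity": "high", "owner": "x"}) == set()
```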
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 6.3: Delete unauthorized Azure resources

**Guidance**: Use tagging, management groups, and separate subscriptions, where appropriate, to organize and track assets. Reconcile inventory on a regular basis and ensure unauthorized resources are deleted from the subscription in a timely manner.
- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)

- [How to create management groups](../governance/management-groups/create-management-group-portal.md)

- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 6.4: Define and maintain an inventory of approved Azure resources

**Guidance**: Create an inventory of approved Azure resources and approved software for compute resources as per your organizational needs.
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 6.5: Monitor for unapproved Azure resources

**Guidance**: Use Azure Policy to put restrictions on the type of resources that can be created in customer subscriptions using the following built-in policy definitions:

- Not allowed resource types
- Allowed resource types

In addition, use the Azure Resource Graph to query/discover resources within the subscriptions.
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md)
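The effect of an "Allowed resource types" assignment can be sketched as a simple allow-list check. The approved list below is hypothetical; real policy evaluation involves scopes, exemptions, and other effects beyond this simplification.

```python
# Hypothetical allow-list for the "Allowed resource types" policy effect.
# Resource type names are compared case-insensitively, as in ARM.
ALLOWED_TYPES = {
    "microsoft.machinelearningservices/workspaces",
    "microsoft.storage/storageaccounts",
    "microsoft.keyvault/vaults",
}

def policy_effect(resource_type: str) -> str:
    """'allow' if the requested type is approved, otherwise 'deny'."""
    return "allow" if resource_type.lower() in ALLOWED_TYPES else "deny"

assert policy_effect("Microsoft.Storage/storageAccounts") == "allow"
assert policy_effect("Microsoft.Compute/virtualMachines") == "deny"
```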
**Responsibility**: Customer

**Azure Security Center monitoring**: None

### 6.6: Monitor for unapproved software applications within compute resources

**Guidance**: Azure Machine Learning has varying support across different compute resources and even your own compute resources. For compute resources that are owned by